Compare commits

1372 commits
v1.2.0 ... main

Author SHA1 Message Date
Nicola Murino
00155eaaf6
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-26 19:04:08 +02:00
Nicola Murino
d94f80c8da
replace utils.Contains with slices.Contains
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-24 18:27:13 +02:00
Nicola Murino
bd5eb03d9c
replace hand-written slice utilities with methods from slices package
SFTPGo depends on Go 1.22, so we can use the slices package

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-24 18:17:55 +02:00
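The commit above swaps hand-written slice helpers for the standard library slices package (available since Go 1.21). A minimal, hypothetical sketch of what such a replacement looks like; the perms slice and its values are illustrative and not taken from the SFTPGo code base:

```go
package main

import (
	"fmt"
	"slices" // standard library package, available since Go 1.21
)

func main() {
	perms := []string{"list", "download", "upload"} // illustrative values

	// Before: a hand-written helper such as utils.Contains(perms, "upload").
	// After: the generic helper from the standard library.
	if slices.Contains(perms, "upload") {
		fmt.Println("upload permission granted")
	}
}
```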
Nicola Murino
6ba1198c47
sftpd: remove unused folder prefix from Connection struct
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-24 16:44:25 +02:00
Nicola Murino
b5c821795a
allow to customize name and logo from the WebUI
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-24 09:14:27 +02:00
Nicola Murino
b2926377b7
WebUI: switch favicon from ico to png
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-20 16:11:21 +02:00
Nicola Murino
99f47ca4e7
sftpfs: cache and reuse parsed private keys
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-16 19:20:28 +02:00
Nicola Murino
fef388d8cb
don't track quota for private virtual folders
they are included within the user quota.
This is a backward incompatible change.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-13 21:02:40 +02:00
Nicola Murino
92849ca473
quota: move user and folder management to a common method
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-13 19:30:40 +02:00
Nicola Murino
0952887157
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-13 17:44:13 +02:00
Nicola Murino
d010b26e1c
deb packages: update copyright year
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-13 14:06:11 +02:00
Nicola Murino
58de410850
nt: fix unused write warnings
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-03 20:42:51 +02:00
Nicola Murino
54bc3ea87d
restore: fix quota scan for users with folders associated via groups
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-03 20:35:12 +02:00
Nicola Murino
64a2f7aa4f
oidc refresh token: validate nonce only if set
As clarified in OpenID core spec errata 2, section 12.2

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-07-01 19:06:11 +02:00
Nicola Murino
55be9f0b9c
EventManager: allow to configure the timezone to use for the scheduler
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-30 18:52:59 +02:00
Nicola Murino
97ffa0394f
update deps
adapt smtp configuration to changes in upstream library

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-30 09:18:04 +02:00
dependabot[bot]
dc91ec2056
Bump docker/build-push-action from 5 to 6 (#1668)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 09:29:56 +02:00
Nicola Murino
356795f8b0
add a test case for listing files with long names
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-22 19:23:02 +02:00
Nicola Murino
3efcd94e14
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-21 20:25:35 +02:00
Nicola Murino
34bc21b3b7
update deps
fixes a bug in chi compressor Handler

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-21 18:32:59 +02:00
Nicola Murino
37845c2936
smtp: hide commit hash in user agent
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-21 18:31:42 +02:00
Nicola Murino
47924716c1
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-19 21:04:35 +02:00
Nicola Murino
1d60505629
fix test case failure on macOS with bolt provider
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-19 10:45:14 +02:00
Nicola Murino
9daf0ba767
update swagger UI to 5.17.14
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-18 20:47:39 +02:00
Nicola Murino
bdae378569
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-18 19:13:21 +02:00
Nicola Murino
363770ab84
WebClient shares: add a logout button
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-18 19:10:32 +02:00
Nicola Murino
8bc08b25dc
sftp: limit max file list
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-17 19:24:03 +02:00
Nicola Murino
e0c1b974c9
add cgo to build constraints
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-16 09:46:17 +02:00
Nicola Murino
39cf9f6943
Web UIs: remove duplication of supported languages
Fixes #1660

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-15 19:35:23 +02:00
Nicola Murino
d650defa08
remove duplicated jwt tokens validation
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-15 16:19:37 +02:00
Nicola Murino
c5c42f072b
squash database migrations
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-15 16:02:09 +02:00
Nicola Murino
bd5b32101f
csrf: reuse the cookie in reset password
no need to generate a new cookie each time.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-15 15:18:17 +02:00
Nicola Murino
8208ac817d
html pages: add robots meta tag
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-15 10:17:37 +02:00
Nicola Murino
a99c4879de
update dependencies
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-15 10:10:15 +02:00
Nicola Murino
01b666a78f
WebUIs: check login conditions before allowing password reset
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-14 19:34:42 +02:00
Nicola Murino
8294952474
WebUIs: refactor CSRF
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-14 18:09:32 +02:00
Nicola Murino
7fb5b1b996
reduce share token duration
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-08 12:13:38 +02:00
Nicola Murino
2749a98f26
CI: update workflow to 1.22.4
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-07 18:19:52 +02:00
Nicola Murino
08526da153
REST API: fix token invalidation after password change
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-07 18:19:05 +02:00
Nicola Murino
8269adf176
Windows: allow to override most of the "serve" flags from env files
The Windows-specific code path was missing in 07710ad98

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-05 17:34:28 +02:00
Nicola Murino
0cddcba5a7
EventManager: add an action to rotate the log file
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-04 19:51:52 +02:00
Nicola Murino
3bd1eeacc1
make sure to return a fully populated user after plugin auth
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-06-04 18:14:09 +02:00
Nicola Murino
1698ec2eb3
EventManager: fix adding ObjectDataString for provider events
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-31 20:01:38 +02:00
Nicola Murino
07710ad98d
allow to override most of the "serve" flags from env files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-31 18:49:23 +02:00
Nicola Murino
f63bf7093c
logs: redact plugin arguments
may contain sensitive data

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-30 18:10:12 +02:00
Nicola Murino
0597bf1047
Windows setup: update MinVersion
Starting from Go version 1.21, Windows 10 or Windows Server 2016 are
required

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-30 18:07:17 +02:00
Nicola Murino
5bde4b92a2
fix test cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-29 19:35:42 +02:00
Nicola Murino
faa994e3b3
update UI theme and dependencies
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-29 19:20:56 +02:00
Nicola Murino
68cc1a8e2c
fix proxy protocol policy
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-28 19:40:37 +02:00
Nicola Murino
9c775e2213
transfer logs: add error field
Fixes #1638

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-27 19:35:48 +02:00
Nicola Murino
6c94173ca1
WebUI branding: remove unused login_image_path from config
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-27 18:43:44 +02:00
Nicola Murino
d1e0560d28
WebAdmin status page: update the color of the labels
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-26 19:34:29 +02:00
Nicola Murino
52a94b2593
docker: build Alpine based image using golang:1.22-alpine3.20
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-25 16:53:13 +02:00
dependabot[bot]
9550fd2921
Bump alpine from 3.19 to 3.20 (#1636)
Bumps alpine from 3.19 to 3.20.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-25 16:51:48 +02:00
Nicola Murino
a6549b08f9
dependabot: remove gomod
it is not really required; we update Go dependencies regularly

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-25 16:40:31 +02:00
Nicola Murino
ba3e2ecb5f
WebAdmin events page: fix rendering of some nullable strings
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-25 16:17:33 +02:00
Nicola Murino
2bd3b46e3f
update swagger ui
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-25 16:14:42 +02:00
Nicola Murino
7831ddaede
WebAdmin events page: set fixed sizes for potentially long fields
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-24 18:24:05 +02:00
Nicola Murino
613f2f1c24
WebUIs: set the lang attribute based on the chosen language
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-24 18:23:41 +02:00
Nicola Murino
525f33a07a
WebUIs: fix css loading order
Fixes #1628

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-24 18:22:58 +02:00
Nicola Murino
3f2604d33f
ssh: use 3072-bits for the auto-generated RSA key
This is the same as ssh-keygen

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-24 18:22:36 +02:00
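A minimal sketch of what generating a 3072-bit RSA key looks like in Go (the same default size ssh-keygen uses), assuming crypto/rsa; this is illustrative only and not SFTPGo's actual host key generation code:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

func main() {
	// 3072 bits matches the current ssh-keygen default for RSA keys.
	key, err := rsa.GenerateKey(rand.Reader, 3072)
	if err != nil {
		panic(err)
	}
	// Encode the private key as PEM, as is typical for auto-generated host keys.
	block := &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	}
	fmt.Print(string(pem.EncodeToMemory(block)))
}
```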
Nicola Murino
b823bb04d2
WebAdmin: make the description visible in IP lists page
Fixes #1631

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-23 20:07:49 +02:00
Nicola Murino
9ba92d9495
WebUIs: fix datatables processing class name
was changed to dt-processing in datatables 2.0

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-23 19:47:45 +02:00
Nicola Murino
0127fc188b
SSH: allow to configure minimum key size for DHGEX
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-23 18:08:16 +02:00
Nicola Murino
3c7a651d27
plugin: don't consider file extension for env prefix
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-18 13:10:16 +02:00
Nicola Murino
50a3c0d911
defender: allow to impose a delay between login attempts
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-18 10:35:54 +02:00
Nicola Murino
b2bea85add
update README
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-16 10:40:48 +02:00
Nicola Murino
61bc0065f9
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-16 04:54:46 +02:00
Nicola Murino
19e9857fea
set version to 2.6.0
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-15 17:36:10 +02:00
Nicola Murino
665a980d62
improve error wrapping
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-14 19:10:36 +02:00
Nicola Murino
eb0c6549c4
micro optimization
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-12 18:10:03 +02:00
Nicola Murino
e7627bfcd3
fix test cases after the change in the previous commit
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-10 15:58:07 +02:00
Nicola Murino
62f5d4cb89
fix the error message for errors that occur during file transfers
we should special case path errors and replace the fs path with the
virtual path.

Thanks to @nezzzumi for reporting this issue

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-10 15:12:05 +02:00
Nicola Murino
4502509c2d
pgsql: validate target_session_attrs
silently ignore invalid values

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-09 19:55:12 +02:00
Nicola Murino
2f577c9884
fix lint warnings
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-08 19:11:03 +02:00
Nicola Murino
499c7a432d
examples and tests: update dependencies
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-08 19:03:49 +02:00
Nicola Murino
5d24d665bd
add a util method to convert []byte to string
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-08 19:01:58 +02:00
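The commit above adds a util helper to convert []byte to string; the commit does not show the implementation, so the following is only a hypothetical sketch of a zero-copy variant built on unsafe.String (Go 1.20+). The name BytesToString is an assumption:

```go
package util

import "unsafe"

// BytesToString is a hypothetical zero-copy []byte-to-string conversion.
// A plain string(b) copy is the safe default; an unsafe variant like this is
// only worth it on hot paths and requires that b is never modified afterwards.
func BytesToString(b []byte) string {
	if len(b) == 0 {
		return ""
	}
	return unsafe.String(unsafe.SliceData(b), len(b))
}
```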
Nicola Murino
65753fe23e
WebUIs: update datatables library
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-07 18:27:17 +02:00
Nicola Murino
96825be11b
update deps and workflows
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-07 18:17:06 +02:00
Nicola Murino
ce2e65d776
remove DCO
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-07 18:04:12 +02:00
Nicola Murino
ab320c9ecc
WebUIs: remove regex search
The default DataTables2 search is easier for end users

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-04 12:41:16 +02:00
Nicola Murino
76c912083e
update theme and js dependencies
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-04 12:03:55 +02:00
Nicola Murino
ea898ed104
silence lint warning
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-04 09:52:27 +02:00
Nicola Murino
0da12ef47b
ftp login: log if TLS is enabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-03 18:47:01 +02:00
Nicola Murino
92aa89263b
WebClient: hide submit button in profile page if no change is allowed
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-03 18:38:13 +02:00
Nicola Murino
a1af33c6aa
WebClient: allow to set TLS certificates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-03 18:30:03 +02:00
Nicola Murino
58a8b2b860
S3: add support for STS temporary credentials
Fixes #1558

Co-authored-by: Nazarii Mediukh <nazar.medykh@gmail.com>
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-02 20:01:30 +02:00
Nicola Murino
d9b91d074f
update README
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-02 19:27:41 +02:00
Nicola Murino
de62be6f21
update swagger ui
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-02 19:01:32 +02:00
Nicola Murino
acfd4c3e55
ftpd: allow to ignore ASCII transfer types
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-02 19:00:29 +02:00
Nicola Murino
dd446c805d
update sponsors section
remove Dendi

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-02 09:53:35 +02:00
Nicola Murino
d3f42e39db
move server version setting to common section
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-05-01 19:42:09 +02:00
Nicola Murino
7b5ad6c38d
workflows: update actions version
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-30 20:45:27 +02:00
Nicola Murino
8edce2055d
ftpd: fix random test cases failure on FreeBSD
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-30 19:50:52 +02:00
Nicola Murino
d19976cc3f
setup: update support link
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-28 17:08:59 +02:00
Nicola Murino
193d11587d
examples: update docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 14:40:20 +02:00
Nicola Murino
4d4d2ad801
remove obsolete rest-api-cli
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 14:11:14 +02:00
Nicola Murino
9b8407aeb0
remove fail2ban docs. Built-in defender should be used
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 14:09:41 +02:00
Nicola Murino
aa4a7aa6f6
update some descriptions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 14:01:33 +02:00
Nicola Murino
4bac74a149
remove docs and add a link to new documentation website
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 13:47:14 +02:00
Nicola Murino
dd9b0b151f
sftpfs: simplify client creation
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 12:03:38 +02:00
Nicola Murino
0a8a0ee771
revert #450
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 10:50:25 +02:00
Nicola Murino
2bcf05ca45
refactor for secrets management in API and private key handling in SFTPFs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 16:17:24 +02:00
Nicola Murino
aa426016f2
sftpd: remove folder_prefix
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 11:43:25 +02:00
Nicola Murino
1fc0f21506
hooks: remove logging output from external programs
This reverts #1208 because the contributor did not respond to our
request to sign the CLA

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 11:13:16 +02:00
Nicola Murino
e1fdc10ef8
remove robots.txt endpoint
This reverts #833 because the contributor did not respond to our
request to sign the CLA

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 11:00:55 +02:00
Nicola Murino
26d19abf61
remove reading data provider username and password from file
This reverts #1455 because the contributor cannot sign the CLA

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 10:57:38 +02:00
Nicola Murino
590a1f1429
update deps and CI
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 10:26:53 +02:00
Nicola Murino
a020a4e0ed
update dependencies in tests and examples
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-20 17:22:14 +02:00
Nicola Murino
ad7dcdb628
ssh: remove the ability to fully customize the software version
many clients rely on the version string to enable/disable some features.
We only allow to hide the version number; clients must be able to reliably
identify SFTPGo

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-20 17:15:15 +02:00
Nicola Murino
a38fd26cf6
minor refactor to memory provider initialization
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-20 16:45:20 +02:00
Nicola Murino
950cf67e4c
dataprovider: small refactor for password check
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-18 18:23:16 +02:00
Nicola Murino
d8341509e7
micro optimization for external process wrapping
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-16 18:34:40 +02:00
JK
2bbd8b3a5f
fix using rsync if running sftpgo as non-root user (#1535)
Signed-off-by: Jerome Küttner <j.kuettner@mittwald.de>
2024-04-15 12:52:08 +02:00
Nicola Murino
e315e48c39
CI: re-enable FreeBSD testing now that Go 1.22 is in quarterly
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-14 15:40:12 +02:00
Nicola Murino
150a338166
removed unused methods
these methods were used in the old UIs

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-14 09:23:57 +02:00
Nicola Murino
a957474740
SMTP: document why we always load templates in service mode
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-13 14:27:59 +02:00
oftenoccur
019edf38f3
chore: fix function name in comment (#1586)
Signed-off-by: oftenoccur <ezc5@sina.com>
2024-04-12 19:51:51 +02:00
Nicola Murino
8ca069f6de
add a template for pull requests
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-12 18:01:35 +02:00
Nicola Murino
2f16b06ffe
README: add a link to the compliance page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-12 17:58:40 +02:00
Nicola Murino
456517af87
notifier plugin: add support for login succeeded events
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-10 18:39:08 +02:00
Nicola Murino
e8140d7310
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-09 19:21:06 +02:00
Nicola Murino
ff48386cc8
store used data transfer as big integer
we originally stored these values as MB but since we use bytes now,
an integer field is not enough.

Fixes #1575

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-02 18:38:22 +02:00
Nicola Murino
70cb71acfa
WebClient: don't hide initial errors in files page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-02 18:37:29 +02:00
Nicola Murino
13418e9324
update deps and npm to the latest version
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-02 18:36:41 +02:00
Nicola Murino
1196727448
dataretention: remove ignore_user_permissions
Required permissions are now automatically granted as for any other
filesystem action

Fixes #1564

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-01 15:07:03 +02:00
Nicola Murino
aaae191710
WebAPI: ensure to check rootfs before creating directories
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-01 12:40:35 +02:00
Nicola Murino
1620e16b89
WebClient: fix move and copy
Regression introduced in fc023748c1

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-01 12:00:06 +02:00
Nicola Murino
db577b154e
webclient: add more test cases for shares
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-01 11:42:22 +02:00
Nicola Murino
c6164b8ae7
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-31 20:43:48 +02:00
Nicola Murino
fc023748c1
WebClient: improve file uploads
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-31 20:42:28 +02:00
Nicola Murino
cb3bc3f604
update OpenAPI definition
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-18 19:32:01 +01:00
Nicola Murino
1dd63c29ec
update translations
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-17 11:53:45 +01:00
Nicola Murino
cc9a0d4dc2
add time-based access restrictions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-17 11:30:03 +01:00
Nicola Murino
74dd2a3b9a
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-16 10:30:46 +01:00
Nicola Murino
55c8677443
restored the log if retrieving directory entries fails
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-12 18:31:01 +01:00
Nicola Murino
26d3105f54
groups: add role placeholder
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-12 18:21:50 +01:00
Nicola Murino
ca2757d41e
copy: fix quota for FsFileCopier
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-12 08:43:23 +01:00
Nicola Murino
f38966c6ac
WebClient: refactor long-running tasks to improve browser compatibility
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-11 18:19:57 +01:00
Nicola Murino
baaef63d1d
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-07 18:11:26 +01:00
Nicola Murino
4d357a6a57
EventManager: allow to check for inactive users
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-04 19:48:10 +01:00
Nicola Murino
8b2188fcb6
remove some useless nil checks
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-03-02 18:49:07 +01:00
Nicola Murino
799fdd7098
allow IPs in defender safe list to exceed max per-host connections
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-27 18:22:21 +01:00
Nicola Murino
12f599fd65
WebUI: skip checks for static resource
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-25 18:19:21 +01:00
Nicola Murino
be2ed1089c
ssh: add username to sftp auth errors
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-25 15:45:50 +01:00
Nicola Murino
92911bda2b
require at least 2048 bits for RSA certificates/keys
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-25 11:12:57 +01:00
Nicola Murino
f7d9e56cac
ssh: remove moduli, log negotiated algorithms
Fixes #1324

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-24 20:35:09 +01:00
Nicola Murino
a577d8b3cd
WebAdmin: allow to disable 2FA
Before it was only possible using the REST API

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-23 18:24:07 +01:00
Nicola Murino
76ffa107dd
check admins' two-factor requirements in the disable API as well
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-22 19:05:16 +01:00
Nicola Murino
9a6a65931e
two-factor auth: fixed validation of conflicting settings
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-22 18:20:51 +01:00
Nicola Murino
de089e51fd
Web: allow to require password change and two-factor for admins
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-21 20:45:10 +01:00
Nicola Murino
51ae2d7301
add copy permission
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-20 18:19:09 +01:00
Nicola Murino
e5fc1bd574
docs: replace the images relating to the old theme
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 19:31:37 +01:00
Nicola Murino
aaf310ffff
add some notes about internationalization support
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 13:13:48 +01:00
Nicola Murino
3ad86274d8
CI: disable tests on FreeBSD until Go 1.22 is available
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 12:29:59 +01:00
Nicola Murino
b4afdac8a0
fix test cases on Windows (again)
in Go 1.22 Readdir now works on Windows in the same way as on other
platforms

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 12:17:51 +01:00
Nicola Murino
19d405fa3a
WebClient: make directory loading message more evident
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 11:51:15 +01:00
Nicola Murino
d92f85d1dd
WebClient: improve error message when trying to move non-empty folder
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 11:22:41 +01:00
Nicola Murino
5a319dc64f
update to Go 1.22
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 10:14:14 +01:00
Nicola Murino
a45aeb3bd6
WebAdmin: allow to reorder and search groups and actions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 10:14:10 +01:00
Nicola Murino
162376fd74
add a nil check for attributes
just defensive code

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 10:14:05 +01:00
Nicola Murino
0d4e4175a8
CI: update golangci-lint action to v4
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 10:14:00 +01:00
Nicola Murino
7ca390c85a
CI: fix codecov action
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 10:13:57 +01:00
Nicola Murino
d413775060
vfs: log progress after each page iteration
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 10:13:51 +01:00
Nicola Murino
db0a467d33
refactor metadata support
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-18 10:13:46 +01:00
Nicola Murino
e2ff12c589
fix test cases on Windows
on Windows f.Readdir returns no error if f is closed

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-17 13:41:40 +01:00
Nicola Murino
849f0bd0a8
WebAdmin: clearly indicate that metadata check is no longer supported
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-17 12:33:16 +01:00
Nicola Murino
e61fb42cbc
remove metadata plugin
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-17 12:30:47 +01:00
Nicola Murino
d8339ab967
WebClient: update pdfobject
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-16 18:56:17 +01:00
Nicola Murino
410b7cd512
fix remaining lint warnings
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-16 18:55:58 +01:00
Nicola Murino
ad75543172
fix some new lint warnings
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-15 21:13:45 +01:00
Nicola Murino
757185256c
i18next: fix fallback language
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-15 20:57:19 +01:00
Nicola Murino
04dcb65eb0
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-15 20:55:18 +01:00
Nicola Murino
1ff55bbfa7
add DirLister interface
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-15 20:53:56 +01:00
Nicola Murino
c60eb050ef
WebAdmin: improve the error message when trying to delete referenced resources
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-05 19:18:37 +01:00
Nicola Murino
d7975d8d76
WebAdmin: add expired to the status in users page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-05 19:03:05 +01:00
Nicola Murino
6b07908084
WebAdmin: add groups to users page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-05 19:02:43 +01:00
Nicola Murino
c49553abd0
keyboard interactive: ask only the passcode if it is the second step
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-05 19:02:01 +01:00
Nicola Murino
ae309d64c4
WebClient: disable indicator if we redirect from the login page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-04 21:13:04 +01:00
Nicola Murino
8385acd0e3
Redirect to two-factor auth page after creating the first admin
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-04 20:58:29 +01:00
Nicola Murino
e5836c8118
WebUI: add a JSON helper function
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-04 18:16:10 +01:00
Nicola Murino
c23d779280
WebClient: load shares using an async request
instead of rendering them directly within the template

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-04 14:33:51 +01:00
Nicola Murino
364c9c8162
WebClient: improve rendering of read only fields in profile page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-04 12:36:39 +01:00
Nicola Murino
3158190945
WebClient: respect second factor requirements enforced at group level
Fixes #1506

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-04 12:09:47 +01:00
Nicola Murino
c8da72a7f7
add WP Engine to the sponsors section, thank you!!!
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-03 16:41:28 +01:00
Nicola Murino
6e041895c7
workflows: update actions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-03 16:29:44 +01:00
Nicola Murino
0aa6013342
docker: add back Distroless image
this is possible thanks to a new project sponsor

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-03 15:38:43 +01:00
Nicola Murino
6074ed21f7
dataproviders: return a uniform error for foreign key violations
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-03 14:24:50 +01:00
Nicola Murino
dcecb79f63
fix expected strings in some test cases
plain strings were converted to translation codes

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-03 12:51:12 +01:00
Nicola Murino
71e01ab26d
new WebAdmin: add test cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-03 12:42:05 +01:00
Nicola Murino
7ad6d99bd7
WebUI: remove now unused assets
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-01 20:48:52 +01:00
Nicola Murino
ad80d4e475
WIP new WebAdmin: event rules
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-02-01 20:32:43 +01:00
Nicola Murino
c85601146d
WIP new WebAdmin: event actions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-31 20:49:25 +01:00
Nicola Murino
b18b37042d
WIP new WebAdmin: add missing translations for events page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-29 10:49:21 +01:00
Nicola Murino
0900a63b83
WIP new WebAdmin: fix back pagination in events page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-28 19:54:24 +01:00
Nicola Murino
143d4611ba
WIP new WebAdmin: events page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-28 19:38:01 +01:00
Nicola Murino
caa1d70aab
WebUI: add a base template for info messages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-28 10:22:16 +01:00
Nicola Murino
a275ef17a8
relax Unix domain socket permissions so that they are group writable
Fixes #1507

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-28 09:34:07 +01:00
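A minimal sketch of how a Unix domain socket can be made group writable after binding, assuming os.Chmod on the socket path; the path is illustrative and this is not SFTPGo's actual listener code:

```go
package main

import (
	"log"
	"net"
	"os"
)

func main() {
	const socketPath = "/tmp/example.sock" // illustrative path

	listener, err := net.Listen("unix", socketPath)
	if err != nil {
		log.Fatal(err)
	}
	defer listener.Close()

	// Relax the default permissions so that members of the owning group
	// can also connect to (write to) the socket.
	if err := os.Chmod(socketPath, 0o660); err != nil {
		log.Fatal(err)
	}

	log.Printf("listening on %s with group-writable permissions", socketPath)
}
```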
Nicola Murino
856aed2d60
WebUI: fix long texts in message pages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-28 09:02:50 +01:00
Nicola Murino
b52a517b16
CI FreeBSD: update to FreeBSD 14
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-27 12:50:13 +01:00
Nicola Murino
69da5c10c6
WIP new WebAdmin: configs page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-27 12:48:15 +01:00
Nicola Murino
d01fccf28c
WIP new WebAdmin: maintenance page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-26 21:03:41 +01:00
Nicola Murino
9fcff83f8f
WIP new WebAdmin: status page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-25 19:26:51 +01:00
Nicola Murino
eec9c449d4
vfs: make PipeReader an interface
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-24 19:59:50 +01:00
Nicola Murino
8180b75ef1
WIP new WebAdmin: IP lists pages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-24 19:23:15 +01:00
Nicola Murino
d381304136
WIP new WebAdmin: admin/admins pages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-22 20:22:41 +01:00
Nicola Murino
d67f00546a
WebUI: improve style for readonly fields
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-21 18:10:14 +01:00
Nicola Murino
810bf4542f
WebUI: add autocomplete="new-password" for internal password fields
See:

https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/autocomplete

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-21 17:39:08 +01:00
Nicola Murino
e38350e8b3
WIP new WebAdmin: role page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-21 17:19:25 +01:00
Nicola Murino
3f479c5537
WIP new WebAdmin: roles page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-21 16:49:04 +01:00
Nicola Murino
0d387d9799
prefer errors.As to errors.Is
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-21 15:46:38 +01:00
Nicola Murino
8648351fc7
WIP new WebAdmin: connections page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-20 15:35:05 +01:00
Nicola Murino
73b2573b14
WIP new WebAdmin: two factor auth page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-18 20:25:07 +01:00
Nicola Murino
91802fad3e
WIP new WebAdmin: profile, change password, message pages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-18 19:18:57 +01:00
Nicola Murino
87451560e3
normalize common database errors
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-17 17:36:35 +01:00
Nicola Murino
5ac99ee556
WIP new WebAdmin: folder page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-16 19:51:37 +01:00
Nicola Murino
d939a82225
user: add TLS certificates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-14 21:36:23 +01:00
Nicola Murino
0722c4369b
WIP new WebAdmin: folders page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-14 16:59:27 +01:00
Nicola Murino
1a0f734a9c
WIP new WebAdmin: remove some hard coded strings
so they can be localized

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-14 09:41:39 +01:00
Nicola Murino
bf94f8b87c
WIP new WebAdmin: group page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-14 09:09:42 +01:00
Nicola Murino
5c8214e121
WIP new WebAdmin: groups page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-11 19:26:13 +01:00
Nicola Murino
e6c8b0c86b
Merge branch 'main' of github.com:drakkan/sftpgo
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-10 20:16:28 +01:00
Nicola Murino
03ebd5b841
fix a lint warning from the previous PR
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-10 20:15:51 +01:00
Anthrazz
c21b434c4e
defender: implement logging of events and bans (#1495)

Signed-off-by: Anthrazz <25553648+Anthrazz@users.noreply.github.com>
2024-01-10 20:12:57 +01:00
Nicola Murino
113724f340
Merge branch 'main' of github.com:drakkan/sftpgo
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-10 20:01:54 +01:00
Nicola Murino
9cde0909b0
test cases: replace expired TLS certificates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-10 19:53:48 +01:00
Nicola Murino
86eab21be8
WebAdmin: fix parsing form field
some field names changed with the new UI

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-10 18:49:20 +01:00
Nicola Murino
73d7779d89
Docker Alpine: update to 3.19
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-09 20:12:58 +01:00
Nicola Murino
9c31111249
docs: update OpenSUSE instructions to auto refresh the repo
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-09 20:09:38 +01:00
Nicola Murino
e1b5d2fe39
WebAdmin: use the new UI for user pages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-09 19:54:08 +01:00
Nicola Murino
ca880f6cbb
WebAdmin: completed base page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-01 20:09:15 +01:00
Nicola Murino
784b7585c1
remove end year from Copyright notice in files
so we don't have to update all the files every year

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-01-01 11:31:45 +01:00
Nicola Murino
ce0693feda
WebUIs: move more shared components to common/base.html
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-31 17:35:14 +01:00
Nicola Murino
3e47a4f664
WebAdmin: use the new theme for the login and setup page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-30 19:12:22 +01:00
Nicola Murino
7318d1f32a
Web: move baselogin template to common
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-30 14:13:25 +01:00
Nicola Murino
259566fcce
WebUI: allow absolute URLs for disclaimers
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-28 19:59:06 +01:00
Nicola Murino
3121c35437
WebClient: do not silently overwrite files/directories
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-28 18:43:07 +01:00
Nicola Murino
e35e07acdb
WebClient: propose to add files for empty dirs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-26 19:07:50 +01:00
Nicola Murino
a65e7782de
WebClient shares: improve feedback after link copy
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-26 09:23:43 +01:00
Nicola Murino
a9341d7c0f
WebClient: various UI/UX improvements
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-26 08:59:52 +01:00
Nicola Murino
723c15fb3e
add IDCS to the sponsors section, thank you!!!
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-24 14:07:59 +01:00
Nicola Murino
c7ba326540
update deps in tests and example projects
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-21 18:07:03 +01:00
Nicola Murino
61b5f97bf2
scp: close transfers before sending upload errors
This change should fix the random failure in TestSCPTransferQuotaLimits
because the quota is already updated when the scp command ends.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-21 18:03:07 +01:00
Nicola Murino
d396c24ad4
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-18 20:49:36 +01:00
Nicola Murino
5f30ea3658
tests: add some logs to debug some sporadic test failures
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-17 10:46:21 +01:00
Nicola Murino
ba472c3c67
portable mode: fix disabling services if enabled using a config file
clarify that a config file/env vars can still be used for further
customizations

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-17 09:09:18 +01:00
Nicola Murino
00ce4e4685
EventManager: add uid and extension placeholders
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-16 11:39:32 +01:00
Nicola Murino
26a3c3085b
WebClient: uniform translation indentation
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-14 19:45:15 +01:00
Nicola Murino
f6fac68e1f
update crowdin.yml
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-14 19:44:50 +01:00
Nicola Murino
4cc95f7269
Update Crowdin configuration file
2023-12-14 19:40:06 +01:00
Nicola Murino
fe41109c76
WebClient: add toast notifications
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-14 10:36:25 +01:00
Nicola Murino
cec6420909
add some spaces between sponsor logos
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-13 19:02:01 +01:00
Nicola Murino
55847e7f0e
add Jump Trading to the sponsors section, thank you!!!
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-13 18:10:00 +01:00
Nicola Murino
c76a18168b
WebClient: add language switcher, complete localization support
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-13 18:03:42 +01:00
Nicola Murino
f721cf5c40
WebClient: fix test cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-12 19:04:32 +01:00
Nicola Murino
ff2eed8ee9
portable mode: fix panic while validating TLS certificates
Fixes #1480

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-12 18:18:19 +01:00
Nicola Murino
61fe7c39a7
WebClient: allow to pass args for localized errors from the backend
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-12 18:04:14 +01:00
Nicola Murino
691133d7c8
WebClient: improve test coverage
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-10 18:34:09 +01:00
Nicola Murino
8ce9af4adf
dataprovider: sort related resources by name
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-10 17:50:48 +01:00
Nicola Murino
d8b040e57c
refuse to start if the config file is invalid
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-10 16:50:15 +01:00
Nicola Murino
c71f0426ae
WebClient WIP: add support for localizations
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-12-10 16:40:13 +01:00
Nicola Murino
7572daf9cc
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-28 20:14:57 +01:00
Nicola Murino
56d305fde4
CI: re-enable tests on FreeBSD
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-25 19:03:28 +01:00
Nicola Murino
74836af66e
WebUI: extract a common struct for all pages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-25 18:30:56 +01:00
Nicola Murino
ed828458ab
WebUI: add title to all pages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-25 18:11:10 +01:00
Nicola Murino
6175acb572
add support for reading more secrets from files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-24 20:43:50 +01:00
patrickap
a91cf22e0f
provider: support for username and password file (#1455)
Signed-off-by: patrickap <patrick.schlageter@web.de>
2023-11-24 20:28:51 +01:00
Nicola Murino
62854e4802
WebClient: use flatpickr as time picker
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-22 20:14:49 +01:00
Nicola Murino
bde5713ed6
WebClient: cleanup some js code
also returns an error if file or directory names contain a slash
instead of silently replacing slashes with a similar symbol

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-22 15:57:33 +01:00
Nicola Murino
c14484856e
WebClient: update pdfobject
also add csp nonce when loading javascript files

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-21 16:24:43 +01:00
Nicola Murino
84e387cc9c
WebClient: fix state for shares page
rebuilt the theme and removed unused components

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-19 13:27:49 +01:00
Nicola Murino
ac309cf9a3
WebClient: remove data schema usage from mfa page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-18 20:06:31 +01:00
Nicola Murino
59bdd4bc4e
WebClient: add support for more languages to the editor
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-18 14:17:28 +01:00
Nicola Murino
271d958acf
S3: fix compatibility with the latest SDK
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-18 12:00:53 +01:00
Nicola Murino
bfa17314c6
keyboard interactive auth: respect hook disabled setting
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-18 11:28:15 +01:00
Nicola Murino
6439569f36
WebClient: add csp nonce to CodeMirror
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-18 11:18:31 +01:00
Nicola Murino
50a9ac0163
WebClient: use standard HTML5 video tag
video-js does not work well with CSP

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-17 21:42:22 +01:00
Nicola Murino
1a765c7ff7
WebClient share: add a download page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-17 19:10:03 +01:00
Nicola Murino
61e6cc6985
WebClient: remove remaining inline onclick events
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-16 18:55:14 +01:00
Nicola Murino
37b0c229fc
Web UI: propagate CSPNonce to templates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-15 18:48:16 +01:00
Nicola Murino
d32d0d7587
WebClient: remove href to javascript
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-14 19:59:53 +01:00
Nicola Murino
3c522961af
WebClient: remove inline onclick from file edit page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-14 19:41:20 +01:00
Nicola Murino
2d9e7dfba2
WebClient: remove inline onclick from MFA page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-14 19:38:09 +01:00
Nicola Murino
4a737be421
WebClient: replace some inline onclick with event listeners
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-13 22:09:55 +01:00
Nicola Murino
450ae868ff
WebClient: update theme to the latest version
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-12 16:28:21 +01:00
Nicola Murino
c8531a5492
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-10 20:39:51 +01:00
Nicola Murino
c5c5860012
ssh: allow to configure public key auth algorithms
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-09 20:03:04 +01:00
Nicola Murino
f83600225b
remove support for sha256-simd
the performance differences are no longer relevant.
We can restore this support if anyone reports a performance regression
on any particular hardware

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-09 09:34:20 +01:00
Nicola Murino
a1346aa071
httpd: fixed logging of refused requests due to rate limiting/blocklisting
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-08 19:11:00 +01:00
Nicola Murino
894e12e285
WebClient: refactor alerts
Fix event handling when disabling MFA

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-07 18:52:05 +01:00
Nicola Murino
96c614550f
WebClient: remove inline style from HTML elements
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-07 18:09:24 +01:00
Nicola Murino
6295be786f
WebClient: add a ping URL
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-06 19:58:39 +01:00
Nicola Murino
789d61f170
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-06 19:17:02 +01:00
Nicola Murino
d5a9bec3da
WebClient: allow bulk move or copy actions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-06 19:10:35 +01:00
Nicola Murino
9e9d6a5585
WebClient: allow to share multiple items from the files page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-06 18:46:12 +01:00
Nicola Murino
654ce2e349
s3: allow to skip TLS verification
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 19:27:11 +01:00
Nicola Murino
9456884584
WebClient: fix display of long usernames in dropdown menu
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:33:16 +01:00
Nicola Murino
010c36cab5
WebClient: allow to set a list of default CSS
The new WIP WebClient requires 2 CSS files

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:30:33 +01:00
Nicola Murino
b872c423ee
Remove external integrations; they are not supported in the new WIP WebClient
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:30:21 +01:00
Nicola Murino
2ee2098a48
WebClient: add test cases for new backend code
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:30:17 +01:00
Nicola Murino
1acc2151cf
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:30:11 +01:00
Nicola Murino
0671178e29
WebClient: fix test cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:30:06 +01:00
Nicola Murino
7991b07165
WebClient: update video js
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:29:21 +01:00
Nicola Murino
37facd21d4
WebClient shares: fix view pdf files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:29:15 +01:00
Nicola Murino
b4d9bf9c16
update issue templates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:29:10 +01:00
Nicola Murino
5452c3c121
update swagger ui
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:29:06 +01:00
Nicola Murino
9322701615
WIP: new WebClient UI
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-05 17:26:29 +01:00
Nicola Murino
2fdcb44c14
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-03 17:26:45 +01:00
Nicola Murino
87b12af932
static files: refactor neutered http.FileSystem
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-03 17:22:28 +01:00
Nicola Murino
75c2bcff8f
TLS: disable by default cipher suites using RSA key exchange
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-01 18:35:23 +01:00
Nicola Murino
822a05aa20
TLS ciphers: use a more secure default if no preference is specified
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-01 16:39:04 +01:00
Nicola Murino
4139c79a77
improve docs and update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-01 10:58:07 +01:00
Nicola Murino
379f87f571
loaddata: do not reveal the existence of the files in error messages
return a generic error message

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-01 10:54:20 +01:00
Nicola Murino
51febb19fa
httpd: add database based token manager
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-11-01 10:54:14 +01:00
Nicola Murino
5c938e46b7
allow to restrict the env vars passed to plugins
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-29 15:19:30 +01:00
Nicola Murino
9a7a3b00dc
EventManager commands: allow to retrieve env vars from the process env
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-29 11:52:53 +01:00
Nicola Murino
daf643596d
WebClient: fix icon for 0 byte files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-29 08:27:00 +01:00
Nicola Murino
bc8d71dfc7
editfiles: fix label
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-29 08:18:58 +01:00
Nicola Murino
8c31cc47b0
web UIs: fix dismissable alerts
alerts can now be shown again after the user dismisses them

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-29 08:17:24 +01:00
Nicola Murino
59378104b7
webclient: fix link for shares with a trailing space
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-29 07:33:56 +01:00
Nicola Murino
116be362ba
update nfpm to 2.34.0
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-29 07:15:14 +01:00
Nicola Murino
e1c3097546
event rules: add test case for rename after upload
This is a common pattern in WinSCP

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-28 21:02:14 +02:00
Nicola Murino
9bcdc90ca8
add basic test cases for ALPN protocols
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-28 13:07:23 +02:00
Nicola Murino
7da5d8fcea
config: rename protocols to tls_protocols
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-28 12:42:05 +02:00
Nicola Murino
4a15775f65
allow to configure ALPN protocols
Fixes #1406

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-28 12:35:26 +02:00
Nicola Murino
691e44c1dc
add more upload modes
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-25 19:05:37 +02:00
Nicola Murino
90bce505c4
improve conditional resuming of uploads
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-24 19:14:33 +02:00
Nicola Murino
320e404e4d
vfs: make PipeWriter an interface
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-23 09:56:46 +02:00
Nicola Murino
e3c4ee0833
add support for conditional resuming of uploads
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-23 09:56:40 +02:00
CUI Hao
f1e52d99ba
webadmin: fix typo on webpages (#1438)
Signed-off-by: CUI Hao <cuihao.leo@gmail.com>
2023-10-23 09:54:50 +02:00
Nicola Murino
fc460922ad
events: fix event type string conversion
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-21 11:25:39 +02:00
Nicola Murino
ba9df51b2e
fix or suppress lint warnings detected by golangci-lint 1.55.0
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-20 20:31:17 +02:00
Nicola Murino
6282f95bd3
improve temp dirs handling and some logs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-17 18:06:52 +02:00
Nicola Murino
254824b781
Docker: update to bookworm
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-13 15:55:27 +02:00
Nicola Murino
40d0945450
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-11 18:48:29 +02:00
Nicola Murino
63972edb96
httpd: add a test case for StripSlash middleware
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-08 10:46:17 +02:00
Nicola Murino
da0eb5037e
httpd: skip StripSlash middleware for URL ending with multiple slashes
Fixes #1434

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-08 10:40:08 +02:00
Nicola Murino
4b685b21a2
configs: fix backward compatibility
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-07 22:02:10 +02:00
Nicola Murino
f05fe78737
ssh: refactor host key algorithm restriction
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-07 16:07:19 +02:00
Nicola Murino
19a95d8c55
httpfs: limit body size
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-07 11:28:16 +02:00
Nicola Murino
64c7588a44
sftpd: improve permissions checking test cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-10-04 19:41:50 +02:00
Nicola Murino
c55196a525
portable mode: allow to set config dir/config file
The -c flag is no longer used for SSH commands.
This is a backward incompatible change

Fixes #1423

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-09-25 18:20:09 +02:00
Nicola Murino
1da24ea0af
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-09-24 08:44:16 +02:00
Nicola Murino
75278d64de
docs repo: add instructions for Suse/OpenSuse
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-09-14 19:41:48 +02:00
Nicola Murino
e54fd46a9e
SQL providers: make sure we don't exceed the allowed placeholders
Fixes #1415

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-09-12 19:15:40 +02:00
Nicola Murino
fac022090d
httpd: disable directory index for static files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-09-08 19:55:45 +02:00
Nicola Murino
dc7c829b73
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-09-08 19:19:12 +02:00
Nicola Murino
aefcea034a
validate API key scope
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-09-08 18:54:11 +02:00
Nicola Murino
1cbaa7c77b
WebUIs: update the css to hide the theme's hard-coded background image
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-09-08 18:07:23 +02:00
Nicola Murino
5ef0a2ed4b
External/plugin auth: check for password change after empty response
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-26 12:04:41 +02:00
Nicola Murino
a592e388cd
ftpd: advertise TLS support only if really enabled
if we don't have a global TLS configuration, advertise TLS only on the
bindings where it is configured instead of failing at runtime

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-21 15:48:29 +02:00
Nicola Murino
5d4145900f
CI FreeBSD: disable until Go 1.21 is available in Quarterly
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-20 23:05:40 +02:00
Nicola Murino
b94ec7597c
smtp: set default port to 587
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-20 22:32:03 +02:00
Nicola Murino
c437f0ad76
logger: update mail adapter
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-20 21:42:41 +02:00
Nicola Murino
7f7d2e57c2
docs: minor improvements
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-20 19:22:38 +02:00
Nicola Murino
397cad93df
httpd request logger: set log level based on the status code
Fixes #1393

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-20 19:01:16 +02:00
Nicola Murino
ce8dbda44b
CI FreeBSD: use Go 1.21
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-20 16:07:12 +02:00
Nicola Murino
62b87083bb
ftpd: add support for TLS session reuse
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-20 16:00:36 +02:00
Nicola Murino
de35eb77cb
ftpd: use the extra field for certificate authentication
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-18 14:39:28 +02:00
Nicola Murino
163662a65a
eventmanager: replace placeholders in multipart filename
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-14 14:34:25 +02:00
Nicola Murino
6395fa0b67
eventmanager: fix params copy
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-12 19:03:47 +02:00
Nicola Murino
f03fdd1155
add object metadata to notification events
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-12 18:51:47 +02:00
Nicola Murino
8ab4a9aa70
all: update to Go 1.21
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-10 19:23:55 +02:00
Nicola Murino
6c482a248d
portable mode: add WebClient
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-10 19:02:55 +02:00
Nicola Murino
25450d9efc
fix event validation test case
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-09 19:00:59 +02:00
Nicola Murino
60cc07bc81
eventmanager: add DELETE method to HTTP notifications
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-09 18:44:17 +02:00
Nicola Murino
d8dd4b2131
CI: fix "Build packages" workflow after Go 1.21 release
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-08 19:47:12 +02:00
Nicola Murino
b7b54b54c3
remove Chinese README
It's constantly outdated and I'm unable to maintain it.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-08 18:31:48 +02:00
Nicola Murino
5011002d84
allow to set umask on *NIX platforms
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-08 18:30:42 +02:00
Nicola Murino
f5f56129df
add VPS2day to the sponsors section, thank you!!!
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-07 19:36:31 +02:00
Nicola Murino
63212bb033
remove the legacy PreferServerCipherSuites configuration
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-07 19:11:48 +02:00
Nicola Murino
830116bcf2
shares: allow to force an expiration date
this is a soft requirement: users can reactivate expired shares by
updating the expiration date

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-07 19:07:20 +02:00
Nicola Murino
ea96fe9a26
postgres provider: add support for "allow" and "prefer" SSL modes
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-05 20:01:14 +02:00
Nicola Murino
ebdda1b62e
pre-login hook doc: add a note about partial updates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-05 12:12:16 +02:00
Nicola Murino
54a76e8c45
s3: remove usage of the now deprecated EndpointResolver
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-05 11:58:01 +02:00
Nicola Murino
132d18d5d1
sftpd: fix keyboard interactive test cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-04 21:32:14 +02:00
Nicola Murino
75e6ef6132
sftpd: remove diffie-hellman-group18-sha512 KEX
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-04 21:10:39 +02:00
Nicola Murino
af0d7b48ad
sftpd: refactor multi-step authentication
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-08-04 20:56:23 +02:00
guangwu
c03bcb3a8a
fix: typo (#1381)
Signed-off-by: guoguangwu <guoguangwu@magic-shield.com>
2023-08-04 09:20:25 +02:00
Nicola Murino
39259ad6a9
Add a link to the Azure Kubernetes Service offering
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-07-31 19:47:04 +02:00
Nicola Murino
2c070b7eda
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-07-31 19:44:36 +02:00
Nicola Murino
0413c0471c
add a specific permission to manage folders
creating/updating folders embedded in users is no longer supported.

Fixes #1349

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-07-23 18:48:49 +02:00
Bruce Weirdan
e4be4048e3
Expand reference to http clients in hooks docs (#1359)
* Expand reference to http clients in hooks docs

Before, it wasn't really clear where clients were configured and that their configuration also affected headers.

Signed-off-by: Bruce Weirdan <weirdan@gmail.com>

* Update more mentions of HTTP clients

Signed-off-by: Bruce Weirdan <weirdan@gmail.com>

---------

Signed-off-by: Bruce Weirdan <weirdan@gmail.com>
2023-07-18 15:37:57 +02:00
Nicola Murino
00366fce07
shares: respect password strength
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-07-16 16:51:38 +02:00
Nicola Murino
e88172dd7e
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-07-15 12:35:30 +02:00
Nicola Murino
a5cb26daf2
pgx: revert to an older version
pgx 5.4.1 has a memory leak

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-07-08 17:08:31 +02:00
Nicola Murino
4f8794a255
file patterns: fix denied except rules
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-07-08 17:02:47 +02:00
Nicola Murino
5e5a09f164
make GroupConditionPatterns uniform with the accepted PR
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-07-02 10:57:29 +02:00
David Stäheli
f78e4b0443
check for multiple inverse matches (#1332)
* update check for multiple inverse matches

Signed-off-by: David Stäheli <mistrdave@gmail.com>

* after a match, return true directly

Signed-off-by: David Stäheli <mistrdave@gmail.com>

* apply same behaviour to checkEventGroupConditionPatterns

Signed-off-by: David Stäheli <mistrdave@gmail.com>

* fix spelling mistake in function name

Signed-off-by: David Stäheli <mistrdave@gmail.com>

---------

Signed-off-by: David Stäheli <mistrdave@gmail.com>
2023-07-02 09:49:21 +02:00
Nicola Murino
51d8f3b436
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-29 14:42:16 +02:00
Nicola Murino
ecc01f4f37
Windows setup: add PrepareToInstall event function
so the service is stopped before the installation starts and we
avoid the force-close app warning

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-29 11:23:04 +02:00
Joshua Treudler
2af42da371 fixed typo
Signed-off-by: Joshua Treudler <joshua@treudler.net>
2023-06-28 13:35:53 +02:00
Nicola Murino
d1e4ee7bc8
config: fix loading commands args from env vars
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-25 21:31:57 +02:00
Nicola Murino
4440c49174
add auth plugin
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-25 18:57:30 +02:00
Nicola Murino
76964a6b85
check second factor after plugin authentication
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-25 07:16:26 +02:00
Nicola Murino
66f360e66c
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-18 07:15:39 +02:00
Nicola Murino
a38ce460bb
WebClient: show user quota
Also remove per-source data transfer limits. This was an
oversight

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-16 21:06:21 +02:00
Nicola Murino
80f21d1c91
WebAdmin: don't show hidden deny policy for allowed patterns
The deny policy only applies to denied patterns; showing an allowed
pattern as hidden would confuse users

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-14 19:00:52 +02:00
Nicola Murino
1c1b76011f
WebAdmin: relax key prefix validation
try to automatically fix leading and trailing slashes

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-12 19:13:16 +02:00
Nicola Murino
957d3a7b4d
CockroachDB: use unordered_unique_rowid for primary keys
sequential values in a primary key do not perform as well

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-10 18:23:35 +02:00
Nicola Murino
d7d7b0bbf0
dataprovider: fix sql for CockroachDB
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-10 15:32:51 +02:00
Nicola Murino
a3156de4a8
CI: fix MariaDB initialization
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-10 14:09:50 +02:00
Nicola Murino
99424bfa58
squash database migrations
SQLite: remove AUTOINCREMENT from primary keys. It is not needed.

Postgres: switch from serial to identity for primary keys.
This means Postgres < 10 will not work in v2.6.x

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-10 13:06:24 +02:00
Nicola Murino
d120957736
CI: set Go version to 1.20.5
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-08 19:44:19 +02:00
Nicola Murino
324d695d93
try to fix a randomly failing test case
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-08 19:41:58 +02:00
Nicola Murino
9d60972743
WebClient: redirect to the requested URL after login
This feature is only useful and enabled for file manager URLs

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-08 18:14:47 +02:00
Nicola Murino
f938af5a61
WebClient: fix sorting by size
Fixes #1313

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-04 21:45:31 +02:00
Nicola Murino
9ccdc3a597
add code of conduct
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-04 09:03:58 +02:00
Nicola Murino
3499edd5c2
WebUI: remove leading and trailing spaces from user-submitted input
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-04 08:45:17 +02:00
Nicola Murino
9470cd6e69
multi-node installations: use a different backup path for each node
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-03 17:54:24 +02:00
Nicola Murino
1f7433e798
getting started guide: add a link to the available installation methods
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-03 17:29:12 +02:00
Nicola Murino
4ba3d026b4
update README
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-03 17:19:51 +02:00
Nicola Murino
98c639579f
add issue templates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-03 17:13:43 +02:00
Nicola Murino
74e5999c63
added support for verifying sha256/sha512 password hashes
this simplifies the migration of users from some proprietary products

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-03 16:58:45 +02:00
Nicola Murino
48939b2b4f
add XOAUTH2
start the countdown, let's see how long it takes for your favorite
Go-based proprietary SFTP server to notice this change, copy the SFTPGo
code and thus violate its license, and announce the same feature :)

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-06-03 16:17:32 +02:00
Nicola Murino
8339fee69d
smtp: add debug option
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-30 19:11:28 +02:00
Nicola Murino
a2fc7d3cc5
update security policy
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-26 19:22:37 +02:00
Nicola Murino
ae7954eee2
WebUIs: fix disclaimer paths
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-26 17:59:38 +02:00
Nicola Murino
8f934f7c82
email action: allow to configure Bcc
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-25 19:55:27 +02:00
Nicola Murino
b2781e0bfc
WebAdmin: Set TLS username to empty string if disabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-25 18:24:51 +02:00
Nicola Murino
e11473cf52
config: limit the size for env files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-25 05:25:28 +02:00
Nicola Murino
f8f8962ccb
file patterns: evaluate allowed filters before the denied ones
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-24 19:56:53 +02:00
Nicola Murino
2238043efd
EventManager: add email field placeholder
Fixes #1288

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-24 19:08:51 +02:00
Nicola Murino
d9426cef20
docker: update docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-23 13:06:42 +02:00
Nicola Murino
052d586364
docker: remove distroless
Fix #1295

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-23 13:04:42 +02:00
Nicola Murino
11ba41e903
Revert "Docker: try to add CAP_NET_BIND_SERVICE to the binary"
This reverts commit 8d12872608.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-23 12:59:49 +02:00
Nicola Murino
255985b7b0
Windows: start the service in a goroutine
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-23 12:59:27 +02:00
Nicola Murino
2b77709a04
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-21 09:23:24 +02:00
Nicola Murino
5b4a1bda2e
set version to 2.5.1
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-20 17:39:23 +02:00
Nicola Murino
3f94f6d0e7
proxy protocol: fix require policy in some edge cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-20 16:08:57 +02:00
Nicola Murino
d28a53a6cf
webdav: fix caching with external auth/plugins
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-20 12:39:07 +02:00
Nicola Murino
963cec124e
oidc docs: fix typo
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-18 18:43:46 +02:00
Nicola Murino
bbaca578cd
EventManager: add content type option for email config
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-17 19:28:13 +02:00
Nicola Murino
da30389989
fix OpenAPI schema, update js deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-17 18:24:33 +02:00
Nicola Murino
52ec36dbd6
update pwd reset template. Update deps and use new features from the OIDC library
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-17 18:10:57 +02:00
Artem Kajalainen
b524d178dd
docs: add info about IRSA for S3 authn
Signed-off-by: Artem Kajalainen <artem@iki.fi>
2023-05-16 19:35:13 +02:00
Nicola Murino
e0d9b8bddf
WebClient: update password change timestamp after password reset
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-16 19:15:45 +02:00
Nicola Murino
19da923369
webdav: add support for parsing more time formats
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-16 18:51:42 +02:00
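As an illustration of the parsing change above, a client-supplied modification time can be checked against several accepted layouts until one succeeds. This is a minimal sketch; the exact set of layouts SFTPGo accepts is an assumption here.
```go
package main

import (
	"fmt"
	"time"
)

// parseModTime tries a list of accepted layouts until one parses successfully.
func parseModTime(value string) (time.Time, error) {
	layouts := []string{time.RFC1123, time.RFC1123Z, time.RFC850, time.ANSIC, time.RFC3339}
	var lastErr error
	for _, layout := range layouts {
		t, err := time.Parse(layout, value)
		if err == nil {
			return t, nil
		}
		lastErr = err
	}
	return time.Time{}, lastErr
}

func main() {
	t, err := parseModTime("Mon, 02 Jan 2006 15:04:05 GMT")
	fmt.Println(t, err)
}
```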
Nicola Murino
824a70b22d
Docker Alpine: update to 3.18
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-16 18:23:41 +02:00
Nicola Murino
cea70d5d6b
update security policy
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-16 18:17:35 +02:00
Nicola Murino
adad8e658b
osfs: add optional buffering
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-16 18:08:14 +02:00
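The optional buffering mentioned above boils down to coalescing small writes in memory before they reach the disk. A generic sketch using the standard library follows; the buffer size and wiring are illustrative assumptions, not SFTPGo's actual implementation.
```go
package main

import (
	"bufio"
	"log"
	"os"
)

func main() {
	f, err := os.Create("upload.tmp")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Optional buffering: small writes are coalesced before hitting the disk.
	w := bufio.NewWriterSize(f, 1<<20) // 1 MiB buffer (illustrative size)
	if _, err := w.WriteString("chunk of uploaded data\n"); err != nil {
		log.Fatal(err)
	}
	// Flush the buffer before closing the file so no data is lost.
	if err := w.Flush(); err != nil {
		log.Fatal(err)
	}
}
```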
Nicola Murino
e10487ad57
EventManager: improve automatic detection of JSON body
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-12 19:22:50 +02:00
Nicola Murino
4eded56d5f
add support for log events
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-12 18:34:59 +02:00
Krasimir Popov
43d011f125 Fixing doubled "the" typo in README
Signed-off-by: Krasimir Popov <kjpopovbg@gmail.com>
2023-05-08 15:50:28 +02:00
Daniel Hammer
a292044501 Aligned help example with v2.5.0 output
Signed-off-by: Daniel Hammer <daniel.hammer+oss@gmail.com>
2023-05-06 13:11:48 +02:00
Nicola Murino
05c54614b2
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-05 19:12:50 +02:00
Nicola Murino
32020e236f
set version to 2.5.0
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-05-03 13:07:48 +02:00
Nicola Murino
b9cf6e5083
Add the link to the new Azure offer for Windows
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-26 13:53:05 +02:00
Nicola Murino
ee5b7290a0
EventManager: add more debug logs for HTTP actions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-25 20:27:40 +02:00
Nicola Murino
fd6a44c562
OpenAPI: fix filesystem action types enum
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-23 14:46:09 +02:00
Nicola Murino
8d12872608
Docker: try to add CAP_NET_BIND_SERVICE to the binary
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-19 13:41:59 +02:00
Nicola Murino
712f2053a4
REST API dumpdata: allow to specify the resources to dump
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-18 18:11:23 +02:00
Nicola Murino
54462c26f2
WebAdmin: display undefined js objects as empty string
This is probably something that changed in the recent datatables update;
before, it was handled automatically

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-16 15:38:49 +02:00
Nicola Murino
d0a171558d
fix test cases for system commands
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-15 16:09:53 +02:00
Nicola Murino
1ade850557
add a log to better debug a randomly failing test case
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-15 15:08:42 +02:00
Nicola Murino
466f2e88b3
WebClient: fix rename
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-15 14:16:26 +02:00
Nicola Murino
3cb53b2c33
fix cross folder copy
also update css/js deps and other minor changes

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-13 18:23:42 +02:00
Nicola Murino
6279216c2e
webdav: fix GET as PROPFIND if a prefix is defined
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-09 20:17:37 +02:00
Nicola Murino
5219c1fdd1
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-08 19:00:05 +02:00
Nicola Murino
4294659785
try harder to convert transfer errors into well-known error types
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-08 14:55:04 +02:00
Nicola Murino
f03f1b0156
improve test cases coverage
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-08 10:01:48 +02:00
Nicola Murino
184b99d500
user: add a field to indicate whether the password is set
A structure similar to the one used for secrets would be better,
but we don't want to break backwards compatibility.

Also document that omitting the password field in the request body
will preserve the current password when updating a user using the
REST API. Added a test case for this.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-06 18:22:09 +02:00
Nicola Murino
74f05e5305
EventManager: check the parent directory before creating a zip
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-03 18:53:13 +02:00
Nicola Murino
aefa7f77c2
add a link to the Terraform provider
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-04-02 18:48:56 +02:00
Nicola Murino
084d4109b8
WebAdmin: ensure to sanitize data before rendering
Thanks to Polina Zvorykina, VK for reporting this issue

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-28 12:28:38 +02:00
Nicola Murino
b60d3f680e
user as JSON: rename 2fa_protocols to two_factor_protocols
This is a breaking change, but it is necessary to make JSON serialization of
users more compatible.
For example, Terraform does not allow JSON fields starting with numbers

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-26 15:57:53 +02:00
Nicola Murino
ee90bfb506
add unixcrypt build tag
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-26 10:33:30 +02:00
Nicola Murino
e17068a76f
postgres provider: add support for load balancing
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-25 09:29:13 +01:00
Nicola Murino
354fc9b3d6
OIDC: allow to extract custom fields from sub-structs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-23 18:15:07 +01:00
Nicola Murino
e29f6857db
EventManager: add IDP login trigger and check account action
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-22 19:02:54 +01:00
Nicola Murino
40344ec0ff
CI FreeBSD: compile and run tests using the same user
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-17 12:51:37 +01:00
Nicola Murino
783dff369b
CI FreeBSD: install git
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-15 20:20:52 +01:00
Nicola Murino
72e0325d05
run test cases also on FreeBSD
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-15 19:44:45 +01:00
Nicola Murino
2710207779
update jquery, go deps, actions/setup-go to v4
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-15 18:44:08 +01:00
Nicola Murino
b719d03ebe
WebAdmin: improve fs config layout
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-12 15:08:32 +01:00
Nicola Murino
84396343da
fix some codeql warnings
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-10 17:30:06 +01:00
Nicola Murino
14242b59a2
oidc docs: add env vars config
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-09 18:58:36 +01:00
Nicola Murino
dad346cee8
add codeql
update deps

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-05 16:38:29 +01:00
Nicola Murino
04282f94a4
update js and css deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-04 16:14:16 +01:00
Nicola Murino
0423e8f157
httpd: generate defender events for failed 2fa and password resets
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-04 13:55:48 +01:00
Nicola Murino
bdcee06665
WebClient: remove the default upload size limit
Users who want a limit can still set it.
By default, we want to allow uploads of any size

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-02 18:26:21 +01:00
Nicola Murino
ae90ed2ba0
Docker: try again to add armv7 support
Let's see if the actions are more stable now

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-02 18:11:26 +01:00
Nicola Murino
4ba3ae876d
allow to set password strength at user/group level
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-02 09:11:30 +01:00
Nicola Murino
662164c7ff
smtp: require templates only if a server is configured or in service mode
This regression was introduced after recent changes to allow setting the SMTP
settings from the WebAdmin UI.

Fixes #1217

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-03-01 18:31:02 +01:00
Nicola Murino
fad6af11e5
don't expose error messages from pre-actions and post connect hooks
always return a generic error instead to avoid leaking internal info

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-28 18:01:09 +01:00
Nicola Murino
dba088daed
printf: replace %#v with the more explicit %q
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-27 19:19:57 +01:00
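For context on the formatting change above, %q prints an explicitly quoted string while %#v prints a Go-syntax representation; for plain strings the output looks the same, but %q states the intent directly. A quick demonstration:
```go
package main

import "fmt"

func main() {
	name := "back ups/file.txt"
	fmt.Printf("%#v\n", name) // "back ups/file.txt" (Go-syntax representation)
	fmt.Printf("%q\n", name)  // "back ups/file.txt" (explicitly quoted string)
}
```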
Nicola Murino
a23fdea9e3
ftpd: allow hostnames as passive IP
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-27 19:19:50 +01:00
Nicola Murino
561976bcd0
WebClient: return proper status code for http.MaxBytesError
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-27 11:03:05 +01:00
Nicola Murino
874776bd12
also capture logs for pre-login and check-password commands
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-26 15:15:34 +01:00
Felix Eckhofer
ec67b67e9e Send output from external_auth_hook to logs
Signed-off-by: Felix Eckhofer <felix@eckhofer.com>
2023-02-26 07:39:34 +01:00
Felix Eckhofer
71f691b208 Fix potential ldap injection
Signed-off-by: Felix Eckhofer <felix@eckhofer.com>
2023-02-26 07:10:58 +01:00
Nicola Murino
e0cbb966f0
eventmanager: skip password expiration check for expired users
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-25 16:33:39 +01:00
Nicola Murino
df9d47900a
eventmanager: add user/folders as a comma-separated string in errors
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-25 15:26:38 +01:00
Nicola Murino
b8496c4d6e
eventmanager: add user expiration check
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-25 13:06:09 +01:00
Nicola Murino
b0cfaf189c
portable mode: allow to read the password from a file
Fixes #1206

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-25 10:24:23 +01:00
Nicola Murino
195cb9f081
enable keyboard interactive authentication by default
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-24 20:22:32 +01:00
Nicola Murino
9a10740218
allow ACME HTTP-01 challenge with https redirect from port 80
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-24 20:08:14 +01:00
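The idea behind allowing the HTTP-01 challenge alongside an HTTPS redirect is that requests under /.well-known/acme-challenge/ must keep being served over plain HTTP on port 80 while everything else is redirected. A hedged sketch (not SFTPGo's actual code) of that routing:
```go
package main

import (
	"net/http"
	"strings"
)

func main() {
	// Stand-in for the ACME HTTP-01 solver; a real setup serves the challenge token here.
	challenge := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("challenge token placeholder"))
	})

	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.URL.Path, "/.well-known/acme-challenge/") {
			challenge.ServeHTTP(w, r) // keep serving challenges over plain HTTP
			return
		}
		// Everything else is redirected to HTTPS.
		http.Redirect(w, r, "https://"+r.Host+r.URL.RequestURI(), http.StatusMovedPermanently)
	})
	_ = http.ListenAndServe(":80", h)
}
```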
Nicola Murino
7bcd79a70a
telemetry: improve test cases
remove an unnecessary nil check in tlsutils, added as a workaround
to make telemetry test cases work

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-24 11:05:46 +01:00
Nicola Murino
beb8822df4
examples: update deps
to silence dependabot alerts

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-23 19:25:30 +01:00
Nicola Murino
8805d85377
configs: add ACME section
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-23 19:25:20 +01:00
Nicola Murino
fcf9a8c673
scheduler: disable verbose logs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-21 18:18:24 +01:00
Nicola Murino
2c1319985d
sql providers: remove unnecessary []byte to string conversion
always check affected rows for updates

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-20 18:14:02 +01:00
Nicola Murino
a3fff56da5
WebAdmin: add configs section
Setting configurations is an experimental feature and is not currently
supported in the REST API

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-19 19:03:45 +01:00
Nicola Murino
14961a573f
examples: update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-18 13:46:06 +01:00
Nicola Murino
78cd5d8eba
groups: add expiration date override
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-13 19:32:36 +01:00
Nicola Murino
2df2803a37
ipfilter plugin: add protocol
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-13 13:45:45 +01:00
Nicola Murino
7738faa040
events: add elapsed to UI and exports
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-13 12:58:21 +01:00
Nicola Murino
157d1db0b1
fs events: add elapsed field to notifications
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-12 18:56:53 +01:00
Nicola Murino
7e85356325
WebClient shares: replace basic auth with a login form
basic auth will continue to work for REST API

Fixes #1166

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-12 08:29:53 +01:00
Nicola Murino
a3d0cf5ddf
fix lint errors
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-10 19:59:03 +01:00
Nicola Murino
04ab8e72f6
WebUI: make error messages user dismissible
Fixes #1171

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-10 18:07:23 +01:00
Nicola Murino
e0c3a13ac5
azblob: update to the latest SDK
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-09 13:04:12 +01:00
Nicola Murino
1b1745b7f7
move IP/Network lists to the data provider
this is a backward incompatible change; all previous file-based IP/network
lists will no longer work

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-09 09:33:33 +01:00
Nicola Murino
2412a0a369
add Dendi to the sponsors section, thank you!!!
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-02-02 18:12:21 +01:00
Nicola Murino
1e14d006b1
defender: set score_no_auth to 0 as default
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-29 18:00:27 +01:00
Nicola Murino
27c4ffd663
sftpd: fix duplicate defender error introduced in the previous commit
improve the defender test cases by verifying the expected score

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-25 21:57:27 +01:00
Nicola Murino
c0fe08b597
defender: allow to set a different score for "no auth tried" events
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-25 18:49:03 +01:00
Nicola Murino
5550a5d2c0
update users: also disconnect users from remote nodes when requested
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-24 18:53:34 +01:00
Nicola Murino
2066ad7c83
WebDAV: allow to define custom MIME type mappings
Fixes #1154

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-23 18:43:25 +01:00
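A custom MIME type mapping simply associates a file extension with a content type so WebDAV clients see the intended type. A minimal sketch using the standard library's mime package; how SFTPGo stores and applies its own mapping table is not shown here.
```go
package main

import (
	"fmt"
	"mime"
)

func main() {
	// Register a custom extension-to-MIME mapping, then resolve it by extension.
	_ = mime.AddExtensionType(".heic", "image/heic")
	fmt.Println(mime.TypeByExtension(".heic")) // image/heic
}
```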
Nicola Murino
61199172d0
add support for monitoring and reloading externally provided TLS certs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-22 18:31:14 +01:00
Nicola Murino
3ce4d04b27
EventManager: support placeholders within URL paths
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-22 08:46:58 +01:00
Nicola Murino
707729ee61
acme: allow to separate multiple domains with spaces
This change is required to be able to set multiple domains for the same
certificate using env vars.
The change is backward compatible for general use cases but may be
backward incompatible in some edge cases, for example:

- "sftpgo.com,www.sftpgo.com" will work as before
- "sftpgo.com, www.sftpgo.com" will not work anymore

Check the logs to see if you are affected and rename the certificate and key
to fix the issue

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-21 18:00:23 +01:00
Nicola Murino
7b5bebc588
EventManager: add "on-demand" trigger
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-21 15:41:24 +01:00
Nicola Murino
53f17b5715
allow to disable event rules
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-19 18:33:04 +01:00
Nicola Murino
496c8bc785
allow to start if only httpd service is enabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-17 18:22:04 +01:00
Nicola Murino
396d67bb2c
web: add spellcheck hint to some more fields
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-16 19:45:12 +01:00
Daniel Hammer
bbebd9b163 "Spell-Jacking" mitigation ~ prevent sensitive data leak from spell checker.
@see https://www.otto-js.com/news/article/chrome-and-edge-enhanced-spellcheck-features-expose-pii-even-your-passwords

Signed-off-by: Daniel Hammer <daniel.hammer+oss@gmail.com>
2023-01-16 19:23:43 +01:00
Nicola Murino
c8d94f0a27
add a health check command
Useful in restricted environments where commands such as curl
are not available.

Fixes #1129

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-16 18:54:42 +01:00
Nicola Murino
8be8343fee
README: fix link to Fs interface
Fixes #1142

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-15 15:28:55 +01:00
Nicola Murino
f3995901e3
OpenAPI: fix group settings documentation
the OpenAPI docs should really be improved, but nobody seems interested
enough to sponsor this work

Fixes #1141

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-15 15:28:52 +01:00
Nicola Murino
f2618e7de6
switch from go-simple-mail to go-mail
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-15 15:28:31 +01:00
Nicola Murino
6afbd77fd5
update css and js deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-07 18:11:46 +01:00
Nicola Murino
93e5cb36df
copy: use server side copy if available
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-07 16:28:46 +01:00
Nicola Murino
09dea57850
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-07 13:07:41 +01:00
Nicola Murino
8cad436421
conditional support for recursive renaming for cloud providers
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-06 12:33:50 +01:00
Nicola Murino
f0dedbfabf
eventmanager: auto-create destination folder for renames
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-03 18:13:01 +01:00
Nicola Murino
51f0ded222
update test certificates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-03 11:48:08 +01:00
Jon Bendtsen
6b555cf0d8 metrics only available in telemetry server
I do not know which version removed /metrics from the HTTP server, but it does not seem to be available in 2.4.2, so I updated the metrics documentation to reflect this and replaced it with links to the telemetry configuration.

Signed-off-by: Jon Bendtsen <github@jonb.dk>
2023-01-03 10:27:15 +01:00
Nicola Murino
0190d0b849
update Copyright year
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-03 10:18:30 +01:00
Nicola Murino
9977c64459
docs eventmanager: update index
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-02 19:22:26 +01:00
Nicola Murino
20706e45b0
docs: basic example for a Recycle Bin function
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-02 18:51:45 +01:00
Jon Bendtsen
53864fd8c1 Add warning of docker grace vs. SFTPGo grace
Docker's default grace period is only 10 seconds, so a warning was added to alert users to cases where their SFTPGO_GRACE_TIME is larger than the Docker grace period
2023-01-02 17:09:59 +01:00
Nicola Murino
7fa0959af4
eventmanager: add support for global star path matching
This introduces a backward incompatible change for filesystem path matching
in the Event Manager: patterns like "*.txt" will no longer match any
file with the "txt" suffix; you need to change them to "/**/*.txt".

Also change the pre-delete behaviour: now, if an error is returned, the client
will get a permission denied error, the same as for the other pre-*
actions. Previously it was not possible to deny deletion of a file.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-02 15:59:00 +01:00
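The backward incompatible matching change above follows from standard glob semantics: a single "*" does not cross "/" separators once paths are matched from the filesystem root, which is why "*.txt" must become "/**/*.txt". A quick demonstration with path.Match (SFTPGo's actual matcher may differ):
```go
package main

import (
	"fmt"
	"path"
)

func main() {
	ok, _ := path.Match("*.txt", "notes.txt")
	fmt.Println(ok) // true

	ok, _ = path.Match("*.txt", "/docs/notes.txt")
	fmt.Println(ok) // false: "*" does not match across "/", so a "/**/*.txt"-style pattern is needed
}
```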
Nicola Murino
2611dd2c98
eventmanager: add support for pre-* actions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2023-01-01 17:59:41 +01:00
Nicola Murino
6cebc037a0
eventmanager: check disk quota before executing the compress action
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-31 16:41:32 +01:00
Nicola Murino
15ad31da54
WebClient: add copy action
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-30 19:30:16 +01:00
Nicola Murino
fe9904a54d
docs full-configuration: improve formatting
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-28 18:51:25 +01:00
Nicola Murino
831851c0c3
change the default value for naming rules
WebAdmin does not work properly if trimming of trailing and leading white
spaces is disabled

Fixes #1119

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-27 18:57:48 +01:00
Nicola Murino
ea4c4dd57f
eventmanager: add copy action
refactor sftpgo-copy and sftpgo-remove commands

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-27 18:51:53 +01:00
Nicola Murino
e5a8220b8a
REST API: add location header to 201 responses
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-23 13:08:04 +01:00
Jon Bendtsen
ed949604d3
Added graceful shutdown description to docker (#1112)
* Added graceful shutdown description to docker

Describing how to use the graceful shutdown period in a Docker SFTPGo container and giving some examples of what happens with both existing and new connections.

Signed-off-by: Jon Bendtsen <github@jonb.dk>
2022-12-23 12:11:15 +01:00
Nicola Murino
0841c7d7bd
REST API: remove merging of fields on updates
we use the PUT verb, not PATCH. We keep merging only to preserve
hidden/encrypted fields.

This is a backward incompatible change, but it is necessary to avoid unexpected
issues.
You have to pass complete objects on updates.

Fixes #1088

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-23 09:36:20 +01:00
Nicola Murino
e17975ed7d
dataprovider: include port in node name and make it a hash
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-20 16:40:32 +01:00
Nicola Murino
f4eb9e7cd6
OpenAPI: set charset also for text/plain responses
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-19 18:38:23 +01:00
Nicola Murino
1085f9e5ec
httpfs OpenAPI: added charset=utf-8 to application/json content type
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-19 18:35:06 +01:00
Jon Bendtsen
37eceffed9
OpenAPI: added charset=utf-8 to application/json content type (#1108)
* Added charset=utf-8 to application/json content type

This change is linked to https://github.com/drakkan/sftpgo/issues/1101 and should partially alleviate the need to change the content type in the files generated by openapi-generator-cli

Signed-off-by: Jon Bendtsen <github@jonb.dk>

* extra newline

Signed-off-by: Jon Bendtsen <github@jonb.dk>

* This change is linked to #1101 and should partially alleviate the need to change the content type in the files generated by openapi-generator-cli.

Signed-off-by: Jon Bendtsen <github@jonb.dk>
2022-12-19 18:30:27 +01:00
Nicola Murino
6270b2c2d3
eventmanager: log a get task error only when required
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-19 18:10:40 +01:00
Nicola Murino
ad5bd18dd0
CI: add nosqlite build tag when CGO is disabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-18 15:21:32 +01:00
Nicola Murino
0296e0cafa
gcsfs: allow to customize upload part size/time
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-18 11:51:46 +01:00
Nicola Murino
147ad3b230
respect token validation mode for CSRF header
Fixes #1104

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-16 19:14:56 +01:00
Nicola Murino
2da3eabc12
eventmanager: add password notification check action
this action allow to send an email notification to users whose
password is about to expire

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-16 18:51:29 +01:00
Nicola Murino
ac91170d65
S3: improve "directories" detection
Fixes #1097

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-13 08:55:01 +01:00
Nicola Murino
f13b901f2d
local fs: fixed paths validation for some Windows specific edge cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-12 10:40:04 +01:00
Nicola Murino
c23c73ed34
update OpenAPI definition
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-11 17:53:41 +01:00
Nicola Murino
ad5d657a1a
add support for password policies
you can now set a password expiration and the password change requirement

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-11 17:15:34 +01:00
Nicola Murino
e2bebc99d1
AzureBlobs: update SDK to v0.6.1
Remove path escape for blob names, this issue is now fixed within
the SDK

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-10 09:44:14 +01:00
Nicola Murino
926dcbbc63
add a CLI command to reset admin passwords
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-09 18:28:16 +01:00
Nicola Murino
a7f9581d99
provider events: add support for omit_object_data search param
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-08 10:02:12 +01:00
Nicola Murino
75d911f29e
WebAdmin: allow to search and export event logs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-07 18:47:38 +01:00
Nicola Murino
91e4a54385
fix build with some features disabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-04 08:44:45 +01:00
Nicola Murino
221a4878aa
eventmanager: allow to filter based on role name
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-03 17:47:43 +01:00
Nicola Murino
2ea43647ed
ftpd: check the TYPE parameter in a case-insensitive manner
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-03 13:09:25 +01:00
Nicola Murino
04bdd3a5e4
docker: bump alpine to 3.17
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-03 12:30:53 +01:00
Nicola Murino
1f9cf194fe
add role to events
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-12-03 11:45:27 +01:00
Nicola Murino
e87118d2a8
allow WebClient login with multi-step auth enabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-29 18:43:48 +01:00
Nicola Murino
fe888729f9
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-27 12:15:56 +01:00
Nicola Murino
d7cd2ac803
add CODEOWNERS file
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-24 18:53:59 +01:00
Nicola Murino
ba9fe38b8b
azblob: handle dirs metadata
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-24 18:14:24 +01:00
Nicola Murino
7b00fe3d5a
update nfpm to 2.22.1
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-20 15:23:45 +01:00
Nicola Murino
fc1ba36ae5
fix SeaweedFS rename compatibility
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-20 13:06:58 +01:00
Nicola Murino
2290137868
WebDAV: add support for X-OC-Mtime header
it is used by Nextcloud compatible clients to set the modification time

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-19 19:39:28 +01:00
Nicola Murino
6ebe7691db
WebClient: add drag and drop upload UI
thanks to @wooneusean for the help

Fixes #951

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-19 12:31:03 +01:00
Nicola Murino
29d1993a3b
Docker: add a default moduli file
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-18 18:13:03 +01:00
Nicola Murino
81c693de4e
Ignore denied patterns for stat on "/"
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-18 18:12:37 +01:00
Nicola Murino
2017cb60e9
Per-directory permissions: add wildcards support
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-18 18:12:04 +01:00
Nicola Murino
ec4cc33364
WebAdmin users form: trim spaces from some form fields
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-17 18:26:19 +01:00
Nicola Murino
a22282f275
add support for DHGEX
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-17 18:15:53 +01:00
Nicola Murino
67de4c9c07
check more mime types for SeaweedFS dirs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-16 21:38:27 +01:00
Amir.h Yeganemehr
6591769a07 Handle empty directories with mimetype
Signed-off-by: Amir.h Yeganemehr <yeganemehr@jeyserver.com>
2022-11-16 19:47:22 +01:00
Nicola Murino
5a222807b7
add roles
Fixes #837

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-16 19:04:50 +01:00
Nicola Murino
a9207857cf
webdav: add a test case for PROPFIND with infinity Depth
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-06 07:33:56 +01:00
Nicola Murino
37ffa3b55a
portable mode: remove support for services discovery via multicast DNS
The library used for mDNS doesn't seem well maintained and I think this
feature is rarely used

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-05 18:32:36 +01:00
Nicola Murino
048591553a
allow to set a default expiration for newly created users
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-05 18:01:24 +01:00
Nicola Murino
33bfd61a0c
plugins: fix hash check
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-04 20:25:01 +01:00
Nicola Murino
965d059400
WebUI: try harder to prevent browsers from auto-filling in password fields
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-03 19:57:43 +01:00
Nicola Murino
676286182a
webdav: always open files for reading in lazy mode
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-03 08:31:40 +01:00
Nicola Murino
3b2002d9ef
shared providers: allow to immediately re-add soft-deleted event rules
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-01 17:39:53 +01:00
Nicola Murino
9d7e30807d
WebDAV: make test cases more robust
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-01 13:42:42 +01:00
Nicola Murino
91fae5c4d4
shared providers: allow to immediately re-add soft-deleted users
there is no need to wait for cache updates

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-01 12:53:08 +01:00
Nicola Murino
e3e85867b1
sftpfs: reuse connections
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-11-01 12:22:54 +01:00
Nicola Murino
5618b95372
improve some docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-30 08:34:16 +01:00
Nicola Murino
bf45d04600
eventmanager: add placeholder to get the parent directory
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-29 15:49:24 +02:00
Nicola Murino
80244bd83b
eventmanager: allow to access the backup file
so it can be used in email and other actions

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-29 14:04:31 +02:00
Nicola Murino
9a9e7d1a7f
squash database migrations
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-28 14:28:37 +02:00
Nicola Murino
6f422c3d8b
WebClient: make folder deletion recursive
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-27 08:27:44 +02:00
Nicola Murino
222f0c735b
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-23 09:20:00 +02:00
Nicola Murino
63bf8eb1a1
set version to 2.4.0
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-23 04:47:41 +02:00
Nicola Murino
db0e58ae7e
Add support for graceful shutdown
Fixes #1014

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-22 11:56:41 +02:00
Nicola Murino
87045284cc
make connections lookups constant time
Performance improves if there are many active connections.
For a few connections there is a small (unnoticeable) performance
degradation

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-20 18:17:13 +02:00
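The constant-time lookup described above is the classic switch from scanning a slice of active connections to indexing a map keyed by connection ID. A generic sketch; the type and method names are illustrative, not SFTPGo's.
```go
package main

import "fmt"

// conn and registry are illustrative stand-ins, not SFTPGo's types.
type conn struct {
	id   string
	user string
}

type registry struct {
	byID map[string]*conn // map keyed by connection ID: lookups are O(1)
}

func newRegistry() *registry {
	return &registry{byID: make(map[string]*conn)}
}

func (r *registry) add(c *conn)         { r.byID[c.id] = c }
func (r *registry) get(id string) *conn { return r.byID[id] }
func (r *registry) remove(id string)    { delete(r.byID, id) }

func main() {
	r := newRegistry()
	r.add(&conn{id: "SFTP_1", user: "alice"})
	fmt.Println(r.get("SFTP_1").user) // alice, without scanning all connections
	r.remove("SFTP_1")
}
```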
Nicola Murino
f3ee20980a
fix build in bundle mode
added a bundle mode build to CI to prevent this from happening again
in the future

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-20 07:58:34 +02:00
Nicola Murino
54f1946aba
OIDC: allow to skip JWT signature validation
It's intended for special cases where providers, such as Azure,
use the "none" algorithm

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-19 18:38:09 +02:00
Nicola Murino
47842ae614
script based hooks: don't propagate global env vars
env vars must be explicitly set

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-19 09:29:40 +02:00
Nicola Murino
7e0b62b703
update swagger-ui, codemirror, video-js
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-16 18:29:42 +02:00
Nicola Murino
15b4194e8f
event rules: allow to set min/max file size using "human" notation
10MB or 1GB instead of the size in bytes

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-16 15:28:47 +02:00
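Parsing the "human" notation mentioned above means mapping a unit suffix to a multiplier before converting to bytes. A small self-contained sketch under the assumption of binary multipliers; SFTPGo's real parser and accepted units may differ.
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseHumanSize converts values like "10MB" or "1GB" to a byte count.
func parseHumanSize(s string) (int64, error) {
	units := []struct {
		suffix string
		factor int64
	}{{"GB", 1 << 30}, {"MB", 1 << 20}, {"KB", 1 << 10}, {"B", 1}}
	s = strings.TrimSpace(strings.ToUpper(s))
	for _, u := range units {
		if strings.HasSuffix(s, u.suffix) {
			n, err := strconv.ParseInt(strings.TrimSpace(strings.TrimSuffix(s, u.suffix)), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * u.factor, nil
		}
	}
	// No suffix: treat the value as a plain byte count.
	return strconv.ParseInt(s, 10, 64)
}

func main() {
	for _, v := range []string{"10MB", "1GB", "512"} {
		n, _ := parseHumanSize(v)
		fmt.Println(v, "=", n, "bytes")
	}
}
```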
Nicola Murino
5a199acbb2
howto: add event manager
add a groups section to the getting started guide.
Suggest preferring configuration with env vars instead of modifying
the default configuration file

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-16 08:26:03 +02:00
Nicola Murino
07b3f2f4d6
config: fix for slices with default values
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-14 16:45:20 +02:00
Nicola Murino
13ee236884
Allow to read env vars from files inside the "env.d" directory
This makes it easier to set environment variables on some operating systems.
Setting configuration options from environment variables is recommended if
you want to avoid the time-consuming task of merging your changes with the
default configuration file after upgrading SFTPGo

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-13 18:43:58 +02:00
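Conceptually, the env.d mechanism reads KEY=VALUE pairs from files in a directory and exports them before the configuration is loaded. An illustrative sketch follows; the real loader's handling of quoting, comments, and precedence is an assumption left out here.
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// loadEnvDir exports KEY=VALUE lines found in files inside dir.
func loadEnvDir(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		f, err := os.Open(filepath.Join(dir, e.Name()))
		if err != nil {
			return err
		}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue // skip blanks and comments
			}
			if k, v, ok := strings.Cut(line, "="); ok {
				os.Setenv(strings.TrimSpace(k), strings.TrimSpace(v))
			}
		}
		f.Close()
	}
	return nil
}

func main() {
	if err := loadEnvDir("env.d"); err != nil {
		fmt.Println("env.d not loaded:", err)
	}
}
```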
Nicola Murino
3822b7d3f7
workflows: replace deprecated set-output command
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-13 13:11:22 +02:00
Nicola Murino
2b2b69fb23
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-12 22:03:47 +02:00
Nicola Murino
4b4edef0ad
disable self connections by default
now that the event manager can create files, self connections may create
even more issues than before

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-12 18:12:12 +02:00
Nicola Murino
aa1e73326f
FTPD: fix APPE to new files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-12 11:37:31 +02:00
Nicola Murino
07012aa812
WebDAV: allow to set last modification time
This commit adds a minimal dead properties implementation

Fixes #1018

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-11 19:20:58 +02:00
Nicola Murino
0e54fa5655
cryptfs: fix quota for overwrites if upload fails
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-10 19:34:15 +02:00
Nicola Murino
3e44a1dd2d
eventmanager: add support for file/directory compression
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-10 18:53:58 +02:00
Nicola Murino
a417df60b3
azblob: use UUIDs as block IDs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-07 06:54:26 +02:00
Nicola Murino
2067c5c527
azblob: rename method to initialize from SAS URL
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-02 15:03:38 +02:00
Nicola Murino
8a43486730
postgres driver: add multi hosts support
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-02 12:43:26 +02:00
Nicola Murino
2636fedce8
node token: add/parse admin username
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-02 09:51:47 +02:00
Nicola Murino
a42e9ffa6b
azblob: add support for the latest SDK
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-01 14:04:53 +02:00
Nicola Murino
0e8c41bbd1
sftpd: fix relative symlinks handling
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-30 19:23:54 +02:00
Nicola Murino
1e21aa9453
add support for checking sha256crypt passwords
they will be converted to the configured password hashing algorithm after
the first user login

Fixes #1000

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-28 19:15:02 +02:00
Nicola Murino
f9eadd7f04
API data retention check: send CSV reports for email notifications
replace the HTML email with the same CSV report used in the
event manager

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-28 18:37:32 +02:00
Nicola Murino
04dc97072b
eventmanager: add metadata check
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-26 19:00:34 +02:00
Nicola Murino
ddda0b5ece
SQLite provider: remove code only used for shared providers
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-25 20:29:43 +02:00
Nicola Murino
76e89d07d4
add support for inter-node communications
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-25 19:48:55 +02:00
Nicola Murino
a538255034
httpclient: add leaf certificates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-23 17:49:42 +02:00
Nicola Murino
4ad2a9c1fa
WebClient: validate PDF files before rendering
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-22 20:41:28 +02:00
Nicola Murino
7ae9303c99
allow to disable REST API
Fixes #987

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-22 17:27:00 +02:00
Nicola Murino
6c7b3ac5bb
oidc: update user after token refresh
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-22 08:30:22 +02:00
Nicola Murino
bd294bb3cf
WebAdmin: allow to simplify the user page
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-21 19:36:08 +02:00
Nicola Murino
7349598b19
command hooks: allow to pass custom arguments
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-20 13:58:44 +02:00
Nicola Murino
7f19f9f39c
WebClient: allow partial download of shared files
each partial download will count as a share usage

Fixes #970

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-19 19:58:35 +02:00
Nicola Murino
f19691250d
zip downloads: make zip entries relative to the current dir when possible
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-19 17:06:42 +02:00
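Making zip entries relative to the current directory amounts to stripping the downloaded directory's virtual path prefix from each entry name when the entry lies below it. A tiny sketch of that idea (illustrative only):
```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// entryName returns the archive entry name relative to baseDir when the path
// lies below it, otherwise it falls back to the full virtual path.
func entryName(baseDir, virtualPath string) string {
	prefix := path.Clean(baseDir) + "/"
	if strings.HasPrefix(virtualPath, prefix) {
		return strings.TrimPrefix(virtualPath, prefix)
	}
	return virtualPath
}

func main() {
	fmt.Println(entryName("/docs", "/docs/reports/2022.pdf")) // reports/2022.pdf
	fmt.Println(entryName("/docs", "/other/file.txt"))        // /other/file.txt
}
```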
Nicola Murino
554a1cb1f4
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-18 13:08:48 +02:00
Nicola Murino
e54237ff70
allow a client if its IP is both allowed and denied
this allows you to define a group deny policy that can be overridden
on a per-user basis.

This is a backward incompatible change

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-15 19:51:17 +02:00
Nicola Murino
e58709c822
WebAdmin: allow to specify quota and upload size in human format
For example 1 GB

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-14 21:18:32 +02:00
Nicola Murino
5eca73a399
give some hints if we fail to load HTML templates
Fixes #986

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-14 12:25:19 +02:00
Nicola Murino
f8a19f747d
WebUI: improve HTML escaping
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-13 19:16:07 +02:00
Nicola Murino
ea3c1d7a3b
WebAdmin: allow to pre-select groups on add user page
The admin will still be able to choose different groups

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-13 18:04:27 +02:00
Nicola Murino
bd585d8e52
CI: add commit info in vendored sources
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-08 17:32:28 +02:00
Nicola Murino
a40fa93d7b
CI: use the shortened 8-digit commit hash everywhere
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-08 13:44:14 +02:00
Nicola Murino
4498bbf2e4
CI: use Docker to build x86_64 Linux packages
therefore Linux packages are compiled with Docker for all supported
architectures

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-08 09:42:36 +02:00
Nicola Murino
63e3891808
WebClient/HTTP API: ensure to check home dir, when needed, in multi-node setups
Behind a load balancer with no sticky sessions enabled, it is not enough to check
the home dir only when the client logs in

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-07 16:23:56 +02:00
Nicola Murino
3ebdfa9b2d
data providers: allow to disable SNI for TLS connections
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-07 14:31:50 +02:00
Nicola Murino
8debde842c
eventmanager: improve docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-06 19:46:06 +02:00
Nicola Murino
3e5cf56460
eventmanager: add data retention reports
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-06 19:09:23 +02:00
Nicola Murino
f264b005ff
event rules: allow filtering based on group names
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-04 17:48:09 +02:00
Nicola Murino
bf76b0b158
docs external auth: clarify the meaning of the empty response from the hooks
Fixes #961

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-03 19:46:08 +02:00
Nicola Murino
c2a65a9a74
http actions: add multipart support
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-03 16:29:07 +02:00
Nicola Murino
3267a50ae3
MFA: allow recovery codes only if two-factor auth is enabled
Fixes #965

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-31 09:29:39 +02:00
Nicola Murino
f0839519a8
FTP: always generate a defender event if the client does not authenticate
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-30 17:14:57 +02:00
Nicola Murino
95e9106902
use the new atomic types introduced in Go 1.19
we depend on Go 1.19 anyway

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-30 15:47:41 +02:00
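The Go 1.19 atomic types referenced above replace hand-rolled sync/atomic calls on plain integers with typed values whose zero value is ready to use. A minimal before/after comparison:
```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	// Go 1.19 style: a typed atomic counter, safe to use from its zero value.
	var active atomic.Int64
	active.Add(1)
	fmt.Println(active.Load()) // 1

	// Pre-1.19 style: a plain int64 paired with package-level helpers.
	var legacy int64
	atomic.AddInt64(&legacy, 1)
	fmt.Println(atomic.LoadInt64(&legacy)) // 1
}
```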
Nicola Murino
da03f6c4e3
eventmanager commands: allow to pass custom arguments
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-30 12:37:18 +02:00
Nicola Murino
9e77cd1a26
clarify support policy
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-29 19:14:48 +02:00
Nicola Murino
56bf51277c
eventmanager placeholders: add StatusString and ErrorString
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-29 19:03:31 +02:00
Nicola Murino
37d98ca290
users: add a setting to set the default expiration for shares
Fixes #960

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-28 14:41:42 +02:00
Nicola Murino
9473dc3937
WebAdmin: fix saving email event actions without attachments
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-22 20:17:45 +02:00
Nicola Murino
6777008aec
eventmanager: allow to add attachments to email actions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-22 19:04:17 +02:00
Nicola Murino
3e8254e398
fs actions: add first upload/download
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-21 19:01:08 +02:00
Nicola Murino
9ddd2d3588
eventmanager: add path exists filesystem action
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-20 14:13:43 +02:00
Nicola Murino
57935f585c
eventmanager: allow to execute fs actions based on schedules
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-19 15:04:00 +02:00
Nicola Murino
2b463d61e3
use epoch timestamp instead of current timestamp for unknown modification times
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-16 17:59:13 +02:00
Nicola Murino
ced4206c5f
allow cross folder renaming if the underlying resource is the same
this was only allowed for the local filesystem before this change

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-15 21:39:04 +02:00
Nicola Murino
c86db09cd8
event manager: add Certificate renewal trigger
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-12 17:37:29 +02:00
Nicola Murino
194c3c13ac
event manager: add IP blocked trigger
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-11 20:09:53 +02:00
Nicola Murino
d65c00728a
docker: add a variant with official plugins included
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-11 11:27:35 +02:00
Nicola Murino
526f6e0f6b
cloud storage providers: remove head bucket requests
let's just assume the bucket exists on "stat" requests for the "/" path

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-11 08:31:51 +02:00
Nicola Murino
a61211d32c
OIDC: allow to get the role field from a sub-struct
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-10 21:42:58 +02:00
Nicola Murino
78f75cdcb9
eventmanager: don't fail if a directory to be created already exists
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-10 19:33:02 +02:00
Nicola Murino
4cd340e07f
eventmanager: add support for filesystem actions
Fixes #931

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-10 18:41:59 +02:00
Nicola Murino
890dde0e00
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-05 18:48:58 +02:00
Nicola Murino
b1efe8d0b5
eventmanager: add support for data retention checks
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-04 21:50:38 +02:00
Nicola Murino
71fff28d29
add Aledade to the sponsors section, thank you!!!
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-03 21:47:13 +02:00
Nicola Murino
6bfdf941bc
webdav: allow to disable the WWW-Authenticate header
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-02 19:06:49 +02:00
Nicola Murino
fdc10aa6c7
CORS: add support for more parameters
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-02 18:44:34 +02:00
Nicola Murino
455bb550ee
azblob: fix SAS URL with embedded container name
Fixes #944

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-01 21:32:40 +02:00
Nicola Murino
2a827544ef
allow to edit profile to users logged in via OIDC
Fixes #942

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-01 19:41:18 +02:00
Nicola Murino
9d2b5dc07d
refactor: move eventmanager to common package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-01 18:48:54 +02:00
Nicola Murino
3ca62d76d7
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-30 10:07:09 +02:00
Nicola Murino
00b9280834
docs: some improvements and clarifications
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-27 20:49:22 +02:00
Nicola Murino
ef0a3bc571
add support for anonymous users
Fixes #935

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-27 18:54:25 +02:00
Nicola Murino
e3c5cf981f
download as zip: improve filename
include username and also filename/directory name if the user downloads
a single file/directory

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-26 19:05:42 +02:00
Nicola Murino
ec5da8b4a5
ftpd: allow to require TLS on a per-user basis
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-26 18:51:39 +02:00
Nicola Murino
81de7d271e
add support for embedding templates and other static resources
This feature is disabled by default and can be enabled using the
"bundle" build tag

Fixes #823

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-24 20:02:37 +02:00
Nicola Murino
c8158e14e0
move SFTPGo package to the internal folder
SFTPGo is a daemon and command line tool, not a library.

The public APIs are provided by the SDK

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-24 16:18:54 +02:00
Nicola Murino
e96ae5ca51
add folders to data provider actions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-24 08:10:23 +02:00
Nicola Murino
e059197398
WebClient: show images as gallery
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-23 21:12:16 +02:00
Nicola Murino
a2e73228d2
initprovider: don't execute actions
we are not running as a service here

Fixes #932

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-23 19:38:15 +02:00
Nicola Murino
1470018054
web UI: allow to enable OIDC login and/or login forms
any combination is now supported

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-22 20:55:33 +02:00
Nicola Murino
e6bfbcd489
OIDC: allow to debug the received id_token
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-22 11:11:35 +02:00
Nicola Murino
a0bbcf6ebb
web client: add HTML5 player
See #914

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-21 18:42:22 +02:00
Nicola Murino
7f5a13d185
fix unused parameter lint warnings
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-19 23:28:33 +02:00
Nicola Murino
d5946da1e2
OIDC: allow to enable only OIDC login for Web UIs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-19 22:25:00 +02:00
Nicola Murino
21682d1c1d
add license header to source files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-17 20:16:00 +02:00
Nicola Murino
fd52475ae2
shared mode: ensure to clear webdav cache for deleted users
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-17 18:48:41 +02:00
Nicola Murino
55b47cf741
sftp realpath: resolve symlinks
Fixes #890

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-17 16:02:45 +02:00
Nicola Murino
e0ce2e2e8a
allow to customize the log level
The old log-verbose flag is not appropriate anymore.
You should now use the log-level flag to set your preferred log level.
The default level is "debug" as before; you can also set "info", "warn" or "error".

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-13 10:40:24 +02:00
Nicola Murino
8fc4971df1
ftpd: fix wildcards handling for backend with virtual dirs
Fixes #915

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-12 12:04:01 +02:00
Nicola Murino
20e8cb898a
always check root dir in multi node setups
Fixes #920

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-12 08:32:31 +02:00
Nicola Murino
b5894b257f
try to better highlight donations and sponsorships options ...
... and to better explain why they are required.

Please don't say "someone else will help the project, I'll just use it"

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-11 16:14:44 +02:00
Nicola Murino
cb517a3595
SFTPGo is now available on Elest.io
Purchasing from there will help keep SFTPGo a long-term sustainable project.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-11 09:39:01 +02:00
Nicola Murino
1b8f94c08f
add event manager
auto backup removed from setting. You can now schedule backups with
the event manager

Fixes #762

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-11 08:17:36 +02:00
Nicola Murino
e46051299f
s3: improve rename performance
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-30 18:22:58 +02:00
maximethebault
bf2dcfe307
S3: Fix timeout error when renaming large files (#899)
Remove AWS SDK Transport ResponseHeaderTimeout (finer-grained timeouts are already handled by the callers)
Lower the threshold for MultipartCopy (5GB -> 500MB) to improve copy performance and reduce the chance of hitting the single part copy timeout

Fixes #898

Signed-off-by: Maxime Thébault <contact@maximethebault.me>
2022-06-30 10:23:39 +02:00
Nicola Murino
719f6077ab
don't expose underlying errors to clients
log them and return a generic failure

Fixes #896

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-28 22:08:16 +02:00
Nicola Murino
101783ee86
config: fix replace from env vars for some sub lists
ensure that configuration from files is merged with configuration from env
vars for all the sub lists

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-28 19:44:12 +02:00
Nicola Murino
6843402d2e
fix get branding from env
Fixes #895

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-28 10:48:15 +02:00
Nicola Murino
88feda6bf9
clarify licensing
Fixes #891

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-27 18:04:24 +02:00
Nicola Murino
5c446ff645
subsystem mode: advertise the supported extensions
Fixes #889

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-25 18:05:13 +02:00
Nicola Murino
9a6b1a1315
Fix issues found in PR #887
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-25 10:42:17 +02:00
Andre Mainka
90009a649d
Allow OAuth Scope to be configured (#887)
Signed-off-by: BobSilent <andre_1@gmx.net>
2022-06-25 10:40:39 +02:00
Nicola Murino
8762628481
backup: include folders set on group
also fix sql tables prefix handling and add the sql prefix to CI

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-24 13:16:45 +02:00
Nicola Murino
a5e41c9336
S3: allow empty region
the region may be embedded within the endpoint for some S3 compatible
object storage

Fixes #884

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-20 19:55:01 +02:00
Nicola Murino
729f30aebf
allow to refuse an upload from a sync upload hook
Fixes #880

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-16 18:42:17 +02:00
Nicola Murino
1da213a6e3
CI: test resetprovider
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-15 20:08:31 +02:00
Nicola Murino
2b0b19da9e
add metrics for httpfs and sftpfs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-14 19:37:25 +02:00
Nicola Murino
686166f2ce
remove deprecated APIs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-14 18:30:57 +02:00
Nicola Murino
93ce593ed0
squash database migrations and remove the credentials_path setting
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-13 20:08:49 +02:00
Nicola Murino
6f4475ff72
httpfs: add support for UNIX domain sockets
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-12 18:29:49 +02:00
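A rough sketch of how a Go HTTP client can be pointed at a UNIX domain socket (this is not the httpfs implementation; the socket path and endpoint below are made up for illustration):

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // Dial every request over a UNIX domain socket instead of TCP.
        // "/tmp/httpfs.sock" and the "/v1/stat" endpoint are hypothetical.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", "/tmp/httpfs.sock")
                },
            },
        }
        // The host in the URL is ignored because the dialer above decides the target.
        resp, err := client.Get("http://unix/v1/stat?path=/")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }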
Nicola Murino
0b9a96ec6b
restore fast path for recursive permissions check and update some docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-12 12:04:48 +02:00
Nicola Murino
f0f5ee392b
OpenAPI schema: improve compatibility with some generators
Fixes #875

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-11 19:07:05 +02:00
Nicola Murino
dadaca141a
sql providers: remove prepared statements
preparing a statement without reusing it is useless

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-11 11:57:06 +02:00
Nicola Murino
7ab30099dd
add httpfs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-11 10:41:34 +02:00
Nicola Murino
3170991aa8
MySQL: groups is a reserved keyword since MySQL 8.0.2
add MySQL to CI; testing with MariaDB is not enough

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-10 18:08:15 +02:00
Nicola Murino
118744a860
parse IP proxy header also if listening on UNIX domain socket
Fixes #867

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-09 08:37:37 +02:00
Nicola Murino
fe6a3f2ce8
web UIs: fix date formatting on Safari
Fixes #869

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-09 07:37:50 +02:00
Nicola Murino
75efaa9741
APT and YUM repo are now available
This is possible thanks to the Oregon State University's free
mirroring service

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-09 07:30:09 +02:00
Nicola Murino
560e7f316a
allow to set an additional shared data path at build time
This is useful to simplify brew package

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-04 21:24:50 +02:00
Nicola Murino
eee5d74e87
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-04 19:29:46 +02:00
Nicola Murino
9ae473fcdc
set version to 2.3.0
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-04 05:01:48 +02:00
Nicola Murino
b774289c6d
change default value for naming_rules to 1
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-03 16:09:02 +02:00
Nicola Murino
ecf715880f
update docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-03 14:36:38 +02:00
Nicola Murino
b2e28fe3a2
groups: apply placeholders to the fs config of virtual folders
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-02 09:45:01 +02:00
Nicola Murino
cc2f23bd89
trim values for string lists which can be set as env vars
See #857

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-31 18:22:18 +02:00
Nicola Murino
7329cd804b
Fixes #855
update OpenAPI definition, add test cases, fix lint

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-30 19:01:12 +02:00
sunilke
84e3132ed1
Feat private key passphrase for sftpfs (#855)
Signed-off-by: Sunil Keswani <sunilke@zeta.tech>
2022-05-30 19:00:39 +02:00
Nicola Murino
f6b11c2d01
httpd/webdav: allow to configure trusted proxy header and depth
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-28 19:47:23 +02:00
Nicola Murino
32da923dfe
httpd: add a setting to customize tokens validation
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-28 13:28:50 +02:00
Nicola Murino
91dfa501f8
improve some docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-27 10:09:53 +02:00
Nicola Murino
7c724e18fe
add support for ACME compliant certificate authorities
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-27 07:39:55 +02:00
Nicola Murino
302f83c7a4
CI: fix for cockroach 22
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-24 11:31:49 +02:00
Nicola Murino
984ca1fb7e
web UIs: update js and css deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-23 19:14:39 +02:00
Nicola Murino
87f6a18476
web admin UI: add column visibility control to the groups table as well
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-22 19:19:14 +02:00
Nicola Murino
90c21458b8
OIDC: add support for implicit roles
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-22 14:38:25 +02:00
Nicola Murino
f536c64043
admin UI: allow to control columns visibility and ordering
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-22 11:45:49 +02:00
Nicola Murino
1a33b5bb53
allow different TLS certificates for each binding
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-21 16:34:47 +02:00
Nicola Murino
0ecaa862bd
web UIs: allow to replace the default CSS
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-21 11:05:58 +02:00
Nicola Murino
751946f47a
allow to customize timeout and env vars for program based hooks
Fixes #847

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-20 19:30:54 +02:00
Nicola Murino
796ea1dde9
allow to store temporary sessions within the data provider
so we can persist password reset codes, OIDC auth sessions and tokens.
These features will now also work in multi-node setups without sticky
sessions

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-19 19:49:51 +02:00
Tim Birkett
a87aa9b98e
feat: make MFA status visible in WebAdmin (#844)
Signed-off-by: Tim Birkett <tim.birkett@sainsburys.co.uk>
2022-05-17 19:27:12 +02:00
Nicola Murino
9abd186166
external auth http hook: properly serialize the user in the POST body
For historical reasons we send the JSON-serialized user as a string field.
I initially copied the code used in the script hook, where converting the
JSON user to a string is appropriate.

After some time I noticed this error. I know that changing it now might
break existing external authentication hooks, but we cannot continue with
this mistake: new users are surprised by this behavior, sorry.

Fixes #836

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-15 18:26:07 +02:00
Nicola Murino
18d0bf9dc3
execute db migrations holding a database-level lock
so migrations cannot be executed concurrently if you run them from multiple
SFTPGo instances at the same time.

CockroachDB doesn't support database-level locks

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-15 15:25:12 +02:00
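A sketch of the general technique on PostgreSQL, assuming a hypothetical DSN and lock key (not the actual migration code): a session-level advisory lock held for the duration of the migrations keeps concurrent instances out.

    package main

    import (
        "context"
        "database/sql"
        "log"

        _ "github.com/lib/pq" // any database/sql PostgreSQL driver
    )

    func main() {
        ctx := context.Background()
        // Hypothetical DSN, for illustration only.
        db, err := sql.Open("postgres", "postgres://sftpgo:secret@localhost/sftpgo?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Advisory locks are per session, so keep a dedicated connection for the
        // whole migration and release the lock on that same connection.
        conn, err := db.Conn(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        const lockKey = 101 // arbitrary application-defined key
        if _, err := conn.ExecContext(ctx, "SELECT pg_advisory_lock($1)", lockKey); err != nil {
            log.Fatal(err)
        }
        defer conn.ExecContext(ctx, "SELECT pg_advisory_unlock($1)", lockKey)

        // ... run schema migrations here, one instance at a time ...
        log.Println("migrations executed while holding the advisory lock")
    }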
Nicola Murino
c9bd08cf9c
UI branding: use the short name on the login pages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-15 07:30:36 +02:00
Nicola Murino
d2f4edcdb6
sftpd statvfs: check the virtual quota against that of the filesystem
if the virtual quota limit is greater than the filesystem available space,
we need to return the filesystem limits

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-14 14:53:26 +02:00
Nicola Murino
67abf03fe3
web UIs: move common css to a separate template file
so we can reuse it instead of copying the same CSS every time

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-14 11:54:55 +02:00
Nicola Murino
5d7f6960f3
web UIs: add branding support
Fixes #829

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-13 19:40:52 +02:00
Paul Laffitte
4bea9ed760
add sftpgo logo on login pages (#835)
Signed-off-by: Paul Laffitte <paul.laffitte@enix.fr>
2022-05-13 17:12:52 +02:00
Nicola Murino
4995cf1b02
defender: allow to load blocklist/safelist also from config/env vars
Fixes #831

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-13 14:46:07 +02:00
Tim Birkett
a5d0cbbe44
chore: fix a linting error (#834)
Signed-off-by: Tim Birkett <tim.birkett@sainsburys.co.uk>
2022-05-13 11:14:38 +02:00
Tim Birkett
7b1a0d3cd3
chore: disallow all crawlers with robots.txt (#833)
Signed-off-by: Tim Birkett <tim.birkett@sainsburys.co.uk>
2022-05-13 09:23:43 +02:00
Nicola Murino
1e0b3a2a8c
web client: add share mode read/write
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-09 19:09:43 +02:00
Nicola Murino
e72bb1e124
allow building with bolt provider disabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-09 11:19:29 +02:00
Nicola Murino
164621289c
awscontainer: add a flag to disable the installation code
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-07 12:50:49 +02:00
Nicola Murino
737109b2b8
sftpfs: add more ciphers, KEXs and MACs
they are negotiated according to their order.
Restrictions are generally configured server-side.
I want to avoid exposing other settings for now.

Fixes #817

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-06 09:21:57 +02:00
Herbert He
8b8e27b702
docs(cn): support README translation for Simplified Chinese (#818)
Signed-off-by: Herbert <herbert.he0229@gmail.com>
2022-05-05 19:15:49 +02:00
Dylan Legendre
4b099640de
Updating typos in openapi/swagger documentation as well as various markdown documentation files (#816)
Signed-off-by: Dylan Legendre <dylanlegendre09@gmail.com>
2022-05-05 18:26:22 +02:00
Nicola Murino
80da2dc722
try to automatically find shared data dirs in system-wide paths
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-05 11:27:19 +02:00
Nicola Murino
61947e67ae
ftpd: add basic wildcard support
this is the minimal implementation to allow mget and similar commands with
wildcards.

We only support wildcards for the last path level, for example:

- mget *.xml is supported
- mget dir/*.xml is supported
- mget */*.xml is not supported

Removed . and .. from FTP directory listing

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-04 19:32:02 +02:00
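A minimal sketch of last-level-only wildcard matching using Go's path.Match (an illustration of the restriction above, not the ftpd code; matchLastLevel is a made-up helper):

    package main

    import (
        "fmt"
        "path"
    )

    // matchLastLevel reports whether name matches pattern when wildcards are
    // only honored in the last path component; the directory part of the
    // pattern is compared literally, so "*/*.xml" never matches anything
    // outside a directory literally named "*".
    func matchLastLevel(pattern, name string) bool {
        dir, base := path.Split(pattern)
        if path.Dir(name) != path.Clean(dir) {
            return false
        }
        ok, err := path.Match(base, path.Base(name))
        return err == nil && ok
    }

    func main() {
        fmt.Println(matchLastLevel("*.xml", "a.xml"))         // true
        fmt.Println(matchLastLevel("dir/*.xml", "dir/a.xml")) // true
        fmt.Println(matchLastLevel("*/*.xml", "dir/a.xml"))   // false
    }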
Nicola Murino
9a37e3d159
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-01 21:51:58 +02:00
Nicola Murino
14fb6c4038
always check recently updated users
also fix the query to get users for quota check for sql based providers

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-30 11:59:36 +02:00
Nicola Murino
dd9c5b2149
sql provider: enhanced folder mapping query using an upsert
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-28 14:49:57 +02:00
Nicola Murino
ecd488a840
data provider: remove prefer_database_credentials
Google Cloud Storage credentials are now always stored within the data
provider.

Added a migration to read credentials from disk and store them inside the
data provider.

After v2.3 we can also remove credentials_path

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-28 12:55:01 +02:00
Nicola Murino
4a44a7dfe1
improved readlink handling
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-27 18:38:46 +02:00
Nicola Murino
16a44a144b
webclient: don't restore checkbox status
Fixes #807

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-26 09:15:26 +02:00
Nicola Murino
97f8142b1e
azblobfs: update to the latest sdk and fix compatibility
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-25 17:34:52 +02:00
Nicola Murino
504cd3efda
add groups support
Using groups simplifies the administration of multiple accounts by
letting you assign settings once to a group, instead of multiple
times to each individual user.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-25 15:49:11 +02:00
zemsten
857b6cc10a
Update full-configuration.md (#799)
NGINX spelling

Signed-off-by: Samuel Zarn <samz@localhost.localdomain>
2022-04-22 09:22:36 +02:00
Nicola Murino
002a06629e
refactoring of user session counters
Fixes #792

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-14 19:07:41 +02:00
Nicola Murino
5bc0f4f8af
linux pkgs: disable vcs stamping
it seems to create build issues since Go 1.18.1.

I need to investigate further

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-13 10:25:29 +02:00
Nicola Murino
cacfffc5bf
OIDC: add support for custom fields
These fields can be used in the pre-login hook to implement custom
logic

Fixes #787

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-12 19:31:25 +02:00
dependabot[bot]
87d7854453
Bump codecov/codecov-action from 2 to 3 (#789)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 2 to 3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-11 09:24:54 +02:00
dependabot[bot]
aa34388de0
Bump actions/download-artifact from 2 to 3 (#790)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 2 to 3.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-11 09:24:09 +02:00
Nicola Murino
a3f50029ba
update moment.js to v2.29.2
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-09 10:05:15 +02:00
Nicola Murino
f9d8b83c2a
sshd: disable by default ssh-rsa host key algo
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-04 18:52:19 +02:00
Nicola Murino
7c8bb5b18a
fix quota for uploads outside home dir if rename fails
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-03 13:48:56 +02:00
Nicola Murino
254b2ae87f
add support for AWS container
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-03 08:52:36 +02:00
Nicola Murino
5a40f998ae
check and update the password hashing algorithm on user login
also add ldap md5 variant as per-user request

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-02 22:20:21 +02:00
Nicola Murino
77f3400161
allow to mount virtual folders on root (/) path
Fixes #783

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-02 18:32:46 +02:00
Nicola Murino
3521bacc4a
web user templates: ensure we can save valid users
users with no public key and password are now valid after the recent
changes

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-01 09:47:54 +02:00
Nicola Murino
55f8171dd1
sshd: add support for host key certificates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-01 08:03:56 +02:00
Nicola Murino
a7b159aebb
ssh user certs: add a revoked list
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-31 21:49:06 +02:00
Nicola Murino
5c114b28e3
sshd: we don't need the user certificate
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-31 18:16:50 +02:00
Nicola Murino
e079444e8a
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-31 09:54:59 +02:00
Nicola Murino
3cb23ac956
be sure to close an SSH connection if all channels are idle
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-30 10:59:25 +02:00
Nicola Murino
8fb256ac91
add link to an external Traefik tutorial
update deps

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-29 18:13:43 +02:00
Nicola Murino
ca32cd5e0e
allow placeholders for add/update users and folders
remove session token for S3, a temporary token is useless for our usage

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-27 16:32:21 +02:00
Nicola Murino
e0defafa26
azblob: fix the error returned in fs.Stat
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-26 11:47:12 +01:00
Nicola Murino
5cccb872bb
add support to redirect HTTP to HTTPS
Fixes #777

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-26 10:00:02 +01:00
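For reference, the usual shape of such a redirect with Go's standard library (a generic sketch, not SFTPGo's handler; the ports and certificate file names are placeholders):

    package main

    import (
        "log"
        "net"
        "net/http"
    )

    func main() {
        // Plain HTTP listener whose only job is to redirect clients to HTTPS.
        go func() {
            redirect := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                host, _, err := net.SplitHostPort(r.Host)
                if err != nil {
                    host = r.Host // no port in the Host header
                }
                http.Redirect(w, r, "https://"+host+r.URL.RequestURI(), http.StatusMovedPermanently)
            })
            log.Fatal(http.ListenAndServe(":8080", redirect))
        }()

        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello over TLS"))
        })
        // cert.pem and key.pem are placeholder file names.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", mux))
    }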
Nicola Murino
aaf940edab
enforce CSRF token usage by the same IP for which it was issued
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-26 08:41:50 +01:00
ismail BASKIN
853086b942
Add role field array support (#774)
oidc: add array role field support

Signed-off-by: Ismail Baskin <ismailbaskin5@gmail.com>
2022-03-25 10:36:35 +01:00
Nicola Murino
81bdba6782
docker: re-add ppc64le
The alpine image for Go 1.18 is now available

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-25 09:32:48 +01:00
Nicola Murino
d955ddcef9
check that the jwt token is used by the same IP for which it was issued
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-24 22:03:17 +01:00
Nicola Murino
4bbb195711
plugin: reload IP filter plugin on demand
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-24 10:21:13 +01:00
Nicola Murino
a193089646
add jq to full docker image variants
Fixes #767

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-23 11:35:23 +01:00
Nicola Murino
9bfdc10172
add support for ipfilter plugins
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-23 10:58:01 +01:00
Nicola Murino
b062b38ef4
docker: add rsync to "full" images
there are better alternatives and rsync will only work on the local
filesystem, but it can still be useful to some people

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-22 17:29:14 +01:00
Nicola Murino
a31a9dc32c
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-21 17:52:18 +01:00
Pr0pHesyer
fa43791ea9 Optimized typography for better readability
Signed-off-by: Pr0pHesyer <proskire@protonmail.com>
2022-03-21 15:05:43 +01:00
Nicola Murino
93b9c1617e
web UI: allow to load custom css
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-19 21:44:27 +01:00
Nicola Murino
4c710d731f
update to Go 1.18
temporarily disabled the docker image for ppc64le as the alpine image
is not yet available

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-18 21:52:00 +01:00
Nicola Murino
d9f30e7ac5
add a global whitelist
if defined only the listed IPs/networks can access the configured
services, all other client connections will be dropped before they
even try to authenticate

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-17 22:10:52 +01:00
Nicola Murino
03da7f696c
SFTPGo is now listed on Azure Marketplace
Fixes #684

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-17 14:59:02 +01:00
Nicola Murino
883a3dceaf
db defender: fix getHost query and add more test cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-16 18:22:08 +01:00
lucatiozzo91
7b86e2ac59 always show banned host in ui
When a host is banned but the updated_time field is in the past, the UI
didn't show the record.

Fixes #758

Signed-off-by: lucatiozzo91 <luca.tiozzo91@gmail.com>
2022-03-16 17:38:29 +01:00
Nicola Murino
8502d7b051
improve transfer quota limits test case
ReadAll can read more bytes than the effective size; io.Copy is better
for this test

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-15 22:13:07 +01:00
Nicola Murino
6f8b71b89f
s3fs: migrate to AWS SDK V2
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-15 19:16:50 +01:00
Nicola Murino
7e7f662a23
ensure that defaults defined in code match the default config file
Fixes #754

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-14 10:42:14 +01:00
Nicola Murino
0bec1c6012
change the default value for prefer_database_credentials to true ...
... and deprecate this setting.

In the future we'll remove prefer_database_credentials and
credentials_path and we will not allow the credentials to be saved on
the filesystem

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-13 14:29:11 +01:00
Nicola Murino
5582f5c811
data provider: add automatic backups
Automatic backups are enabled by default; a new backup will be saved
each day at midnight.

The backups_path setting was moved from the httpd section to the
data_provider one; please adjust your configuration file and/or your
env vars

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-13 13:45:07 +01:00
Nicola Murino
48ed3dab1f
update docs and deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-11 17:11:49 +01:00
Nicola Murino
d8de0faef5
allow to require two-factor auth for users
Fixes #721

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-06 16:57:13 +01:00
Nicola Murino
df828b6021
gcsfs: use pagers when listing bucket objects
Hopefully fixes #746

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-04 18:46:17 +01:00
Nicola Murino
056daaddfc
always execute fs checks for users not logged in after an update
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-03 19:31:54 +01:00
Nicola Murino
5c2fd8d52a
add support for a start directory
Fixes #705

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-03 12:44:56 +01:00
Nicola Murino
4519bffa39
S3: add support for assume role
Fixes #736

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-28 20:19:13 +01:00
Nicola Murino
1ea7429921
initprovider: add load data options
Fixes #741

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-28 17:05:18 +01:00
dependabot[bot]
816c174036 Bump github.com/mattn/go-sqlite3 from 1.14.11 to 1.14.12
Bumps [github.com/mattn/go-sqlite3](https://github.com/mattn/go-sqlite3) from 1.14.11 to 1.14.12.
- [Release notes](https://github.com/mattn/go-sqlite3/releases)
- [Commits](https://github.com/mattn/go-sqlite3/compare/v1.14.11...v1.14.12)

---
updated-dependencies:
- dependency-name: github.com/mattn/go-sqlite3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-28 09:26:17 +01:00
Nicola Murino
79857a8733
config: restore defaults for smtp templates path
It was mistakenly deleted in the previous commit

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-27 14:16:38 +01:00
Nicola Murino
dcc3292dbc
web setup: add an optional installation code
The purpose of this code is to prevent anyone who can access
the initial setup screen from creating an admin user

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-27 13:08:47 +01:00
Nicola Murino
7f674a7fb3
add more details to the server status page
add all supported fields to the OpenAPI docs

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-26 16:43:29 +01:00
Nicola Murino
b64d3c2fbf
simplify rename permission
Before this patch we allowed a rename in the following cases:

- the user has rename permission on both source and target path
- the user has delete permission on the source path and create/upload on
  the target path

We now check only the rename/rename_files/rename_dirs permissions.
This is what SFTPGo users expect.

This is a backward-incompatible change and it will not be backported to
the 2.2.x branch

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-26 12:19:09 +01:00
Nicola Murino
7fc5cb80d6
deb/rpm packages: attempt to set the cap_net_bind_service capability
so the service can bind to privileged ports without running as root user

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-26 10:10:51 +01:00
Andrea Mattia
92460f811f Simplify sed commands in Dockerfile(s)
Closes #740

Signed-off-by: Andrea Mattia <andrea.mattia@kireygroup.com>
2022-02-25 20:58:26 +01:00
Nicola Murino
e18ad55067
S3: add support for session tokens
Fixes #736

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-25 15:30:04 +01:00
Nicola Murino
4e9dae6fa4
allow to cache external authentications
Fixes #733

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-25 11:51:10 +01:00
Nicola Murino
f5a0559be6
don't execute fs check if the user has recent activity
The check could be expensive with some backends and is generally
only required the first time that a user logs in

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-24 16:11:35 +01:00
Nicola Murino
670018f05e
OIDC: add profile and email scope to OAuth2 config
Fixes #728

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-22 10:20:14 +01:00
Nicola Murino
8bbf54d2b6
azure blobs: add support for multipart downloads
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-21 19:01:31 +01:00
Nicola Murino
d31cccf85f
azblob: switch to the new azure-go-sdk
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-20 14:43:24 +01:00
Nicola Murino
c19b03a3f7
shares: add permission to deny sharing without password
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-19 13:31:58 +01:00
Nicola Murino
c6b8644828
OIDC: execute pre-login hook after IDP authentication
so the SFTPGo users can be auto-created using the hook

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-19 10:53:35 +01:00
Nicola Murino
f1a255aa6c
httpd: allow to restrict allowed hosts ...
... and to add security headers to the responses

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-17 18:22:27 +01:00
Nicola Murino
876bf8aa4f
sftpfs: improve remove
we know if the client asks to remove a file or directory so let's
use the appropriate command without letting the sftp library guess
the appropriate behavior

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-16 16:46:28 +01:00
Nicola Murino
900e519ff1
SFTP: respect file open flags also for file creation
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-16 16:05:56 +01:00
Nicola Murino
f1832d4478
shares: add an upload form for shares with write scope
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-15 19:19:25 +01:00
Nicola Murino
ebbbf81e65
logger: fix UTC time func
Fixes #719

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-14 12:30:00 +01:00
Nicola Murino
1fccd05e9e
allow to configure the minimum version of TLS to be enabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-13 15:56:07 +01:00
Nicola Murino
66945c0a02
Web UIs: add OpenID Connect support
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-13 14:30:20 +01:00
Nicola Murino
fa0ca8fe89
quota summary and docs improvements
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-08 12:43:08 +01:00
clach04
c478c7dae9 Docker readme typo
Signed-off-by: clach04 <clach04@gmail.com>
2022-02-06 19:21:35 +01:00
Nicola Murino
9382db751c
make HTTP shares browsable
if you share a single folder with read scope, you can now browse the share
and download single files

Fixes #674
See #677

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-06 16:46:43 +01:00
Nicola Murino
7e2a8e70c9
update zerolog deps
The updated version avoids always creating a socket connected to
journald on application start.

Now the socket is only created if we log to journald

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-03 17:55:36 +01:00
Nicola Murino
cd35636939
S3: add a timeout for single part uploads
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-01 12:15:56 +01:00
Nicola Murino
d51adb041e
update data transfer quota only if the current IP has some limits
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-31 19:30:25 +01:00
Nicola Murino
02db00d008
dataprovider: add naming rules
Naming rules make it possible to support case-insensitive usernames, trim
trailing and leading white spaces, and accept any valid UTF-8 character in
usernames.

If you were enabling `skip_natural_keys_validation`, you now need to
set `naming_rules` to `1`

Fixes #687

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-31 18:01:37 +01:00
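A small sketch of the kind of normalization these rules describe, with a made-up normalizeUsername helper (not the data provider's code):

    package main

    import (
        "fmt"
        "strings"
        "unicode/utf8"
    )

    // normalizeUsername trims leading/trailing spaces, rejects invalid UTF-8
    // and lowercases the name so lookups can be case insensitive.
    func normalizeUsername(username string) (string, error) {
        name := strings.TrimSpace(username)
        if name == "" || !utf8.ValidString(name) {
            return "", fmt.Errorf("invalid username %q", username)
        }
        return strings.ToLower(name), nil
    }

    func main() {
        for _, u := range []string{"  Alice ", "bob", "Élodie"} {
            n, err := normalizeUsername(u)
            fmt.Println(u, "->", n, err)
        }
    }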
Nicola Murino
fb2d59ec92
data provider: add config options for certs validation/authentication
Fixes #682

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-30 18:04:03 +01:00
Nicola Murino
1df1225eed
add support for data transfer bandwidth limits
with total limit or separate settings for uploads and downloads and
overrides based on the client's IP address.

Limits can be reset using the REST API

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-30 11:42:36 +01:00
Moroi
aca71bff7a
Fixed typo
Signed-off-by: Moroi <4635854+Rango-dz@users.noreply.github.com>
2022-01-26 22:00:59 +01:00
Jeremy Clerc
9709aed5e6 httpd: webpath redirect using status found (302)
301 MovedPermanently is cached by the browser, which can
be annoying when it is on a base path like / while one
may reuse the domain (e.g. localhost) for other apps/tests.

Fixes #695

Signed-off-by: Jeremy Clerc <jeremy@clerc.io>
2022-01-26 21:50:37 +01:00
Nicola Murino
d2a4178846
check quota usage between ongoing transfers
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-20 18:19:20 +01:00
Nicola Murino
d73be7aee5
remove the use of some unnecessary pointers
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-16 12:09:17 +01:00
Nicola Murino
ffe7f7ff16
doc improvements and minor changes
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-16 09:50:23 +01:00
Nicola Murino
a6ed6fc721
pattern filters: don't allow files in hidden dirs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-15 19:06:02 +01:00
Nicola Murino
c3831de94e
add hide policy to pattern filters
Disallowed files/dirs can be completely hidden. This may cause performance
issues for large directories

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-15 17:16:49 +01:00
Marc
9b6b9cca3d systemd-security: add some easy wins
We can tighten security by adding the following to
the systemd service file:

* NoNewPrivileges: should never be needed
* DevicePolicy: only basics required
* PrivateDevices: only needs mounted stuff, never devs
* ProtectSystem: no need to change boot
* RestrictAddressFamilies: INET, UNIX only

Signed-off-by: Marc <mail@lpcvoid.com>
2022-01-15 13:31:59 +01:00
Nicola Murino
64d1ea2d89
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 18:48:08 +01:00
Nicola Murino
1c51239da8
Admin UI: allow to create multiple users/folders from templates
The clone button is not needed anymore: you can select a user and
click on template to generate one or more similar users, or you can
create users/folders from an empty template

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-12 19:01:19 +01:00
Nicola Murino
51c15de892
web admin: simplify user page
The page to add/edit users should be less intimidating now.
All the advanced settings are hidden by default. Permissions are set
to any, so if you also have a users base dir set, to add a user
you simply have to set username, password or public key and save

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-10 19:44:16 +01:00
Nicola Murino
b8efb1b8ec
squash database migrations.
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-09 12:25:53 +01:00
Nicola Murino
ec1d20f46f
sshd: improve docs about supported ciphers, KEX and MACs
also added a check to ensure that the configured values are valid

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-06 18:09:49 +01:00
Nicola Murino
1f619d5ea6
make the sdk a separate module
The SFTPGo SDK now is at the following URL

https://github.com/sftpgo/sdk

Fixes #657

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-06 11:54:43 +01:00
Nicola Murino
6d3d94a01f
move kms implementation outside the sdk package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-06 10:11:47 +01:00
Nicola Murino
0a3d94f73d
log at info level the service configurations
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-05 13:22:49 +01:00
Nicola Murino
7c68b03d07
move plugin handling outside the sdk package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-05 11:37:45 +01:00
Nicola Murino
2912b2e92e
sdk: add a logger interface
we are now ready to make the sdk a separate module

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-04 16:07:41 +01:00
Nicola Murino
a6fe802370
move kms definitions to the sdk package
This is the first step to make the sdk a separate module

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-04 12:49:30 +01:00
Nicola Murino
ad483b7581
httpd: switch back to chi Recoverer now that the required patch is merged
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-04 09:48:16 +01:00
Nicola Murino
df86955f28
eventsearcher plugin: add support to search for provider, bucket, endpoint
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-03 17:02:52 +01:00
Nicola Murino
00ec426a80
notifier plugins: add provider, bucket and endpoint to notifier params
Fixes #656

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-02 19:22:44 +01:00
Nicola Murino
222db53410
notifiers plugin: replace params with a struct
Fixes #658

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-02 15:16:35 +01:00
Nicola Murino
4d85dc108f
document that SFTPGo is also available as a winget package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-01 18:19:48 +01:00
Nicola Murino
6d582a821b
back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-31 16:01:23 +01:00
Nicola Murino
794afbf85e
update release workflow
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-31 14:17:51 +01:00
Nicola Murino
e3f3997c5e
set version to 2.2.1
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-31 13:42:03 +01:00
Nicola Murino
f78090e47f
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-29 18:11:00 +01:00
Nicola Murino
4d7a4aa99a
check rename source and target 2021-12-28 12:03:52 +01:00
Nicola Murino
c36217c654
improve some docs 2021-12-26 14:54:29 +01:00
Nicola Murino
59bb578b89
web client: allow to move files between folders
Fixes #653
2021-12-25 17:13:23 +01:00
Nicola Murino
7d8823307f
defender: add provider driver
Fixes #616
2021-12-25 12:08:07 +01:00
Nicola Murino
8174349032
console logger: enable colors on Windows too ...
... now that zerolog supports this feature
2021-12-20 18:47:18 +01:00
Nicola Murino
00a02dc14d
howto: add two-factor authentication 2021-12-19 18:08:12 +01:00
Nicola Murino
ced73ed04e
REST API: add an option to create missing dirs 2021-12-19 12:14:53 +01:00
Nicola Murino
cc73bb811b
change log level from warn to error where appropriate
Fixes #649
2021-12-16 19:53:00 +01:00
Nicola Murino
a587228cf0
add support for metadata plugins 2021-12-16 18:18:36 +01:00
Nicola Murino
1472a0f415
hooks: preserve MFA related configs
if a user is updated using pre-login or external auth hook we need to
preserve the MFA related configs in the same way we do if the user is
updated using the REST API
2021-12-11 11:08:20 +01:00
Nicola Murino
0bb141960f
add support for different bandwidth limits based on client IP 2021-12-10 18:43:26 +01:00
Nicola Murino
c153330ab8
web client: use fetch to upload files
also add REST API to upload a single file as POST body
2021-12-08 19:25:22 +01:00
Nicola Murino
5b4ef0ee3b
windows installer: rename the sample configuration with the default values
The previous name sftpgo.json.default could create confusion for Windows
users
2021-12-05 07:58:53 +01:00
Nicola Murino
9632b6ee94
events search: improve test cases 2021-12-04 18:18:59 +01:00
Nicola Murino
78eb1c1166
update OpenAPI schema 2021-12-04 17:57:48 +01:00
Nicola Murino
a7c0b07a2a
add session id to notifier plugins/hook 2021-12-04 17:27:24 +01:00
Nicola Murino
dc1cc88a46
keyboard interactive hooks: allow to validate passcode 2021-12-04 15:14:44 +01:00
Nicola Murino
3f5451eab6
web client: save/restore file list preferences 2021-12-04 07:58:49 +01:00
Nicola Murino
30d98326ca
docker: update alpine image to 3.15 2021-12-03 19:33:37 +01:00
Nicola Murino
bedc8e288b
web client: add support for integrating external viewers/editors 2021-12-03 18:33:08 +01:00
Nicola Murino
6092b6628e
logs: use info level for login related messages
so enabling the debug level is not required just to understand, for example,
that a user exceeded the allowed sessions.

Also set the cache update frequency as documented
2021-12-02 19:36:42 +01:00
Nicola Murino
6ee51c5cc1
kms: remove support for compat secrets
also document how to activate the deprecated builtin provider
2021-12-01 17:53:19 +01:00
Nicola Murino
4df0ae82ac
web client: allow downloading of single shared files without compression
Fixes #629
2021-11-30 20:32:10 +01:00
Nicola Murino
5db31f0fb3
web client: allow to upload/delete multiple files 2021-11-30 18:40:50 +01:00
Nicola Murino
0f8170c10f
improve some docs and disable telemetry server by default 2021-11-29 17:58:10 +01:00
Nicola Murino
3c24cb773f
SFTP: log users connections at info level
uniform SFTP and FTP logs

Fixes #626
2021-11-29 10:15:46 +01:00
Nicola Murino
bec54ac8ae
CI: add windows x86
there still seem to be people using x86 on Windows ...
2021-11-28 21:30:31 +01:00
Nicola Murino
c330ac8418
CI: add windows arm64 2021-11-28 18:56:30 +01:00
Nicola Murino
3e478f42ea
update lint rules and fix some warnings 2021-11-27 17:04:13 +01:00
Nicola Murino
18ab757216
back to development 2021-11-27 15:07:31 +01:00
Nicola Murino
b6bcf0cd94
set version to 2.2.0 2021-11-27 11:46:05 +01:00
Nicola Murino
015aa36c56
loaddata: improve shares restore
usage and timestamps are now preserved
2021-11-27 11:12:51 +01:00
Nicola Murino
f2480ce5c9
improve chtimes handling on open files 2021-11-26 19:00:44 +01:00
Vincent Murphy
f828c58dca Add --s3-force-path-style to portable 2021-11-26 17:40:23 +01:00
Nicola Murino
dc19921b0c
web client: don't show the link for expired shares 2021-11-25 20:09:11 +01:00
Nicola Murino
3f3591bae0
web client: allow to preview images and pdf
pdf depends on browser support. It does not work on mobile devices.
2021-11-25 19:24:32 +01:00
Nicola Murino
fc048728d9
add 7digital to the sponsors section 2021-11-25 13:49:32 +01:00
Nicola Murino
aeb4675196
web admin: use a textarea for allowed/denied ip mask fields
Fixes #621
2021-11-25 13:08:12 +01:00
Nicola Murino
4652f9ede8
FTPD: allow to set different passive IPs based on the client's IP address 2021-11-25 12:45:09 +01:00
Nicola Murino
531cb5b5a1
sftpd: handle setstat requests with multiple attrs 2021-11-24 11:55:14 +01:00
Nicola Murino
9fb43b2c46
docs: clarify how multi-step auth works with external authentication
Fixes #617
2021-11-24 11:27:32 +01:00
Nicola Murino
8a8298ad46
web client: improve file upload 2021-11-22 12:25:36 +01:00
Nicola Murino
3d6b09e949
REST API: expose OpenAPI schema and render it using Swagger UI
Fixes #609
2021-11-21 09:32:51 +01:00
Nicola Murino
fb8f013ea7
web: update permissions on cookie refresh 2021-11-20 10:48:39 +01:00
Nicola Murino
c41319bb7a
CI: sign windows installer and executable 2021-11-19 22:44:50 +01:00
Nicola Murino
46157ebbb6
CI docker: remove armv7 support
CI is still unreliable if we enable armv7 support
2021-11-16 09:07:10 +01:00
Nicola Murino
200b1d08c7
docker: add armv7 2021-11-15 21:58:35 +01:00
Nicola Murino
24b0352eb6
GCS: add ACL support 2021-11-15 21:57:41 +01:00
Nicola Murino
52f3a98cc8
preserve GCS credentials on update if not set
credentials were not preserved if "prefer_database_credentials" was
set to true

Fixes #613
2021-11-15 19:12:58 +01:00
Nicola Murino
e29a3efd39
add resetprovider sub-command
Fixes #608
2021-11-15 18:40:31 +01:00
Nicola Murino
ca730e77a5
add separate permissions to delete and rename files and dirs
perm_delete and perm_rename still exist for backward compatibility;
now they are an alias that assigns both of the new split permissions
2021-11-14 16:23:33 +01:00
Nicola Murino
0833b4698e
httpd service: add CORS support 2021-11-13 23:14:50 +01:00
Nicola Murino
ee5c5e033d
S3: add ACL support
Fixes #610
2021-11-13 16:05:40 +01:00
Nicola Murino
78233ff9a3
web UI/REST API: add password reset
In order to reset the password from the admin/client user interface,
an SMTP configuration must be added and the user/admin must have an email
address.
You can prohibit the reset functionality on a per-user basis by using a
specific restriction.

Fixes #597
2021-11-13 13:25:43 +01:00
Nicola Murino
b331dc5686
web client: show share last use and used tokens 2021-11-07 09:53:35 +01:00
Nicola Murino
dfcfcee208
Windows: fix UTC time logging 2021-11-06 16:27:01 +01:00
Nicola Murino
094ee1522e
logger: add a flag to use UTC time for logging 2021-11-06 15:18:16 +01:00
Nicola Murino
3bc58f5988
WebClient/REST API: add sharing support 2021-11-06 14:13:20 +01:00
Martijn Pieters
f6938e76dc Parse auth plugin information from env 2021-11-02 11:36:30 +01:00
Nicola Murino
570964deb3
add post-disconnect hook
Fixes #587
2021-10-29 19:55:18 +02:00
Nicola Murino
31984ffec1
update logo and add it to windows exe and installer
thanks to @asheroto for donating the new logo
2021-10-23 19:27:39 +02:00
Nicola Murino
74fc3aaf37
REST API: add events search 2021-10-23 15:47:21 +02:00
Nicola Murino
97d0a48557
plugins: improve notifier and searcher 2021-10-20 19:39:49 +02:00
Nicola Murino
3bbe67571f
plugins: add eventsearcher 2021-10-17 16:43:05 +02:00
Nicola Murino
f131ef130b
add a link to the new events store plugin 2021-10-16 17:08:34 +02:00
Nicola Murino
4a6a4ce28d
sftpfs: map path resolution error to permission denied
we do the same for os fs so that the problematic directory is excluded
from the webdav listing instead of failing the whole directory listing
2021-10-16 10:32:18 +02:00
Nicola Murino
a80ac80fcd
pkgs: update nfpm to 2.7 and use xz as compression for both deb and rpm 2021-10-13 09:15:04 +02:00
Nicola Murino
4aa9686e3b
refactor custom actions
SFTPGo is now fully auditable: all fs and provider events that change
something are notified and can be collected using hooks/plugins.

There are some backward-incompatible changes for command hooks
2021-10-10 13:08:05 +02:00
Nicola Murino
64e87d64bd
web client UI: allow to edit plain text files
Fixes #567
2021-10-09 14:17:28 +02:00
Nicola Murino
9ca0b46f30
UI connections page: add a refresh button 2021-10-07 18:28:31 +02:00
Nicola Murino
6eb154bb74
webdav: add support for lock discovery 2021-10-06 09:11:56 +02:00
Nicola Murino
ea01c3a125
rate limiting: allow to exclude IP addresses/ranges
Fixes #563
2021-10-03 20:50:05 +02:00
Nicola Murino
1b4a1fbbe5
add data retention check hook 2021-10-03 15:17:49 +02:00
Nicola Murino
ec81a7ac29
actions: add a specific protocol for data retention 2021-10-03 10:22:47 +02:00
Nicola Murino
22d28a37b6
cmd: improve completion sub-commands 2021-10-03 08:14:57 +02:00
Nicola Murino
cc134cad9a
data retention: allow to notify results via e-mail 2021-10-02 22:25:41 +02:00
Nicola Murino
1459150024
WebDAV: improve logs 2021-10-01 20:37:23 +02:00
root
87751e562e Flesh out examples/ldapauth, specifically:
Support 'virtual' users who have no homeDirectory, uidNumber or gidNumber.
Permit read-only access by a user named "anonymous", with any password.
Assume a conventional DIT with users under ou=people,dc=example,dc=com.
Read the LDAP bindPassword from a file (not baked into the code).
Log progress and problems to syslog.
2021-10-01 09:10:13 +02:00
Nicola Murino
e6f969cb04
web UI: update js and css deps 2021-09-30 10:23:25 +02:00
Nicola Murino
ba1febba73
rework user and admin profiles
users and admins can now also update their email and description
2021-09-29 18:46:15 +02:00
Nicola Murino
af8fa7ff81
Docker: remove rsync from default images
it's time to encourage people to switch to more modern alternatives like
rclone
2021-09-27 11:34:11 +02:00
Nicola Murino
4ab2e4088a
CI docker: remove armv7 support
building docker images now takes too long and often fails with random
errors. I have to restart the build several times to be able to push
the images to docker hub and gcr
2021-09-27 10:25:21 +02:00
Nicola Murino
da0ccc6426
add SMTP support
it will be used in future update to add email sending capabilities
2021-09-26 20:25:37 +02:00
Maharanjan
0661876e99
Added email field for user account 2021-09-25 19:06:13 +02:00
Nicola Murino
cd72ac4fc9
CI: add armv7 support 2021-09-25 14:14:21 +02:00
Nicola Murino
da5a061b65
add basic REST APIs for data retention
Fixes #495
2021-09-25 12:20:31 +02:00
Nicola Murino
65948a47f1
systemd unit: set LimitNOFILE to 8192 2021-09-19 17:37:18 +02:00
Nicola Murino
bf4b3e6840
httpd: move the check connection middleware before the logger middleware
Fixes #543
2021-09-19 08:14:59 +02:00
Nicola Murino
6ea38188e8
minor fixes and doc improvements 2021-09-18 10:50:17 +02:00
Nicola Murino
b5639a51fd
don't generate defender events for HTTP/WebDAV requests with no auth
it is quite common for HTTP clients to send a first request without
the Authorization header and then send the credentials after receiving
a 401 response. We don't want to generate defender events in this case
2021-09-11 18:23:11 +02:00
Nicola Murino
5c34d814d6
fix a possible nil pointer dereference
it can happen by upgrading from very old versions
2021-09-11 14:19:17 +02:00
Nicola Murino
0eca4f1866
update deps 2021-09-08 12:29:47 +02:00
Nicola Murino
b52f829f05
docker: replace mime-support package with media-types
This way the size of the slim image is similar to the previous buster
based images
2021-09-07 21:04:46 +02:00
Nicola Murino
90f64c9f63
distroless image: minor changes 2021-09-07 19:52:28 +02:00
Oleksandr Shvets
c106498dd8
docker: added distroless image 2021-09-06 19:10:28 +02:00
Nicola Murino
7bad65a43e
user: add a permission to disable changing api key authentication
also implement the missing APIs to enable/disable api key authentication
2021-09-06 18:46:35 +02:00
Nicola Murino
101c2962ab
web client UI: add a permission to disable password change
Fixes #528
2021-09-05 18:49:13 +02:00
Nicola Murino
59140a6d51
add additional data to MFA secrets and fix pointers management 2021-09-05 14:10:12 +02:00
Nicola Murino
b1d54f69d9
admin: fix possible nil pointer dereference
this possible bug was introduced in the previous commit
2021-09-04 13:56:29 +02:00
Nicola Murino
374de07c7b
update deps 2021-09-04 13:30:23 +02:00
Nicola Murino
8a4c21b64a
add builtin two-factor auth support
The builtin two-factor authentication is based on time-based one-time
passwords (RFC 6238), which work with Authy, Google Authenticator and
other compatible apps.
2021-09-04 12:11:04 +02:00
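A compact sketch of an RFC 6238 code generator using only the Go standard library (illustrative only, not the builtin implementation; the base32 secret below is a well-known example value, not a real one):

    package main

    import (
        "crypto/hmac"
        "crypto/sha1"
        "encoding/base32"
        "encoding/binary"
        "fmt"
        "time"
    )

    // totp returns a 6-digit time-based one-time password (RFC 6238, 30s step)
    // using the HMAC-SHA1 dynamic truncation defined in RFC 4226.
    func totp(secretBase32 string, t time.Time) (string, error) {
        key, err := base32.StdEncoding.WithPadding(base32.NoPadding).DecodeString(secretBase32)
        if err != nil {
            return "", err
        }
        var msg [8]byte
        binary.BigEndian.PutUint64(msg[:], uint64(t.Unix()/30))
        mac := hmac.New(sha1.New, key)
        mac.Write(msg[:])
        sum := mac.Sum(nil)
        offset := sum[len(sum)-1] & 0x0f
        code := (binary.BigEndian.Uint32(sum[offset:offset+4]) & 0x7fffffff) % 1000000
        return fmt.Sprintf("%06d", code), nil
    }

    func main() {
        code, err := totp("JBSWY3DPEHPK3PXP", time.Now())
        if err != nil {
            panic(err)
        }
        fmt.Println("current code:", code)
    }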
Nicola Murino
16ba7ddb34
CI: also runs test cases using GOARCH 386
This way we can detect unaligned 64-bit atomic operations that only happen
on 32-bit platforms
2021-08-28 12:03:23 +02:00
Nicola Murino
bd9506da42
BaseConnection struct: ensure 64 bit alignment
Fixes #516
2021-08-28 10:06:49 +02:00
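A short sketch of the underlying Go constraint (illustrative only, with made-up struct names): on 32-bit platforms, 64-bit fields accessed via sync/atomic must be 64-bit aligned, which in practice means placing them first in the struct; this is the concern the alignment fix and the GOARCH 386 CI runs above address.

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // badCounter may panic on 386/arm when bytesSent is updated atomically:
    // after the bool, the int64 field is not guaranteed to be 64-bit aligned.
    type badCounter struct {
        enabled   bool
        bytesSent int64
    }

    // goodCounter keeps the atomically accessed 64-bit field first; the first
    // word of an allocated struct is guaranteed to be 64-bit aligned.
    type goodCounter struct {
        bytesSent int64
        enabled   bool
    }

    func main() {
        c := &goodCounter{}
        atomic.AddInt64(&c.bytesSent, 1)
        fmt.Println(atomic.LoadInt64(&c.bytesSent))
    }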
Nicola Murino
b903a6e46f
data provider: remove default admin
you need to load initial data or set "create_default_admin" to true
and the appropriate env vars if you don't want to use the web admin
setup screen to create the default admin
2021-08-20 10:37:51 +02:00
Nicola Murino
bcf088f586
data provider: update internal caches if the data provider is shared 2021-08-20 09:35:06 +02:00
Nicola Murino
be3857d572
dataprovider: add timestamp fields for users and admins 2021-08-19 15:51:43 +02:00
Nicola Murino
b99d4ce82e
fix folders validation
Fixes #510
2021-08-19 11:28:53 +02:00
Nicola Murino
0a558203da
improve proxy documentation
Fixes #507
2021-08-18 15:27:07 +02:00
Nicola Murino
5a549a88fe
update to Go 1.17 2021-08-18 14:39:56 +02:00
Nicola Murino
fe953d6b38
REST API: add support for API key authentication 2021-08-17 18:08:32 +02:00
erwiese
05c62b9f40
add documentation for defender scores (#500)
Co-authored-by: Erwin Wiesensarter <erwin.wiesensarter@bkg.bund.de>
2021-08-13 15:40:33 +02:00
Nicola Murino
555dc3b0c0
transfer logs: add FTP mode 2021-08-10 13:07:38 +02:00
Nicola Murino
0de0d3308c
improve error messages for generic failures 2021-08-08 19:30:21 +02:00
Nicola Murino
a20373b613
add support for auth plugins 2021-08-08 17:09:48 +02:00
Nicola Murino
ced2e16f41
add support for password validation rules
Fixes #494
2021-08-06 18:56:07 +02:00
Nicola Murino
3ac832c8dd
docker: bump Alpine to 3.14 2021-08-05 19:38:30 +02:00
Nicola Murino
a3c087456b
ftpd: add some security checks 2021-08-05 18:38:15 +02:00
Nicola Murino
419774158a
remove PayPal link
I'm having some issues with my PayPal account, remove it for now
2021-08-03 20:36:10 +02:00
Nicola Murino
0503215e7a
web client: try to prevent browsers from caching requests
Fixes #493
2021-08-03 19:58:03 +02:00
dependabot[bot]
9541843ff7
Bump github.com/shirou/gopsutil/v3 from 3.21.6 to 3.21.7 (#491)
Bumps [github.com/shirou/gopsutil/v3](https://github.com/shirou/gopsutil) from 3.21.6 to 3.21.7.
- [Release notes](https://github.com/shirou/gopsutil/releases)
- [Commits](https://github.com/shirou/gopsutil/compare/v3.21.6...v3.21.7)

---
updated-dependencies:
- dependency-name: github.com/shirou/gopsutil/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-02 10:11:09 +02:00
dependabot[bot]
98f22ba110
Bump uraimo/run-on-arch-action from 2.1.0 to 2.1.1 (#490)
Bumps [uraimo/run-on-arch-action](https://github.com/uraimo/run-on-arch-action) from 2.1.0 to 2.1.1.
- [Release notes](https://github.com/uraimo/run-on-arch-action/releases)
- [Commits](https://github.com/uraimo/run-on-arch-action/compare/v2.1.0...v2.1.1)

---
updated-dependencies:
- dependency-name: uraimo/run-on-arch-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-02 10:10:24 +02:00
Nicola Murino
1e9a19e326
add a howto to use SFTPGo as OpenSSH's SFTP subsystem 2021-07-31 19:09:09 +02:00
mmcgeefeedo
0046c9960a
add support to override default admin credentials via env vars 2021-07-31 10:39:53 +02:00
Nicola Murino
7640612a95
update deps 2021-07-31 10:22:38 +02:00
Nicola Murino
a26962f367
add dot and dot dot directories to sftp/ftp file listing 2021-07-31 09:42:23 +02:00
Nicola Murino
f778e47d22
sftpd: minor improvements and docs for the prefix middleware 2021-07-29 20:12:23 +02:00
Nicola Murino
4781921336
fix loading enabled_ssh_commands config key 2021-07-29 00:54:22 +02:00
mmcgeefeedo
3ae8abda9e
sftpd: add folder prefix middleware 2021-07-29 00:32:55 +02:00
Nicola Murino
90b324d707
Add a link on the login pages to switch between admin and web client login
The links are hidden if only the web admin or only the web client is
enabled and can also be controlled using the "hide_login_url" setting

Fixes #485
2021-07-27 18:43:00 +02:00
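A minimal sketch of how this might look in a YAML-format SFTPGo configuration file; only the "hide_login_url" key name comes from the commit above, while its placement under the httpd section and the accepted values are assumptions:

httpd:
  bindings:
    - port: 8080
  # 0 keeps the switch links on both login pages; non-zero values hide one
  # or both of them (hypothetical values, check the httpd docs)
  hide_login_url: 0
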
Nicola Murino
3a22aae34f
web UI: add support for upload, create dirs, rename, delete 2021-07-26 20:55:49 +02:00
dependabot[bot]
45a0473fec
Bump codecov/codecov-action from 1 to 2.0.2 (#486)
* Bump codecov/codecov-action from 1 to 2

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v2.0.2)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicola Murino <nicola.murino@gmail.com>
2021-07-26 11:08:48 +02:00
Nicola Murino
a7313e4492
webdav: add new test cases and fix some lock related issues
Our net/webdav branch now include the following patches:

https://github.com/golang/net/pull/92
https://github.com/golang/net/pull/93
https://github.com/golang/net/pull/94
2021-07-25 09:55:14 +02:00
Nicola Murino
c41ae116eb
improve logging
Fixes #381
2021-07-24 20:11:17 +02:00
Nicola Murino
83c7453957
user API: allow to disable writes ...
... even if the user has permissions for these actions
2021-07-23 21:41:02 +02:00
Nicola Murino
85a47810ff
S3: expose more properties, possible backward incompatible change
Before these changes we implicitly set S3ForcePathStyle if an endpoint
was provided.

This can cause issues with some S3 compatible object storages and must
be explicitly set now.

AWS is also deprecating this setting

https://aws.amazon.com/it/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
2021-07-23 16:56:48 +02:00
Nicola Murino
c997ef876c
S3: fix Ceph compatibility
This hack will no longer be needed once Ceph tags a new version and vendors
using it update their servers.

This code is taken from rclone, thank you!

Fixes #483
2021-07-23 11:41:31 +02:00
Nicola Murino
ae8ccadad2
users API: add API to create, delete, rename files and directories 2021-07-23 10:19:27 +02:00
Nicola Murino
5967aa1aa5
FTP: enable ftpserverlib logging and make debug mode configurable 2021-07-20 17:22:08 +02:00
Nicola Murino
c900cde8e4
notifiers plugin: add settings to retry unhandled events 2021-07-20 12:51:21 +02:00
Nicola Murino
13183a9f76
deps cleanup 2021-07-17 15:42:59 +02:00
Nicola Murino
5a568b4077
KMS: allow to provide the master encryption key as string 2021-07-17 15:34:48 +02:00
Nicola Murino
030507a2ce
add some docs for the plugin system 2021-07-17 14:14:42 +02:00
Nicola Murino
338301955f
move cloud KMS providers to an external plugin 2021-07-17 13:08:05 +02:00
Nicola Murino
6d313f6d8f
expose KMS as plugin 2021-07-16 18:22:42 +02:00
Nicola Murino
776dffcf12
kms: improve modularity 2021-07-13 21:17:21 +02:00
Nicola Murino
e1a2451c22
s3: allow to configure the chunk download timeout 2021-07-11 18:39:45 +02:00
Nicola Murino
7344366ce8
sftpd: remove workarounds for directory listing
The underlying issue was fixed in pkg/sftp 1.13.2
2021-07-11 16:26:40 +02:00
Nicola Murino
bd5191dfc5
add experimental plugin system 2021-07-11 15:26:51 +02:00
Nicola Murino
bfa4085932
improve docs 2021-07-03 18:23:36 +02:00
Nicola Murino
302ec2558c
add notifications for mkdir/rmdir 2021-07-03 18:07:55 +02:00
Nicola Murino
ff19879ffd
allow to use a persistent signing key for JWT and CSRF tokens
Fixes #466
2021-07-01 20:17:40 +02:00
Nicola Murino
04001f7ad3
FTP: try to return more specific error codes/messages for some errors
We now return 552 code for quota exceeded errors and 553 in the following
cases:

- filename denied by a filter
- no upload permission
- no overwrite permission
- pre upload hook error

Fixes #442
2021-06-28 19:40:04 +02:00
Nicola Murino
076b2f0ee0
modules: add v2 support 2021-06-26 07:31:41 +02:00
Nicola Murino
93dfb03eaf
GCS: add a trailing / to "directories"
This way SFTPGo should be compatible with Google Cloud console.

This change should be backward compatible; testing is welcome

Fixes #464
2021-06-24 19:36:01 +02:00
Nicola Murino
e09bdd43d4
defender: fix GetHost for blocklist entries too 2021-06-20 21:57:19 +02:00
Nicola Murino
ac8d8a3da1
update portable mode docs 2021-06-19 19:40:53 +02:00
Manuel Reithuber
a4157e83e9 template fsconfig: updated form-group css classes so we can further improve onFilesystemChanged()
it doesn't reference any vfs providers at all anymore :)
2021-06-19 19:27:54 +02:00
Manuel Reithuber
13f23838a1 template fsconfig.html: using string provider name in onFilesystemChanged() 2021-06-19 19:27:54 +02:00
Manuel Reithuber
fd4c388b23 added vfs.ListProviders() and using it in template fsconfig.html (added a new ListFSProviders template function for that) 2021-06-19 19:27:54 +02:00
Manuel Reithuber
88b10da596 updated utils.LoadTemplate() to call template.ParseFiles() directly and added a way to specify a base template (will be used in the next commit) 2021-06-19 19:27:54 +02:00
Manuel Reithuber
c07dc74d48 template fsconfig.html: simplified code in onFilesystemChanged() 2021-06-19 19:27:54 +02:00
Manuel Reithuber
b48e01155c FilesystemProvider: added .Name() which reverses vfs.GetProviderByName(), and added .ShortInfo(); using .ShortInfo() in User.GetInfoString() 2021-06-19 19:27:54 +02:00
Manuel Reithuber
0ff010cc94 added vfs.GetProviderByName(), using it in for sftpgo portable and for parsing the webadmin form field 2021-06-19 19:27:54 +02:00
Nicola Murino
81aac15a6c
defender: don't return expired hosts/banned ip in GetHost too 2021-06-19 18:51:33 +02:00
Nicola Murino
c1b862394d
move other errors to utils package 2021-06-19 13:06:01 +02:00
Manuel Reithuber
f19937b715
move Filesystem config validation to vfs 2021-06-19 12:24:43 +02:00
Nicola Murino
f2f612b450
defender: don't return expired hosts/banned ip 2021-06-19 11:02:46 +02:00
Nicola Murino
0c2640bbab
update deps 2021-06-19 09:56:49 +02:00
Nicola Murino
3bb0ca1d2b
config: remove deprecated configuration keys 2021-06-19 09:47:06 +02:00
Nicola Murino
d5b42f72e2
squash database migrations, remove compat data provider code 2021-06-19 09:03:20 +02:00
Nicola Murino
62744e081b
get HTTPD binding from env: respect the documented default 2021-06-17 15:57:41 +02:00
Nicola Murino
9dcaf1555f
back to development 2021-06-16 19:28:25 +02:00
Nicola Murino
a09cf5c8b9
set version to 2.1.0 2021-06-16 17:45:09 +02:00
Nicola Murino
47ebe42375
FTP: fix LIST on files 2021-06-15 06:38:56 +02:00
Nicola Murino
4d97ab9eb9
Let's Encrypt tutorial: use sudo where appropriate 2021-06-14 22:35:08 +02:00
Nicola Murino
8ed13dc4a9
docs: document how to use Let's Encrypt Certificates 2021-06-14 22:05:55 +02:00
Nicola Murino
3b66dd0873
Linux packages: fix static resources copy 2021-06-14 14:18:15 +02:00
Nicola Murino
d992f0ffcc
update deps 2021-06-13 08:54:22 +02:00
Nicola Murino
6c5a7e8f13
improve installation docs, add paypal link to fundings 2021-06-12 10:05:25 +02:00
Nicola Murino
9d3d7db29c
azblob: store SAS URL as kms.Secret 2021-06-11 22:27:36 +02:00
Nicola Murino
8607788975
s3fs: use "application/x-directory" as folder mime type
This change improves s3fs-fuse compatibility

Fixes #451
2021-06-08 13:52:36 +02:00
Nicola Murino
4be6307d87
webadmin: add defender page 2021-06-08 13:24:28 +02:00
Nicola Murino
feec2118bb
improve defender and quotas REST API 2021-06-07 21:52:43 +02:00
Nicola Murino
43182fc25e
OpenAPI: add users API
These new APIs match the web client features.

I'm aware that some APIs do not follow REST best practices.

I want to avoid things like "/user/folders/<path>"

where "path" must be encoded and making it optional creates issues, so
I defined resources as query parameters instead of path parameters
2021-06-05 16:07:09 +02:00
Nicola Murino
976f588863
improve docs to enable FTP/WebDAV
Fixes #447
2021-06-02 09:49:31 +02:00
Nicola Murino
575bcf1f03
add remote address to transfer and commands logs 2021-06-01 22:28:43 +02:00
Nicola Murino
969c992bfd
pre-upload: execute the hook just before opening the target file 2021-05-31 22:40:47 +02:00
Nicola Murino
c1239fbf59
pre-upload action: add file open flags
Reading the flags the hook receiver can detect if the client wants to
truncate the target file
2021-05-31 22:33:23 +02:00
Nicola Murino
c63b923ec3
cryptfs: add support for atomic uploads 2021-05-31 21:45:29 +02:00
dependabot[bot]
574c4029fc
Bump uraimo/run-on-arch-action from 2.0.9 to 2.0.10 (#444)
Bumps [uraimo/run-on-arch-action](https://github.com/uraimo/run-on-arch-action) from 2.0.9 to 2.0.10.
- [Release notes](https://github.com/uraimo/run-on-arch-action/releases)
- [Commits](https://github.com/uraimo/run-on-arch-action/compare/v2.0.9...v2.0.10)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-31 10:05:25 +02:00
Nicola Murino
423d8306be
webclient: allow to download multiple files as zip 2021-05-30 23:07:46 +02:00
Nicola Murino
fc7066a25c
cross device rename: remove the source if the copy succeeded 2021-05-27 22:23:14 +02:00
Nicola Murino
e1bf46c6a5
local fs rename: if it fails with a cross device error try a copy
I don't want to add a new setting for this, at least until we get the
first complaint about a slow rename :)

Fixes #440
2021-05-27 20:14:12 +02:00
Nicola Murino
3b46e6a6fb
add support for a global temp path
Fixes #436
2021-05-27 15:38:27 +02:00
Nicola Murino
7a85c66ee7
webclient: defer file list rendering
combined with server-side processing I can now list a directory with
about 100,000 files in less than 2 seconds without losing client-side
filtering and pagination
2021-05-27 09:40:46 +02:00
Nicola Murino
25a44030f9
actions: add pre-download and pre-upload
Downloads and uploads can be denied based on hook response
2021-05-26 07:48:37 +02:00
Nicola Murino
600268ebb8
httpclient: allow to set custom headers 2021-05-25 08:36:01 +02:00
Nicola Murino
1223957f91
webclient: use different icons based on the file extension 2021-05-24 19:09:03 +02:00
Nicola Murino
15cde2dd1a
improve test coverage 2021-05-23 22:29:55 +02:00
Nicola Murino
50e441849a
try to make the web admin more user friendly
removed all the textarea with fields separated using "::".
This should, hopefully, improve user experience
2021-05-23 22:02:01 +02:00
Nicola Murino
02bb09ec01
remove deprecated file extensions filters
these filters were deprecated a long time ago; everyone should use
pattern filters now
2021-05-22 12:28:05 +02:00
Nicola Murino
402947a43c
update deps 2021-05-22 10:42:30 +02:00
Nicola Murino
b9bc8d722d
try to improve web client credentials page
I should do the same for the admin page too
2021-05-22 09:54:27 +02:00
Nicola Murino
0cb5c49cf3
map path resolution errors to Permission errors
this way the affected paths will be ignored in WebDAV

Fixes #432
2021-05-21 13:04:22 +02:00
Nicola Murino
9fc4be6d40
minor doc fixes 2021-05-20 18:34:38 +02:00
Nicola Murino
ecfed4dc04
Add a Getting Started Guide 2021-05-20 18:16:27 +02:00
dependabot[bot]
b415e4d98f
Bump github.com/lib/pq from 1.10.1 to 1.10.2 (#429)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.1 to 1.10.2.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.10.1...v1.10.2)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-17 09:27:16 +02:00
Nicola Murino
7d059efe06
add an example backup script 2021-05-16 22:28:08 +02:00
Nicola Murino
60cfbd2989
setup: auto login after creating the first admin 2021-05-16 21:36:57 +02:00
Nicola Murino
8ecf64f481
httpclient: accepts timeouts as float
Fixes #428
2021-05-16 12:50:06 +02:00
Nicola Murino
019b0f2fd5
http cookie: add max-age and samesite
update deps too
2021-05-16 09:13:00 +02:00
Nicola Murino
15d6cd144a
another try to better understand the random webdav test case failure 2021-05-15 08:56:36 +02:00
Nicola Murino
f59f62317e
sftpd: fix file upload resume detection
WinSCP does not set the APPEND flag while resuming a file upload,
so we detect a file upload resume if the TRUNCATE flag is not set.
The APPEND flag is now ignored.

Fixes #420
2021-05-15 08:39:01 +02:00
Nicola Murino
f2b93c0402
add a setup screen to create the first admin user
If you prefer to auto-create the first admin you can enable the
"create_default_admin" configuration key and SFTPGo will work as before.

You can also create the first admin by loading initial data: now you can
set both the username and password; before, you could only change the password
2021-05-14 19:21:15 +02:00
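A hedged sketch of the auto-create alternative mentioned above, assuming a YAML-format config file; "create_default_admin" is the key named in the commit, its placement under the data_provider section is an assumption:

data_provider:
  driver: sqlite
  # skip the web setup screen and auto-create the first admin at startup
  create_default_admin: true
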
Nicola Murino
0540b8780e
redact credentials within hooks
go-retryablehttp does not redact credentials, so we still log them
when we use it

https://github.com/hashicorp/go-retryablehttp/pull/133
2021-05-12 22:44:17 +02:00
Nicola Murino
fa45c9c138
allow to execute actions for file operations and SSH commands synchronously
The actions to run synchronously can be configured via the `execute_sync`
configuration key.

Executing an action synchronously means that SFTPGo will not return a result
code to the client until your hook has completed its execution.

Fixes #409
2021-05-11 12:45:14 +02:00
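A hedged sketch of what such a configuration might look like in YAML; only the "execute_sync" key name comes from the commit, while the surrounding common.actions structure, the hook field and the event names are assumptions:

common:
  actions:
    execute_on:
      - upload
    # events listed here block the client response until the hook finishes
    execute_sync:
      - upload
    # hypothetical path to an executable hook
    hook: /usr/local/bin/sftpgo-action-hook
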
Nicola Murino
b67cd0d3df
ensure no client is connected before running max connections test cases 2021-05-11 08:04:57 +02:00
Nicola Murino
c8f7fc9bc9
httpd/webdav: add a list of hosts allowed to send proxy headers
X-Forwarded-For, X-Real-IP and X-Forwarded-Proto headers will be ignored
for hosts not included in this list.

This is a backward incompatible change, before the proxy headers were
always used
2021-05-11 06:54:06 +02:00
dependabot[bot]
f1b998ce16
Bump github.com/otiai10/copy from 1.5.1 to 1.6.0 (#414)
Bumps [github.com/otiai10/copy](https://github.com/otiai10/copy) from 1.5.1 to 1.6.0.
- [Release notes](https://github.com/otiai10/copy/releases)
- [Commits](https://github.com/otiai10/copy/compare/v1.5.1...v1.6.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-10 14:02:09 +02:00
dependabot[bot]
aaa758e978
Bump github.com/minio/sio from 0.2.1 to 0.3.0 (#412)
Bumps [github.com/minio/sio](https://github.com/minio/sio) from 0.2.1 to 0.3.0.
- [Release notes](https://github.com/minio/sio/releases)
- [Commits](https://github.com/minio/sio/compare/v0.2.1...v0.3.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-10 11:34:01 +02:00
dependabot[bot]
716946a148
Bump github.com/aws/aws-sdk-go from 1.38.35 to 1.38.36 (#413)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.38.35 to 1.38.36.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.38.35...v1.38.36)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-10 11:10:58 +02:00
Nicola Murino
15934d72cc webdav test: increase log size
the last 10 lines are not enough to understand the issue; try with 20
2021-05-09 10:09:25 +02:00
Nicola Murino
8f6cdacd00
allow to limit the number of per-host connections 2021-05-08 19:45:21 +02:00
Nicola Murino
8f736da4b8
webdav test: add some more logs
QuotaLimits test case sometimes fails when running in CI, try to
understand the reason
2021-05-07 22:24:06 +02:00
Nicola Murino
4ea4202b99
httpd/webdav: use a custom listener with read and write deadlines 2021-05-07 20:41:20 +02:00
Nicola Murino
d4bfc3f6b5
fix lint configuration and a warning 2021-05-06 22:06:22 +02:00
Nicola Murino
23d9ebfc91
add a basic front-end web interface for end-users
Fixes #339 #321 #398
2021-05-06 21:35:43 +02:00
dependabot[bot]
5c99f4fb60
Bump github.com/shirou/gopsutil/v3 from 3.21.3 to 3.21.4 (#406)
Bumps [github.com/shirou/gopsutil/v3](https://github.com/shirou/gopsutil) from 3.21.3 to 3.21.4.
- [Release notes](https://github.com/shirou/gopsutil/releases)
- [Commits](https://github.com/shirou/gopsutil/compare/v3.21.3...v3.21.4)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 14:44:07 +02:00
dependabot[bot]
2263c7e20f
Bump github.com/hashicorp/go-retryablehttp from 0.6.8 to 0.7.0 (#405)
Bumps [github.com/hashicorp/go-retryablehttp](https://github.com/hashicorp/go-retryablehttp) from 0.6.8 to 0.7.0.
- [Release notes](https://github.com/hashicorp/go-retryablehttp/releases)
- [Commits](https://github.com/hashicorp/go-retryablehttp/compare/v0.6.8...v0.7.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 14:43:53 +02:00
dependabot[bot]
515b2d917e
Bump github.com/fclairamb/ftpserverlib from 0.13.0 to 0.13.1 (#404)
Bumps [github.com/fclairamb/ftpserverlib](https://github.com/fclairamb/ftpserverlib) from 0.13.0 to 0.13.1.
- [Release notes](https://github.com/fclairamb/ftpserverlib/releases)
- [Commits](https://github.com/fclairamb/ftpserverlib/compare/v0.13.0...v0.13.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 13:29:54 +02:00
dependabot[bot]
af4723356d
Bump github.com/lestrrat-go/jwx from 1.1.7 to 1.2.0 (#403)
Bumps [github.com/lestrrat-go/jwx](https://github.com/lestrrat-go/jwx) from 1.1.7 to 1.2.0.
- [Release notes](https://github.com/lestrrat-go/jwx/releases)
- [Changelog](https://github.com/lestrrat-go/jwx/blob/main/Changes)
- [Commits](https://github.com/lestrrat-go/jwx/compare/v1.1.7...v1.2.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 13:29:16 +02:00
dependabot[bot]
068dd34a38
Bump github.com/aws/aws-sdk-go from 1.38.25 to 1.38.30 (#402)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.38.25 to 1.38.30.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.38.25...v1.38.30)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 11:41:25 +02:00
dependabot[bot]
b16a5c2caf
Bump github.com/go-chi/chi/v5 from 5.0.2 to 5.0.3 (#401)
Bumps [github.com/go-chi/chi/v5](https://github.com/go-chi/chi) from 5.0.2 to 5.0.3.
- [Release notes](https://github.com/go-chi/chi/releases)
- [Changelog](https://github.com/go-chi/chi/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-chi/chi/compare/v5.0.2...v5.0.3)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 11:41:09 +02:00
Nicola Murino
a383957cfa
OpenAPI: document that also folder-quota-update supports partial updates 2021-04-28 19:33:32 +02:00
Nicola Murino
00f97aabb4
OpenAPI: document that quota-update support partial updates
If the update mode is "add" and you pass only used_quota_size or only
used_quota_files the missing field will remain unchanged
2021-04-28 19:16:15 +02:00
Nicola Murino
32db0787bb
add an example script for scheduled quota updates 2021-04-26 21:53:09 +02:00
Nicola Murino
1275328fdf
Authentication errors: try to avoid user enumeration
Fixes #395
2021-04-26 19:48:21 +02:00
Nicola Murino
7778716fa7
update crypto and net dependencies 2021-04-25 18:12:02 +02:00
dependabot[bot]
77476d0f56
Bump github.com/aws/aws-sdk-go from 1.38.21 to 1.38.25 (#394)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.38.21 to 1.38.25.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.38.21...v1.38.25)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 17:07:59 +02:00
dependabot[bot]
c7a1fc2996
Bump cloud.google.com/go/storage from 1.14.0 to 1.15.0 (#392)
Bumps [cloud.google.com/go/storage](https://github.com/googleapis/google-cloud-go) from 1.14.0 to 1.15.0.
- [Release notes](https://github.com/googleapis/google-cloud-go/releases)
- [Changelog](https://github.com/googleapis/google-cloud-go/blob/master/CHANGES.md)
- [Commits](https://github.com/googleapis/google-cloud-go/compare/spanner/v1.14.0...spanner/v1.15.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 17:07:36 +02:00
dependabot[bot]
e7d8e73be8
Bump github.com/lib/pq from 1.10.0 to 1.10.1 (#391)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.0 to 1.10.1.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.10.0...v1.10.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 17:07:26 +02:00
dependabot[bot]
3ee27f4370
Bump golangci/golangci-lint-action from v2 to v2.5.2 (#389)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from v2 to v2.5.2.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v2...5c56cd6c9dc07901af25baab6f2b0d9f3b7c3018)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 16:41:17 +02:00
Nicola Murino
92424cd1c2
dependabot: limit the number of open pull requests 2021-04-25 16:39:41 +02:00
Nicola Murino
0190dad984
docker: update github script to v4 2021-04-25 15:59:29 +02:00
Nicola Murino
198258f4e7
add dependabot
Fixes #388
2021-04-25 15:54:19 +02:00
Nicola Murino
5be4b6bd44
localfs: fix subdir check if the user has the root dir as home 2021-04-25 14:36:29 +02:00
Nicola Murino
3941255733
docs: fix a typo 2021-04-25 09:42:19 +02:00
Nicola Murino
46998252e5
use bcrypt as default password hashing algo
argon2id has a high memory cost and, if not properly tuned, it can lead to
resource starvation.

Advanced users can still configure and use argon2id.
Passwords stored as argon2id will continue to work
2021-04-25 09:38:33 +02:00
Nicola Murino
74b51f0ad3
update nfpm 2021-04-23 22:53:13 +02:00
Nicola Murino
b11865f971
CI: add support for darwin/arm64
I have no way to test the produced binaries on a real Apple Silicon M1
2021-04-20 23:00:27 +02:00
Nicola Murino
f4369cdbef
fix max connections check
Also make sure to close the ssh client connection in test cases
2021-04-20 18:12:16 +02:00
Nicola Murino
92638ce93d
add support for hashing password using bcrypt
argon2id remains the default
2021-04-20 13:55:09 +02:00
Nicola Murino
6ef85d6026
add optional in-memory password caching
Verifying argon2 passwords has a high memory and computational cost;
enabling in-memory password caching reduces this cost
2021-04-20 09:39:36 +02:00
Nicola Murino
bc88503f25
sql providers: reuse the same context where appropriate 2021-04-19 18:58:53 +02:00
Nicola Murino
47317bed9b
make sure that Retry-After header has a value greater than zero 2021-04-19 09:16:27 +02:00
Nicola Murino
f45c89fc46
add rate limiting support for REST API/web admin too 2021-04-19 08:14:04 +02:00
Nicola Murino
112e3b2fc2
add rate limiting support 2021-04-18 12:31:06 +02:00
Nicola Murino
124c471a2b
FTPD: make sure that the passive ip, if provided, is valid
The server will refuse to start if the provided passive ip is not a
valid IPv4 address.

Fixes #376
2021-04-16 15:08:10 +02:00
Nicola Murino
683ba6cd5b
get binding from env: respect the documented default
Fixes #377
2021-04-16 13:35:13 +02:00
Nicola Murino
21fbcf4556
FTP: add support for TLS session resumption on the data connection
Fixes #374
2021-04-16 09:00:40 +02:00
Nicola Murino
2ffefbeb33
add sql_tables_prefix also to indexes and constraints
This allows you to reuse the same database for multiple SFTPGo instances

Fixes #372
2021-04-12 20:00:49 +02:00
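A hedged YAML sketch of a prefixed setup; "sql_tables_prefix" is the key discussed in the commit, the other data_provider fields are illustrative assumptions:

data_provider:
  driver: postgresql
  name: sftpgo
  # tables, indexes and constraints all get this prefix, so two SFTPGo
  # instances can share one database without name clashes
  sql_tables_prefix: instance1_
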
Nicola Murino
c844fc7477
add support for delayed quota update
If a lot of uploads are being closed, accumulating quota updates can
save you many queries to the data provider
2021-04-11 08:38:43 +02:00
Nicola Murino
4b98f37df1
back to development 2021-04-10 09:40:02 +02:00
Nicola Murino
0bc4db9950
web admin: make base url configurable 2021-04-09 22:02:48 +02:00
Nicola Murino
5acf29dae6
CI: replace deprecated actions with gh CLI 2021-04-08 21:29:09 +02:00
Nicola Murino
e9a42cd508
release workflow: re-add build Linux bundle
it is used as source for PPA packages
2021-04-08 08:38:51 +02:00
Nicola Murino
ed26d68948
portable mode: add SFTP buffer size 2021-04-07 19:47:39 +02:00
Nicola Murino
b389f93d97
allow to select sha256-simd using an env var 2021-04-07 16:25:58 +02:00
Nicola Murino
150aebf8d2
CI: replace xgo with QEMU
currently xgo doesn't allow choosing the build OS, which can cause
unexpected issues: for example, the v2.0.3 packages for arm64 and ppc64
don't run on Ubuntu 18.04
2021-04-07 15:12:09 +02:00
Nicola Murino
74e0223eb9
remove sha256-simd usage
sha256-simd is now deprecated

https://github.com/minio/sha256-simd/issues/58

This could slow down sha256 computation on some CPUs
2021-04-05 18:23:40 +02:00
Nicola Murino
0823928f98
allow to disable login filesystem checks
SFTPGo requires that the user's home directory, virtual folder root,
and intermediate paths to virtual folders exist to work properly.
If you already know that the required directories exist, disabling
these checks will speed up login.
2021-04-05 17:57:30 +02:00
Nicola Murino
f895059660
web: add responsive table style to connections too
Fixed a small issue for sftpfs too
2021-04-05 11:28:28 +02:00
Nicola Murino
acb4310c11
add a startup hook 2021-04-05 10:07:59 +02:00
Nicola Murino
fdf3f23df5
allow to disable some hooks on a per-user basis
This way you can, for example, mix external and internal users
2021-04-04 22:32:25 +02:00
Nicola Murino
d92861a8e8
sftpfs: disable buffering for downloads if concurrent reads are disabled 2021-04-04 09:53:29 +02:00
Nicola Murino
1ee843757d
fix OpenAPI schema 2021-04-03 17:09:08 +02:00
Nicola Murino
ea26d7786c
sftpfs: add buffering support
this way we improve performance over high latency networks
2021-04-03 16:00:55 +02:00
Nicola Murino
6eb43baf3d
web: fix content type for folders form
Fixes #367
2021-04-01 19:42:18 +02:00
Nicola Murino
2f56375121
improve SFTP loop detection 2021-04-01 18:53:48 +02:00
Nicola Murino
3bfd7e4d17
sftpfs: try to detect if an SFTP user points to itself
this will cause an infinite loop on login. The check should be improved
2021-03-29 21:53:44 +02:00
Nicola Murino
e1c66d96a1
back to development 2021-03-28 22:25:24 +02:00
Nicola Murino
a43854ae9b
OpenAPI: document that secrets are automatically encrypted before saving 2021-03-28 11:23:06 +02:00
Nicola Murino
183bedd6ed
webui: add responsive extension 2021-03-28 11:02:11 +02:00
Nicola Murino
2a89a8f664
webui: minor improvements 2021-03-27 22:23:01 +01:00
Nicola Murino
5cd27ce529
document Cockroach driver name 2021-03-27 19:41:00 +01:00
Nicola Murino
cee2e18caf
convertusers: fix permissions
Fixes #363
2021-03-27 19:18:01 +01:00
Nicola Murino
9ad750da54
WebDAV: try to preserve the lock fs as much as possible 2021-03-27 19:10:27 +01:00
Nicola Murino
5f49af1780
external auth: allow to inspect and preserve an existing user 2021-03-26 15:19:01 +01:00
Nicola Murino
d5f092284a
improve signals handling 2021-03-25 19:31:21 +01:00
Nicola Murino
0e50310a66
add a test case for UID/GID limits 2021-03-25 17:30:39 +01:00
Mike Unitskyi
5939ac4801
Increase uid:gid limits (#362)
Fixes #361
2021-03-25 17:11:42 +01:00
Nicola Murino
db274f1093
crdb: fix transactions handling 2021-03-25 09:07:56 +01:00
Nicola Murino
6bc5c64a3a
webdav: ignore path, perm and not exist errors in PROPFIND
Fixes #340
2021-03-24 13:32:20 +01:00
Nicola Murino
70e035315e
data provider: add CockroachDB support 2021-03-23 19:14:15 +01:00
Nicola Murino
8a1249878a
OpenAPI schema: remove some superfluous required definitions
Fixes #356
2021-03-22 19:22:41 +01:00
Nicola Murino
5e375f56dd
kms: add a lock, secrets could be modified concurrently for cached users
also reduce the size of the JSON payload omitting empty secrets
2021-03-22 19:03:25 +01:00
Nicola Murino
28f1d66ae5
link the Active Directory example in the howto section 2021-03-22 09:52:05 +01:00
Omar Ramos
79060d37a7 Added in a first draft of the page related to sftpgo-ldap-http-server. 2021-03-22 08:59:29 +01:00
Nicola Murino
800e64404b
update deps 2021-03-22 08:55:35 +01:00
Nicola Murino
54c0c1b80d
Windows: manually check if we can bind on the configured port/ports
Windows allows the coexistence of three types of sockets on the same
transport-layer service port, for example, 127.0.0.1:8080, [::1]:8080
and [::ffff:0.0.0.0]:8080

Go doesn't properly handle this, so we use an ugly hack

Fixes #350
2021-03-21 22:21:04 +01:00
Nicola Murino
f7c7e2951d
initialize argon params before creating the data provider
Fixes #349
2021-03-21 19:58:57 +01:00
Nicola Murino
f249286cb1
docs: add some notes about the new virtual folders support
fix a failing test case for the memory provider
2021-03-21 19:47:11 +01:00
Nicola Murino
d6dc3a507e
extend virtual folders support to all storage backends
Fixes #241
2021-03-21 19:15:47 +01:00
Nicola Murino
0286da2356
try to auto create virtual folders if missing 2021-03-10 22:30:56 +01:00
Nicola Murino
76c08baaa0
httpclient: load CA certificates only when required
on Windows x509.SystemCertPool is not implemented and therefore we end
up with an empty certificate pool if we load the CA certificates
unconditionally
2021-03-10 21:45:48 +01:00
Nicola Murino
67ea75cf03
improve OpenAPI schema so it is better rendered on Stoplight 2021-03-07 18:41:56 +01:00
Nicola Murino
4c658bb6f0
webdav: add prefix support 2021-03-07 17:10:45 +01:00
Nicola Murino
1ab02d5891
OpenAPI: improve schema
Fix some lint warnings
2021-03-06 17:08:24 +01:00
Nicola Murino
055506e518
sftpfs: add an option to disable concurrent reads 2021-03-06 15:41:40 +01:00
Nicola Murino
88122ba2f8
update jwtauth to v5 2021-03-05 18:50:45 +01:00
Nicola Murino
bfe0c18976
portable mode: fix WebDAV support 2021-03-05 08:41:24 +01:00
Nicola Murino
df41f0c556
add a setting to skip natural keys validation
Enabling the "skip_natural_keys_validation" data provider setting,
the natural keys for REST API/Web Admin as usernames, admin names,
folder names are not restricted to unreserved URI chars

Fixes #334 #308
2021-03-04 09:48:53 +01:00
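A minimal YAML sketch, assuming the key sits in the data_provider section as the commit message suggests:

data_provider:
  # when enabled, usernames, admin names and folder names are no longer
  # restricted to unreserved URI characters
  skip_natural_keys_validation: true
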
Nicola Murino
561c5021dd
add Segmed to the sponsors section 2021-03-03 18:55:47 +01:00
Nicola Murino
ad07fc78eb
update nfpm and deps 2021-03-03 18:39:58 +01:00
Nicola Murino
3243181c5f
Add a link to the OpenAPI schema where relevant
Fixes #329
2021-03-01 22:22:05 +01:00
Nicola Murino
895117718e
SSH system command: add os separator to the resolved path when appropriate
Fixes #327
2021-03-01 22:10:45 +01:00
Nicola Murino
534b253c20
WebDAV: improve TLS certificate authentication
For each user you can now configure:

- TLS certificate auth
- TLS certificate auth and password
- Password auth

For TLS certificate auth, the certificate common name is used as
username
2021-03-01 19:28:11 +01:00
Nicola Murino
901cafc6da
metrics: reduce complexity for AddLoginResult method
fix a gocyclo warning
2021-02-28 12:23:48 +01:00
Nicola Murino
a6e36e7cad
FTP: improve TLS certificate authentication
For each user you can now configure:

- TLS certificate auth
- TLS certificate auth and password
- Password auth

For TLS auth, the certificate common name must match the name provided
using the "USER" FTP command
2021-02-28 12:10:40 +01:00
Nicola Murino
b566457e12
change license to AGPL-3 2021-02-26 19:47:48 +01:00
Nicola Murino
ca3e15578e
Use new methods in the io and os packages instead of ioutil ones
ioutil is deprecated in Go 1.16 and SFTPGo is an application, not
a library; we have no reason to keep compatibility with old Go
versions.

Go 1.16 fixes some CIFS-related issues too.
2021-02-25 21:53:04 +01:00
Nicola Murino
4b2edff6dd
update deps 2021-02-24 22:27:52 +01:00
Nicola Murino
2146b83343
data providers: add filesystem to folder ...
... and some descriptive fields.
The filesystem support for virtual folders will be implemented in
future commits
2021-02-24 19:40:29 +01:00
Nicola Murino
3e1b07324d
GCS: remove compat code 2021-02-22 22:06:23 +01:00
Nicola Murino
8cc2dfe5c2
update pkg/sftp
we don't need my branch anymore now that all the required features for
the sftpfs are available upstream too
2021-02-22 16:27:45 +01:00
Nicola Murino
78a837e8f1
remove other compat code 2021-02-22 09:13:26 +01:00
Nicola Murino
49830516be
squash database migrations and remove compat code 2021-02-22 08:37:50 +01:00
Nicola Murino
41e1d9e68a
use Go 1.16 for CI and Docker images 2021-02-21 12:01:37 +01:00
Nicola Murino
5da4f931c5
TLS: allow to configure cipher suites
Fixes #316
2021-02-18 20:17:16 +01:00
Nicola Murino
552a96533e
back to development 2021-02-17 09:45:20 +01:00
Nicola Murino
cebd069c77
set version to 2.0.2 2021-02-17 08:10:17 +01:00
Nicola Murino
be9230e85b
micro optimizations spotted using the go-critic linter 2021-02-16 19:11:36 +01:00
Nicola Murino
b1ce6eb85b
web admin: allow to set an empty password for SFTPGo users 2021-02-15 19:38:53 +01:00
Nicola Murino
46176a54b4
minor doc fixes 2021-02-14 22:08:08 +01:00
Nicola Murino
a21ccad174
web hooks: add mutual TLS support 2021-02-13 14:41:37 +01:00
Nicola Murino
1129a868a5
Improve powershell completion
cobra 1.1.3 has much better powershell support
2021-02-13 09:10:35 +01:00
Nicola Murino
1ac66d27b6
Use IEC units for byte counting everywhere 2021-02-12 22:16:35 +01:00
Nicola Murino
6a6e8fffbc
web hooks: improve resilience by adding a configurable retry
the retryable http client is used for hooks that notify events
2021-02-12 21:42:49 +01:00
Nicola Murino
51f110bc7b
sftpd: add statvfs@openssh.com support 2021-02-11 19:45:52 +01:00
Nicola Murino
4ddfe41f23
loaddata: restore admins too 2021-02-11 08:33:32 +01:00
Nicola Murino
ddd06fc2ac
docker: add permissions to data dirs
This way data and backup dirs can be mounted as separate volumes.

Based on the proof of concept submitted by

Mark Sagi-Kazar <mark.sagikazar@gmail.com>

See #305
2021-02-10 19:04:06 +01:00
Nicola Murino
1bccb93fcb
rename default branch from master to main 2021-02-09 19:53:03 +01:00
Nicola Murino
db80781716
validation: improve error message for invalid chars 2021-02-08 21:32:59 +01:00
Nicola Murino
a2a99f9b57
merge full and slim dockerfiles
Fixes #232
2021-02-07 21:49:04 +01:00
Nicola Murino
cd4a68cc96
set version to 2.0.1 2021-02-06 15:28:30 +01:00
Nicola Murino
b37eb68993
docker alpine: revert to 3.12 since we have to release 2.0.1 2021-02-06 14:58:19 +01:00
Nicola Murino
b13958a8d6
docker: fix httpd address 2021-02-06 14:51:55 +01:00
Nicola Murino
17e2b234a0
dataprovider: fix migration with old mysql versions
Fixes #298
2021-02-06 14:33:51 +01:00
Nicola Murino
4ef1775e9a
docker: switch to Alpine 3.13 2021-02-06 12:54:13 +01:00
Nicola Murino
363977b474
back to development 2021-02-06 12:23:26 +01:00
Nicola Murino
05ae0ea5f2
config: fix bindings backward compatibility 2021-02-06 09:53:31 +01:00
Nicola Murino
8de7a81674
revertprovider: only accept the supported version 2021-02-05 13:55:19 +01:00
Nicola Murino
d32b195a57
httpd: reuse the same compressor among bindings 2021-02-04 22:32:55 +01:00
Nicola Murino
267d9f1831
web ui: allow to create folders from a template 2021-02-04 19:09:43 +01:00
Nicola Murino
17a42a0c11
webdav: add compression support
Fixes #295
2021-02-04 09:06:41 +01:00
Nicola Murino
a219d25cac
webdav: update the doc
the user specific path is now gone
2021-02-04 07:46:40 +01:00
Nicola Murino
ce731020a7
webdav: remove the username path prefix
so we have the same URIs for all protocols

Fixes #293
2021-02-04 07:12:04 +01:00
Nicola Murino
fc9082c422
webdav: try to handle HEAD for collection too
The underlying golang webdav library returns Method Not Allowed for
HEAD requests on directories:

https://github.com/golang/net/blob/master/webdav/webdav.go#L210

let's see if we can workaround this inside SFTPGo itself in a similar
way as we do for GET.

The HEAD response will not include a Content-Length; we cannot handle
this inside SFTPGo.

Fixes #294
2021-02-03 22:36:13 +01:00
Nicola Murino
4872ba2ea0
README: add "Sponsors" section 2021-02-03 14:37:11 +01:00
Nicola Murino
70bb3c34ce
sftpfs: improve endpoint validation
Validation will fail if the endpoint is not specified as host:port
2021-02-03 11:29:04 +01:00
Nicola Murino
1cde50f050
sftpd: improve logging if filesystem creation fails 2021-02-03 09:45:04 +01:00
Nicola Murino
e9dd4ecdf0
web admin: add CSRF 2021-02-03 08:55:28 +01:00
Nicola Murino
f863530653
JWT: only accepts tokens from the expected header or cookie 2021-02-02 13:11:47 +01:00
Nicola Murino
4f609cfa30
JWT: add token audience
a token released for API audience cannot be used for web pages and
vice-versa
2021-02-02 09:14:10 +01:00
Nicola Murino
78bf808322
virtual folders: change dataprovider structure
This way we no longer depend on the local file system path and so we can
add support for cloud backends in future updates
2021-02-01 19:04:15 +01:00
Nicola Murino
afe1da92c5
web UI cookie: set the Secure flags if we are over TLS 2021-01-28 13:29:16 +01:00
Nicola Murino
9985224966
examples: add a script for bulk user update
you can use this sample script as a basis if you need to update
some common parameters for multiple users while preserving the others
2021-01-27 19:18:37 +01:00
Nicola Murino
02679d6df3
web ui: save the state of the tables
the state will be saved for 1 hour
2021-01-27 08:41:21 +01:00
Nicola Murino
c2bbd468c4
REST API: add logout and store invalidated token 2021-01-26 22:35:36 +01:00
Nicola Murino
46ab8f8d78
post-login hook: add the full user JSON serialized
Fixes #284
2021-01-26 18:05:44 +01:00
Nicola Murino
54321c5240
web ui: allow to create multiple users from a template 2021-01-25 21:31:33 +01:00
Nicola Murino
5fcbf2528f
html templates: minor improvements 2021-01-24 17:43:54 +01:00
Nicola Murino
ea096db8e4
sftpfs: set the correct file mode 2021-01-23 10:32:15 +01:00
Nicola Murino
0caeb68680
sftpfs: fix stat info 2021-01-23 09:42:49 +01:00
Nicola Murino
2b9ba1d520
web admin: try to uniform UI 2021-01-23 09:28:45 +01:00
Nicola Murino
80f5ccd357
web admin: add backup/restore 2021-01-22 19:42:18 +01:00
Nicola Murino
820169c5c6
windows service: simplify code
update testify to 1.7.0 too
2021-01-21 19:07:13 +01:00
Nicola Murino
aff75953e3
ssh requests: send a reply only if the client requested it 2021-01-21 09:28:41 +01:00
Nicola Murino
c0e09374a8
scp: fix wildcard uploads
Fixes #285
2021-01-20 22:37:59 +01:00
Nicola Murino
57976b4085
httpd: add mTLS and multiple bindings support 2021-01-19 18:59:41 +01:00
Nicola Murino
899f1a1844
improve windows service
ensure to exit the service process in any case
2021-01-18 21:46:26 +01:00
Nicola Murino
41a1af863e
OpenAPI: minor changes 2021-01-18 13:24:38 +01:00
Nicola Murino
778ec9b88f
REST API v2
- add JWT authentication
- admins are now stored inside the data provider
- admin access can be restricted based on the source IP: both proxy
  header and connection IP are checked
- deprecate REST API CLI: it is not relevant anymore

Some other changes to the REST API can still happen before releasing
SFTPGo 2.0.0

Fixes #197
2021-01-17 22:29:08 +01:00
Giorgio Pellero
d42fcc3786
s3: don't paginate to find zero-byte-keyed dirs (#277)
Fixes #275
2021-01-14 12:01:25 +01:00
Nicola Murino
5d4f758c47
GCS: don't paginate to find compat "dirs" 2021-01-12 19:22:12 +01:00
Nicola Murino
a8a17a223a
scp: minor improvements
document that we don't support wildcard expansion.

I should refactor scp code ...
2021-01-05 22:32:30 +01:00
Nicola Murino
aa40b04576
update deps 2021-01-05 12:40:49 +01:00
Nicola Murino
daac90c4e1
fix a potential race condition for pre-login and ext auth
hooks

doing something like this:

err = provider.updateUser(u)
...
return provider.userExists(username)

could be racy if another update happen before

provider.userExists(username)

also pass a pointer to updateUser so if the user is modified inside
"validateUser" we can just return the modified user without do a new
query
2021-01-05 09:50:22 +01:00
Nicola Murino
72b2c83392
defender: allow hot-reloading for safe and block lists 2021-01-04 17:52:14 +01:00
Nicola Murino
c3410a3d91
config: don't log a warning if the config file is not found
we also support configuration via env vars
2021-01-03 17:57:07 +01:00
Nicola Murino
173c1820e1
Go 1.15 is now required
VerifyConnection is not available in 1.14
2021-01-03 17:25:24 +01:00
Nicola Murino
684f4ba1a6
mutal TLS: add support for revocation lists 2021-01-03 17:03:04 +01:00
Nicola Murino
6d84c5b9e3
capture http servers error logs
otherwise they will be printed to stdout
2021-01-03 10:38:28 +01:00
Nicola Murino
4b522a2455
webdav: refactor server initialization 2021-01-03 09:51:54 +01:00
Nicola Murino
1e1c46ae1b
defender: minor docs improvements 2021-01-02 20:02:05 +01:00
Nicola Murino
d6b3acdb62
add REST API for the defender 2021-01-02 19:33:24 +01:00
Nicola Murino
037d89a320
add support for a basic built-in defender
It can help to prevent DoS and brute force password guessing
2021-01-02 14:05:09 +01:00
Nicola Murino
30eb3c4a99
update OpenAPI schema 2020-12-29 19:33:04 +01:00
Nicola Murino
0966d44c0f
httpd: add support for listening over a Unix-domain socket
Fixes #266
2020-12-29 19:02:56 +01:00
Nicola Murino
40e759c983
FTP: add support for client certificate authentication 2020-12-29 09:20:09 +01:00
Nicola Murino
141ca6777c
webdav: add support for client certificate authentication
Fixes #263
2020-12-28 19:48:23 +01:00
Nicola Murino
3c16a19269
FTP: update ftpserverlib
fixes another sneaky bug
2020-12-28 09:22:52 +01:00
Nicola Murino
b3c6d79f51
FTP: add support for ASCII transfer mode
the default remains binary; a client has to explicitly request an
ASCII transfer
2020-12-27 09:48:56 +01:00
Nicola Murino
0c56b6d504
nfpm: update to 2.1.0 2020-12-26 19:14:12 +01:00
Nicola Murino
3d2da88da9
web ui: update js and css deps 2020-12-26 18:47:09 +01:00
Nicola Murino
80c06d6b59
clone: disable decrypt error test for memory provider
This test cannot work using the memory provider: we cannot change the provider
for a KMS secret without reloading it from JSON, and the memory provider
will never reload users
2020-12-26 15:57:01 +01:00
Nicola Murino
e536a638c9
web UI: improve user cloning 2020-12-26 15:11:38 +01:00
Jochen Munz
bc397002d4
Feature: Clone existing user via web admin (#259)
UI based cloning of an existing user. The "add user" screen is prepopulated with existing user data.

Resolves drakkan/sftpgo#225
2020-12-26 14:58:59 +01:00
Nicola Murino
2a95d031ea
FTP: add support for AVBL command 2020-12-25 11:14:08 +01:00
Nicola Murino
1dce1eff48
improve FTP support
- allow to disable active mode
- allow to disable SITE commands
- add optional support for calculating hash value of files
- add optional support for the non standard COMB command
2020-12-24 18:48:06 +01:00
Jochen Munz
5b1d8666b3
S3fs: Handle non-ascii filename in rename operations (#257)
SFTP is based on UTF-8 filenames, so non-ASCII filenames get transported with utf-8 escaped character sequences.
At least for the S3fs provider, if such a file is stored in a nested path it cannot be used as the source for a rename operation.

This adds the necessary escaping of the path fragments.

The patch is not required for MinIO but it doesn't hurt
2020-12-24 11:13:42 +01:00
Nicola Murino
187a5b1908
sftpd: properly handle listener accept errors
continue on temporary errors and exit from the serve loop for the
other ones
2020-12-23 19:53:07 +01:00
Nicola Murino
7ab7941ddd
sftpfs: fix race condition 2020-12-23 17:15:55 +01:00
Nicola Murino
c69d63c1f8
add support for multiple bindings
Fixes #253
2020-12-23 16:12:30 +01:00
Nicola Murino
743b350fdd
httpd: add support for route undefined HEAD requests to GET handlers
HEAD responses will not include a body but the Content-Length will be
set as the equivalent GET request

Fixes #255
2020-12-20 10:22:16 +01:00
Nicola Murino
1ac610da1a
fix build on Windows 2020-12-18 16:22:52 +01:00
Nicola Murino
bcf0fa073e
telemetry server: add optional https and authentication 2020-12-18 16:04:42 +01:00
Nicola Murino
140380716d
remove unused constant 2020-12-18 10:05:08 +01:00
Nicola Murino
143df87fee
add some docs for telemetry server
move pprof to the telemetry server only
2020-12-18 09:47:22 +01:00
Márk Sági-Kazár
6d895843dc
feat: add new telemetry server (#254)
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-12-18 09:01:19 +01:00
Nicola Murino
65e6d5475f
update ftpserverlib to include the latest fixes and features 2020-12-18 08:49:32 +01:00
Nicola Murino
15609cdbc7
fix build on FreeBSD
see https://github.com/otiai10/copy/pull/36
2020-12-17 14:46:31 +01:00
Nicola Murino
f876c728ad
add support for the latest ftpserverlib and azblob versions 2020-12-17 13:40:36 +01:00
Nicola Murino
f34462e3c3
add support for limiting max concurrent client connections 2020-12-15 19:29:30 +01:00
Nicola Murino
ea0bf5e4c8
ensure 64 bit alignment for 64 bit struct fields access atomically 2020-12-14 14:52:36 +01:00
Nicola Murino
14d1b82f6b
minor README improvements 2020-12-14 07:54:27 +01:00
Nicola Murino
ed43ddd79d
enable hash commands for any supported backend 2020-12-13 15:11:55 +01:00
Nicola Murino
23192a3be7
update nfpm to 1.10.3 2020-12-13 14:29:59 +01:00
Nicola Murino
72e3d464b8
sftpfs: fix fingerprints copy for memory provider 2020-12-12 10:56:02 +01:00
Nicola Murino
a6985075b9
add sftpfs storage backend
Fixes #224
2020-12-12 10:31:09 +01:00
dharmendra kariya
4d5494912d
Update README.md (#245) 2020-12-11 08:22:50 +01:00
Nicola Murino
50982229e1
REST API: add a method to get the status of the services
added a status page to the built-in web admin
2020-12-08 11:18:34 +01:00
dharmendra kariya
6977a4a18b
Update full-configuration.md (#240)
just deleting redundant line
2020-12-08 09:09:21 +01:00
Nicola Murino
ab1bf2ad44
update deps 2020-12-06 22:20:53 +01:00
Nicola Murino
c451f742aa
revertprovider: crypted provider was not supported in v4
also ensure to initialize kms before the dataprovider, it could be
needed to downgrade secret from cloud kms providers
2020-12-06 10:36:48 +01:00
Nicola Murino
034d89876d
webdav: fix proppatch handling
also respect login delay for cached webdav users and check the home dir as
soon as the user authenticates

Fixes #239
2020-12-06 08:19:41 +01:00
Nicola Murino
4a88ea5c03
add Data At Rest Encryption support 2020-12-05 13:48:13 +01:00
Nicola Murino
95c6d41c35
config: make config file relative to the config dir
a configuration parsing error is now fatal
2020-12-03 17:16:35 +01:00
Márk Sági-Kazár
2a9ed0abca
Accept a config file path instead of a config name
Config name is a Viper concept used for searching a specific file
in various paths with various extensions.

Making it configurable is usually not a useful feature
as users mostly want to define a full or relative path
to a config file.

This change replaces config name with config file.
2020-12-03 16:23:33 +01:00
Nicola Murino
3ff6b1bf64
fix lint warnings 2020-12-02 10:02:08 +01:00
Nicola Murino
a67276ccc2
add build tags to disable kms providers 2020-12-02 09:44:18 +01:00
Nicola Murino
87b51a6fd5
kms: remember if a secret was saved without a master key
So we will be able to decrypt secrets stored without a master key if
such a key is provided later
2020-12-01 22:18:16 +01:00
Nicola Murino
940836b25b
add a note about using sqlite provider over cifs shares
See #235
2020-11-30 21:59:56 +01:00
Nicola Murino
634b723b5d
add KMS support
Fixes #226
2020-11-30 21:46:34 +01:00
Nicola Murino
af0c9b76c4
update nfpm to 1.10.2 2020-11-27 18:07:27 +01:00
Nicola Murino
2142ef20c5
fix some typos 2020-11-26 22:18:12 +01:00
Nicola Murino
224ce5fe81
add revertprovider subcommand
Fixes #233
2020-11-26 22:08:33 +01:00
Nicola Murino
4bb9d07dde
user: add a free text field
Fixes #230
2020-11-25 22:26:34 +01:00
Nicola Murino
2054dfd83d
create the credential directory when needed
The credentials dir is currently required only for GCS users if
prefer database credential setting is false, so defer its creation
and don't fail to start the services if this directory is missing
2020-11-25 14:18:12 +01:00
Nicola Murino
6699f5c2cc
initial data loading: an error is no longer fatal
therefore it does not prevent the services from starting
2020-11-25 09:18:36 +01:00
Estel Smith
70bde8b2bc
memory provider: print a log if loading the initial dump fails
therefore this error is no longer fatal and does not prevent the services
from starting

Fixes #229
2020-11-25 09:15:23 +01:00
Nicola Murino
ff73e5f53c
CI Docker: don't build full image on pull request
it will fail since the slim tag is not pushed
2020-11-24 18:51:10 +01:00
Nicola Murino
0609188d3f
allow to disable SFTP service
Fixes #228
2020-11-24 13:44:57 +01:00
Nicola Murino
99cd1ccfe5
S3: fix empty directory detection
when listing an empty directory MinIO returns no contents while S3 returns
1 object with the key equal to the prefix. Make detection work in both
cases

Fixes #227
2020-11-23 15:36:42 +01:00
Nicola Murino
dccc583b5d
add a dedicated struct to store encrypted credentials
also gcs credentials are now encrypted, both on disk and inside the
provider.

The data provider is automatically migrated and load data will accept the
old format too, but you should upgrade to the new format to avoid future
issues
2020-11-22 21:53:04 +01:00
Nicola Murino
ac435b7890
back to development 2020-11-18 21:53:23 +01:00
Nicola Murino
37fc589896
set version to 1.2.2 2020-11-18 19:24:19 +01:00
Nicola Murino
5d789a01b7
update pkg/sftp
These patches are now merged upstream:

https://github.com/pkg/sftp/pull/392
https://github.com/pkg/sftp/pull/393
2020-11-18 19:06:12 +01:00
Nicola Murino
ca0ff0d630
add a File interface so we can avoid to use os.File directly 2020-11-17 19:36:39 +01:00
Nicola Murino
969b38586e
update pkg/sftp to fix requests accumulation
Include this patch:

https://github.com/pkg/sftp/pull/393

to avoid request accumulation (no underlying fd) if we return an error.
Before this patch the accumulated requests are released only when the
client disconnects.

We use our fork for now to include

https://github.com/pkg/sftp/pull/392

too
2020-11-16 19:49:26 +01:00
Nicola Murino
e3eca424f1
web admin: allow both allowed and denied extensions/patterns for a dir
this fixes a regression introduced in the previous commit
2020-11-16 19:21:50 +01:00
Nicola Murino
a6355e298e
add support for limit files using shell like patterns
Fixes #209
2020-11-15 22:04:48 +01:00
Ryan Gough
c0f47a58f2
web admin: clarify that the directories for permissions are relative
Fixes #222
2020-11-15 09:11:36 +01:00
Nicola Murino
dc845fa2f4
webdav: fix permission errors if the client try to read multiple times 2020-11-14 19:19:41 +01:00
Nicola Murino
7e855c83b3
deb packages: changes priority to optional, extra is deprecated 2020-11-14 13:54:14 +01:00
Nicola Murino
3b8a9e0963
back to development 2020-11-14 11:01:28 +01:00
Nicola Murino
4445834fd3
set version to 1.2.1 2020-11-14 09:28:53 +01:00
Nicola Murino
19a619ff65
Linux pkgs: use python3 for API CLI inside generated deb 2020-11-14 09:10:45 +01:00
Nicola Murino
66a538dc9c
CI: improve docker build action 2020-11-13 21:55:53 +01:00
Nicola Murino
1a6863f4b1
GCS uploads: check Close() error
some code simplification too
2020-11-13 18:40:18 +01:00
Nicola Murino
fbd9919afa
docker: add slim image 2020-11-12 22:40:53 +01:00
Nicola Murino
eec8bc73f4
docker: remove entrypoint
remove the VOLUME instruction from the Dockerfile so you can change
user using your own image like this:

FROM drakkan/sftpgo:tag
USER root
RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpgo
USER 1100:1100
2020-11-12 11:53:05 +01:00
Nicola Murino
5720d40fee
add setstat_mode 2
in this mode chmod/chtimes/chown can be silently ignored only for cloud
based file systems

Fixes #223
2020-11-12 10:39:46 +01:00
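A hedged YAML sketch of the new mode; the commit adds value 2 to the existing "setstat_mode" setting, and its placement under the common section is an assumption:

common:
  # 0 = apply chmod/chtimes/chown as requested, 1 = silently ignore them,
  # 2 = silently ignore them only for cloud-based file systems
  setstat_mode: 2
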
Nicola Murino
38e0cba675
docker: add an entrypoint
running as an arbitrary user is now possible setting the following
env vars too:

SFTPGO_PUID
SFTPGO_PGID

Fixes #217
2020-11-10 23:11:57 +01:00
Nicola Murino
4c5a0d663e
sftpd: return the error Operation Unsupported for unexpected reads
a cloud based file cannot be opened for read and write at the same
time. Return a proper error if a client tries to do this.

It can happen only for SFTP
2020-11-09 21:01:56 +01:00
Nicola Murino
093df15fac
CI: add ppc64le support 2020-11-09 18:39:36 +01:00
Nicola Murino
957430e675
back to development 2020-11-08 12:56:37 +01:00
609 changed files with 199030 additions and 49648 deletions

31
.cirrus.yml Normal file

@@ -0,0 +1,31 @@
freebsd_task:
  name: FreeBSD

  matrix:
    - name: FreeBSD 14.0
      freebsd_instance:
        image_family: freebsd-14-0

  pkginstall_script:
    - pkg update -f
    - pkg install -y go122
    - pkg install -y git
  setup_script:
    - ln -s /usr/local/bin/go122 /usr/local/bin/go
    - pw groupadd sftpgo
    - pw useradd sftpgo -g sftpgo -w none -m
    - mkdir /home/sftpgo/sftpgo
    - cp -R . /home/sftpgo/sftpgo
    - chown -R sftpgo:sftpgo /home/sftpgo/sftpgo
  compile_script:
    - su sftpgo -c 'cd ~/sftpgo && go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo'
    - su sftpgo -c 'cd ~/sftpgo/tests/eventsearcher && go build -trimpath -ldflags "-s -w" -o eventsearcher'
    - su sftpgo -c 'cd ~/sftpgo/tests/ipfilter && go build -trimpath -ldflags "-s -w" -o ipfilter'
  check_script:
    - su sftpgo -c 'cd ~/sftpgo && ./sftpgo initprovider && ./sftpgo resetprovider --force'
  test_script:
    - su sftpgo -c 'cd ~/sftpgo && go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 20m ./... -coverprofile=coverage.txt -covermode=atomic'

108
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file
View file

@ -0,0 +1,108 @@
name: Open Source Bug Report
description: "Submit a report and help us improve SFTPGo"
title: "[Bug]: "
labels: ["bug"]
body:
- type: markdown
attributes:
value: |
### 👍 Thank you for contributing to our project!
Before asking for help please check the [support policy](https://github.com/drakkan/sftpgo#support-policy).
If you are a commercial user or a project sponsor please contact us using the dedicated [email address](mailto:support@sftpgo.com).
- type: checkboxes
id: before-posting
attributes:
label: "⚠️ This issue respects the following points: ⚠️"
description: All conditions are **required**.
options:
- label: This is a **bug**, not a question or a configuration issue.
required: true
- label: This issue is **not** already reported on Github _(I've searched it)_.
required: true
- type: textarea
id: bug-description
attributes:
label: Bug description
description: |
Provide a description of the bug you're experiencing.
Don't expect someone to guess what your specific problem is: provide full details.
validations:
required: true
- type: textarea
id: reproduce
attributes:
label: Steps to reproduce
description: |
Describe the steps to reproduce the bug.
The better your description is, the faster you'll get an _(accurate)_ answer.
value: |
1.
2.
3.
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Expected behavior
description: Describe what you expected to happen instead.
validations:
required: true
- type: input
id: version
attributes:
label: SFTPGo version
validations:
required: true
- type: input
id: data-provider
attributes:
label: Data provider
validations:
required: true
- type: dropdown
id: install-method
attributes:
label: Installation method
description: |
Select the installation method you've used.
_Describe the method in the "Additional info" section if you chose "Other"._
options:
- "Community Docker image"
- "Community Deb package"
- "Community RPM package"
- "Other"
validations:
required: true
- type: textarea
attributes:
label: Configuration
description: "Describe your customizations to the configuration: both config file changes and overrides via environment variables"
value: config
validations:
required: true
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
id: usecase
attributes:
label: What are you using SFTPGo for?
description: We'd like to understand your SFTPGo usecase more
multiple: true
options:
- "Private user, home usecase (home backup/VPS)"
- "Professional user, 1 person business"
- "Small business (3-person firm with file exchange?)"
- "Medium business"
- "Enterprise"
validations:
required: true
- type: textarea
id: additional-info
attributes:
label: Additional info
description: Any additional information related to the issue.

9
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View file

@ -0,0 +1,9 @@
blank_issues_enabled: false
contact_links:
- name: Commercial Support
url: https://sftpgo.com/
about: >
If you need professional support, so that your reports are prioritized and resolved more quickly.
- name: GitHub Community Discussions
url: https://github.com/drakkan/sftpgo/discussions
about: Please ask and answer questions here.

View file

@ -0,0 +1,42 @@
name: 🚀 Feature request
description: Suggest an idea for SFTPGo
labels: ["suggestion"]
body:
- type: textarea
attributes:
label: Is your feature request related to a problem? Please describe.
description: A clear and concise description of what the problem is.
validations:
required: false
- type: textarea
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: false
- type: dropdown
id: usecase
attributes:
label: What are you using SFTPGo for?
description: We'd like to understand your SFTPGo usecase more
multiple: true
options:
- "Private user, home usecase (home backup/VPS)"
- "Professional user, 1 person business"
- "Small business (3-person firm with file exchange?)"
- "Medium business"
- "Enterprise"
validations:
required: true
- type: textarea
attributes:
label: Additional context
description: Add any other context or screenshots about the feature request here.
validations:
required: false

5
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file
View file

@ -0,0 +1,5 @@
# Checklist for Pull Requests
- [ ] Have you signed the [Contributor License Agreement](https://sftpgo.com/cla.html)?
---

20
.github/dependabot.yml vendored Normal file
View file

@ -0,0 +1,20 @@
version: 2
updates:
#- package-ecosystem: "gomod"
# directory: "/"
# schedule:
# interval: "weekly"
# open-pull-requests-limit: 2
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2

36
.github/workflows/codeql.yml vendored Normal file
View file

@ -0,0 +1,36 @@
name: "Code scanning - action"
on:
push:
pull_request:
schedule:
- cron: '30 1 * * 6'
jobs:
CodeQL-Build:
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.22'
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v3
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3

View file

@ -2,7 +2,7 @@ name: CI
on:
push:
branches: [master]
branches: [main]
pull_request:
jobs:
@ -11,168 +11,240 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go: [1.15]
go: ['1.22']
os: [ubuntu-latest, macos-latest]
upload-coverage: [true]
include:
- go: 1.14
os: ubuntu-latest
upload-coverage: false
- go: 1.15
- go: '1.22'
os: windows-latest
upload-coverage: false
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go }}
- name: Build for Linux/macOS
- name: Build for Linux/macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
run: |
go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter
cd -
./sftpgo initprovider
./sftpgo resetprovider --force
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$GIT_COMMIT = (git describe --always --abbrev=8 --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$FILE_VERSION = $LATEST_TAG.substring(1) + "." + $COMMITS_FROM_TAG
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o sftpgo.exe
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher.exe
cd ../..
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter.exe
cd ../..
mkdir arm64
$Env:CGO_ENABLED='0'
$Env:GOOS='windows'
$Env:GOARCH='arm64'
go-winres simply --arch arm64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
mkdir x86
$Env:GOARCH='386'
go-winres simply --arch 386 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
Remove-Item Env:\CGO_ENABLED
Remove-Item Env:\GOOS
Remove-Item Env:\GOARCH
- name: Run test cases using SQLite provider
run: go test -v -p 1 -timeout 10m ./... -coverprofile=coverage.txt -covermode=atomic
run: go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic
- name: Upload coverage to Codecov
if: ${{ matrix.upload-coverage }}
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@v4
with:
file: ./coverage.txt
fail_ci_if_error: false
token: ${{ secrets.CODECOV_TOKEN }}
- name: Run test cases using bolt provider
run: |
go test -v -p 1 -timeout 2m ./config -covermode=atomic
go test -v -p 1 -timeout 2m ./common -covermode=atomic
go test -v -p 1 -timeout 3m ./httpd -covermode=atomic
go test -v -p 1 -timeout 8m ./sftpd -covermode=atomic
go test -v -p 1 -timeout 2m ./ftpd -covermode=atomic
go test -v -p 1 -timeout 2m ./webdavd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 2m ./internal/config -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 5m ./internal/common -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 5m ./internal/httpd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 8m ./internal/sftpd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 5m ./internal/ftpd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 5m ./internal/webdavd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 2m ./internal/telemetry -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 2m ./internal/mfa -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 2m ./internal/command -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: bolt
SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'
- name: Run test cases using memory provider
run: go test -v -p 1 -timeout 10m ./... -covermode=atomic
run: go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
- name: Gather cross build info
id: cross_info
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
- name: Prepare build artifact for macOS
if: startsWith(matrix.os, 'macos-') == true
run: |
GIT_COMMIT=$(git describe --always)
BUILD_DATE=$(date -u +%FT%TZ)
echo ::set-output name=sha::${GIT_COMMIT}
echo ::set-output name=created::${BUILD_DATE}
- name: Cross build with xgo
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: crazy-max/ghaction-xgo@v1
with:
dest: cross
prefix: sftpgo
targets: linux/arm64
v: true
x: false
race: false
ldflags: -s -w -X github.com/drakkan/sftpgo/version.commit=${{ steps.cross_info.outputs.sha }} -X github.com/drakkan/sftpgo/version.date=${{ steps.cross_info.outputs.created }}
buildmode: default
- name: Prepare build artifact for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
run: |
mkdir -p output/{bash_completion,zsh_completion}
mkdir -p output/examples/rest-api-cli
cp sftpgo output/
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo output/sftpgo_x86_64
cp sftpgo_arm64 output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r init output/
cp examples/rest-api-cli/sftpgo_api_cli output/examples/rest-api-cli/
cp -r openapi output/
cp init/com.github.drakkan.sftpgo.plist output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp cross/sftpgo-linux-arm64 output/ || :
- name: Prepare build artifact for Windows
if: startsWith(matrix.os, 'windows-')
- name: Prepare Windows installer
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\LICENSE .\output\LICENSE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$Env:SFTPGO_ISS_DEV_VERSION = $LATEST_TAG + "." + $COMMITS_FROM_TAG
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
rm .\output\sftpgo.db
copy .\arm64\sftpgo.exe .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
$Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
$Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
.\sftpgo.exe initprovider
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
$Env:SFTPGO_ISS_ARCH='arm64'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
copy .\x86\sftpgo.exe .\output
$Env:SFTPGO_ISS_ARCH='x86'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
env:
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Upload Windows installer x86_64 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v4
with:
name: sftpgo_windows_installer_x86_64
path: ./sftpgo_windows_x86_64.exe
- name: Upload Windows installer arm64 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v4
with:
name: sftpgo_windows_installer_arm64
path: ./sftpgo_windows_arm64.exe
- name: Upload Windows installer x86 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v4
with:
name: sftpgo_windows_installer_x86
path: ./sftpgo_windows_x86.exe
- name: Prepare build artifact for Windows
if: startsWith(matrix.os, 'windows-')
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
copy .\sftpgo.exe .\output
mkdir output\arm64
copy .\arm64\sftpgo.exe .\output\arm64
mkdir output\x86
copy .\x86\sftpgo.exe .\output\x86
copy .\sftpgo.json .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
- name: Upload build artifact
uses: actions/upload-artifact@v2
if: startsWith(matrix.os, 'ubuntu-') != true
uses: actions/upload-artifact@v4
with:
name: sftpgo-${{ matrix.os }}-go${{ matrix.go }}
name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
path: output
- name: Build Linux Packages
id: build_linux_pkgs
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
test-build-flags:
name: Test build flags
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.22'
- name: Build
run: |
cp -r pkgs pkgs_arm64
cd pkgs
./build.sh
cd ..
export NFPM_ARCH=arm64
export BIN_SUFFIX=-linux-arm64
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_arm64
./build.sh
PKG_VERSION=$(cat dist/version)
echo "::set-output name=pkg-version::${PKG_VERSION}"
go build -trimpath -tags nopgxregisterdefaulttypes,nogcs,nos3,noportable,nobolt,nomysql,nopgsql,nosqlite,nometrics,noazblob,unixcrypt -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
./sftpgo -v
cp -r openapi static templates internal/bundle/
go build -trimpath -tags nopgxregisterdefaulttypes,bundle -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
./sftpgo -v
- name: Upload Debian Package
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-rpm
path: pkgs/dist/rpm/*
- name: Upload Debian Package arm64
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package arm64
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-rpm
path: pkgs/dist/rpm/*
test-postgresql-mysql:
name: Test with PostgreSQL/MySQL
test-postgresql-mysql-crdb:
name: Test with PgSQL/MySQL/Cockroach
runs-on: ubuntu-latest
services:
@ -197,27 +269,64 @@ jobs:
MYSQL_USER: sftpgo
MYSQL_PASSWORD: sftpgo
options: >-
--health-cmd "mysqladmin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
--health-cmd "mariadb-admin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
--health-interval 10s
--health-timeout 5s
--health-retries 6
ports:
- 3307:3306
mysql:
image: mysql:latest
env:
MYSQL_ROOT_PASSWORD: mysql
MYSQL_DATABASE: sftpgo
MYSQL_USER: sftpgo
MYSQL_PASSWORD: sftpgo
options: >-
--health-cmd "mysqladmin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
--health-interval 10s
--health-timeout 5s
--health-retries 6
ports:
- 3308:3306
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: 1.15
go-version: '1.22'
- name: Build
run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
run: |
go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter
cd -
- name: Run tests using MySQL provider
run: |
./sftpgo initprovider
./sftpgo resetprovider --force
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 3308
SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
- name: Run tests using PostgreSQL provider
run: |
go test -v -p 1 -timeout 10m ./... -covermode=atomic
./sftpgo initprovider
./sftpgo resetprovider --force
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: postgresql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@ -226,9 +335,11 @@ jobs:
SFTPGO_DATA_PROVIDER__USERNAME: postgres
SFTPGO_DATA_PROVIDER__PASSWORD: postgres
- name: Run tests using MySQL provider
- name: Run tests using MariaDB provider
run: |
go test -v -p 1 -timeout 10m ./... -covermode=atomic
./sftpgo initprovider
./sftpgo resetprovider --force
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@ -236,13 +347,180 @@ jobs:
SFTPGO_DATA_PROVIDER__PORT: 3307
SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
SFTPGO_DATA_PROVIDER__SQL_TABLES_PREFIX: prefix_
- name: Run tests using CockroachDB provider
run: |
docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr :26257
sleep 10
docker exec crdb cockroach sql --insecure -e 'create database "sftpgo"'
./sftpgo initprovider
./sftpgo resetprovider --force
go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
docker stop crdb
env:
SFTPGO_DATA_PROVIDER__DRIVER: cockroachdb
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 26257
SFTPGO_DATA_PROVIDER__USERNAME: root
SFTPGO_DATA_PROVIDER__PASSWORD:
SFTPGO_DATA_PROVIDER__TARGET_SESSION_ATTRS: any
SFTPGO_DATA_PROVIDER__SQL_TABLES_PREFIX: prefix_
build-linux-packages:
name: Build Linux packages
runs-on: ubuntu-latest
strategy:
matrix:
include:
- arch: amd64
distro: ubuntu:18.04
go: latest
go-arch: amd64
- arch: aarch64
distro: ubuntu18.04
go: latest
go-arch: arm64
- arch: ppc64le
distro: ubuntu18.04
go: latest
go-arch: ppc64le
- arch: armv7
distro: ubuntu18.04
go: latest
go-arch: arm7
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Get commit SHA
id: get_commit
run: echo "COMMIT=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
shell: bash
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
echo '#!/bin/bash' > build.sh
echo '' >> build.sh
echo 'set -e' >> build.sh
echo 'apt-get update -q -y' >> build.sh
echo 'apt-get install -q -y curl gcc' >> build.sh
if [ ${{ matrix.go }} == 'latest' ]
then
echo 'GO_VERSION=$(curl -L https://go.dev/VERSION?m=text | head -n 1)' >> build.sh
else
echo 'GO_VERSION=${{ matrix.go }}' >> build.sh
fi
echo 'GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}' >> build.sh
echo 'curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/${GO_VERSION}.linux-${GO_DOWNLOAD_ARCH}.tar.gz' >> build.sh
echo 'tar -C /usr/local -xzf go.tar.gz' >> build.sh
echo 'export PATH=$PATH:/usr/local/go/bin' >> build.sh
echo 'go version' >> build.sh
echo 'cd /usr/local/src' >> build.sh
echo 'go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_commit.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo' >> build.sh
chmod 755 build.sh
docker run --rm --name ubuntu-build --mount type=bind,source=`pwd`,target=/usr/local/src ${{ matrix.distro }} /usr/local/src/build.sh
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
- uses: uraimo/run-on-arch-action@v2
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
with:
arch: ${{ matrix.arch }}
distro: ${{ matrix.distro }}
setup: |
mkdir -p "${PWD}/output"
dockerRunArgs: |
--volume "${PWD}/output:/output"
shell: /bin/bash
install: |
apt-get update -q -y
apt-get install -q -y curl gcc
if [ ${{ matrix.go }} == 'latest' ]
then
GO_VERSION=$(curl -L https://go.dev/VERSION?m=text | head -n 1)
else
GO_VERSION=${{ matrix.go }}
fi
GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
if [ ${{ matrix.arch}} == 'armv7' ]
then
GO_DOWNLOAD_ARCH=armv6l
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/${GO_VERSION}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
go version
if [ ${{ matrix.arch}} == 'armv7' ]
then
export GOARM=7
fi
go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_commit.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
- name: Upload build artifact
uses: actions/upload-artifact@v4
with:
name: sftpgo-linux-${{ matrix.arch }}-go-${{ matrix.go }}
path: output
- name: Build Packages
id: build_linux_pkgs
run: |
export NFPM_ARCH=${{ matrix.go-arch }}
cd pkgs
./build.sh
PKG_VERSION=$(cat dist/version)
echo "pkg-version=${PKG_VERSION}" >> $GITHUB_OUTPUT
- name: Upload Debian Package
uses: actions/upload-artifact@v4
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
uses: actions/upload-artifact@v4
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-rpm
path: pkgs/dist/rpm/*
golangci-lint:
name: golangci-lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.22'
- uses: actions/checkout@v4
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2
uses: golangci/golangci-lint-action@v6
with:
version: latest

View file

@ -5,7 +5,7 @@ on:
# - cron: '0 4 * * *' # everyday at 4:00 AM UTC
push:
branches:
- master
- main
tags:
- v*
pull_request:
@ -21,20 +21,19 @@ jobs:
docker_pkg:
- debian
- alpine
docker_image:
- drakkan/sftpgo
- ghcr.io/drakkan/sftpgo
optional_deps:
- true
- false
include:
- os: ubuntu-latest
docker_pkg: distroless
optional_deps: false
- os: ubuntu-latest
docker_pkg: debian-plugins
optional_deps: true
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Repo metadata
id: repo
uses: actions/github-script@v3
with:
script: |
const repo = await github.repos.get(context.repo)
return repo.data
uses: actions/checkout@v4
- name: Gather image information
id: info
@ -43,6 +42,7 @@ jobs:
DOCKERFILE=Dockerfile
MINOR=""
MAJOR=""
FEATURES="nopgxregisterdefaulttypes"
if [ "${{ github.event_name }}" = "schedule" ]; then
VERSION=nightly
elif [[ $GITHUB_REF == refs/tags/* ]]; then
@ -59,70 +59,130 @@ jobs:
MINOR=${VERSION%.*}
MAJOR=${MINOR%.*}
fi
VERSION_SLIM="${VERSION}-slim"
if [[ $DOCKER_PKG == alpine ]]; then
VERSION="${VERSION}-alpine"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE=Dockerfile.alpine
elif [[ $DOCKER_PKG == distroless ]]; then
VERSION="${VERSION}-distroless"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE=Dockerfile.distroless
FEATURES="${FEATURES},nosqlite"
elif [[ $DOCKER_PKG == debian-plugins ]]; then
VERSION="${VERSION}-plugins"
VERSION_SLIM="${VERSION}-slim"
FEATURES="${FEATURES},unixcrypt"
elif [[ $DOCKER_PKG == debian ]]; then
FEATURES="${FEATURES},unixcrypt"
fi
TAGS="${DOCKER_IMAGE}:${VERSION}"
if [[ $GITHUB_REF == refs/tags/* ]]; then
if [[ $DOCKER_PKG == debian ]]; then
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="$TAGS,${DOCKER_IMAGE}:${MINOR},${DOCKER_IMAGE}:${MAJOR}"
fi
TAGS="$TAGS,${DOCKER_IMAGE}:latest"
else
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="$TAGS,${DOCKER_IMAGE}:${MINOR}-alpine,${DOCKER_IMAGE}:${MAJOR}-alpine"
fi
TAGS="$TAGS,${DOCKER_IMAGE}:alpine"
DOCKER_IMAGES=("drakkan/sftpgo" "ghcr.io/drakkan/sftpgo")
TAGS="${DOCKER_IMAGES[0]}:${VERSION}"
TAGS_SLIM="${DOCKER_IMAGES[0]}:${VERSION_SLIM}"
for DOCKER_IMAGE in ${DOCKER_IMAGES[@]}; do
if [[ ${DOCKER_IMAGE} != ${DOCKER_IMAGES[0]} ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${VERSION}"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${VERSION_SLIM}"
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
if [[ $DOCKER_PKG == debian ]]; then
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR},${DOCKER_IMAGE}:${MAJOR}"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-slim,${DOCKER_IMAGE}:${MAJOR}-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:latest"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:slim"
elif [[ $DOCKER_PKG == distroless ]]; then
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-distroless,${DOCKER_IMAGE}:${MAJOR}-distroless"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-distroless-slim,${DOCKER_IMAGE}:${MAJOR}-distroless-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:distroless"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:distroless-slim"
elif [[ $DOCKER_PKG == debian-plugins ]]; then
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-plugins,${DOCKER_IMAGE}:${MAJOR}-plugins"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-plugins-slim,${DOCKER_IMAGE}:${MAJOR}-plugins-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:plugins"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:plugins-slim"
else
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-alpine,${DOCKER_IMAGE}:${MAJOR}-alpine"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-alpine-slim,${DOCKER_IMAGE}:${MAJOR}-alpine-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:alpine"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:alpine-slim"
fi
fi
done
if [[ $OPTIONAL_DEPS == true ]]; then
echo "version=${VERSION}" >> $GITHUB_OUTPUT
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
echo "full=true" >> $GITHUB_OUTPUT
else
echo "version=${VERSION_SLIM}" >> $GITHUB_OUTPUT
echo "tags=${TAGS_SLIM}" >> $GITHUB_OUTPUT
echo "full=false" >> $GITHUB_OUTPUT
fi
echo ::set-output name=dockerfile::${DOCKERFILE}
echo ::set-output name=version::${VERSION}
echo ::set-output name=tags::${TAGS}
echo ::set-output name=created::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo ::set-output name=sha::${GITHUB_SHA::8}
if [[ $DOCKER_PKG == debian-plugins ]]; then
echo "plugins=true" >> $GITHUB_OUTPUT
else
echo "plugins=false" >> $GITHUB_OUTPUT
fi
echo "dockerfile=${DOCKERFILE}" >> $GITHUB_OUTPUT
echo "features=${FEATURES}" >> $GITHUB_OUTPUT
echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
echo "sha=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
env:
DOCKER_PKG: ${{ matrix.docker_pkg }}
DOCKER_IMAGE: ${{ matrix.docker_image }}
OPTIONAL_DEPS: ${{ matrix.optional_deps }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up builder
uses: docker/setup-buildx-action@v3
id: builder
- name: Login to Docker Hub
uses: docker/login-action@v1
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
if: ${{ github.event_name != 'pull_request' && matrix.docker_image == 'drakkan/sftpgo' }}
if: ${{ github.event_name != 'pull_request' }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v1
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.CR_PAT }}
if: ${{ github.event_name != 'pull_request' && matrix.docker_image == 'ghcr.io/drakkan/sftpgo' }}
password: ${{ secrets.GITHUB_TOKEN }}
if: ${{ github.event_name != 'pull_request' }}
- name: Build and push
uses: docker/build-push-action@v2
uses: docker/build-push-action@v6
with:
context: .
builder: ${{ steps.builder.outputs.name }}
file: ./${{ steps.info.outputs.dockerfile }}
platforms: linux/amd64,linux/arm64,linux/ppc64le
platforms: linux/amd64,linux/arm64,linux/ppc64le,linux/arm/v7
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.info.outputs.tags }}
build-args: |
COMMIT_SHA=${{ steps.info.outputs.sha }}
INSTALL_OPTIONAL_PACKAGES=${{ steps.info.outputs.full }}
DOWNLOAD_PLUGINS=${{ steps.info.outputs.plugins }}
FEATURES=${{ steps.info.outputs.features }}
labels: |
org.opencontainers.image.title=SFTPGo
org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support
org.opencontainers.image.url=${{ fromJson(steps.repo.outputs.result).html_url }}
org.opencontainers.image.documentation=${{ fromJson(steps.repo.outputs.result).html_url }}/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=${{ fromJson(steps.repo.outputs.result).clone_url }}
org.opencontainers.image.description=Full-featured and highly configurable file transfer server: SFTP, HTTP/S, FTP/S, WebDAV
org.opencontainers.image.url=https://github.com/drakkan/sftpgo
org.opencontainers.image.documentation=https://github.com/drakkan/sftpgo/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=https://github.com/drakkan/sftpgo
org.opencontainers.image.version=${{ steps.info.outputs.version }}
org.opencontainers.image.created=${{ steps.info.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=${{ fromJson(steps.repo.outputs.result).license.spdx_id }}
org.opencontainers.image.licenses=AGPL-3.0-only

View file

@ -5,172 +5,112 @@ on:
tags: 'v*'
env:
GO_VERSION: 1.15.4
GO_VERSION: 1.22.4
jobs:
create-release:
name: Create
prepare-sources-with-deps:
name: Prepare sources with deps
runs-on: ubuntu-latest
steps:
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: ${{ github.ref }}
draft: false
prerelease: false
- name: Save release upload URL
run: echo "${{ steps.create_release.outputs.upload_url }}" > ./upload_url.txt
shell: bash
- name: Store release upload URL
uses: actions/upload-artifact@v2
with:
name: upload_url
path: ./upload_url.txt
release-sources-with-deps:
name: Publish sources
needs: create-release
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Get SFTPGo version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
run: echo "VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
- name: Prepare release
run: |
go mod vendor
echo "${SFTPGO_VERSION}" > VERSION.txt
echo "${GITHUB_SHA::8}" >> VERSION.txt
tar cJvf sftpgo_${SFTPGO_VERSION}_src_with_deps.tar.xz *
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Download release upload URL
uses: actions/download-artifact@v2
- name: Upload build artifact
uses: actions/upload-artifact@v4
with:
name: upload_url
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
retention-days: 1
- name: Get release upload URL
id: upload_url
run: |
URL=$(cat upload_url.txt)
echo "::set-output name=url::${URL}"
shell: bash
- name: Upload Release
id: upload-release-asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
asset_content_type: application/x-xz
publish:
name: Publish binary
needs: create-release
prepare-window-mac:
name: Prepare binaries
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
os: [macos-12, windows-2022]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Set up Python
if: startsWith(matrix.os, 'windows-')
uses: actions/setup-python@v2
with:
python-version: 3.x
- name: Build for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
- name: Initialize data provider
run: ./sftpgo initprovider
shell: bash
- name: Get SFTPGo version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
run: echo "VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
shell: bash
- name: Get OS name
id: get_os_name
run: |
if [ $MATRIX_OS == 'ubuntu-latest' ]
if [[ $MATRIX_OS =~ ^macos.* ]]
then
echo ::set-output name=OS::linux
elif [ $MATRIX_OS == 'macos-latest' ]
then
echo ::set-output name=OS::macOS
echo "OS=macOS" >> $GITHUB_OUTPUT
else
echo ::set-output name=OS::windows
echo "OS=windows" >> $GITHUB_OUTPUT
fi
shell: bash
env:
MATRIX_OS: ${{ matrix.os }}
- name: Build REST API CLI for Windows
- name: Build for macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
python -m pip install --upgrade pip setuptools wheel
pip install requests
pip install pygments
pip install pyinstaller
pyinstaller --hidden-import="pkg_resources.py2_warn" --noupx --onefile examples\rest-api-cli\sftpgo_api_cli
$GIT_COMMIT = (git describe --always --abbrev=8 --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
$FILE_VERSION = $Env:SFTPGO_VERSION.substring(1) + ".0"
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o sftpgo.exe
mkdir arm64
$Env:CGO_ENABLED='0'
$Env:GOOS='windows'
$Env:GOARCH='arm64'
go-winres simply --arch arm64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
mkdir x86
$Env:GOARCH='386'
go-winres simply --arch 386 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
Remove-Item Env:\CGO_ENABLED
Remove-Item Env:\GOOS
Remove-Item Env:\GOARCH
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Gather cross build info
id: cross_info
if: ${{ matrix.os == 'ubuntu-latest' }}
- name: Initialize data provider
run: ./sftpgo initprovider
shell: bash
- name: Prepare Release for macOS
if: startsWith(matrix.os, 'macos-')
run: |
GIT_COMMIT=$(git describe --always)
BUILD_DATE=$(date -u +%FT%TZ)
echo ::set-output name=sha::${GIT_COMMIT}
echo ::set-output name=created::${BUILD_DATE}
- name: Cross build with xgo
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: crazy-max/ghaction-xgo@v1
with:
dest: cross
prefix: sftpgo
targets: linux/arm64
v: true
x: false
race: false
ldflags: -s -w -X github.com/drakkan/sftpgo/version.commit=${{ steps.cross_info.outputs.sha }} -X github.com/drakkan/sftpgo/version.date=${{ steps.cross_info.outputs.created }}
buildmode: default
- name: Prepare Release for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
run: |
mkdir -p output/{init,examples/rest-api-cli,sqlite,bash_completion,zsh_completion}
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
@ -179,51 +119,24 @@ jobs:
cp sftpgo.json output/
cp sftpgo.db output/sqlite/
cp -r static output/
cp -r openapi output/
cp -r templates output/
if [ $OS == 'linux' ]
then
cp init/sftpgo.service output/init/
else
cp init/com.github.drakkan.sftpgo.plist output/init/
fi
cp init/com.github.drakkan.sftpgo.plist output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp examples/rest-api-cli/sftpgo_api_cli output/examples/rest-api-cli/
cp -r output output_arm64
cd output
tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
cd ..
cp sftpgo_arm64 output/sftpgo
cd output
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_arm64.tar.xz *
cd ..
if [ $OS == 'linux' ]
then
cp cross/sftpgo-linux-arm64 output_arm64/sftpgo
cd output_arm64
tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_arm64.tar.xz *
cd ..
fi
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
OS: ${{ steps.get_os_name.outputs.OS }}
- name: Prepare Linux Packages
id: build_linux_pkgs
if: ${{ matrix.os == 'ubuntu-latest' }}
run: |
cp -r pkgs pkgs_arm64
cd pkgs
./build.sh
cd ..
export NFPM_ARCH=arm64
export BIN_SUFFIX=-linux-arm64
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_arm64
./build.sh
PKG_VERSION=${SFTPGO_VERSION:1}
echo "::set-output name=pkg-version::${PKG_VERSION}"
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Prepare Release for Windows
if: startsWith(matrix.os, 'windows-')
run: |
@ -231,131 +144,464 @@ jobs:
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\dist\sftpgo_api_cli.exe .\output
copy .\LICENSE .\output\LICENSE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
iscc windows-installer\sftpgo.iss
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
rm .\output\sftpgo.db
copy .\arm64\sftpgo.exe .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
$Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
$Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
.\sftpgo.exe initprovider
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
$Env:SFTPGO_ISS_ARCH='arm64'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
copy .\x86\sftpgo.exe .\output
$Env:SFTPGO_ISS_ARCH='x86'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
env:
SFTPGO_ISS_VERSION: ${{ steps.get_version.outputs.VERSION }}
SFTPGO_ISS_DOC_URL: https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.VERSION }}/README.md
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Prepare Portable Release for Windows
if: startsWith(matrix.os, 'windows-')
run: |
mkdir win-portable\examples\rest-api-cli
mkdir win-portable
copy .\sftpgo.exe .\win-portable
mkdir win-portable\arm64
copy .\arm64\sftpgo.exe .\win-portable\arm64
mkdir win-portable\x86
copy .\x86\sftpgo.exe .\win-portable\x86
copy .\sftpgo.json .\win-portable
copy .\sftpgo.db .\win-portable
copy .\dist\sftpgo_api_cli.exe .\win-portable\examples\rest-api-cli
(Get-Content .\win-portable\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\win-portable\sftpgo.json
copy .\output\sftpgo.db .\win-portable
copy .\LICENSE .\win-portable\LICENSE.txt
mkdir win-portable\templates
xcopy .\templates .\win-portable\templates\ /E
mkdir win-portable\static
xcopy .\static .\win-portable\static\ /E
Compress-Archive .\win-portable\* sftpgo_portable_x86_64.zip
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
OS: ${{ steps.get_os_name.outputs.OS }}
mkdir win-portable\openapi
xcopy .\openapi .\win-portable\openapi\ /E
Compress-Archive .\win-portable\* sftpgo_portable.zip
- name: Download release upload URL
uses: actions/download-artifact@v2
- name: Upload macOS x86_64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v4
with:
name: upload_url
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
retention-days: 1
- name: Get release upload URL
id: upload_url
- name: Upload macOS arm64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
retention-days: 1
- name: Upload Windows installer x86_64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
path: ./sftpgo_windows_x86_64.exe
retention-days: 1
- name: Upload Windows installer arm64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.exe
path: ./sftpgo_windows_arm64.exe
retention-days: 1
- name: Upload Windows installer x86 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86.exe
path: ./sftpgo_windows_x86.exe
retention-days: 1
- name: Upload Windows portable artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable.zip
path: ./sftpgo_portable.zip
retention-days: 1
prepare-linux:
name: Prepare Linux binaries
runs-on: ubuntu-latest
strategy:
matrix:
include:
- arch: amd64
distro: ubuntu:18.04
go-arch: amd64
deb-arch: amd64
rpm-arch: x86_64
tar-arch: x86_64
- arch: aarch64
distro: ubuntu18.04
go-arch: arm64
deb-arch: arm64
rpm-arch: aarch64
tar-arch: arm64
- arch: ppc64le
distro: ubuntu18.04
go-arch: ppc64le
deb-arch: ppc64el
rpm-arch: ppc64le
tar-arch: ppc64le
- arch: armv7
distro: ubuntu18.04
go-arch: arm7
deb-arch: armhf
rpm-arch: armv7hl
tar-arch: armv7
steps:
- uses: actions/checkout@v4
- name: Get versions
id: get_version
run: |
URL=$(cat upload_url.txt)
echo "::set-output name=url::${URL}"
echo "SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
echo "GO_VERSION=${GO_VERSION}" >> $GITHUB_OUTPUT
echo "COMMIT=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
shell: bash
- name: Upload Linux/macOS Release
if: startsWith(matrix.os, 'windows-') != true
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./output/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
asset_content_type: application/x-xz
GO_VERSION: ${{ env.GO_VERSION }}
- name: Upload Linux/arm64 Release
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./output_arm64/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
asset_content_type: application/x-xz
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
echo '#!/bin/bash' > build.sh
echo '' >> build.sh
echo 'set -e' >> build.sh
echo 'apt-get update -q -y' >> build.sh
echo 'apt-get install -q -y curl gcc' >> build.sh
echo 'curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/go${{ steps.get_version.outputs.GO_VERSION }}.linux-${{ matrix.go-arch }}.tar.gz' >> build.sh
echo 'tar -C /usr/local -xzf go.tar.gz' >> build.sh
echo 'export PATH=$PATH:/usr/local/go/bin' >> build.sh
echo 'go version' >> build.sh
echo 'cd /usr/local/src' >> build.sh
echo 'go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_version.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo' >> build.sh
- name: Upload Windows Release
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-release-asset@v1
chmod 755 build.sh
docker run --rm --name ubuntu-build --mount type=bind,source=`pwd`,target=/usr/local/src ${{ matrix.distro }} /usr/local/src/build.sh
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
cp LICENSE output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo initprovider
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
cp sftpgo.db output/sqlite/
cd output
tar cJvf sftpgo_${SFTPGO_VERSION}_linux_${{ matrix.tar-arch }}.tar.xz *
cd ..
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./sftpgo_windows_x86_64.exe
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
asset_content_type: application/x-dosexec
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Portable Windows Release
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- uses: uraimo/run-on-arch-action@v2
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./sftpgo_portable_x86_64.zip
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable_x86_64.zip
asset_content_type: application/zip
arch: ${{ matrix.arch }}
distro: ${{ matrix.distro }}
setup: |
mkdir -p "${PWD}/output"
dockerRunArgs: |
--volume "${PWD}/output:/output"
shell: /bin/bash
install: |
apt-get update -q -y
apt-get install -q -y curl gcc xz-utils
GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
if [ ${{ matrix.arch}} == 'armv7' ]
then
GO_DOWNLOAD_ARCH=armv6l
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/go${{ steps.get_version.outputs.GO_VERSION }}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
go version
go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_version.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.SFTPGO_VERSION }}/README.md" >> output/README.txt
cp LICENSE output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo initprovider
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
cp sftpgo.db output/sqlite/
cd output
tar cJvf sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz *
cd ..
- name: Upload Debian Package
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload build artifact for ${{ matrix.arch }}
uses: actions/upload-artifact@v4
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_amd64.deb
asset_name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_amd64.deb
asset_content_type: application/vnd.debian.binary-package
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
path: ./output/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
retention-days: 1
- name: Build Packages
id: build_linux_pkgs
run: |
export NFPM_ARCH=${{ matrix.go-arch }}
cd pkgs
./build.sh
PKG_VERSION=${SFTPGO_VERSION:1}
echo "pkg-version=${PKG_VERSION}" >> $GITHUB_OUTPUT
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Deb Package
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
retention-days: 1
- name: Upload RPM Package
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
uses: actions/upload-artifact@v4
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.x86_64.rpm
asset_name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.x86_64.rpm
asset_content_type: application/x-rpm
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
retention-days: 1
- name: Upload Debian Package arm64
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs_arm64/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_arm64.deb
asset_name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_arm64.deb
asset_content_type: application/vnd.debian.binary-package
prepare-linux-bundle:
name: Prepare Linux bundle
needs: prepare-linux
runs-on: ubuntu-latest
- name: Upload RPM Package arm64
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Get versions
id: get_version
run: |
echo "SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v4
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs_arm64/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.aarch64.rpm
asset_name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.aarch64.rpm
asset_content_type: application/x-rpm
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Build bundle
shell: bash
run: |
mkdir -p bundle/{arm64,ppc64le,armv7}
cd bundle
tar xvf ../sftpgo_${SFTPGO_VERSION}_linux_x86_64.tar.xz
cd arm64
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_arm64.tar.xz sftpgo
cd ../ppc64le
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_ppc64le.tar.xz sftpgo
cd ../armv7
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_armv7.tar.xz sftpgo
cd ..
tar cJvf sftpgo_${SFTPGO_VERSION}_linux_bundle.tar.xz *
cd ..
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Linux bundle
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
path: ./bundle/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
retention-days: 1
create-release:
name: Release
needs: [prepare-linux-bundle, prepare-sources-with-deps, prepare-window-mac]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Get versions
id: get_version
run: |
SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}
PKG_VERSION=${SFTPGO_VERSION:1}
echo "SFTPGO_VERSION=${SFTPGO_VERSION}" >> $GITHUB_OUTPUT
echo "PKG_VERSION=${PKG_VERSION}" >> $GITHUB_OUTPUT
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Download Linux bundle artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
- name: Download Deb amd64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_amd64.deb
- name: Download Deb arm64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_arm64.deb
- name: Download Deb ppc64le artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_ppc64el.deb
- name: Download Deb armv7 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_armhf.deb
- name: Download RPM x86_64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.x86_64.rpm
- name: Download RPM aarch64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.aarch64.rpm
- name: Download RPM ppc64le artifact
uses: actions/download-artifact@v4
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.ppc64le.rpm
- name: Download RPM armv7 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.armv7hl.rpm
- name: Download macOS x86_64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_x86_64.tar.xz
- name: Download macOS arm64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_arm64.tar.xz
- name: Download Windows installer x86_64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86_64.exe
- name: Download Windows installer arm64 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_arm64.exe
- name: Download Windows installer x86 artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86.exe
- name: Download Windows portable artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_portable.zip
- name: Download source with deps artifact
uses: actions/download-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_src_with_deps.tar.xz
- name: Create release
run: |
mv sftpgo_windows_x86_64.exe sftpgo_${SFTPGO_VERSION}_windows_x86_64.exe
mv sftpgo_windows_arm64.exe sftpgo_${SFTPGO_VERSION}_windows_arm64.exe
mv sftpgo_windows_x86.exe sftpgo_${SFTPGO_VERSION}_windows_x86.exe
mv sftpgo_portable.zip sftpgo_${SFTPGO_VERSION}_windows_portable.zip
gh release create "${SFTPGO_VERSION}" -t "${SFTPGO_VERSION}"
gh release upload "${SFTPGO_VERSION}" sftpgo_*.xz --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo-*.rpm --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo_*.deb --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo_*.exe --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo_*.zip --clobber
gh release view "${SFTPGO_VERSION}"
env:
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}


@@ -1,5 +1,5 @@
run:
timeout: 5m
timeout: 10m
issues-exit-code: 1
tests: true
@@ -19,8 +19,19 @@ linters-settings:
simplify: true
goimports:
local-prefixes: github.com/drakkan/sftpgo
maligned:
suggest-new: true
#govet:
# report about shadowed variables
#check-shadowing: true
#enable:
# - fieldalignment
issues:
include:
- EXC0002
- EXC0012
- EXC0013
- EXC0014
- EXC0015
linters:
enable:
@@ -28,15 +39,14 @@ linters:
- errcheck
- gofmt
- goimports
- golint
- revive
- unconvert
- unparam
- bodyclose
- gocyclo
- misspell
- maligned
- whitespace
- dupl
- scopelint
- rowserrcheck
- dogsled
- govet

1
CODEOWNERS Normal file

@@ -0,0 +1 @@
* @drakkan

128
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
support@sftpgo.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.


@@ -1,7 +1,9 @@
FROM golang:1.15 as builder
FROM golang:1.22-bookworm as builder
ENV GOFLAGS="-mod=readonly"
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /workspace
WORKDIR /workspace
@@ -20,46 +22,46 @@ ARG FEATURES
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --abbrev=8 --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM debian:buster-slim
# Set to "true" to download the "official" plugins in /usr/local/bin
ARG DOWNLOAD_PLUGINS=false
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates mime-support && apt-get clean
RUN if [ "${DOWNLOAD_PLUGINS}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y curl && ./docker/scripts/download-plugins.sh; fi
SHELL ["/bin/bash", "-c"]
FROM debian:bookworm-slim
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apt-get update && apt-get -y upgrade && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y jq git rsync && rm -rf /var/lib/apt/lists/*; fi
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups
RUN groupadd --system -g 1000 sftpgo && \
useradd --system --gid sftpgo --no-create-home \
--home-dir /var/lib/sftpgo --shell /usr/sbin/nologin \
--comment "SFTPGo user" --uid 1000 sftpgo
# Install some optional packages used by SFTPGo features
RUN apt-get update && apt-get install --no-install-recommends -y git rsync && apt-get clean
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/sftpgo /usr/local/bin/
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/sftpgo-plugin-* /usr/local/bin/
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"bind_address\": \"127.0.0.1\",|\"bind_address\": \"\",|" /etc/sftpgo/sftpgo.json
RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' /etc/sftpgo/sftpgo.json && \
sed -i 's|"backups"|"/srv/sftpgo/backups"|' /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo /srv/sftpgo
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups
WORKDIR /var/lib/sftpgo
USER 1000:1000
VOLUME [ "/var/lib/sftpgo", "/srv/sftpgo" ]
CMD sftpgo serve
CMD ["sftpgo", "serve"]


@@ -1,8 +1,8 @@
FROM golang:1.15-alpine AS builder
FROM golang:1.22-alpine3.20 AS builder
ENV GOFLAGS="-mod=readonly"
RUN apk add --update --no-cache bash ca-certificates curl git gcc g++
RUN apk -U upgrade --no-cache && apk add --update --no-cache bash ca-certificates curl git gcc g++
RUN mkdir -p /workspace
WORKDIR /workspace
@@ -22,49 +22,39 @@ ARG FEATURES
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --abbrev=8 --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM alpine:3.20
FROM alpine:3.12
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apk add --update --no-cache ca-certificates tzdata bash mailcap
RUN apk -U upgrade --no-cache && apk add --update --no-cache ca-certificates tzdata mailcap
SHELL ["/bin/bash", "-c"]
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache jq git rsync; fi
# set up nsswitch.conf for Go's "netgo" implementation
# https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457
RUN test ! -e /etc/nsswitch.conf && echo 'hosts: files dns' > /etc/nsswitch.conf
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups
RUN addgroup -g 1000 -S sftpgo && \
adduser -u 1000 -h /var/lib/sftpgo -s /sbin/nologin -G sftpgo -S -D -H sftpgo
# Install some optional packages used by SFTPGo features
RUN apk add --update --no-cache rsync git
adduser -u 1000 -h /var/lib/sftpgo -s /sbin/nologin -G sftpgo -S -D -H -g "SFTPGo user" sftpgo
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"bind_address\": \"127.0.0.1\",|\"bind_address\": \"\",|" /etc/sftpgo/sftpgo.json
RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' /etc/sftpgo/sftpgo.json && \
sed -i 's|"backups"|"/srv/sftpgo/backups"|' /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo /srv/sftpgo
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups
WORKDIR /var/lib/sftpgo
USER 1000:1000
VOLUME [ "/var/lib/sftpgo", "/srv/sftpgo" ]
CMD sftpgo serve
CMD ["sftpgo", "serve"]

57
Dockerfile.distroless Normal file

@@ -0,0 +1,57 @@
FROM golang:1.22-bookworm as builder
ENV CGO_ENABLED=0 GOFLAGS="-mod=readonly"
RUN apt-get update && apt-get -y upgrade && apt-get install --no-install-recommends -y media-types && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download
ARG COMMIT_SHA
# This ARG allows you to disable some optional features and it might be useful if you build the image yourself.
# For this variant we disable SQLite support since it requires CGO and therefore a C runtime, which is not installed
# in distroless/static-* images
ARG FEATURES
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --abbrev=8 --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -v -o sftpgo
# Modify the default configuration file
RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' sftpgo.json && \
sed -i 's|"backups"|"/srv/sftpgo/backups"|' sftpgo.json && \
sed -i 's|"sqlite"|"bolt"|' sftpgo.json
RUN mkdir /etc/sftpgo /var/lib/sftpgo /srv/sftpgo
FROM gcr.io/distroless/static-debian12
COPY --from=builder --chown=1000:1000 /etc/sftpgo /etc/sftpgo
COPY --from=builder --chown=1000:1000 /srv/sftpgo /srv/sftpgo
COPY --from=builder --chown=1000:1000 /var/lib/sftpgo /var/lib/sftpgo
COPY --from=builder --chown=1000:1000 /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
COPY --from=builder /etc/mime.types /etc/mime.types
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# These env vars are required to avoid the following error when calling user.Current():
# unable to get the current user: user: Current requires cgo or $USER set in environment
ENV USER=sftpgo
ENV HOME=/var/lib/sftpgo
WORKDIR /var/lib/sftpgo
USER 1000:1000
CMD ["sftpgo", "serve"]
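Once built, a quick local smoke test of this image might look like the sketch below; the image tag, the published ports (2022 for SFTP, 8080 for the web UI) and the idea that both services are reachable with the default configuration are assumptions, see the docker documentation in the repository for the supported options.

```bash
# Hypothetical local smoke test; tag and ports are assumptions, not an official quick-start.
docker build -t sftpgo-distroless -f Dockerfile.distroless .
docker run --rm -d --name sftpgo -p 2022:2022 -p 8080:8080 sftpgo-distroless
```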

145
LICENSE

@@ -1,5 +1,5 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
@@ -7,17 +7,15 @@
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
@@ -26,44 +24,34 @@ them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
@@ -72,7 +60,7 @@ modification follow.
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@@ -549,35 +537,45 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
@@ -635,40 +633,29 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
GNU Affero General Public License for more details.
You should have received a copy of the GNU General Public License
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

237
README.md

@@ -1,218 +1,93 @@
# SFTPGo
![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=master&event=push)
[![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/master/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/master)
[![Go Report Card](https://goreportcard.com/badge/github.com/drakkan/sftpgo)](https://goreportcard.com/report/github.com/drakkan/sftpgo)
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
[![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
[![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/main/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/main)
[![License: AGPL-3.0-only](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support, written in Go.
It can serve local filesystem, S3 (compatible) Object Storage, Google Cloud Storage and Azure Blob Storage.
Full-featured and highly configurable event-driven file transfer solution.
Server protocols: SFTP, HTTP/S, FTP/S, WebDAV.
Storage backends: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, other SFTP servers.
## Features
With SFTPGo you can leverage local and cloud storage backends for exchanging and storing files internally or with business partners using the same tools and processes you are already familiar with.
- SFTPGo uses virtual accounts stored inside a "data provider".
- SQLite, MySQL, PostgreSQL, bbolt (key/value store in pure Go) and in-memory data providers are supported.
- Each account is chrooted to its home directory.
- Public key and password authentication. Multiple public keys per user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily set up customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per user authentication methods. You can configure the allowed authentication methods for each user.
- Custom authentication via external programs/HTTP API is supported.
- Dynamic user modification before login via external programs/HTTP API is supported.
- Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
- Bandwidth throttling is supported, with distinct settings for upload and download.
- Per user maximum concurrent sessions.
- Per user and per directory permission management: list directory contents, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group and mode, change access and modification times.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Per user IP filters are supported: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory file extensions filters are supported: files can be allowed or denied based on their extensions.
- Virtual folders are supported: directories outside the user home directory can be exposed as virtual folders.
- Configurable custom commands and/or HTTP notifications on file upload, download, pre-delete, delete, rename, on SSH commands and on user add, update and delete.
- Automatically terminating idle connections.
- Atomic uploads are configurable.
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- Support for serving local filesystem, S3 Compatible Object Storage and Google Cloud Storage over SFTP/SCP/FTP/WebDAV.
- Per user protocols restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
- [REST API](./docs/rest-api.md) for users and folders management, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- Easy [migration](./examples/rest-api-cli#convert-users-from-other-stores) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- Configuration format is at your choice: JSON, TOML, YAML, HCL, envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
The WebAdmin UI allows you to easily create and manage your users, folders, groups and other resources.
## Platforms
The WebClient UI allows end users to change their credentials, browse and manage their files in the browser and set up two-factor authentication, which works with Microsoft Authenticator, Google Authenticator, Authy and other compatible apps.
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using a [GitHub Action](./.github/workflows/development.yml). The test cases are regularly executed manually on FreeBSD and pass there. Other *BSD variants should work too.
## Sponsors
## Requirements
We strongly believe in the Open Source software model, so we decided to make SFTPGo available to everyone, but maintaining and evolving SFTPGo takes a lot of time and work. To make development and maintenance sustainable you should consider supporting the project with a [sponsorship](https://github.com/sponsors/drakkan).
- Go 1.14 or higher as a build-only dependency.
- A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x.
- The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in memory data provider.
We also provide [professional services](https://sftpgo.com/#pricing) to support you in using SFTPGo to the fullest.
## Installation
The open source license grants you freedom but no assurance of help. So why would you rely on free software without support or any guarantee that it will stay healthy and maintained in the coming years?
Binary releases for Linux, macOS, and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.
Supporting the project benefits businesses and the community: if the project is financially sustainable under this business model, we don't have to restrict features and/or switch to an [Open-core](https://en.wikipedia.org/wiki/Open-core_model) model. The technology stays truly open source. Everyone wins.
An official Docker image is available. Documentation is [here](./docker/README.md).
It is important to understand that you should support SFTPGo and any other Open Source project you rely on for ongoing maintenance, even if you don't have any questions or need new features, to mitigate the business risk of a project you depend on going unmaintained, with its security and development velocity implications.
Some Linux distro packages are available:
### Thank you to our sponsors
- For Arch Linux via AUR:
- [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
- [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases, downloading the prebuilt Linux binary from GitHub. It does not require `git`, `gcc` or `go` to build.
- [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git master. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
#### Platinum sponsors
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
[<img src="./img/Aledade_logo.png" alt="Aledade logo" width="202" height="70">](https://www.aledade.com/)
</br></br>
[<img src="./img/jumptrading.png" alt="Jump Trading logo" width="362" height="63">](https://www.jumptrading.com/)
</br></br>
[<img src="./img/wpengine.png" alt="WP Engine logo" width="331" height="63">](https://wpengine.com/)
Alternately, you can [build from source](./docs/build-from-source.md).
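As a rough sketch only (assuming a recent Go toolchain and a cloned checkout; the official release builds add extra build tags and version ldflags, see ./docs/build-from-source.md and the release workflow above for the exact commands), a local build might look like:

```bash
# Minimal build-from-source sketch; official builds use additional tags and ldflags.
git clone https://github.com/drakkan/sftpgo.git
cd sftpgo
go build -trimpath -ldflags "-s -w" -o sftpgo
./sftpgo serve --help
```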
#### Silver sponsors
## Configuration
[<img src="./img/IDCS.png" alt="IDCS logo" width="212" height="51">](https://idcs.ip-paris.fr/)
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).
#### Bronze sponsors
Please make sure to [initialize the data provider](#data-provider-initialization) before running the daemon!
[<img src="./img/7digital.png" alt="7digital logo" width="178" height="56">](https://www.7digital.com/)
</br></br>
[<img src="./img/vps2day.png" alt="VPS2day logo" width="234" height="56">](https://www.vps2day.com/)
To start SFTPGo with the default settings, simply run:
## Support policy
```bash
sftpgo serve
```
You can use SFTPGo for free, respecting the obligations of the Open Source license, but please do not ask or expect free support as well.
Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.
Use [discussions](https://github.com/drakkan/sftpgo/discussions) to ask questions and get support from the community.
### Data provider initialization and update
If you report an invalid issue and/or ask for step-by-step support, your issue will be closed as invalid without further explanation and/or the "support request" label will be added. Invalid bug reports may confuse other users. Thanks for understanding.
Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.
## Documentation
SQL based data providers (SQLite, MySQL, PostgreSQL) require the creation of a database containing the required tables. Memory and bolt data providers do not require initialization, but they could require an update to the existing data after upgrading SFTPGo.
For PostgreSQL and MySQL providers, you need to create the configured database.
SFTPGo will attempt to automatically detect whether the data provider is initialized/updated and, if not, will attempt to initialize/update it at startup as needed.
Alternately, you can create/update the required data provider structures yourself using the `initprovider` command.
For example, you can simply execute the following command from the configuration directory:
```bash
sftpgo initprovider
```
Take a look at the CLI usage to learn how to specify a different configuration file:
```bash
sftpgo initprovider --help
```
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
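For example, assuming the usual `SFTPGO_<section>__<key>` environment-variable mapping for configuration keys (an assumption; editing `sftpgo.json` works just as well), this could look like:

```bash
# Hypothetical example: disable automatic provider checks/updates at startup.
# The variable name assumes SFTPGo's standard env-var mapping for data_provider.update_mode.
export SFTPGO_DATA_PROVIDER__UPDATE_MODE=1
sftpgo initprovider   # run the initialization/update manually when you choose
sftpgo serve
```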
## Users and folders management
After starting SFTPGo you can manage users and folders using:
- the [web based administration interface](./docs/web-admin.md)
- the [REST API](./docs/rest-api.md)
- the sample [REST API CLI](./examples/rest-api-cli)
To support embedded data providers like `bolt` and `SQLite`, we can't have a CLI that directly writes users and folders to the data provider; we always have to use the REST API.
## Tutorials
Some step-by-step tutorials can be found inside the source tree [howto](./docs/howto "How-to") directory.
## Authentication options
### External Authentication
Custom authentication methods can easily be added. SFTPGo supports external authentication modules, and writing a new backend can be as simple as a few lines of shell script. More information can be found [here](./docs/external-auth.md).
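As a minimal sketch of such a script (assuming the hook receives the credentials in the `SFTPGO_AUTHD_USERNAME` and `SFTPGO_AUTHD_PASSWORD` environment variables and replies with a JSON user on standard output; see ./docs/external-auth.md for the exact contract):

```bash
#!/bin/bash
# Hypothetical external auth hook accepting a single hard-coded account.
# Env var names and the reply format are assumptions, check docs/external-auth.md.
if [ "$SFTPGO_AUTHD_USERNAME" = "test_user" ] && [ "$SFTPGO_AUTHD_PASSWORD" = "secret" ]; then
  echo '{"status":1,"username":"test_user","home_dir":"/tmp/test_user","permissions":{"/":["*"]}}'
else
  echo '{"username":""}'
fi
```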
### Keyboard Interactive Authentication
Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.
More information can be found [here](./docs/keyboard-interactive.md).
## Dynamic user creation or modification
A user can be created or modified by an external program just before the login. More information about this can be found [here](./docs/dynamic-user-mod.md).
## Custom Actions
SFTPGo allows you to configure custom commands and/or HTTP notifications on file upload, download, delete, rename, on SSH commands and on user add, update and delete.
More information about custom actions can be found [here](./docs/custom-actions.md).
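As an illustrative sketch only (the `SFTPGO_ACTION*` environment variable names below are assumptions; the exact ones are documented in ./docs/custom-actions.md), a notification command could be a small script like:

```bash
#!/bin/bash
# Hypothetical notification command: append each notified event to a log file.
# The SFTPGO_ACTION* variable names are assumptions, see docs/custom-actions.md.
echo "$(date -u +%FT%TZ) action=${SFTPGO_ACTION} user=${SFTPGO_ACTION_USERNAME} path=${SFTPGO_ACTION_PATH}" >> /tmp/sftpgo-actions.log
```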
## Virtual folders
Directories outside the user home directory can be exposed as virtual folders; more information [here](./docs/virtual-folders.md).
## Other hooks
You can get notified as soon as a new connection is established using the [Post-connect hook](./docs/post-connect-hook.md) and after each login using the [Post-login hook](./docs/post-login-hook.md).
You can use your own hook to [check passwords](./docs/check-password-hook.md).
## Storage backends
### S3 Compatible Object Storage backends
Each user can be mapped to the whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about S3 integration can be found [here](./docs/s3.md).
### Google Cloud Storage backend
Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).
### Azure Blob Storage backend
Each user can be mapped with an Azure Blob Storage container or a container virtual folder. This way, the mapped container/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Azure Blob Storage integration can be found [here](./docs/azure-blob-storage.md).
### Other Storage backends
Adding new storage backends is quite easy:
- implement the [Fs interface](./vfs/vfs.go#L18 "interface for filesystem backends").
- update the user method `GetFilesystem` to return the new backend
- update the web interface and the REST API CLI
- add the flags for the new storage backend to the `portable` mode
Note that some backends require a pay-per-use account (or offer a free account only for a limited time). To be able to add support for such backends or to review pull requests, please provide a test account. The test account must remain available long enough to maintain the backend and run basic tests before each new release.
## Brute force protection
The [connection failed logs](./docs/logs.md) can be used for integration with tools such as [Fail2ban](http://www.fail2ban.org/). Examples of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in the fail2ban directory; a plausible way to wire them up is sketched below.
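The paths below assume a standard fail2ban layout; adapt them to your distribution.

```bash
# Hypothetical setup: copy the provided jails and filters into fail2ban's config dirs.
sudo cp fail2ban/filters/* /etc/fail2ban/filter.d/
sudo cp fail2ban/jails/* /etc/fail2ban/jail.d/
sudo systemctl restart fail2ban
sudo fail2ban-client status
```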
## Account's configuration properties
Detailed information about account configuration properties can be found [here](./docs/account.md).
## Performance
SFTPGo can easily saturate a Gigabit connection on low-end hardware with no special configuration; this is generally enough for most use cases.
More in-depth analysis of performance can be found [here](./docs/performance.md).
You can read more about supported features and documentation at [sftpgo.github.io](https://sftpgo.github.io/).
## Release Cadence
SFTPGo releases are feature-driven; we don't have a fixed time-based schedule. As a rough estimate, you can expect 1 or 2 new releases per year.
SFTPGo releases are feature-driven; we don't have a fixed time-based schedule. As a rough estimate, you can expect 1 or 2 new major releases per year and several bug fix releases.
## Acknowledgements
SFTPGo makes use of the third party libraries listed inside [go.mod](./go.mod).
Some code was initially taken from [Pterodactyl SFTP Server](https://github.com/pterodactyl/sftp-server).
We are very grateful to all the people who contributed with ideas and/or pull requests.
Thank you to [ysura](https://www.ysura.com/) for granting us stable access to a test AWS S3 account.
Thank you to [KeenThemes](https://keenthemes.com/) for granting us a custom license to use their amazing [Mega Bundle](https://keenthemes.com/products/templates-mega-bundle) for SFTPGo UI.
Thank you to [Crowdin](https://crowdin.com/) for granting us an Open Source License.
Thank you to [Incode](https://www.incode.it/) for helping us to improve the UI/UX.
## License
GNU GPLv3
SFTPGo source code is licensed under the GNU AGPL-3.0-only.
The [theme](https://keenthemes.com/products/templates-mega-bundle) used in the WebAdmin and WebClient user interfaces is proprietary, which means:
- KeenThemes HTML/CSS/JS components are allowed for use only within the SFTPGo product and may not be used in a resaleable HTML template that could compete with KeenThemes products in any way.
- The SFTPGo WebAdmin and WebClient user interfaces (HTML, CSS and JS components) based on this theme are allowed for use only within the SFTPGo product and therefore cannot be used in derivative works/products without an explicit grant from the [SFTPGo Team](mailto:support@sftpgo.com).
More information about [compliance](https://sftpgo.com/compliance.html).
## Copyright
Copyright (C) 2019 Nicola Murino


@@ -2,11 +2,9 @@
## Supported Versions
Only the current release of the software is actively supported. If you need
help backporting fixes into an older release, feel free to ask.
Only the current release of the software is actively supported.
[Contact us](mailto:support@sftpgo.com) if you need early security patches and enterprise-grade security.
## Reporting a Vulnerability
Email your vulnerability information to SFTPGo's maintainer:
Nicola Murino <nicola.murino@gmail.com>
To report (possible) security issues in SFTPGo, please either send an email to the [SFTPGo Team](mailto:support@sftpgo.com) or use GitHub's [private reporting feature](https://github.com/drakkan/sftpgo/security/advisories/new).


@@ -1,12 +0,0 @@
package cmd
import "github.com/spf13/cobra"
var genCmd = &cobra.Command{
Use: "gen",
Short: "A collection of useful generators",
}
func init() {
rootCmd.AddCommand(genCmd)
}


@@ -1,76 +0,0 @@
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/logger"
)
var genCompletionCmd = &cobra.Command{
Use: "completion [bash|zsh|fish|powershell]",
Short: "Generate shell completion script",
Long: `To load completions:
Bash:
$ source <(sftpgo gen completion bash)
To load completions for each session, execute once:
Linux:
$ sudo sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo
MacOS:
$ sudo sftpgo gen completion bash > /usr/local/etc/bash_completion.d/sftpgo
Zsh:
If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:
$ echo "autoload -U compinit; compinit" >> ~/.zshrc
To load completions for each session, execute once:
$ sftpgo gen completion zsh > "${fpath[1]}/_sftpgo"
Fish:
$ sftpgo gen completion fish | source
To load completions for each session, execute once:
$ sftpgo gen completion fish > ~/.config/fish/completions/sftpgo.fish
`,
DisableFlagsInUseLine: true,
ValidArgs: []string{"bash", "zsh", "fish", "powershell"},
Args: cobra.ExactValidArgs(1),
Run: func(cmd *cobra.Command, args []string) {
var err error
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
switch args[0] {
case "bash":
err = cmd.Root().GenBashCompletion(os.Stdout)
case "zsh":
err = cmd.Root().GenZshCompletion(os.Stdout)
case "fish":
err = cmd.Root().GenFishCompletion(os.Stdout, true)
case "powershell":
err = cmd.Root().GenPowerShellCompletion(os.Stdout)
}
if err != nil {
logger.WarnToConsole("Unable to generate shell completion script: %v", err)
os.Exit(1)
}
},
}
func init() {
genCmd.AddCommand(genCompletionCmd)
}

View file

@ -1,52 +0,0 @@
package cmd
import (
"fmt"
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
)
var (
manDir string
genManCmd = &cobra.Command{
Use: "man",
Short: "Generate man pages for SFTPGo CLI",
Long: `This command automatically generates up-to-date man pages of SFTPGo's
command-line interface. By default, it creates the man page files
in the "man" directory under the current directory.
`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if _, err := os.Stat(manDir); os.IsNotExist(err) {
err = os.MkdirAll(manDir, os.ModePerm)
if err != nil {
logger.WarnToConsole("Unable to generate man page files: %v", err)
os.Exit(1)
}
}
header := &doc.GenManHeader{
Section: "1",
Manual: "SFTPGo Manual",
Source: fmt.Sprintf("SFTPGo %v", version.Get().Version),
}
cmd.Root().DisableAutoGenTag = true
err := doc.GenManTree(cmd.Root(), header, manDir)
if err != nil {
logger.WarnToConsole("Unable to generate man page files: %v", err)
os.Exit(1)
}
},
}
)
func init() {
genManCmd.Flags().StringVarP(&manDir, "dir", "d", "man", "The directory to write the man pages")
genCmd.AddCommand(genManCmd)
}
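As a quick illustration (the destination directory is hypothetical), the man pages can be written to a system-wide location using the -d flag defined above:
$ sudo sftpgo gen man -d /usr/local/share/man/man1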

View file

@ -1,64 +0,0 @@
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
var (
initProviderCmd = &cobra.Command{
Use: "initprovider",
Short: "Initializes and/or updates the configured data provider",
Long: `This command reads the data provider connection details from the specified
configuration file and creates the initial structure or updates the existing one,
as needed.
Some data providers such as bolt and memory do not require an initialization
but they could require an update to the existing data after upgrading SFTPGo.
For SQLite/bolt providers the database file will be auto-created if missing.
For PostgreSQL and MySQL providers you need to create the configured database,
this command will create/update the required tables as needed.
To initialize/update the data provider from the configuration directory simply use:
$ sftpgo initprovider
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = utils.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
return
}
providerConf := config.GetProviderConf()
logger.InfoToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.InitializeDatabase(providerConf, configDir)
if err == nil {
logger.InfoToConsole("Data provider successfully initialized/updated")
} else if err == dataprovider.ErrNoInitRequired {
logger.InfoToConsole("%v", err.Error())
} else {
logger.WarnToConsole("Unable to initialize/update the data provider: %v", err)
os.Exit(1)
}
},
}
)
func init() {
rootCmd.AddCommand(initProviderCmd)
addConfigFlags(initProviderCmd)
}

View file

@ -1,343 +0,0 @@
// +build !noportable
package cmd
import (
"fmt"
"io/ioutil"
"os"
"path"
"path/filepath"
"strings"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
)
var (
directoryToServe string
portableSFTPDPort int
portableAdvertiseService bool
portableAdvertiseCredentials bool
portableUsername string
portablePassword string
portableLogFile string
portableLogVerbose bool
portablePublicKeys []string
portablePermissions []string
portableSSHCommands []string
portableAllowedExtensions []string
portableDeniedExtensions []string
portableFsProvider int
portableS3Bucket string
portableS3Region string
portableS3AccessKey string
portableS3AccessSecret string
portableS3Endpoint string
portableS3StorageClass string
portableS3KeyPrefix string
portableS3ULPartSize int
portableS3ULConcurrency int
portableGCSBucket string
portableGCSCredentialsFile string
portableGCSAutoCredentials int
portableGCSStorageClass string
portableGCSKeyPrefix string
portableFTPDPort int
portableFTPSCert string
portableFTPSKey string
portableWebDAVPort int
portableWebDAVCert string
portableWebDAVKey string
portableAzContainer string
portableAzAccountName string
portableAzAccountKey string
portableAzEndpoint string
portableAzAccessTier string
portableAzSASURL string
portableAzKeyPrefix string
portableAzULPartSize int
portableAzULConcurrency int
portableAzUseEmulator bool
portableCmd = &cobra.Command{
Use: "portable",
Short: "Serve a single directory",
Long: `To serve the current working directory with auto generated credentials simply
use:
$ sftpgo portable
Please take a look at the usage below to customize the serving parameters`,
Run: func(cmd *cobra.Command, args []string) {
portableDir := directoryToServe
fsProvider := dataprovider.FilesystemProvider(portableFsProvider)
if !filepath.IsAbs(portableDir) {
if fsProvider == dataprovider.LocalFilesystemProvider {
portableDir, _ = filepath.Abs(portableDir)
} else {
portableDir = os.TempDir()
}
}
permissions := make(map[string][]string)
permissions["/"] = portablePermissions
var portableGCSCredentials []byte
if fsProvider == dataprovider.GCSFilesystemProvider && len(portableGCSCredentialsFile) > 0 {
fi, err := os.Stat(portableGCSCredentialsFile)
if err != nil {
fmt.Printf("Invalid GCS credentials file: %v\n", err)
os.Exit(1)
}
if fi.Size() > 1048576 {
fmt.Printf("Invalid GCS credentials file: %#v is too big %v/1048576 bytes\n", portableGCSCredentialsFile,
fi.Size())
os.Exit(1)
}
creds, err := ioutil.ReadFile(portableGCSCredentialsFile)
if err != nil {
fmt.Printf("Unable to read credentials file: %v\n", err)
}
portableGCSCredentials = creds
portableGCSAutoCredentials = 0
}
if portableFTPDPort >= 0 && len(portableFTPSCert) > 0 && len(portableFTPSKey) > 0 {
_, err := common.NewCertManager(portableFTPSCert, portableFTPSKey, "FTP portable")
if err != nil {
fmt.Printf("Unable to load FTPS key pair, cert file %#v key file %#v error: %v\n",
portableFTPSCert, portableFTPSKey, err)
os.Exit(1)
}
}
if portableWebDAVPort > 0 && len(portableWebDAVCert) > 0 && len(portableWebDAVKey) > 0 {
_, err := common.NewCertManager(portableWebDAVCert, portableWebDAVKey, "WebDAV portable")
if err != nil {
fmt.Printf("Unable to load WebDAV key pair, cert file %#v key file %#v error: %v\n",
portableWebDAVCert, portableWebDAVKey, err)
os.Exit(1)
}
}
service := service.Service{
ConfigDir: filepath.Clean(defaultConfigDir),
ConfigFile: defaultConfigName,
LogFilePath: portableLogFile,
LogMaxSize: defaultLogMaxSize,
LogMaxBackups: defaultLogMaxBackup,
LogMaxAge: defaultLogMaxAge,
LogCompress: defaultLogCompress,
LogVerbose: portableLogVerbose,
Profiler: defaultProfiler,
Shutdown: make(chan bool),
PortableMode: 1,
PortableUser: dataprovider.User{
Username: portableUsername,
Password: portablePassword,
PublicKeys: portablePublicKeys,
Permissions: permissions,
HomeDir: portableDir,
Status: 1,
FsConfig: dataprovider.Filesystem{
Provider: dataprovider.FilesystemProvider(portableFsProvider),
S3Config: vfs.S3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
AccessSecret: portableS3AccessSecret,
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
KeyPrefix: portableS3KeyPrefix,
UploadPartSize: int64(portableS3ULPartSize),
UploadConcurrency: portableS3ULConcurrency,
},
GCSConfig: vfs.GCSFsConfig{
Bucket: portableGCSBucket,
Credentials: portableGCSCredentials,
AutomaticCredentials: portableGCSAutoCredentials,
StorageClass: portableGCSStorageClass,
KeyPrefix: portableGCSKeyPrefix,
},
AzBlobConfig: vfs.AzBlobFsConfig{
Container: portableAzContainer,
AccountName: portableAzAccountName,
AccountKey: portableAzAccountKey,
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
SASURL: portableAzSASURL,
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
},
},
Filters: dataprovider.UserFilters{
FileExtensions: parseFileExtensionsFilters(),
},
},
}
if err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableSSHCommands, portableAdvertiseService,
portableAdvertiseCredentials, portableFTPSCert, portableFTPSKey, portableWebDAVCert, portableWebDAVKey); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
}
}
os.Exit(1)
},
}
)
func init() {
version.AddFeature("+portable")
portableCmd.Flags().StringVarP(&directoryToServe, "directory", "d", ".", `Path to the directory to serve.
This can be an absolute path or a path
relative to the current directory
`)
portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, "0 means a random unprivileged port")
portableCmd.Flags().IntVar(&portableFTPDPort, "ftpd-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableWebDAVPort, "webdav-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().StringSliceVarP(&portableSSHCommands, "ssh-commands", "c", sftpd.GetDefaultSSHCommands(),
`SSH commands to enable.
"*" means any supported SSH command
including scp
`)
portableCmd.Flags().StringVarP(&portableUsername, "username", "u", "", `Leave empty to use an auto generated
value`)
portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", `Leave empty to use an auto generated
value`)
portableCmd.Flags().StringVarP(&portableLogFile, logFilePathFlag, "l", "", "Leave empty to disable logging")
portableCmd.Flags().BoolVarP(&portableLogVerbose, logVerboseFlag, "v", false, "Enable verbose logs")
portableCmd.Flags().StringSliceVarP(&portablePublicKeys, "public-key", "k", []string{}, "")
portableCmd.Flags().StringSliceVarP(&portablePermissions, "permissions", "g", []string{"list", "download"},
`User's permissions. "*" means any
permission`)
portableCmd.Flags().StringArrayVar(&portableAllowedExtensions, "allowed-extensions", []string{},
`Allowed file extensions case
insensitive. The format is
/dir::ext1,ext2.
For example: "/somedir::.jpg,.png"`)
portableCmd.Flags().StringArrayVar(&portableDeniedExtensions, "denied-extensions", []string{},
`Denied file extensions case
insensitive. The format is
/dir::ext1,ext2.
For example: "/somedir::.jpg,.png"`)
portableCmd.Flags().BoolVarP(&portableAdvertiseService, "advertise-service", "S", false,
`Advertise SFTP/FTP service using
multicast DNS`)
portableCmd.Flags().BoolVarP(&portableAdvertiseCredentials, "advertise-credentials", "C", false,
`If the SFTP/FTP service is
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", int(dataprovider.LocalFilesystemProvider), `0 => local filesystem
1 => AWS S3 compatible
2 => Google Cloud Storage
3 => Azure Blob Storage`)
portableCmd.Flags().StringVar(&portableS3Bucket, "s3-bucket", "", "")
portableCmd.Flags().StringVar(&portableS3Region, "s3-region", "", "")
portableCmd.Flags().StringVar(&portableS3AccessKey, "s3-access-key", "", "")
portableCmd.Flags().StringVar(&portableS3AccessSecret, "s3-access-secret", "", "")
portableCmd.Flags().StringVar(&portableS3Endpoint, "s3-endpoint", "", "")
portableCmd.Flags().StringVar(&portableS3StorageClass, "s3-storage-class", "", "")
portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().IntVar(&portableS3ULPartSize, "s3-upload-part-size", 5, `The buffer size for multipart uploads
(MB)`)
portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().StringVar(&portableGCSCredentialsFile, "gcs-credentials-file", "", `Google Cloud Storage JSON credentials
file`)
portableCmd.Flags().IntVar(&portableGCSAutoCredentials, "gcs-automatic-credentials", 1, `0 means explicit credentials using
a JSON credentials file, 1 automatic
`)
portableCmd.Flags().StringVar(&portableFTPSCert, "ftpd-cert", "", "Path to the certificate file for FTPS")
portableCmd.Flags().StringVar(&portableFTPSKey, "ftpd-key", "", "Path to the key file for FTPS")
portableCmd.Flags().StringVar(&portableWebDAVCert, "webdav-cert", "", `Path to the certificate file for WebDAV
over HTTPS`)
portableCmd.Flags().StringVar(&portableWebDAVKey, "webdav-key", "", `Path to the key file for WebDAV over
HTTPS`)
portableCmd.Flags().StringVar(&portableAzContainer, "az-container", "", "")
portableCmd.Flags().StringVar(&portableAzAccountName, "az-account-name", "", "")
portableCmd.Flags().StringVar(&portableAzAccountKey, "az-account-key", "", "")
portableCmd.Flags().StringVar(&portableAzSASURL, "az-sas-url", "", `Shared access signature URL`)
portableCmd.Flags().StringVar(&portableAzEndpoint, "az-endpoint", "", `Leave empty to use the default:
"blob.core.windows.net"`)
portableCmd.Flags().StringVar(&portableAzAccessTier, "az-access-tier", "", `Leave empty to use the default
container setting`)
portableCmd.Flags().StringVar(&portableAzKeyPrefix, "az-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().IntVar(&portableAzULPartSize, "az-upload-part-size", 4, `The buffer size for multipart uploads
(MB)`)
portableCmd.Flags().IntVar(&portableAzULConcurrency, "az-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().BoolVar(&portableAzUseEmulator, "az-use-emulator", false, "")
rootCmd.AddCommand(portableCmd)
}
func parseFileExtensionsFilters() []dataprovider.ExtensionsFilter {
var extensions []dataprovider.ExtensionsFilter
for _, val := range portableAllowedExtensions {
p, exts := getExtensionsFilterValues(strings.TrimSpace(val))
if len(p) > 0 {
extensions = append(extensions, dataprovider.ExtensionsFilter{
Path: path.Clean(p),
AllowedExtensions: exts,
DeniedExtensions: []string{},
})
}
}
for _, val := range portableDeniedExtensions {
p, exts := getExtensionsFilterValues(strings.TrimSpace(val))
if len(p) > 0 {
found := false
for index, e := range extensions {
if path.Clean(e.Path) == path.Clean(p) {
extensions[index].DeniedExtensions = append(extensions[index].DeniedExtensions, exts...)
found = true
break
}
}
if !found {
extensions = append(extensions, dataprovider.ExtensionsFilter{
Path: path.Clean(p),
AllowedExtensions: []string{},
DeniedExtensions: exts,
})
}
}
}
return extensions
}
func getExtensionsFilterValues(value string) (string, []string) {
if strings.Contains(value, "::") {
dirExts := strings.Split(value, "::")
if len(dirExts) > 1 {
dir := strings.TrimSpace(dirExts[0])
exts := []string{}
for _, e := range strings.Split(dirExts[1], ",") {
cleanedExt := strings.TrimSpace(e)
if len(cleanedExt) > 0 {
exts = append(exts, cleanedExt)
}
}
if len(dir) > 0 && len(exts) > 0 {
return dir, exts
}
}
}
return "", nil
}
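To illustrate how the flags above combine (the directory, credentials and extension values are hypothetical), a portable instance restricted to PDF and text files under a virtual "/docs" path could be started with:
$ sftpgo portable -d /tmp/shared -u demo -p secret -g "list,download,upload" --allowed-extensions "/docs::.pdf,.txt"
Each --allowed-extensions value is split on "::" by getExtensionsFilterValues: the part before the separator is the virtual directory and the comma-separated part after it is the list of allowed extensions.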

View file

@ -1,9 +0,0 @@
// +build noportable
package cmd
import "github.com/drakkan/sftpgo/version"
func init() {
version.AddFeature("-portable")
}

View file

@ -1,35 +0,0 @@
package cmd
import (
"fmt"
"os"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
)
var (
reloadCmd = &cobra.Command{
Use: "reload",
Short: "Reload the SFTPGo Windows Service sending a \"paramchange\" request",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
err := s.Reload()
if err != nil {
fmt.Printf("Error sending reload signal: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Reload signal sent!\r\n")
}
},
}
)
func init() {
serviceCmd.AddCommand(reloadCmd)
}

View file

@ -1,35 +0,0 @@
package cmd
import (
"fmt"
"os"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
)
var (
rotateLogCmd = &cobra.Command{
Use: "rotatelogs",
Short: "Signal to the running service to rotate the logs",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
err := s.RotateLogFile()
if err != nil {
fmt.Printf("Error sending rotate log file signal to the service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Rotate log file signal sent!\r\n")
}
},
}
)
func init() {
serviceCmd.AddCommand(rotateLogCmd)
}

View file

@ -1,53 +0,0 @@
package cmd
import (
"os"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
)
var (
serveCmd = &cobra.Command{
Use: "serve",
Short: "Start the SFTP Server",
Long: `To start the SFTPGo with the default values for the command line flags simply
use:
$ sftpgo serve
Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
service := service.Service{
ConfigDir: utils.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LoadDataFrom: loadDataFrom,
LoadDataMode: loadDataMode,
LoadDataQuotaScan: loadDataQuotaScan,
LoadDataClean: loadDataClean,
Profiler: profiler,
Shutdown: make(chan bool),
}
if err := service.Start(); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
}
}
os.Exit(1)
},
}
)
func init() {
rootCmd.AddCommand(serveCmd)
addServeFlags(serveCmd)
}

View file

@ -1,16 +0,0 @@
package cmd
import (
"github.com/spf13/cobra"
)
var (
serviceCmd = &cobra.Command{
Use: "service",
Short: "Manage SFTPGo Windows Service",
}
)
func init() {
rootCmd.AddCommand(serviceCmd)
}

View file

@ -1,52 +0,0 @@
package cmd
import (
"fmt"
"os"
"path/filepath"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
)
var (
startCmd = &cobra.Command{
Use: "start",
Short: "Start SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
configDir = utils.CleanDirInput(configDir)
if !filepath.IsAbs(logFilePath) && utils.IsFileInputValid(logFilePath) {
logFilePath = filepath.Join(configDir, logFilePath)
}
s := service.Service{
ConfigDir: configDir,
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
Profiler: profiler,
Shutdown: make(chan bool),
}
winService := service.WindowsService{
Service: s,
}
err := winService.RunService()
if err != nil {
fmt.Printf("Error starting service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service started!\r\n")
}
},
}
)
func init() {
serviceCmd.AddCommand(startCmd)
addServeFlags(startCmd)
}

View file

@ -1,180 +0,0 @@
package cmd
import (
"io"
"os"
"os/user"
"path/filepath"
"github.com/rs/xid"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/version"
)
var (
logJournalD = false
preserveHomeDir = false
baseHomeDir = ""
subsystemCmd = &cobra.Command{
Use: "startsubsys",
Short: "Use SFTPGo as SFTP file transfer subsystem",
Long: `In this mode SFTPGo speaks the server side of SFTP protocol to stdout and
expects client requests from stdin.
This mode is not intended to be called directly, but from sshd using the
Subsystem option.
For example adding a line like this one in "/etc/ssh/sshd_config":
Subsystem sftp sftpgo startsubsys
Command-line flags should be specified in the Subsystem declaration.
`,
Run: func(cmd *cobra.Command, args []string) {
logSender := "startsubsys"
connectionID := xid.New().String()
logLevel := zerolog.DebugLevel
if !logVerbose {
logLevel = zerolog.InfoLevel
}
if logJournalD {
logger.InitJournalDLogger(logLevel)
} else {
logger.InitStdErrLogger(logLevel)
}
osUser, err := user.Current()
if err != nil {
logger.Error(logSender, connectionID, "unable to get the current user: %v", err)
os.Exit(1)
}
username := osUser.Username
homedir := osUser.HomeDir
logger.Info(logSender, connectionID, "starting SFTPGo %v as subsystem, user %#v home dir %#v config dir %#v base home dir %#v",
version.Get(), username, homedir, configDir, baseHomeDir)
err = config.LoadConfig(configDir, configFile)
if err != nil {
logger.Error(logSender, connectionID, "unable to load configuration: %v", err)
os.Exit(1)
}
commonConfig := config.GetCommonConfig()
// idle connections are managed externally
commonConfig.IdleTimeout = 0
config.SetCommonConfig(commonConfig)
common.Initialize(config.GetCommonConfig())
dataProviderConf := config.GetProviderConf()
if dataProviderConf.Driver == dataprovider.SQLiteDataProviderName || dataProviderConf.Driver == dataprovider.BoltDataProviderName {
logger.Debug(logSender, connectionID, "data provider %#v not supported in subsystem mode, using %#v provider",
dataProviderConf.Driver, dataprovider.MemoryDataProviderName)
dataProviderConf.Driver = dataprovider.MemoryDataProviderName
dataProviderConf.Name = ""
dataProviderConf.PreferDatabaseCredentials = true
}
config.SetProviderConf(dataProviderConf)
err = dataprovider.Initialize(dataProviderConf, configDir)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize the data provider: %v", err)
os.Exit(1)
}
httpConfig := config.GetHTTPConfig()
httpConfig.Initialize(configDir)
user, err := dataprovider.UserExists(username)
if err == nil {
if user.HomeDir != filepath.Clean(homedir) && !preserveHomeDir {
// update the user
user.HomeDir = filepath.Clean(homedir)
err = dataprovider.UpdateUser(user)
if err != nil {
logger.Error(logSender, connectionID, "unable to update user %#v: %v", username, err)
os.Exit(1)
}
}
} else {
user.Username = username
if baseHomeDir != "" && filepath.IsAbs(baseHomeDir) {
user.HomeDir = filepath.Join(baseHomeDir, username)
} else {
user.HomeDir = filepath.Clean(homedir)
}
logger.Debug(logSender, connectionID, "home dir for new user %#v", user.HomeDir)
user.Password = connectionID
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
err = dataprovider.AddUser(user)
if err != nil {
logger.Error(logSender, connectionID, "unable to add user %#v: %v", username, err)
os.Exit(1)
}
}
err = sftpd.ServeSubSystemConnection(user, connectionID, os.Stdin, os.Stdout)
if err != nil && err != io.EOF {
logger.Warn(logSender, connectionID, "serving subsystem finished with error: %v", err)
os.Exit(1)
}
logger.Info(logSender, connectionID, "serving subsystem finished")
os.Exit(0)
},
}
)
func init() {
subsystemCmd.Flags().BoolVarP(&preserveHomeDir, "preserve-home", "p", false, `If the user already exists, the existing home
directory will not be changed`)
subsystemCmd.Flags().StringVarP(&baseHomeDir, "base-home-dir", "d", "", `If the user does not exist specify an alternate
starting directory. The home directory for a new
user will be:
[base-home-dir]/[username]
base-home-dir must be an absolute path.`)
subsystemCmd.Flags().BoolVarP(&logJournalD, "log-to-journald", "j", false, `Send logs to journald. Only available on Linux.
Use:
$ journalctl -o verbose -f
To see full logs.
If not set, the logs will be sent to the standard
error`)
viper.SetDefault(configDirKey, defaultConfigDir)
viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR") //nolint:errcheck // err is not nil only if the key to bind is missing
subsystemCmd.Flags().StringVarP(&configDir, configDirFlag, "c", viper.GetString(configDirKey),
`Location for SFTPGo config dir. This directory
should contain the "sftpgo" configuration file
or the configured config-file and it is used as
the base for files with a relative path (eg. the
private keys for the SFTP server, the SQLite
database if you use SQLite as data provider).
This flag can be set using SFTPGO_CONFIG_DIR
env var too.`)
viper.BindPFlag(configDirKey, subsystemCmd.Flags().Lookup(configDirFlag)) //nolint:errcheck
viper.SetDefault(configFileKey, defaultConfigName)
viper.BindEnv(configFileKey, "SFTPGO_CONFIG_FILE") //nolint:errcheck
subsystemCmd.Flags().StringVarP(&configFile, configFileFlag, "f", viper.GetString(configFileKey),
`Name for SFTPGo configuration file. It must be
the name of a file stored in config-dir not the
absolute path to the configuration file. The
specified file name must have no extension; we
automatically load JSON, YAML, TOML, HCL and
Java properties. Therefore if you set "sftpgo"
then "sftpgo.json", "sftpgo.yaml" and so on
are searched.
This flag can be set using SFTPGO_CONFIG_FILE
env var too.`)
viper.BindPFlag(configFileKey, subsystemCmd.Flags().Lookup(configFileFlag)) //nolint:errcheck
viper.SetDefault(logVerboseKey, defaultLogVerbose)
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
subsystemCmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logVerboseKey, subsystemCmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
rootCmd.AddCommand(subsystemCmd)
}
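For reference, a hypothetical /etc/ssh/sshd_config entry combining the flags defined above (the configuration directory is illustrative) could be:
Subsystem sftp sftpgo startsubsys -j --preserve-home -c /etc/sftpgo
This sends the logs to journald, keeps the existing home directory for users already present in the data provider and reads the configuration from /etc/sftpgo.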

View file

@ -1,35 +0,0 @@
package cmd
import (
"fmt"
"os"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
)
var (
statusCmd = &cobra.Command{
Use: "status",
Short: "Retrieve the status for the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
status, err := s.Status()
if err != nil {
fmt.Printf("Error querying service status: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service status: %#v\r\n", status.String())
}
},
}
)
func init() {
serviceCmd.AddCommand(statusCmd)
}

View file

@ -1,35 +0,0 @@
package cmd
import (
"fmt"
"os"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
)
var (
stopCmd = &cobra.Command{
Use: "stop",
Short: "Stop SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
err := s.Stop()
if err != nil {
fmt.Printf("Error stopping service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service stopped!\r\n")
}
},
}
)
func init() {
serviceCmd.AddCommand(stopCmd)
}

View file

@ -1,35 +0,0 @@
package cmd
import (
"fmt"
"os"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
)
var (
uninstallCmd = &cobra.Command{
Use: "uninstall",
Short: "Uninstall SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
err := s.Uninstall()
if err != nil {
fmt.Printf("Error removing service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service uninstalled\r\n")
}
},
}
)
func init() {
serviceCmd.AddCommand(uninstallCmd)
}

View file

@ -1,205 +0,0 @@
package common
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
var (
errUnconfiguredAction = errors.New("no hook is configured for this action")
errNoHook = errors.New("unable to execute action, no hook defined")
errUnexpectedHTTResponse = errors.New("unexpected HTTP response code")
)
// ProtocolActions defines the action to execute on file operations and SSH commands
type ProtocolActions struct {
// Valid values are download, upload, pre-delete, delete, rename, ssh_cmd. Empty slice to disable
ExecuteOn []string `json:"execute_on" mapstructure:"execute_on"`
// Absolute path to an external program or an HTTP URL
Hook string `json:"hook" mapstructure:"hook"`
}
var actionHandler ActionHandler = defaultActionHandler{}
// InitializeActionHandler lets the user choose an action handler implementation.
//
// Do NOT call this function after application initialization.
func InitializeActionHandler(handler ActionHandler) {
actionHandler = handler
}
// SSHCommandActionNotification executes the defined action for the specified SSH command.
func SSHCommandActionNotification(user *dataprovider.User, filePath, target, sshCmd string, err error) {
notification := newActionNotification(user, operationSSHCmd, filePath, target, sshCmd, ProtocolSSH, 0, err)
go actionHandler.Handle(notification) // nolint:errcheck
}
// ActionHandler handles a notification for a Protocol Action.
type ActionHandler interface {
Handle(notification ActionNotification) error
}
// ActionNotification defines a notification for a Protocol Action.
type ActionNotification struct {
Action string `json:"action"`
Username string `json:"username"`
Path string `json:"path"`
TargetPath string `json:"target_path,omitempty"`
SSHCmd string `json:"ssh_cmd,omitempty"`
FileSize int64 `json:"file_size,omitempty"`
FsProvider int `json:"fs_provider"`
Bucket string `json:"bucket,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
Status int `json:"status"`
Protocol string `json:"protocol"`
}
func newActionNotification(
user *dataprovider.User,
operation, filePath, target, sshCmd, protocol string,
fileSize int64,
err error,
) ActionNotification {
var bucket, endpoint string
status := 1
if user.FsConfig.Provider == dataprovider.S3FilesystemProvider {
bucket = user.FsConfig.S3Config.Bucket
endpoint = user.FsConfig.S3Config.Endpoint
} else if user.FsConfig.Provider == dataprovider.GCSFilesystemProvider {
bucket = user.FsConfig.GCSConfig.Bucket
} else if user.FsConfig.Provider == dataprovider.AzureBlobFilesystemProvider {
bucket = user.FsConfig.AzBlobConfig.Container
if user.FsConfig.AzBlobConfig.SASURL != "" {
endpoint = user.FsConfig.AzBlobConfig.SASURL
} else {
endpoint = user.FsConfig.AzBlobConfig.Endpoint
}
}
if err == ErrQuotaExceeded {
status = 2
} else if err != nil {
status = 0
}
return ActionNotification{
Action: operation,
Username: user.Username,
Path: filePath,
TargetPath: target,
SSHCmd: sshCmd,
FileSize: fileSize,
FsProvider: int(user.FsConfig.Provider),
Bucket: bucket,
Endpoint: endpoint,
Status: status,
Protocol: protocol,
}
}
type defaultActionHandler struct{}
func (h defaultActionHandler) Handle(notification ActionNotification) error {
if !utils.IsStringInSlice(notification.Action, Config.Actions.ExecuteOn) {
return errUnconfiguredAction
}
if Config.Actions.Hook == "" {
logger.Warn(notification.Protocol, "", "Unable to send notification, no hook is defined")
return errNoHook
}
if strings.HasPrefix(Config.Actions.Hook, "http") {
return h.handleHTTP(notification)
}
return h.handleCommand(notification)
}
func (h defaultActionHandler) handleHTTP(notification ActionNotification) error {
u, err := url.Parse(Config.Actions.Hook)
if err != nil {
logger.Warn(notification.Protocol, "", "Invalid hook %#v for operation %#v: %v", Config.Actions.Hook, notification.Action, err)
return err
}
startTime := time.Now()
respCode := 0
httpClient := httpclient.GetHTTPClient()
var b bytes.Buffer
_ = json.NewEncoder(&b).Encode(notification)
resp, err := httpClient.Post(u.String(), "application/json", &b)
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
if respCode != http.StatusOK {
err = errUnexpectedHTTResponse
}
}
logger.Debug(notification.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v", notification.Action, u.String(), respCode, time.Since(startTime), err)
return err
}
func (h defaultActionHandler) handleCommand(notification ActionNotification) error {
if !filepath.IsAbs(Config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %#v", Config.Actions.Hook)
logger.Warn(notification.Protocol, "", "unable to execute notification command: %v", err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd)
cmd.Env = append(os.Environ(), notificationAsEnvVars(notification)...)
startTime := time.Now()
err := cmd.Run()
logger.Debug(notification.Protocol, "", "executed command %#v with arguments: %#v, %#v, %#v, %#v, %#v, elapsed: %v, error: %v",
Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd, time.Since(startTime), err)
return err
}
func notificationAsEnvVars(notification ActionNotification) []string {
return []string{
fmt.Sprintf("SFTPGO_ACTION=%v", notification.Action),
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", notification.Username),
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", notification.Path),
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", notification.TargetPath),
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", notification.SSHCmd),
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", notification.FileSize),
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", notification.FsProvider),
fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", notification.Bucket),
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", notification.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", notification.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", notification.Protocol),
}
}
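The handleCommand path above runs an external program with the notification details exported as SFTPGO_ACTION_* environment variables. A minimal sketch of such a hook program follows; the log file path is a hypothetical choice and only a subset of the variables built by notificationAsEnvVars is printed.

```go
// hook.go: a tiny external hook referenced by the absolute path configured in "actions.hook".
package main

import (
	"fmt"
	"os"
)

func main() {
	// Append the notification details to a local log file.
	f, err := os.OpenFile("/var/log/sftpgo-actions.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		os.Exit(1)
	}
	defer f.Close()
	fmt.Fprintf(f, "action=%s user=%s path=%s status=%s\n",
		os.Getenv("SFTPGO_ACTION"),
		os.Getenv("SFTPGO_ACTION_USERNAME"),
		os.Getenv("SFTPGO_ACTION_PATH"),
		os.Getenv("SFTPGO_ACTION_STATUS"))
}
```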

View file

@ -1,222 +0,0 @@
package common
import (
"errors"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/vfs"
)
func TestNewActionNotification(t *testing.T) {
user := &dataprovider.User{
Username: "username",
}
user.FsConfig.Provider = dataprovider.LocalFilesystemProvider
user.FsConfig.S3Config = vfs.S3FsConfig{
Bucket: "s3bucket",
Endpoint: "endpoint",
}
user.FsConfig.GCSConfig = vfs.GCSFsConfig{
Bucket: "gcsbucket",
}
user.FsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
Container: "azcontainer",
SASURL: "azsasurl",
Endpoint: "azendpoint",
}
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, errors.New("fake error"))
assert.Equal(t, user.Username, a.Username)
assert.Equal(t, 0, len(a.Bucket))
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 0, a.Status)
user.FsConfig.Provider = dataprovider.S3FilesystemProvider
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSSH, 123, nil)
assert.Equal(t, "s3bucket", a.Bucket)
assert.Equal(t, "endpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.Provider = dataprovider.GCSFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, ErrQuotaExceeded)
assert.Equal(t, "gcsbucket", a.Bucket)
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 2, a.Status)
user.FsConfig.Provider = dataprovider.AzureBlobFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azsasurl", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.AzBlobConfig.SASURL = ""
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
}
func TestActionHTTP(t *testing.T) {
actionsCopy := Config.Actions
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationDownload},
Hook: fmt.Sprintf("http://%v", httpAddr),
}
user := &dataprovider.User{
Username: "username",
}
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
err := actionHandler.Handle(a)
assert.NoError(t, err)
Config.Actions.Hook = "http://invalid:1234"
err = actionHandler.Handle(a)
assert.Error(t, err)
Config.Actions.Hook = fmt.Sprintf("http://%v/404", httpAddr)
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, errUnexpectedHTTResponse.Error())
}
Config.Actions = actionsCopy
}
func TestActionCMD(t *testing.T) {
if runtime.GOOS == osWindows {
t.Skip("this test is not available on Windows")
}
actionsCopy := Config.Actions
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationDownload},
Hook: hookCmd,
}
user := &dataprovider.User{
Username: "username",
}
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
err = actionHandler.Handle(a)
assert.NoError(t, err)
SSHCommandActionNotification(user, "path", "target", "sha1sum", nil)
Config.Actions = actionsCopy
}
func TestWrongActions(t *testing.T) {
actionsCopy := Config.Actions
badCommand := "/bad/command"
if runtime.GOOS == osWindows {
badCommand = "C:\\bad\\command"
}
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationUpload},
Hook: badCommand,
}
user := &dataprovider.User{
Username: "username",
}
a := newActionNotification(user, operationUpload, "", "", "", ProtocolSFTP, 123, nil)
err := actionHandler.Handle(a)
assert.Error(t, err, "action with bad command must fail")
a.Action = operationDelete
err = actionHandler.Handle(a)
assert.EqualError(t, err, errUnconfiguredAction.Error())
Config.Actions.Hook = "http://foo\x7f.com/"
a.Action = operationUpload
err = actionHandler.Handle(a)
assert.Error(t, err, "action with bad url must fail")
Config.Actions.Hook = ""
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, errNoHook.Error())
}
Config.Actions.Hook = "relative path"
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, fmt.Sprintf("invalid notification command %#v", Config.Actions.Hook))
}
Config.Actions = actionsCopy
}
func TestPreDeleteAction(t *testing.T) {
if runtime.GOOS == osWindows {
t.Skip("this test is not available on Windows")
}
actionsCopy := Config.Actions
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationPreDelete},
Hook: hookCmd,
}
homeDir := filepath.Join(os.TempDir(), "test_user")
err = os.MkdirAll(homeDir, os.ModePerm)
assert.NoError(t, err)
user := dataprovider.User{
Username: "username",
HomeDir: homeDir,
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("id", homeDir, nil)
c := NewBaseConnection("id", ProtocolSFTP, user, fs)
testfile := filepath.Join(user.HomeDir, "testfile")
err = ioutil.WriteFile(testfile, []byte("test"), os.ModePerm)
assert.NoError(t, err)
info, err := os.Stat(testfile)
assert.NoError(t, err)
err = c.RemoveFile(testfile, "testfile", info)
assert.NoError(t, err)
assert.FileExists(t, testfile)
os.RemoveAll(homeDir)
Config.Actions = actionsCopy
}
type actionHandlerStub struct {
called bool
}
func (h *actionHandlerStub) Handle(notification ActionNotification) error {
h.called = true
return nil
}
func TestInitializeActionHandler(t *testing.T) {
handler := &actionHandlerStub{}
InitializeActionHandler(handler)
t.Cleanup(func() {
InitializeActionHandler(defaultActionHandler{})
})
err := actionHandler.Handle(ActionNotification{})
assert.NoError(t, err)
assert.True(t, handler.called)
}
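Following the actionHandlerStub pattern used in TestInitializeActionHandler, a minimal sketch of a custom handler registered via InitializeActionHandler could look like the code below. This assumes SFTPGo is built from source and the handler is wired in during startup, since InitializeActionHandler must not be called after application initialization.

```go
package custom

import (
	"log"

	"github.com/drakkan/sftpgo/common"
)

// loggingActionHandler replaces the default hook-based handler and simply
// logs each protocol action instead of invoking an external hook.
type loggingActionHandler struct{}

func (h loggingActionHandler) Handle(n common.ActionNotification) error {
	log.Printf("action %q by %q on %q (status %d)", n.Action, n.Username, n.Path, n.Status)
	return nil
}

// Register wires the custom handler in; call it during application startup.
func Register() {
	common.InitializeActionHandler(loggingActionHandler{})
}
```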

View file

@ -1,737 +0,0 @@
// Package common defines code shared among file transfer packages and protocols
package common
import (
"context"
"errors"
"fmt"
"net"
"net/http"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pires/go-proxyproto"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/drakkan/sftpgo/utils"
)
// constants
const (
logSender = "common"
uploadLogSender = "Upload"
downloadLogSender = "Download"
renameLogSender = "Rename"
rmdirLogSender = "Rmdir"
mkdirLogSender = "Mkdir"
symlinkLogSender = "Symlink"
removeLogSender = "Remove"
chownLogSender = "Chown"
chmodLogSender = "Chmod"
chtimesLogSender = "Chtimes"
truncateLogSender = "Truncate"
operationDownload = "download"
operationUpload = "upload"
operationDelete = "delete"
operationPreDelete = "pre-delete"
operationRename = "rename"
operationSSHCmd = "ssh_cmd"
chtimesFormat = "2006-01-02T15:04:05" // YYYY-MM-DDTHH:MM:SS
idleTimeoutCheckInterval = 3 * time.Minute
)
// Stat flags
const (
StatAttrUIDGID = 1
StatAttrPerms = 2
StatAttrTimes = 4
StatAttrSize = 8
)
// Transfer types
const (
TransferUpload = iota
TransferDownload
)
// Supported protocols
const (
ProtocolSFTP = "SFTP"
ProtocolSCP = "SCP"
ProtocolSSH = "SSH"
ProtocolFTP = "FTP"
ProtocolWebDAV = "DAV"
)
// Upload modes
const (
UploadModeStandard = iota
UploadModeAtomic
UploadModeAtomicWithResume
)
// errors definitions
var (
ErrPermissionDenied = errors.New("permission denied")
ErrNotExist = errors.New("no such file or directory")
ErrOpUnsupported = errors.New("operation unsupported")
ErrGenericFailure = errors.New("failure")
ErrQuotaExceeded = errors.New("denying write due to space limit")
ErrSkipPermissionsCheck = errors.New("permission check skipped")
ErrConnectionDenied = errors.New("You are not allowed to connect")
errNoTransfer = errors.New("requested transfer not found")
errTransferMismatch = errors.New("transfer mismatch")
)
var (
// Config is the configuration for the supported protocols
Config Configuration
// Connections is the list of active connections
Connections ActiveConnections
// QuotaScans is the list of active quota scans
QuotaScans ActiveScans
idleTimeoutTicker *time.Ticker
idleTimeoutTickerDone chan bool
supportedProtocols = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP, ProtocolWebDAV}
)
// Initialize sets the common configuration
func Initialize(c Configuration) {
Config = c
Config.idleLoginTimeout = 2 * time.Minute
Config.idleTimeoutAsDuration = time.Duration(Config.IdleTimeout) * time.Minute
if Config.IdleTimeout > 0 {
startIdleTimeoutTicker(idleTimeoutCheckInterval)
}
}
func startIdleTimeoutTicker(duration time.Duration) {
stopIdleTimeoutTicker()
idleTimeoutTicker = time.NewTicker(duration)
idleTimeoutTickerDone = make(chan bool)
go func() {
for {
select {
case <-idleTimeoutTickerDone:
return
case <-idleTimeoutTicker.C:
Connections.checkIdles()
}
}
}()
}
func stopIdleTimeoutTicker() {
if idleTimeoutTicker != nil {
idleTimeoutTicker.Stop()
idleTimeoutTickerDone <- true
idleTimeoutTicker = nil
}
}
// ActiveTransfer defines the interface for the current active transfers
type ActiveTransfer interface {
GetID() uint64
GetType() int
GetSize() int64
GetVirtualPath() string
GetStartTime() time.Time
SignalClose()
Truncate(fsPath string, size int64) (int64, error)
GetRealFsPath(fsPath string) string
}
// ActiveConnection defines the interface for the current active connections
type ActiveConnection interface {
GetID() string
GetUsername() string
GetRemoteAddress() string
GetClientVersion() string
GetProtocol() string
GetConnectionTime() time.Time
GetLastActivity() time.Time
GetCommand() string
Disconnect() error
AddTransfer(t ActiveTransfer)
RemoveTransfer(t ActiveTransfer)
GetTransfers() []ConnectionTransfer
}
// StatAttributes defines the attributes for set stat commands
type StatAttributes struct {
Mode os.FileMode
Atime time.Time
Mtime time.Time
UID int
GID int
Flags int
Size int64
}
// ConnectionTransfer defines the transfer details to expose
type ConnectionTransfer struct {
ID uint64 `json:"-"`
OperationType string `json:"operation_type"`
StartTime int64 `json:"start_time"`
Size int64 `json:"size"`
VirtualPath string `json:"path"`
}
func (t *ConnectionTransfer) getConnectionTransferAsString() string {
result := ""
switch t.OperationType {
case operationUpload:
result += "UL "
case operationDownload:
result += "DL "
}
result += fmt.Sprintf("%#v ", t.VirtualPath)
if t.Size > 0 {
elapsed := time.Since(utils.GetTimeFromMsecSinceEpoch(t.StartTime))
speed := float64(t.Size) / float64(utils.GetTimeAsMsSinceEpoch(time.Now())-t.StartTime)
result += fmt.Sprintf("Size: %#v Elapsed: %#v Speed: \"%.1f KB/s\"", utils.ByteCountSI(t.Size),
utils.GetDurationAsString(elapsed), speed)
}
return result
}
// Configuration defines configuration parameters common to all supported protocols
type Configuration struct {
// Maximum idle timeout as minutes. If a client is idle for a time that exceeds this setting it will be disconnected.
// 0 means disabled
IdleTimeout int `json:"idle_timeout" mapstructure:"idle_timeout"`
// UploadMode 0 means standard, the files are uploaded directly to the requested path.
// 1 means atomic: the files are uploaded to a temporary path and renamed to the requested path
// when the client ends the upload. Atomic mode avoids problems such as a web server that
// serves partial files when the files are being uploaded.
// In atomic mode if there is an upload error the temporary file is deleted and so the requested
// upload path will not contain a partial file.
// 2 means atomic with resume support: as atomic but if there is an upload error the temporary
// file is renamed to the requested path and not deleted, this way a client can reconnect and resume
// the upload.
UploadMode int `json:"upload_mode" mapstructure:"upload_mode"`
// Actions to execute for SFTP file operations and SSH commands
Actions ProtocolActions `json:"actions" mapstructure:"actions"`
// SetstatMode 0 means "normal mode": requests for changing permissions and owner/group are executed.
// 1 means "ignore mode": requests for changing permissions and owner/group are silently ignored.
SetstatMode int `json:"setstat_mode" mapstructure:"setstat_mode"`
// Support for HAProxy PROXY protocol.
// If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGNIX, you can enable
// the proxy protocol. It provides a convenient way to safely transport connection information
// such as a client's address across multiple layers of NAT or TCP proxies to get the real
// client IP address instead of the proxy IP. Both protocol versions 1 and 2 are supported.
// - 0 means disabled
// - 1 means proxy protocol enabled. Proxy header will be used and requests without proxy header will be accepted.
// - 2 means proxy protocol required. Proxy header will be used and requests without proxy header will be rejected.
// If the proxy protocol is enabled in SFTPGo then you have to enable the protocol in your proxy configuration too,
// for example for HAProxy add "send-proxy" or "send-proxy-v2" to each server configuration line.
ProxyProtocol int `json:"proxy_protocol" mapstructure:"proxy_protocol"`
// List of IP addresses and IP ranges allowed to send the proxy header.
// If proxy protocol is set to 1 and we receive a proxy header from an IP that is not in the list then the
// connection will be accepted and the header will be ignored.
// If proxy protocol is set to 2 and we receive a proxy header from an IP that is not in the list then the
// connection will be rejected.
ProxyAllowed []string `json:"proxy_allowed" mapstructure:"proxy_allowed"`
// Absolute path to an external program or an HTTP URL to invoke after a user connects
// and before he tries to login. It allows you to reject the connection based on the source
// ip address. Leave empty to disable.
PostConnectHook string `json:"post_connect_hook" mapstructure:"post_connect_hook"`
idleTimeoutAsDuration time.Duration
idleLoginTimeout time.Duration
}
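A compact sketch of how these fields could be populated programmatically follows; in a normal deployment the values come from the "common" section of the configuration file, and the hook path and proxy range below are hypothetical.

```go
package main

import "github.com/drakkan/sftpgo/common"

func main() {
	cfg := common.Configuration{
		IdleTimeout:   15,                      // minutes; 0 disables the idle connection checker
		UploadMode:    common.UploadModeAtomic, // upload to a temporary path, rename on completion
		SetstatMode:   0,                       // apply permission/owner/time change requests
		ProxyProtocol: 1,                       // accept an optional HAProxy PROXY header
		ProxyAllowed:  []string{"10.0.0.0/8"},  // hypothetical trusted proxy range
		Actions: common.ProtocolActions{
			ExecuteOn: []string{"upload", "delete"},
			Hook:      "/usr/local/bin/sftpgo-hook", // hypothetical absolute hook path
		},
	}
	common.Initialize(cfg)
}
```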
// IsAtomicUploadEnabled returns true if atomic upload is enabled
func (c *Configuration) IsAtomicUploadEnabled() bool {
return c.UploadMode == UploadModeAtomic || c.UploadMode == UploadModeAtomicWithResume
}
// GetProxyListener returns a wrapper for the given listener that supports the
// HAProxy Proxy Protocol or nil if the proxy protocol is not configured
func (c *Configuration) GetProxyListener(listener net.Listener) (*proxyproto.Listener, error) {
var proxyListener *proxyproto.Listener
var err error
if c.ProxyProtocol > 0 {
var policyFunc func(upstream net.Addr) (proxyproto.Policy, error)
if c.ProxyProtocol == 1 && len(c.ProxyAllowed) > 0 {
policyFunc, err = proxyproto.LaxWhiteListPolicy(c.ProxyAllowed)
if err != nil {
return nil, err
}
}
if c.ProxyProtocol == 2 {
if len(c.ProxyAllowed) == 0 {
policyFunc = func(upstream net.Addr) (proxyproto.Policy, error) {
return proxyproto.REQUIRE, nil
}
} else {
policyFunc, err = proxyproto.StrictWhiteListPolicy(c.ProxyAllowed)
if err != nil {
return nil, err
}
}
}
proxyListener = &proxyproto.Listener{
Listener: listener,
Policy: policyFunc,
}
}
return proxyListener, nil
}
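A self-contained usage sketch for the wrapper above (the listen address is illustrative): the returned listener is nil when the proxy protocol is disabled, so callers only swap it in when a wrapper was actually created.

```go
package main

import (
	"log"
	"net"

	"github.com/drakkan/sftpgo/common"
)

func main() {
	ln, err := net.Listen("tcp", "0.0.0.0:2022")
	if err != nil {
		log.Fatal(err)
	}
	// GetProxyListener returns nil when proxy_protocol is 0.
	proxyLn, err := common.Config.GetProxyListener(ln)
	if err != nil {
		log.Fatal(err)
	}
	if proxyLn != nil {
		ln = proxyLn // accepted connections now expose the real client address
	}
	_ = ln // hand the listener to the protocol server from here
}
```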
// ExecutePostConnectHook executes the post connect hook if defined
func (c *Configuration) ExecutePostConnectHook(remoteAddr, protocol string) error {
if len(c.PostConnectHook) == 0 {
return nil
}
ip := utils.GetIPFromRemoteAddress(remoteAddr)
if strings.HasPrefix(c.PostConnectHook, "http") {
var url *url.URL
url, err := url.Parse(c.PostConnectHook)
if err != nil {
logger.Warn(protocol, "", "Login from ip %#v denied, invalid post connect hook %#v: %v",
ip, c.PostConnectHook, err)
return err
}
httpClient := httpclient.GetHTTPClient()
q := url.Query()
q.Add("ip", ip)
q.Add("protocol", protocol)
url.RawQuery = q.Encode()
resp, err := httpClient.Get(url.String())
if err != nil {
logger.Warn(protocol, "", "Login from ip %#v denied, error executing post connect hook: %v", ip, err)
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
logger.Warn(protocol, "", "Login from ip %#v denied, post connect hook response code: %v", ip, resp.StatusCode)
return errUnexpectedHTTResponse
}
return nil
}
if !filepath.IsAbs(c.PostConnectHook) {
err := fmt.Errorf("invalid post connect hook %#v", c.PostConnectHook)
logger.Warn(protocol, "", "Login from ip %#v denied: %v", ip, err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, c.PostConnectHook)
cmd.Env = append(os.Environ(),
fmt.Sprintf("SFTPGO_CONNECTION_IP=%v", ip),
fmt.Sprintf("SFTPGO_CONNECTION_PROTOCOL=%v", protocol))
err := cmd.Run()
if err != nil {
logger.Warn(protocol, "", "Login from ip %#v denied, connect hook error: %v", ip, err)
}
return err
}
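A minimal sketch of an HTTP post-connect hook endpoint matching the request issued above: SFTPGo performs a GET with "ip" and "protocol" query parameters and treats any non-200 response as a denied connection. The port and blocked address below are hypothetical.

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		ip := r.URL.Query().Get("ip")
		protocol := r.URL.Query().Get("protocol")
		log.Printf("connection from %s via %s", ip, protocol)
		if ip == "203.0.113.7" { // deny a specific client address
			http.Error(w, "connection denied", http.StatusForbidden)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```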
// SSHConnection defines an ssh connection.
// Each SSH connection can open several channels for SFTP or SSH commands
type SSHConnection struct {
id string
conn net.Conn
lastActivity int64
}
// NewSSHConnection returns a new SSHConnection
func NewSSHConnection(id string, conn net.Conn) *SSHConnection {
return &SSHConnection{
id: id,
conn: conn,
lastActivity: time.Now().UnixNano(),
}
}
// GetID returns the ID for this SSHConnection
func (c *SSHConnection) GetID() string {
return c.id
}
// UpdateLastActivity updates last activity for this connection
func (c *SSHConnection) UpdateLastActivity() {
atomic.StoreInt64(&c.lastActivity, time.Now().UnixNano())
}
// GetLastActivity returns the last connection activity
func (c *SSHConnection) GetLastActivity() time.Time {
return time.Unix(0, atomic.LoadInt64(&c.lastActivity))
}
// Close closes the underlying network connection
func (c *SSHConnection) Close() error {
return c.conn.Close()
}
// ActiveConnections holds the current active connections with the associated transfers
type ActiveConnections struct {
sync.RWMutex
connections []ActiveConnection
sshConnections []*SSHConnection
}
// GetActiveSessions returns the number of active sessions for the given username.
// We return the open sessions for any protocol
func (conns *ActiveConnections) GetActiveSessions(username string) int {
conns.RLock()
defer conns.RUnlock()
numSessions := 0
for _, c := range conns.connections {
if c.GetUsername() == username {
numSessions++
}
}
return numSessions
}
// Add adds a new connection to the active ones
func (conns *ActiveConnections) Add(c ActiveConnection) {
conns.Lock()
defer conns.Unlock()
conns.connections = append(conns.connections, c)
metrics.UpdateActiveConnectionsSize(len(conns.connections))
logger.Debug(c.GetProtocol(), c.GetID(), "connection added, num open connections: %v", len(conns.connections))
}
// Swap replaces an existing connection with the given one.
// This method is useful if you have to change some connection details
// for example for FTP it is used to update the connection once the user
// authenticates
func (conns *ActiveConnections) Swap(c ActiveConnection) error {
conns.Lock()
defer conns.Unlock()
for idx, conn := range conns.connections {
if conn.GetID() == c.GetID() {
conn = nil
conns.connections[idx] = c
return nil
}
}
return errors.New("connection to swap not found")
}
// Remove removes a connection from the active ones
func (conns *ActiveConnections) Remove(connectionID string) {
conns.Lock()
defer conns.Unlock()
for idx, conn := range conns.connections {
if conn.GetID() == connectionID {
lastIdx := len(conns.connections) - 1
conns.connections[idx] = conns.connections[lastIdx]
conns.connections[lastIdx] = nil
conns.connections = conns.connections[:lastIdx]
metrics.UpdateActiveConnectionsSize(lastIdx)
logger.Debug(conn.GetProtocol(), conn.GetID(), "connection removed, num open connections: %v", lastIdx)
return
}
}
logger.Warn(logSender, "", "connection id %#v to remove not found!", connectionID)
}
// Close closes an active connection.
// It returns true on success
func (conns *ActiveConnections) Close(connectionID string) bool {
conns.RLock()
result := false
for _, c := range conns.connections {
if c.GetID() == connectionID {
defer func(conn ActiveConnection) {
err := conn.Disconnect()
logger.Debug(conn.GetProtocol(), conn.GetID(), "close connection requested, close err: %v", err)
}(c)
result = true
break
}
}
conns.RUnlock()
return result
}
// AddSSHConnection adds a new ssh connection to the active ones
func (conns *ActiveConnections) AddSSHConnection(c *SSHConnection) {
conns.Lock()
defer conns.Unlock()
conns.sshConnections = append(conns.sshConnections, c)
logger.Debug(logSender, c.GetID(), "ssh connection added, num open connections: %v", len(conns.sshConnections))
}
// RemoveSSHConnection removes a connection from the active ones
func (conns *ActiveConnections) RemoveSSHConnection(connectionID string) {
conns.Lock()
defer conns.Unlock()
for idx, conn := range conns.sshConnections {
if conn.GetID() == connectionID {
lastIdx := len(conns.sshConnections) - 1
conns.sshConnections[idx] = conns.sshConnections[lastIdx]
conns.sshConnections[lastIdx] = nil
conns.sshConnections = conns.sshConnections[:lastIdx]
logger.Debug(logSender, conn.GetID(), "ssh connection removed, num open ssh connections: %v", lastIdx)
return
}
}
logger.Warn(logSender, "", "ssh connection to remove with id %#v not found!", connectionID)
}
func (conns *ActiveConnections) checkIdles() {
conns.RLock()
for _, sshConn := range conns.sshConnections {
idleTime := time.Since(sshConn.GetLastActivity())
if idleTime > Config.idleTimeoutAsDuration {
// we close an ssh connection if it has no active connections associated
idToMatch := fmt.Sprintf("_%v_", sshConn.GetID())
toClose := true
for _, conn := range conns.connections {
if strings.Contains(conn.GetID(), idToMatch) {
toClose = false
break
}
}
if toClose {
defer func(c *SSHConnection) {
err := c.Close()
logger.Debug(logSender, c.GetID(), "close idle SSH connection, idle time: %v, close err: %v",
time.Since(c.GetLastActivity()), err)
}(sshConn)
}
}
}
for _, c := range conns.connections {
idleTime := time.Since(c.GetLastActivity())
isUnauthenticatedFTPUser := (c.GetProtocol() == ProtocolFTP && len(c.GetUsername()) == 0)
if idleTime > Config.idleTimeoutAsDuration || (isUnauthenticatedFTPUser && idleTime > Config.idleLoginTimeout) {
defer func(conn ActiveConnection, isFTPNoAuth bool) {
err := conn.Disconnect()
logger.Debug(conn.GetProtocol(), conn.GetID(), "close idle connection, idle time: %v, username: %#v close err: %v",
time.Since(conn.GetLastActivity()), conn.GetUsername(), err)
if isFTPNoAuth {
ip := utils.GetIPFromRemoteAddress(c.GetRemoteAddress())
logger.ConnectionFailedLog("", ip, dataprovider.LoginMethodNoAuthTryed, c.GetProtocol(), "client idle")
metrics.AddNoAuthTryed()
dataprovider.ExecutePostLoginHook("", dataprovider.LoginMethodNoAuthTryed, ip, c.GetProtocol(),
dataprovider.ErrNoAuthTryed)
}
}(c, isUnauthenticatedFTPUser)
}
}
conns.RUnlock()
}
// GetStats returns stats for active connections
func (conns *ActiveConnections) GetStats() []ConnectionStatus {
conns.RLock()
defer conns.RUnlock()
stats := make([]ConnectionStatus, 0, len(conns.connections))
for _, c := range conns.connections {
stat := ConnectionStatus{
Username: c.GetUsername(),
ConnectionID: c.GetID(),
ClientVersion: c.GetClientVersion(),
RemoteAddress: c.GetRemoteAddress(),
ConnectionTime: utils.GetTimeAsMsSinceEpoch(c.GetConnectionTime()),
LastActivity: utils.GetTimeAsMsSinceEpoch(c.GetLastActivity()),
Protocol: c.GetProtocol(),
Command: c.GetCommand(),
Transfers: c.GetTransfers(),
}
stats = append(stats, stat)
}
return stats
}
// ConnectionStatus returns the status for an active connection
type ConnectionStatus struct {
// Logged in username
Username string `json:"username"`
// Unique identifier for the connection
ConnectionID string `json:"connection_id"`
// client's version string
ClientVersion string `json:"client_version,omitempty"`
// Remote address for this connection
RemoteAddress string `json:"remote_address"`
// Connection time as unix timestamp in milliseconds
ConnectionTime int64 `json:"connection_time"`
// Last activity as unix timestamp in milliseconds
LastActivity int64 `json:"last_activity"`
// Protocol for this connection
Protocol string `json:"protocol"`
// active uploads/downloads
Transfers []ConnectionTransfer `json:"active_transfers,omitempty"`
// SSH command or WebDAV method
Command string `json:"command,omitempty"`
}
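Every field carries a JSON tag, so a status entry marshals directly with encoding/json. A small sketch with invented values, meant to run inside any function of the common package (assuming encoding/json is imported):
status := ConnectionStatus{
	Username:       "alice",
	ConnectionID:   "SFTP_id1",
	RemoteAddress:  "192.0.2.10:52341",
	ConnectionTime: utils.GetTimeAsMsSinceEpoch(time.Now()),
	LastActivity:   utils.GetTimeAsMsSinceEpoch(time.Now()),
	Protocol:       ProtocolSFTP,
}
data, err := json.Marshal(status)
// with err == nil, data looks like:
// {"username":"alice","connection_id":"SFTP_id1","remote_address":"192.0.2.10:52341",
//  "connection_time":1600000000000,"last_activity":1600000000000,"protocol":"SFTP"}
// the omitempty fields (client_version, active_transfers, command) are skipped when unset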
// GetConnectionDuration returns the connection duration as a string
func (c ConnectionStatus) GetConnectionDuration() string {
elapsed := time.Since(utils.GetTimeFromMsecSinceEpoch(c.ConnectionTime))
return utils.GetDurationAsString(elapsed)
}
// GetConnectionInfo returns connection info.
// Protocol, ClientVersion and RemoteAddress are returned.
// For SSH commands the issued command is returned too.
func (c ConnectionStatus) GetConnectionInfo() string {
result := fmt.Sprintf("%v. Client: %#v From: %#v", c.Protocol, c.ClientVersion, c.RemoteAddress)
if c.Protocol == ProtocolSSH && len(c.Command) > 0 {
result += fmt.Sprintf(". Command: %#v", c.Command)
}
if c.Protocol == ProtocolWebDAV && len(c.Command) > 0 {
result += fmt.Sprintf(". Method: %#v", c.Command)
}
return result
}
// GetTransfersAsString returns the active transfers as a string
func (c ConnectionStatus) GetTransfersAsString() string {
result := ""
for _, t := range c.Transfers {
if len(result) > 0 {
result += ". "
}
result += t.getConnectionTransferAsString()
}
return result
}
// ActiveQuotaScan defines an active quota scan for a user home dir
type ActiveQuotaScan struct {
// Username to which the quota scan refers
Username string `json:"username"`
// quota scan start time as unix timestamp in milliseconds
StartTime int64 `json:"start_time"`
}
// ActiveVirtualFolderQuotaScan defines an active quota scan for a virtual folder
type ActiveVirtualFolderQuotaScan struct {
// folder path to which the quota scan refers
MappedPath string `json:"mapped_path"`
// quota scan start time as unix timestamp in milliseconds
StartTime int64 `json:"start_time"`
}
// ActiveScans holds the active quota scans
type ActiveScans struct {
sync.RWMutex
UserHomeScans []ActiveQuotaScan
FolderScans []ActiveVirtualFolderQuotaScan
}
// GetUsersQuotaScans returns the active quota scans for users home directories
func (s *ActiveScans) GetUsersQuotaScans() []ActiveQuotaScan {
s.RLock()
defer s.RUnlock()
scans := make([]ActiveQuotaScan, len(s.UserHomeScans))
copy(scans, s.UserHomeScans)
return scans
}
// AddUserQuotaScan adds a user to the ones with active quota scans.
// Returns false if the user has a quota scan already running
func (s *ActiveScans) AddUserQuotaScan(username string) bool {
s.Lock()
defer s.Unlock()
for _, scan := range s.UserHomeScans {
if scan.Username == username {
return false
}
}
s.UserHomeScans = append(s.UserHomeScans, ActiveQuotaScan{
Username: username,
StartTime: utils.GetTimeAsMsSinceEpoch(time.Now()),
})
return true
}
// RemoveUserQuotaScan removes a user from the ones with active quota scans.
// Returns false if the user has no active quota scans
func (s *ActiveScans) RemoveUserQuotaScan(username string) bool {
s.Lock()
defer s.Unlock()
indexToRemove := -1
for i, scan := range s.UserHomeScans {
if scan.Username == username {
indexToRemove = i
break
}
}
if indexToRemove >= 0 {
s.UserHomeScans[indexToRemove] = s.UserHomeScans[len(s.UserHomeScans)-1]
s.UserHomeScans = s.UserHomeScans[:len(s.UserHomeScans)-1]
return true
}
return false
}
// GetVFoldersQuotaScans returns the active quota scans for virtual folders
func (s *ActiveScans) GetVFoldersQuotaScans() []ActiveVirtualFolderQuotaScan {
s.RLock()
defer s.RUnlock()
scans := make([]ActiveVirtualFolderQuotaScan, len(s.FolderScans))
copy(scans, s.FolderScans)
return scans
}
// AddVFolderQuotaScan adds a virtual folder to the ones with active quota scans.
// Returns false if the folder has a quota scan already running
func (s *ActiveScans) AddVFolderQuotaScan(folderPath string) bool {
s.Lock()
defer s.Unlock()
for _, scan := range s.FolderScans {
if scan.MappedPath == folderPath {
return false
}
}
s.FolderScans = append(s.FolderScans, ActiveVirtualFolderQuotaScan{
MappedPath: folderPath,
StartTime: utils.GetTimeAsMsSinceEpoch(time.Now()),
})
return true
}
// RemoveVFolderQuotaScan removes a folder from the ones with active quota scans.
// Returns false if the folder has no active quota scans
func (s *ActiveScans) RemoveVFolderQuotaScan(folderPath string) bool {
s.Lock()
defer s.Unlock()
indexToRemove := -1
for i, scan := range s.FolderScans {
if scan.MappedPath == folderPath {
indexToRemove = i
break
}
}
if indexToRemove >= 0 {
s.FolderScans[indexToRemove] = s.FolderScans[len(s.FolderScans)-1]
s.FolderScans = s.FolderScans[:len(s.FolderScans)-1]
return true
}
return false
}
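The Add/Remove pairs act as a per-user and per-folder guard so the same target is never scanned twice concurrently. A minimal usage sketch assuming the package-level QuotaScans variable; scanUserHome is a hypothetical placeholder for the real scan work:
func runUserQuotaScan(username string) error {
	if !QuotaScans.AddUserQuotaScan(username) {
		return errors.New("a quota scan for this user is already running")
	}
	// always release the guard, even if the scan fails
	defer QuotaScans.RemoveUserQuotaScan(username)
	return scanUserHome(username)
}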


@@ -1,536 +0,0 @@
package common
import (
"fmt"
"net"
"net/http"
"os"
"os/exec"
"runtime"
"strings"
"sync/atomic"
"testing"
"time"
"github.com/rs/zerolog"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/vfs"
)
const (
logSenderTest = "common_test"
httpAddr = "127.0.0.1:9999"
httpProxyAddr = "127.0.0.1:7777"
configDir = ".."
osWindows = "windows"
userTestUsername = "common_test_username"
userTestPwd = "common_test_pwd"
)
type providerConf struct {
Config dataprovider.Config `json:"data_provider" mapstructure:"data_provider"`
}
type fakeConnection struct {
*BaseConnection
command string
}
func (c *fakeConnection) AddUser(user dataprovider.User) error {
fs, err := user.GetFilesystem(c.GetID())
if err != nil {
return err
}
c.BaseConnection.User = user
c.BaseConnection.Fs = fs
return nil
}
func (c *fakeConnection) Disconnect() error {
Connections.Remove(c.GetID())
return nil
}
func (c *fakeConnection) GetClientVersion() string {
return ""
}
func (c *fakeConnection) GetCommand() string {
return c.command
}
func (c *fakeConnection) GetRemoteAddress() string {
return ""
}
type customNetConn struct {
net.Conn
id string
isClosed bool
}
func (c *customNetConn) Close() error {
Connections.RemoveSSHConnection(c.id)
c.isClosed = true
return c.Conn.Close()
}
func TestMain(m *testing.M) {
logfilePath := "common_test.log"
logger.InitLogger(logfilePath, 5, 1, 28, false, zerolog.DebugLevel)
viper.SetEnvPrefix("sftpgo")
replacer := strings.NewReplacer(".", "__")
viper.SetEnvKeyReplacer(replacer)
viper.SetConfigName("sftpgo")
viper.AutomaticEnv()
viper.AllowEmptyEnv(true)
driver, err := initializeDataprovider(-1)
if err != nil {
logger.WarnToConsole("error initializing data provider: %v", err)
os.Exit(1)
}
logger.InfoToConsole("Starting COMMON tests, provider: %v", driver)
Initialize(Configuration{})
httpConfig := httpclient.Config{
Timeout: 5,
}
httpConfig.Initialize(configDir)
go func() {
// start a test HTTP server to receive action notifications
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "OK\n")
})
http.HandleFunc("/404", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotFound)
fmt.Fprintf(w, "Not found\n")
})
if err := http.ListenAndServe(httpAddr, nil); err != nil {
logger.ErrorToConsole("could not start HTTP notification server: %v", err)
os.Exit(1)
}
}()
go func() {
Config.ProxyProtocol = 2
listener, err := net.Listen("tcp", httpProxyAddr)
if err != nil {
logger.ErrorToConsole("error creating listener for proxy protocol server: %v", err)
os.Exit(1)
}
proxyListener, err := Config.GetProxyListener(listener)
if err != nil {
logger.ErrorToConsole("error creating proxy protocol listener: %v", err)
os.Exit(1)
}
Config.ProxyProtocol = 0
s := &http.Server{}
if err := s.Serve(proxyListener); err != nil {
logger.ErrorToConsole("could not start HTTP proxy protocol server: %v", err)
os.Exit(1)
}
}()
waitTCPListening(httpAddr)
waitTCPListening(httpProxyAddr)
exitCode := m.Run()
os.Remove(logfilePath) //nolint:errcheck
os.Exit(exitCode)
}
func waitTCPListening(address string) {
for {
conn, err := net.Dial("tcp", address)
if err != nil {
logger.WarnToConsole("tcp server %v not listening: %v\n", address, err)
time.Sleep(100 * time.Millisecond)
continue
}
logger.InfoToConsole("tcp server %v now listening\n", address)
conn.Close()
break
}
}
func initializeDataprovider(trackQuota int) (string, error) {
configDir := ".."
viper.AddConfigPath(configDir)
if err := viper.ReadInConfig(); err != nil {
return "", err
}
var cfg providerConf
if err := viper.Unmarshal(&cfg); err != nil {
return "", err
}
if trackQuota >= 0 && trackQuota <= 2 {
cfg.Config.TrackQuota = trackQuota
}
return cfg.Config.Driver, dataprovider.Initialize(cfg.Config, configDir)
}
func closeDataprovider() error {
return dataprovider.Close()
}
func TestSSHConnections(t *testing.T) {
conn1, conn2 := net.Pipe()
now := time.Now()
sshConn1 := NewSSHConnection("id1", conn1)
sshConn2 := NewSSHConnection("id2", conn2)
sshConn3 := NewSSHConnection("id3", conn2)
assert.Equal(t, "id1", sshConn1.GetID())
assert.Equal(t, "id2", sshConn2.GetID())
assert.Equal(t, "id3", sshConn3.GetID())
sshConn1.UpdateLastActivity()
assert.GreaterOrEqual(t, sshConn1.GetLastActivity().UnixNano(), now.UnixNano())
Connections.AddSSHConnection(sshConn1)
Connections.AddSSHConnection(sshConn2)
Connections.AddSSHConnection(sshConn3)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 3)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn1.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn1.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn2.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 1)
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn3.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 0)
Connections.RUnlock()
assert.NoError(t, sshConn1.Close())
assert.NoError(t, sshConn2.Close())
assert.NoError(t, sshConn3.Close())
}
func TestIdleConnections(t *testing.T) {
configCopy := Config
Config.IdleTimeout = 1
Initialize(Config)
conn1, conn2 := net.Pipe()
customConn1 := &customNetConn{
Conn: conn1,
id: "id1",
}
customConn2 := &customNetConn{
Conn: conn2,
id: "id2",
}
sshConn1 := NewSSHConnection(customConn1.id, customConn1)
sshConn2 := NewSSHConnection(customConn2.id, customConn2)
username := "test_user"
user := dataprovider.User{
Username: username,
}
c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, user, nil)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
fakeConn := &fakeConnection{
BaseConnection: c,
}
// both ssh connections are expired but they should get removed only
// if there is no associated connection
sshConn1.lastActivity = c.lastActivity
sshConn2.lastActivity = c.lastActivity
Connections.AddSSHConnection(sshConn1)
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 1)
c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, user, nil)
fakeConn = &fakeConnection{
BaseConnection: c,
}
Connections.AddSSHConnection(sshConn2)
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
cFTP := NewBaseConnection("id2", ProtocolFTP, dataprovider.User{}, nil)
cFTP.lastActivity = time.Now().UnixNano()
fakeConn = &fakeConnection{
BaseConnection: cFTP,
}
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
assert.Len(t, Connections.GetStats(), 3)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
Connections.RUnlock()
startIdleTimeoutTicker(100 * time.Millisecond)
assert.Eventually(t, func() bool { return Connections.GetActiveSessions(username) == 1 }, 1*time.Second, 200*time.Millisecond)
assert.Eventually(t, func() bool {
Connections.RLock()
defer Connections.RUnlock()
return len(Connections.sshConnections) == 1
}, 1*time.Second, 200*time.Millisecond)
stopIdleTimeoutTicker()
assert.Len(t, Connections.GetStats(), 2)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
cFTP.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
sshConn2.lastActivity = c.lastActivity
startIdleTimeoutTicker(100 * time.Millisecond)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 1*time.Second, 200*time.Millisecond)
assert.Eventually(t, func() bool {
Connections.RLock()
defer Connections.RUnlock()
return len(Connections.sshConnections) == 0
}, 1*time.Second, 200*time.Millisecond)
stopIdleTimeoutTicker()
assert.True(t, customConn1.isClosed)
assert.True(t, customConn2.isClosed)
Config = configCopy
}
func TestCloseConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolSFTP, dataprovider.User{}, nil)
fakeConn := &fakeConnection{
BaseConnection: c,
}
Connections.Add(fakeConn)
assert.Len(t, Connections.GetStats(), 1)
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
res = Connections.Close(fakeConn.GetID())
assert.False(t, res)
Connections.Remove(fakeConn.GetID())
}
func TestSwapConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolFTP, dataprovider.User{}, nil)
fakeConn := &fakeConnection{
BaseConnection: c,
}
Connections.Add(fakeConn)
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, "", Connections.GetStats()[0].Username)
}
c = NewBaseConnection("id", ProtocolFTP, dataprovider.User{
Username: userTestUsername,
}, nil)
fakeConn = &fakeConnection{
BaseConnection: c,
}
err := Connections.Swap(fakeConn)
assert.NoError(t, err)
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, userTestUsername, Connections.GetStats()[0].Username)
}
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
err = Connections.Swap(fakeConn)
assert.Error(t, err)
}
func TestAtomicUpload(t *testing.T) {
configCopy := Config
Config.UploadMode = UploadModeStandard
assert.False(t, Config.IsAtomicUploadEnabled())
Config.UploadMode = UploadModeAtomic
assert.True(t, Config.IsAtomicUploadEnabled())
Config.UploadMode = UploadModeAtomicWithResume
assert.True(t, Config.IsAtomicUploadEnabled())
Config = configCopy
}
func TestConnectionStatus(t *testing.T) {
username := "test_user"
user := dataprovider.User{
Username: username,
}
fs := vfs.NewOsFs("", os.TempDir(), nil)
c1 := NewBaseConnection("id1", ProtocolSFTP, user, fs)
fakeConn1 := &fakeConnection{
BaseConnection: c1,
}
t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
t1.BytesReceived = 123
t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
t2.BytesSent = 456
c2 := NewBaseConnection("id2", ProtocolSSH, user, nil)
fakeConn2 := &fakeConnection{
BaseConnection: c2,
command: "md5sum",
}
c3 := NewBaseConnection("id3", ProtocolWebDAV, user, nil)
fakeConn3 := &fakeConnection{
BaseConnection: c3,
command: "PROPFIND",
}
t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
Connections.Add(fakeConn1)
Connections.Add(fakeConn2)
Connections.Add(fakeConn3)
stats := Connections.GetStats()
assert.Len(t, stats, 3)
for _, stat := range stats {
assert.Equal(t, stat.Username, username)
assert.True(t, strings.HasPrefix(stat.GetConnectionInfo(), stat.Protocol))
assert.True(t, strings.HasPrefix(stat.GetConnectionDuration(), "00:"))
if stat.ConnectionID == "SFTP_id1" {
assert.Len(t, stat.Transfers, 2)
assert.Greater(t, len(stat.GetTransfersAsString()), 0)
for _, tr := range stat.Transfers {
if tr.OperationType == operationDownload {
assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "DL"))
} else if tr.OperationType == operationUpload {
assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "UL"))
}
}
} else if stat.ConnectionID == "DAV_id3" {
assert.Len(t, stat.Transfers, 1)
assert.Greater(t, len(stat.GetTransfersAsString()), 0)
} else {
assert.Equal(t, 0, len(stat.GetTransfersAsString()))
}
}
err := t1.Close()
assert.NoError(t, err)
err = t2.Close()
assert.NoError(t, err)
err = fakeConn3.SignalTransfersAbort()
assert.NoError(t, err)
assert.Equal(t, int32(1), atomic.LoadInt32(&t3.AbortTransfer))
err = t3.Close()
assert.NoError(t, err)
err = fakeConn3.SignalTransfersAbort()
assert.Error(t, err)
Connections.Remove(fakeConn1.GetID())
stats = Connections.GetStats()
assert.Len(t, stats, 2)
assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
assert.Equal(t, fakeConn2.GetID(), stats[1].ConnectionID)
Connections.Remove(fakeConn2.GetID())
stats = Connections.GetStats()
assert.Len(t, stats, 1)
assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
Connections.Remove(fakeConn3.GetID())
stats = Connections.GetStats()
assert.Len(t, stats, 0)
}
func TestQuotaScans(t *testing.T) {
username := "username"
assert.True(t, QuotaScans.AddUserQuotaScan(username))
assert.False(t, QuotaScans.AddUserQuotaScan(username))
if assert.Len(t, QuotaScans.GetUsersQuotaScans(), 1) {
assert.Equal(t, QuotaScans.GetUsersQuotaScans()[0].Username, username)
}
assert.True(t, QuotaScans.RemoveUserQuotaScan(username))
assert.False(t, QuotaScans.RemoveUserQuotaScan(username))
assert.Len(t, QuotaScans.GetUsersQuotaScans(), 0)
folderName := "/folder"
assert.True(t, QuotaScans.AddVFolderQuotaScan(folderName))
assert.False(t, QuotaScans.AddVFolderQuotaScan(folderName))
if assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 1) {
assert.Equal(t, QuotaScans.GetVFoldersQuotaScans()[0].MappedPath, folderName)
}
assert.True(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
assert.False(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 0)
}
func TestProxyProtocolVersion(t *testing.T) {
c := Configuration{
ProxyProtocol: 1,
}
proxyListener, err := c.GetProxyListener(nil)
assert.NoError(t, err)
assert.Nil(t, proxyListener.Policy)
c.ProxyProtocol = 2
proxyListener, err = c.GetProxyListener(nil)
assert.NoError(t, err)
assert.NotNil(t, proxyListener.Policy)
c.ProxyProtocol = 1
c.ProxyAllowed = []string{"invalid"}
_, err = c.GetProxyListener(nil)
assert.Error(t, err)
c.ProxyProtocol = 2
_, err = c.GetProxyListener(nil)
assert.Error(t, err)
}
func TestProxyProtocol(t *testing.T) {
httpClient := httpclient.GetHTTPClient()
resp, err := httpClient.Get(fmt.Sprintf("http://%v", httpProxyAddr))
if assert.NoError(t, err) {
defer resp.Body.Close()
assert.Equal(t, http.StatusBadRequest, resp.StatusCode)
}
}
func TestPostConnectHook(t *testing.T) {
Config.PostConnectHook = ""
remoteAddr := &net.IPAddr{
IP: net.ParseIP("127.0.0.1"),
Zone: "",
}
assert.NoError(t, Config.ExecutePostConnectHook(remoteAddr.String(), ProtocolFTP))
Config.PostConnectHook = "http://foo\x7f.com/"
assert.Error(t, Config.ExecutePostConnectHook(remoteAddr.String(), ProtocolSFTP))
Config.PostConnectHook = "http://invalid:1234/"
assert.Error(t, Config.ExecutePostConnectHook(remoteAddr.String(), ProtocolSFTP))
Config.PostConnectHook = fmt.Sprintf("http://%v/404", httpAddr)
assert.Error(t, Config.ExecutePostConnectHook(remoteAddr.String(), ProtocolFTP))
Config.PostConnectHook = fmt.Sprintf("http://%v", httpAddr)
assert.NoError(t, Config.ExecutePostConnectHook(remoteAddr.String(), ProtocolFTP))
Config.PostConnectHook = "invalid"
assert.Error(t, Config.ExecutePostConnectHook(remoteAddr.String(), ProtocolFTP))
if runtime.GOOS == osWindows {
Config.PostConnectHook = "C:\\bad\\command"
assert.Error(t, Config.ExecutePostConnectHook(remoteAddr.String(), ProtocolSFTP))
} else {
Config.PostConnectHook = "/invalid/path"
assert.Error(t, Config.ExecutePostConnectHook(remoteAddr.String(), ProtocolSFTP))
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.PostConnectHook = hookCmd
assert.NoError(t, Config.ExecutePostConnectHook(remoteAddr.String(), ProtocolSFTP))
}
Config.PostConnectHook = ""
}


@@ -1,957 +0,0 @@
package common
import (
"errors"
"fmt"
"os"
"path"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pkg/sftp"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
// BaseConnection defines common fields for a connection using any supported protocol
type BaseConnection struct {
// Unique identifier for the connection
ID string
// user associated with this connection if any
User dataprovider.User
// start time for this connection
startTime time.Time
protocol string
Fs vfs.Fs
sync.RWMutex
// last activity for this connection
lastActivity int64
transferID uint64
activeTransfers []ActiveTransfer
}
// NewBaseConnection returns a new BaseConnection
func NewBaseConnection(ID, protocol string, user dataprovider.User, fs vfs.Fs) *BaseConnection {
connID := ID
if utils.IsStringInSlice(protocol, supportedProtocols) {
connID = fmt.Sprintf("%v_%v", protocol, ID)
}
return &BaseConnection{
ID: connID,
User: user,
startTime: time.Now(),
protocol: protocol,
Fs: fs,
lastActivity: time.Now().UnixNano(),
transferID: 0,
}
}
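The constructor prepends the protocol to the ID only when the protocol is one of the supported ones, which is why the tests see connection IDs such as "SFTP_id1". A quick sketch, assuming it runs inside the common package:
user := dataprovider.User{Username: "alice"}
c1 := NewBaseConnection("id1", ProtocolSFTP, user, nil)
c2 := NewBaseConnection("id1", "telnet", user, nil) // "telnet" is not a supported protocol
fmt.Println(c1.GetID()) // "SFTP_id1": supported protocol, prefix added
fmt.Println(c2.GetID()) // "id1": unsupported protocol, the ID is left as given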
// Log outputs a log entry to the configured logger
func (c *BaseConnection) Log(level logger.LogLevel, format string, v ...interface{}) {
logger.Log(level, c.protocol, c.ID, format, v...)
}
// GetTransferID returns a unique transfer ID for this connection
func (c *BaseConnection) GetTransferID() uint64 {
return atomic.AddUint64(&c.transferID, 1)
}
// GetID returns the connection ID
func (c *BaseConnection) GetID() string {
return c.ID
}
// GetUsername returns the authenticated username associated with this connection if any
func (c *BaseConnection) GetUsername() string {
return c.User.Username
}
// GetProtocol returns the protocol for the connection
func (c *BaseConnection) GetProtocol() string {
return c.protocol
}
// SetProtocol sets the protocol for this connection
func (c *BaseConnection) SetProtocol(protocol string) {
c.protocol = protocol
if utils.IsStringInSlice(c.protocol, supportedProtocols) {
c.ID = fmt.Sprintf("%v_%v", c.protocol, c.ID)
}
}
// GetConnectionTime returns the initial connection time
func (c *BaseConnection) GetConnectionTime() time.Time {
return c.startTime
}
// UpdateLastActivity updates last activity for this connection
func (c *BaseConnection) UpdateLastActivity() {
atomic.StoreInt64(&c.lastActivity, time.Now().UnixNano())
}
// GetLastActivity returns the last connection activity
func (c *BaseConnection) GetLastActivity() time.Time {
return time.Unix(0, atomic.LoadInt64(&c.lastActivity))
}
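The last activity is stored as UnixNano in an int64 and read/written atomically, so idle detection can run from another goroutine without taking the connection lock. A tiny illustrative helper, not part of the package:
func isIdle(c *BaseConnection, threshold time.Duration) bool {
	return time.Since(c.GetLastActivity()) > threshold
}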
// AddTransfer associates a new transfer to this connection
func (c *BaseConnection) AddTransfer(t ActiveTransfer) {
c.Lock()
defer c.Unlock()
c.activeTransfers = append(c.activeTransfers, t)
c.Log(logger.LevelDebug, "transfer added, id: %v, active transfers: %v", t.GetID(), len(c.activeTransfers))
}
// RemoveTransfer removes the specified transfer from the active ones
func (c *BaseConnection) RemoveTransfer(t ActiveTransfer) {
c.Lock()
defer c.Unlock()
indexToRemove := -1
for i, v := range c.activeTransfers {
if v.GetID() == t.GetID() {
indexToRemove = i
break
}
}
if indexToRemove >= 0 {
c.activeTransfers[indexToRemove] = c.activeTransfers[len(c.activeTransfers)-1]
c.activeTransfers[len(c.activeTransfers)-1] = nil
c.activeTransfers = c.activeTransfers[:len(c.activeTransfers)-1]
c.Log(logger.LevelDebug, "transfer removed, id: %v active transfers: %v", t.GetID(), len(c.activeTransfers))
} else {
c.Log(logger.LevelWarn, "transfer to remove not found!")
}
}
// GetTransfers returns the active transfers
func (c *BaseConnection) GetTransfers() []ConnectionTransfer {
c.RLock()
defer c.RUnlock()
transfers := make([]ConnectionTransfer, 0, len(c.activeTransfers))
for _, t := range c.activeTransfers {
var operationType string
switch t.GetType() {
case TransferDownload:
operationType = operationDownload
case TransferUpload:
operationType = operationUpload
}
transfers = append(transfers, ConnectionTransfer{
ID: t.GetID(),
OperationType: operationType,
StartTime: utils.GetTimeAsMsSinceEpoch(t.GetStartTime()),
Size: t.GetSize(),
VirtualPath: t.GetVirtualPath(),
})
}
return transfers
}
// SignalTransfersAbort signals to the active transfers to exit as soon as possible
func (c *BaseConnection) SignalTransfersAbort() error {
c.RLock()
defer c.RUnlock()
if len(c.activeTransfers) == 0 {
return errors.New("no active transfer found")
}
for _, t := range c.activeTransfers {
t.SignalClose()
}
return nil
}
func (c *BaseConnection) getRealFsPath(fsPath string) string {
c.RLock()
defer c.RUnlock()
for _, t := range c.activeTransfers {
if p := t.GetRealFsPath(fsPath); len(p) > 0 {
return p
}
}
return fsPath
}
func (c *BaseConnection) truncateOpenHandle(fsPath string, size int64) (int64, error) {
c.RLock()
defer c.RUnlock()
for _, t := range c.activeTransfers {
initialSize, err := t.Truncate(fsPath, size)
if err != errTransferMismatch {
return initialSize, err
}
}
return 0, errNoTransfer
}
// ListDir reads the directory named by fsPath and returns a list of directory entries
func (c *BaseConnection) ListDir(fsPath, virtualPath string) ([]os.FileInfo, error) {
if !c.User.HasPerm(dataprovider.PermListItems, virtualPath) {
return nil, c.GetPermissionDeniedError()
}
files, err := c.Fs.ReadDir(fsPath)
if err != nil {
c.Log(logger.LevelWarn, "error listing directory: %+v", err)
return nil, c.GetFsError(err)
}
return c.User.AddVirtualDirs(files, virtualPath), nil
}
// CreateDir creates a new directory at the specified fsPath
func (c *BaseConnection) CreateDir(fsPath, virtualPath string) error {
if !c.User.HasPerm(dataprovider.PermCreateDirs, path.Dir(virtualPath)) {
return c.GetPermissionDeniedError()
}
if c.User.IsVirtualFolder(virtualPath) {
c.Log(logger.LevelWarn, "mkdir not allowed %#v is a virtual folder", virtualPath)
return c.GetPermissionDeniedError()
}
if err := c.Fs.Mkdir(fsPath); err != nil {
c.Log(logger.LevelWarn, "error creating dir: %#v error: %+v", fsPath, err)
return c.GetFsError(err)
}
vfs.SetPathPermissions(c.Fs, fsPath, c.User.GetUID(), c.User.GetGID())
logger.CommandLog(mkdirLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1)
return nil
}
// IsRemoveFileAllowed returns an error if removing this file is not allowed
func (c *BaseConnection) IsRemoveFileAllowed(fsPath, virtualPath string) error {
if !c.User.HasPerm(dataprovider.PermDelete, path.Dir(virtualPath)) {
return c.GetPermissionDeniedError()
}
if !c.User.IsFileAllowed(virtualPath) {
c.Log(logger.LevelDebug, "removing file %#v is not allowed", fsPath)
return c.GetPermissionDeniedError()
}
return nil
}
// RemoveFile removes a file at the specified fsPath
func (c *BaseConnection) RemoveFile(fsPath, virtualPath string, info os.FileInfo) error {
if err := c.IsRemoveFileAllowed(fsPath, virtualPath); err != nil {
return err
}
size := info.Size()
action := newActionNotification(&c.User, operationPreDelete, fsPath, "", "", c.protocol, size, nil)
actionErr := actionHandler.Handle(action)
if actionErr == nil {
c.Log(logger.LevelDebug, "remove for path %#v handled by pre-delete action", fsPath)
} else {
if err := c.Fs.Remove(fsPath, false); err != nil {
c.Log(logger.LevelWarn, "failed to remove a file/symlink %#v: %+v", fsPath, err)
return c.GetFsError(err)
}
}
logger.CommandLog(removeLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1)
if info.Mode()&os.ModeSymlink != os.ModeSymlink {
vfolder, err := c.User.GetVirtualFolderForPath(path.Dir(virtualPath))
if err == nil {
dataprovider.UpdateVirtualFolderQuota(vfolder.BaseVirtualFolder, -1, -size, false) //nolint:errcheck
if vfolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(c.User, -1, -size, false) //nolint:errcheck
}
} else {
dataprovider.UpdateUserQuota(c.User, -1, -size, false) //nolint:errcheck
}
}
if actionErr != nil {
action := newActionNotification(&c.User, operationDelete, fsPath, "", "", c.protocol, size, nil)
go actionHandler.Handle(action) // nolint:errcheck
}
return nil
}
// IsRemoveDirAllowed returns an error if removing this directory is not allowed
func (c *BaseConnection) IsRemoveDirAllowed(fsPath, virtualPath string) error {
if c.Fs.GetRelativePath(fsPath) == "/" {
c.Log(logger.LevelWarn, "removing root dir is not allowed")
return c.GetPermissionDeniedError()
}
if c.User.IsVirtualFolder(virtualPath) {
c.Log(logger.LevelWarn, "removing a virtual folder is not allowed: %#v", virtualPath)
return c.GetPermissionDeniedError()
}
if c.User.HasVirtualFoldersInside(virtualPath) {
c.Log(logger.LevelWarn, "removing a directory with a virtual folder inside is not allowed: %#v", virtualPath)
return c.GetOpUnsupportedError()
}
if c.User.IsMappedPath(fsPath) {
c.Log(logger.LevelWarn, "removing a directory mapped as virtual folder is not allowed: %#v", fsPath)
return c.GetPermissionDeniedError()
}
if !c.User.HasPerm(dataprovider.PermDelete, path.Dir(virtualPath)) {
return c.GetPermissionDeniedError()
}
return nil
}
// RemoveDir removes a directory at the specified fsPath
func (c *BaseConnection) RemoveDir(fsPath, virtualPath string) error {
if err := c.IsRemoveDirAllowed(fsPath, virtualPath); err != nil {
return err
}
var fi os.FileInfo
var err error
if fi, err = c.Fs.Lstat(fsPath); err != nil {
// see #149
if c.Fs.IsNotExist(err) && c.Fs.HasVirtualFolders() {
return nil
}
c.Log(logger.LevelWarn, "failed to remove a dir %#v: stat error: %+v", fsPath, err)
return c.GetFsError(err)
}
if !fi.IsDir() || fi.Mode()&os.ModeSymlink == os.ModeSymlink {
c.Log(logger.LevelDebug, "cannot remove %#v is not a directory", fsPath)
return c.GetGenericError(nil)
}
if err := c.Fs.Remove(fsPath, true); err != nil {
c.Log(logger.LevelWarn, "failed to remove directory %#v: %+v", fsPath, err)
return c.GetFsError(err)
}
logger.CommandLog(rmdirLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1)
return nil
}
// Rename renames (moves) fsSourcePath to fsTargetPath
func (c *BaseConnection) Rename(fsSourcePath, fsTargetPath, virtualSourcePath, virtualTargetPath string) error {
if c.User.IsMappedPath(fsSourcePath) {
c.Log(logger.LevelWarn, "renaming a directory mapped as virtual folder is not allowed: %#v", fsSourcePath)
return c.GetPermissionDeniedError()
}
if c.User.IsMappedPath(fsTargetPath) {
c.Log(logger.LevelWarn, "renaming to a directory mapped as virtual folder is not allowed: %#v", fsTargetPath)
return c.GetPermissionDeniedError()
}
srcInfo, err := c.Fs.Lstat(fsSourcePath)
if err != nil {
return c.GetFsError(err)
}
if !c.isRenamePermitted(fsSourcePath, virtualSourcePath, virtualTargetPath, srcInfo) {
return c.GetPermissionDeniedError()
}
initialSize := int64(-1)
if dstInfo, err := c.Fs.Lstat(fsTargetPath); err == nil {
if dstInfo.IsDir() {
c.Log(logger.LevelWarn, "attempted to rename %#v overwriting an existing directory %#v",
fsSourcePath, fsTargetPath)
return c.GetOpUnsupportedError()
}
// we are overwriting an existing file/symlink
if dstInfo.Mode().IsRegular() {
initialSize = dstInfo.Size()
}
if !c.User.HasPerm(dataprovider.PermOverwrite, path.Dir(virtualTargetPath)) {
c.Log(logger.LevelDebug, "renaming is not allowed, %#v -> %#v. Target exists but the user "+
"has no overwrite permission", virtualSourcePath, virtualTargetPath)
return c.GetPermissionDeniedError()
}
}
if srcInfo.IsDir() {
if c.User.HasVirtualFoldersInside(virtualSourcePath) {
c.Log(logger.LevelDebug, "renaming the folder %#v is not supported: it has virtual folders inside it",
virtualSourcePath)
return c.GetOpUnsupportedError()
}
if err = c.checkRecursiveRenameDirPermissions(fsSourcePath, fsTargetPath); err != nil {
c.Log(logger.LevelDebug, "error checking recursive permissions before renaming %#v: %+v", fsSourcePath, err)
return c.GetFsError(err)
}
}
if !c.hasSpaceForRename(virtualSourcePath, virtualTargetPath, initialSize, fsSourcePath) {
c.Log(logger.LevelInfo, "denying cross rename due to space limit")
return c.GetGenericError(ErrQuotaExceeded)
}
if err := c.Fs.Rename(fsSourcePath, fsTargetPath); err != nil {
c.Log(logger.LevelWarn, "failed to rename %#v -> %#v: %+v", fsSourcePath, fsTargetPath, err)
return c.GetFsError(err)
}
if dataprovider.GetQuotaTracking() > 0 {
c.updateQuotaAfterRename(virtualSourcePath, virtualTargetPath, fsTargetPath, initialSize) //nolint:errcheck
}
logger.CommandLog(renameLogSender, fsSourcePath, fsTargetPath, c.User.Username, "", c.ID, c.protocol, -1, -1,
"", "", "", -1)
action := newActionNotification(&c.User, operationRename, fsSourcePath, fsTargetPath, "", c.protocol, 0, nil)
// the returned error is used in test cases only, we already log the error inside action.execute
go actionHandler.Handle(action) // nolint:errcheck
return nil
}
// CreateSymlink creates fsTargetPath as a symbolic link to fsSourcePath
func (c *BaseConnection) CreateSymlink(fsSourcePath, fsTargetPath, virtualSourcePath, virtualTargetPath string) error {
if c.Fs.GetRelativePath(fsSourcePath) == "/" {
c.Log(logger.LevelWarn, "symlinking root dir is not allowed")
return c.GetPermissionDeniedError()
}
if c.User.IsVirtualFolder(virtualTargetPath) {
c.Log(logger.LevelWarn, "symlinking a virtual folder is not allowed")
return c.GetPermissionDeniedError()
}
if !c.User.HasPerm(dataprovider.PermCreateSymlinks, path.Dir(virtualTargetPath)) {
return c.GetPermissionDeniedError()
}
if c.isCrossFoldersRequest(virtualSourcePath, virtualTargetPath) {
c.Log(logger.LevelWarn, "cross folder symlink is not supported, src: %v dst: %v", virtualSourcePath, virtualTargetPath)
return c.GetOpUnsupportedError()
}
if c.User.IsMappedPath(fsSourcePath) {
c.Log(logger.LevelWarn, "symlinking a directory mapped as virtual folder is not allowed: %#v", fsSourcePath)
return c.GetPermissionDeniedError()
}
if c.User.IsMappedPath(fsTargetPath) {
c.Log(logger.LevelWarn, "symlinking to a directory mapped as virtual folder is not allowed: %#v", fsTargetPath)
return c.GetPermissionDeniedError()
}
if err := c.Fs.Symlink(fsSourcePath, fsTargetPath); err != nil {
c.Log(logger.LevelWarn, "failed to create symlink %#v -> %#v: %+v", fsSourcePath, fsTargetPath, err)
return c.GetFsError(err)
}
logger.CommandLog(symlinkLogSender, fsSourcePath, fsTargetPath, c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1)
return nil
}
func (c *BaseConnection) getPathForSetStatPerms(fsPath, virtualPath string) string {
pathForPerms := virtualPath
if fi, err := c.Fs.Lstat(fsPath); err == nil {
if fi.IsDir() {
pathForPerms = path.Dir(virtualPath)
}
}
return pathForPerms
}
// DoStat executes a Stat if mode is 0, or an Lstat if mode is 1
func (c *BaseConnection) DoStat(fsPath string, mode int) (os.FileInfo, error) {
if mode == 1 {
return c.Fs.Lstat(c.getRealFsPath(fsPath))
}
return c.Fs.Stat(c.getRealFsPath(fsPath))
}
// SetStat set StatAttributes for the specified fsPath
func (c *BaseConnection) SetStat(fsPath, virtualPath string, attributes *StatAttributes) error {
if Config.SetstatMode == 1 {
return nil
}
pathForPerms := c.getPathForSetStatPerms(fsPath, virtualPath)
if attributes.Flags&StatAttrPerms != 0 {
if !c.User.HasPerm(dataprovider.PermChmod, pathForPerms) {
return c.GetPermissionDeniedError()
}
if err := c.Fs.Chmod(c.getRealFsPath(fsPath), attributes.Mode); err != nil {
c.Log(logger.LevelWarn, "failed to chmod path %#v, mode: %v, err: %+v", fsPath, attributes.Mode.String(), err)
return c.GetFsError(err)
}
logger.CommandLog(chmodLogSender, fsPath, "", c.User.Username, attributes.Mode.String(), c.ID, c.protocol,
-1, -1, "", "", "", -1)
}
if attributes.Flags&StatAttrUIDGID != 0 {
if !c.User.HasPerm(dataprovider.PermChown, pathForPerms) {
return c.GetPermissionDeniedError()
}
if err := c.Fs.Chown(c.getRealFsPath(fsPath), attributes.UID, attributes.GID); err != nil {
c.Log(logger.LevelWarn, "failed to chown path %#v, uid: %v, gid: %v, err: %+v", fsPath, attributes.UID,
attributes.GID, err)
return c.GetFsError(err)
}
logger.CommandLog(chownLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, attributes.UID, attributes.GID,
"", "", "", -1)
}
if attributes.Flags&StatAttrTimes != 0 {
if !c.User.HasPerm(dataprovider.PermChtimes, pathForPerms) {
return c.GetPermissionDeniedError()
}
if err := c.Fs.Chtimes(c.getRealFsPath(fsPath), attributes.Atime, attributes.Mtime); err != nil {
c.Log(logger.LevelWarn, "failed to chtimes for path %#v, access time: %v, modification time: %v, err: %+v",
fsPath, attributes.Atime, attributes.Mtime, err)
return c.GetFsError(err)
}
accessTimeString := attributes.Atime.Format(chtimesFormat)
modificationTimeString := attributes.Mtime.Format(chtimesFormat)
logger.CommandLog(chtimesLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1,
accessTimeString, modificationTimeString, "", -1)
}
if attributes.Flags&StatAttrSize != 0 {
if !c.User.HasPerm(dataprovider.PermOverwrite, pathForPerms) {
return c.GetPermissionDeniedError()
}
if err := c.truncateFile(fsPath, virtualPath, attributes.Size); err != nil {
c.Log(logger.LevelWarn, "failed to truncate path %#v, size: %v, err: %+v", fsPath, attributes.Size, err)
return c.GetFsError(err)
}
logger.CommandLog(truncateLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", attributes.Size)
}
return nil
}
func (c *BaseConnection) truncateFile(fsPath, virtualPath string, size int64) error {
// check first if we have an open transfer for the given path and try to truncate the already opened file;
// if we find no transfer we truncate by path.
var initialSize int64
var err error
initialSize, err = c.truncateOpenHandle(fsPath, size)
if err == errNoTransfer {
c.Log(logger.LevelDebug, "file path %#v not found in active transfers, execute trucate by path", fsPath)
var info os.FileInfo
info, err = c.Fs.Stat(fsPath)
if err != nil {
return err
}
initialSize = info.Size()
err = c.Fs.Truncate(fsPath, size)
}
if err == nil && vfs.IsLocalOsFs(c.Fs) {
sizeDiff := initialSize - size
vfolder, err := c.User.GetVirtualFolderForPath(path.Dir(virtualPath))
if err == nil {
dataprovider.UpdateVirtualFolderQuota(vfolder.BaseVirtualFolder, 0, -sizeDiff, false) //nolint:errcheck
if vfolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(c.User, 0, -sizeDiff, false) //nolint:errcheck
}
} else {
dataprovider.UpdateUserQuota(c.User, 0, -sizeDiff, false) //nolint:errcheck
}
}
return err
}
func (c *BaseConnection) checkRecursiveRenameDirPermissions(sourcePath, targetPath string) error {
dstPerms := []string{
dataprovider.PermCreateDirs,
dataprovider.PermUpload,
dataprovider.PermCreateSymlinks,
}
err := c.Fs.Walk(sourcePath, func(walkedPath string, info os.FileInfo, err error) error {
if err != nil {
return err
}
dstPath := strings.Replace(walkedPath, sourcePath, targetPath, 1)
virtualSrcPath := c.Fs.GetRelativePath(walkedPath)
virtualDstPath := c.Fs.GetRelativePath(dstPath)
// walk scans the directory tree in order, so by checking the parent directory permissions we are sure that
// everything inside the parent path has already been checked. If the current dir has no subdirs with
// permissions defined inside it and it already has all the possible permissions we can stop scanning
if !c.User.HasPermissionsInside(path.Dir(virtualSrcPath)) && !c.User.HasPermissionsInside(path.Dir(virtualDstPath)) {
if c.User.HasPerm(dataprovider.PermRename, path.Dir(virtualSrcPath)) &&
c.User.HasPerm(dataprovider.PermRename, path.Dir(virtualDstPath)) {
return ErrSkipPermissionsCheck
}
if c.User.HasPerm(dataprovider.PermDelete, path.Dir(virtualSrcPath)) &&
c.User.HasPerms(dstPerms, path.Dir(virtualDstPath)) {
return ErrSkipPermissionsCheck
}
}
if !c.isRenamePermitted(walkedPath, virtualSrcPath, virtualDstPath, info) {
c.Log(logger.LevelInfo, "rename %#v -> %#v is not allowed, virtual destination path: %#v",
walkedPath, dstPath, virtualDstPath)
return os.ErrPermission
}
return nil
})
if err == ErrSkipPermissionsCheck {
err = nil
}
return err
}
func (c *BaseConnection) isRenamePermitted(fsSourcePath, virtualSourcePath, virtualTargetPath string, fi os.FileInfo) bool {
if c.Fs.GetRelativePath(fsSourcePath) == "/" {
c.Log(logger.LevelWarn, "renaming root dir is not allowed")
return false
}
if c.User.IsVirtualFolder(virtualSourcePath) || c.User.IsVirtualFolder(virtualTargetPath) {
c.Log(logger.LevelWarn, "renaming a virtual folder is not allowed")
return false
}
if !c.User.IsFileAllowed(virtualSourcePath) || !c.User.IsFileAllowed(virtualTargetPath) {
if fi != nil && fi.Mode().IsRegular() {
c.Log(logger.LevelDebug, "renaming file is not allowed, source: %#v target: %#v",
virtualSourcePath, virtualTargetPath)
return false
}
}
if c.User.HasPerm(dataprovider.PermRename, path.Dir(virtualSourcePath)) &&
c.User.HasPerm(dataprovider.PermRename, path.Dir(virtualTargetPath)) {
return true
}
if !c.User.HasPerm(dataprovider.PermDelete, path.Dir(virtualSourcePath)) {
return false
}
if fi != nil {
if fi.IsDir() {
return c.User.HasPerm(dataprovider.PermCreateDirs, path.Dir(virtualTargetPath))
} else if fi.Mode()&os.ModeSymlink == os.ModeSymlink {
return c.User.HasPerm(dataprovider.PermCreateSymlinks, path.Dir(virtualTargetPath))
}
}
return c.User.HasPerm(dataprovider.PermUpload, path.Dir(virtualTargetPath))
}
func (c *BaseConnection) hasSpaceForRename(virtualSourcePath, virtualTargetPath string, initialSize int64,
fsSourcePath string) bool {
if dataprovider.GetQuotaTracking() == 0 {
return true
}
sourceFolder, errSrc := c.User.GetVirtualFolderForPath(path.Dir(virtualSourcePath))
dstFolder, errDst := c.User.GetVirtualFolderForPath(path.Dir(virtualTargetPath))
if errSrc != nil && errDst != nil {
// rename inside the user home dir
return true
}
if errSrc == nil && errDst == nil {
// rename between virtual folders
if sourceFolder.MappedPath == dstFolder.MappedPath {
// rename inside the same virtual folder
return true
}
}
if errSrc != nil && dstFolder.IsIncludedInUserQuota() {
// rename between user root dir and a virtual folder included in user quota
return true
}
quotaResult := c.HasSpace(true, virtualTargetPath)
return c.hasSpaceForCrossRename(quotaResult, initialSize, fsSourcePath)
}
// hasSpaceForCrossRename checks the quota after a rename between different folders
func (c *BaseConnection) hasSpaceForCrossRename(quotaResult vfs.QuotaCheckResult, initialSize int64, sourcePath string) bool {
if !quotaResult.HasSpace && initialSize == -1 {
// we are over quota and this is not a file replace
return false
}
fi, err := c.Fs.Lstat(sourcePath)
if err != nil {
c.Log(logger.LevelWarn, "cross rename denied, stat error for path %#v: %v", sourcePath, err)
return false
}
var sizeDiff int64
var filesDiff int
if fi.Mode().IsRegular() {
sizeDiff = fi.Size()
filesDiff = 1
if initialSize != -1 {
sizeDiff -= initialSize
filesDiff = 0
}
} else if fi.IsDir() {
filesDiff, sizeDiff, err = c.Fs.GetDirSize(sourcePath)
if err != nil {
c.Log(logger.LevelWarn, "cross rename denied, error getting size for directory %#v: %v", sourcePath, err)
return false
}
}
if !quotaResult.HasSpace && initialSize != -1 {
// we are over quota but we are overwriting an existing file so we check if the quota size after the rename is ok
if quotaResult.QuotaSize == 0 {
return true
}
c.Log(logger.LevelDebug, "cross rename overwrite, source %#v, used size %v, size to add %v",
sourcePath, quotaResult.UsedSize, sizeDiff)
quotaResult.UsedSize += sizeDiff
return quotaResult.GetRemainingSize() >= 0
}
if quotaResult.QuotaFiles > 0 {
remainingFiles := quotaResult.GetRemainingFiles()
c.Log(logger.LevelDebug, "cross rename, source %#v remaining file %v to add %v", sourcePath,
remainingFiles, filesDiff)
if remainingFiles < filesDiff {
return false
}
}
if quotaResult.QuotaSize > 0 {
remainingSize := quotaResult.GetRemainingSize()
c.Log(logger.LevelDebug, "cross rename, source %#v remaining size %v to add %v", sourcePath,
remainingSize, sizeDiff)
if remainingSize < sizeDiff {
return false
}
}
return true
}
// GetMaxWriteSize returns the allowed size for an upload or an error
// if not enough size is available for a resume/append
func (c *BaseConnection) GetMaxWriteSize(quotaResult vfs.QuotaCheckResult, isResume bool, fileSize int64) (int64, error) {
maxWriteSize := quotaResult.GetRemainingSize()
if isResume {
if !c.Fs.IsUploadResumeSupported() {
return 0, c.GetOpUnsupportedError()
}
if c.User.Filters.MaxUploadFileSize > 0 && c.User.Filters.MaxUploadFileSize <= fileSize {
return 0, ErrQuotaExceeded
}
if c.User.Filters.MaxUploadFileSize > 0 {
maxUploadSize := c.User.Filters.MaxUploadFileSize - fileSize
if maxUploadSize < maxWriteSize || maxWriteSize == 0 {
maxWriteSize = maxUploadSize
}
}
} else {
if maxWriteSize > 0 {
maxWriteSize += fileSize
}
if c.User.Filters.MaxUploadFileSize > 0 && (c.User.Filters.MaxUploadFileSize < maxWriteSize || maxWriteSize == 0) {
maxWriteSize = c.User.Filters.MaxUploadFileSize
}
}
return maxWriteSize, nil
}
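A worked sketch of the non-resume branch with invented numbers: 100 bytes of quota left, an existing 40 byte file being overwritten, and a 120 byte per-file limit. The remaining quota plus the size being replaced would allow 140 bytes, but the smaller per-file limit wins. maxWriteForOverwrite is illustrative only, and it assumes vfs.QuotaCheckResult.GetRemainingSize returns QuotaSize - UsedSize:
func maxWriteForOverwrite(c *BaseConnection) (int64, error) {
	quotaResult := vfs.QuotaCheckResult{
		HasSpace:  true,
		QuotaSize: 1000,
		UsedSize:  900, // 100 bytes of quota left
	}
	c.User.Filters.MaxUploadFileSize = 120
	// overwrite of an existing 40 byte file, no resume:
	// min(remaining quota + current size, per-file limit) = min(140, 120)
	return c.GetMaxWriteSize(quotaResult, false, 40) // 120, nil
}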
// HasSpace checks user's quota usage
func (c *BaseConnection) HasSpace(checkFiles bool, requestPath string) vfs.QuotaCheckResult {
result := vfs.QuotaCheckResult{
HasSpace: true,
AllowedSize: 0,
AllowedFiles: 0,
UsedSize: 0,
UsedFiles: 0,
QuotaSize: 0,
QuotaFiles: 0,
}
if dataprovider.GetQuotaTracking() == 0 {
return result
}
var err error
var vfolder vfs.VirtualFolder
vfolder, err = c.User.GetVirtualFolderForPath(path.Dir(requestPath))
if err == nil && !vfolder.IsIncludedInUserQuota() {
if vfolder.HasNoQuotaRestrictions(checkFiles) {
return result
}
result.QuotaSize = vfolder.QuotaSize
result.QuotaFiles = vfolder.QuotaFiles
result.UsedFiles, result.UsedSize, err = dataprovider.GetUsedVirtualFolderQuota(vfolder.MappedPath)
} else {
if c.User.HasNoQuotaRestrictions(checkFiles) {
return result
}
result.QuotaSize = c.User.QuotaSize
result.QuotaFiles = c.User.QuotaFiles
result.UsedFiles, result.UsedSize, err = dataprovider.GetUsedQuota(c.User.Username)
}
if err != nil {
c.Log(logger.LevelWarn, "error getting used quota for %#v request path %#v: %v", c.User.Username, requestPath, err)
result.HasSpace = false
return result
}
result.AllowedFiles = result.QuotaFiles - result.UsedFiles
result.AllowedSize = result.QuotaSize - result.UsedSize
if (checkFiles && result.QuotaFiles > 0 && result.UsedFiles >= result.QuotaFiles) ||
(result.QuotaSize > 0 && result.UsedSize >= result.QuotaSize) {
c.Log(logger.LevelDebug, "quota exceed for user %#v, request path %#v, num files: %v/%v, size: %v/%v check files: %v",
c.User.Username, requestPath, result.UsedFiles, result.QuotaFiles, result.UsedSize, result.QuotaSize, checkFiles)
result.HasSpace = false
return result
}
return result
}
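A typical pre-upload flow combines HasSpace and GetMaxWriteSize before opening the target file. A minimal sketch for a brand new upload (fileSize 0, no resume); preUploadCheck is illustrative only:
func preUploadCheck(c *BaseConnection, virtualPath string) (int64, error) {
	quotaResult := c.HasSpace(true, virtualPath)
	if !quotaResult.HasSpace {
		return 0, ErrQuotaExceeded
	}
	return c.GetMaxWriteSize(quotaResult, false, 0)
}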
func (c *BaseConnection) isCrossFoldersRequest(virtualSourcePath, virtualTargetPath string) bool {
sourceFolder, errSrc := c.User.GetVirtualFolderForPath(virtualSourcePath)
dstFolder, errDst := c.User.GetVirtualFolderForPath(virtualTargetPath)
if errSrc != nil && errDst != nil {
return false
}
if errSrc == nil && errDst == nil {
return sourceFolder.MappedPath != dstFolder.MappedPath
}
return true
}
func (c *BaseConnection) updateQuotaMoveBetweenVFolders(sourceFolder, dstFolder vfs.VirtualFolder, initialSize,
filesSize int64, numFiles int) {
if sourceFolder.MappedPath == dstFolder.MappedPath {
// both files are inside the same virtual folder
if initialSize != -1 {
dataprovider.UpdateVirtualFolderQuota(dstFolder.BaseVirtualFolder, -numFiles, -initialSize, false) //nolint:errcheck
if dstFolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(c.User, -numFiles, -initialSize, false) //nolint:errcheck
}
}
return
}
// files are inside different virtual folders
dataprovider.UpdateVirtualFolderQuota(sourceFolder.BaseVirtualFolder, -numFiles, -filesSize, false) //nolint:errcheck
if sourceFolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(c.User, -numFiles, -filesSize, false) //nolint:errcheck
}
if initialSize == -1 {
dataprovider.UpdateVirtualFolderQuota(dstFolder.BaseVirtualFolder, numFiles, filesSize, false) //nolint:errcheck
if dstFolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(c.User, numFiles, filesSize, false) //nolint:errcheck
}
} else {
// we cannot have a directory here, initialSize != -1 only for files
dataprovider.UpdateVirtualFolderQuota(dstFolder.BaseVirtualFolder, 0, filesSize-initialSize, false) //nolint:errcheck
if dstFolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(c.User, 0, filesSize-initialSize, false) //nolint:errcheck
}
}
}
func (c *BaseConnection) updateQuotaMoveFromVFolder(sourceFolder vfs.VirtualFolder, initialSize, filesSize int64, numFiles int) {
// move between a virtual folder and the user home dir
dataprovider.UpdateVirtualFolderQuota(sourceFolder.BaseVirtualFolder, -numFiles, -filesSize, false) //nolint:errcheck
if sourceFolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(c.User, -numFiles, -filesSize, false) //nolint:errcheck
}
if initialSize == -1 {
dataprovider.UpdateUserQuota(c.User, numFiles, filesSize, false) //nolint:errcheck
} else {
// we cannot have a directory here, initialSize != -1 only for files
dataprovider.UpdateUserQuota(c.User, 0, filesSize-initialSize, false) //nolint:errcheck
}
}
func (c *BaseConnection) updateQuotaMoveToVFolder(dstFolder vfs.VirtualFolder, initialSize, filesSize int64, numFiles int) {
// move between the user home dir and a virtual folder
dataprovider.UpdateUserQuota(c.User, -numFiles, -filesSize, false) //nolint:errcheck
if initialSize == -1 {
dataprovider.UpdateVirtualFolderQuota(dstFolder.BaseVirtualFolder, numFiles, filesSize, false) //nolint:errcheck
if dstFolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(c.User, numFiles, filesSize, false) //nolint:errcheck
}
} else {
// we cannot have a directory here, initialSize != -1 only for files
dataprovider.UpdateVirtualFolderQuota(dstFolder.BaseVirtualFolder, 0, filesSize-initialSize, false) //nolint:errcheck
if dstFolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(c.User, 0, filesSize-initialSize, false) //nolint:errcheck
}
}
}
func (c *BaseConnection) updateQuotaAfterRename(virtualSourcePath, virtualTargetPath, targetPath string, initialSize int64) error {
// we don't allow overwriting an existing directory, so targetPath can be:
// - a new file (a symlink is treated as a new file here)
// - a file overwriting an existing one
// - a new directory
// initialSize != -1 only when overwriting files
sourceFolder, errSrc := c.User.GetVirtualFolderForPath(path.Dir(virtualSourcePath))
dstFolder, errDst := c.User.GetVirtualFolderForPath(path.Dir(virtualTargetPath))
if errSrc != nil && errDst != nil {
// both files are contained inside the user home dir
if initialSize != -1 {
// we cannot have a directory here
dataprovider.UpdateUserQuota(c.User, -1, -initialSize, false) //nolint:errcheck
}
return nil
}
filesSize := int64(0)
numFiles := 1
if fi, err := c.Fs.Stat(targetPath); err == nil {
if fi.Mode().IsDir() {
numFiles, filesSize, err = c.Fs.GetDirSize(targetPath)
if err != nil {
c.Log(logger.LevelWarn, "failed to update quota after rename, error scanning moved folder %#v: %v",
targetPath, err)
return err
}
} else {
filesSize = fi.Size()
}
} else {
c.Log(logger.LevelWarn, "failed to update quota after rename, file %#v stat error: %+v", targetPath, err)
return err
}
if errSrc == nil && errDst == nil {
c.updateQuotaMoveBetweenVFolders(sourceFolder, dstFolder, initialSize, filesSize, numFiles)
}
if errSrc == nil && errDst != nil {
c.updateQuotaMoveFromVFolder(sourceFolder, initialSize, filesSize, numFiles)
}
if errSrc != nil && errDst == nil {
c.updateQuotaMoveToVFolder(dstFolder, initialSize, filesSize, numFiles)
}
return nil
}
// GetPermissionDeniedError returns an appropriate permission denied error for the connection protocol
func (c *BaseConnection) GetPermissionDeniedError() error {
switch c.protocol {
case ProtocolSFTP:
return sftp.ErrSSHFxPermissionDenied
case ProtocolWebDAV:
return os.ErrPermission
default:
return ErrPermissionDenied
}
}
// GetNotExistError returns an appropriate not exist error for the connection protocol
func (c *BaseConnection) GetNotExistError() error {
switch c.protocol {
case ProtocolSFTP:
return sftp.ErrSSHFxNoSuchFile
case ProtocolWebDAV:
return os.ErrNotExist
default:
return ErrNotExist
}
}
// GetOpUnsupportedError returns an appropriate operation not supported error for the connection protocol
func (c *BaseConnection) GetOpUnsupportedError() error {
switch c.protocol {
case ProtocolSFTP:
return sftp.ErrSSHFxOpUnsupported
default:
return ErrOpUnsupported
}
}
// GetGenericError returns an appropriate generic error for the connection protocol
func (c *BaseConnection) GetGenericError(err error) error {
switch c.protocol {
case ProtocolSFTP:
return sftp.ErrSSHFxFailure
default:
if err == ErrPermissionDenied || err == ErrNotExist || err == ErrOpUnsupported || err == ErrQuotaExceeded {
return err
}
return ErrGenericFailure
}
}
// GetFsError converts a filesystem error to a protocol error
func (c *BaseConnection) GetFsError(err error) error {
if c.Fs.IsNotExist(err) {
return c.GetNotExistError()
} else if c.Fs.IsPermission(err) {
return c.GetPermissionDeniedError()
} else if err != nil {
return c.GetGenericError(err)
}
return nil
}
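These helpers keep the protocol handlers free of filesystem specifics: a handler returns whatever GetFsError produces and each protocol sees its own error type. A minimal sketch (statOrProtocolError is illustrative, not part of the package):
func statOrProtocolError(c *BaseConnection, fsPath string) (os.FileInfo, error) {
	info, err := c.Fs.Stat(fsPath)
	if err != nil {
		// SFTP gets sftp.ErrSSHFxNoSuchFile / ErrSSHFxPermissionDenied,
		// WebDAV gets os.ErrNotExist / os.ErrPermission,
		// every other protocol gets the package-level errors
		return nil, c.GetFsError(err)
	}
	return info, nil
}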

File diff suppressed because it is too large


@@ -1,54 +0,0 @@
package common
import (
"crypto/tls"
"sync"
"github.com/drakkan/sftpgo/logger"
)
// CertManager defines a TLS certificate manager
type CertManager struct {
certPath string
keyPath string
sync.RWMutex
cert *tls.Certificate
}
// LoadCertificate loads the configured x509 key pair
func (m *CertManager) LoadCertificate(logSender string) error {
newCert, err := tls.LoadX509KeyPair(m.certPath, m.keyPath)
if err != nil {
logger.Warn(logSender, "", "unable to load X509 key pair, cert file %#v key file %#v error: %v",
m.certPath, m.keyPath, err)
return err
}
logger.Debug(logSender, "", "TLS certificate %#v successfully loaded", m.certPath)
m.Lock()
defer m.Unlock()
m.cert = &newCert
return nil
}
// GetCertificateFunc returns the loaded certificate
func (m *CertManager) GetCertificateFunc() func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
m.RLock()
defer m.RUnlock()
return m.cert, nil
}
}
// NewCertManager creates a new certificate manager
func NewCertManager(certificateFile, certificateKeyFile, logSender string) (*CertManager, error) {
manager := &CertManager{
cert: nil,
certPath: certificateFile,
keyPath: certificateKeyFile,
}
err := manager.LoadCertificate(logSender)
if err != nil {
return nil, err
}
return manager, nil
}
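GetCertificateFunc has the same signature as tls.Config.GetCertificate, so the manager plugs straight into a TLS listener; and because the returned function reads the certificate under the lock, calling LoadCertificate again at runtime swaps the key pair for new handshakes. A minimal sketch with placeholder paths and sender name:
certManager, err := NewCertManager("/etc/sftpgo/fullchain.pem", "/etc/sftpgo/key.pem", "mysender")
if err != nil {
	log.Fatal(err)
}
tlsConfig := &tls.Config{
	GetCertificate: certManager.GetCertificateFunc(),
	MinVersion:     tls.VersionTLS12,
}
listener, err := tls.Listen("tcp", ":8443", tlsConfig)
// serve on listener; call certManager.LoadCertificate("mysender") later to hot-reload the key pair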


@@ -1,69 +0,0 @@
package common
import (
"crypto/tls"
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
const (
httpsCert = `-----BEGIN CERTIFICATE-----
MIICHTCCAaKgAwIBAgIUHnqw7QnB1Bj9oUsNpdb+ZkFPOxMwCgYIKoZIzj0EAwIw
RTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGElu
dGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMDAyMDQwOTUzMDRaFw0zMDAyMDEw
OTUzMDRaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYD
VQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwdjAQBgcqhkjOPQIBBgUrgQQA
IgNiAARCjRMqJ85rzMC998X5z761nJ+xL3bkmGVqWvrJ51t5OxV0v25NsOgR82CA
NXUgvhVYs7vNFN+jxtb2aj6Xg+/2G/BNxkaFspIVCzgWkxiz7XE4lgUwX44FCXZM
3+JeUbKjUzBRMB0GA1UdDgQWBBRhLw+/o3+Z02MI/d4tmaMui9W16jAfBgNVHSME
GDAWgBRhLw+/o3+Z02MI/d4tmaMui9W16jAPBgNVHRMBAf8EBTADAQH/MAoGCCqG
SM49BAMCA2kAMGYCMQDqLt2lm8mE+tGgtjDmtFgdOcI72HSbRQ74D5rYTzgST1rY
/8wTi5xl8TiFUyLMUsICMQC5ViVxdXbhuG7gX6yEqSkMKZICHpO8hqFwOD/uaFVI
dV4vKmHUzwK/eIx+8Ay3neE=
-----END CERTIFICATE-----`
httpsKey = `-----BEGIN EC PARAMETERS-----
BgUrgQQAIg==
-----END EC PARAMETERS-----
-----BEGIN EC PRIVATE KEY-----
MIGkAgEBBDCfMNsN6miEE3rVyUPwElfiJSWaR5huPCzUenZOfJT04GAcQdWvEju3
UM2lmBLIXpGgBwYFK4EEACKhZANiAARCjRMqJ85rzMC998X5z761nJ+xL3bkmGVq
WvrJ51t5OxV0v25NsOgR82CANXUgvhVYs7vNFN+jxtb2aj6Xg+/2G/BNxkaFspIV
CzgWkxiz7XE4lgUwX44FCXZM3+JeUbI=
-----END EC PRIVATE KEY-----`
)
func TestLoadCertificate(t *testing.T) {
certPath := filepath.Join(os.TempDir(), "test.crt")
keyPath := filepath.Join(os.TempDir(), "test.key")
err := ioutil.WriteFile(certPath, []byte(httpsCert), os.ModePerm)
assert.NoError(t, err)
err = ioutil.WriteFile(keyPath, []byte(httpsKey), os.ModePerm)
assert.NoError(t, err)
certManager, err := NewCertManager(certPath, keyPath, logSenderTest)
assert.NoError(t, err)
certFunc := certManager.GetCertificateFunc()
if assert.NotNil(t, certFunc) {
hello := &tls.ClientHelloInfo{
ServerName: "localhost",
CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305},
}
cert, err := certFunc(hello)
assert.NoError(t, err)
assert.Equal(t, certManager.cert, cert)
}
err = os.Remove(certPath)
assert.NoError(t, err)
err = os.Remove(keyPath)
assert.NoError(t, err)
}
func TestLoadInvalidCert(t *testing.T) {
certManager, err := NewCertManager("test.crt", "test.key", logSenderTest)
assert.Error(t, err)
assert.Nil(t, certManager)
}

@@ -1,290 +0,0 @@
package common
import (
"errors"
"os"
"path"
"sync"
"sync/atomic"
"time"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/drakkan/sftpgo/vfs"
)
var (
// ErrTransferClosed defines the error returned for a closed transfer
ErrTransferClosed = errors.New("transfer already closed")
)
// BaseTransfer contains the transfer details common to all protocols for an upload or a download.
type BaseTransfer struct { //nolint:maligned
ID uint64
Fs vfs.Fs
File *os.File
Connection *BaseConnection
cancelFn func()
fsPath string
start time.Time
transferType int
MinWriteOffset int64
InitialSize int64
isNewFile bool
requestPath string
BytesSent int64
BytesReceived int64
MaxWriteSize int64
AbortTransfer int32
sync.Mutex
ErrTransfer error
}
// NewBaseTransfer returns a new BaseTransfer and adds it to the given connection
func NewBaseTransfer(file *os.File, conn *BaseConnection, cancelFn func(), fsPath, requestPath string, transferType int,
minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
t := &BaseTransfer{
ID: conn.GetTransferID(),
File: file,
Connection: conn,
cancelFn: cancelFn,
fsPath: fsPath,
start: time.Now(),
transferType: transferType,
MinWriteOffset: minWriteOffset,
InitialSize: initialSize,
isNewFile: isNewFile,
requestPath: requestPath,
BytesSent: 0,
BytesReceived: 0,
MaxWriteSize: maxWriteSize,
AbortTransfer: 0,
Fs: fs,
}
conn.AddTransfer(t)
return t
}
// GetID returns the transfer ID
func (t *BaseTransfer) GetID() uint64 {
return t.ID
}
// GetType returns the transfer type
func (t *BaseTransfer) GetType() int {
return t.transferType
}
// GetSize returns the transferred size
func (t *BaseTransfer) GetSize() int64 {
if t.transferType == TransferDownload {
return atomic.LoadInt64(&t.BytesSent)
}
return atomic.LoadInt64(&t.BytesReceived)
}
// GetStartTime returns the start time
func (t *BaseTransfer) GetStartTime() time.Time {
return t.start
}
// SignalClose signals that the transfer should be closed.
// For some protocols, for example WebDAV, we have no
// access to the network connection, so we use this method
// to make the next read or write fail
func (t *BaseTransfer) SignalClose() {
atomic.StoreInt32(&(t.AbortTransfer), 1)
}
// GetVirtualPath returns the transfer virtual path
func (t *BaseTransfer) GetVirtualPath() string {
return t.requestPath
}
// GetFsPath returns the transfer filesystem path
func (t *BaseTransfer) GetFsPath() string {
return t.fsPath
}
// GetRealFsPath returns the real transfer filesystem path.
// If atomic uploads are enabled this differs from fsPath
func (t *BaseTransfer) GetRealFsPath(fsPath string) string {
if fsPath == t.GetFsPath() {
if t.File != nil {
return t.File.Name()
}
return t.fsPath
}
return ""
}
// SetCancelFn sets the cancel function for the transfer
func (t *BaseTransfer) SetCancelFn(cancelFn func()) {
t.cancelFn = cancelFn
}
// Truncate changes the size of the opened file.
// Supported for local fs only
func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
if fsPath == t.GetFsPath() {
if t.File != nil {
initialSize := t.InitialSize
err := t.File.Truncate(size)
if err == nil {
t.Lock()
t.InitialSize = size
if t.MaxWriteSize > 0 {
sizeDiff := initialSize - size
t.MaxWriteSize += sizeDiff
metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
atomic.StoreInt64(&t.BytesReceived, 0)
}
t.Unlock()
}
t.Connection.Log(logger.LevelDebug, "file %#v truncated to size %v max write size %v new initial size %v err: %v",
fsPath, size, t.MaxWriteSize, t.InitialSize, err)
return initialSize, err
}
if size == 0 && atomic.LoadInt64(&t.BytesSent) == 0 {
// for cloud providers the file is always truncated to zero, we don't support append/resume for uploads
return 0, nil
}
return 0, ErrOpUnsupported
}
return 0, errTransferMismatch
}
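// Worked example of the MaxWriteSize adjustment above (illustrative note, not part of the
// original code), using the values from TestTruncate later in this diff: with InitialSize = 5,
// MaxWriteSize = 100 and a truncate to size = 2, sizeDiff = 5 - 2 = 3, so MaxWriteSize
// becomes 100 + 3 = 103 and InitialSize is reset to 2.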
// TransferError is called if there is an unexpected error.
// For example network or client issues
func (t *BaseTransfer) TransferError(err error) {
t.Lock()
defer t.Unlock()
if t.ErrTransfer != nil {
return
}
t.ErrTransfer = err
if t.cancelFn != nil {
t.cancelFn()
}
elapsed := time.Since(t.start).Nanoseconds() / 1000000
t.Connection.Log(logger.LevelWarn, "Unexpected error for transfer, path: %#v, error: \"%v\" bytes sent: %v, "+
"bytes received: %v transfer running since %v ms", t.fsPath, t.ErrTransfer, atomic.LoadInt64(&t.BytesSent),
atomic.LoadInt64(&t.BytesReceived), elapsed)
}
// Close is called when the transfer is completed.
// It logs the transfer info, updates the user quota (for uploads)
// and executes any defined action.
// If there is an error no action will be executed and, in atomic mode,
// we try to delete the temporary file
func (t *BaseTransfer) Close() error {
defer t.Connection.RemoveTransfer(t)
var err error
numFiles := 0
if t.isNewFile {
numFiles = 1
}
metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
if t.ErrTransfer == ErrQuotaExceeded && t.File != nil {
// if quota is exceeded we try to remove the partial file for uploads to local filesystem
err = os.Remove(t.File.Name())
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
}
t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %#v, deletion error: %v",
t.File.Name(), err)
} else if t.transferType == TransferUpload && t.File != nil && t.File.Name() != t.fsPath {
if t.ErrTransfer == nil || Config.UploadMode == UploadModeAtomicWithResume {
err = os.Rename(t.File.Name(), t.fsPath)
t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
t.File.Name(), t.fsPath, err)
} else {
err = os.Remove(t.File.Name())
t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, "+
"deletion error: %v", t.ErrTransfer, t.File.Name(), err)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
}
}
}
elapsed := time.Since(t.start).Nanoseconds() / 1000000
if t.transferType == TransferDownload {
logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol)
action := newActionNotification(&t.Connection.User, operationDownload, t.fsPath, "", "", t.Connection.protocol,
atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
go actionHandler.Handle(action) //nolint:errcheck
} else {
fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
info, err := t.Fs.Stat(t.fsPath)
if err == nil {
fileSize = info.Size()
}
t.Connection.Log(logger.LevelDebug, "uploaded file size %v stat error: %v", fileSize, err)
t.updateQuota(numFiles, fileSize)
logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol)
action := newActionNotification(&t.Connection.User, operationUpload, t.fsPath, "", "", t.Connection.protocol,
fileSize, t.ErrTransfer)
go actionHandler.Handle(action) //nolint:errcheck
}
if t.ErrTransfer != nil {
t.Connection.Log(logger.LevelWarn, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
if err == nil {
err = t.ErrTransfer
}
}
return err
}
func (t *BaseTransfer) updateQuota(numFiles int, fileSize int64) bool {
// S3 uploads are atomic, if there is an error nothing is uploaded
if t.File == nil && t.ErrTransfer != nil {
return false
}
sizeDiff := fileSize - t.InitialSize
if t.transferType == TransferUpload && (numFiles != 0 || sizeDiff > 0) {
vfolder, err := t.Connection.User.GetVirtualFolderForPath(path.Dir(t.requestPath))
if err == nil {
dataprovider.UpdateVirtualFolderQuota(vfolder.BaseVirtualFolder, numFiles, //nolint:errcheck
sizeDiff, false)
if vfolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
}
} else {
dataprovider.UpdateUserQuota(t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
}
return true
}
return false
}
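// Worked example for updateQuota (illustrative note, not part of the original code):
// overwriting an existing 5-byte file with 9 bytes gives numFiles = 0 and sizeDiff = 9 - 5 = 4,
// so only the size counters grow; uploading a new 9-byte file gives numFiles = 1 and
// sizeDiff = 9. The update goes to the virtual folder quota when the parent directory maps to
// a virtual folder (and also to the user quota if the folder is included in it), otherwise to
// the user quota alone.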
// HandleThrottle manages bandwidth throttling
func (t *BaseTransfer) HandleThrottle() {
var wantedBandwidth int64
var transferredBytes int64
if t.transferType == TransferDownload {
wantedBandwidth = t.Connection.User.DownloadBandwidth
transferredBytes = atomic.LoadInt64(&t.BytesSent)
} else {
wantedBandwidth = t.Connection.User.UploadBandwidth
transferredBytes = atomic.LoadInt64(&t.BytesReceived)
}
if wantedBandwidth > 0 {
// real and wanted elapsed as milliseconds, bytes as kilobytes
realElapsed := time.Since(t.start).Nanoseconds() / 1000000
// transferredBytes / 1000 gives KB; dividing by wantedBandwidth (KB/s) gives seconds, multiplied by 1000 for milliseconds
wantedElapsed := 1000 * (transferredBytes / 1000) / wantedBandwidth
if wantedElapsed > realElapsed {
toSleep := time.Duration(wantedElapsed - realElapsed)
time.Sleep(toSleep * time.Millisecond)
}
}
}
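A worked example of the throttling arithmetic above, using the numbers from TestTransferThrottling later in this diff: with 131072 bytes received and UploadBandwidth = 50 KB/s, wantedElapsed = 1000 * (131072 / 1000) / 50 = 2620 ms (integer division), so if the transfer has been running for less than roughly 2.6 seconds the goroutine sleeps for the remaining difference.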

@@ -1,254 +0,0 @@
package common
import (
"errors"
"io/ioutil"
"os"
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/vfs"
)
func TestTransferUpdateQuota(t *testing.T) {
conn := NewBaseConnection("", ProtocolSFTP, dataprovider.User{}, nil)
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), nil),
}
errFake := errors.New("fake error")
transfer.TransferError(errFake)
assert.False(t, transfer.updateQuota(1, 0))
err := transfer.Close()
if assert.Error(t, err) {
assert.EqualError(t, err, errFake.Error())
}
mappedPath := filepath.Join(os.TempDir(), "vdir")
vdirPath := "/vdir"
conn.User.VirtualFolders = append(conn.User.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: mappedPath,
},
VirtualPath: vdirPath,
QuotaFiles: -1,
QuotaSize: -1,
})
transfer.ErrTransfer = nil
transfer.BytesReceived = 1
transfer.requestPath = "/vdir/file"
assert.True(t, transfer.updateQuota(1, 0))
err = transfer.Close()
assert.NoError(t, err)
}
func TestTransferThrottling(t *testing.T) {
u := dataprovider.User{
Username: "test",
UploadBandwidth: 50,
DownloadBandwidth: 40,
}
fs := vfs.NewOsFs("", os.TempDir(), nil)
testFileSize := int64(131072)
wantedUploadElapsed := 1000 * (testFileSize / 1000) / u.UploadBandwidth
wantedDownloadElapsed := 1000 * (testFileSize / 1000) / u.DownloadBandwidth
// some tolerance
wantedUploadElapsed -= wantedDownloadElapsed / 10
wantedDownloadElapsed -= wantedDownloadElapsed / 10
conn := NewBaseConnection("id", ProtocolSCP, u, nil)
transfer := NewBaseTransfer(nil, conn, nil, "", "", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = testFileSize
transfer.Connection.UpdateLastActivity()
startTime := transfer.Connection.GetLastActivity()
transfer.HandleThrottle()
elapsed := time.Since(startTime).Nanoseconds() / 1000000
assert.GreaterOrEqual(t, elapsed, wantedUploadElapsed, "upload bandwidth throttling not respected")
err := transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, "", "", TransferDownload, 0, 0, 0, true, fs)
transfer.BytesSent = testFileSize
transfer.Connection.UpdateLastActivity()
startTime = transfer.Connection.GetLastActivity()
transfer.HandleThrottle()
elapsed = time.Since(startTime).Nanoseconds() / 1000000
assert.GreaterOrEqual(t, elapsed, wantedDownloadElapsed, "download bandwidth throttling not respected")
err = transfer.Close()
assert.NoError(t, err)
}
func TestRealPath(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "afile.txt")
fs := vfs.NewOsFs("123", os.TempDir(), nil)
u := dataprovider.User{
Username: "user",
HomeDir: os.TempDir(),
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
file, err := os.Create(testFile)
require.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
rPath := transfer.GetRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
rPath = conn.getRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
err = transfer.Close()
assert.NoError(t, err)
err = file.Close()
assert.NoError(t, err)
transfer.File = nil
rPath = transfer.GetRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
rPath = transfer.GetRealFsPath("")
assert.Empty(t, rPath)
err = os.Remove(testFile)
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}
func TestTruncate(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("123", os.TempDir(), nil)
u := dataprovider.User{
Username: "user",
HomeDir: os.TempDir(),
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
file, err := os.Create(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
_, err = file.Write([]byte("hello"))
assert.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
assert.NoError(t, err)
assert.Equal(t, int64(103), transfer.MaxWriteSize)
err = transfer.Close()
assert.NoError(t, err)
err = file.Close()
assert.NoError(t, err)
fi, err := os.Stat(testFile)
if assert.NoError(t, err) {
assert.Equal(t, int64(2), fi.Size())
}
transfer = NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
// file.Stat will fail on a closed file
err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
assert.Error(t, err)
err = transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, testFile, "", TransferUpload, 0, 0, 0, true, fs)
_, err = transfer.Truncate("mismatch", 0)
assert.EqualError(t, err, errTransferMismatch.Error())
_, err = transfer.Truncate(testFile, 0)
assert.NoError(t, err)
_, err = transfer.Truncate(testFile, 1)
assert.EqualError(t, err, ErrOpUnsupported.Error())
err = transfer.Close()
assert.NoError(t, err)
err = os.Remove(testFile)
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}
func TestTransferErrors(t *testing.T) {
isCancelled := false
cancelFn := func() {
isCancelled = true
}
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("id", os.TempDir(), nil)
u := dataprovider.User{
Username: "test",
HomeDir: os.TempDir(),
}
err := ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err := os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
conn := NewBaseConnection("id", ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
assert.Nil(t, transfer.cancelFn)
assert.Equal(t, testFile, transfer.GetFsPath())
transfer.SetCancelFn(cancelFn)
errFake := errors.New("err fake")
transfer.BytesReceived = 9
transfer.TransferError(ErrQuotaExceeded)
assert.True(t, isCancelled)
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, ErrQuotaExceeded.Error())
// the file is closed by the embedding struct before Close is called
err = file.Close()
assert.NoError(t, err)
err = transfer.Close()
if assert.Error(t, err) {
assert.Error(t, err, ErrQuotaExceeded.Error())
}
assert.NoFileExists(t, testFile)
err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
fsPath := filepath.Join(os.TempDir(), "test_file")
transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, errFake.Error())
// the file is closed by the embedding struct before Close is called
err = file.Close()
assert.NoError(t, err)
err = transfer.Close()
if assert.Error(t, err) {
assert.Error(t, err, errFake.Error())
}
assert.NoFileExists(t, testFile)
err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
// the file is closed by the embedding struct before Close is called
err = file.Close()
assert.NoError(t, err)
err = transfer.Close()
assert.NoError(t, err)
assert.NoFileExists(t, testFile)
assert.FileExists(t, fsPath)
err = os.Remove(fsPath)
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}

@@ -1,444 +0,0 @@
// Package config manages the configuration
package config
import (
"fmt"
"strings"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/ftpd"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/httpd"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/webdavd"
)
const (
logSender = "config"
// DefaultConfigName defines the name for the default config file.
// This is the file name without extension; we use viper, so we
// support all the config file formats supported by viper
DefaultConfigName = "sftpgo"
// configEnvPrefix defines the prefix that environment variables will use
configEnvPrefix = "sftpgo"
)
var (
globalConf globalConfig
defaultSFTPDBanner = fmt.Sprintf("SFTPGo_%v", version.Get().Version)
defaultFTPDBanner = fmt.Sprintf("SFTPGo %v ready", version.Get().Version)
)
type globalConfig struct {
Common common.Configuration `json:"common" mapstructure:"common"`
SFTPD sftpd.Configuration `json:"sftpd" mapstructure:"sftpd"`
FTPD ftpd.Configuration `json:"ftpd" mapstructure:"ftpd"`
WebDAVD webdavd.Configuration `json:"webdavd" mapstructure:"webdavd"`
ProviderConf dataprovider.Config `json:"data_provider" mapstructure:"data_provider"`
HTTPDConfig httpd.Conf `json:"httpd" mapstructure:"httpd"`
HTTPConfig httpclient.Config `json:"http" mapstructure:"http"`
}
func init() {
// create a default configuration to use if no config file is provided
globalConf = globalConfig{
Common: common.Configuration{
IdleTimeout: 15,
UploadMode: 0,
Actions: common.ProtocolActions{
ExecuteOn: []string{},
Hook: "",
},
SetstatMode: 0,
ProxyProtocol: 0,
ProxyAllowed: []string{},
},
SFTPD: sftpd.Configuration{
Banner: defaultSFTPDBanner,
BindPort: 2022,
BindAddress: "",
MaxAuthTries: 0,
HostKeys: []string{},
KexAlgorithms: []string{},
Ciphers: []string{},
MACs: []string{},
TrustedUserCAKeys: []string{},
LoginBannerFile: "",
EnabledSSHCommands: sftpd.GetDefaultSSHCommands(),
KeyboardInteractiveHook: "",
PasswordAuthentication: true,
},
FTPD: ftpd.Configuration{
BindPort: 0,
BindAddress: "",
Banner: defaultFTPDBanner,
BannerFile: "",
ActiveTransfersPortNon20: false,
ForcePassiveIP: "",
PassivePortRange: ftpd.PortRange{
Start: 50000,
End: 50100,
},
CertificateFile: "",
CertificateKeyFile: "",
},
WebDAVD: webdavd.Configuration{
BindPort: 0,
BindAddress: "",
CertificateFile: "",
CertificateKeyFile: "",
Cors: webdavd.Cors{
Enabled: false,
AllowedOrigins: []string{},
AllowedMethods: []string{},
AllowedHeaders: []string{},
ExposedHeaders: []string{},
AllowCredentials: false,
MaxAge: 0,
},
Cache: webdavd.Cache{
Users: webdavd.UsersCacheConfig{
ExpirationTime: 0,
MaxSize: 50,
},
MimeTypes: webdavd.MimeCacheConfig{
Enabled: true,
MaxSize: 1000,
},
},
},
ProviderConf: dataprovider.Config{
Driver: "sqlite",
Name: "sftpgo.db",
Host: "",
Port: 5432,
Username: "",
Password: "",
ConnectionString: "",
SQLTablesPrefix: "",
ManageUsers: 1,
SSLMode: 0,
TrackQuota: 1,
PoolSize: 0,
UsersBaseDir: "",
Actions: dataprovider.UserActions{
ExecuteOn: []string{},
Hook: "",
},
ExternalAuthHook: "",
ExternalAuthScope: 0,
CredentialsPath: "credentials",
PreLoginHook: "",
PostLoginHook: "",
PostLoginScope: 0,
CheckPasswordHook: "",
CheckPasswordScope: 0,
PasswordHashing: dataprovider.PasswordHashing{
Argon2Options: dataprovider.Argon2Options{
Memory: 65536,
Iterations: 1,
Parallelism: 2,
},
},
UpdateMode: 0,
PreferDatabaseCredentials: false,
},
HTTPDConfig: httpd.Conf{
BindPort: 8080,
BindAddress: "127.0.0.1",
TemplatesPath: "templates",
StaticFilesPath: "static",
BackupsPath: "backups",
AuthUserFile: "",
CertificateFile: "",
CertificateKeyFile: "",
},
HTTPConfig: httpclient.Config{
Timeout: 20,
CACertificates: nil,
SkipTLSVerify: false,
},
}
viper.SetEnvPrefix(configEnvPrefix)
replacer := strings.NewReplacer(".", "__")
viper.SetEnvKeyReplacer(replacer)
viper.SetConfigName(DefaultConfigName)
setViperDefaults()
viper.AutomaticEnv()
viper.AllowEmptyEnv(true)
}
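// Example of the resulting environment variable mapping (illustrative note, not part of the
// original code): with the "sftpgo" prefix and the "." -> "__" replacer above, the key
// sftpd.bind_address can be overridden with SFTPGO_SFTPD__BIND_ADDRESS and
// data_provider.pool_size with SFTPGO_DATA_PROVIDER__POOL_SIZE, as exercised by
// TestConfigFromEnv later in this diff.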
// GetCommonConfig returns the common protocols configuration
func GetCommonConfig() common.Configuration {
return globalConf.Common
}
// SetCommonConfig sets the common protocols configuration
func SetCommonConfig(config common.Configuration) {
globalConf.Common = config
}
// GetSFTPDConfig returns the configuration for the SFTP server
func GetSFTPDConfig() sftpd.Configuration {
return globalConf.SFTPD
}
// SetSFTPDConfig sets the configuration for the SFTP server
func SetSFTPDConfig(config sftpd.Configuration) {
globalConf.SFTPD = config
}
// GetFTPDConfig returns the configuration for the FTP server
func GetFTPDConfig() ftpd.Configuration {
return globalConf.FTPD
}
// SetFTPDConfig sets the configuration for the FTP server
func SetFTPDConfig(config ftpd.Configuration) {
globalConf.FTPD = config
}
// GetWebDAVDConfig returns the configuration for the WebDAV server
func GetWebDAVDConfig() webdavd.Configuration {
return globalConf.WebDAVD
}
// SetWebDAVDConfig sets the configuration for the WebDAV server
func SetWebDAVDConfig(config webdavd.Configuration) {
globalConf.WebDAVD = config
}
// GetHTTPDConfig returns the configuration for the HTTP server
func GetHTTPDConfig() httpd.Conf {
return globalConf.HTTPDConfig
}
// SetHTTPDConfig sets the configuration for the HTTP server
func SetHTTPDConfig(config httpd.Conf) {
globalConf.HTTPDConfig = config
}
// GetProviderConf returns the configuration for the data provider
func GetProviderConf() dataprovider.Config {
return globalConf.ProviderConf
}
// SetProviderConf sets the configuration for the data provider
func SetProviderConf(config dataprovider.Config) {
globalConf.ProviderConf = config
}
// GetHTTPConfig returns the configuration for HTTP clients
func GetHTTPConfig() httpclient.Config {
return globalConf.HTTPConfig
}
func getRedactedGlobalConf() globalConfig {
conf := globalConf
conf.ProviderConf.Password = "[redacted]"
return conf
}
// LoadConfig loads the configuration
// configDir will be added to the configuration search paths.
// By default the search path contains the current directory; on Linux it also contains
// $HOME/.config/sftpgo and /etc/sftpgo.
// configName is the name of the configuration file to search for, without extension
func LoadConfig(configDir, configName string) error {
var err error
viper.AddConfigPath(configDir)
setViperAdditionalConfigPaths()
viper.AddConfigPath(".")
viper.SetConfigName(configName)
if err = viper.ReadInConfig(); err != nil {
logger.Warn(logSender, "", "error loading configuration file: %v", err)
logger.WarnToConsole("error loading configuration file: %v", err)
}
err = viper.Unmarshal(&globalConf)
if err != nil {
logger.Warn(logSender, "", "error parsing configuration file: %v. Default configuration will be used: %+v",
err, getRedactedGlobalConf())
logger.WarnToConsole("error parsing configuration file: %v. Default configuration will be used.", err)
return err
}
checkCommonParamsCompatibility()
if strings.TrimSpace(globalConf.SFTPD.Banner) == "" {
globalConf.SFTPD.Banner = defaultSFTPDBanner
}
if strings.TrimSpace(globalConf.FTPD.Banner) == "" {
globalConf.FTPD.Banner = defaultFTPDBanner
}
if len(globalConf.ProviderConf.UsersBaseDir) > 0 && !utils.IsFileInputValid(globalConf.ProviderConf.UsersBaseDir) {
err = fmt.Errorf("invalid users base dir %#v will be ignored", globalConf.ProviderConf.UsersBaseDir)
globalConf.ProviderConf.UsersBaseDir = ""
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
}
if globalConf.Common.UploadMode < 0 || globalConf.Common.UploadMode > 2 {
err = fmt.Errorf("invalid upload_mode 0, 1 and 2 are supported, configured: %v reset upload_mode to 0",
globalConf.Common.UploadMode)
globalConf.Common.UploadMode = 0
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
}
if globalConf.Common.ProxyProtocol < 0 || globalConf.Common.ProxyProtocol > 2 {
err = fmt.Errorf("invalid proxy_protocol 0, 1 and 2 are supported, configured: %v reset proxy_protocol to 0",
globalConf.Common.ProxyProtocol)
globalConf.Common.ProxyProtocol = 0
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
}
if globalConf.ProviderConf.ExternalAuthScope < 0 || globalConf.ProviderConf.ExternalAuthScope > 7 {
err = fmt.Errorf("invalid external_auth_scope: %v reset to 0", globalConf.ProviderConf.ExternalAuthScope)
globalConf.ProviderConf.ExternalAuthScope = 0
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
}
if len(globalConf.ProviderConf.CredentialsPath) == 0 {
err = fmt.Errorf("invalid credentials path, reset to \"credentials\"")
globalConf.ProviderConf.CredentialsPath = "credentials"
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
}
checkHostKeyCompatibility()
logger.Debug(logSender, "", "config file used: '%#v', config loaded: %+v", viper.ConfigFileUsed(), getRedactedGlobalConf())
return err
}
func checkHostKeyCompatibility() {
// we copy deprecated fields to new ones to keep backward compatibility so lint is disabled
if len(globalConf.SFTPD.Keys) > 0 && len(globalConf.SFTPD.HostKeys) == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "keys is deprecated, please use host_keys")
logger.WarnToConsole("keys is deprecated, please use host_keys")
for _, k := range globalConf.SFTPD.Keys { //nolint:staticcheck
globalConf.SFTPD.HostKeys = append(globalConf.SFTPD.HostKeys, k.PrivateKey)
}
}
}
func checkCommonParamsCompatibility() {
// we copy deprecated fields to new ones to keep backward compatibility so lint is disabled
if globalConf.SFTPD.IdleTimeout > 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.idle_timeout is deprecated, please use common.idle_timeout")
logger.WarnToConsole("sftpd.idle_timeout is deprecated, please use common.idle_timeout")
globalConf.Common.IdleTimeout = globalConf.SFTPD.IdleTimeout //nolint:staticcheck
}
if len(globalConf.SFTPD.Actions.Hook) > 0 && len(globalConf.Common.Actions.Hook) == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.actions is deprecated, please use common.actions")
logger.WarnToConsole("sftpd.actions is deprecated, please use common.actions")
globalConf.Common.Actions.ExecuteOn = globalConf.SFTPD.Actions.ExecuteOn //nolint:staticcheck
globalConf.Common.Actions.Hook = globalConf.SFTPD.Actions.Hook //nolint:staticcheck
}
if globalConf.SFTPD.SetstatMode > 0 && globalConf.Common.SetstatMode == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.setstat_mode is deprecated, please use common.setstat_mode")
logger.WarnToConsole("sftpd.setstat_mode is deprecated, please use common.setstat_mode")
globalConf.Common.SetstatMode = globalConf.SFTPD.SetstatMode //nolint:staticcheck
}
if globalConf.SFTPD.UploadMode > 0 && globalConf.Common.UploadMode == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.upload_mode is deprecated, please use common.upload_mode")
logger.WarnToConsole("sftpd.upload_mode is deprecated, please use common.upload_mode")
globalConf.Common.UploadMode = globalConf.SFTPD.UploadMode //nolint:staticcheck
}
if globalConf.SFTPD.ProxyProtocol > 0 && globalConf.Common.ProxyProtocol == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.proxy_protocol is deprecated, please use common.proxy_protocol")
logger.WarnToConsole("sftpd.proxy_protocol is deprecated, please use common.proxy_protocol")
globalConf.Common.ProxyProtocol = globalConf.SFTPD.ProxyProtocol //nolint:staticcheck
globalConf.Common.ProxyAllowed = globalConf.SFTPD.ProxyAllowed //nolint:staticcheck
}
}
func setViperDefaults() {
viper.SetDefault("common.idle_timeout", globalConf.Common.IdleTimeout)
viper.SetDefault("common.upload_mode", globalConf.Common.UploadMode)
viper.SetDefault("common.actions.execute_on", globalConf.Common.Actions.ExecuteOn)
viper.SetDefault("common.actions.hook", globalConf.Common.Actions.Hook)
viper.SetDefault("common.setstat_mode", globalConf.Common.SetstatMode)
viper.SetDefault("common.proxy_protocol", globalConf.Common.ProxyProtocol)
viper.SetDefault("common.proxy_allowed", globalConf.Common.ProxyAllowed)
viper.SetDefault("common.post_connect_hook", globalConf.Common.PostConnectHook)
viper.SetDefault("sftpd.bind_port", globalConf.SFTPD.BindPort)
viper.SetDefault("sftpd.bind_address", globalConf.SFTPD.BindAddress)
viper.SetDefault("sftpd.max_auth_tries", globalConf.SFTPD.MaxAuthTries)
viper.SetDefault("sftpd.banner", globalConf.SFTPD.Banner)
viper.SetDefault("sftpd.host_keys", globalConf.SFTPD.HostKeys)
viper.SetDefault("sftpd.kex_algorithms", globalConf.SFTPD.KexAlgorithms)
viper.SetDefault("sftpd.ciphers", globalConf.SFTPD.Ciphers)
viper.SetDefault("sftpd.macs", globalConf.SFTPD.MACs)
viper.SetDefault("sftpd.trusted_user_ca_keys", globalConf.SFTPD.TrustedUserCAKeys)
viper.SetDefault("sftpd.login_banner_file", globalConf.SFTPD.LoginBannerFile)
viper.SetDefault("sftpd.enabled_ssh_commands", globalConf.SFTPD.EnabledSSHCommands)
viper.SetDefault("sftpd.keyboard_interactive_auth_hook", globalConf.SFTPD.KeyboardInteractiveHook)
viper.SetDefault("sftpd.password_authentication", globalConf.SFTPD.PasswordAuthentication)
viper.SetDefault("ftpd.bind_port", globalConf.FTPD.BindPort)
viper.SetDefault("ftpd.bind_address", globalConf.FTPD.BindAddress)
viper.SetDefault("ftpd.banner", globalConf.FTPD.Banner)
viper.SetDefault("ftpd.banner_file", globalConf.FTPD.BannerFile)
viper.SetDefault("ftpd.active_transfers_port_non_20", globalConf.FTPD.ActiveTransfersPortNon20)
viper.SetDefault("ftpd.force_passive_ip", globalConf.FTPD.ForcePassiveIP)
viper.SetDefault("ftpd.passive_port_range.start", globalConf.FTPD.PassivePortRange.Start)
viper.SetDefault("ftpd.passive_port_range.end", globalConf.FTPD.PassivePortRange.End)
viper.SetDefault("ftpd.certificate_file", globalConf.FTPD.CertificateFile)
viper.SetDefault("ftpd.certificate_key_file", globalConf.FTPD.CertificateKeyFile)
viper.SetDefault("ftpd.tls_mode", globalConf.FTPD.TLSMode)
viper.SetDefault("webdavd.bind_port", globalConf.WebDAVD.BindPort)
viper.SetDefault("webdavd.bind_address", globalConf.WebDAVD.BindAddress)
viper.SetDefault("webdavd.certificate_file", globalConf.WebDAVD.CertificateFile)
viper.SetDefault("webdavd.certificate_key_file", globalConf.WebDAVD.CertificateKeyFile)
viper.SetDefault("webdavd.cors.enabled", globalConf.WebDAVD.Cors.Enabled)
viper.SetDefault("webdavd.cors.allowed_origins", globalConf.WebDAVD.Cors.AllowedOrigins)
viper.SetDefault("webdavd.cors.allowed_methods", globalConf.WebDAVD.Cors.AllowedMethods)
viper.SetDefault("webdavd.cors.allowed_headers", globalConf.WebDAVD.Cors.AllowedHeaders)
viper.SetDefault("webdavd.cors.exposed_headers", globalConf.WebDAVD.Cors.ExposedHeaders)
viper.SetDefault("webdavd.cors.allow_credentials", globalConf.WebDAVD.Cors.AllowCredentials)
viper.SetDefault("webdavd.cors.max_age", globalConf.WebDAVD.Cors.MaxAge)
viper.SetDefault("webdavd.cache.users.expiration_time", globalConf.WebDAVD.Cache.Users.ExpirationTime)
viper.SetDefault("webdavd.cache.users.max_size", globalConf.WebDAVD.Cache.Users.MaxSize)
viper.SetDefault("webdavd.cache.mime_types.enabled", globalConf.WebDAVD.Cache.MimeTypes.Enabled)
viper.SetDefault("webdavd.cache.mime_types.max_size", globalConf.WebDAVD.Cache.MimeTypes.MaxSize)
viper.SetDefault("data_provider.driver", globalConf.ProviderConf.Driver)
viper.SetDefault("data_provider.name", globalConf.ProviderConf.Name)
viper.SetDefault("data_provider.host", globalConf.ProviderConf.Host)
viper.SetDefault("data_provider.port", globalConf.ProviderConf.Port)
viper.SetDefault("data_provider.username", globalConf.ProviderConf.Username)
viper.SetDefault("data_provider.password", globalConf.ProviderConf.Password)
viper.SetDefault("data_provider.sslmode", globalConf.ProviderConf.SSLMode)
viper.SetDefault("data_provider.connection_string", globalConf.ProviderConf.ConnectionString)
viper.SetDefault("data_provider.sql_tables_prefix", globalConf.ProviderConf.SQLTablesPrefix)
viper.SetDefault("data_provider.manage_users", globalConf.ProviderConf.ManageUsers)
viper.SetDefault("data_provider.track_quota", globalConf.ProviderConf.TrackQuota)
viper.SetDefault("data_provider.pool_size", globalConf.ProviderConf.PoolSize)
viper.SetDefault("data_provider.users_base_dir", globalConf.ProviderConf.UsersBaseDir)
viper.SetDefault("data_provider.actions.execute_on", globalConf.ProviderConf.Actions.ExecuteOn)
viper.SetDefault("data_provider.actions.hook", globalConf.ProviderConf.Actions.Hook)
viper.SetDefault("data_provider.external_auth_hook", globalConf.ProviderConf.ExternalAuthHook)
viper.SetDefault("data_provider.external_auth_scope", globalConf.ProviderConf.ExternalAuthScope)
viper.SetDefault("data_provider.credentials_path", globalConf.ProviderConf.CredentialsPath)
viper.SetDefault("data_provider.prefer_database_credentials", globalConf.ProviderConf.PreferDatabaseCredentials)
viper.SetDefault("data_provider.pre_login_hook", globalConf.ProviderConf.PreLoginHook)
viper.SetDefault("data_provider.post_login_hook", globalConf.ProviderConf.PostLoginHook)
viper.SetDefault("data_provider.post_login_scope", globalConf.ProviderConf.PostLoginScope)
viper.SetDefault("data_provider.check_password_hook", globalConf.ProviderConf.CheckPasswordHook)
viper.SetDefault("data_provider.check_password_scope", globalConf.ProviderConf.CheckPasswordScope)
viper.SetDefault("data_provider.password_hashing.argon2_options.memory", globalConf.ProviderConf.PasswordHashing.Argon2Options.Memory)
viper.SetDefault("data_provider.password_hashing.argon2_options.iterations", globalConf.ProviderConf.PasswordHashing.Argon2Options.Iterations)
viper.SetDefault("data_provider.password_hashing.argon2_options.parallelism", globalConf.ProviderConf.PasswordHashing.Argon2Options.Parallelism)
viper.SetDefault("data_provider.update_mode", globalConf.ProviderConf.UpdateMode)
viper.SetDefault("httpd.bind_port", globalConf.HTTPDConfig.BindPort)
viper.SetDefault("httpd.bind_address", globalConf.HTTPDConfig.BindAddress)
viper.SetDefault("httpd.templates_path", globalConf.HTTPDConfig.TemplatesPath)
viper.SetDefault("httpd.static_files_path", globalConf.HTTPDConfig.StaticFilesPath)
viper.SetDefault("httpd.backups_path", globalConf.HTTPDConfig.BackupsPath)
viper.SetDefault("httpd.auth_user_file", globalConf.HTTPDConfig.AuthUserFile)
viper.SetDefault("httpd.certificate_file", globalConf.HTTPDConfig.CertificateFile)
viper.SetDefault("httpd.certificate_key_file", globalConf.HTTPDConfig.CertificateKeyFile)
viper.SetDefault("http.timeout", globalConf.HTTPConfig.Timeout)
viper.SetDefault("http.ca_certificates", globalConf.HTTPConfig.CACertificates)
viper.SetDefault("http.skip_tls_verify", globalConf.HTTPConfig.SkipTLSVerify)
}

@@ -1,11 +0,0 @@
// +build linux
package config
import "github.com/spf13/viper"
// Linux-specific config search paths
func setViperAdditionalConfigPaths() {
viper.AddConfigPath("$HOME/.config/sftpgo")
viper.AddConfigPath("/etc/sftpgo")
}

@@ -1,7 +0,0 @@
// +build !linux
package config
func setViperAdditionalConfigPaths() {
}

@@ -1,304 +0,0 @@
package config_test
import (
"encoding/json"
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/ftpd"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/httpd"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/utils"
)
const (
tempConfigName = "temp"
configName = "sftpgo"
)
func TestLoadConfigTest(t *testing.T) {
configDir := ".."
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
assert.NotEqual(t, httpd.Conf{}, config.GetHTTPConfig())
assert.NotEqual(t, dataprovider.Config{}, config.GetProviderConf())
assert.NotEqual(t, sftpd.Configuration{}, config.GetSFTPDConfig())
assert.NotEqual(t, httpclient.Config{}, config.GetHTTPConfig())
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err = config.LoadConfig(configDir, tempConfigName)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, []byte("{invalid json}"), os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, []byte("{\"sftpd\": {\"bind_port\": \"a\"}}"), os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestEmptyBanner(t *testing.T) {
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.Banner = " "
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, _ := json.Marshal(c)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
assert.NotEmpty(t, strings.TrimSpace(sftpdConf.Banner))
err = os.Remove(configFilePath)
assert.NoError(t, err)
ftpdConf := config.GetFTPDConfig()
ftpdConf.Banner = " "
c1 := make(map[string]ftpd.Configuration)
c1["ftpd"] = ftpdConf
jsonConf, _ = json.Marshal(c1)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NoError(t, err)
ftpdConf = config.GetFTPDConfig()
assert.NotEmpty(t, strings.TrimSpace(ftpdConf.Banner))
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidUploadMode(t *testing.T) {
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
commonConf := config.GetCommonConfig()
commonConf.UploadMode = 10
c := make(map[string]common.Configuration)
c["common"] = commonConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidExternalAuthScope(t *testing.T) {
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
providerConf := config.GetProviderConf()
providerConf.ExternalAuthScope = 10
c := make(map[string]dataprovider.Config)
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidCredentialsPath(t *testing.T) {
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
providerConf := config.GetProviderConf()
providerConf.CredentialsPath = ""
c := make(map[string]dataprovider.Config)
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidProxyProtocol(t *testing.T) {
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
commonConf := config.GetCommonConfig()
commonConf.ProxyProtocol = 10
c := make(map[string]common.Configuration)
c["common"] = commonConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidUsersBaseDir(t *testing.T) {
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
providerConf := config.GetProviderConf()
providerConf.UsersBaseDir = "."
c := make(map[string]dataprovider.Config)
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestCommonParamsCompatibility(t *testing.T) {
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.IdleTimeout = 21 //nolint:staticcheck
sftpdConf.Actions.Hook = "http://hook"
sftpdConf.Actions.ExecuteOn = []string{"upload"}
sftpdConf.SetstatMode = 1 //nolint:staticcheck
sftpdConf.UploadMode = common.UploadModeAtomicWithResume //nolint:staticcheck
sftpdConf.ProxyProtocol = 1 //nolint:staticcheck
sftpdConf.ProxyAllowed = []string{"192.168.1.1"} //nolint:staticcheck
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NoError(t, err)
commonConf := config.GetCommonConfig()
assert.Equal(t, 21, commonConf.IdleTimeout)
assert.Equal(t, "http://hook", commonConf.Actions.Hook)
assert.Len(t, commonConf.Actions.ExecuteOn, 1)
assert.True(t, utils.IsStringInSlice("upload", commonConf.Actions.ExecuteOn))
assert.Equal(t, 1, commonConf.SetstatMode)
assert.Equal(t, 1, commonConf.ProxyProtocol)
assert.Len(t, commonConf.ProxyAllowed, 1)
assert.True(t, utils.IsStringInSlice("192.168.1.1", commonConf.ProxyAllowed))
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestHostKeyCompatibility(t *testing.T) {
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.Keys = []sftpd.Key{ //nolint:staticcheck
{
PrivateKey: "rsa",
},
{
PrivateKey: "ecdsa",
},
}
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
assert.Equal(t, 2, len(sftpdConf.HostKeys))
assert.True(t, utils.IsStringInSlice("rsa", sftpdConf.HostKeys))
assert.True(t, utils.IsStringInSlice("ecdsa", sftpdConf.HostKeys))
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestSetGetConfig(t *testing.T) {
sftpdConf := config.GetSFTPDConfig()
sftpdConf.MaxAuthTries = 10
config.SetSFTPDConfig(sftpdConf)
assert.Equal(t, sftpdConf.MaxAuthTries, config.GetSFTPDConfig().MaxAuthTries)
dataProviderConf := config.GetProviderConf()
dataProviderConf.Host = "test host"
config.SetProviderConf(dataProviderConf)
assert.Equal(t, dataProviderConf.Host, config.GetProviderConf().Host)
httpdConf := config.GetHTTPDConfig()
httpdConf.BindAddress = "0.0.0.0"
config.SetHTTPDConfig(httpdConf)
assert.Equal(t, httpdConf.BindAddress, config.GetHTTPDConfig().BindAddress)
commonConf := config.GetCommonConfig()
commonConf.IdleTimeout = 10
config.SetCommonConfig(commonConf)
assert.Equal(t, commonConf.IdleTimeout, config.GetCommonConfig().IdleTimeout)
ftpdConf := config.GetFTPDConfig()
ftpdConf.CertificateFile = "cert"
ftpdConf.CertificateKeyFile = "key"
config.SetFTPDConfig(ftpdConf)
assert.Equal(t, ftpdConf.CertificateFile, config.GetFTPDConfig().CertificateFile)
assert.Equal(t, ftpdConf.CertificateKeyFile, config.GetFTPDConfig().CertificateKeyFile)
webDavConf := config.GetWebDAVDConfig()
webDavConf.CertificateFile = "dav_cert"
webDavConf.CertificateKeyFile = "dav_key"
config.SetWebDAVDConfig(webDavConf)
assert.Equal(t, webDavConf.CertificateFile, config.GetWebDAVDConfig().CertificateFile)
assert.Equal(t, webDavConf.CertificateKeyFile, config.GetWebDAVDConfig().CertificateKeyFile)
}
func TestConfigFromEnv(t *testing.T) {
os.Setenv("SFTPGO_SFTPD__BIND_ADDRESS", "127.0.0.1")
os.Setenv("SFTPGO_DATA_PROVIDER__PASSWORD_HASHING__ARGON2_OPTIONS__ITERATIONS", "41")
os.Setenv("SFTPGO_DATA_PROVIDER__POOL_SIZE", "10")
os.Setenv("SFTPGO_DATA_PROVIDER__ACTIONS__EXECUTE_ON", "add")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_SFTPD__BIND_ADDRESS")
os.Unsetenv("SFTPGO_DATA_PROVIDER__PASSWORD_HASHING__ARGON2_OPTIONS__ITERATIONS")
os.Unsetenv("SFTPGO_DATA_PROVIDER__POOL_SIZE")
os.Unsetenv("SFTPGO_DATA_PROVIDER__ACTIONS__EXECUTE_ON")
})
err := config.LoadConfig(".", "invalid config")
assert.NoError(t, err)
sftpdConfig := config.GetSFTPDConfig()
assert.Equal(t, "127.0.0.1", sftpdConfig.BindAddress)
dataProviderConf := config.GetProviderConf()
assert.Equal(t, uint32(41), dataProviderConf.PasswordHashing.Argon2Options.Iterations)
assert.Equal(t, 10, dataProviderConf.PoolSize)
assert.Len(t, dataProviderConf.Actions.ExecuteOn, 1)
assert.Contains(t, dataProviderConf.Actions.ExecuteOn, "add")
}

crowdin.yml Normal file

@@ -0,0 +1,6 @@
project_id_env: CROWDIN_PROJECT_ID
api_token_env: CROWDIN_PERSONAL_TOKEN
files:
- source: /static/locales/en/translation.json
translation: /static/locales/%two_letters_code%/%original_file_name%
type: i18next_json

File diff suppressed because it is too large

@@ -1,17 +0,0 @@
// +build nobolt
package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/version"
)
func init() {
version.AddFeature("-bolt")
}
func initializeBoltProvider(basePath string) error {
return errors.New("bolt disabled at build time")
}

File diff suppressed because it is too large

@@ -1,675 +0,0 @@
package dataprovider
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"sort"
"sync"
"time"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
var (
errMemoryProviderClosed = errors.New("memory provider is closed")
)
type memoryProviderHandle struct {
// configuration file to use for loading users
configFile string
sync.Mutex
isClosed bool
// slice with ordered usernames
usernames []string
// mapping between ID and username
usersIdx map[int64]string
// map for users, username is the key
users map[string]User
// map for virtual folders, MappedPath is the key
vfolders map[string]vfs.BaseVirtualFolder
// slice with ordered folders mapped path
vfoldersPaths []string
}
// MemoryProvider auth provider for a memory store
type MemoryProvider struct {
dbHandle *memoryProviderHandle
}
func initializeMemoryProvider(basePath string) error {
logSender = fmt.Sprintf("dataprovider_%v", MemoryDataProviderName)
configFile := ""
if utils.IsFileInputValid(config.Name) {
configFile = config.Name
if !filepath.IsAbs(configFile) {
configFile = filepath.Join(basePath, configFile)
}
}
provider = MemoryProvider{
dbHandle: &memoryProviderHandle{
isClosed: false,
usernames: []string{},
usersIdx: make(map[int64]string),
users: make(map[string]User),
vfolders: make(map[string]vfs.BaseVirtualFolder),
vfoldersPaths: []string{},
configFile: configFile,
},
}
return provider.reloadConfig()
}
func (p MemoryProvider) checkAvailability() error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
return nil
}
func (p MemoryProvider) close() error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
p.dbHandle.isClosed = true
return nil
}
func (p MemoryProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
var user User
if len(password) == 0 {
return user, errors.New("Credentials cannot be null or empty")
}
user, err := p.userExists(username)
if err != nil {
providerLog(logger.LevelWarn, "error authenticating user %#v, error: %v", username, err)
return user, err
}
return checkUserAndPass(user, password, ip, protocol)
}
func (p MemoryProvider) validateUserAndPubKey(username string, pubKey []byte) (User, string, error) {
var user User
if len(pubKey) == 0 {
return user, "", errors.New("Credentials cannot be null or empty")
}
user, err := p.userExists(username)
if err != nil {
providerLog(logger.LevelWarn, "error authenticating user %#v, error: %v", username, err)
return user, "", err
}
return checkUserAndPubKey(user, pubKey)
}
func (p MemoryProvider) getUserByID(ID int64) (User, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return User{}, errMemoryProviderClosed
}
if val, ok := p.dbHandle.usersIdx[ID]; ok {
return p.userExistsInternal(val)
}
return User{}, &RecordNotFoundError{err: fmt.Sprintf("user with ID %v does not exist", ID)}
}
func (p MemoryProvider) updateLastLogin(username string) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
user, err := p.userExistsInternal(username)
if err != nil {
return err
}
user.LastLogin = utils.GetTimeAsMsSinceEpoch(time.Now())
p.dbHandle.users[user.Username] = user
return nil
}
func (p MemoryProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
user, err := p.userExistsInternal(username)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota for user %#v error: %v", username, err)
return err
}
if reset {
user.UsedQuotaSize = sizeAdd
user.UsedQuotaFiles = filesAdd
} else {
user.UsedQuotaSize += sizeAdd
user.UsedQuotaFiles += filesAdd
}
user.LastQuotaUpdate = utils.GetTimeAsMsSinceEpoch(time.Now())
providerLog(logger.LevelDebug, "quota updated for user %#v, files increment: %v size increment: %v is reset? %v",
username, filesAdd, sizeAdd, reset)
p.dbHandle.users[user.Username] = user
return nil
}
func (p MemoryProvider) getUsedQuota(username string) (int, int64, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return 0, 0, errMemoryProviderClosed
}
user, err := p.userExistsInternal(username)
if err != nil {
providerLog(logger.LevelWarn, "unable to get quota for user %#v error: %v", username, err)
return 0, 0, err
}
return user.UsedQuotaFiles, user.UsedQuotaSize, err
}
func (p MemoryProvider) addUser(user User) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
err := validateUser(&user)
if err != nil {
return err
}
_, err = p.userExistsInternal(user.Username)
if err == nil {
return fmt.Errorf("username %#v already exists", user.Username)
}
user.ID = p.getNextID()
user.LastQuotaUpdate = 0
user.UsedQuotaSize = 0
user.UsedQuotaFiles = 0
user.LastLogin = 0
user.VirtualFolders = p.joinVirtualFoldersFields(user)
p.dbHandle.users[user.Username] = user
p.dbHandle.usersIdx[user.ID] = user.Username
p.dbHandle.usernames = append(p.dbHandle.usernames, user.Username)
sort.Strings(p.dbHandle.usernames)
return nil
}
func (p MemoryProvider) updateUser(user User) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
err := validateUser(&user)
if err != nil {
return err
}
u, err := p.userExistsInternal(user.Username)
if err != nil {
return err
}
for _, oldFolder := range u.VirtualFolders {
p.removeUserFromFolderMapping(oldFolder.MappedPath, u.Username)
}
user.VirtualFolders = p.joinVirtualFoldersFields(user)
user.LastQuotaUpdate = u.LastQuotaUpdate
user.UsedQuotaSize = u.UsedQuotaSize
user.UsedQuotaFiles = u.UsedQuotaFiles
user.LastLogin = u.LastLogin
p.dbHandle.users[user.Username] = user
return nil
}
func (p MemoryProvider) deleteUser(user User) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
u, err := p.userExistsInternal(user.Username)
if err != nil {
return err
}
for _, oldFolder := range u.VirtualFolders {
p.removeUserFromFolderMapping(oldFolder.MappedPath, u.Username)
}
delete(p.dbHandle.users, user.Username)
delete(p.dbHandle.usersIdx, user.ID)
// this could be more efficient
p.dbHandle.usernames = []string{}
for username := range p.dbHandle.users {
p.dbHandle.usernames = append(p.dbHandle.usernames, username)
}
sort.Strings(p.dbHandle.usernames)
return nil
}
func (p MemoryProvider) dumpUsers() ([]User, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
users := make([]User, 0, len(p.dbHandle.usernames))
var err error
if p.dbHandle.isClosed {
return users, errMemoryProviderClosed
}
for _, username := range p.dbHandle.usernames {
user := p.dbHandle.users[username]
err = addCredentialsToUser(&user)
if err != nil {
return users, err
}
users = append(users, user)
}
return users, err
}
func (p MemoryProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
folders := make([]vfs.BaseVirtualFolder, 0, len(p.dbHandle.vfoldersPaths))
if p.dbHandle.isClosed {
return folders, errMemoryProviderClosed
}
for _, f := range p.dbHandle.vfolders {
folders = append(folders, f)
}
return folders, nil
}
func (p MemoryProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
users := make([]User, 0, limit)
var err error
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return users, errMemoryProviderClosed
}
if limit <= 0 {
return users, err
}
if len(username) > 0 {
if offset == 0 {
user, err := p.userExistsInternal(username)
if err == nil {
users = append(users, HideUserSensitiveData(&user))
}
}
return users, err
}
itNum := 0
if order == OrderASC {
for _, username := range p.dbHandle.usernames {
itNum++
if itNum <= offset {
continue
}
user := p.dbHandle.users[username]
users = append(users, HideUserSensitiveData(&user))
if len(users) >= limit {
break
}
}
} else {
for i := len(p.dbHandle.usernames) - 1; i >= 0; i-- {
itNum++
if itNum <= offset {
continue
}
username := p.dbHandle.usernames[i]
user := p.dbHandle.users[username]
users = append(users, HideUserSensitiveData(&user))
if len(users) >= limit {
break
}
}
}
return users, err
}
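// Worked example for getUsers (illustrative note, not part of the original code): with
// usernames ["a", "b", "c", "d", "e"], limit = 2, offset = 2 and OrderASC, the loop skips the
// first two entries and returns the users for "c" and "d"; with the descending order it walks
// the slice backwards and returns "c" and "b".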
func (p MemoryProvider) userExists(username string) (User, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return User{}, errMemoryProviderClosed
}
return p.userExistsInternal(username)
}
func (p MemoryProvider) userExistsInternal(username string) (User, error) {
if val, ok := p.dbHandle.users[username]; ok {
return val.getACopy(), nil
}
return User{}, &RecordNotFoundError{err: fmt.Sprintf("username %#v does not exist", username)}
}
func (p MemoryProvider) updateFolderQuota(mappedPath string, filesAdd int, sizeAdd int64, reset bool) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
folder, err := p.folderExistsInternal(mappedPath)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota for folder %#v error: %v", mappedPath, err)
return err
}
if reset {
folder.UsedQuotaSize = sizeAdd
folder.UsedQuotaFiles = filesAdd
} else {
folder.UsedQuotaSize += sizeAdd
folder.UsedQuotaFiles += filesAdd
}
folder.LastQuotaUpdate = utils.GetTimeAsMsSinceEpoch(time.Now())
p.dbHandle.vfolders[mappedPath] = folder
return nil
}
func (p MemoryProvider) getUsedFolderQuota(mappedPath string) (int, int64, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return 0, 0, errMemoryProviderClosed
}
folder, err := p.folderExistsInternal(mappedPath)
if err != nil {
providerLog(logger.LevelWarn, "unable to get quota for folder %#v error: %v", mappedPath, err)
return 0, 0, err
}
return folder.UsedQuotaFiles, folder.UsedQuotaSize, err
}
func (p MemoryProvider) joinVirtualFoldersFields(user User) []vfs.VirtualFolder {
var folders []vfs.VirtualFolder
for _, folder := range user.VirtualFolders {
f, err := p.addOrGetFolderInternal(folder.MappedPath, user.Username, folder.UsedQuotaSize, folder.UsedQuotaFiles,
folder.LastQuotaUpdate)
if err == nil {
folder.UsedQuotaFiles = f.UsedQuotaFiles
folder.UsedQuotaSize = f.UsedQuotaSize
folder.LastQuotaUpdate = f.LastQuotaUpdate
folder.ID = f.ID
folders = append(folders, folder)
}
}
return folders
}
func (p MemoryProvider) removeUserFromFolderMapping(mappedPath, username string) {
folder, err := p.folderExistsInternal(mappedPath)
if err == nil {
var usernames []string
for _, user := range folder.Users {
if user != username {
usernames = append(usernames, user)
}
}
folder.Users = usernames
p.dbHandle.vfolders[folder.MappedPath] = folder
}
}
func (p MemoryProvider) updateFoldersMappingInternal(folder vfs.BaseVirtualFolder) {
p.dbHandle.vfolders[folder.MappedPath] = folder
if !utils.IsStringInSlice(folder.MappedPath, p.dbHandle.vfoldersPaths) {
p.dbHandle.vfoldersPaths = append(p.dbHandle.vfoldersPaths, folder.MappedPath)
sort.Strings(p.dbHandle.vfoldersPaths)
}
}
func (p MemoryProvider) addOrGetFolderInternal(mappedPath, username string, usedQuotaSize int64, usedQuotaFiles int, lastQuotaUpdate int64) (vfs.BaseVirtualFolder, error) {
folder, err := p.folderExistsInternal(mappedPath)
if _, ok := err.(*RecordNotFoundError); ok {
folder := vfs.BaseVirtualFolder{
ID: p.getNextFolderID(),
MappedPath: mappedPath,
UsedQuotaSize: usedQuotaSize,
UsedQuotaFiles: usedQuotaFiles,
LastQuotaUpdate: lastQuotaUpdate,
Users: []string{username},
}
p.updateFoldersMappingInternal(folder)
return folder, nil
}
if err == nil && !utils.IsStringInSlice(username, folder.Users) {
folder.Users = append(folder.Users, username)
p.updateFoldersMappingInternal(folder)
}
return folder, err
}
func (p MemoryProvider) folderExistsInternal(mappedPath string) (vfs.BaseVirtualFolder, error) {
if val, ok := p.dbHandle.vfolders[mappedPath]; ok {
return val, nil
}
return vfs.BaseVirtualFolder{}, &RecordNotFoundError{err: fmt.Sprintf("folder %#v does not exist", mappedPath)}
}
func (p MemoryProvider) getFolders(limit, offset int, order, folderPath string) ([]vfs.BaseVirtualFolder, error) {
folders := make([]vfs.BaseVirtualFolder, 0, limit)
var err error
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return folders, errMemoryProviderClosed
}
if limit <= 0 {
return folders, err
}
if len(folderPath) > 0 {
if offset == 0 {
var folder vfs.BaseVirtualFolder
folder, err = p.folderExistsInternal(folderPath)
if err == nil {
folders = append(folders, folder)
}
}
return folders, err
}
itNum := 0
if order == OrderASC {
for _, mappedPath := range p.dbHandle.vfoldersPaths {
itNum++
if itNum <= offset {
continue
}
folder := p.dbHandle.vfolders[mappedPath]
folders = append(folders, folder)
if len(folders) >= limit {
break
}
}
} else {
for i := len(p.dbHandle.vfoldersPaths) - 1; i >= 0; i-- {
itNum++
if itNum <= offset {
continue
}
mappedPath := p.dbHandle.vfoldersPaths[i]
folder := p.dbHandle.vfolders[mappedPath]
folders = append(folders, folder)
if len(folders) >= limit {
break
}
}
}
return folders, err
}
func (p MemoryProvider) getFolderByPath(mappedPath string) (vfs.BaseVirtualFolder, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return vfs.BaseVirtualFolder{}, errMemoryProviderClosed
}
return p.folderExistsInternal(mappedPath)
}
func (p MemoryProvider) addFolder(folder vfs.BaseVirtualFolder) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
err := validateFolder(&folder)
if err != nil {
return err
}
_, err = p.folderExistsInternal(folder.MappedPath)
if err == nil {
return fmt.Errorf("folder %#v already exists", folder.MappedPath)
}
folder.ID = p.getNextFolderID()
p.dbHandle.vfolders[folder.MappedPath] = folder
p.dbHandle.vfoldersPaths = append(p.dbHandle.vfoldersPaths, folder.MappedPath)
sort.Strings(p.dbHandle.vfoldersPaths)
return nil
}
func (p MemoryProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
_, err := p.folderExistsInternal(folder.MappedPath)
if err != nil {
return err
}
for _, username := range folder.Users {
user, err := p.userExistsInternal(username)
if err == nil {
var folders []vfs.VirtualFolder
for _, userFolder := range user.VirtualFolders {
if folder.MappedPath != userFolder.MappedPath {
folders = append(folders, userFolder)
}
}
user.VirtualFolders = folders
p.dbHandle.users[user.Username] = user
}
}
delete(p.dbHandle.vfolders, folder.MappedPath)
p.dbHandle.vfoldersPaths = []string{}
for mappedPath := range p.dbHandle.vfolders {
p.dbHandle.vfoldersPaths = append(p.dbHandle.vfoldersPaths, mappedPath)
}
sort.Strings(p.dbHandle.vfoldersPaths)
return nil
}
func (p MemoryProvider) getNextID() int64 {
nextID := int64(1)
for id := range p.dbHandle.usersIdx {
if id >= nextID {
nextID = id + 1
}
}
return nextID
}
func (p MemoryProvider) getNextFolderID() int64 {
nextID := int64(1)
for _, v := range p.dbHandle.vfolders {
if v.ID >= nextID {
nextID = v.ID + 1
}
}
return nextID
}
func (p MemoryProvider) clear() {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
p.dbHandle.usernames = []string{}
p.dbHandle.usersIdx = make(map[int64]string)
p.dbHandle.users = make(map[string]User)
p.dbHandle.vfoldersPaths = []string{}
p.dbHandle.vfolders = make(map[string]vfs.BaseVirtualFolder)
}
func (p MemoryProvider) reloadConfig() error {
if len(p.dbHandle.configFile) == 0 {
providerLog(logger.LevelDebug, "no users configuration file defined")
return nil
}
providerLog(logger.LevelDebug, "loading users from file: %#v", p.dbHandle.configFile)
fi, err := os.Stat(p.dbHandle.configFile)
if err != nil {
providerLog(logger.LevelWarn, "error loading users: %v", err)
return err
}
if fi.Size() == 0 {
err = errors.New("users configuration file is invalid, its size must be > 0")
providerLog(logger.LevelWarn, "error loading users: %v", err)
return err
}
if fi.Size() > 10485760 {
err = errors.New("users configuration file is invalid, its size must be <= 10485760 bytes")
providerLog(logger.LevelWarn, "error loading users: %v", err)
return err
}
content, err := ioutil.ReadFile(p.dbHandle.configFile)
if err != nil {
providerLog(logger.LevelWarn, "error loading users: %v", err)
return err
}
var dump BackupData
err = json.Unmarshal(content, &dump)
if err != nil {
providerLog(logger.LevelWarn, "error loading users: %v", err)
return err
}
p.clear()
for _, folder := range dump.Folders {
_, err := p.getFolderByPath(folder.MappedPath)
if err == nil {
logger.Debug(logSender, "", "folder %#v already exists, restore not needed", folder.MappedPath)
continue
}
folder.Users = nil
err = p.addFolder(folder)
if err != nil {
providerLog(logger.LevelWarn, "error adding folder %#v: %v", folder.MappedPath, err)
return err
}
}
for _, user := range dump.Users {
u, err := p.userExists(user.Username)
if err == nil {
user.ID = u.ID
err = p.updateUser(user)
if err != nil {
providerLog(logger.LevelWarn, "error updating user %#v: %v", user.Username, err)
return err
}
} else {
err = p.addUser(user)
if err != nil {
providerLog(logger.LevelWarn, "error adding user %#v: %v", user.Username, err)
return err
}
}
}
providerLog(logger.LevelDebug, "user and folders loaded from file: %#v", p.dbHandle.configFile)
return nil
}
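// Illustrative only: the configuration file loaded above is expected to contain a JSON
// dump compatible with BackupData (the same shape produced by dumpUsers/dumpFolders),
// roughly along these lines; the exact JSON field names are not shown in this diff and
// are assumptions:
//
//	{
//	  "users": [{"id": 1, "username": "alice", "home_dir": "/srv/sftpgo/alice", "status": 1}],
//	  "folders": [{"id": 1, "mapped_path": "/srv/sftpgo/shared"}]
//	}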
// initializeDatabase does nothing, no initialization is needed for the memory provider
func (p MemoryProvider) initializeDatabase() error {
return ErrNoInitRequired
}
func (p MemoryProvider) migrateDatabase() error {
return ErrNoInitRequired
}

@@ -1,251 +0,0 @@
// +build !nomysql
package dataprovider
import (
"context"
"database/sql"
"fmt"
"strings"
"time"
// we import go-sql-driver/mysql here to be able to disable MySQL support using a build tag
_ "github.com/go-sql-driver/mysql"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
)
const (
mysqlUsersTableSQL = "CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`username` varchar(255) NOT NULL UNIQUE, `password` varchar(255) NULL, `public_keys` longtext NULL, " +
"`home_dir` varchar(255) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, `max_sessions` integer NOT NULL, " +
" `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, " +
"`upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, `expiration_date` bigint(20) NOT NULL, " +
"`last_login` bigint(20) NOT NULL, `status` int(11) NOT NULL, `filters` longtext DEFAULT NULL, " +
"`filesystem` longtext DEFAULT NULL);"
mysqlSchemaTableSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);"
mysqlV2SQL = "ALTER TABLE `{{users}}` ADD COLUMN `virtual_folders` longtext NULL;"
mysqlV3SQL = "ALTER TABLE `{{users}}` MODIFY `password` longtext NULL;"
mysqlV4SQL = "CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `path` varchar(512) NOT NULL UNIQUE," +
"`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL);" +
"ALTER TABLE `{{users}}` MODIFY `home_dir` varchar(512) NOT NULL;" +
"ALTER TABLE `{{users}}` DROP COLUMN `virtual_folders`;" +
"CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` varchar(512) NOT NULL, " +
"`quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `folder_id` integer NOT NULL, `user_id` integer NOT NULL);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
)
// MySQLProvider auth provider for MySQL/MariaDB database
type MySQLProvider struct {
dbHandle *sql.DB
}
func init() {
version.AddFeature("+mysql")
}
func initializeMySQLProvider() error {
var err error
logSender = fmt.Sprintf("dataprovider_%v", MySQLDataProviderName)
dbHandle, err := sql.Open("mysql", getMySQLConnectionString(false))
if err == nil {
providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
getMySQLConnectionString(true), config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
dbHandle.SetConnMaxLifetime(1800 * time.Second)
provider = MySQLProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating mysql database handler, connection string: %#v, error: %v",
getMySQLConnectionString(true), err)
}
return err
}
func getMySQLConnectionString(redactedPwd bool) string {
var connectionString string
if len(config.ConnectionString) == 0 {
password := config.Password
if redactedPwd {
password = "[redacted]"
}
connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8&interpolateParams=true&timeout=10s&tls=%v&writeTimeout=10s&readTimeout=10s",
config.Username, password, config.Host, config.Port, config.Name, getSSLMode())
} else {
connectionString = config.ConnectionString
}
return connectionString
}
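// For example (illustrative values only), with Username "sftpgo", Host "127.0.0.1",
// Port 3306 and Name "sftpgo", the DSN built above would look like:
//
//	sftpgo:[redacted]@tcp([127.0.0.1]:3306)/sftpgo?charset=utf8&interpolateParams=true&timeout=10s&tls=<getSSLMode()>&writeTimeout=10s&readTimeout=10s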
func (p MySQLProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
func (p MySQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
func (p MySQLProvider) getUserByID(ID int64) (User, error) {
return sqlCommonGetUserByID(ID, p.dbHandle)
}
func (p MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p MySQLProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p MySQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p MySQLProvider) userExists(username string) (User, error) {
return sqlCommonCheckUserExists(username, p.dbHandle)
}
func (p MySQLProvider) addUser(user User) error {
return sqlCommonAddUser(user, p.dbHandle)
}
func (p MySQLProvider) updateUser(user User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p MySQLProvider) deleteUser(user User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p MySQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p MySQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
}
func (p MySQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p MySQLProvider) getFolders(limit, offset int, order, folderPath string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, folderPath, p.dbHandle)
}
func (p MySQLProvider) getFolderByPath(mappedPath string) (vfs.BaseVirtualFolder, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return sqlCommonCheckFolderExists(ctx, mappedPath, p.dbHandle)
}
func (p MySQLProvider) addFolder(folder vfs.BaseVirtualFolder) error {
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p MySQLProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
func (p MySQLProvider) updateFolderQuota(mappedPath string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(mappedPath, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p MySQLProvider) getUsedFolderQuota(mappedPath string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(mappedPath, p.dbHandle)
}
func (p MySQLProvider) close() error {
return p.dbHandle.Close()
}
func (p MySQLProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p MySQLProvider) initializeDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
sqlUsers := strings.Replace(mysqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
_, err = tx.Exec(strings.Replace(mysqlSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
_, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
return tx.Commit()
}
func (p MySQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
return ErrNoInitRequired
}
switch dbVersion.Version {
case 1:
err = updateMySQLDatabaseFrom1To2(p.dbHandle)
if err != nil {
return err
}
err = updateMySQLDatabaseFrom2To3(p.dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFrom3To4(p.dbHandle)
case 2:
err = updateMySQLDatabaseFrom2To3(p.dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFrom3To4(p.dbHandle)
case 3:
return updateMySQLDatabaseFrom3To4(p.dbHandle)
default:
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
func updateMySQLDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(mysqlV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func updateMySQLDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.Replace(mysqlV3SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}
func updateMySQLDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(mysqlV4SQL, dbHandle)
}

@@ -1,17 +0,0 @@
// +build nomysql
package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/version"
)
func init() {
version.AddFeature("-mysql")
}
func initializeMySQLProvider() error {
return errors.New("MySQL disabled at build time")
}

@@ -1,250 +0,0 @@
// +build !nopgsql
package dataprovider
import (
"context"
"database/sql"
"fmt"
"strings"
// we import lib/pq here to be able to disable PostgreSQL support using a build tag
_ "github.com/lib/pq"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
)
const (
pgsqlUsersTableSQL = `CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
pgsqlSchemaTableSQL = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);`
pgsqlV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
pgsqlV3SQL = `ALTER TABLE "{{users}}" ALTER COLUMN "password" TYPE text USING "password"::text;`
pgsqlV4SQL = `CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "path" varchar(512) NOT NULL UNIQUE, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
ALTER TABLE "{{users}}" ALTER COLUMN "home_dir" TYPE varchar(512) USING "home_dir"::varchar(512);
ALTER TABLE "{{users}}" DROP COLUMN "virtual_folders" CASCADE;
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_folder_id_fk_folders_id" FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_user_id_fk_users_id" FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
)
// PGSQLProvider auth provider for PostgreSQL database
type PGSQLProvider struct {
dbHandle *sql.DB
}
func init() {
version.AddFeature("+pgsql")
}
func initializePGSQLProvider() error {
var err error
logSender = fmt.Sprintf("dataprovider_%v", PGSQLDataProviderName)
dbHandle, err := sql.Open("postgres", getPGSQLConnectionString(false))
if err == nil {
providerLog(logger.LevelDebug, "postgres database handle created, connection string: %#v, pool size: %v",
getPGSQLConnectionString(true), config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
provider = PGSQLProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating postgres database handler, connection string: %#v, error: %v",
getPGSQLConnectionString(true), err)
}
return err
}
func getPGSQLConnectionString(redactedPwd bool) string {
var connectionString string
if len(config.ConnectionString) == 0 {
password := config.Password
if redactedPwd {
password = "[redacted]"
}
connectionString = fmt.Sprintf("host='%v' port=%v dbname='%v' user='%v' password='%v' sslmode=%v connect_timeout=10",
config.Host, config.Port, config.Name, config.Username, password, getSSLMode())
} else {
connectionString = config.ConnectionString
}
return connectionString
}
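// For example (illustrative values only), the keyword/value connection string built
// above would look like:
//
//	host='127.0.0.1' port=5432 dbname='sftpgo' user='sftpgo' password='[redacted]' sslmode=<getSSLMode()> connect_timeout=10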
func (p PGSQLProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
func (p PGSQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
func (p PGSQLProvider) getUserByID(ID int64) (User, error) {
return sqlCommonGetUserByID(ID, p.dbHandle)
}
func (p PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p PGSQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p PGSQLProvider) userExists(username string) (User, error) {
return sqlCommonCheckUserExists(username, p.dbHandle)
}
func (p PGSQLProvider) addUser(user User) error {
return sqlCommonAddUser(user, p.dbHandle)
}
func (p PGSQLProvider) updateUser(user User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p PGSQLProvider) deleteUser(user User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p PGSQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p PGSQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
}
func (p PGSQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p PGSQLProvider) getFolders(limit, offset int, order, folderPath string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, folderPath, p.dbHandle)
}
func (p PGSQLProvider) getFolderByPath(mappedPath string) (vfs.BaseVirtualFolder, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return sqlCommonCheckFolderExists(ctx, mappedPath, p.dbHandle)
}
func (p PGSQLProvider) addFolder(folder vfs.BaseVirtualFolder) error {
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p PGSQLProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
func (p PGSQLProvider) updateFolderQuota(mappedPath string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(mappedPath, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p PGSQLProvider) getUsedFolderQuota(mappedPath string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(mappedPath, p.dbHandle)
}
func (p PGSQLProvider) close() error {
return p.dbHandle.Close()
}
func (p PGSQLProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p PGSQLProvider) initializeDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
sqlUsers := strings.Replace(pgsqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
_, err = tx.Exec(strings.Replace(pgsqlSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
_, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
return tx.Commit()
}
func (p PGSQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
return ErrNoInitRequired
}
switch dbVersion.Version {
case 1:
err = updatePGSQLDatabaseFrom1To2(p.dbHandle)
if err != nil {
return err
}
err = updatePGSQLDatabaseFrom2To3(p.dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFrom3To4(p.dbHandle)
case 2:
err = updatePGSQLDatabaseFrom2To3(p.dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFrom3To4(p.dbHandle)
case 3:
return updatePGSQLDatabaseFrom3To4(p.dbHandle)
default:
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
func updatePGSQLDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(pgsqlV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func updatePGSQLDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.Replace(pgsqlV3SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}
func updatePGSQLDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(pgsqlV4SQL, dbHandle)
}

@@ -1,17 +0,0 @@
// +build nopgsql
package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/version"
)
func init() {
version.AddFeature("-pgsql")
}
func initializePGSQLProvider() error {
return errors.New("PostgreSQL disabled at build time")
}

@@ -1,942 +0,0 @@
package dataprovider
import (
"context"
"database/sql"
"encoding/json"
"errors"
"strings"
"time"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
const (
sqlDatabaseVersion = 4
initialDBVersionSQL = "INSERT INTO {{schema_version}} (version) VALUES (1);"
defaultSQLQueryTimeout = 10 * time.Second
longSQLQueryTimeout = 60 * time.Second
)
var errSQLFoldersAssociation = errors.New("unable to associate virtual folders to user")
type sqlQuerier interface {
PrepareContext(ctx context.Context, query string) (*sql.Stmt, error)
}
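// Both *sql.DB and *sql.Tx implement PrepareContext, so either satisfies sqlQuerier:
// the helpers below can run directly against the connection pool or inside an open
// transaction (as generateVirtualFoldersMapping does when called from add/update user).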
func getUserByUsername(username string, dbHandle sqlQuerier) (User, error) {
var user User
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getUserByUsernameQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return user, err
}
defer stmt.Close()
row := stmt.QueryRowContext(ctx, username)
user, err = getUserFromDbRow(row, nil)
if err != nil {
return user, err
}
return getUserWithVirtualFolders(user, dbHandle)
}
func sqlCommonValidateUserAndPass(username, password, ip, protocol string, dbHandle *sql.DB) (User, error) {
var user User
if len(password) == 0 {
return user, errors.New("Credentials cannot be null or empty")
}
user, err := getUserByUsername(username, dbHandle)
if err != nil {
providerLog(logger.LevelWarn, "error authenticating user: %v, error: %v", username, err)
return user, err
}
return checkUserAndPass(user, password, ip, protocol)
}
func sqlCommonValidateUserAndPubKey(username string, pubKey []byte, dbHandle *sql.DB) (User, string, error) {
var user User
if len(pubKey) == 0 {
return user, "", errors.New("Credentials cannot be null or empty")
}
user, err := getUserByUsername(username, dbHandle)
if err != nil {
providerLog(logger.LevelWarn, "error authenticating user: %v, error: %v", username, err)
return user, "", err
}
return checkUserAndPubKey(user, pubKey)
}
func sqlCommonCheckAvailability(dbHandle *sql.DB) error {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return dbHandle.PingContext(ctx)
}
func sqlCommonGetUserByID(ID int64, dbHandle *sql.DB) (User, error) {
var user User
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getUserByIDQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return user, err
}
defer stmt.Close()
row := stmt.QueryRowContext(ctx, ID)
user, err = getUserFromDbRow(row, nil)
if err != nil {
return user, err
}
return getUserWithVirtualFolders(user, dbHandle)
}
func sqlCommonUpdateQuota(username string, filesAdd int, sizeAdd int64, reset bool, dbHandle *sql.DB) error {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getUpdateQuotaQuery(reset)
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, sizeAdd, filesAdd, utils.GetTimeAsMsSinceEpoch(time.Now()), username)
if err == nil {
providerLog(logger.LevelDebug, "quota updated for user %#v, files increment: %v size increment: %v is reset? %v",
username, filesAdd, sizeAdd, reset)
} else {
providerLog(logger.LevelWarn, "error updating quota for user %#v: %v", username, err)
}
return err
}
func sqlCommonGetUsedQuota(username string, dbHandle *sql.DB) (int, int64, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getQuotaQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return 0, 0, err
}
defer stmt.Close()
var usedFiles int
var usedSize int64
err = stmt.QueryRowContext(ctx, username).Scan(&usedSize, &usedFiles)
if err != nil {
providerLog(logger.LevelWarn, "error getting quota for user: %v, error: %v", username, err)
return 0, 0, err
}
return usedFiles, usedSize, err
}
func sqlCommonUpdateLastLogin(username string, dbHandle *sql.DB) error {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getUpdateLastLoginQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, utils.GetTimeAsMsSinceEpoch(time.Now()), username)
if err == nil {
providerLog(logger.LevelDebug, "last login updated for user %#v", username)
} else {
providerLog(logger.LevelWarn, "error updating last login for user %#v: %v", username, err)
}
return err
}
func sqlCommonCheckUserExists(username string, dbHandle *sql.DB) (User, error) {
var user User
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getUserByUsernameQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return user, err
}
defer stmt.Close()
row := stmt.QueryRowContext(ctx, username)
user, err = getUserFromDbRow(row, nil)
if err != nil {
return user, err
}
return getUserWithVirtualFolders(user, dbHandle)
}
func sqlCommonAddUser(user User, dbHandle *sql.DB) error {
err := validateUser(&user)
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
tx, err := dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
q := getAddUserQuery()
stmt, err := tx.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
sqlCommonRollbackTransaction(tx)
return err
}
defer stmt.Close()
permissions, err := user.GetPermissionsAsJSON()
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
publicKeys, err := user.GetPublicKeysAsJSON()
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
filters, err := user.GetFiltersAsJSON()
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
fsConfig, err := user.GetFsConfigAsJSON()
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
_, err = stmt.ExecContext(ctx, user.Username, user.Password, string(publicKeys), user.HomeDir, user.UID, user.GID, user.MaxSessions, user.QuotaSize,
user.QuotaFiles, string(permissions), user.UploadBandwidth, user.DownloadBandwidth, user.Status, user.ExpirationDate, string(filters),
string(fsConfig))
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
err = generateVirtualFoldersMapping(ctx, user, tx)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
return tx.Commit()
}
func sqlCommonUpdateUser(user User, dbHandle *sql.DB) error {
err := validateUser(&user)
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
tx, err := dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
q := getUpdateUserQuery()
stmt, err := tx.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
sqlCommonRollbackTransaction(tx)
return err
}
defer stmt.Close()
permissions, err := user.GetPermissionsAsJSON()
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
publicKeys, err := user.GetPublicKeysAsJSON()
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
filters, err := user.GetFiltersAsJSON()
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
fsConfig, err := user.GetFsConfigAsJSON()
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
_, err = stmt.ExecContext(ctx, user.Password, string(publicKeys), user.HomeDir, user.UID, user.GID, user.MaxSessions, user.QuotaSize,
user.QuotaFiles, string(permissions), user.UploadBandwidth, user.DownloadBandwidth, user.Status, user.ExpirationDate,
string(filters), string(fsConfig), user.ID)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
err = generateVirtualFoldersMapping(ctx, user, tx)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
return tx.Commit()
}
func sqlCommonDeleteUser(user User, dbHandle *sql.DB) error {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getDeleteUserQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, user.ID)
return err
}
func sqlCommonDumpUsers(dbHandle sqlQuerier) ([]User, error) {
users := make([]User, 0, 100)
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
q := getDumpUsersQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return nil, err
}
defer stmt.Close()
rows, err := stmt.QueryContext(ctx)
if err != nil {
return users, err
}
defer rows.Close()
for rows.Next() {
u, err := getUserFromDbRow(nil, rows)
if err != nil {
return users, err
}
err = addCredentialsToUser(&u)
if err != nil {
return users, err
}
users = append(users, u)
}
return getUsersWithVirtualFolders(users, dbHandle)
}
func sqlCommonGetUsers(limit int, offset int, order string, username string, dbHandle sqlQuerier) ([]User, error) {
users := make([]User, 0, limit)
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getUsersQuery(order, username)
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return nil, err
}
defer stmt.Close()
var rows *sql.Rows
if len(username) > 0 {
rows, err = stmt.QueryContext(ctx, username, limit, offset) //nolint:rowserrcheck // rows.Err() is checked
} else {
rows, err = stmt.QueryContext(ctx, limit, offset) //nolint:rowserrcheck // rows.Err() is checked
}
// return early on a query error, otherwise rows would be nil when rows.Err() is checked below
if err != nil {
return users, err
}
defer rows.Close()
for rows.Next() {
u, err := getUserFromDbRow(nil, rows)
if err != nil {
return users, err
}
users = append(users, HideUserSensitiveData(&u))
}
err = rows.Err()
if err != nil {
return users, err
}
return getUsersWithVirtualFolders(users, dbHandle)
}
func updateUserPermissionsFromDb(user *User, permissions string) error {
var err error
perms := make(map[string][]string)
err = json.Unmarshal([]byte(permissions), &perms)
if err == nil {
user.Permissions = perms
} else {
// compatibility layer: until version 0.9.4 permissions were a string list
var list []string
err = json.Unmarshal([]byte(permissions), &list)
if err != nil {
return err
}
perms["/"] = list
user.Permissions = perms
}
return err
}
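// Illustrative example of the two accepted encodings (permission names are made up for
// the example):
//
//	current format:          {"/": ["list", "download"], "/uploads": ["upload"]}
//	pre-0.9.4 compat format: ["list", "download", "upload"]
//
// the flat list form is mapped to the "/" path, as done above.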
func getUserFromDbRow(row *sql.Row, rows *sql.Rows) (User, error) {
var user User
var permissions sql.NullString
var password sql.NullString
var publicKey sql.NullString
var filters sql.NullString
var fsConfig sql.NullString
var err error
if row != nil {
err = row.Scan(&user.ID, &user.Username, &password, &publicKey, &user.HomeDir, &user.UID, &user.GID, &user.MaxSessions,
&user.QuotaSize, &user.QuotaFiles, &permissions, &user.UsedQuotaSize, &user.UsedQuotaFiles, &user.LastQuotaUpdate,
&user.UploadBandwidth, &user.DownloadBandwidth, &user.ExpirationDate, &user.LastLogin, &user.Status, &filters, &fsConfig)
} else {
err = rows.Scan(&user.ID, &user.Username, &password, &publicKey, &user.HomeDir, &user.UID, &user.GID, &user.MaxSessions,
&user.QuotaSize, &user.QuotaFiles, &permissions, &user.UsedQuotaSize, &user.UsedQuotaFiles, &user.LastQuotaUpdate,
&user.UploadBandwidth, &user.DownloadBandwidth, &user.ExpirationDate, &user.LastLogin, &user.Status, &filters, &fsConfig)
}
if err != nil {
if err == sql.ErrNoRows {
return user, &RecordNotFoundError{err: err.Error()}
}
return user, err
}
if password.Valid {
user.Password = password.String
}
// we can have an empty string or invalid JSON in a NullString, so we do a
// relaxed check for optional fields: for example, we populate public keys only
// if unmarshal does not return an error
if publicKey.Valid {
var list []string
err = json.Unmarshal([]byte(publicKey.String), &list)
if err == nil {
user.PublicKeys = list
}
}
if permissions.Valid {
err = updateUserPermissionsFromDb(&user, permissions.String)
if err != nil {
return user, err
}
}
if filters.Valid {
var userFilters UserFilters
err = json.Unmarshal([]byte(filters.String), &userFilters)
if err == nil {
user.Filters = userFilters
}
}
if fsConfig.Valid {
var fs Filesystem
err = json.Unmarshal([]byte(fsConfig.String), &fs)
if err == nil {
user.FsConfig = fs
}
}
return user, err
}
func sqlCommonCheckFolderExists(ctx context.Context, name string, dbHandle sqlQuerier) (vfs.BaseVirtualFolder, error) {
var folder vfs.BaseVirtualFolder
q := getFolderByPathQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return folder, err
}
defer stmt.Close()
row := stmt.QueryRowContext(ctx, name)
err = row.Scan(&folder.ID, &folder.MappedPath, &folder.UsedQuotaSize, &folder.UsedQuotaFiles, &folder.LastQuotaUpdate)
if err == sql.ErrNoRows {
return folder, &RecordNotFoundError{err: err.Error()}
}
return folder, err
}
func sqlCommonAddOrGetFolder(ctx context.Context, name string, usedQuotaSize int64, usedQuotaFiles int, lastQuotaUpdate int64, dbHandle sqlQuerier) (vfs.BaseVirtualFolder, error) {
folder, err := sqlCommonCheckFolderExists(ctx, name, dbHandle)
if _, ok := err.(*RecordNotFoundError); ok {
f := vfs.BaseVirtualFolder{
MappedPath: name,
UsedQuotaSize: usedQuotaSize,
UsedQuotaFiles: usedQuotaFiles,
LastQuotaUpdate: lastQuotaUpdate,
}
err = sqlCommonAddFolder(f, dbHandle)
if err != nil {
return folder, err
}
return sqlCommonCheckFolderExists(ctx, name, dbHandle)
}
return folder, err
}
func sqlCommonAddFolder(folder vfs.BaseVirtualFolder, dbHandle sqlQuerier) error {
err := validateFolder(&folder)
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getAddFolderQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, folder.MappedPath, folder.UsedQuotaSize, folder.UsedQuotaFiles, folder.LastQuotaUpdate)
return err
}
func sqlCommonDeleteFolder(folder vfs.BaseVirtualFolder, dbHandle sqlQuerier) error {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getDeleteFolderQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, folder.ID)
return err
}
func sqlCommonDumpFolders(dbHandle sqlQuerier) ([]vfs.BaseVirtualFolder, error) {
folders := make([]vfs.BaseVirtualFolder, 0, 50)
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
q := getDumpFoldersQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return nil, err
}
defer stmt.Close()
rows, err := stmt.QueryContext(ctx)
if err != nil {
return folders, err
}
defer rows.Close()
for rows.Next() {
var folder vfs.BaseVirtualFolder
err = rows.Scan(&folder.ID, &folder.MappedPath, &folder.UsedQuotaSize, &folder.UsedQuotaFiles, &folder.LastQuotaUpdate)
if err != nil {
return folders, err
}
folders = append(folders, folder)
}
err = rows.Err()
if err != nil {
return folders, err
}
return getVirtualFoldersWithUsers(folders, dbHandle)
}
func sqlCommonGetFolders(limit, offset int, order, folderPath string, dbHandle sqlQuerier) ([]vfs.BaseVirtualFolder, error) {
folders := make([]vfs.BaseVirtualFolder, 0, limit)
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getFoldersQuery(order, folderPath)
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return nil, err
}
defer stmt.Close()
var rows *sql.Rows
if len(folderPath) > 0 {
rows, err = stmt.QueryContext(ctx, folderPath, limit, offset) //nolint:rowserrcheck // rows.Err() is checked
} else {
rows, err = stmt.QueryContext(ctx, limit, offset) //nolint:rowserrcheck // rows.Err() is checked
}
if err != nil {
return folders, err
}
defer rows.Close()
for rows.Next() {
var folder vfs.BaseVirtualFolder
err = rows.Scan(&folder.ID, &folder.MappedPath, &folder.UsedQuotaSize, &folder.UsedQuotaFiles, &folder.LastQuotaUpdate)
if err != nil {
return folders, err
}
folders = append(folders, folder)
}
err = rows.Err()
if err != nil {
return folders, err
}
return getVirtualFoldersWithUsers(folders, dbHandle)
}
func sqlCommonClearFolderMapping(ctx context.Context, user User, dbHandle sqlQuerier) error {
q := getClearFolderMappingQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, user.Username)
return err
}
func sqlCommonAddFolderMapping(ctx context.Context, user User, folder vfs.VirtualFolder, dbHandle sqlQuerier) error {
q := getAddFolderMappingQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, folder.VirtualPath, folder.QuotaSize, folder.QuotaFiles, folder.ID, user.Username)
return err
}
func generateVirtualFoldersMapping(ctx context.Context, user User, dbHandle sqlQuerier) error {
err := sqlCommonClearFolderMapping(ctx, user, dbHandle)
if err != nil {
return err
}
for _, vfolder := range user.VirtualFolders {
f, err := sqlCommonAddOrGetFolder(ctx, vfolder.MappedPath, 0, 0, 0, dbHandle)
if err != nil {
return err
}
vfolder.BaseVirtualFolder = f
err = sqlCommonAddFolderMapping(ctx, user, vfolder, dbHandle)
if err != nil {
return err
}
}
return err
}
func getUserWithVirtualFolders(user User, dbHandle sqlQuerier) (User, error) {
users, err := getUsersWithVirtualFolders([]User{user}, dbHandle)
if err != nil {
return user, err
}
if len(users) == 0 {
return user, errSQLFoldersAssociation
}
return users[0], err
}
func getUsersWithVirtualFolders(users []User, dbHandle sqlQuerier) ([]User, error) {
var err error
usersVirtualFolders := make(map[int64][]vfs.VirtualFolder)
if len(users) == 0 {
return users, err
}
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getRelatedFoldersForUsersQuery(users)
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return nil, err
}
defer stmt.Close()
rows, err := stmt.QueryContext(ctx)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var folder vfs.VirtualFolder
var userID int64
err = rows.Scan(&folder.ID, &folder.MappedPath, &folder.UsedQuotaSize, &folder.UsedQuotaFiles,
&folder.LastQuotaUpdate, &folder.VirtualPath, &folder.QuotaSize, &folder.QuotaFiles, &userID)
if err != nil {
return users, err
}
usersVirtualFolders[userID] = append(usersVirtualFolders[userID], folder)
}
err = rows.Err()
if err != nil {
return users, err
}
if len(usersVirtualFolders) == 0 {
return users, err
}
for idx := range users {
ref := &users[idx]
ref.VirtualFolders = usersVirtualFolders[ref.ID]
}
return users, err
}
func getVirtualFoldersWithUsers(folders []vfs.BaseVirtualFolder, dbHandle sqlQuerier) ([]vfs.BaseVirtualFolder, error) {
var err error
vFoldersUsers := make(map[int64][]string)
if len(folders) == 0 {
return folders, err
}
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getRelatedUsersForFoldersQuery(folders)
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return nil, err
}
defer stmt.Close()
rows, err := stmt.QueryContext(ctx)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var username string
var folderID int64
err = rows.Scan(&folderID, &username)
if err != nil {
return folders, err
}
vFoldersUsers[folderID] = append(vFoldersUsers[folderID], username)
}
err = rows.Err()
if err != nil {
return folders, err
}
if len(vFoldersUsers) == 0 {
return folders, err
}
for idx := range folders {
ref := &folders[idx]
ref.Users = vFoldersUsers[ref.ID]
}
return folders, err
}
func sqlCommonUpdateFolderQuota(mappedPath string, filesAdd int, sizeAdd int64, reset bool, dbHandle *sql.DB) error {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getUpdateFolderQuotaQuery(reset)
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, sizeAdd, filesAdd, utils.GetTimeAsMsSinceEpoch(time.Now()), mappedPath)
if err == nil {
providerLog(logger.LevelDebug, "quota updated for folder %#v, files increment: %v size increment: %v is reset? %v",
mappedPath, filesAdd, sizeAdd, reset)
} else {
providerLog(logger.LevelWarn, "error updating quota for folder %#v: %v", mappedPath, err)
}
return err
}
func sqlCommonGetFolderUsedQuota(mappedPath string, dbHandle *sql.DB) (int, int64, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getQuotaFolderQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return 0, 0, err
}
defer stmt.Close()
var usedFiles int
var usedSize int64
err = stmt.QueryRowContext(ctx, mappedPath).Scan(&usedSize, &usedFiles)
if err != nil {
providerLog(logger.LevelWarn, "error getting quota for folder: %v, error: %v", mappedPath, err)
return 0, 0, err
}
return usedFiles, usedSize, err
}
func sqlCommonRollbackTransaction(tx *sql.Tx) {
err := tx.Rollback()
if err != nil {
providerLog(logger.LevelWarn, "error rolling back transaction: %v", err)
}
}
func sqlCommonGetDatabaseVersion(dbHandle *sql.DB, showInitWarn bool) (schemaVersion, error) {
var result schemaVersion
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getDatabaseVersionQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
if showInitWarn && strings.Contains(err.Error(), sqlTableSchemaVersion) {
logger.WarnToConsole("database query error, did you forgot to run the \"initprovider\" command?")
}
return result, err
}
defer stmt.Close()
row := stmt.QueryRowContext(ctx)
err = row.Scan(&result.Version)
return result, err
}
func sqlCommonUpdateDatabaseVersion(ctx context.Context, dbHandle sqlQuerier, version int) error {
q := getUpdateDBVersionQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, version)
return err
}
func sqlCommonExecSQLAndUpdateDBVersion(dbHandle *sql.DB, sql []string, newVersion int) error {
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
tx, err := dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
for _, q := range sql {
if len(strings.TrimSpace(q)) == 0 {
continue
}
_, err = tx.ExecContext(ctx, q)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
}
err = sqlCommonUpdateDatabaseVersion(ctx, tx, newVersion)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
return tx.Commit()
}
func sqlCommonGetCompatVirtualFolders(dbHandle *sql.DB) ([]userCompactVFolders, error) {
users := []userCompactVFolders{}
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
q := getCompatVirtualFoldersQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return nil, err
}
defer stmt.Close()
rows, err := stmt.QueryContext(ctx)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var user userCompactVFolders
var virtualFolders sql.NullString
err = rows.Scan(&user.ID, &user.Username, &virtualFolders)
if err != nil {
return nil, err
}
if virtualFolders.Valid {
var list []virtualFoldersCompact
err = json.Unmarshal([]byte(virtualFolders.String), &list)
if err == nil && len(list) > 0 {
user.VirtualFolders = list
users = append(users, user)
}
}
}
return users, rows.Err()
}
func sqlCommonRestoreCompatVirtualFolders(ctx context.Context, users []userCompactVFolders, dbHandle sqlQuerier) ([]string, error) {
foldersToScan := []string{}
for _, user := range users {
for _, vfolder := range user.VirtualFolders {
providerLog(logger.LevelInfo, "restoring virtual folder: %+v for user %#v", vfolder, user.Username)
// -1 means included in user quota, 0 means unlimited
quotaSize := int64(-1)
quotaFiles := -1
if vfolder.ExcludeFromQuota {
quotaFiles = 0
quotaSize = 0
}
b, err := sqlCommonAddOrGetFolder(ctx, vfolder.MappedPath, 0, 0, 0, dbHandle)
if err != nil {
providerLog(logger.LevelWarn, "error restoring virtual folder for user %#v: %v", user.Username, err)
return foldersToScan, err
}
u := User{
ID: user.ID,
Username: user.Username,
}
f := vfs.VirtualFolder{
BaseVirtualFolder: b,
VirtualPath: vfolder.VirtualPath,
QuotaSize: quotaSize,
QuotaFiles: quotaFiles,
}
err = sqlCommonAddFolderMapping(ctx, u, f, dbHandle)
if err != nil {
providerLog(logger.LevelWarn, "error adding virtual folder mapping for user %#v: %v", user.Username, err)
return foldersToScan, err
}
if !utils.IsStringInSlice(vfolder.MappedPath, foldersToScan) {
foldersToScan = append(foldersToScan, vfolder.MappedPath)
}
providerLog(logger.LevelInfo, "virtual folder: %+v for user %#v successfully restored", vfolder, user.Username)
}
}
return foldersToScan, nil
}
func sqlCommonUpdateDatabaseFrom3To4(sqlV4 string, dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 3 -> 4")
providerLog(logger.LevelInfo, "updating database version: 3 -> 4")
users, err := sqlCommonGetCompatVirtualFolders(dbHandle)
if err != nil {
return err
}
sql := strings.ReplaceAll(sqlV4, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
tx, err := dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
for _, q := range strings.Split(sql, ";") {
if len(strings.TrimSpace(q)) == 0 {
continue
}
_, err = tx.ExecContext(ctx, q)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
}
foldersToScan, err := sqlCommonRestoreCompatVirtualFolders(ctx, users, tx)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
err = sqlCommonUpdateDatabaseVersion(ctx, tx, 4)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
err = tx.Commit()
if err == nil {
go updateVFoldersQuotaAfterRestore(foldersToScan)
}
return err
}

@@ -1,273 +0,0 @@
// +build !nosqlite
package dataprovider
import (
"context"
"database/sql"
"fmt"
"path/filepath"
"strings"
// we import go-sqlite3 here to be able to disable SQLite support using a build tag
_ "github.com/mattn/go-sqlite3"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
)
const (
sqliteUsersTableSQL = `CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255)
NOT NULL UNIQUE, "password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
sqliteSchemaTableSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);`
sqliteV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
sqliteV3SQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL,
"status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL, "virtual_folders" text NULL);
INSERT INTO "new__users" ("id", "username", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem", "virtual_folders", "password") SELECT "id", "username", "public_keys", "home_dir",
"uid", "gid", "max_sessions", "quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update",
"upload_bandwidth", "download_bandwidth", "expiration_date", "last_login", "status", "filters", "filesystem", "virtual_folders",
"password" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";`
sqliteV4SQL = `CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "path" varchar(512) NOT NULL UNIQUE,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id")
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, "user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CONSTRAINT "unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE, "password" text NULL,
"public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL);
INSERT INTO "new__users" ("id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem") SELECT "id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth",
"expiration_date", "last_login", "status", "filters", "filesystem" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
)
// SQLiteProvider auth provider for SQLite database
type SQLiteProvider struct {
dbHandle *sql.DB
}
func init() {
version.AddFeature("+sqlite")
}
func initializeSQLiteProvider(basePath string) error {
var err error
var connectionString string
logSender = fmt.Sprintf("dataprovider_%v", SQLiteDataProviderName)
if len(config.ConnectionString) == 0 {
dbPath := config.Name
if !utils.IsFileInputValid(dbPath) {
return fmt.Errorf("Invalid database path: %#v", dbPath)
}
if !filepath.IsAbs(dbPath) {
dbPath = filepath.Join(basePath, dbPath)
}
connectionString = fmt.Sprintf("file:%v?cache=shared&_foreign_keys=1", dbPath)
} else {
connectionString = config.ConnectionString
}
dbHandle, err := sql.Open("sqlite3", connectionString)
if err == nil {
providerLog(logger.LevelDebug, "sqlite database handle created, connection string: %#v", connectionString)
dbHandle.SetMaxOpenConns(1)
provider = SQLiteProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating sqlite database handler, connection string: %#v, error: %v",
connectionString, err)
}
return err
}
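// Illustrative note, not part of the original file: with the default configuration
// (config.Name set to "sftpgo.db" and no explicit connection string), a basePath of
// "/var/lib/sftpgo" yields the connection string
// "file:/var/lib/sftpgo/sftpgo.db?cache=shared&_foreign_keys=1".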
func (p SQLiteProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
func (p SQLiteProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
func (p SQLiteProvider) getUserByID(ID int64) (User, error) {
return sqlCommonGetUserByID(ID, p.dbHandle)
}
func (p SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p SQLiteProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p SQLiteProvider) userExists(username string) (User, error) {
return sqlCommonCheckUserExists(username, p.dbHandle)
}
func (p SQLiteProvider) addUser(user User) error {
return sqlCommonAddUser(user, p.dbHandle)
}
func (p SQLiteProvider) updateUser(user User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p SQLiteProvider) deleteUser(user User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p SQLiteProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p SQLiteProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
}
func (p SQLiteProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p SQLiteProvider) getFolders(limit, offset int, order, folderPath string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, folderPath, p.dbHandle)
}
func (p SQLiteProvider) getFolderByPath(mappedPath string) (vfs.BaseVirtualFolder, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return sqlCommonCheckFolderExists(ctx, mappedPath, p.dbHandle)
}
func (p SQLiteProvider) addFolder(folder vfs.BaseVirtualFolder) error {
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p SQLiteProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
func (p SQLiteProvider) updateFolderQuota(mappedPath string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(mappedPath, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p SQLiteProvider) getUsedFolderQuota(mappedPath string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(mappedPath, p.dbHandle)
}
func (p SQLiteProvider) close() error {
return p.dbHandle.Close()
}
func (p SQLiteProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p SQLiteProvider) initializeDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
sqlUsers := strings.Replace(sqliteUsersTableSQL, "{{users}}", sqlTableUsers, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
_, err = tx.Exec(strings.Replace(sqliteSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
_, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
sqlCommonRollbackTransaction(tx)
return err
}
return tx.Commit()
}
func (p SQLiteProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
return ErrNoInitRequired
}
switch dbVersion.Version {
case 1:
err = updateSQLiteDatabaseFrom1To2(p.dbHandle)
if err != nil {
return err
}
err = updateSQLiteDatabaseFrom2To3(p.dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFrom3To4(p.dbHandle)
case 2:
err = updateSQLiteDatabaseFrom2To3(p.dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFrom3To4(p.dbHandle)
case 3:
return updateSQLiteDatabaseFrom3To4(p.dbHandle)
default:
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
func updateSQLiteDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(sqliteV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func updateSQLiteDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.ReplaceAll(sqliteV3SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}
func updateSQLiteDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(sqliteV4SQL, dbHandle)
}

View file

@@ -1,17 +0,0 @@
// +build nosqlite
package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/version"
)
func init() {
version.AddFeature("-sqlite")
}
func initializeSQLiteProvider(basePath string) error {
return errors.New("SQLite disabled at build time")
}

View file

@@ -1,186 +0,0 @@
package dataprovider
import (
"fmt"
"strconv"
"strings"
"github.com/drakkan/sftpgo/vfs"
)
const (
selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,used_quota_size," +
"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem"
selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update"
)
func getSQLPlaceholders() []string {
var placeholders []string
for i := 1; i <= 20; i++ {
if config.Driver == PGSQLDataProviderName {
placeholders = append(placeholders, fmt.Sprintf("$%v", i))
} else {
placeholders = append(placeholders, "?")
}
}
return placeholders
}
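// Illustrative note, not part of the original file: with the PostgreSQL driver this
// returns ["$1", "$2", ..., "$20"], while for any other driver it returns twenty "?"
// placeholders, so the query builders below can index placeholders uniformly
// regardless of the configured provider.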
func getUserByUsernameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
func getUserByIDQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE id = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
func getUsersQuery(order string, username string) string {
if len(username) > 0 {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v ORDER BY username %v LIMIT %v OFFSET %v`,
selectUserFields, sqlTableUsers, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
}
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectUserFields, sqlTableUsers,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDumpUsersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectUserFields, sqlTableUsers)
}
func getDumpFoldersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectFolderFields, sqlTableFolders)
}
func getUpdateQuotaQuery(reset bool) string {
if reset {
return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getUpdateLastLoginQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getQuotaQuery() string {
return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE username = %v`, sqlTableUsers,
sqlPlaceholders[0])
}
func getAddUserQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,
used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,status,last_login,expiration_date,filters,
filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13],
sqlPlaceholders[14], sqlPlaceholders[15])
}
func getUpdateUserQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,public_keys=%v,home_dir=%v,uid=%v,gid=%v,max_sessions=%v,quota_size=%v,
quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v
WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15])
}
func getDeleteUserQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0])
}
func getFolderByPathQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE path = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}
func getAddFolderQuery() string {
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update) VALUES (%v,%v,%v,%v)`,
sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getDeleteFolderQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableFolders, sqlPlaceholders[0])
}
func getClearFolderMappingQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE user_id = (SELECT id FROM %v WHERE username = %v)`, sqlTableFoldersMapping,
sqlTableUsers, sqlPlaceholders[0])
}
func getAddFolderMappingQuery() string {
return fmt.Sprintf(`INSERT INTO %v (virtual_path,quota_size,quota_files,folder_id,user_id)
VALUES (%v,%v,%v,%v,(SELECT id FROM %v WHERE username = %v))`, sqlTableFoldersMapping, sqlPlaceholders[0],
sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
}
func getFoldersQuery(order, folderPath string) string {
if len(folderPath) > 0 {
return fmt.Sprintf(`SELECT %v FROM %v WHERE path = %v ORDER BY path %v LIMIT %v OFFSET %v`,
selectFolderFields, sqlTableFolders, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
}
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY path %v LIMIT %v OFFSET %v`, selectFolderFields, sqlTableFolders,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateFolderQuotaQuery(reset bool) string {
if reset {
return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
WHERE path = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
WHERE path = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getQuotaFolderQuery() string {
return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE path = %v`, sqlTableFolders,
sqlPlaceholders[0])
}
func getRelatedFoldersForUsersQuery(users []User) string {
var sb strings.Builder
for _, u := range users {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(u.ID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT f.id,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,fm.quota_size,fm.quota_files,fm.user_id
FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders,
sqlTableFoldersMapping, sb.String())
}
func getRelatedUsersForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
var sb strings.Builder
for _, f := range folders {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(f.ID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT fm.folder_id,u.username FROM %v fm INNER JOIN %v u ON fm.user_id = u.id
WHERE fm.folder_id IN %v ORDER BY fm.folder_id`, sqlTableFoldersMapping, sqlTableUsers, sb.String())
}
func getDatabaseVersionQuery() string {
return fmt.Sprintf("SELECT version from %v LIMIT 1", sqlTableSchemaVersion)
}
func getUpdateDBVersionQuery() string {
return fmt.Sprintf(`UPDATE %v SET version=%v`, sqlTableSchemaVersion, sqlPlaceholders[0])
}
func getCompatVirtualFoldersQuery() string {
return fmt.Sprintf(`SELECT id,username,virtual_folders FROM %v`, sqlTableUsers)
}

View file

@@ -1,807 +0,0 @@
package dataprovider
import (
"encoding/json"
"errors"
"fmt"
"net"
"os"
"path"
"path/filepath"
"strconv"
"strings"
"time"
"golang.org/x/net/webdav"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
// Available permissions for SFTP users
const (
// All permissions are granted
PermAny = "*"
// List items such as files and directories is allowed
PermListItems = "list"
// download files is allowed
PermDownload = "download"
// upload files is allowed
PermUpload = "upload"
// overwrite an existing file, while uploading, is allowed
// upload permission is required to allow file overwrite
PermOverwrite = "overwrite"
// delete files or directories is allowed
PermDelete = "delete"
// rename files or directories is allowed
PermRename = "rename"
// create directories is allowed
PermCreateDirs = "create_dirs"
// create symbolic links is allowed
PermCreateSymlinks = "create_symlinks"
// changing file or directory permissions is allowed
PermChmod = "chmod"
// changing file or directory owner and group is allowed
PermChown = "chown"
// changing file or directory access and modification time is allowed
PermChtimes = "chtimes"
)
// Available login methods
const (
LoginMethodNoAuthTryed = "no_auth_tryed"
LoginMethodPassword = "password"
SSHLoginMethodPublicKey = "publickey"
SSHLoginMethodKeyboardInteractive = "keyboard-interactive"
SSHLoginMethodKeyAndPassword = "publickey+password"
SSHLoginMethodKeyAndKeyboardInt = "publickey+keyboard-interactive"
)
var (
errNoMatchingVirtualFolder = errors.New("no matching virtual folder found")
)
// CachedUser adds fields useful for caching to a SFTPGo user
type CachedUser struct {
User User
Expiration time.Time
Password string
LockSystem webdav.LockSystem
}
// IsExpired returns true if the cached user is expired
func (c *CachedUser) IsExpired() bool {
if c.Expiration.IsZero() {
return false
}
return c.Expiration.Before(time.Now())
}
// ExtensionsFilter defines filters based on file extensions.
// These restrictions do not apply to file listing for performance reasons, so
// a denied file cannot be downloaded/overwritten/renamed but it will still be
// listed in the list of files.
// System commands such as Git and rsync interact with the filesystem directly
// and are not aware of these restrictions, so they are not allowed
// inside paths with extension filters
type ExtensionsFilter struct {
// SFTP/SCP path, if no other specific filter is defined, the filter applies to
// sub directories too.
// For example if filters are defined for the paths "/" and "/sub" then the
// filters for "/" are applied for any file outside the "/sub" directory
Path string `json:"path"`
// only files with these, case insensitive, extensions are allowed.
// Shell like expansion is not supported so you have to specify ".jpg" and
// not "*.jpg"
AllowedExtensions []string `json:"allowed_extensions,omitempty"`
// files with these, case insensitive, extensions are not allowed.
// Denied file extensions are evaluated before the allowed ones
DeniedExtensions []string `json:"denied_extensions,omitempty"`
}
// UserFilters defines additional restrictions for a user
type UserFilters struct {
// only clients connecting from these IP/Mask are allowed.
// IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
// for example "192.0.2.0/24" or "2001:db8::/32"
AllowedIP []string `json:"allowed_ip,omitempty"`
// clients connecting from these IP/Mask are not allowed.
// Denied rules will be evaluated before allowed ones
DeniedIP []string `json:"denied_ip,omitempty"`
// these login methods are not allowed.
// If null or empty any available login method is allowed
DeniedLoginMethods []string `json:"denied_login_methods,omitempty"`
// these protocols are not allowed.
// If null or empty any available protocol is allowed
DeniedProtocols []string `json:"denied_protocols,omitempty"`
// filters based on file extensions.
// Please note that these restrictions can be easily bypassed.
FileExtensions []ExtensionsFilter `json:"file_extensions,omitempty"`
// max size allowed for a single upload, 0 means unlimited
MaxUploadFileSize int64 `json:"max_upload_file_size,omitempty"`
}
// FilesystemProvider defines the supported storages
type FilesystemProvider int
// supported values for FilesystemProvider
const (
LocalFilesystemProvider FilesystemProvider = iota // Local
S3FilesystemProvider // AWS S3 compatible
GCSFilesystemProvider // Google Cloud Storage
AzureBlobFilesystemProvider // Azure Blob Storage
)
// Filesystem defines cloud storage filesystem details
type Filesystem struct {
Provider FilesystemProvider `json:"provider"`
S3Config vfs.S3FsConfig `json:"s3config,omitempty"`
GCSConfig vfs.GCSFsConfig `json:"gcsconfig,omitempty"`
AzBlobConfig vfs.AzBlobFsConfig `json:"azblobconfig,omitempty"`
}
// User defines a SFTPGo user
type User struct {
// Database unique identifier
ID int64 `json:"id"`
// 1 enabled, 0 disabled (login is not allowed)
Status int `json:"status"`
// Username
Username string `json:"username"`
// Account expiration date as unix timestamp in milliseconds. An expired account cannot login.
// 0 means no expiration
ExpirationDate int64 `json:"expiration_date"`
// Password used for password authentication.
// For users created using the SFTPGo REST API the password will be stored using the argon2id hashing algo.
// Checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt is supported too.
Password string `json:"password,omitempty"`
// PublicKeys used for public key authentication. At least one of password or public key is mandatory
PublicKeys []string `json:"public_keys,omitempty"`
// The user cannot upload or download files outside this directory. Must be an absolute path
HomeDir string `json:"home_dir"`
// Mapping between virtual paths and filesystem paths outside the home directory.
// Supported for local filesystem only
VirtualFolders []vfs.VirtualFolder `json:"virtual_folders,omitempty"`
// If sftpgo runs as root system user then the created files and directories will be assigned to this system UID
UID int `json:"uid"`
// If sftpgo runs as root system user then the created files and directories will be assigned to this system GID
GID int `json:"gid"`
// Maximum concurrent sessions. 0 means unlimited
MaxSessions int `json:"max_sessions"`
// Maximum size allowed as bytes. 0 means unlimited
QuotaSize int64 `json:"quota_size"`
// Maximum number of files allowed. 0 means unlimited
QuotaFiles int `json:"quota_files"`
// List of the granted permissions
Permissions map[string][]string `json:"permissions"`
// Used quota as bytes
UsedQuotaSize int64 `json:"used_quota_size"`
// Used quota as number of files
UsedQuotaFiles int `json:"used_quota_files"`
// Last quota update as unix timestamp in milliseconds
LastQuotaUpdate int64 `json:"last_quota_update"`
// Maximum upload bandwidth as KB/s, 0 means unlimited
UploadBandwidth int64 `json:"upload_bandwidth"`
// Maximum download bandwidth as KB/s, 0 means unlimited
DownloadBandwidth int64 `json:"download_bandwidth"`
// Last login as unix timestamp in milliseconds
LastLogin int64 `json:"last_login"`
// Additional restrictions
Filters UserFilters `json:"filters"`
// Filesystem configuration details
FsConfig Filesystem `json:"filesystem"`
}
// GetFilesystem returns the filesystem for this user
func (u *User) GetFilesystem(connectionID string) (vfs.Fs, error) {
if u.FsConfig.Provider == S3FilesystemProvider {
return vfs.NewS3Fs(connectionID, u.GetHomeDir(), u.FsConfig.S3Config)
} else if u.FsConfig.Provider == GCSFilesystemProvider {
config := u.FsConfig.GCSConfig
config.CredentialFile = u.getGCSCredentialsFilePath()
return vfs.NewGCSFs(connectionID, u.GetHomeDir(), config)
} else if u.FsConfig.Provider == AzureBlobFilesystemProvider {
return vfs.NewAzBlobFs(connectionID, u.GetHomeDir(), u.FsConfig.AzBlobConfig)
}
return vfs.NewOsFs(connectionID, u.GetHomeDir(), u.VirtualFolders), nil
}
// GetPermissionsForPath returns the permissions for the given path.
// The path must be an SFTP path
func (u *User) GetPermissionsForPath(p string) []string {
permissions := []string{}
if perms, ok := u.Permissions["/"]; ok {
// if only root permissions are defined returns them unconditionally
if len(u.Permissions) == 1 {
return perms
}
// fallback permissions
permissions = perms
}
dirsForPath := utils.GetDirsForSFTPPath(p)
// dirsForPath contains all the dirs for a given path in reverse order
// for example if the path is: /1/2/3/4 it contains:
// [ "/1/2/3/4", "/1/2/3", "/1/2", "/1", "/" ]
// so the first match is the one we are interested in
for _, val := range dirsForPath {
if perms, ok := u.Permissions[val]; ok {
permissions = perms
break
}
}
return permissions
}
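// Worked example, not part of the original file: with
// Permissions = {"/": ["list"], "/docs": ["list", "download"]},
// GetPermissionsForPath("/docs/readme.txt") returns ["list", "download"] because
// "/docs" is the deepest configured directory containing the path, while
// GetPermissionsForPath("/other/file.txt") falls back to the root permissions ["list"].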
// GetVirtualFolderForPath returns the virtual folder containing the specified sftp path.
// If the path is not inside a virtual folder an error is returned
func (u *User) GetVirtualFolderForPath(sftpPath string) (vfs.VirtualFolder, error) {
var folder vfs.VirtualFolder
if len(u.VirtualFolders) == 0 || u.FsConfig.Provider != LocalFilesystemProvider {
return folder, errNoMatchingVirtualFolder
}
dirsForPath := utils.GetDirsForSFTPPath(sftpPath)
for _, val := range dirsForPath {
for _, v := range u.VirtualFolders {
if v.VirtualPath == val {
return v, nil
}
}
}
return folder, errNoMatchingVirtualFolder
}
// AddVirtualDirs adds virtual folders, if defined, to the given files list
func (u *User) AddVirtualDirs(list []os.FileInfo, sftpPath string) []os.FileInfo {
if len(u.VirtualFolders) == 0 {
return list
}
for _, v := range u.VirtualFolders {
if path.Dir(v.VirtualPath) == sftpPath {
fi := vfs.NewFileInfo(v.VirtualPath, true, 0, time.Now(), false)
found := false
for index, f := range list {
if f.Name() == fi.Name() {
list[index] = fi
found = true
break
}
}
if !found {
list = append(list, fi)
}
}
}
return list
}
// IsMappedPath returns true if the specified filesystem path has a virtual folder mapping.
// The filesystem path must be cleaned before calling this method
func (u *User) IsMappedPath(fsPath string) bool {
for _, v := range u.VirtualFolders {
if fsPath == v.MappedPath {
return true
}
}
return false
}
// IsVirtualFolder returns true if the specified sftp path is a virtual folder
func (u *User) IsVirtualFolder(sftpPath string) bool {
for _, v := range u.VirtualFolders {
if sftpPath == v.VirtualPath {
return true
}
}
return false
}
// HasVirtualFoldersInside returns true if there are virtual folders inside the
// specified SFTP path. We assume that paths are cleaned
func (u *User) HasVirtualFoldersInside(sftpPath string) bool {
if sftpPath == "/" && len(u.VirtualFolders) > 0 {
return true
}
for _, v := range u.VirtualFolders {
if len(v.VirtualPath) > len(sftpPath) {
if strings.HasPrefix(v.VirtualPath, sftpPath+"/") {
return true
}
}
}
return false
}
// HasPermissionsInside returns true if the specified sftpPath has permissions defined for
// itself or for any of its subdirectories
func (u *User) HasPermissionsInside(sftpPath string) bool {
for dir := range u.Permissions {
if dir == sftpPath {
return true
} else if len(dir) > len(sftpPath) {
if strings.HasPrefix(dir, sftpPath+"/") {
return true
}
}
}
return false
}
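// Worked example, not part of the original file: with permissions defined for
// "/" and "/folder/sub", HasPermissionsInside("/folder") returns true because
// "/folder/sub" lies inside "/folder", while HasPermissionsInside("/other") returns false.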
// HasOverlappedMappedPaths returns true if this user has virtual folders with overlapped mapped paths
func (u *User) HasOverlappedMappedPaths() bool {
if len(u.VirtualFolders) <= 1 {
return false
}
for _, v1 := range u.VirtualFolders {
for _, v2 := range u.VirtualFolders {
if v1.VirtualPath == v2.VirtualPath {
continue
}
if isMappedDirOverlapped(v1.MappedPath, v2.MappedPath) {
return true
}
}
}
return false
}
// HasPerm returns true if the user has the given permission or any permission
func (u *User) HasPerm(permission, path string) bool {
perms := u.GetPermissionsForPath(path)
if utils.IsStringInSlice(PermAny, perms) {
return true
}
return utils.IsStringInSlice(permission, perms)
}
// HasPerms returns true if the user has all the given permissions
func (u *User) HasPerms(permissions []string, path string) bool {
perms := u.GetPermissionsForPath(path)
if utils.IsStringInSlice(PermAny, perms) {
return true
}
for _, permission := range permissions {
if !utils.IsStringInSlice(permission, perms) {
return false
}
}
return true
}
// HasNoQuotaRestrictions returns true if no quota restrictions need to be applied
func (u *User) HasNoQuotaRestrictions(checkFiles bool) bool {
if u.QuotaSize == 0 && (!checkFiles || u.QuotaFiles == 0) {
return true
}
return false
}
// IsLoginMethodAllowed returns true if the specified login method is allowed
func (u *User) IsLoginMethodAllowed(loginMethod string, partialSuccessMethods []string) bool {
if len(u.Filters.DeniedLoginMethods) == 0 {
return true
}
if len(partialSuccessMethods) == 1 {
for _, method := range u.GetNextAuthMethods(partialSuccessMethods, true) {
if method == loginMethod {
return true
}
}
}
if utils.IsStringInSlice(loginMethod, u.Filters.DeniedLoginMethods) {
return false
}
return true
}
// GetNextAuthMethods returns the list of authentication methods that
// can continue for multi-step authentication
func (u *User) GetNextAuthMethods(partialSuccessMethods []string, isPasswordAuthEnabled bool) []string {
var methods []string
if len(partialSuccessMethods) != 1 {
return methods
}
if partialSuccessMethods[0] != SSHLoginMethodPublicKey {
return methods
}
for _, method := range u.GetAllowedLoginMethods() {
if method == SSHLoginMethodKeyAndPassword && isPasswordAuthEnabled {
methods = append(methods, LoginMethodPassword)
}
if method == SSHLoginMethodKeyAndKeyboardInt {
methods = append(methods, SSHLoginMethodKeyboardInteractive)
}
}
return methods
}
// IsPartialAuth returns true if the specified login method is a step for
// a multi-step Authentication.
// We support publickey+password and publickey+keyboard-interactive, so
// only publickey can return partial success.
// We can have partial success if only multi-step Auth methods are enabled
func (u *User) IsPartialAuth(loginMethod string) bool {
if loginMethod != SSHLoginMethodPublicKey {
return false
}
for _, method := range u.GetAllowedLoginMethods() {
if !utils.IsStringInSlice(method, SSHMultiStepsLoginMethods) {
return false
}
}
return true
}
// GetAllowedLoginMethods returns the allowed login methods
func (u *User) GetAllowedLoginMethods() []string {
var allowedMethods []string
for _, method := range ValidSSHLoginMethods {
if !utils.IsStringInSlice(method, u.Filters.DeniedLoginMethods) {
allowedMethods = append(allowedMethods, method)
}
}
return allowedMethods
}
// IsFileAllowed returns true if the specified file is allowed by the file restrictions filters
func (u *User) IsFileAllowed(sftpPath string) bool {
if len(u.Filters.FileExtensions) == 0 {
return true
}
dirsForPath := utils.GetDirsForSFTPPath(path.Dir(sftpPath))
var filter ExtensionsFilter
for _, dir := range dirsForPath {
for _, f := range u.Filters.FileExtensions {
if f.Path == dir {
filter = f
break
}
}
if len(filter.Path) > 0 {
break
}
}
if len(filter.Path) > 0 {
toMatch := strings.ToLower(sftpPath)
for _, denied := range filter.DeniedExtensions {
if strings.HasSuffix(toMatch, denied) {
return false
}
}
for _, allowed := range filter.AllowedExtensions {
if strings.HasSuffix(toMatch, allowed) {
return true
}
}
return len(filter.AllowedExtensions) == 0
}
return true
}
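// Worked example, not part of the original file: with a filter for Path "/photos"
// having AllowedExtensions [".jpg"] and DeniedExtensions [".exe"],
// IsFileAllowed("/photos/img.jpg") returns true, IsFileAllowed("/photos/tool.exe")
// returns false (denied extensions are evaluated first), and
// IsFileAllowed("/photos/img.png") returns false because an allowed list is defined
// and ".png" does not match it.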
// IsLoginFromAddrAllowed returns true if the login is allowed from the specified remoteAddr.
// If AllowedIP is defined only the specified IP/Mask can login.
// If DeniedIP is defined the specified IP/Mask cannot login.
// If an IP is both allowed and denied then login will be denied
func (u *User) IsLoginFromAddrAllowed(remoteAddr string) bool {
if len(u.Filters.AllowedIP) == 0 && len(u.Filters.DeniedIP) == 0 {
return true
}
remoteIP := net.ParseIP(utils.GetIPFromRemoteAddress(remoteAddr))
// if remoteIP is invalid we allow login, this should never happen
if remoteIP == nil {
logger.Warn(logSender, "", "login allowed for invalid IP. remote address: %#v", remoteAddr)
return true
}
for _, IPMask := range u.Filters.DeniedIP {
_, IPNet, err := net.ParseCIDR(IPMask)
if err != nil {
return false
}
if IPNet.Contains(remoteIP) {
return false
}
}
for _, IPMask := range u.Filters.AllowedIP {
_, IPNet, err := net.ParseCIDR(IPMask)
if err != nil {
return false
}
if IPNet.Contains(remoteIP) {
return true
}
}
return len(u.Filters.AllowedIP) == 0
}
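// Worked example, not part of the original file: with DeniedIP ["192.0.2.0/24"] and an
// empty AllowedIP list, a login from "192.0.2.10:2022" is rejected while a login from
// "198.51.100.7:2022" is accepted; if AllowedIP were ["203.0.113.0/24"] instead, only
// addresses inside that network could log in.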
// GetPermissionsAsJSON returns the permissions as json byte array
func (u *User) GetPermissionsAsJSON() ([]byte, error) {
return json.Marshal(u.Permissions)
}
// GetPublicKeysAsJSON returns the public keys as json byte array
func (u *User) GetPublicKeysAsJSON() ([]byte, error) {
return json.Marshal(u.PublicKeys)
}
// GetFiltersAsJSON returns the filters as json byte array
func (u *User) GetFiltersAsJSON() ([]byte, error) {
return json.Marshal(u.Filters)
}
// GetFsConfigAsJSON returns the filesystem config as json byte array
func (u *User) GetFsConfigAsJSON() ([]byte, error) {
return json.Marshal(u.FsConfig)
}
// GetUID returns a validated uid, suitable for use with os.Chown
func (u *User) GetUID() int {
if u.UID <= 0 || u.UID > 65535 {
return -1
}
return u.UID
}
// GetGID returns a validated gid, suitable for use with os.Chown
func (u *User) GetGID() int {
if u.GID <= 0 || u.GID > 65535 {
return -1
}
return u.GID
}
// GetHomeDir returns the shortest path name equivalent to the user's home directory
func (u *User) GetHomeDir() string {
return filepath.Clean(u.HomeDir)
}
// HasQuotaRestrictions returns true if there is a quota restriction on number of files or size or both
func (u *User) HasQuotaRestrictions() bool {
return u.QuotaFiles > 0 || u.QuotaSize > 0
}
// GetQuotaSummary returns used quota and limits if defined
func (u *User) GetQuotaSummary() string {
var result string
result = "Files: " + strconv.Itoa(u.UsedQuotaFiles)
if u.QuotaFiles > 0 {
result += "/" + strconv.Itoa(u.QuotaFiles)
}
if u.UsedQuotaSize > 0 || u.QuotaSize > 0 {
result += ". Size: " + utils.ByteCountSI(u.UsedQuotaSize)
if u.QuotaSize > 0 {
result += "/" + utils.ByteCountSI(u.QuotaSize)
}
}
return result
}
// GetPermissionsAsString returns the user's permissions as comma separated string
func (u *User) GetPermissionsAsString() string {
result := ""
for dir, perms := range u.Permissions {
var dirPerms string
for _, p := range perms {
if len(dirPerms) > 0 {
dirPerms += ", "
}
dirPerms += p
}
dp := fmt.Sprintf("%#v: %#v", dir, dirPerms)
if dir == "/" {
if len(result) > 0 {
result = dp + ", " + result
} else {
result = dp
}
} else {
if len(result) > 0 {
result += ", "
}
result += dp
}
}
return result
}
// GetBandwidthAsString returns bandwidth limits if defined
func (u *User) GetBandwidthAsString() string {
result := "Download: "
if u.DownloadBandwidth > 0 {
result += utils.ByteCountSI(u.DownloadBandwidth*1000) + "/s."
} else {
result += "unlimited."
}
result += " Upload: "
if u.UploadBandwidth > 0 {
result += utils.ByteCountSI(u.UploadBandwidth*1000) + "/s."
} else {
result += "unlimited."
}
return result
}
// GetInfoString returns user's info as string.
// Storage provider, number of public keys, max sessions, uid,
// gid, denied and allowed IP/Mask are returned
func (u *User) GetInfoString() string {
var result string
if u.LastLogin > 0 {
t := utils.GetTimeFromMsecSinceEpoch(u.LastLogin)
result += fmt.Sprintf("Last login: %v ", t.Format("2006-01-02 15:04:05")) // YYYY-MM-DD HH:MM:SS
}
if u.FsConfig.Provider == S3FilesystemProvider {
result += "Storage: S3 "
} else if u.FsConfig.Provider == GCSFilesystemProvider {
result += "Storage: GCS "
} else if u.FsConfig.Provider == AzureBlobFilesystemProvider {
result += "Storage: Azure "
}
if len(u.PublicKeys) > 0 {
result += fmt.Sprintf("Public keys: %v ", len(u.PublicKeys))
}
if u.MaxSessions > 0 {
result += fmt.Sprintf("Max sessions: %v ", u.MaxSessions)
}
if u.UID > 0 {
result += fmt.Sprintf("UID: %v ", u.UID)
}
if u.GID > 0 {
result += fmt.Sprintf("GID: %v ", u.GID)
}
if len(u.Filters.DeniedIP) > 0 {
result += fmt.Sprintf("Denied IP/Mask: %v ", len(u.Filters.DeniedIP))
}
if len(u.Filters.AllowedIP) > 0 {
result += fmt.Sprintf("Allowed IP/Mask: %v ", len(u.Filters.AllowedIP))
}
return result
}
// GetExpirationDateAsString returns expiration date formatted as YYYY-MM-DD
func (u *User) GetExpirationDateAsString() string {
if u.ExpirationDate > 0 {
t := utils.GetTimeFromMsecSinceEpoch(u.ExpirationDate)
return t.Format("2006-01-02")
}
return ""
}
// GetAllowedIPAsString returns the allowed IP as comma separated string
func (u User) GetAllowedIPAsString() string {
result := ""
for _, IPMask := range u.Filters.AllowedIP {
if len(result) > 0 {
result += ","
}
result += IPMask
}
return result
}
// GetDeniedIPAsString returns the denied IP as comma separated string
func (u User) GetDeniedIPAsString() string {
result := ""
for _, IPMask := range u.Filters.DeniedIP {
if len(result) > 0 {
result += ","
}
result += IPMask
}
return result
}
func (u *User) getACopy() User {
pubKeys := make([]string, len(u.PublicKeys))
copy(pubKeys, u.PublicKeys)
virtualFolders := make([]vfs.VirtualFolder, len(u.VirtualFolders))
copy(virtualFolders, u.VirtualFolders)
permissions := make(map[string][]string)
for k, v := range u.Permissions {
perms := make([]string, len(v))
copy(perms, v)
permissions[k] = perms
}
filters := UserFilters{}
filters.MaxUploadFileSize = u.Filters.MaxUploadFileSize
filters.AllowedIP = make([]string, len(u.Filters.AllowedIP))
copy(filters.AllowedIP, u.Filters.AllowedIP)
filters.DeniedIP = make([]string, len(u.Filters.DeniedIP))
copy(filters.DeniedIP, u.Filters.DeniedIP)
filters.DeniedLoginMethods = make([]string, len(u.Filters.DeniedLoginMethods))
copy(filters.DeniedLoginMethods, u.Filters.DeniedLoginMethods)
filters.FileExtensions = make([]ExtensionsFilter, len(u.Filters.FileExtensions))
copy(filters.FileExtensions, u.Filters.FileExtensions)
filters.DeniedProtocols = make([]string, len(u.Filters.DeniedProtocols))
copy(filters.DeniedProtocols, u.Filters.DeniedProtocols)
fsConfig := Filesystem{
Provider: u.FsConfig.Provider,
S3Config: vfs.S3FsConfig{
Bucket: u.FsConfig.S3Config.Bucket,
Region: u.FsConfig.S3Config.Region,
AccessKey: u.FsConfig.S3Config.AccessKey,
AccessSecret: u.FsConfig.S3Config.AccessSecret,
Endpoint: u.FsConfig.S3Config.Endpoint,
StorageClass: u.FsConfig.S3Config.StorageClass,
KeyPrefix: u.FsConfig.S3Config.KeyPrefix,
UploadPartSize: u.FsConfig.S3Config.UploadPartSize,
UploadConcurrency: u.FsConfig.S3Config.UploadConcurrency,
},
GCSConfig: vfs.GCSFsConfig{
Bucket: u.FsConfig.GCSConfig.Bucket,
CredentialFile: u.FsConfig.GCSConfig.CredentialFile,
Credentials: u.FsConfig.GCSConfig.Credentials,
AutomaticCredentials: u.FsConfig.GCSConfig.AutomaticCredentials,
StorageClass: u.FsConfig.GCSConfig.StorageClass,
KeyPrefix: u.FsConfig.GCSConfig.KeyPrefix,
},
AzBlobConfig: vfs.AzBlobFsConfig{
Container: u.FsConfig.AzBlobConfig.Container,
AccountName: u.FsConfig.AzBlobConfig.AccountName,
AccountKey: u.FsConfig.AzBlobConfig.AccountKey,
Endpoint: u.FsConfig.AzBlobConfig.Endpoint,
SASURL: u.FsConfig.AzBlobConfig.SASURL,
KeyPrefix: u.FsConfig.AzBlobConfig.KeyPrefix,
UploadPartSize: u.FsConfig.AzBlobConfig.UploadPartSize,
UploadConcurrency: u.FsConfig.AzBlobConfig.UploadConcurrency,
UseEmulator: u.FsConfig.AzBlobConfig.UseEmulator,
},
}
return User{
ID: u.ID,
Username: u.Username,
Password: u.Password,
PublicKeys: pubKeys,
HomeDir: u.HomeDir,
VirtualFolders: virtualFolders,
UID: u.UID,
GID: u.GID,
MaxSessions: u.MaxSessions,
QuotaSize: u.QuotaSize,
QuotaFiles: u.QuotaFiles,
Permissions: permissions,
UsedQuotaSize: u.UsedQuotaSize,
UsedQuotaFiles: u.UsedQuotaFiles,
LastQuotaUpdate: u.LastQuotaUpdate,
UploadBandwidth: u.UploadBandwidth,
DownloadBandwidth: u.DownloadBandwidth,
Status: u.Status,
ExpirationDate: u.ExpirationDate,
LastLogin: u.LastLogin,
Filters: filters,
FsConfig: fsConfig,
}
}
func (u *User) getNotificationFieldsAsSlice(action string) []string {
return []string{action, u.Username,
strconv.FormatInt(u.ID, 10),
strconv.FormatInt(int64(u.Status), 10),
strconv.FormatInt(u.ExpirationDate, 10),
u.HomeDir,
strconv.FormatInt(int64(u.UID), 10),
strconv.FormatInt(int64(u.GID), 10),
}
}
func (u *User) getNotificationFieldsAsEnvVars(action string) []string {
return []string{fmt.Sprintf("SFTPGO_USER_ACTION=%v", action),
fmt.Sprintf("SFTPGO_USER_USERNAME=%v", u.Username),
fmt.Sprintf("SFTPGO_USER_PASSWORD=%v", u.Password),
fmt.Sprintf("SFTPGO_USER_ID=%v", u.ID),
fmt.Sprintf("SFTPGO_USER_STATUS=%v", u.Status),
fmt.Sprintf("SFTPGO_USER_EXPIRATION_DATE=%v", u.ExpirationDate),
fmt.Sprintf("SFTPGO_USER_HOME_DIR=%v", u.HomeDir),
fmt.Sprintf("SFTPGO_USER_UID=%v", u.UID),
fmt.Sprintf("SFTPGO_USER_GID=%v", u.GID),
fmt.Sprintf("SFTPGO_USER_QUOTA_FILES=%v", u.QuotaFiles),
fmt.Sprintf("SFTPGO_USER_QUOTA_SIZE=%v", u.QuotaSize),
fmt.Sprintf("SFTPGO_USER_UPLOAD_BANDWIDTH=%v", u.UploadBandwidth),
fmt.Sprintf("SFTPGO_USER_DOWNLOAD_BANDWIDTH=%v", u.DownloadBandwidth),
fmt.Sprintf("SFTPGO_USER_MAX_SESSIONS=%v", u.MaxSessions),
fmt.Sprintf("SFTPGO_USER_FS_PROVIDER=%v", u.FsConfig.Provider)}
}
func (u *User) getGCSCredentialsFilePath() string {
return filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json", u.Username))
}

View file

@@ -1,117 +0,0 @@
# Official Docker image
SFTPGo provides an official Docker image, available on both [Docker Hub](https://hub.docker.com/r/drakkan/sftpgo) and [GitHub Container Registry](https://github.com/users/drakkan/packages/container/package/sftpgo).
## Supported tags and respective Dockerfile links
- [v1.2.0, v1.2, v1, latest](https://github.com/drakkan/sftpgo/blob/v1.2.0/Dockerfile)
- [v1.2.0-alpine, v1.2-alpine, v1-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v1.2.0/Dockerfile.alpine)
- [edge](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
## How to use the SFTPGo image
### Start a `sftpgo` server instance
Starting a SFTPGo instance is simple:
```shell
docker run --name some-sftpgo -p 127.0.0.1:8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.
Now visit [http://localhost:8080/](http://localhost:8080/) and create a new SFTPGo user. The SFTP service is available on port 2022.
If you prefer GitHub Container Registry to Docker Hub replace `drakkan/sftpgo:tag` with `ghcr.io/drakkan/sftpgo:tag`.
### Container shell access and viewing SFTPGo logs
The docker exec command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `sftpgo` container:
```shell
docker exec -it some-sftpgo bash
```
The logs are available through Docker's container log:
```shell
docker logs some-sftpgo
```
### Where to Store Data
Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the SFTPGo images to familiarize themselves with the options available, including:
- Let Docker manage the storage for SFTPGo data by [writing them to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers. A minimal named-volume example is sketched just after this list.
- Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the SFTPGo files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly. The SFTPGo image runs using `1000` as UID/GID by default.
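For the first option, a minimal sketch using a Docker-managed named volume (the volume name `sftpgodata` below is only an example) could look like this:
```shell
docker run --name some-sftpgo \
-p 127.0.0.1:8080:8080 \
-p 2022:2022 \
--mount type=volume,source=sftpgodata,target=/srv/sftpgo \
-d "drakkan/sftpgo:tag"
```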
The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/sftpgodata`.
2. Create a home directory for the sftpgo container user on your host system e.g. `/my/own/sftpgohome`.
3. Start your SFTPGo container like this:
```shell
docker run --name some-sftpgo \
-p 127.0.0.1:8080:8090 \
-p 2022:2022 \
--mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
--mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
-e SFTPGO_HTTPD__BIND_PORT=8090 \
-d "drakkan/sftpgo:tag"
```
As you can see SFTPGo uses two volumes:
- `/srv/sftpgo` to handle persistent data. The default home directory for SFTP/FTP/WebDAV users is `/srv/sftpgo/data/<username>`. Backups are stored in `/srv/sftpgo/backups`
- `/var/lib/sftpgo` is the home directory for the sftpgo system user defined inside the container. This is the container working directory too, host keys will be created here when using the default configuration.
### Configuration
The runtime configuration can be customized via environment variables that you can set passing the `-e` option to the `docker run` command or inside the `environment` section if you are using [docker stack deploy](https://docs.docker.com/engine/reference/commandline/stack_deploy/) or [docker-compose](https://github.com/docker/compose).
Please take a look [here](../docs/full-configuration.md#environment-variables) to learn how to configure SFTPGo via environment variables.
Alternatively you can mount your custom configuration file to `/var/lib/sftpgo` or `/var/lib/sftpgo/.config/sftpgo`.
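For example, here is a minimal sketch that overrides a couple of options via environment variables; the option names follow the documented `SFTPGO_<section>__<key>` convention and the values below are only illustrative:
```shell
docker run --name some-sftpgo \
-p 127.0.0.1:8080:8080 \
-p 2222:2222 \
-e SFTPGO_SFTPD__BIND_PORT=2222 \
-e SFTPGO_SFTPD__MAX_AUTH_TRIES=3 \
-d "drakkan/sftpgo:tag"
```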
### Running as an arbitrary user
The SFTPGo image runs using `1000` as UID/GID by default. If you know the permissions of your data and/or configuration directory are already set appropriately or you have need of running SFTPGo with a specific UID/GID, it is possible to invoke this image with `--user` set to any value (other than `root/0`) in order to achieve the desired access/configuration:
```shell
$ ls -lnd data
drwxr-xr-x 2 1100 11000 6 6 nov 09.09 data
$ ls -lnd config
drwxr-xr-x 2 1100 11000 6 6 nov 09.19 config
```
With the above directory permissions, you can start a SFTPGo instance like this:
```shell
docker run --name some-sftpgo \
--user 1100:1100 \
-p 127.0.0.1:8080:8080 \
-p 2022:2022 \
--mount type=bind,source="${PWD}/data",target=/srv/sftpgo \
--mount type=bind,source="${PWD}/config",target=/var/lib/sftpgo \
-d "drakkan/sftpgo:tag"
```
## Image Variants
The `sftpgo` image comes in many flavors, each designed for a specific use case. The `edge` and `edge-alpine` tags are updated after each new commit.
### `sftpgo:<version>`
This is the de facto image. It is based on [Debian](https://www.debian.org/), available in [the `debian` official image](https://hub.docker.com/_/debian). If you are unsure about what your needs are, you probably want to use this one.
### `sftpgo:<version>-alpine`
This image is based on the popular [Alpine Linux project](https://alpinelinux.org/), available in [the `alpine` official image](https://hub.docker.com/_/alpine). Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.
This variant is highly recommended when final image size being as small as possible is desired. The main caveat to note is that it does use [musl libc](https://musl.libc.org/) instead of [glibc and friends](https://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See [this Hacker News comment thread](https://news.ycombinator.com/item?id=10782897) for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.
## Helm Chart
A Helm chart is [available](https://artifacthub.io/packages/helm/sagikazarmark/sftpgo). You can find the source code [here](https://github.com/sagikazarmark/helm-charts/tree/master/charts/sftpgo).
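A hedged sketch of installing it follows; the repository URL below is taken from the chart maintainer's setup and should be verified on the Artifact Hub page linked above, and the release name `sftpgo` is only an example:
```shell
helm repo add sagikazarmark https://charts.sagikazarmark.dev
helm repo update
helm install sftpgo sagikazarmark/sftpgo
```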

View file

@@ -1,8 +0,0 @@
FROM debian:latest
LABEL maintainer="nicola.murino@gmail.com"
RUN apt-get update && apt-get install -y curl python3-requests python3-pygments
RUN curl https://raw.githubusercontent.com/drakkan/sftpgo/master/examples/rest-api-cli/sftpgo_api_cli --output /usr/bin/sftpgo_api_cli
ENTRYPOINT ["python3", "/usr/bin/sftpgo_api_cli" ]
CMD []

View file

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -e
ARCH=`uname -m`
case ${ARCH} in
"x86_64")
SUFFIX=amd64
;;
"aarch64")
SUFFIX=arm64
;;
*)
SUFFIX=ppc64le
;;
esac
echo "download plugins for arch ${SUFFIX}"
for PLUGIN in geoipfilter kms pubsub eventstore eventsearch auth
do
echo "download plugin from https://github.com/sftpgo/sftpgo-plugin-${PLUGIN}/releases/latest/download/sftpgo-plugin-${PLUGIN}-linux-${SUFFIX}"
curl -L "https://github.com/sftpgo/sftpgo-plugin-${PLUGIN}/releases/latest/download/sftpgo-plugin-${PLUGIN}-linux-${SUFFIX}" --output "/usr/local/bin/sftpgo-plugin-${PLUGIN}"
chmod 755 "/usr/local/bin/sftpgo-plugin-${PLUGIN}"
done

View file

@@ -1,50 +0,0 @@
FROM golang:alpine as builder
RUN apk add --no-cache git gcc g++ ca-certificates \
&& go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o /go/bin/sftpgo
FROM alpine:latest
RUN apk add --no-cache ca-certificates su-exec \
&& mkdir -p /data /etc/sftpgo /srv/sftpgo/config /srv/sftpgo/web /srv/sftpgo/backups
# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apk add --no-cache git rsync
COPY --from=builder /go/bin/sftpgo /bin/
COPY --from=builder /go/src/github.com/drakkan/sftpgo/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /go/src/github.com/drakkan/sftpgo/templates /srv/sftpgo/web/templates
COPY --from=builder /go/src/github.com/drakkan/sftpgo/static /srv/sftpgo/web/static
COPY docker-entrypoint.sh /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh
VOLUME [ "/data", "/srv/sftpgo/config", "/srv/sftpgo/backups" ]
EXPOSE 2022 8080
# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121
# we need to expose the passive ports range too
#EXPOSE 50000-50100
# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=/srv/sftpgo/config/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=/srv/sftpgo/config/mycert.key
# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090
# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key
ENTRYPOINT ["/bin/entrypoint.sh"]
CMD ["serve"]

View file

@@ -1,61 +0,0 @@
# SFTPGo with Docker and Alpine
:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.
This Dockerfile is made to build an image that hosts multiple instances of SFTPGo started with different users.
## Example
> 1003 is a custom uid:gid for this instance of SFTPGo
```bash
# Prereq on docker host
sudo groupadd -g 1003 sftpgrp && \
sudo useradd -u 1003 -g 1003 sftpuser -d /home/sftpuser/ && \
sudo -u sftpuser mkdir /home/sftpuser/{conf,data} && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /home/sftpuser/conf/sftpgo.json
# Edit sftpgo.json as you need
# Get and build SFTPGo image.
# Add --build-arg TAG=LATEST to build the latest tag or e.g. TAG=v1.0.0 for a specific tag/commit.
# Add --build-arg FEATURES=<build features comma separated> to specify the features to build.
git clone https://github.com/drakkan/sftpgo.git && \
cd sftpgo && \
sudo docker build -t sftpgo docker/sftpgo/alpine/
# Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the "initprovider" command will create the required tables.
sudo docker run --name sftpgo \
-e PUID=1003 \
-e GUID=1003 \
-v /home/sftpuser/conf/:/srv/sftpgo/config \
sftpgo initprovider -c /srv/sftpgo/config
# Start the image
sudo docker rm sftpgo && sudo docker run --name sftpgo \
-e SFTPGO_LOG_FILE_PATH= \
-e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
-e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
-e SFTPGO_HTTPD__STATIC_FILES_PATH=/srv/sftpgo/web/static \
-e SFTPGO_HTTPD__BACKUPS_PATH=/srv/sftpgo/backups \
-p 8080:8080 \
-p 2022:2022 \
-e PUID=1003 \
-e GUID=1003 \
-v /home/sftpuser/conf/:/srv/sftpgo/config \
-v /home/sftpuser/data:/data \
-v /home/sftpuser/backups:/srv/sftpgo/backups \
sftpgo
```
If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV, you need to publish the configured port.
The script `entrypoint.sh` makes sure to correct the permissions of directories and start the process with the right user.
Several images can be run with different parameters.
## Custom systemd script
An example systemd unit is available [here](sftpgo.service), with `Environment` parameters to set `PUID` and `GUID`.
The `WorkingDirectory` configured in the unit must exist and contain one environment file per instance, e.g. `sftpgo-${PUID}.env`, holding the variables for that SFTPGo instance.
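For illustration only, a per-instance environment file such as `sftpgo-1003.env` could contain a few overrides, for example:
```bash
# loaded by the systemd unit via --env-file sftpgo-${PUID}.env
SFTPGO_SFTPD__BIND_PORT=2022
SFTPGO_HTTPD__BIND_PORT=8080
```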

View file

@@ -1,7 +0,0 @@
#!/bin/sh
set -eu
chown -R "${PUID}:${GUID}" /data /etc/sftpgo /srv/sftpgo/config /srv/sftpgo/backups \
&& exec su-exec "${PUID}:${GUID}" \
/bin/sftpgo "$@"

View file

@@ -1,35 +0,0 @@
[Unit]
Description=SFTPGo server
After=docker.service
[Service]
User=root
Group=root
WorkingDirectory=/etc/sftpgo
Environment=PUID=1003
Environment=GUID=1003
EnvironmentFile=-/etc/sysconfig/sftpgo.env
ExecStartPre=-docker kill sftpgo
ExecStartPre=-docker rm sftpgo
ExecStart=docker run --name sftpgo \
--env-file sftpgo-${PUID}.env \
-e PUID=${PUID} \
-e GUID=${GUID} \
-e SFTPGO_LOG_FILE_PATH= \
-e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
-e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
-e SFTPGO_HTTPD__STATIC_FILES_PATH=/srv/sftpgo/web/static \
-e SFTPGO_HTTPD__BACKUPS_PATH=/srv/sftpgo/backups \
-p 8080:8080 \
-p 2022:2022 \
-v /home/sftpuser/conf/:/srv/sftpgo/config \
-v /home/sftpuser/data:/data \
-v /home/sftpuser/backups:/srv/sftpgo/backups \
sftpgo
ExecStop=docker stop sftpgo
SyslogIdentifier=sftpgo
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target


@ -1,93 +0,0 @@
# we use a multi stage build to have a separate build and run env
FROM golang:latest as buildenv
LABEL maintainer="nicola.murino@gmail.com"
RUN go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
# now define the run environment
FROM debian:latest
# ca-certificates is needed for Cloud Storage Support and for HTTPS/FTPS.
RUN apt-get update && apt-get install -y ca-certificates && apt-get clean
# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apt-get update && apt-get install -y git rsync && apt-get clean
ARG BASE_DIR=/app
ARG DATA_REL_DIR=data
ARG CONFIG_REL_DIR=config
ARG BACKUP_REL_DIR=backups
ARG USERNAME=sftpgo
ARG GROUPNAME=sftpgo
ARG UID=515
ARG GID=515
ARG WEB_REL_PATH=web
# HOME_DIR for sftpgo itself
ENV HOME_DIR=${BASE_DIR}/${USERNAME}
# DATA_DIR, this is a volume that you can use to hold users' home dirs
ENV DATA_DIR=${BASE_DIR}/${DATA_REL_DIR}
# CONFIG_DIR, this is a volume to persist the daemon private keys, configuration file etc.
ENV CONFIG_DIR=${BASE_DIR}/${CONFIG_REL_DIR}
# BACKUPS_DIR, this is a volume to store backups done using "dumpdata" REST API
ENV BACKUPS_DIR=${BASE_DIR}/${BACKUP_REL_DIR}
ENV WEB_DIR=${BASE_DIR}/${WEB_REL_PATH}
RUN mkdir -p ${DATA_DIR} ${CONFIG_DIR} ${WEB_DIR} ${BACKUPS_DIR}
RUN groupadd --system -g ${GID} ${GROUPNAME}
RUN useradd --system --create-home --no-log-init --home-dir ${HOME_DIR} --comment "SFTPGo user" --shell /usr/sbin/nologin --gid ${GID} --uid ${UID} ${USERNAME}
WORKDIR ${HOME_DIR}
RUN mkdir -p bin .config/sftpgo
ENV PATH ${HOME_DIR}/bin:$PATH
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/sftpgo bin/sftpgo
# default config file to use if no config file is found inside the CONFIG_DIR volume.
# You can override each configuration options via env vars too
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/sftpgo.json .config/sftpgo/
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/templates ${WEB_DIR}/templates
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/static ${WEB_DIR}/static
RUN chown -R ${UID}:${GID} ${DATA_DIR} ${BACKUPS_DIR}
# run as non root user
USER ${USERNAME}
EXPOSE 2022 8080
# the defined volumes must have write access for the UID and GID defined above
VOLUME [ "$DATA_DIR", "$CONFIG_DIR", "$BACKUPS_DIR" ]
# override some default configuration options using env vars
ENV SFTPGO_CONFIG_DIR=${CONFIG_DIR}
# setting SFTPGO_LOG_FILE_PATH to an empty string will log to stdout
ENV SFTPGO_LOG_FILE_PATH=""
ENV SFTPGO_HTTPD__BIND_ADDRESS=""
ENV SFTPGO_HTTPD__TEMPLATES_PATH=${WEB_DIR}/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=${WEB_DIR}/static
ENV SFTPGO_DATA_PROVIDER__USERS_BASE_DIR=${DATA_DIR}
ENV SFTPGO_HTTPD__BACKUPS_PATH=${BACKUPS_DIR}
# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121
# we need to expose the passive ports range too
#EXPOSE 50000-50100
# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key
# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090
# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key
ENTRYPOINT ["sftpgo"]
CMD ["serve"]


@ -1,59 +0,0 @@
# Dockerfile based on Debian stable
:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.
Please read the comments inside the `Dockerfile` to learn how to customize things for your setup.
You can build the container image using `docker build`, for example:
```bash
docker build -t="drakkan/sftpgo" .
```
This builds the master branch of github.com/drakkan/sftpgo.
To build the latest tag you can add `--build-arg TAG=LATEST` and to build a specific tag/commit you can use for example `TAG=v1.0.0`, like this:
```bash
docker build -t="drakkan/sftpgo" --build-arg TAG=v1.0.0 .
```
To specify the features to build you can add `--build-arg FEATURES=<build features comma separated>`. For example you can disable SQLite and S3 support like this:
```bash
docker build -t="drakkan/sftpgo" --build-arg FEATURES=nosqlite,nos3 .
```
Please take a look at the [build from source](./../../../docs/build-from-source.md) documentation for the complete list of the features that can be disabled.
Now create the required folders on the host system, for example:
```bash
sudo mkdir -p /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```
and give write access to them to the UID/GID defined inside the `Dockerfile`. You can choose to create a new user, on the host system, with a matching UID/GID pair, or simply do something like this:
```bash
sudo chown -R <UID>:<GID> /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```
Download the default configuration file and edit it as you need:
```bash
sudo curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /srv/sftpgo/config/sftpgo.json
```
Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the `initprovider` command will create the required tables:
```bash
docker run --name sftpgo --mount type=bind,source=/srv/sftpgo/config,target=/app/config drakkan/sftpgo initprovider -c /app/config
```
and finally you can run the image using something like this:
```bash
docker rm sftpgo && docker run --name sftpgo -p 8080:8080 -p 2022:2022 --mount type=bind,source=/srv/sftpgo/data,target=/app/data --mount type=bind,source=/srv/sftpgo/config,target=/app/config --mount type=bind,source=/srv/sftpgo/backups,target=/app/backups drakkan/sftpgo
```
If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.


@ -1,87 +0,0 @@
# Account's configuration properties
For each account, the following properties can be configured:
- `username`
- `password` used for password authentication. For users created using SFTPGo REST API, if the password has no known hashing algo prefix, it will be stored using argon2id. SFTPGo supports checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt too. For pbkdf2 the supported format is `$<algo>$<iterations>$<salt>$<hashed pwd base64 encoded>`, where algo is `pbkdf2-sha1` or `pbkdf2-sha256` or `pbkdf2-sha512` or `$pbkdf2-b64salt-sha256$`. For example the `pbkdf2-sha256` of the word `password` using 150000 iterations and `E86a9YMX3zC7` as salt must be stored as `$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=`. In pbkdf2 variant with `b64salt` the salt is base64 encoded. For bcrypt the format must be the one supported by golang's [crypto/bcrypt](https://godoc.org/golang.org/x/crypto/bcrypt) package, for example the password `secret` with cost `14` must be stored as `$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK`. For md5crypt and sha512crypt we support the format used in `/etc/shadow` with the `$1$` and `$6$` prefix, this is useful if you are migrating from Unix system user accounts. We support Apache md5crypt (`$apr1$` prefix) too. Using the REST API you can send a password hashed as bcrypt, pbkdf2, md5crypt or sha512crypt and it will be stored as is.
- `public_keys` array of public keys. At least one public key or the password is mandatory.
- `status` 1 means "active", 0 "inactive". An inactive account cannot login.
- `expiration_date` expiration date as unix timestamp in milliseconds. An expired account cannot login. 0 means no expiration.
- `home_dir` the user cannot upload or download files outside this directory. Must be an absolute path. A local home directory is required for Cloud Storage Backends too: in this case it will store temporary files.
- `virtual_folders` list of mappings between virtual SFTP/SCP paths and local filesystem paths outside the user home directory. More information can be found [here](./virtual-folders.md)
- `uid`, `gid`. If SFTPGo runs as the root system user then the created files and directories will be assigned to this system uid/gid. Ignored on Windows or if SFTPGo runs as a non-root user: in this case files and directories for all SFTP users will be owned by the system user that runs SFTPGo.
- `max_sessions` maximum concurrent sessions. 0 means unlimited.
- `quota_size` maximum size allowed as bytes. 0 means unlimited.
- `quota_files` maximum number of files allowed. 0 means unlimited.
- `permissions` for SFTP paths (see the usage example after this properties list). The following per-directory permissions are supported:
- `*` all permissions are granted
- `list` listing items is allowed
- `download` downloading files is allowed
- `upload` uploading files is allowed
- `overwrite` overwriting an existing file, while uploading, is allowed. The `upload` permission is required to allow file overwrite
- `delete` deleting files or directories is allowed
- `rename` renaming a file or a directory is allowed if this permission is granted on both the source and the target path. You can enable rename in a more controlled way by granting the `delete` permission on the source directory and the `upload`/`create_dirs`/`create_symlinks` permissions on the target directory
- `create_dirs` creating directories is allowed
- `create_symlinks` creating symbolic links is allowed
- `chmod` changing file or directory permissions is allowed. On Windows, only the 0200 bit (owner writable) of mode is used; it controls whether the file's read-only attribute is set or cleared. The other bits are currently unused. Use mode 0400 for a read-only file and 0600 for a readable+writable file.
- `chown` changing file or directory owner and group is allowed. Changing owner and group is not supported on Windows.
- `chtimes` changing file or directory access and modification time is allowed
- `upload_bandwidth` maximum upload bandwidth as KB/s, 0 means unlimited.
- `download_bandwidth` maximum download bandwidth as KB/s, 0 means unlimited.
- `allowed_ip`, list of IP/Mask allowed to login. Any IP address not contained in this list cannot login. IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291, for example "192.0.2.0/24" or "2001:db8::/32"
- `denied_ip`, list of IP/Mask not allowed to login. If an IP address is both allowed and denied then login will be denied
- `max_upload_file_size`, max allowed size, as bytes, for a single file upload. The upload will be aborted if/when the size of the file being sent exceeds this limit. 0 means unlimited. This restriction does not apply to SSH system commands such as `git` and `rsync`
- `denied_login_methods`, list of login methods not allowed. To enable multi-step authentication you have to allow only multi-step login methods. If the password login method is denied or no password is set then FTP and WebDAV users cannot login. The following login methods are supported:
- `publickey`
- `password`
- `keyboard-interactive`
- `publickey+password`
- `publickey+keyboard-interactive`
- `denied_protocols`, list of protocols not allowed. The following protocols are supported:
- `SSH`
- `FTP`
- `DAV`
- `file_extensions`, list of structs. These restrictions do not apply to file listings for performance reasons, so a denied file cannot be downloaded/overwritten/renamed but it will still appear in the list of files. Please note that these restrictions can be easily bypassed. Each struct contains the following fields:
- `allowed_extensions`, list of case insensitive allowed file extensions. Shell like expansion is not supported so you have to specify `.jpg` and not `*.jpg`. Any file that does not end with one of these suffixes will be denied
- `denied_extensions`, list of case insensitive denied file extensions. Denied file extensions are evaluated before the allowed ones
- `path`, SFTP/SCP path, if no other specific filter is defined, the filter applies to subdirectories too. For example if filters are defined for the paths `/` and `/sub` then the filters for `/` apply to any file outside the `/sub` directory
- `fs_provider`, filesystem to serve via SFTP. Local filesystem (0), S3 Compatible Object Storage (1), Google Cloud Storage (2) and Azure Blob Storage (3) are supported
- `s3_bucket`, required for S3 filesystem
- `s3_region`, required for S3 filesystem. Must match the region for your bucket. You can find here the list of available [AWS regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions). For example if your bucket is at `Frankfurt` you have to set the region to `eu-central-1`
- `s3_access_key`
- `s3_access_secret`, if provided it is stored encrypted (AES-256-GCM). You can leave access key and access secret blank to use credentials from environment
- `s3_endpoint`, specifies an S3 endpoint (server) different from AWS. It is not required if you are connecting to AWS
- `s3_storage_class`, leave blank to use the default or specify a valid AWS [storage class](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
- `s3_key_prefix`, restricts access to the folder identified by this prefix and its contents
- `s3_upload_part_size`, the buffer size for multipart uploads (MB). Zero means the default (5 MB). Minimum is 5
- `s3_upload_concurrency` how many parts are uploaded in parallel
- `gcs_bucket`, required for GCS filesystem
- `gcs_credentials`, Google Cloud Storage JSON credentials base64 encoded
- `gcs_automatic_credentials`, integer. Set to 1 to use Application Default Credentials strategy or set to 0 to use explicit credentials via `gcs_credentials`
- `gcs_storage_class`
- `gcs_key_prefix`, restricts access to the folder identified by this prefix and its contents
- `az_container`, Azure Blob Storage container
- `az_account_name`, Azure account name. Leave blank to use SAS URL
- `az_account_key`, Azure account key. Leave blank to use SAS URL. If provided it is stored encrypted (AES-256-GCM)
- `az_sas_url`, Azure shared access signature URL
- `az_endpoint`, Default is "blob.core.windows.net". If you use the emulator the endpoint must include the protocol, for example "http://127.0.0.1:10000"
- `az_upload_part_size`, the buffer size for multipart uploads (MB). Zero means the default (4 MB)
- `az_upload_concurrency`, how many parts are uploaded in parallel. Zero means the default (2)
- `az_key_prefix`, restricts access to the folder identified by this prefix and its contents
- `az_use_emulator`, boolean
These properties are stored inside the data provider.
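To illustrate the per-directory `permissions` structure described above, here is a minimal sketch that adds a user who can only read from `/` but can also upload into `/uploads`. The `/api/v1/user` endpoint path and all field values are assumptions for this example, kept deliberately minimal; check the OpenAPI schema referenced below for the exact endpoint and the mandatory fields.
```shell
# Hypothetical example: add a user whose permissions differ per directory.
# The endpoint path and the field values are assumptions, not a complete user
# definition; consult the OpenAPI schema for the required fields.
curl -X POST "http://127.0.0.1:8080/api/v1/user" \
  -H "Content-Type: application/json" \
  -d '{
        "username": "test_user",
        "password": "secret",
        "home_dir": "/srv/sftpgo/data/test_user",
        "status": 1,
        "permissions": {
          "/": ["list", "download"],
          "/uploads": ["list", "upload", "overwrite", "delete"]
        }
      }'
```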
If you want to use your existing accounts, you have these options:
- you can import your users inside SFTPGo. Take a look at [sftpgo_api_cli](../examples/rest-api-cli#convert-users-from-other-stores "SFTPGo API CLI example"), it can convert and import users from Linux system users and Pure-FTPd/ProFTPD virtual users
- you can use an external authentication program
Please take a look at the [OpenAPI schema](../httpd/schema/openapi.yaml) for the exact definitions of user and folder fields.
If you need an example you can export a dump using the REST API CLI client or by invoking the `dumpdata` endpoint directly, for example:
```shell
curl "http://127.0.0.1:8080/api/v1/dumpdata?output_file=dump.json&indent=1"
```
The dump is a JSON document containing users and folders.


@ -1,20 +0,0 @@
# Azure Blob Storage backend
To connect SFTPGo to Azure Blob Storage, you need to specify the access credentials. Azure Blob Storage has different options for credentials; we support:
1. Providing an account name and account key.
2. Providing a shared access signature (SAS).
If you authenticate using account and key you also need to specify a container. The endpoint can generally be left blank, the default is `blob.core.windows.net`.
If you provide a SAS URL the container is optional and if given it must match the one inside the shared access signature.
If you want to connect to an emulator such as [Azurite](https://github.com/Azure/Azurite) you need to provide the account name/key pair and an endpoint prefixed with the protocol, for example `http://127.0.0.1:10000`.
By specifying a different `key_prefix`, you can assign different "folders" of the same container to different users. This is similar to a chroot directory for a local filesystem. Each SFTPGo user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
For multipart uploads you can customize the parts size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and the Azure Blob service then the client should wait for the last parts to be uploaded to Azure after finishing uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.
The configured container must exist.
This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations.


@ -1,48 +0,0 @@
# Build SFTPGo from source
You can install the package to your [\$GOPATH](https://github.com/golang/go/wiki/GOPATH "GOPATH") with the [go tool](https://golang.org/cmd/go/ "go command") from shell:
```bash
go get -u github.com/drakkan/sftpgo
```
Or you can download the sources and use `go build`.
Make sure [Git](https://git-scm.com/downloads) is installed on your machine and in your system's `PATH`.
The following build tags are available:
- `nogcs`, disable Google Cloud Storage backend, default enabled
- `nos3`, disable S3 Compatible Object Storage backends, default enabled
- `noazblob`, disable Azure Blob Storage backend, default enabled
- `nobolt`, disable Bolt data provider, default enabled
- `nomysql`, disable MySQL data provider, default enabled
- `nopgsql`, disable PostgreSQL data provider, default enabled
- `nosqlite`, disable SQLite data provider, default enabled
- `noportable`, disable portable mode, default enabled
- `nometrics`, disable Prometheus metrics, default enabled
If no build tag is specified the build will include the default features.
The optional [SQLite driver](https://github.com/mattn/go-sqlite3 "go-sqlite3") is a `CGO` package and so it requires a `C` compiler at build time.
On Linux and macOS, a compiler is easy to install or already installed. On Windows, you need to download [MinGW-w64](https://sourceforge.net/projects/mingw-w64/files/) and build SFTPGo from its command prompt.
The compiler is a build time only dependency. It is not required at runtime.
Version info, such as git commit and build date, can be embedded setting the following string variables at build time:
- `github.com/drakkan/sftpgo/version.commit`
- `github.com/drakkan/sftpgo/version.date`
For example, you can build using the following command:
```bash
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
```
You should get a version that includes git commit, build date and available features like this one:
```bash
$ ./sftpgo -v
SFTPGo 0.9.6-dev-b30614e-dirty-2020-06-19T11:04:56Z +metrics -gcs -s3 +bolt +mysql +pgsql -sqlite +portable
```


@ -1,45 +0,0 @@
# Check password hook
This hook allows you to externally check the provided password. Its main use case is to easily support things like password+OTP for protocols without keyboard interactive support, such as FTP and WebDAV. You can ask your users to login using a string consisting of a fixed password and a One Time Token; you can verify the token inside the hook and ask SFTPGo to verify the fixed part.
The same thing can be achieved using [External authentication](./external-auth.md) but using this hook is simpler in some use cases.
The `check password hook` can be defined as the absolute path of your program or an HTTP URL.
The expected response is a JSON serialized struct containing the following keys:
- `status` integer. 0 means KO, 1 means OK, 2 means partial success
- `to_verify` string. For `status` = 2 SFTPGo will check this password against the one stored inside SFTPGo data provider
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_PASSWORD`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
The program must write, on its standard output, the expected JSON serialized response described above.
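For example, a minimal program hook could look like the following sketch. It assumes, purely for demonstration, that users append a fixed token `123456` to their real password; a real hook would verify a proper one-time token instead.
```shell
#!/bin/sh
# Demo check password hook: expect "<fixed password><token>" where the token is
# a hardcoded demo value. Escaping of special characters in the password when
# building the JSON response is omitted for brevity.
TOKEN="123456"
PASSWORD="${SFTPGO_AUTHD_PASSWORD}"
case "${PASSWORD}" in
  *"${TOKEN}")
    # Strip the token and ask SFTPGo to verify the remaining fixed part
    # against the password stored inside the data provider (status 2).
    TO_VERIFY="${PASSWORD%"${TOKEN}"}"
    printf '{"status":2,"to_verify":"%s"}' "${TO_VERIFY}"
    ;;
  *)
    # No valid token: reject the login.
    printf '{"status":0}'
    ;;
esac
```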
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `password`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`
If authentication succeeds the HTTP response code must be 200 and the response body must contain the expected JSON serialized response described above.
The program hook must finish within 30 seconds, the HTTP hook timeout will use the global configuration for HTTP clients.
You can also restrict the hook scope using the `check_password_scope` configuration key:
- `0` means all supported protocols.
- `1` means SSH only
- `2` means FTP only
- `4` means WebDAV only
You can combine the scopes. For example, 6 means FTP and WebDAV.
An example check password program allowing 2FA using password + one time token can be found inside the source tree [checkpwd](../examples/OTP/authy/checkpwd) directory.


@ -1,89 +0,0 @@
# Custom Actions
The `actions` struct inside the "common" configuration section allows you to configure the actions for file operations and SSH commands.
The `hook` can be defined as the absolute path of your program or an HTTP URL.
The `upload` condition includes both uploads to new files and overwrites of existing files. If an upload is aborted due to quota limits, SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error, the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
The notification will indicate if an error is detected and so, for example, a partial file is uploaded.
The `pre-delete` action, if defined, will be called just before files deletion. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo will assume that the file was already deleted/moved and so it will not try to remove the file and it will not execute the hook defined for the `delete` action.
If the `hook` defines a path to an external program, then this program is invoked with the following arguments:
- `action`, string, possible values are: `download`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`
- `username`
- `path` is the full filesystem path, can be empty for some ssh commands
- `target_path`, non-empty for `rename` action and for `sftpgo-copy` SSH command
- `ssh_cmd`, non-empty for `ssh_cmd` action
The external program can also read the following environment variables:
- `SFTPGO_ACTION`
- `SFTPGO_ACTION_USERNAME`
- `SFTPGO_ACTION_PATH`
- `SFTPGO_ACTION_TARGET`, non-empty for `rename` `SFTPGO_ACTION`
- `SFTPGO_ACTION_SSH_CMD`, non-empty for `ssh_cmd` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FILE_SIZE`, non-empty for `upload`, `download` and `delete` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for the S3 and Azure backends if configured. For Azure this is the SAS URL, if configured, otherwise the endpoint
- `SFTPGO_ACTION_STATUS`, integer. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`
Previous global environment variables aren't cleared when the script is called.
The program must finish within 30 seconds.
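For instance, a minimal program hook could simply append a line describing the event to a local log file. This is only a sketch; the log path is arbitrary and not part of SFTPGo.
```shell
#!/bin/sh
# Demo action hook: append a one-line summary of the event to a log file.
# Unset variables (e.g. SFTPGO_ACTION_TARGET for non-rename actions) expand to
# empty strings. The log path below is arbitrary.
LOG_FILE="/var/log/sftpgo-actions.log"
{
  printf '%s ' "$(date -u +%FT%TZ)"
  printf 'action=%s user=%s path=%s target=%s status=%s protocol=%s\n' \
    "${SFTPGO_ACTION}" "${SFTPGO_ACTION_USERNAME}" "${SFTPGO_ACTION_PATH}" \
    "${SFTPGO_ACTION_TARGET}" "${SFTPGO_ACTION_STATUS}" "${SFTPGO_ACTION_PROTOCOL}"
} >> "${LOG_FILE}"
```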
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `action`
- `username`
- `path`
- `target_path`, not null for `rename` action
- `ssh_cmd`, not null for `ssh_cmd` action
- `file_size`, not null for `upload`, `download`, `delete` actions
- `fs_provider`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `bucket`, not null for S3, GCS and Azure backends
- `endpoint`, not null for the S3 and Azure backends if configured. For Azure this is the SAS URL, if configured, otherwise the endpoint
- `status`, integer. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `FTP`, `DAV`
The HTTP request will use the global configuration for HTTP clients.
The `actions` struct inside the "data_provider" configuration section allows you to configure actions on user add, update, delete.
Actions will not be fired for internal updates, such as the last login or the user quota fields, or after external authentication.
If the `hook` defines a path to an external program, then this program is invoked with the following arguments:
- `action`, string, possible values are: `add`, `update`, `delete`
- `username`
- `ID`
- `status`
- `expiration_date`
- `home_dir`
- `uid`
- `gid`
The external program can also read the following environment variables:
- `SFTPGO_USER_ACTION`
- `SFTPGO_USER_USERNAME`
- `SFTPGO_USER_PASSWORD`, hashed password as stored inside the data provider, can be empty if the user does not login using a password
- `SFTPGO_USER_ID`
- `SFTPGO_USER_STATUS`
- `SFTPGO_USER_EXPIRATION_DATE`
- `SFTPGO_USER_HOME_DIR`
- `SFTPGO_USER_UID`
- `SFTPGO_USER_GID`
- `SFTPGO_USER_QUOTA_FILES`
- `SFTPGO_USER_QUOTA_SIZE`
- `SFTPGO_USER_UPLOAD_BANDWIDTH`
- `SFTPGO_USER_DOWNLOAD_BANDWIDTH`
- `SFTPGO_USER_MAX_SESSIONS`
- `SFTPGO_USER_FS_PROVIDER`
Previous global environment variables aren't cleared when the script is called.
The program must finish within 15 seconds.
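As a sketch, a program hook could forward these events to syslog (assuming the `logger` utility is available on the host):
```shell
#!/bin/sh
# Demo data provider action hook: forward user add/update/delete events to syslog.
logger -t sftpgo \
  "user action=${SFTPGO_USER_ACTION} username=${SFTPGO_USER_USERNAME} status=${SFTPGO_USER_STATUS} home_dir=${SFTPGO_USER_HOME_DIR}"
```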
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The action is added to the query string, for example `<hook>?action=update`, and the user is sent serialized as JSON inside the POST body with sensitive fields removed.
The HTTP request will use the global configuration for HTTP clients.


@ -1,55 +0,0 @@
# Dynamic user creation or modification
Dynamic user creation or modification is supported via an external program or an HTTP URL that can be invoked just before the user logs in.
To enable dynamic user modification, you must set the absolute path of your program or an HTTP URL using the `pre_login_hook` key in your configuration file.
The external program can read the following environment variables to get info about the user trying to login:
- `SFTPGO_LOGIND_USER`, it contains the user trying to login serialized as JSON. A JSON serialized user id equal to zero means the user does not exist inside SFTPGo
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey` and `keyboard-interactive`
- `SFTPGO_LOGIND_IP`, ip address of the user trying to login
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
The program must write, on its standard output:
- an empty string (or no response at all) if the user should not be created/updated
- or the SFTPGo user, JSON serialized, if you want to create or update the given user
If the hook is an HTTP URL then it will be invoked as HTTP POST. The login method, the used protocol and the ip address of the user trying to login are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH`.
The request body will contain the user trying to login serialized as JSON. If no modification is needed the HTTP response code must be 204, otherwise the response code must be 200 and the response body a valid SFTPGo user serialized as JSON.
Actions defined for user updates will not be executed in this case and an already logged in user with the same username will not be disconnected; you have to handle these things yourself.
The JSON response can include only the fields to update instead of the full user. For example, if you want to disable the user, you can return a response like this:
```json
{"status": 0}
```
Please note that if you want to create a new user, the pre-login hook response must include all the mandatory user fields.
The program hook must finish within 30 seconds, the HTTP hook will use the global configuration for HTTP clients.
If an error happens while executing the hook then login will be denied.
"Dynamic user creation or modification" and "External Authentication" are mutually exclusive, they are quite similar, the difference is that "External Authentication" returns an already authenticated user while using "Dynamic users modification" you simply create or update a user. The authentication will be checked inside SFTPGo.
In other words while using "External Authentication" the external program receives the credentials of the user trying to login (for example the cleartext password) and it needs to validate them. While using "Dynamic users modification" the pre-login program receives the user stored inside the dataprovider (it includes the hashed password if any) and it can modify it, after the modification SFTPGo will check the credentials of the user trying to login.
Let's see a very basic example. Our sample program will grant access to the existing user `test_user` only in the time range 10:00-18:00. Other users will not be modified since the program will terminate with no output.
```shell
#!/bin/bash
CURRENT_TIME=`date +%H:%M`
if [[ "$SFTPGO_LOGIND_USER" =~ "\"test_user\"" ]]
then
if [[ $CURRENT_TIME > "18:00" || $CURRENT_TIME < "10:00" ]]
then
echo '{"status":0}'
else
echo '{"status":1}'
fi
fi
```
Please note that this is a demo program and it might not work in all cases. For example, the username should be obtained by parsing the JSON serialized user and not by searching the username inside the JSON as shown here.
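A slightly more robust variant, assuming `jq` is available, could extract the username by parsing the JSON serialized user instead:
```shell
#!/bin/bash
# Parse the JSON serialized user instead of searching for a quoted substring.
# Requires jq. No output means the user is left unchanged.
USERNAME="$(printf '%s' "${SFTPGO_LOGIND_USER}" | jq -r '.username')"
if [[ "${USERNAME}" == "test_user" ]]; then
  CURRENT_TIME="$(date +%H:%M)"
  if [[ "${CURRENT_TIME}" > "18:00" || "${CURRENT_TIME}" < "10:00" ]]; then
    # Outside the 10:00-18:00 range: disable the user for this login.
    echo '{"status":0}'
  else
    echo '{"status":1}'
  fi
fi
```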


@ -1,58 +0,0 @@
# External Authentication
To enable external authentication, you must set the absolute path of your authentication program or an HTTP URL using the `external_auth_hook` key in your configuration file.
The external program can read the following environment variables to get info about the user trying to authenticate:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
- `SFTPGO_AUTHD_PASSWORD`, not empty for password authentication
- `SFTPGO_AUTHD_PUBLIC_KEY`, not empty for public key authentication
- `SFTPGO_AUTHD_KEYBOARD_INTERACTIVE`, not empty for keyboard interactive authentication
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
The program must write, on its standard output, a valid SFTPGo user serialized as JSON if the authentication succeeds or a user with an empty username if the authentication fails.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`
- `password`, not empty for password authentication
- `public_key`, not empty for public key authentication
- `keyboard_interactive`, not empty for keyboard interactive authentication
If authentication succeeds the HTTP response code must be 200 and the response body a valid SFTPGo user serialized as JSON. If the authentication fails the HTTP response code must be != 200 or the response body must be empty.
If the authentication succeeds, the user will be automatically added/updated inside the defined data provider. Actions defined for users added/updated will not be executed in this case and an already logged in user with the same username will not be disconnected; you have to handle these things yourself.
The program hook must finish within 30 seconds, the HTTP hook timeout will use the global configuration for HTTP clients.
This method is slower than built-in authentication, but it's very flexible as anyone can easily write their own authentication hooks.
You can also restrict the authentication scope for the hook using the `external_auth_scope` configuration key:
- `0` means all supported authentication scopes. The external hook will be used for password, public key and keyboard interactive authentication
- `1` means passwords only
- `2` means public keys only
- `4` means keyboard interactive only
You can combine the scopes. For example, 3 means password and public key, 5 means password and keyboard interactive, and so on.
Let's see a very basic example. Our sample authentication program will only accept user `test_user` with any password or public key.
```shell
#!/bin/sh
if test "$SFTPGO_AUTHD_USERNAME" = "test_user"; then
echo '{"status":1,"username":"test_user","expiration_date":0,"home_dir":"/tmp/test_user","uid":0,"gid":0,"max_sessions":0,"quota_size":0,"quota_files":100000,"permissions":{"/":["*"],"/somedir":["list","download"]},"upload_bandwidth":0,"download_bandwidth":0,"filters":{"allowed_ip":[],"denied_ip":[]},"public_keys":[]}'
else
echo '{"username":""}'
fi
```
An example authentication program that authenticates against an LDAP server can be found inside the source tree [ldapauth](../examples/ldapauth) directory.
An example server, to use as HTTP authentication hook, that authenticates against an LDAP server can be found inside the source tree [ldapauthserver](../examples/ldapauthserver) directory.
If you have an external authentication hook that could be useful to others too, please let us know and/or please send a pull request.


@ -1,202 +0,0 @@
# Configuring SFTPGo
## Command line options
The SFTPGo executable can be used this way:
```console
Usage:
sftpgo [command]
Available Commands:
gen A collection of useful generators
help Help about any command
initprovider Initializes and/or updates the configured data provider
portable Serve a single directory
serve Start the SFTP Server
Flags:
-h, --help help for sftpgo
-v, --version
Use "sftpgo [command] --help" for more information about a command
```
The `serve` command supports the following flags:
- `--config-dir` string. Location of the config dir. This directory should contain the configuration file and is used as the base directory for any files that use a relative path (e.g. the private keys for the SFTP server, the SQLite or bbolt database if you use SQLite or bbolt as data provider). The default value is "." or the value of `SFTPGO_CONFIG_DIR` environment variable.
- `--config-file` string. Name of the configuration file. It must be the name of a file stored in `config-dir`, not the absolute path to the configuration file. The specified file name must have no extension because we automatically append JSON, YAML, TOML, HCL and Java extensions when we search for the file. The default value is "sftpgo" (and therefore `sftpgo.json`, `sftpgo.yaml` and so on are searched) or the value of `SFTPGO_CONFIG_FILE` environment variable.
- `--loaddata-from` string. Load users and folders from this file. The file must be specified as absolute path and it must contain a backup obtained using the `dumpdata` REST API or compatible content. The default value is empty or the value of `SFTPGO_LOADDATA_FROM` environment variable.
- `--loaddata-clean` boolean. Determine if the loaddata-from file should be removed after a successful load. Default `false` or the value of `SFTPGO_LOADDATA_CLEAN` environment variable (1 or `true`, 0 or `false`).
- `--loaddata-mode`, integer. Restore mode for data to load. 0 means new users are added, existing users are updated. 1 means new users are added, existing users are not modified. Default 1 or the value of `SFTPGO_LOADDATA_MODE` environment variable.
- `--loaddata-scan`, integer. Quota scan mode after data load. 0 means no quota scan. 1 means quota scan. 2 means scan quota if the user has quota restrictions. Default 0 or the value of `SFTPGO_LOADDATA_QUOTA_SCAN` environment variable.
- `--log-compress` boolean. Determine if the rotated log files should be compressed using gzip. Default `false` or the value of `SFTPGO_LOG_COMPRESS` environment variable (1 or `true`, 0 or `false`). It is unused if `log-file-path` is empty.
- `--log-file-path` string. Location for the log file, default "sftpgo.log" or the value of `SFTPGO_LOG_FILE_PATH` environment variable. Leave empty to write logs to the standard error.
- `--log-max-age` int. Maximum number of days to retain old log files. Default 28 or the value of `SFTPGO_LOG_MAX_AGE` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-backups` int. Maximum number of old log files to retain. Default 5 or the value of `SFTPGO_LOG_MAX_BACKUPS` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-size` int. Maximum size in megabytes of the log file before it gets rotated. Default 10 or the value of `SFTPGO_LOG_MAX_SIZE` environment variable. It is unused if `log-file-path` is empty.
- `--log-verbose` boolean. Enable verbose logs. Default `true` or the value of `SFTPGO_LOG_VERBOSE` environment variable (1 or `true`, 0 or `false`).
- `--profiler` boolean. Enable the built-in profiler. The profiler will be accessible via HTTP/HTTPS using the base URL "/debug/pprof/". Default `false` or the value of `SFTPGO_PROFILER` environment variable (1 or `true`, 0 or `false`).
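For example, a possible `serve` invocation combining some of these flags (all paths and values are illustrative) is sketched below:
```shell
# illustrative only: adjust paths and values to your setup
sftpgo serve \
  --config-dir /etc/sftpgo \
  --log-file-path /var/log/sftpgo/sftpgo.log \
  --log-max-size 20 \
  --log-max-backups 7 \
  --log-compress=true
```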
Log file can be rotated on demand sending a `SIGUSR1` signal on Unix based systems and using the command `sftpgo service rotatelogs` on Windows.
If you don't configure any private host key, the daemon will use `id_rsa`, `id_ecdsa` and `id_ed25519` in the configuration directory. If these files don't exist, the daemon will attempt to autogenerate them. The server supports any private key format supported by [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/keys.go#L33).
The `gen` command allows you to generate completion scripts for your shell and man pages.
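For example, assuming the standard `completion` and `man` sub-commands generated by the CLI framework (check `sftpgo gen --help` for the exact names and flags), you could run something like:
```shell
# generate a bash completion script (the output path is just an example)
sftpgo gen completion bash > /etc/bash_completion.d/sftpgo
# generate man pages into the specified directory (flag name is an assumption)
sftpgo gen man -d /usr/share/man/man1
```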
## Configuration file
The configuration file contains the following sections:
- **"common"**, configuration parameters shared among all the supported protocols
- `idle_timeout`, integer. Time in minutes after which an idle client will be disconnected. 0 means disabled. Default: 15
- `upload_mode` integer. 0 means standard: the files are uploaded directly to the requested path. 1 means atomic: files are uploaded to a temporary path and renamed to the requested path when the client ends the upload. Atomic mode avoids problems such as a web server that serves partial files when the files are being uploaded. In atomic mode, if there is an upload error, the temporary file is deleted and so the requested upload path will not contain a partial file. 2 means atomic with resume support: same as atomic but if there is an upload error, the temporary file is renamed to the requested path and not deleted. This way, a client can reconnect and resume the upload.
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `download`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`. Leave empty to disable actions.
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `setstat_mode`, integer. 0 means "normal mode": requests for changing permissions, owner/group and access/modification times are executed. 1 means "ignore mode": requests for changing permissions, owner/group and access/modification times are silently ignored.
- `proxy_protocol`, integer. Support for [HAProxy PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGINX, you can enable the proxy protocol. It provides a convenient way to safely transport connection information such as a client's address across multiple layers of NAT or TCP proxies to get the real client IP address instead of the proxy IP. Both protocol versions 1 and 2 are supported. If the proxy protocol is enabled in SFTPGo then you have to enable the protocol in your proxy configuration too. For example, for HAProxy, add `send-proxy` or `send-proxy-v2` to each server configuration line. The following modes are supported:
- 0, disabled
- 1, enabled. Proxy header will be used and requests without proxy header will be accepted
- 2, required. Proxy header will be used and requests without proxy header will be rejected
- `proxy_allowed`, List of IP addresses and IP ranges allowed to send the proxy header:
- If `proxy_protocol` is set to 1 and we receive a proxy header from an IP that is not in the list then the connection will be accepted and the header will be ignored
- If `proxy_protocol` is set to 2 and we receive a proxy header from an IP that is not in the list then the connection will be rejected
- `post_connect_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Post connect hook](./post-connect-hook.md) for more details. Leave empty to disable
- **"sftpd"**, the configuration for the SFTP server
- `bind_port`, integer. The port used for serving SFTP requests. Default: 2022
- `bind_address`, string. Leave blank to listen on all available network interfaces. Default: ""
- `idle_timeout`, integer. Deprecated, please use the same key in `common` section.
- `max_auth_tries` integer. Maximum number of authentication attempts permitted per connection. If set to a negative number, the number of attempts is unlimited. If set to zero, the number of attempts is limited to 6.
- `banner`, string. Identification string used by the server. Leave empty to use the default banner. Default `SFTPGo_<version>`, for example `SSH-2.0-SFTPGo_0.9.5`
- `upload_mode` integer. Deprecated, please use the same key in `common` section.
- `actions`, struct. Deprecated, please use the same key in `common` section.
- `keys`, struct array. Deprecated, please use `host_keys`.
- `private_key`, path to the private key file. It can be a path relative to the config dir or an absolute one.
- `host_keys`, list of strings. It contains the daemon's private host keys. Each host key can be defined as a path relative to the configuration directory or an absolute one. If empty, the daemon will search or try to generate `id_rsa`, `id_ecdsa` and `id_ed25519` keys inside the configuration directory. If you configure absolute paths to files named `id_rsa`, `id_ecdsa` and/or `id_ed25519` then SFTPGo will try to generate these keys using the default settings.
- `kex_algorithms`, list of strings. Available KEX (Key Exchange) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/common.go#L46 "Supported kex algos")
- `ciphers`, list of strings. Allowed ciphers. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L28 "Supported ciphers")
- `macs`, list of strings. Available MAC (message authentication code) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L84 "Supported MACs")
- `trusted_user_ca_keys`, list of public keys paths of certificate authorities that are trusted to sign user certificates for authentication. The paths can be absolute or relative to the configuration directory.
- `login_banner_file`, path to the login banner file. The contents of the specified file, if any, are sent to the remote user before authentication is allowed. It can be a path relative to the config dir or an absolute one. Leave empty to disable login banner.
- `setstat_mode`, integer. Deprecated, please use the same key in `common` section.
- `enabled_ssh_commands`, list of enabled SSH commands. `*` enables all supported commands. More information can be found [here](./ssh-commands.md).
- `keyboard_interactive_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for keyboard interactive authentication. See [Keyboard Interactive Authentication](./keyboard-interactive.md) for more details.
- `password_authentication`, boolean. Set to false to disable password authentication. This setting will disable multi-step authentication method using public key + password too. It is useful for public key only configurations if you need to manage old clients that will not attempt to authenticate with public keys if the password login method is advertised. Default: true.
- `proxy_protocol`, integer. Deprecated, please use the same key in `common` section.
- `proxy_allowed`, list of strings. Deprecated, please use the same key in `common` section.
- **"ftpd"**, the configuration for the FTP server
- `bind_port`, integer. The port used for serving FTP requests. 0 means disabled. Default: 0.
- `bind_address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `banner`, string. Greeting banner displayed when a connection first comes in. Leave empty to use the default banner. Default `SFTPGo <version> ready`, for example `SFTPGo 1.0.0-dev ready`.
- `banner_file`, path to the banner file. The contents of the specified file, if any, are displayed when someone connects to the server. It can be a path relative to the config dir or an absolute one. If set, it overrides the banner string provided by the `banner` option. Leave empty to disable.
- `active_transfers_port_non_20`, boolean. Do not impose the port 20 for active data transfers. Enabling this option allows running SFTPGo with fewer privileges. Default: false.
- `force_passive_ip`, ip address. External IP address to expose for passive connections. Leave empty to autodetect. Default: "".
- `passive_port_range`, struct containing the key `start` and `end`. Port Range for data connections. Random if not specified. Default range is 50000-50100.
- `certificate_file`, string. Certificate for FTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided the server will accept both plain FTP and explicit FTP over TLS. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `tls_mode`, integer. 0 means accept both cleartext and encrypted sessions. 1 means TLS is required for both control and data connection. Do not enable this blindly: please check that a proper TLS config is in place, otherwise no login will be allowed if `tls_mode` is 1.
- **"webdavd"**, the configuration for the WebDAV server, more info [here](./webdav.md)
- `bind_port`, integer. The port used for serving WebDAV requests. 0 means disabled. Default: 0.
- `bind_address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `certificate_file`, string. Certificate for WebDAV over HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `cors` struct containing CORS configuration. SFTPGo uses [Go CORS handler](https://github.com/rs/cors), please refer to upstream documentation for fields meaning and their default values.
- `enabled`, boolean, set to true to enable CORS.
- `allowed_origins`, list of strings.
- `allowed_methods`, list of strings.
- `allowed_headers`, list of strings.
- `exposed_headers`, list of strings.
- `allow_credentials` boolean.
- `max_age`, integer.
- `cache` struct containing cache configuration for the authenticated users.
- `enabled`, boolean, set to true to enable user caching. Default: true.
- `expiration_time`, integer. Expiration time, in minutes, for the cached users. 0 means unlimited. Default: 0.
- `max_size`, integer. Maximum number of users to cache. 0 means unlimited. Default: 50.
- **"data_provider"**, the configuration for the data provider
- `driver`, string. Supported drivers are `sqlite`, `mysql`, `postgresql`, `bolt`, `memory`
- `name`, string. Database name. For driver `sqlite` this can be the database name relative to the config dir or the absolute path to the SQLite database. For driver `memory` this is the (optional) path relative to the config dir or the absolute path to the users dump, obtained using the `dumpdata` REST API, to load. This dump will be loaded at startup and can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The `memory` provider will not modify the provided file so quota usage and last login will not be persisted
- `host`, string. Database host. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `port`, integer. Database port. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `username`, string. Database user. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `password`, string. Database password. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `sslmode`, integer. Used for drivers `mysql` and `postgresql`. 0 disable SSL/TLS connections, 1 require ssl, 2 set ssl mode to `verify-ca` for driver `postgresql` and `skip-verify` for driver `mysql`, 3 set ssl mode to `verify-full` for driver `postgresql` and `preferred` for driver `mysql`
- `connectionstring`, string. Provide a custom database connection string. If not empty, this connection string will be used instead of building one using the previous parameters. Leave empty for drivers `bolt` and `memory`
- `sql_tables_prefix`, string. Prefix for SQL tables
- `manage_users`, integer. Set to 0 to disable users management, 1 to enable
- `track_quota`, integer. Set the preferred mode to track users quota between the following choices:
- 0, disable quota tracking. REST API to scan users home directories/virtual folders and update quota will do nothing
- 1, quota is updated each time a user uploads or deletes a file, even if the user has no quota restrictions
- 2, quota is updated each time a user uploads or deletes a file, but only for users with quota restrictions and for virtual folders. With this configuration, the `quota scan` and `folder_quota_scan` REST API can still be used to periodically update space usage for users without quota restrictions and for folders
- `pool_size`, integer. Sets the maximum number of open connections for `mysql` and `postgresql` driver. Default 0 (unlimited)
- `users_base_dir`, string. Users default base directory. If no home dir is defined while adding a new user, and this value is a valid absolute path, then the user home dir will be automatically defined as the path obtained joining the base dir and the username
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `add`, `update`, `delete`. `update` action will not be fired for internal updates such as the last login or the user quota fields.
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `external_auth_program`, string. Deprecated, please use `external_auth_hook`.
- `external_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for users authentication. See [External Authentication](./external-auth.md) for more details. Leave empty to disable.
- `external_auth_scope`, integer. 0 means all supported authentication scopes (passwords, public keys and keyboard interactive). 1 means passwords only. 2 means public keys only. 4 means keyboard interactive only. The flags can be combined, for example 6 means public keys and keyboard interactive
- `credentials_path`, string. It defines the directory for storing user provided credential files such as Google Cloud Storage credentials. This can be an absolute path or a path relative to the config dir
- `prefer_database_credentials`, boolean. When true, users' Google Cloud Storage credentials will be written to the data provider instead of disk, though pre-existing credentials on disk will be used as a fallback. When false, they will be written to the directory specified by `credentials_path`.
- `pre_login_program`, string. Deprecated, please use `pre_login_hook`.
- `pre_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to modify user details just before the login. See [Dynamic user modification](./dynamic-user-mod.md) for more details. Leave empty to disable.
- `post_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to notify a successful or failed login. See [Post-login hook](./post-login-hook.md) for more details. Leave empty to disable.
- `post_login_scope`, defines the scope for the post-login hook. 0 means notify both failed and successful logins. 1 means notify failed logins. 2 means notify successful logins.
- `check_password_hook`, string. Absolute path to an external program or an HTTP URL to invoke to check the user provided password. See [Check password hook](./check-password-hook.md) for more details. Leave empty to disable.
- `check_password_scope`, defines the scope for the check password hook. 0 means all protocols, 1 means SSH, 2 means FTP, 4 means WebDAV. You can combine the scopes, for example 6 means FTP and WebDAV.
- `password_hashing`, struct. It contains the configuration parameters to be used to generate the password hash. SFTPGo can verify passwords in several formats and uses the `argon2id` algorithm to hash passwords in plain-text before storing them inside the data provider. These options allow you to customize how the hash is generated.
- `argon2_options` struct containing the options for argon2id hashing algorithm. The `memory` and `iterations` parameters control the computational cost of hashing the password. The higher these figures are, the greater the cost of generating the hash and the longer the runtime. It also follows that the greater the cost will be for any attacker trying to guess the password. If the code is running on a machine with multiple cores, then you can decrease the runtime without reducing the cost by increasing the `parallelism` parameter. This controls the number of threads that the work is spread across.
- `memory`, unsigned integer. The amount of memory used by the algorithm (in kibibytes). Default: 65536.
- `iterations`, unsigned integer. The number of iterations over the memory. Default: 1.
- `parallelism`, unsigned 8-bit integer. The number of threads (or lanes) used by the algorithm. Default: 2.
- `update_mode`, integer. Defines how the database will be initialized/updated. 0 means automatically. 1 means manually using the initprovider sub-command.
- **"httpd"**, the configuration for the HTTP server used to serve REST API and to expose the built-in web interface
- `bind_port`, integer. The port used for serving HTTP requests. Set to 0 to disable HTTP server. Default: 8080
- `bind_address`, string. Leave blank to listen on all available network interfaces. Default: "127.0.0.1"
- `templates_path`, string. Path to the HTML web templates. This can be an absolute path or a path relative to the config dir
- `static_files_path`, string. Path to the static files for the web interface. This can be an absolute path or a path relative to the config dir. If both `templates_path` and `static_files_path` are empty the built-in web interface will be disabled
- `backups_path`, string. Path to the backup directory. This can be an absolute path or a path relative to the config dir. We don't allow backups in arbitrary paths for security reasons
- `auth_user_file`, string. Path to a file used to store usernames and passwords for basic authentication. This can be an absolute path or a path relative to the config dir. We support HTTP basic authentication, and the file format must conform to the one generated using the Apache `htpasswd` tool. The supported password formats are bcrypt (`$2y$` prefix) and md5 crypt (`$apr1$` prefix). If empty, HTTP authentication is disabled.
- `certificate_file`, string. Certificate for HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- **"http"**, the configuration for HTTP clients. HTTP clients are used for executing hooks such as the ones used for custom actions, external authentication and pre-login user modifications
- `timeout`, integer. Timeout specifies a time limit, in seconds, for requests.
- `ca_certificates`, list of strings. List of paths to extra CA certificates to trust. The paths can be absolute or relative to the config dir. Adding trusted CA certificates is a convenient way to use self-signed certificates without defeating the purpose of using TLS.
- `skip_tls_verify`, boolean. If enabled, the HTTP client accepts any TLS certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks. This should be used only for testing.
A full example showing the default config (in JSON format) can be found [here](../sftpgo.json).
If you want to use a private host key that uses an algorithm/setting different from the auto generated RSA/ECDSA keys, or more than two private keys, you can generate your own keys and replace the empty `keys` array with something like this:
```json
"host_keys": [
"id_rsa",
"id_ecdsa",
"id_ed25519"
]
```
where `id_rsa`, `id_ecdsa` and `id_ed25519`, in this example, are files containing your generated keys. You can use absolute paths or paths relative to the configuration directory specified via the `--config-dir` serve flag. By default the configuration directory is the working directory.
If you want the default host keys generation in a directory different from the config dir, please specify absolute paths to files named `id_rsa`, `id_ecdsa` or `id_ed25519` like this:
```json
"host_keys": [
"/etc/sftpgo/keys/id_rsa",
"/etc/sftpgo/keys/id_ecdsa",
"/etc/sftpgo/keys/id_ed25519"
]
```
then SFTPGo will try to create `id_rsa`, `id_ecdsa` and `id_ed25519`, if they are missing, inside the directory `/etc/sftpgo/keys`.
The configuration can be read from JSON, TOML, YAML, HCL, envfile and Java properties config files. If your `config-file` flag is set to `sftpgo` (default value), you need to create a configuration file called `sftpgo.json` or `sftpgo.yaml` and so on inside `config-dir`.
## Environment variables
You can also override all the available configuration options using environment variables. SFTPGo will check for environment variables with a name matching the configuration key uppercased and prefixed with `SFTPGO_`. You need to use `__` to traverse a struct.
Let's see some examples:
- To set sftpd `bind_port`, you need to define the env var `SFTPGO_SFTPD__BIND_PORT`
- To set the `execute_on` actions, you need to define the env var `SFTPGO_COMMON__ACTIONS__EXECUTE_ON`. For example `SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download`
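A minimal sketch, assuming you start SFTPGo manually from a shell (the config dir path is illustrative):
```shell
# override the SFTP listen port and the notified actions for this run only
export SFTPGO_SFTPD__BIND_PORT=2222
export SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download
sftpgo serve --config-dir /etc/sftpgo
```
The same variables can be placed in an environment file loaded by your service manager instead of being exported interactively.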

# Google Cloud Storage backend
To connect SFTPGo to Google Cloud Storage you can use the Application Default Credentials (ADC) strategy to try to find your application's credentials automatically, or you can explicitly provide a JSON credentials file that you can obtain from the Google Cloud Console. Take a look [here](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application) for details.
Specifying a different `key_prefix`, you can assign different "folders" of the same bucket to different users. This is similar to a chroot directory for a local filesystem. Each SFTP/SCP user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
You can optionally specify a [storage class](https://cloud.google.com/storage/docs/storage-classes) too. Leave it blank to use the default storage class.
The configured bucket must exist.
This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations.

# Tutorials
Here we collect step-by-step tutorials. SFTPGo users are encouraged to contribute!
- [SFTPGo with PostgreSQL data provider and S3 backend](./postgresql-s3.md)
- [Expose Web Admin and REST API over HTTPS and password protected](./rest-api-https-auth.md)

# SFTPGo with PostgreSQL data provider and S3 backend
This tutorial shows the installation of SFTPGo on Ubuntu 20.04 (Focal Fossa) with PostgreSQL data provider and S3 backend. SFTPGo will run as an unprivileged (non-root) user. We assume that you want to serve a single S3 bucket and you want to assign different "virtual folders" of this bucket to different SFTPGo virtual users.
## Preliminary Note
Before proceeding further you need to have a basic minimal installation of Ubuntu 20.04.
## Install PostgreSQL
Before installing any packages on the Ubuntu system, update and upgrade all packages using the `apt` commands below.
```shell
sudo apt update
sudo apt upgrade
```
Install PostgreSQL with this `apt` command.
```shell
sudo apt -y install postgresql
```
Once the installation is completed, start the PostgreSQL service and enable it to start at boot.
```shell
sudo systemctl start postgresql
sudo systemctl enable postgresql
```
Next, check the PostgreSQL service using the following command.
```shell
systemctl status postgresql
```
## Configure PostgreSQL
PostgreSQL uses roles for user authentication and authorization, much like Unix-style permissions. By default, PostgreSQL creates a new user called `postgres` for basic authentication.
In this step, we will create a new PostgreSQL user for SFTPGo.
Login to the PostgreSQL shell using the command below.
```shell
sudo -i -u postgres psql
```
Next, create a new role `sftpgo` with the password `sftpgo_pg_pwd` using the following query.
```sql
create user "sftpgo" with encrypted password 'sftpgo_pg_pwd';
```
Next, create a new database `sftpgo.db` for the SFTPGo service using the following queries.
```sql
create database "sftpgo.db";
grant all privileges on database "sftpgo.db" to "sftpgo";
```
Exit from the PostgreSQL shell by typing `\q`.
## Install SFTPGo
To install SFTPGo you can use the PPA [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
Start by adding the PPA.
```shell
sudo add-apt-repository ppa:sftpgo/sftpgo
sudo apt-get update
```
Next install SFTPGo.
```shell
sudo apt install sftpgo
```
After installation, SFTPGo should already be running with the default configuration and configured to start automatically at boot; check its status using the following command.
```shell
systemctl status sftpgo
```
## Configure AWS credentials
We assume that you want to serve a single S3 bucket and you want to assign different "virtual folders" of this bucket to different SFTPGo virtual users. In this case it is very convenient to configure a credentials file so that SFTPGo will use it automatically and you don't need to specify the same AWS credentials for each user.
You can manually create the `/var/lib/sftpgo/.aws/credentials` file and write your AWS credentials like this.
```shell
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```
Alternatively, you can install the `AWS CLI` and manage the credentials with this tool.
```shell
sudo apt install awscli
```
and now set your credentials, region, and output format with the following command.
```shell
aws configure
```
Confirm that you can list your bucket contents with the following command.
```shell
aws s3 ls s3://mybucket
```
The AWS CLI will create the credentials file in `~/.aws/credentials`. The SFTPGo service runs using the `sftpgo` system user, whose home directory is `/var/lib/sftpgo`, so you need to copy the credentials file to the sftpgo home directory and assign it the proper permissions.
```shell
sudo mkdir /var/lib/sftpgo/.aws
sudo cp ~/.aws/credentials /var/lib/sftpgo/.aws/
sudo chown -R sftpgo:sftpgo /var/lib/sftpgo/.aws
```
## Configure SFTPGo
Now open the SFTPGo configuration.
```shell
sudo vi /etc/sftpgo/sftpgo.json
```
Search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "postgresql",
"name": "sftpgo.db",
"host": "127.0.0.1",
"port": 5432,
"username": "sftpgo",
"password": "sftpgo_pg_pwd",
...
}
```
This way we set the PostgreSQL connection parameters.
If you want to connect to PostgreSQL over a Unix Domain socket you have to set the value `/var/run/postgresql` for the `host` configuration key instead of `127.0.0.1`.
You can further customize your configuration by adding custom actions and other hooks. A full explanation of all configuration parameters can be found [here](../full-configuration.md).
Next, initialize the data provider with the following command.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2020-10-09T21:07:50.000 INF Initializing provider: "postgresql" config file: "/etc/sftpgo/sftpgo.json"
2020-10-09T21:07:50.000 INF updating database version: 1 -> 2
2020-10-09T21:07:50.000 INF updating database version: 2 -> 3
2020-10-09T21:07:50.000 INF updating database version: 3 -> 4
2020-10-09T21:07:50.000 INF Data provider successfully initialized/updated
```
The default sftpgo systemd service starts after the network target. In this setup it is more appropriate to start it after the PostgreSQL service, so edit the service using the following command.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=postgresql.service
```
Confirm that `sftpgo.service` will start after `postgresql.service` with the next command.
```shell
$ systemctl show sftpgo.service | grep After=
After=postgresql.service systemd-journald.socket system.slice -.mount systemd-tmpfiles-setup.service network.target sysinit.target basic.target
```
Next restart the sftpgo service to use the new configuration and check that it is running.
```shell
sudo systemctl restart sftpgo
systemctl status sftpgo
```
## Add virtual users
The easiest way to add virtual users is to use the built-in Web interface.
You can expose the Web Admin interface over the network by replacing `"bind_address": "127.0.0.1"` in the `httpd` configuration section with `"bind_address": ""`, then apply the change by restarting the SFTPGo service with the following command.
```shell
sudo systemctl restart sftpgo
```
So now open the Web Admin URL.
[http://127.0.0.1:8080/web](http://127.0.0.1:8080/web)
Click `Add` and fill in the user details; the minimum required parameters are:
- `Username`
- `Password` or `Public keys`
- `Permissions`
- `Home Dir` can be empty since we defined a default base dir
- Select `AWS S3 (Compatible)` as storage and then set `Bucket`, `Region` and optionally a `Key Prefix` if you want to restrict the user to a specific virtual folder in the bucket. The specified virtual folder does not need to be pre-created. You can leave `Access Key` and `Access Secret` empty since we defined global credentials for the `sftpgo` user and we use this system user to run the SFTPGo service.
You are done! Now you can connect to your SFTPGo instance using any compatible `sftp` client on port `2022`.
You can mix S3 users with local users, but please be aware that we are running the service as the unprivileged `sftpgo` system user, so if you set storage as `local` for an SFTPGo virtual user then the home directory for this user must be owned by the `sftpgo` system user. If you don't specify a home directory, the default will be `/srv/sftpgo/data/<username>`, which should be appropriate.
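For example, from a client machine (hostname and username are placeholders):
```shell
# connect to the SFTP service listening on the non-standard port 2022
sftp -P 2022 myuser@sftpgo.example.com
```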

# Expose Web Admin and REST API over HTTPS and password protected
This tutorial shows how to expose the SFTPGo web interface and REST API over HTTPS and password protect them.
## Preliminary Note
Before proceeding further you need to have a SFTPGo instance already configured and running.
We assume:
- you are running SFTPGo as service using the dedicated `sftpgo` system user
- the SFTPGo configuration directory is `/etc/sftpgo`
- you are running SFTPGo on Ubuntu 20.04, however these instructions can be easily adapted to other Linux variants.
## Authentication Setup
First install the `htpasswd` tool. We use this tool to create the users for the Web Admin/REST API.
```shell
sudo apt install apache2-utils
```
Create a user for web based authentication.
```shell
sudo htpasswd -B -c /etc/sftpgo/httpauth sftpgoweb
```
If you want to create additional users omit the `-c` option.
```shell
sudo htpasswd -B /etc/sftpgo/httpauth anotheruser
```
Next open the SFTPGo configuration.
```shell
sudo vi /etc/sftpgo/sftpgo.json
```
Search for the `httpd` section and change it as follows.
```json
"httpd": {
"bind_port": 8080,
"bind_address": "",
"templates_path": "templates",
"static_files_path": "static",
"backups_path": "backups",
"auth_user_file": "/etc/sftpgo/httpauth",
"certificate_file": "",
"certificate_key_file": ""
}
```
Setting an empty `bind_address` means that the service will listen on all available network interfaces and so it will be exposed over the network.
Now restart the SFTPGo service to apply the changes.
```shell
sudo systemctl restart sftpgo
```
You are done! Now login to the Web Admin interface using the username and password created above.
## Creation of a Self-Signed Certificate
For demonstration purposes we use a self-signed certificate here. These certificates are easy to make and do not cost money. However, they do not provide all of the security properties that certificates signed by a public Certificate Authority (CA) aim to provide, so you are encouraged to use a certificate signed by a public CA.
When creating a new SSL certificate, you specify its validity period by changing the value 365 (as it appears in the command below) to the preferred number of days. With the value shown, the certificate expires after one year.
```shell
sudo mkdir /etc/sftpgo/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/sftpgo/ssl/sftpgo.key -out /etc/sftpgo/ssl/sftpgo.crt
```
The above command creates both the self-signed SSL certificate and the private key that protects it, and places both of them in the `/etc/sftpgo/ssl` directory. Answer the questions to create the certificate and the key for HTTPS.
Assign the proper permissions to the generated certificates.
```shell
sudo chown -R sftpgo:sftpgo /etc/sftpgo/ssl
```
## HTTPS Setup
Open the SFTPGo configuration.
```shell
sudo vi /etc/sftpgo/sftpgo.json
```
Search for the `httpd` section and change it as follows.
```json
"httpd": {
"bind_port": 8080,
"bind_address": "",
"templates_path": "templates",
"static_files_path": "static",
"backups_path": "backups",
"auth_user_file": "/etc/sftpgo/httpauth",
"certificate_file": "/etc/sftpgo/ssl/sftpgo.crt",
"certificate_key_file": "/etc/sftpgo/ssl/sftpgo.key"
}
```
Now restart the SFTPGo service to apply the changes.
```shell
sudo systemctl restart sftpgo
```
You are done! Now SFTPGo web admin and REST API are exposed over HTTPS and password protected.
You can easily replace the self-signed certificate used here with a properly signed certificate.
The certificate can change frequently if you use something like [Let's Encrypt](https://letsencrypt.org/). SFTPGo supports hot certificate reloading using the following command.
```shell
sudo systemctl reload sftpgo
```

# Keyboard Interactive Authentication
Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.
There are no restrictions on the number of questions asked on a particular authentication stage; there are also no restrictions on the number of stages involving different sets of questions.
To enable keyboard interactive authentication, you must set the absolute path of your authentication program or an HTTP URL using the `keyboard_interactive_auth_hook` key in your configuration file.
The external program can read the following environment variables to get info about the user trying to authenticate:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PASSWORD`, this is the hashed password as stored inside the data provider
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters.
The program must write the questions on its standard output, in a single line, using the following JSON serialized struct:
- `instruction`, string. A short description to show to the user that is trying to authenticate. Can be empty or omitted
- `questions`, list of questions to be asked to the user
- `echos`, list of boolean flags corresponding to the questions (so the lengths of both lists must be the same) and indicating whether the user's reply for a particular question should be echoed on the screen while they are typing: true if it should be echoed, or false if it should be hidden.
- `check_password`, optional integer. Ask exactly one question and set this field to 1 if the expected answer is the user password and you want SFTPGo to check it for you. If the password is correct, the returned response to the program is `OK`. If the password is wrong, the program will be terminated and an authentication error will be returned to the user that is trying to authenticate.
- `auth_result`, integer. Set this field to 1 to indicate successful authentication. 0 is ignored. Any other value means authentication error. If this field is found and it is different from 0 then SFTPGo will not read any other questions from the external program, and it will finalize the authentication.
SFTPGo writes the user answers to the program standard input, one per line, in the same order as the questions.
Please be sure that your program receives the answers for all the issued questions before asking for the next ones.
Keyboard interactive authentication can be chained to the external authentication.
The authentication must finish within 60 seconds.
Let's see a very basic example. Our sample keyboard interactive authentication program will ask for 2 sets of questions and accept the user if the answer to the last question is `answer3`.
```shell
#!/bin/sh
echo '{"questions":["Question1: ","Question2: "],"instruction":"This is a sample for keyboard interactive authentication","echos":[true,false]}'
read ANSWER1
read ANSWER2
echo '{"questions":["Question3: "],"instruction":"","echos":[true]}'
read ANSWER3
if test "$ANSWER3" = "answer3"; then
echo '{"auth_result":1}'
else
echo '{"auth_result":-1}'
fi
```
and here is an example where SFTPGo checks the user password for you:
```shell
#!/bin/sh
echo '{"questions":["Password: "],"instruction":"This is a sample for keyboard interactive authentication","echos":[false],"check_password":1}'
read ANSWER1
if test "$ANSWER1" != "OK"; then
exit 1
fi
echo '{"questions":["One time token: "],"instruction":"","echos":[false]}'
read ANSWER2
if test "$ANSWER2" = "token"; then
echo '{"auth_result":1}'
else
echo '{"auth_result":-1}'
fi
```
If the hook is an HTTP URL then it will be invoked via HTTP POST, multiple times for each login request.
The request body will contain a JSON struct with the following fields:
- `request_id`, string. Unique request identifier
- `username`, string
- `ip`, string
- `password`, string. This is the hashed password as stored inside the data provider
- `answers`, list of string. It will be null for the first request
- `questions`, list of string. It will contain the previously asked questions. It will be null for the first request
The HTTP response code must be 200 and the body must contain the same JSON struct described for the program.
Let's see a basic sample. The configured hook is `http://127.0.0.1:8000/keyIntHookPwd`; as soon as the user tries to log in, SFTPGo makes this HTTP POST request:
```shell
POST /keyIntHookPwd HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Go-http-client/1.1
Content-Length: 189
Content-Type: application/json
Accept-Encoding: gzip
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA=="}
```
As you can see, in this first request `answers` and `questions` are null.
Here is the response that instructs SFTPGo to ask for the user password and to check it:
```shell
HTTP/1.1 200 OK
Date: Tue, 31 Mar 2020 21:15:24 GMT
Server: WSGIServer/0.2 CPython/3.8.2
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Content-Length: 143
{"questions": ["Password: "], "check_password": 1, "instruction": "This is a sample for keyboard interactive authentication", "echos": [false]}
```
The user enters the correct password, so SFTPGo makes a new HTTP POST. Please note that the `request_id` is the same as in the previous request; this time the asked `questions` and the user's `answers` are not null:
```shell
POST /keyIntHookPwd HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Go-http-client/1.1
Content-Length: 233
Content-Type: application/json
Accept-Encoding: gzip
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["OK"],"questions":["Password: "]}
```
Here is the HTTP response that instructs SFTPGo to ask for a new question:
```shell
HTTP/1.1 200 OK
Date: Tue, 31 Mar 2020 21:15:27 GMT
Server: WSGIServer/0.2 CPython/3.8.2
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Content-Length: 66
{"questions": ["Question2: "], "instruction": "", "echos": [true]}
```
As soon as the user answers this question, SFTPGo makes a new HTTP POST request with the user's answer:
```shell
POST /keyIntHookPwd HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Go-http-client/1.1
Content-Length: 239
Content-Type: application/json
Accept-Encoding: gzip
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["answer2"],"questions":["Question2: "]}
```
Here is the final HTTP response that allows the user login:
```shell
HTTP/1.1 200 OK
Date: Tue, 31 Mar 2020 21:15:29 GMT
Server: WSGIServer/0.2 CPython/3.8.2
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Content-Length: 18
{"auth_result": 1}
```
An example keyboard interactive program that allows authentication using [Twilio Authy 2FA](https://www.twilio.com/docs/authy) can be found in the source tree under the [authy](../examples/OTP/authy) directory.

# Logs
The log file is a stream of JSON structs. Each struct has a `sender` field that identifies the log type.
The logs can be divided into the following categories:
- **"app logs"**, internal logs used to debug SFTPGo:
- `sender` string. This is generally the package name that emits the log
- `time` string. Date/time with millisecond precision
- `level` string
- `message` string
- **"transfer logs"**, SFTP/SCP transfer logs:
- `sender` string. `Upload` or `Download`
- `time` string. Date/time with millisecond precision
- `level` string
- `elapsed_ms`, int64. Elapsed time, as milliseconds, for the upload/download
- `size_bytes`, int64. Size, as bytes, of the download/upload
- `username`, string
- `file_path` string
- `connection_id` string. Unique connection identifier
- `protocol` string. `SFTP` or `SCP`
- **"command logs"**, SFTP/SCP command logs:
- `sender` string. `Rename`, `Rmdir`, `Mkdir`, `Symlink`, `Remove`, `Chmod`, `Chown`, `Chtimes`, `Truncate`, `SSHCommand`
- `level` string
- `username`, string
- `file_path` string
- `target_path` string
- `filemode` string. Valid for sender `Chmod` otherwise empty
- `uid` integer. Valid for sender `Chown` otherwise -1
- `gid` integer. Valid for sender `Chown` otherwise -1
- `access_time` datetime as YYYY-MM-DDTHH:MM:SS. Valid for sender `Chtimes` otherwise empty
- `modification_time` datetime as YYYY-MM-DDTHH:MM:SS. Valid for sender `Chtimes` otherwise empty
- `size` int64. Valid for sender `Truncate` otherwise -1
- `ssh_command`, string. Valid for sender `SSHCommand` otherwise empty
- `connection_id` string. Unique connection identifier
- `protocol` string. `SFTP`, `SCP` or `SSH`
- **"http logs"**, REST API logs:
- `sender` string. `httpd`
- `level` string
- `remote_addr` string. IP and port of the remote client
- `proto` string, for example `HTTP/1.1`
- `method` string. HTTP method (`GET`, `POST`, `PUT`, `DELETE` etc.)
- `user_agent` string
- `uri` string. Full uri
- `resp_status` integer. HTTP response status code
- `resp_size` integer. Size in bytes of the HTTP response
- `elapsed_ms` int64. Elapsed time, as milliseconds, to complete the request
- `request_id` string. Unique request identifier
- **"connection failed logs"**, logs for failed attempts to initialize a connection. A connection can fail for an authentication error or other errors such as a client abort or a timeout if the login does not happen in two minutes
- `sender` string. `connection_failed`
- `level` string
- `username`, string. Can be empty if the connection is closed before an authentication attempt
- `client_ip` string.
- `protocol` string. Possible values are `SSH`, `FTP`, `DAV`
- `login_type` string. Can be `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive` or `no_auth_tryed`
- `error` string. Optional error description
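Since each log line is a self-contained JSON struct, you can filter the stream with standard JSON tools. A minimal sketch using `jq`, assuming your log file is at `/var/log/sftpgo/sftpgo.log` (the path depends on your setup):
```shell
# show only transfer logs, keeping the fields most useful for auditing
jq -c 'select(.sender == "Upload" or .sender == "Download")
       | {time, username, file_path, size_bytes, elapsed_ms}' /var/log/sftpgo/sftpgo.log
```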

# Metrics
SFTPGo exposes [Prometheus](https://prometheus.io/) metrics at the `/metrics` HTTP endpoint.
Several counters and gauges are available, for example:
- Total uploads and downloads
- Total upload and download size
- Total upload and download errors
- Total executed SSH commands
- Total SSH command errors
- Number of active connections
- Data provider availability
- Total successful and failed logins using password, public key, keyboard interactive authentication or supported multi-step authentications
- Total HTTP requests served and totals for response code
- Go's runtime details about GC, number of goroutines and OS threads
- Process information like CPU, memory, file descriptor usage and start time
Please check the `/metrics` page for more details.
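For example, assuming the built-in HTTP server listens on the default `127.0.0.1:8080`, you can take a quick look at the exposed metrics with `curl`:
```shell
# fetch the Prometheus metrics and keep only the SFTPGo specific ones
# (the sftpgo_ name prefix is an assumption, adjust the filter to your version)
curl -s http://127.0.0.1:8080/metrics | grep '^sftpgo_'
```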

# Performance
SFTPGo can easily saturate a Gigabit connection on low-end hardware with no special configuration; this is generally enough for most use cases.
For Multi-Gig connections, some performance improvements and comparisons with OpenSSH have been discussed [here](https://github.com/drakkan/sftpgo/issues/69); most of them have been included in the master branch. To summarize:
- In the current state, with all performance improvements applied, SFTP performance is very close to OpenSSH, although CPU usage is higher. SCP performance matches OpenSSH.
- The main bottlenecks are encryption and message authentication, so if you can use a fast cipher with implicit message authentication, such as `aes128-gcm@openssh.com`, you will get a big performance boost.
- The SCP protocol is much simpler than SFTP, so SFTPGo's multi-platform SCP implementation performs better than its SFTP one.
- Load balancing with HAProxy can greatly improve performance if the CPU does not become the bottleneck.
## Benchmark
### Hardware specification
**Server** ||
--- | --- |
OS| Debian 10.2 x64 |
CPU| Ryzen5 3600 |
RAM| 64GB 2400MHz ECC |
Disk| Ramdisk |
Ethernet| Mellanox ConnectX-3 40GbE|
**Client** ||
--- | --- |
OS| Ubuntu 19.10 x64 |
CPU| Threadripper 1920X |
RAM| 64GB 2400MHz ECC |
Disk| Ramdisk |
Ethernet| Mellanox ConnectX-3 40GbE|
### Test configurations
- `Baseline`: SFTPGo version 0.9.6.
- `Devel`: SFTPGo commit b0ed1905918b9dcc22f9a20e89e354313f491734, compiled with Golang 1.14.2. This is basically the same as v1.0.0 as far as performance is concerned.
- `Optimized`: Various [optimizations](#Optimizations-applied) applied on top of `Devel`.
- `Balanced`: Two optimized instances, running on localhost, load balanced by HAProxy 2.1.3.
- `OpenSSH`: OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1d 10 Sep 2019
The server's CPU is in Eco mode; you can expect better results in certain cases with a stronger CPU, especially for multi-stream HAProxy balanced load.
#### Cipher aes128-ctr
The Message Authentication Code (MAC) used is `hmac-sha2-256`.
##### SFTP
Download:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|150|243|319|412|452|
2|267|452|600|740|735|
3|351|637|802|991|1045|
4|414|811|1002|1192|1265|
8|536|1451|1742|1552|1798|
Upload:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|172|273|343|407|426|
2|284|469|595|673|738|
3|368|644|820|881|1090|
4|446|851|1041|1026|1244|
8|605|1210|1368|1273|1820|
##### SCP
Download:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|220|369|525|611|558|
2|437|659|941|1048|856|
3|635|1000|1365|1363|1201|
4|787|1272|1664|1610|1415|
8|1297|2129|2690|2100|1959|
Upload:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|208|312|400|458|508|
2|360|516|647|745|926|
3|476|678|861|935|1254|
4|576|836|1080|1099|1569|
8|857|1161|1416|1433|2271|
#### Cipher aes128-gcm@openssh.com
With this cipher, message authentication is implicit; no SHA256 computation is needed.
##### SFTP
Download:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|332|423|<--|583|443|
2|533|755|<--|970|809|
3|666|1045|<--|1249|1098|
4|762|1276|<--|1461|1351|
8|886|2064|<--|1825|1933|
Upload:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|348|410|<--|527|469|
2|596|729|<--|842|930|
3|778|974|<--|1088|1341|
4|886|1192|<--|1232|1494|
8|1042|1578|<--|1433|1893|
##### SCP
Download:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|776|793|<--|832|578|
2|1343|1415|<--|1435|938|
3|1815|1878|<--|1877|1279|
4|2192|2205|<--|2056|1567|
8|3237|3287|<--|2493|2036|
Upload:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|528|545|<--|608|584|
2|872|849|<--|975|1019|
3|1121|1138|<--|1217|1412|
4|1367|1387|<--|1368|1755|
8|1733|1744|<--|1664|2510|
### Optimizations applied
- AES-CTR optimization of the Go compiler for x86_64: there is a [patch](https://go-review.googlesource.com/c/go/+/51670) that hasn't been merged yet; you can apply it yourself.
### HAProxy configuration
Here is the relevant HAProxy configuration used for the `Balanced` test configuration:
```console
frontend sftp
bind :2222
mode tcp
timeout client 600s
default_backend sftpgo
backend sftpgo
mode tcp
balance roundrobin
timeout connect 10s
timeout server 600s
timeout queue 30s
option tcp-check
tcp-check expect string SSH-2.0-
server sftpgo1 127.0.0.1:2022 check send-proxy-v2 weight 10 inter 10s rise 2 fall 3
server sftpgo2 127.0.0.1:2024 check send-proxy-v2 weight 10 inter 10s rise 2 fall 3
```

# Portable mode
SFTPGo allows you to share a single directory on demand using the `portable` subcommand:
```console
sftpgo portable --help
To serve the current working directory with auto generated credentials simply
use:
$ sftpgo portable
Please take a look at the usage below to customize the serving parameters
Usage:
sftpgo portable [flags]
Flags:
-C, --advertise-credentials If the SFTP/FTP service is
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record
-S, --advertise-service Advertise SFTP/FTP service using
multicast DNS
--allowed-extensions stringArray Allowed file extensions case
insensitive. The format is
/dir::ext1,ext2.
For example: "/somedir::.jpg,.png"
--az-account-key string
--az-account-name string
--az-container string
--az-endpoint string Leave empty to use the default:
"blob.core.windows.net"
--az-key-prefix string Allows to restrict access to the
virtual folder identified by this
prefix and its contents
--az-sas-url string Shared access signature URL
--az-upload-concurrency int How many parts are uploaded in
parallel (default 2)
--az-upload-part-size int The buffer size for multipart uploads
(MB) (default 4)
--az-use-emulator
--denied-extensions stringArray Denied file extensions case
insensitive. The format is
/dir::ext1,ext2.
For example: "/somedir::.jpg,.png"
-d, --directory string Path to the directory to serve.
This can be an absolute path or a path
relative to the current directory
(default ".")
-f, --fs-provider int 0 => local filesystem
1 => AWS S3 compatible
2 => Google Cloud Storage
3 => Azure Blob Storage
--ftpd-cert string Path to the certificate file for FTPS
--ftpd-key string Path to the key file for FTPS
--ftpd-port int 0 means a random unprivileged port,
< 0 disabled (default -1)
--gcs-automatic-credentials int 0 means explicit credentials using
a JSON credentials file, 1 automatic
(default 1)
--gcs-bucket string
--gcs-credentials-file string Google Cloud Storage JSON credentials
file
--gcs-key-prefix string Allows to restrict access to the
virtual folder identified by this
prefix and its contents
--gcs-storage-class string
-h, --help help for portable
-l, --log-file-path string Leave empty to disable logging
-v, --log-verbose Enable verbose logs
-p, --password string Leave empty to use an auto generated
value
-g, --permissions strings User's permissions. "*" means any
permission (default [list,download])
-k, --public-key strings
--s3-access-key string
--s3-access-secret string
--s3-bucket string
--s3-endpoint string
--s3-key-prefix string Allows to restrict access to the
virtual folder identified by this
prefix and its contents
--s3-region string
--s3-storage-class string
--s3-upload-concurrency int How many parts are uploaded in
parallel (default 2)
--s3-upload-part-size int The buffer size for multipart uploads
(MB) (default 5)
-s, --sftpd-port int 0 means a random unprivileged port
-c, --ssh-commands strings SSH commands to enable.
"*" means any supported SSH command
including scp
(default [md5sum,sha1sum,cd,pwd,scp])
-u, --username string Leave empty to use an auto generated
value
--webdav-cert string Path to the certificate file for WebDAV
over HTTPS
--webdav-key string Path to the key file for WebDAV over
HTTPS
--webdav-port int 0 means a random unprivileged port,
< 0 disabled (default -1)
```
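For example, a minimal sketch that shares a local directory with explicit credentials and advertises the service via multicast DNS (all values are illustrative):
```shell
# serve /srv/shared over SFTP on a random unprivileged port, advertised via mDNS
sftpgo portable -d /srv/shared -u demo -p secret -g "list,download,upload" -S
```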
In portable mode, SFTPGo can advertise the SFTP/FTP services and, optionally, the credentials via multicast DNS, so there is a standard way to discover the service and to automatically connect to it.
Here is an example of the advertised SFTP service including credentials as seen using `avahi-browse`:
```console
= enp0s31f6 IPv4 SFTPGo portable 53705 SFTP File Transfer local
hostname = [p1.local]
address = [192.168.1.230]
port = [53705]
txt = ["password=EWOo6pJe" "user=user" "version=0.9.3-dev-b409523-dirty-2019-10-26T13:43:32Z"]
```

# Post-connect hook
This hook is executed as soon as a new connection is established. It notifies the connection's IP address and protocol. Based on the received response, the connection is accepted or rejected. Combining this hook with the [Post-login hook](./post-login-hook.md) you can implement your own blacklist/whitelist of IP addresses (even per protocol).
Please keep in mind that you can easily configure a specialized program such as [Fail2ban](http://www.fail2ban.org/) for brute force protection. Executing a hook for each connection can be heavy.
The `post-connect-hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_CONNECTION_IP`
- `SFTPGO_CONNECTION_PROTOCOL`
If the external command completes with a zero exit status the connection will be accepted otherwise rejected.
Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.
If the hook defines an HTTP URL then this URL will be invoked as HTTP GET with the following query parameters:
- `ip`
- `protocol`
The connection is accepted if the HTTP response code is `200` otherwise rejected.
The HTTP request will use the global configuration for HTTP clients.
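For example, a minimal hook sketch that rejects connections from a single (hypothetical) abusive IP address and accepts everything else:
```shell
#!/bin/sh
# a non-zero exit status rejects the connection
if [ "$SFTPGO_CONNECTION_IP" = "203.0.113.10" ]; then
  exit 1
fi
exit 0
```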

# Post-login hook
This hook is executed after a login or after closing a connection for authentication timeout. By defining an appropriate `post_login_scope` you can get notifications for failed logins, successful logins, or both.
Combining this hook with the [Post-connect hook](./post-connect-hook.md) you can implement your own blacklist/whitelist of IP addresses (even per protocol).
Please keep in mind that you can easily configure a specialized program such as [Fail2ban](http://www.fail2ban.org/) for brute force protection. Executing a hook after each login can be heavy.
The `post-login-hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_LOGIND_USER`, username, can be empty if the connection is closed for authentication timeout
- `SFTPGO_LOGIND_IP`
- `SFTPGO_LOGIND_METHOD`, possible values are `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive` or `no_auth_tryed`
- `SFTPGO_LOGIND_STATUS`, 1 means login OK, 0 login KO
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `login_method`
- `ip`
- `protocol`
- `status`
The HTTP request will use the global configuration for HTTP clients.
The `post_login_scope` supports the following configuration values:
- `0` means notify both failed and successful logins
- `1` means notify failed logins. Connections closed for authentication timeout are notified as failed connections. You will get an empty username in this case
- `2` means notify successful logins
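For example, a minimal hook sketch that appends failed logins to a log file (the log path is illustrative):
```shell
#!/bin/sh
# SFTPGO_LOGIND_STATUS is 1 for a successful login, 0 for a failed one
if [ "$SFTPGO_LOGIND_STATUS" = "0" ]; then
  echo "$(date) failed $SFTPGO_LOGIND_PROTOCOL login for \"$SFTPGO_LOGIND_USER\" from $SFTPGO_LOGIND_IP" >> /var/log/sftpgo-failed-logins.log
fi
exit 0
```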

# Profiling SFTPGo
The built-in profiler lets you collect CPU profiles, traces, allocations and heap profiles that allow you to identify and correct specific bottlenecks.
You can enable the built-in profiler using the `--profiler` command flag.
Profiling data are exposed via HTTP/HTTPS in the format expected by the [pprof](https://github.com/google/pprof/blob/master/doc/README.md) visualization tool. You can find the index page at the URL `/debug/pprof/`.
The following profiles are available, you can obtain them via HTTP GET requests:
- `allocs`, a sampling of all past memory allocations
- `block`, stack traces that led to blocking on synchronization primitives
- `goroutine`, stack traces of all current goroutines
- `heap`, a sampling of memory allocations of live objects. You can specify the `gc` GET parameter to run GC before taking the heap sample
- `mutex`, stack traces of holders of contended mutexes
- `profile`, CPU profile. You can specify the duration in the `seconds` GET parameter. After you get the profile file, use the `go tool pprof` command to investigate the profile
- `threadcreate`, stack traces that led to the creation of new OS threads
- `trace`, a trace of execution of the current program. You can specify the duration in the `seconds` GET parameter. After you get the trace file, use the `go tool trace` command to investigate the trace
For example you can:
- download a 30 seconds CPU profile from the URL `/debug/pprof/profile?seconds=30`
- download a sampling of memory allocations of live objects from the URL `/debug/pprof/heap?gc=1`
- download a sampling of all past memory allocations from the URL `/debug/pprof/allocs`
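For example, assuming the profiler is enabled and the HTTP server listens on `127.0.0.1:8080`:
```shell
# collect a 30 second CPU profile, then open it in the pprof interactive shell
curl -s -o cpu.pprof "http://127.0.0.1:8080/debug/pprof/profile?seconds=30"
go tool pprof cpu.pprof
```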

# REST API
SFTPGo exposes a REST API to manage, backup, and restore users and folders, and to get real time reports of the active connections with the ability to forcibly close a connection.
If quota tracking is enabled in the configuration file, then the used size and number of files are updated each time a file is added/removed. If files are added/removed without using SFTP/SCP, or if you change `track_quota` from `2` to `1`, you can rescan the users' home dir and update the used quota using the REST API.
The REST API can be protected using HTTP basic authentication and exposed via HTTPS. If you need more advanced security features, you can set up a reverse proxy using an HTTP server such as Apache or NGINX.
For example, you can keep SFTPGo listening on localhost and expose it externally configuring a reverse proxy using Apache HTTP Server this way:
```shell
ProxyPass /api/v1 http://127.0.0.1:8080/api/v1
ProxyPassReverse /api/v1 http://127.0.0.1:8080/api/v1
```
and you can add authentication with something like this:
```shell
<Location /api/v1>
AuthType Digest
AuthName "Private"
AuthDigestDomain "/api/v1"
AuthDigestProvider file
AuthUserFile "/etc/httpd/conf/auth_digest"
Require valid-user
</Location>
```
and, of course, you can configure the web server to use HTTPS.
The OpenAPI 3 schema for the exposed API can be found inside the source tree: [openapi.yaml](../httpd/schema/openapi.yaml "OpenAPI 3 specs").
A sample CLI client for the REST API can be found inside the source tree [rest-api-cli](../examples/rest-api-cli) directory.
You can also generate your own REST client in your preferred programming language, or even bash scripts, using an OpenAPI generator such as [swagger-codegen](https://github.com/swagger-api/swagger-codegen) or [OpenAPI Generator](https://openapi-generator.tech/).
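As a quick smoke test you can also call the API directly with `curl`. A hypothetical example, assuming HTTP basic authentication is enabled and the service listens on `127.0.0.1:8080`; check `openapi.yaml` for the exact paths and schemas:
```shell
# list the configured users using the credentials stored in auth_user_file
curl -s -u sftpgoweb:yourpassword "http://127.0.0.1:8080/api/v1/user"
```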

# S3 Compatible Object Storage backends
To connect SFTPGo to AWS, you need to specify credentials, a `bucket` and a `region`. Here is the list of available [AWS regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions). For example, if your bucket is at `Frankfurt`, you have to set the region to `eu-central-1`. You can specify an AWS [storage class](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) too. Leave it blank to use the default AWS storage class. An endpoint is required if you are connecting to a Compatible AWS Storage such as [MinIO](https://min.io/).
AWS SDK has different options for credentials. [More Detail](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html). We support:
1. Providing [Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).
2. Use IAM roles for Amazon EC2
3. Use IAM roles for tasks if your application uses an ECS task definition
So, you need to provide access keys to activate option 1, or leave them blank to use the other ways to specify credentials.
Specifying a different `key_prefix`, you can assign different "folders" of the same bucket to different users. This is similar to a chroot directory for local filesystem. Each SFTP/SCP user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
SFTPGo uses multipart uploads and parallel downloads for storing and retrieving files from S3.
For multipart uploads you can customize the parts size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and S3 then the client should wait for the last parts to be uploaded to S3 after finishing uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.
The configured bucket must exist.
Some SFTP commands don't work over S3:
- `symlink` and `chtimes` will fail
- `chown` and `chmod` are silently ignored
- `truncate` is not supported
- opening a file for both reading and writing at the same time is not supported
- upload resume is not supported
- upload mode `atomic` is ignored since S3 uploads are already atomic
Other notes:
- `rename` is a two-step operation: a server-side copy followed by a deletion. So it is not atomic, unlike on the local filesystem.
- We don't support renaming non-empty directories, since all the contents would have to be renamed too and this could take a long time: for a directory with thousands of files, each file would require an AWS API call.
- For server side encryption, you have to configure the mapped bucket to automatically encrypt objects.
- A local home directory is still required to store temporary files.
- Clients that require advanced filesystem-like features such as `sshfs` are not supported.

# Running SFTPGo as a service
Download a binary SFTPGo [release](https://github.com/drakkan/sftpgo/releases) or a build artifact for the [latest commit](https://github.com/drakkan/sftpgo/actions) or build SFTPGo yourself.
Run the following instructions from the directory that contains the sftpgo binary and the accompanying files.
## Linux
The easiest way to run SFTPGo as a service is to download and install the pre-compiled deb/rpm package or use one of the Arch Linux PKGBUILDs we maintain.
This section describes the procedure to use if you prefer to build SFTPGo yourself or if you want to download and configure a pre-built release as tar.
A `systemd` sample [service](../init/sftpgo.service "systemd service") can be found inside the source tree.
Here are some basic instructions to run SFTPGo as service using a dedicated `sftpgo` system account.
Please run the following commands from the directory where you downloaded/compiled SFTPGo:
```bash
# create the sftpgo user and group
sudo groupadd --system sftpgo
sudo useradd --system \
--gid sftpgo \
--no-create-home \
--home-dir /var/lib/sftpgo \
--shell /usr/sbin/nologin \
--comment "SFTPGo user" \
sftpgo
# create the required directories
sudo mkdir -p /etc/sftpgo \
/var/lib/sftpgo \
/usr/share/sftpgo
# install the sftpgo executable
sudo install -Dm755 sftpgo /usr/bin/sftpgo
# install the default configuration file, edit it if required
sudo install -Dm644 sftpgo.json /etc/sftpgo/
# override some configuration keys using environment variables
sudo sh -c 'echo "SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates" > /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static" >> /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_HTTPD__BACKUPS_PATH=/var/lib/sftpgo/backups" >> /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_DATA_PROVIDER__CREDENTIALS_PATH=/var/lib/sftpgo/credentials" >> /etc/sftpgo/sftpgo.env'
# if you use a file based data provider such as sqlite or bolt consider to set the database path too, for example:
#sudo sh -c 'echo "SFTPGO_DATA_PROVIDER__NAME=/var/lib/sftpgo/sftpgo.db" >> /etc/sftpgo/sftpgo.env'
# also set the provider's PATH as env var to get initprovider to work with SQLite provider:
#export SFTPGO_DATA_PROVIDER__NAME=/var/lib/sftpgo/sftpgo.db
# install static files and templates for the web UI
sudo cp -r static templates /usr/share/sftpgo/
# set files and directory permissions
sudo chown -R sftpgo:sftpgo /etc/sftpgo /var/lib/sftpgo
sudo chmod 750 /etc/sftpgo /var/lib/sftpgo
sudo chmod 640 /etc/sftpgo/sftpgo.json /etc/sftpgo/sftpgo.env
# initialize the configured data provider
# if you want to use MySQL or PostgreSQL you need to create the configured database before running the initprovider command
sudo -E su - sftpgo -m -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
# install the systemd service
sudo install -Dm644 init/sftpgo.service /etc/systemd/system
# start the service
sudo systemctl start sftpgo
# verify that the service is started
sudo systemctl status sftpgo
# automatically start sftpgo on boot
sudo systemctl enable sftpgo
# optional, install the REST API CLI. It requires python-requests to run
sudo install -Dm755 examples/rest-api-cli/sftpgo_api_cli /usr/bin/sftpgo_api_cli
# optional, create shell completion script, for example for bash
sudo sh -c '/usr/bin/sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo'
# optional, create man pages
sudo /usr/bin/sftpgo gen man -d /usr/share/man/man1
```
## macOS
For macOS, a `launchd` sample [service](../init/com.github.drakkan.sftpgo.plist "launchd plist") can be found inside the source tree. The `launchd` plist assumes that SFTPGo has `/usr/local/opt/sftpgo` as base directory.
Here are some basic instructions to run SFTPGo as service, please run the following commands from the directory where you downloaded SFTPGo:
```bash
# create the required directories
sudo mkdir -p /usr/local/opt/sftpgo/init \
/usr/local/opt/sftpgo/var/lib \
/usr/local/opt/sftpgo/usr/share \
/usr/local/opt/sftpgo/var/log \
/usr/local/opt/sftpgo/etc \
/usr/local/opt/sftpgo/bin
# install sftpgo executable
sudo cp sftpgo /usr/local/opt/sftpgo/bin/
# install the launchd service
sudo cp init/com.github.drakkan.sftpgo.plist /usr/local/opt/sftpgo/init/
sudo chown root:wheel /usr/local/opt/sftpgo/init/com.github.drakkan.sftpgo.plist
# install the default configuration file, edit it if required
sudo cp sftpgo.json /usr/local/opt/sftpgo/etc/
# install static files and templates for the web UI
sudo cp -r static templates /usr/local/opt/sftpgo/usr/share/
# initialize the configured data provider
# if you want to use MySQL or PostgreSQL you need to create the configured database before running the initprovider command
sudo /usr/local/opt/sftpgo/bin/sftpgo initprovider -c /usr/local/opt/sftpgo/etc/
# add sftpgo to the launch daemons
sudo ln -s /usr/local/opt/sftpgo/init/com.github.drakkan.sftpgo.plist /Library/LaunchDaemons/com.github.drakkan.sftpgo.plist
# start the service and enable it to start on boot
sudo launchctl load -w /Library/LaunchDaemons/com.github.drakkan.sftpgo.plist
# verify that the service is started
sudo launchctl list com.github.drakkan.sftpgo
# optional, install the REST API CLI. It requires python-requests to run, this python module is not installed by default
sudo cp examples/rest-api-cli/sftpgo_api_cli /usr/local/opt/sftpgo/bin/
```
## Windows
On Windows, you can register SFTPGo as Windows Service. Take a look at the CLI usage to learn how to do this:
```powershell
PS> sftpgo.exe service --help
Manage SFTPGo Windows Service
Usage:
sftpgo service [command]
Available Commands:
install Install SFTPGo as Windows Service
reload Reload the SFTPGo Windows Service sending a "paramchange" request
rotatelogs Signal to the running service to rotate the logs
start Start SFTPGo Windows Service
status Retrieve the status for the SFTPGo Windows Service
stop Stop SFTPGo Windows Service
uninstall Uninstall SFTPGo Windows Service
Flags:
-h, --help help for service
Use "sftpgo service [command] --help" for more information about a command.
```
The `install` subcommand accepts the same flags that are valid for `serve`.
After installing as a Windows Service, please remember to allow network access to the SFTPGo executable using something like this:
```powershell
PS> netsh advfirewall firewall add rule name="SFTPGo Service" dir=in action=allow program="C:\Program Files\SFTPGo\sftpgo.exe"
```
Or through the Windows Firewall GUI.
The Windows installer will register the service and allow network access for it automatically.

# SFTP subsystem mode
In this mode SFTPGo speaks the server side of the SFTP protocol on stdout and expects client requests on stdin.
You can use SFTPGo as subsystem via the `startsubsys` command.
This mode is not intended to be called directly, but from sshd using the `Subsystem` option.
For example adding a line like this one in `/etc/ssh/sshd_config`:
```shell
Subsystem sftp sftpgo startsubsys
```
Command-line flags should be specified in the Subsystem declaration.
```shell
Usage:
sftpgo startsubsys [flags]
Flags:
-d, --base-home-dir string If the user does not exist specify an alternate
starting directory. The home directory for a new
user will be:
<base-home-dir>/<username>
base-home-dir must be an absolute path.
-c, --config-dir string Location for SFTPGo config dir. This directory
should contain the "sftpgo" configuration file
or the configured config-file and it is used as
the base for files with a relative path (eg. the
private keys for the SFTP server, the SQLite
database if you use SQLite as data provider).
This flag can be set using SFTPGO_CONFIG_DIR
env var too. (default ".")
-f, --config-file string Name for SFTPGo configuration file. It must be
the name of a file stored in config-dir not the
absolute path to the configuration file. The
specified file name must have no extension we
automatically load JSON, YAML, TOML, HCL and
Java properties. Therefore if you set "sftpgo"
then "sftpgo.json", "sftpgo.yaml" and so on
are searched.
This flag can be set using SFTPGO_CONFIG_FILE
env var too. (default "sftpgo")
-h, --help help for startsubsys
-j, --log-to-journald Send logs to journald. Only available on Linux.
Use:
$ journalctl -o verbose -f
To see full logs.
If not set, the logs will be sent to the standard
error
-v, --log-verbose Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
(default true)
-p, --preserve-home If the user already exists, the existing home
directory will not be changed
```
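For example, a sketch of an `sshd_config` Subsystem declaration that passes flags (the binary and config dir paths are illustrative):
```shell
Subsystem sftp /usr/bin/sftpgo startsubsys --config-dir /etc/sftpgo --preserve-home
```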
In this mode the `bolt` and `sqlite` providers are not usable, as the same database file cannot be shared among multiple processes; if one of these providers is configured it will automatically be changed to the `memory` provider.
The username and home directory for the logged in user are determined using [user.Current()](https://golang.org/pkg/os/user/#Current).
If the user who is logging in is not found within the SFTPGo data provider, it is added automatically.
You can pre-configure the users inside the SFTPGo data provider; this way you can use a different home directory, restrict permissions, and so on.

# SSH commands
Some SSH commands are implemented directly inside SFTPGo, while for others we use system commands that need to be installed and in your system's `PATH`.
For system commands we have no direct control on file creation/deletion and so there are some limitations:
- we cannot allow them if the target directory contains virtual folders or file extension filters
- system commands work only on the local filesystem
- we cannot avoid leaking real filesystem paths
- quota check is suboptimal
- maximum size restriction on single file is not respected
If quota is enabled and SFTPGo receives a system command, the used size and number of files are checked at the command start, not while new files are created/deleted. While the command is running the number of files is not checked; the remaining size is calculated as the difference between the maximum allowed quota and the used one, and it is checked against the bytes transferred via SSH. The command is aborted if it uploads more bytes than the remaining allowed size calculated at the command start. Note that we only see the bytes that the remote command sends to the local one via SSH. These bytes contain both protocol commands and files, so the size of the files differs from the size transferred via SSH: for example, a command can send compressed files, or a protocol command (a few bytes) could delete a big file. To mitigate these issues, quotas are recalculated at the command end with a full scan of the directory specified for the system command. This can be heavy for big directories. If you need system commands and quotas, consider disabling quota restrictions and periodically updating quota usage yourself using the REST API.
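As a rough sketch, such a periodic recalculation could be triggered via the REST API as shown below. The endpoint path, API version prefix and bearer-token authentication are assumptions here; check the OpenAPI schema shipped with your SFTPGo version for the exact details.
```shell
# hypothetical example: start a quota scan for a single user via the REST API
# the /api/v2 path and the auth scheme are assumptions, verify against your version's OpenAPI schema
curl -X POST -H "Authorization: Bearer $ADMIN_TOKEN" \
  "http://127.0.0.1:8080/api/v2/quotas/users/a-user/scan"
```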
For these reasons system command usage should be limited as much as possible. We currently support the following system commands:
- `git-receive-pack`, `git-upload-pack`, `git-upload-archive`. These commands enable support for Git repositories over SSH. They need to be installed and in your system's `PATH`.
- `rsync`. The `rsync` command needs to be installed and in your system's `PATH`.
At least the following permissions are required to be able to run system commands:
- `list`
- `download`
- `upload`
- `create_dirs`
- `overwrite`
- `delete`
For `rsync` we cannot prevent it from creating symlinks, so if the `create_symlinks` permission is granted we add the `--safe-links` option, if not already set, to the received `rsync` command. This should prevent the creation of symlinks that point outside the home directory.
If the user cannot create symlinks we add the `--munge-links` option, if not already set, to the received `rsync` command. This should make symlinks unusable (but manually recoverable).
SFTPGo supports the following built-in SSH commands:
- `scp`, SFTPGo implements the SCP protocol, so it can be supported for cloud filesystems too and the limitations of the other system commands are avoided. SCP between two remote hosts is supported using the `-3` scp option.
- `md5sum`, `sha1sum`, `sha256sum`, `sha384sum`, `sha512sum`. Useful to check message digests for uploaded files.
- `cd`, `pwd`. Some SFTP clients do not support the SFTP SSH_FXP_REALPATH packet type, so they use `cd` and `pwd` SSH commands to get the initial directory. Currently `cd` does nothing and `pwd` always returns the `/` path.
- `sftpgo-copy`. This is a built-in copy implementation. It allows server side copy for files and directories. The first argument is the source file/directory and the second one is the destination file/directory, for example `sftpgo-copy <src> <dst>`. The command fails if the destination exists. Copying directories that span virtual folders is not supported. Only the local filesystem is supported: recursive copy for Cloud Storage filesystems would require a new request for every file anyway, so a real server side copy is not possible.
- `sftpgo-remove`. This is a built-in remove implementation. It removes single files and recursively removes directories. The first argument is the file/directory to remove, for example `sftpgo-remove <dst>`. Only the local filesystem is supported: recursive remove for Cloud Storage filesystems would require a new request for every file anyway, so a server side remove is not possible. An example invocation of both commands is shown after this list.
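As an illustration, the built-in commands are invoked like any other SSH remote command. The user, host and paths below are placeholders:
```shell
# placeholders: replace user, host and paths with real values
ssh a-user@sftpgo-host sftpgo-copy /vfolder/source-dir /vfolder/dest-dir
ssh a-user@sftpgo-host sftpgo-remove /vfolder/old-dir
```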
The following SSH commands are enabled by default:
- `md5sum`
- `sha1sum`
- `cd`
- `pwd`
- `scp`
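For example, a client can request one of the default-enabled checksum commands in the same way; user, host and path are placeholders:
```shell
# placeholders: compute the MD5 digest of a remote file via the built-in command
ssh a-user@sftpgo-host md5sum /path/to/uploaded-file
```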


@ -1,31 +0,0 @@
# Virtual Folders
A virtual folder is a mapping between an SFTP/SCP virtual path and a filesystem path outside the user home directory.
The specified paths must be absolute and the virtual path cannot be "/"; it must be a subdirectory.
The parent directory of the specified virtual path must exist. SFTPGo will try to automatically create any missing parent directories for the configured virtual folders at user login.
For each virtual folder, the following properties can be configured:
- `mapped_path`, the full absolute path to the filesystem path to expose as virtual folder
- `virtual_path`, the SFTP/SCP absolute path to use to expose the mapped path
- `quota_size`, maximum size allowed, as bytes. 0 means unlimited, -1 means the folder is included in the user quota
- `quota_files`, maximum number of files allowed. 0 means unlimited, -1 means the folder is included in the user quota
For example, if you configure `/tmp/mapped` or `C:\mapped` as the mapped path and `/vfolder` as the virtual path, then SFTP/SCP users can access the mapped path via the `/vfolder` SFTP path.
The same virtual folder, identified by the `mapped_path`, can be shared among users and different folder quota limits for each user are supported.
Folder quota limits can also be included inside the user quota but in this case the folder is considered "private" and sharing it with other users will break user quota calculation.
You don't need to create virtual folders inside the data provider before associating them with users: any missing virtual folder is created automatically when you add/update a user. You only have to create the folder on the filesystem.
Using the REST API you can:
- monitor folders quota usage
- scan quota for folders
- inspect the relationships among users and folders
- delete a virtual folder. SFTPGo removes the folder from the data provider; no files are deleted
If you remove a folder from the data provider, any user relationships are cleared. If the deleted folder was included inside a user quota, you need to run a user quota scan to update that quota. An orphan virtual folder is not deleted automatically, since adding it again later would require a quota scan that could be quite expensive; you can easily list orphan folders using the REST API and delete them if they are no longer needed, as shown below.
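A minimal sketch of those REST calls follows. The endpoint paths, the folder identifier (name vs. mapped path) and the auth scheme are assumptions; check the OpenAPI schema for your SFTPGo version.
```shell
# hypothetical endpoints, verify against your version's OpenAPI schema
# list the configured virtual folders (including orphans)
curl -H "Authorization: Bearer $ADMIN_TOKEN" "http://127.0.0.1:8080/api/v2/folders"
# start a quota scan for a specific folder
curl -X POST -H "Authorization: Bearer $ADMIN_TOKEN" \
  "http://127.0.0.1:8080/api/v2/quotas/folders/a-folder/scan"
```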
Overlapping virtual paths are not allowed for the same user; overlapping mapped paths are allowed only if quota tracking is globally disabled inside the configuration file (`track_quota` must be set to `0`).
Virtual folders are supported for local filesystem only.


@ -1,8 +0,0 @@
# Web Admin
You can easily build your own interface using the exposed REST API. SFTPGo also provides a very basic built-in web interface that allows you to manage users and connections.
With the default `httpd` configuration, the web admin is available at the following URL:
[http://127.0.0.1:8080/web](http://127.0.0.1:8080/web)
The web interface can be protected using HTTP basic authentication and exposed via HTTPS. If you need more advanced security features, you can set up a reverse proxy as explained for the [REST API](./rest-api.md).
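A quick way to check that the built-in web admin is reachable, using the default URL shown above:
```shell
# returns only the HTTP response headers of the web admin if httpd is enabled with default settings
curl -I "http://127.0.0.1:8080/web"
```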


@ -1,31 +0,0 @@
# WebDAV
The experimental `WebDAV` support can be enabled by setting a `bind_port` inside the `webdavd` configuration section.
Each user has their own path like `http/s://<SFTPGo ip>:<WebDAV port>/<username>` and must authenticate using password credentials.
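For example, a directory listing can be requested with a WebDAV `PROPFIND`. The host, port and credentials below are placeholders, adjust them to your `webdavd` configuration:
```shell
# placeholders: replace host, port, username and password with real values
curl --user "a-user:a-password" -X PROPFIND -H "Depth: 1" "http://127.0.0.1:8090/a-user/"
```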
WebDAV is quite a different protocol from SCP/FTP: there is no session concept, and each command is a separate HTTP request that must be authenticated. To improve performance, SFTPGo caches authenticated users, so it doesn't need to do a data provider query and a password check for each request.
The user caching configuration allows you to set:
- `expiration_time` in minutes. If a user is cached for more than the specified minutes it will be removed from the cache and a new data provider query will be performed. Please note that the `last_login` field will not be updated and `external_auth_hook`, `pre_login_hook` and `check_password_hook` will not be executed if the user is obtained from the cache.
- `max_size`. Maximum number of users to cache. When this limit is reached the user with the oldest expiration date is removed from the cache. 0 means no limit; however, the cache size cannot exceed the number of users, so if you have a small number of users you can set this value to 0.
Users are automatically removed from the cache after an update/delete.
The WebDAV protocol requires the MIME type for each file. SFTPGo will first try to guess the MIME type by extension. If this fails it will send a `HEAD` request for Cloud backends and, as a last resort, it will try to guess the MIME type by reading the first 512 bytes of the file. This may slow down the directory listing, especially for Cloud-based backends, if you have directories containing many files with unregistered extensions. To mitigate this problem, you can enable caching of MIME types so that the MIME type detection is done only once.
The MIME types caching configuration allows you to set the maximum number of MIME types to cache. Once the cache reaches the configured maximum size, no new MIME types are added. The MIME types cache is a non-persistent in-memory cache. If you need a persistent cache, add your MIME types to `/etc/mime.types` on Linux or inside the registry on Windows.
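As a sketch, the cache limits can also be tuned via SFTPGo's environment variable overrides. The exact key paths below are assumptions and may differ between versions, so check the full configuration documentation for your release:
```shell
# assumed key paths, verify against your version's configuration docs before relying on them
export SFTPGO_WEBDAVD__CACHE__USERS__EXPIRATION_TIME=60   # minutes
export SFTPGO_WEBDAVD__CACHE__USERS__MAX_SIZE=100
export SFTPGO_WEBDAVD__CACHE__MIME_TYPES__MAX_SIZE=1000
```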
WebDAV should work as expected for most use cases but there are some minor issues and some missing features.
Known issues:
- removing a directory tree on Cloud Storage backends could generate a `not found` error when removing the last (virtual) directory. This happens if the client walks the directory tree itself and removes files and directories one by one instead of issuing a single remove command
- the used [WebDAV library](https://pkg.go.dev/golang.org/x/net/webdav?tab=doc) asks to open a file to execute a `stat` and sometimes reads some bytes to find the content type. Stat calls are executed before and after a download too, so to be able to properly list a directory you need to grant both `list` and `download` permissions, and to be able to upload files you need to grant both `list` and `upload` permissions
- the used `WebDAV library` does not always return a proper error code/message; most of the time it simply returns `Method not Allowed`. I'll try to improve the library error codes in the future
- if an object within a directory cannot be accessed, for example due to OS permission issues or because the mapped path for a virtual folder is missing, the directory listing will fail. In SFTP/FTP the directory listing will succeed and you'll only get an error if you try to access the problematic file/directory
We plan to add [Dead Properties](https://tools.ietf.org/html/rfc4918#section-3) support in future releases. We need a design decision here: probably the best solution is to store dead properties inside the data provider, but this could greatly increase its size. Alternatively, we could store them on disk for the local filesystem and as metadata for Cloud Storage, but this means a separate `HEAD` request is needed to retrieve dead properties for an S3 file; for big folders this would generate a lot of requests to the Cloud Provider, so I don't like this solution. Another option is to expose a hook and allow you to implement dead properties outside SFTPGo.
If you find any other quirks or problems, please let us know by opening a GitHub issue, thank you!


@ -29,7 +29,7 @@ The response is something like this:
{"message":"User created successfully.","user":{"id":xxxxxxxx},"success":true}
```
Save the user id somewhere and add a reference to the matching SFTPGo account.
Save the user id somewhere and add a reference to the matching SFTPGo account. You could also store this ID in the `additional_info` SFTPGo user field.
After this step you can use the Authy app installed on your phone to generate TOTP codes.
@ -45,7 +45,7 @@ curl -i "https://api.authy.com/protected/json/verify/${TOKEN}/${AUTHY_ID}" \
So inside your hook you need to check:
- the HTTP response code for the verify request, it must be `200`
- the JSON reponse body, it must contains the key `success` with the value `true` (as string)
- the JSON response body, it must contain the key `success` with the value `true` (as string)
If these conditions are met the token is valid and you allow the user to login.
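A minimal shell sketch of these two checks, assuming `curl` and `jq` are available and that `TOKEN`, `AUTHY_ID` and `AUTHY_API_KEY` are provided by your hook environment (the `X-Authy-API-Key` header name is an assumption, verify it against the Authy API docs):
```shell
#!/bin/sh
# minimal sketch: deny on any non-200 status or a "success" value other than "true"
STATUS=$(curl -s -o /tmp/authy_verify.json -w '%{http_code}' \
  -H "X-Authy-API-Key: ${AUTHY_API_KEY}" \
  "https://api.authy.com/protected/json/verify/${TOKEN}/${AUTHY_ID}")
SUCCESS=$(jq -r '.success' /tmp/authy_verify.json)
if [ "$STATUS" = "200" ] && [ "$SUCCESS" = "true" ]; then
  exit 0   # token is valid, allow the login
fi
exit 1     # deny the login
```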
@ -56,3 +56,5 @@ We provide the following examples:
- [Check password hook](./checkpwd/README.md) for 2FA using a password consisting of a fixed string and a One Time Token.
Please note that these are sample programs not intended for production use; you should write your own hook based on them, and you should prefer HTTP-based hooks if performance is a concern.
:warning: SFTPGo also has built-in 2FA support.

Some files were not shown because too many files have changed in this diff.