Compare commits

46 commits
main...2.3.x

Author SHA1 Message Date
Nicola Murino
66c14bebd8
set version to 2.3.6
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-12 17:41:17 +02:00
Nicola Murino
b4ef24e23d
FTPD: fix APPE to new files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-12 11:45:29 +02:00
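
A minimal Go sketch of the idea (hypothetical helper, not the actual patch): an APPE upload must also succeed when the target file does not exist yet, so the file is opened with O_CREATE in addition to O_APPEND.

package sketch

import "os"

// openForAppend opens the APPE target for writing, creating it when it
// does not exist yet.
func openForAppend(name string) (*os.File, error) {
	return os.OpenFile(name, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0o644)
}
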
Nicola Murino
e617dc9c0a
azblob: use UUIDs as block IDs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-10-07 07:38:12 +02:00
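
For context: Azure block blob block IDs must be base64-encoded, unique within the blob, and all of the same length; a UUID satisfies these constraints. A minimal sketch, assuming the github.com/google/uuid package:

package sketch

import (
	"encoding/base64"

	"github.com/google/uuid"
)

// newBlockID returns a base64-encoded UUID suitable as an Azure block ID:
// unique and of fixed length.
func newBlockID() string {
	return base64.StdEncoding.EncodeToString([]byte(uuid.New().String()))
}
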
Nicola Murino
80fb56bc48
WebClient: validate PDF files before rendering
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-23 16:53:15 +02:00
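
A minimal sketch of such a pre-render check (illustrative only): every valid PDF file starts with the %PDF- magic bytes.

package sketch

import "bytes"

// looksLikePDF reports whether data begins with the PDF magic bytes.
func looksLikePDF(data []byte) bool {
	return bytes.HasPrefix(data, []byte("%PDF-"))
}
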
Nicola Murino
b65fc0bdc2
ftpd: return relative paths for NLST responses
Fixes #993

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-17 16:47:29 +02:00
Nicola Murino
cb98f8fd6d
set version to 2.3.5
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-16 19:28:17 +02:00
Nicola Murino
cbef217cfa
WebClient: improve HTML escaping
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-12 20:09:14 +02:00
Nicola Murino
4a34ae6662
WebClient: properly escape file/directory names
Fixes #981

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-12 12:14:40 +02:00
Nicola Murino
3ff7acc1b4
CI: add commit info in vendored sources
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-08 17:33:42 +02:00
Nicola Murino
fc648454df
CI: use Docker to build x86_64 Linux packages
therefore Linux packages are compiled with Docker for all supported
architectures

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-08 13:41:00 +02:00
Nicola Murino
836b36b816
WebClient/HTTP API: ensure to check home dir, when needed, in multi-node setups
Behind a load balancer with no sticky sessions enabled, it is not enough to check
the home dir only when the client logs in

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-08 11:55:08 +02:00
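
A minimal Go sketch with hypothetical names (not the actual SFTPGo code): the home-dir check runs per request, since without sticky sessions the node serving a request may not be the node that handled the login.

package sketch

import (
	"net/http"
	"os"
)

type webUser struct{ HomeDir string }

// ensureHomeDir makes sure the user's home directory exists before each
// request is served, not only at login time.
func ensureHomeDir(user *webUser, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if err := os.MkdirAll(user.HomeDir, 0o700); err != nil {
			http.Error(w, "internal error", http.StatusInternalServerError)
			return
		}
		next.ServeHTTP(w, r)
	})
}
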
Nicola Murino
2d19817431
CI: improve workflows
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-03 19:02:33 +02:00
Nicola Murino
df84c42e7e
set version to 2.3.4
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-09-01 12:28:13 +02:00
Nicola Murino
c304143eb3
MFA: allow recovery codes only if two-factor auth is enabled
Fixes #965

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-31 10:07:45 +02:00
Nicola Murino
d2acc6f5c1
FTP: always generate a defender event if the client does not authenticate
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-30 17:50:12 +02:00
Nicola Murino
531ed852f5
ftpd: prefix MLST entries with a space
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-21 19:28:22 +02:00
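
Per RFC 3659, the entry line of an MLST reply, between the opening "250-" line and the closing "250 End" line, must begin with a single space. An illustrative sketch:

package sketch

import (
	"fmt"
	"io"
)

// writeMLSTResponse writes an RFC 3659 MLST reply; note the mandatory
// leading space before the facts line.
func writeMLSTResponse(w io.Writer, path, facts string) {
	fmt.Fprintf(w, "250- Listing %s\r\n", path)
	fmt.Fprintf(w, " %s %s\r\n", facts, path)
	fmt.Fprint(w, "250 End\r\n")
}
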
Nicola Murino
303a723b04
backport from main
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-17 22:23:42 +02:00
Nicola Murino
88bfdb9910
OIDC: allow to get the role field from a sub-struct
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-11 13:02:06 +02:00
Nicola Murino
d3a523ba13
docker: add a variant with official plugins included
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-11 12:57:55 +02:00
Nicola Murino
665016ed1e
set version to 2.3.3
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-05 09:47:00 +02:00
Nicola Murino
97d5680d1e
azblob: fix SAS URL with embedded container name
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-01 21:52:32 +02:00
Nicola Murino
e7866047aa
allow users logged in via OIDC to edit their profile
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-01 21:49:53 +02:00
Nicola Murino
5f313cc6be
macOS: add config file search path
this way the default config file is used in the brew package if no config file
is set

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-29 17:55:01 +02:00
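
A minimal sketch of adding a config search path, assuming spf13/viper and the usual Homebrew prefix (both assumptions, not confirmed from the patch):

package sketch

import "github.com/spf13/viper"

// setConfigSearchPaths registers the locations searched for the config
// file; the extra path lets the default config file shipped with the
// brew package be found when none is set explicitly.
func setConfigSearchPaths() {
	viper.SetConfigName("sftpgo")
	viper.AddConfigPath(".")
	viper.AddConfigPath("/usr/local/etc/sftpgo")
}
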
Nicola Murino
3c2c703408
user templates: apply placeholders also for start directory
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-27 19:09:54 +02:00
Nicola Murino
78a399eed4
download as zip: improve filename
include username and also filename/directory name if the user downloads
a single file/directory

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-26 17:53:04 +02:00
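
A sketch of the naming rule described above; the function and the exact name format are hypothetical:

package sketch

import (
	"fmt"
	"path"
)

// zipFileName includes the username and, for a single selected item,
// that item's base name.
func zipFileName(username string, items []string) string {
	if len(items) == 1 {
		return fmt.Sprintf("sftpgo-%s-%s.zip", username, path.Base(items[0]))
	}
	return fmt.Sprintf("sftpgo-%s-download.zip", username)
}
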
Nicola Murino
e6d434654d
backport from main
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-24 08:56:31 +02:00
Nicola Murino
d34446e6e9
web client: add HTML5 player
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-23 16:30:27 +02:00
Nicola Murino
2da19ef233
backport OIDC related changes from main
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-23 15:31:57 +02:00
Nicola Murino
b34bc2b818
add license header to source files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-18 13:43:25 +02:00
Nicola Murino
378995147b
try to better highlight donation and sponsorship options ...
... and to better explain why they are required.

Please don't say "someone else will help the project, I'll just use it"

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-16 20:29:10 +02:00
Nicola Murino
6b995db864
oidc: allow to configure oauth2 scopes
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-16 19:25:04 +02:00
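
A sketch, assuming golang.org/x/oauth2 underneath; the default scope list is an assumption:

package sketch

import "golang.org/x/oauth2"

// oauth2Config builds the OAuth2 config with configurable scopes instead
// of a fixed list.
func oauth2Config(clientID, clientSecret, redirectURL string, scopes []string) oauth2.Config {
	if len(scopes) == 0 {
		scopes = []string{"openid", "profile", "email"}
	}
	return oauth2.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		RedirectURL:  redirectURL,
		Scopes:       scopes,
	}
}
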
Nicola Murino
371012a46e
backport some fixes from main
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-15 20:09:06 +02:00
Nicola Murino
d3d788c8d0
s3: improve rename performance
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-30 18:25:40 +02:00
maximethebault
756b122ab8
S3: Fix timeout error when renaming large files (#899)
Remove AWS SDK Transport ResponseHeaderTimeout (finer-grained timeouts are already handled by the callers)
Lower the threshold for MultipartCopy (5GB -> 500MB) to improve copy performance and reduce the chance of hitting the single-part copy timeout

Fixes #898

Signed-off-by: Maxime Thébault <contact@maximethebault.me>
2022-06-30 10:25:04 +02:00
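
The threshold logic as an illustrative sketch (numbers from the commit message, code hypothetical): above the threshold the copy is split into multipart UploadPartCopy requests, so each part gets its own timeout instead of one large CopyObject call that can time out as a whole.

package sketch

// multipartCopyThreshold was lowered from 5 GB to 500 MB.
const multipartCopyThreshold = 500 * 1024 * 1024

func useMultipartCopy(sizeBytes int64) bool {
	return sizeBytes > multipartCopyThreshold
}
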
Nicola Murino
e244ba37b2
config: fix replace from env vars for some sub-lists
ensure configuration from files is merged with configuration from env vars

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-28 19:17:16 +02:00
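
A sketch of the usual env-override wiring, assuming spf13/viper; the SFTPGO_ prefix and the __ delimiter match the SFTPGO_DATA_PROVIDER__* variables used later on this page:

package sketch

import (
	"strings"

	"github.com/spf13/viper"
)

// bindEnv lets variables such as SFTPGO_DATA_PROVIDER__DRIVER override
// values loaded from the config file, including nested sections.
func bindEnv() {
	viper.SetEnvPrefix("sftpgo")
	viper.SetEnvKeyReplacer(strings.NewReplacer(".", "__"))
	viper.AutomaticEnv()
}
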
Nicola Murino
5610b98d19
fix get branding from env
Fixes #895

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-28 10:46:25 +02:00
Nicola Murino
b3ca20b5e6
dataprovider: fix sql tables prefix handling
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-24 12:26:43 +02:00
Nicola Murino
d0b6ca8d2f
backup: include folders set on groups
Fixes #885

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-21 14:13:25 +02:00
Nicola Murino
550158ff4b
fix database reset
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-13 19:40:24 +02:00
Nicola Murino
14a3803c8f
OpenAPI schema: improve compatibility with some generators
Fixes #875

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-11 19:00:04 +02:00
Nicola Murino
ca4da2f64e
set version to 2.3.1
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-10 18:42:13 +02:00
Nicola Murino
049c2b7430
mysql: groups is a reserved keyword since MySQL 8.0.2
add mysql to CI

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-10 17:36:26 +02:00
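
For illustration, the table name must be backtick-quoted in generated SQL since "groups" became a reserved word in MySQL 8.0.2 (hypothetical query, not the actual SFTPGo statement):

package sketch

// selectGroupsQuery quotes the reserved identifier with backticks.
const selectGroupsQuery = "SELECT id,name,description FROM `groups` WHERE name = ?"
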
Nicola Murino
7fd5558400
parse IP proxy header also if listening on UNIX domain socket
Fixes #867

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-09 09:48:39 +02:00
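
A minimal sketch with a hypothetical helper; the header name and the fallback behavior are assumptions:

package sketch

import (
	"net"
	"net/http"
	"strings"
)

// clientIP prefers the proxy header: on a UNIX domain socket listener
// RemoteAddr is not a usable host:port pair.
func clientIP(r *http.Request) string {
	if fwd := r.Header.Get("X-Forwarded-For"); fwd != "" {
		return strings.TrimSpace(strings.Split(fwd, ",")[0])
	}
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return r.RemoteAddr
	}
	return host
}
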
Nicola Murino
b60255752f
web UIs: fix date formatting on Safari
Fixes #869

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-09 09:47:02 +02:00
Nicola Murino
37f79650c8
APT and YUM repos are now available
This is possible thanks to Oregon State University's free
mirroring service

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-09 09:46:57 +02:00
Nicola Murino
8988d6542b
create branch 2.3.x
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-06 19:01:24 +02:00
600 changed files with 72807 additions and 112641 deletions

@@ -1,31 +0,0 @@
freebsd_task:
name: FreeBSD
matrix:
- name: FreeBSD 14.1
freebsd_instance:
image_family: freebsd-14-1
pkginstall_script:
- pkg update -f
- pkg install -y go123
- pkg install -y git
setup_script:
- ln -s /usr/local/bin/go123 /usr/local/bin/go
- pw groupadd sftpgo
- pw useradd sftpgo -g sftpgo -w none -m
- mkdir /home/sftpgo/sftpgo
- cp -R . /home/sftpgo/sftpgo
- chown -R sftpgo:sftpgo /home/sftpgo/sftpgo
compile_script:
- su sftpgo -c 'cd ~/sftpgo && go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo'
- su sftpgo -c 'cd ~/sftpgo/tests/eventsearcher && go build -trimpath -ldflags "-s -w" -o eventsearcher'
- su sftpgo -c 'cd ~/sftpgo/tests/ipfilter && go build -trimpath -ldflags "-s -w" -o ipfilter'
check_script:
- su sftpgo -c 'cd ~/sftpgo && ./sftpgo initprovider && ./sftpgo resetprovider --force'
test_script:
- su sftpgo -c 'cd ~/sftpgo && go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 20m ./... -coverprofile=coverage.txt -covermode=atomic'

@@ -1,108 +0,0 @@
name: Open Source Bug Report
description: "Submit a report and help us improve SFTPGo"
title: "[Bug]: "
labels: ["bug"]
body:
- type: markdown
attributes:
value: |
### 👍 Thank you for contributing to our project!
Before asking for help please check the [support policy](https://github.com/drakkan/sftpgo#support-policy).
If you are a [commercial user](https://sftpgo.com/) or a project sponsor please contact us using the dedicated [email address](mailto:support@sftpgo.com).
- type: checkboxes
id: before-posting
attributes:
label: "⚠️ This issue respects the following points: ⚠️"
description: All conditions are **required**.
options:
- label: This is a **bug**, not a question or a configuration issue.
required: true
- label: This issue is **not** already reported on GitHub _(I've searched it)_.
required: true
- type: textarea
id: bug-description
attributes:
label: Bug description
description: |
Provide a description of the bug you're experiencing.
Don't just expect someone to guess what your specific problem is: provide full details.
validations:
required: true
- type: textarea
id: reproduce
attributes:
label: Steps to reproduce
description: |
Describe the steps to reproduce the bug.
The better your description is, the faster you'll get an _(accurate)_ answer.
value: |
1.
2.
3.
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Expected behavior
description: Describe what you expected to happen instead.
validations:
required: true
- type: input
id: version
attributes:
label: SFTPGo version
validations:
required: true
- type: input
id: data-provider
attributes:
label: Data provider
validations:
required: true
- type: dropdown
id: install-method
attributes:
label: Installation method
description: |
Select the installation method you've used.
_Describe the method in the "Additional info" section if you chose "Other"._
options:
- "Community Docker image"
- "Community Deb package"
- "Community RPM package"
- "Other"
validations:
required: true
- type: textarea
attributes:
label: Configuration
description: "Describe your customizations to the configuration: both config file changes and overrides via environment variables"
value: config
validations:
required: true
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
id: usecase
attributes:
label: What are you using SFTPGo for?
description: We'd like to better understand your SFTPGo use case
multiple: true
options:
- "Private user, home usecase (home backup/VPS)"
- "Professional user, 1 person business"
- "Small business (3-person firm with file exchange?)"
- "Medium business"
- "Enterprise"
validations:
required: true
- type: textarea
id: additional-info
attributes:
label: Additional info
description: Any additional information related to the issue.

@@ -1,9 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Commercial Support
url: https://sftpgo.com/
about: >
If you need professional support, so that your reports are prioritized and resolved more quickly.
- name: GitHub Community Discussions
url: https://github.com/drakkan/sftpgo/discussions
about: Please ask and answer questions here.

@@ -1,42 +0,0 @@
name: 🚀 Feature request
description: Suggest an idea for SFTPGo
labels: ["suggestion"]
body:
- type: textarea
attributes:
label: Is your feature request related to a problem? Please describe.
description: A clear and concise description of what the problem is.
validations:
required: false
- type: textarea
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: false
- type: dropdown
id: usecase
attributes:
label: What are you using SFTPGo for?
description: We'd like to better understand your SFTPGo use case
multiple: true
options:
- "Private user, home usecase (home backup/VPS)"
- "Professional user, 1 person business"
- "Small business (3-person firm with file exchange?)"
- "Medium business"
- "Enterprise"
validations:
required: true
- type: textarea
attributes:
label: Additional context
description: Add any other context or screenshots about the feature request here.
validations:
required: false

@@ -1,5 +0,0 @@
# Checklist for Pull Requests
- [ ] Have you signed the [Contributor License Agreement](https://sftpgo.com/cla.html)?
---

@@ -1,11 +1,11 @@
version: 2
updates:
#- package-ecosystem: "gomod"
# directory: "/"
# schedule:
# interval: "weekly"
# open-pull-requests-limit: 2
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
- package-ecosystem: "docker"
directory: "/"

@@ -1,36 +0,0 @@
name: "Code scanning - action"
on:
push:
pull_request:
schedule:
- cron: '30 1 * * 6'
jobs:
CodeQL-Build:
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.22'
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v3
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3

@@ -2,75 +2,107 @@ name: CI
on:
push:
branches: [main]
branches: [2.3.x]
pull_request:
permissions:
id-token: write
contents: read
jobs:
test-deploy:
name: Test and deploy
runs-on: ${{ matrix.os }}
strategy:
matrix:
go: ['1.23']
go: [1.18]
os: [ubuntu-latest, macos-latest]
upload-coverage: [true]
include:
- go: 1.18
os: windows-latest
upload-coverage: false
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go }}
- name: Build for Linux/macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: |
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter
cd -
./sftpgo initprovider
./sftpgo resetprovider --force
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --abbrev=8 --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$FILE_VERSION = $LATEST_TAG.substring(1) + "." + $COMMITS_FROM_TAG
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher.exe
cd ../..
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter.exe
cd ../..
mkdir arm64
$Env:CGO_ENABLED='0'
$Env:GOOS='windows'
$Env:GOARCH='arm64'
go-winres simply --arch arm64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
mkdir x86
$Env:GOARCH='386'
go-winres simply --arch 386 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
Remove-Item Env:\CGO_ENABLED
Remove-Item Env:\GOOS
Remove-Item Env:\GOARCH
- name: Run test cases using SQLite provider
run: go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic
run: go test -v -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5
if: ${{ matrix.upload-coverage }}
uses: codecov/codecov-action@v3
with:
files: ./coverage.txt
file: ./coverage.txt
fail_ci_if_error: false
token: ${{ secrets.CODECOV_TOKEN }}
- name: Run test cases using bolt provider
run: |
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 2m ./internal/config -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 5m ./internal/common -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 5m ./internal/httpd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 8m ./internal/sftpd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 5m ./internal/ftpd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 5m ./internal/webdavd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 2m ./internal/telemetry -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 2m ./internal/mfa -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 2m ./internal/command -covermode=atomic
go test -v -p 1 -timeout 2m ./config -covermode=atomic
go test -v -p 1 -timeout 5m ./common -covermode=atomic
go test -v -p 1 -timeout 5m ./httpd -covermode=atomic
go test -v -p 1 -timeout 8m ./sftpd -covermode=atomic
go test -v -p 1 -timeout 5m ./ftpd -covermode=atomic
go test -v -p 1 -timeout 5m ./webdavd -covermode=atomic
go test -v -p 1 -timeout 2m ./telemetry -covermode=atomic
go test -v -p 1 -timeout 2m ./mfa -covermode=atomic
go test -v -p 1 -timeout 2m ./command -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: bolt
SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'
- name: Run test cases using memory provider
run: go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 15m ./... -covermode=atomic
run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
@@ -91,124 +123,8 @@ jobs:
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
- name: Upload build artifact
if: startsWith(matrix.os, 'ubuntu-') != true
uses: actions/upload-artifact@v4
with:
name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
path: output
test-deploy-windows:
name: Test and deploy Windows
environment: signing
runs-on: windows-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Azure login
if: ${{ github.event_name != 'pull_request' }}
uses: azure/login@v2
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.23'
- name: Build
run: |
$GIT_COMMIT = (git describe --always --abbrev=8 --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$FILE_VERSION = $LATEST_TAG.substring(1) + "." + $COMMITS_FROM_TAG
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0 with additional terms" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o sftpgo.exe
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher.exe
cd ../..
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter.exe
cd ../..
mkdir arm64
$Env:CGO_ENABLED='0'
$Env:GOOS='windows'
$Env:GOARCH='arm64'
go-winres simply --arch arm64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0 with additional terms" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
mkdir x86
$Env:GOARCH='386'
go-winres simply --arch 386 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0 with additional terms" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
Remove-Item Env:\CGO_ENABLED
Remove-Item Env:\GOOS
Remove-Item Env:\GOARCH
- name: Sign binaries
if: ${{ github.event_name != 'pull_request' }}
uses: azure/trusted-signing-action@v0.5.0
with:
endpoint: https://eus.codesigning.azure.net/
trusted-signing-account-name: nicola
certificate-profile-name: SFTPGo
files: |
${{ github.workspace }}\sftpgo.exe
${{ github.workspace }}\arm64\sftpgo.exe
${{ github.workspace }}\x86\sftpgo.exe
file-digest: SHA256
timestamp-rfc3161: http://timestamp.acs.microsoft.com
timestamp-digest: SHA256
exclude-environment-credential: true
exclude-workload-identity-credential: true
exclude-managed-identity-credential: true
exclude-shared-token-cache-credential: true
exclude-visual-studio-credential: true
exclude-visual-studio-code-credential: true
exclude-azure-cli-credential: false
exclude-azure-powershell-credential: true
exclude-azure-developer-cli-credential: true
exclude-interactive-browser-credential: true
- name: Run test cases using SQLite provider
run: go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic
- name: Run test cases using bolt provider
run: |
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 2m ./internal/config -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 5m ./internal/common -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 5m ./internal/httpd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 8m ./internal/sftpd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 5m ./internal/ftpd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 5m ./internal/webdavd -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 2m ./internal/telemetry -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 2m ./internal/mfa -covermode=atomic
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 2m ./internal/command -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: bolt
SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'
- name: Run test cases using memory provider
run: go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
- name: Initialize data provider
run: |
rm sftpgo.db
./sftpgo initprovider
shell: bash
- name: Prepare Windows installers
if: ${{ github.event_name != 'pull_request' }}
- name: Prepare Windows installer
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
@@ -216,7 +132,6 @@ jobs:
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\LICENSE .\output\LICENSE.txt
copy .\NOTICE .\output\NOTICE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
@@ -227,7 +142,15 @@ jobs:
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$Env:SFTPGO_ISS_DEV_VERSION = $LATEST_TAG + "." + $COMMITS_FROM_TAG
iscc .\windows-installer\sftpgo.iss
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
rm .\output\sftpgo.db
@@ -239,60 +162,40 @@ jobs:
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
$Env:SFTPGO_ISS_ARCH='arm64'
iscc .\windows-installer\sftpgo.iss
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
copy .\x86\sftpgo.exe .\output
$Env:SFTPGO_ISS_ARCH='x86'
iscc .\windows-installer\sftpgo.iss
- name: Sign installers
if: ${{ github.event_name != 'pull_request' }}
uses: azure/trusted-signing-action@v0.5.0
with:
endpoint: https://eus.codesigning.azure.net/
trusted-signing-account-name: nicola
certificate-profile-name: SFTPGo
files: |
${{ github.workspace }}\sftpgo_windows_x86_64.exe
${{ github.workspace }}\sftpgo_windows_arm64.exe
${{ github.workspace }}\sftpgo_windows_x86.exe
file-digest: SHA256
timestamp-rfc3161: http://timestamp.acs.microsoft.com
timestamp-digest: SHA256
exclude-environment-credential: true
exclude-workload-identity-credential: true
exclude-managed-identity-credential: true
exclude-shared-token-cache-credential: true
exclude-visual-studio-credential: true
exclude-visual-studio-code-credential: true
exclude-azure-cli-credential: false
exclude-azure-powershell-credential: true
exclude-azure-developer-cli-credential: true
exclude-interactive-browser-credential: true
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
env:
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Upload Windows installer x86_64 artifact
if: ${{ github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v4
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_x86_64
path: ./sftpgo_windows_x86_64.exe
- name: Upload Windows installer arm64 artifact
if: ${{ github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v4
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_arm64
path: ./sftpgo_windows_arm64.exe
- name: Upload Windows installer x86 artifact
if: ${{ github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v4
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_x86
path: ./sftpgo_windows_x86.exe
- name: Prepare build artifact for Windows
if: startsWith(matrix.os, 'windows-')
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
@@ -311,30 +214,41 @@ jobs:
xcopy .\openapi .\output\openapi\ /E
- name: Upload build artifact
uses: actions/upload-artifact@v4
if: startsWith(matrix.os, 'ubuntu-') != true
uses: actions/upload-artifact@v3
with:
name: sftpgo-windows-portable
name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
path: output
test-build-flags:
name: Test build flags
test-goarch-386:
name: Run test cases on 32-bit arch
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v3
with:
go-version: '1.23'
go-version: 1.18
- name: Build
run: |
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules,nogcs,nos3,noportable,nobolt,nomysql,nopgsql,nosqlite,nometrics,noazblob,unixcrypt -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
./sftpgo -v
cp -r openapi static templates internal/bundle/
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules,bundle -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
./sftpgo -v
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter
cd -
env:
GOARCH: 386
- name: Run test cases
run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
GOARCH: 386
test-postgresql-mysql-crdb:
name: Test with PgSQL/MySQL/Cockroach
@@ -362,7 +276,7 @@ jobs:
MYSQL_USER: sftpgo
MYSQL_PASSWORD: sftpgo
options: >-
--health-cmd "mariadb-admin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
--health-cmd "mysqladmin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
--health-interval 10s
--health-timeout 5s
--health-retries 6
@@ -385,16 +299,15 @@ jobs:
- 3308:3306
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v3
with:
go-version: '1.23'
go-version: 1.18
- name: Build
run: |
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
@@ -402,24 +315,9 @@ jobs:
go build -trimpath -ldflags "-s -w" -o ipfilter
cd -
- name: Run tests using MySQL provider
run: |
./sftpgo initprovider
./sftpgo resetprovider --force
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 3308
SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
- name: Run tests using PostgreSQL provider
run: |
./sftpgo initprovider
./sftpgo resetprovider --force
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 15m ./... -covermode=atomic
go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: postgresql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -428,11 +326,20 @@ jobs:
SFTPGO_DATA_PROVIDER__USERNAME: postgres
SFTPGO_DATA_PROVIDER__PASSWORD: postgres
- name: Run tests using MySQL provider
run: |
go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 3308
SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
- name: Run tests using MariaDB provider
run: |
./sftpgo initprovider
./sftpgo resetprovider --force
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 15m ./... -covermode=atomic
go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -447,9 +354,7 @@ jobs:
docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr :26257
sleep 10
docker exec crdb cockroach sql --insecure -e 'create database "sftpgo"'
./sftpgo initprovider
./sftpgo resetprovider --force
go test -v -tags nopgxregisterdefaulttypes,disable_grpc_modules -p 1 -timeout 15m ./... -covermode=atomic
go test -v -p 1 -timeout 15m ./... -covermode=atomic
docker stop crdb
env:
SFTPGO_DATA_PROVIDER__DRIVER: cockroachdb
@@ -458,7 +363,6 @@ jobs:
SFTPGO_DATA_PROVIDER__PORT: 26257
SFTPGO_DATA_PROVIDER__USERNAME: root
SFTPGO_DATA_PROVIDER__PASSWORD:
SFTPGO_DATA_PROVIDER__TARGET_SESSION_ATTRS: any
SFTPGO_DATA_PROVIDER__SQL_TABLES_PREFIX: prefix_
build-linux-packages:
@@ -484,13 +388,13 @@ jobs:
go: latest
go-arch: arm7
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Get commit SHA
id: get_commit
run: echo "COMMIT=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
run: echo ::set-output name=COMMIT::${GITHUB_SHA::8}
shell: bash
- name: Build on amd64
@@ -503,7 +407,7 @@ jobs:
echo 'apt-get install -q -y curl gcc' >> build.sh
if [ ${{ matrix.go }} == 'latest' ]
then
echo 'GO_VERSION=$(curl -L https://go.dev/VERSION?m=text | head -n 1)' >> build.sh
echo 'GO_VERSION=$(curl -L https://go.dev/VERSION?m=text)' >> build.sh
else
echo 'GO_VERSION=${{ matrix.go }}' >> build.sh
fi
@@ -513,7 +417,7 @@ jobs:
echo 'export PATH=$PATH:/usr/local/go/bin' >> build.sh
echo 'go version' >> build.sh
echo 'cd /usr/local/src' >> build.sh
echo 'go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_commit.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo' >> build.sh
echo 'go build -buildvcs=false -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${{ steps.get_commit.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo' >> build.sh
chmod 755 build.sh
docker run --rm --name ubuntu-build --mount type=bind,source=`pwd`,target=/usr/local/src ${{ matrix.distro }} /usr/local/src/build.sh
@@ -546,7 +450,7 @@ jobs:
apt-get install -q -y curl gcc
if [ ${{ matrix.go }} == 'latest' ]
then
GO_VERSION=$(curl -L https://go.dev/VERSION?m=text | head -n 1)
GO_VERSION=$(curl -L https://go.dev/VERSION?m=text)
else
GO_VERSION=${{ matrix.go }}
fi
@@ -564,7 +468,7 @@ jobs:
then
export GOARM=7
fi
go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_commit.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -buildvcs=false -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${{ steps.get_commit.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
@@ -578,7 +482,7 @@ jobs:
cp sftpgo output/
- name: Upload build artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: sftpgo-linux-${{ matrix.arch }}-go-${{ matrix.go }}
path: output
@@ -590,16 +494,16 @@ jobs:
cd pkgs
./build.sh
PKG_VERSION=$(cat dist/version)
echo "pkg-version=${PKG_VERSION}" >> $GITHUB_OUTPUT
echo "::set-output name=pkg-version::${PKG_VERSION}"
- name: Upload Debian Package
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-rpm
path: pkgs/dist/rpm/*
@@ -609,12 +513,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v3
with:
go-version: '1.23'
- uses: actions/checkout@v4
go-version: 1.18
- uses: actions/checkout@v3
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v3
with:
args: --timeout=10m
version: latest

@@ -5,7 +5,7 @@ on:
# - cron: '0 4 * * *' # everyday at 4:00 AM UTC
push:
branches:
- main
- 2.3.x
tags:
- v*
pull_request:
@@ -33,7 +33,7 @@ jobs:
optional_deps: true
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
- name: Gather image information
id: info
@@ -42,7 +42,6 @@ jobs:
DOCKERFILE=Dockerfile
MINOR=""
MAJOR=""
FEATURES="nopgxregisterdefaulttypes,disable_grpc_modules"
if [ "${{ github.event_name }}" = "schedule" ]; then
VERSION=nightly
elif [[ $GITHUB_REF == refs/tags/* ]]; then
@@ -68,13 +67,9 @@ jobs:
VERSION="${VERSION}-distroless"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE=Dockerfile.distroless
FEATURES="${FEATURES},nosqlite"
elif [[ $DOCKER_PKG == debian-plugins ]]; then
VERSION="${VERSION}-plugins"
VERSION_SLIM="${VERSION}-slim"
FEATURES="${FEATURES},unixcrypt"
elif [[ $DOCKER_PKG == debian ]]; then
FEATURES="${FEATURES},unixcrypt"
fi
DOCKER_IMAGES=("drakkan/sftpgo" "ghcr.io/drakkan/sftpgo")
TAGS="${DOCKER_IMAGES[0]}:${VERSION}"
@@ -119,43 +114,42 @@ jobs:
done
if [[ $OPTIONAL_DEPS == true ]]; then
echo "version=${VERSION}" >> $GITHUB_OUTPUT
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
echo "full=true" >> $GITHUB_OUTPUT
echo ::set-output name=version::${VERSION}
echo ::set-output name=tags::${TAGS}
echo ::set-output name=full::true
else
echo "version=${VERSION_SLIM}" >> $GITHUB_OUTPUT
echo "tags=${TAGS_SLIM}" >> $GITHUB_OUTPUT
echo "full=false" >> $GITHUB_OUTPUT
echo ::set-output name=version::${VERSION_SLIM}
echo ::set-output name=tags::${TAGS_SLIM}
echo ::set-output name=full::false
fi
if [[ $DOCKER_PKG == debian-plugins ]]; then
echo "plugins=true" >> $GITHUB_OUTPUT
echo ::set-output name=plugins::true
else
echo "plugins=false" >> $GITHUB_OUTPUT
echo ::set-output name=plugins::false
fi
echo "dockerfile=${DOCKERFILE}" >> $GITHUB_OUTPUT
echo "features=${FEATURES}" >> $GITHUB_OUTPUT
echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
echo "sha=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
echo ::set-output name=dockerfile::${DOCKERFILE}
echo ::set-output name=created::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo ::set-output name=sha::${GITHUB_SHA::8}
env:
DOCKER_PKG: ${{ matrix.docker_pkg }}
OPTIONAL_DEPS: ${{ matrix.optional_deps }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
- name: Set up builder
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
id: builder
- name: Login to Docker Hub
uses: docker/login-action@v3
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
if: ${{ github.event_name != 'pull_request' }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -163,26 +157,25 @@ jobs:
if: ${{ github.event_name != 'pull_request' }}
- name: Build and push
uses: docker/build-push-action@v6
uses: docker/build-push-action@v3
with:
context: .
builder: ${{ steps.builder.outputs.name }}
file: ./${{ steps.info.outputs.dockerfile }}
platforms: linux/amd64,linux/arm64,linux/ppc64le,linux/arm/v7
platforms: linux/amd64,linux/arm64,linux/ppc64le
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.info.outputs.tags }}
build-args: |
COMMIT_SHA=${{ steps.info.outputs.sha }}
INSTALL_OPTIONAL_PACKAGES=${{ steps.info.outputs.full }}
DOWNLOAD_PLUGINS=${{ steps.info.outputs.plugins }}
FEATURES=${{ steps.info.outputs.features }}
labels: |
org.opencontainers.image.title=SFTPGo
org.opencontainers.image.description=Full-featured and highly configurable file transfer server: SFTP, HTTP/S,FTP/S, WebDAV
org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support
org.opencontainers.image.url=https://github.com/drakkan/sftpgo
org.opencontainers.image.documentation=https://github.com/drakkan/sftpgo/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=https://github.com/drakkan/sftpgo
org.opencontainers.image.version=${{ steps.info.outputs.version }}
org.opencontainers.image.created=${{ steps.info.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=AGPL-3.0-only
org.opencontainers.image.licenses=AGPL-3.0

@@ -4,27 +4,23 @@ on:
push:
tags: 'v*'
permissions:
id-token: write
contents: write
env:
GO_VERSION: 1.23.3
GO_VERSION: 1.18.7
jobs:
prepare-sources-with-deps:
name: Prepare sources with deps
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
- name: Get SFTPGo version
id: get_version
run: echo "VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
- name: Prepare release
run: |
@@ -36,229 +32,89 @@ jobs:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Upload build artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
retention-days: 1
prepare-windows:
name: Prepare Windows binaries
environment: signing
runs-on: windows-2022
prepare-window-mac:
name: Prepare binaries
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [macos-11, windows-2022]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
- name: Azure login
uses: azure/login@v2
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- name: Get SFTPGo version
id: get_version
run: echo "VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
shell: bash
- name: Build
- name: Get OS name
id: get_os_name
run: |
if [[ $MATRIX_OS =~ ^macos.* ]]
then
echo ::set-output name=OS::macOS
else
echo ::set-output name=OS::windows
fi
shell: bash
env:
MATRIX_OS: ${{ matrix.os }}
- name: Build for macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --abbrev=8 --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
$FILE_VERSION = $Env:SFTPGO_VERSION.substring(1) + ".0"
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0 with additional terms" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o sftpgo.exe
go-winres simply --arch amd64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
mkdir arm64
$Env:CGO_ENABLED='0'
$Env:GOOS='windows'
$Env:GOARCH='arm64'
go-winres simply --arch arm64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0 with additional terms" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
go-winres simply --arch arm64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
mkdir x86
$Env:GOARCH='386'
go-winres simply --arch 386 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0 with additional terms" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
go-winres simply --arch 386 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
Remove-Item Env:\CGO_ENABLED
Remove-Item Env:\GOOS
Remove-Item Env:\GOARCH
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Sign binaries
uses: azure/trusted-signing-action@v0.5.0
with:
endpoint: https://eus.codesigning.azure.net/
trusted-signing-account-name: nicola
certificate-profile-name: SFTPGo
files: |
${{ github.workspace }}\sftpgo.exe
${{ github.workspace }}\arm64\sftpgo.exe
${{ github.workspace }}\x86\sftpgo.exe
file-digest: SHA256
timestamp-rfc3161: http://timestamp.acs.microsoft.com
timestamp-digest: SHA256
exclude-environment-credential: true
exclude-workload-identity-credential: true
exclude-managed-identity-credential: true
exclude-shared-token-cache-credential: true
exclude-visual-studio-credential: true
exclude-visual-studio-code-credential: true
exclude-azure-cli-credential: false
exclude-azure-powershell-credential: true
exclude-azure-developer-cli-credential: true
exclude-interactive-browser-credential: true
- name: Initialize data provider
run: ./sftpgo initprovider
shell: bash
- name: Prepare Release
run: |
mkdir output
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\LICENSE .\output\LICENSE.txt
copy .\NOTICE .\output\NOTICE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
iscc .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
rm .\output\sftpgo.db
copy .\arm64\sftpgo.exe .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
$Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
$Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
.\sftpgo.exe initprovider
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
$Env:SFTPGO_ISS_ARCH='arm64'
iscc .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
copy .\x86\sftpgo.exe .\output
$Env:SFTPGO_ISS_ARCH='x86'
iscc .\windows-installer\sftpgo.iss
env:
SFTPGO_ISS_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Sign installers
uses: azure/trusted-signing-action@v0.5.0
with:
endpoint: https://eus.codesigning.azure.net/
trusted-signing-account-name: nicola
certificate-profile-name: SFTPGo
files: |
${{ github.workspace }}\sftpgo_windows_x86_64.exe
${{ github.workspace }}\sftpgo_windows_arm64.exe
${{ github.workspace }}\sftpgo_windows_x86.exe
file-digest: SHA256
timestamp-rfc3161: http://timestamp.acs.microsoft.com
timestamp-digest: SHA256
exclude-environment-credential: true
exclude-workload-identity-credential: true
exclude-managed-identity-credential: true
exclude-shared-token-cache-credential: true
exclude-visual-studio-credential: true
exclude-visual-studio-code-credential: true
exclude-azure-cli-credential: false
exclude-azure-powershell-credential: true
exclude-azure-developer-cli-credential: true
exclude-interactive-browser-credential: true
- name: Prepare Portable Release
run: |
mkdir win-portable
copy .\sftpgo.exe .\win-portable
mkdir win-portable\arm64
copy .\arm64\sftpgo.exe .\win-portable\arm64
mkdir win-portable\x86
copy .\x86\sftpgo.exe .\win-portable\x86
copy .\sftpgo.json .\win-portable
(Get-Content .\win-portable\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\win-portable\sftpgo.json
copy .\output\sftpgo.db .\win-portable
copy .\LICENSE .\win-portable\LICENSE.txt
copy .\NOTICE .\win-portable\NOTICE.txt
mkdir win-portable\templates
xcopy .\templates .\win-portable\templates\ /E
mkdir win-portable\static
xcopy .\static .\win-portable\static\ /E
mkdir win-portable\openapi
xcopy .\openapi .\win-portable\openapi\ /E
Compress-Archive .\win-portable\* sftpgo_portable.zip
- name: Upload Windows installer x86_64 artifact
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_windows_x86_64.exe
path: ./sftpgo_windows_x86_64.exe
retention-days: 1
- name: Upload Windows installer arm64 artifact
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_windows_arm64.exe
path: ./sftpgo_windows_arm64.exe
retention-days: 1
- name: Upload Windows installer x86 artifact
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_windows_x86.exe
path: ./sftpgo_windows_x86.exe
retention-days: 1
- name: Upload Windows portable artifact
uses: actions/upload-artifact@v4
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_windows_portable.zip
path: ./sftpgo_portable.zip
retention-days: 1
prepare-mac:
name: Prepare macOS binaries
runs-on: macos-12
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Get SFTPGo version
id: get_version
run: echo "VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
shell: bash
- name: Build for macOS x86_64
run: go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for macOS arm64
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Initialize data provider
run: ./sftpgo initprovider
shell: bash
- name: Prepare Release
- name: Prepare Release for macOS
if: startsWith(matrix.os, 'macos-')
run: |
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
echo "https://docs.sftpgo.com" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
cp LICENSE output/
cp NOTICE output/
cp sftpgo output/
cp sftpgo.json output/
cp sftpgo.db output/sqlite/
@@ -271,27 +127,130 @@ jobs:
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cd output
tar cJvf ../sftpgo_${SFTPGO_VERSION}_macOS_x86_64.tar.xz *
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
cd ..
cp sftpgo_arm64 output/sftpgo
cd output
tar cJvf ../sftpgo_${SFTPGO_VERSION}_macOS_arm64.tar.xz *
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_arm64.tar.xz *
cd ..
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
OS: ${{ steps.get_os_name.outputs.OS }}
- name: Prepare Release for Windows
if: startsWith(matrix.os, 'windows-')
run: |
mkdir output
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\LICENSE .\output\LICENSE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
rm .\output\sftpgo.db
copy .\arm64\sftpgo.exe .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
$Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
$Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
.\sftpgo.exe initprovider
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
$Env:SFTPGO_ISS_ARCH='arm64'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
copy .\x86\sftpgo.exe .\output
$Env:SFTPGO_ISS_ARCH='x86'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
env:
SFTPGO_ISS_VERSION: ${{ steps.get_version.outputs.VERSION }}
SFTPGO_ISS_DOC_URL: https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.VERSION }}/README.md
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Prepare Portable Release for Windows
if: startsWith(matrix.os, 'windows-')
run: |
mkdir win-portable
copy .\sftpgo.exe .\win-portable
mkdir win-portable\arm64
copy .\arm64\sftpgo.exe .\win-portable\arm64
mkdir win-portable\x86
copy .\x86\sftpgo.exe .\win-portable\x86
copy .\sftpgo.json .\win-portable
(Get-Content .\win-portable\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\win-portable\sftpgo.json
copy .\output\sftpgo.db .\win-portable
copy .\LICENSE .\win-portable\LICENSE.txt
mkdir win-portable\templates
xcopy .\templates .\win-portable\templates\ /E
mkdir win-portable\static
xcopy .\static .\win-portable\static\ /E
mkdir win-portable\openapi
xcopy .\openapi .\win-portable\openapi\ /E
Compress-Archive .\win-portable\* sftpgo_portable.zip
- name: Upload macOS x86_64 artifact
uses: actions/upload-artifact@v4
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_macOS_x86_64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_macOS_x86_64.tar.xz
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
retention-days: 1
- name: Upload macOS arm64 artifact
uses: actions/upload-artifact@v4
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_macOS_arm64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_macOS_arm64.tar.xz
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
retention-days: 1
- name: Upload Windows installer x86_64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
path: ./sftpgo_windows_x86_64.exe
retention-days: 1
- name: Upload Windows installer arm64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.exe
path: ./sftpgo_windows_arm64.exe
retention-days: 1
- name: Upload Windows installer x86 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86.exe
path: ./sftpgo_windows_x86.exe
retention-days: 1
- name: Upload Windows portable artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable.zip
path: ./sftpgo_portable.zip
retention-days: 1
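The PowerShell steps above switch the data provider through `SFTPGO_DATA_PROVIDER__DRIVER` and `SFTPGO_DATA_PROVIDER__NAME` before running `initprovider`. The same override works from any shell, since configuration keys can be set as `SFTPGO_<SECTION>__<KEY>` environment variables; a bash sketch:
```bash
# Initialize a bolt database without editing sftpgo.json,
# mirroring what the Windows steps do via $Env: above.
export SFTPGO_DATA_PROVIDER__DRIVER=bolt
export SFTPGO_DATA_PROVIDER__NAME=./output/sftpgo.db
./sftpgo initprovider
unset SFTPGO_DATA_PROVIDER__DRIVER SFTPGO_DATA_PROVIDER__NAME
```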
prepare-linux:
@ -326,14 +285,14 @@ jobs:
tar-arch: armv7
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Get versions
id: get_version
run: |
echo "SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
echo "GO_VERSION=${GO_VERSION}" >> $GITHUB_OUTPUT
echo "COMMIT=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
echo ::set-output name=SFTPGO_VERSION::${GITHUB_REF/refs\/tags\//}
echo ::set-output name=GO_VERSION::${GO_VERSION}
echo ::set-output name=COMMIT::${GITHUB_SHA::8}
shell: bash
env:
GO_VERSION: ${{ env.GO_VERSION }}
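Many hunks in this workflow are the same mechanical change: the deprecated `::set-output` command on 2.3.x versus writing to the `$GITHUB_OUTPUT` file on main. Side by side, inside a step's `run:` script:
```bash
# 2.3.x: deprecated workflow command, parsed from the step's stdout.
echo ::set-output name=COMMIT::${GITHUB_SHA::8}

# main: append key=value to the file provided by the runner.
echo "COMMIT=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
```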
@ -351,7 +310,7 @@ jobs:
echo 'export PATH=$PATH:/usr/local/go/bin' >> build.sh
echo 'go version' >> build.sh
echo 'cd /usr/local/src' >> build.sh
echo 'go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_version.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo' >> build.sh
echo 'go build -buildvcs=false -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${{ steps.get_version.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo' >> build.sh
chmod 755 build.sh
docker run --rm --name ubuntu-build --mount type=bind,source=`pwd`,target=/usr/local/src ${{ matrix.distro }} /usr/local/src/build.sh
@ -360,7 +319,6 @@ jobs:
echo "" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
cp LICENSE output/
cp NOTICE output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
@ -404,13 +362,12 @@ jobs:
run: |
export PATH=$PATH:/usr/local/go/bin
go version
go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes,disable_grpc_modules -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_version.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -buildvcs=false -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${{ steps.get_version.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.SFTPGO_VERSION }}/README.md" >> output/README.txt
cp LICENSE output/
cp NOTICE output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
@ -428,7 +385,7 @@ jobs:
cd ..
- name: Upload build artifact for ${{ matrix.arch }}
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
path: ./output/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
@ -441,19 +398,19 @@ jobs:
cd pkgs
./build.sh
PKG_VERSION=${SFTPGO_VERSION:1}
echo "pkg-version=${PKG_VERSION}" >> $GITHUB_OUTPUT
echo "::set-output name=pkg-version::${PKG_VERSION}"
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
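The `${SFTPGO_VERSION:1}` expansion above simply strips the leading `v` from the tag so the Deb/RPM packages get a plain numeric version:
```bash
SFTPGO_VERSION=v2.3.6        # example tag
echo "${SFTPGO_VERSION:1}"   # prints 2.3.6
```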
- name: Upload Deb Package
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
retention-days: 1
- name: Upload RPM Package
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
@ -468,26 +425,26 @@ jobs:
- name: Get versions
id: get_version
run: |
echo "SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
echo ::set-output name=SFTPGO_VERSION::${GITHUB_REF/refs\/tags\//}
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
@ -510,7 +467,7 @@ jobs:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Linux bundle
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
path: ./bundle/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
@ -518,117 +475,117 @@ jobs:
create-release:
name: Release
needs: [prepare-linux-bundle, prepare-sources-with-deps, prepare-mac, prepare-windows]
needs: [prepare-linux-bundle, prepare-sources-with-deps, prepare-window-mac]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Get versions
id: get_version
run: |
SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}
PKG_VERSION=${SFTPGO_VERSION:1}
echo "SFTPGO_VERSION=${SFTPGO_VERSION}" >> $GITHUB_OUTPUT
echo "PKG_VERSION=${PKG_VERSION}" >> $GITHUB_OUTPUT
echo ::set-output name=SFTPGO_VERSION::${SFTPGO_VERSION}
echo "::set-output name=PKG_VERSION::${PKG_VERSION}"
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Download Linux bundle artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
- name: Download Deb amd64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_amd64.deb
- name: Download Deb arm64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_arm64.deb
- name: Download Deb ppc64le artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_ppc64el.deb
- name: Download Deb armv7 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_armhf.deb
- name: Download RPM x86_64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.x86_64.rpm
- name: Download RPM aarch64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.aarch64.rpm
- name: Download RPM ppc64le artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.ppc64le.rpm
- name: Download RPM armv7 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.armv7hl.rpm
- name: Download macOS x86_64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_x86_64.tar.xz
- name: Download macOS arm64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_arm64.tar.xz
- name: Download Windows installer x86_64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86_64.exe
- name: Download Windows installer arm64 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_arm64.exe
- name: Download Windows installer x86 artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86.exe
- name: Download Windows portable artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_portable.zip
- name: Download source with deps artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_src_with_deps.tar.xz
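After the release is published, the same assets can be fetched outside CI; one option using the GitHub CLI (a sketch, assuming `gh` is installed and authenticated, with `v2.3.6` as an example tag):
```bash
# Download every asset attached to the release into ./sftpgo-release.
gh release download v2.3.6 --repo drakkan/sftpgo --dir ./sftpgo-release
```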

.golangci.yml

@ -1,5 +1,5 @@
run:
timeout: 10m
timeout: 5m
issues-exit-code: 1
tests: true

CODEOWNERS

@ -1 +0,0 @@
* @drakkan

CODE_OF_CONDUCT.md

@ -1,128 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
support@sftpgo.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

DCO

@ -0,0 +1,34 @@
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.

Dockerfile

@ -1,16 +1,14 @@
FROM golang:1.23-bookworm AS builder
FROM golang:1.18-bullseye as builder
ENV GOFLAGS="-mod=readonly"
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download && go mod verify
RUN go mod download
ARG COMMIT_SHA
@ -23,19 +21,19 @@ COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --abbrev=8 --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -v -o sftpgo
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
# Set to "true" to download the "official" plugins in /usr/local/bin
ARG DOWNLOAD_PLUGINS=false
RUN if [ "${DOWNLOAD_PLUGINS}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y curl && ./docker/scripts/download-plugins.sh; fi
FROM debian:bookworm-slim
FROM debian:bullseye-slim
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apt-get update && apt-get -y upgrade && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y jq git rsync && rm -rf /var/lib/apt/lists/*; fi
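All the build arguments referenced in this Dockerfile can be supplied on the command line; a sketch (the `sftpgo:local` tag is an example):
```bash
# Build the Debian-based image, pinning the commit and opting into the
# official plugins and the optional jq/git/rsync packages.
docker build \
  --build-arg COMMIT_SHA=$(git describe --always --abbrev=8 --dirty) \
  --build-arg DOWNLOAD_PLUGINS=true \
  --build-arg INSTALL_OPTIONAL_PACKAGES=true \
  -t sftpgo:local .
```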

Dockerfile.alpine

@ -1,8 +1,8 @@
FROM golang:1.23-alpine3.21 AS builder
FROM golang:1.18-alpine3.16 AS builder
ENV GOFLAGS="-mod=readonly"
RUN apk -U upgrade --no-cache && apk add --update --no-cache bash ca-certificates curl git gcc g++
RUN apk add --update --no-cache bash ca-certificates curl git gcc g++
RUN mkdir -p /workspace
WORKDIR /workspace
@ -10,7 +10,7 @@ WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download && go mod verify
RUN go mod download
ARG COMMIT_SHA
@ -23,17 +23,22 @@ COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --abbrev=8 --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -v -o sftpgo
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM alpine:3.21
FROM alpine:3.16
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apk -U upgrade --no-cache && apk add --update --no-cache ca-certificates tzdata mailcap
RUN apk add --update --no-cache ca-certificates tzdata mailcap
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache jq git rsync; fi
# set up nsswitch.conf for Go's "netgo" implementation
# https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457
RUN test ! -e /etc/nsswitch.conf && echo 'hosts: files dns' > /etc/nsswitch.conf
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups
RUN addgroup -g 1000 -S sftpgo && \

Dockerfile.distroless

@ -1,38 +1,38 @@
FROM golang:1.23-bookworm AS builder
FROM golang:1.18-bullseye as builder
ENV CGO_ENABLED=0 GOFLAGS="-mod=readonly"
RUN apt-get update && apt-get -y upgrade && apt-get install --no-install-recommends -y media-types && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download && go mod verify
RUN go mod download
ARG COMMIT_SHA
# This ARG allows you to disable some optional features and it might be useful if you build the image yourself.
# For this variant we disable SQLite support since it requires CGO and so a C runtime which is not installed
# in distroless/static-* images
ARG FEATURES
ARG FEATURES=nosqlite
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --abbrev=8 --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -v -o sftpgo
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
# Modify the default configuration file
RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' sftpgo.json && \
sed -i 's|"backups"|"/srv/sftpgo/backups"|' sftpgo.json && \
sed -i 's|"sqlite"|"bolt"|' sftpgo.json
RUN apt-get update && apt-get install --no-install-recommends -y media-types && rm -rf /var/lib/apt/lists/*
RUN mkdir /etc/sftpgo /var/lib/sftpgo /srv/sftpgo
FROM gcr.io/distroless/static-debian12
FROM gcr.io/distroless/static-debian11
COPY --from=builder --chown=1000:1000 /etc/sftpgo /etc/sftpgo
COPY --from=builder --chown=1000:1000 /srv/sftpgo /srv/sftpgo
@ -54,4 +54,4 @@ ENV HOME=/var/lib/sftpgo
WORKDIR /var/lib/sftpgo
USER 1000:1000
CMD ["sftpgo", "serve"]
CMD ["sftpgo", "serve"]
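Given the directories created above and the unprivileged `1000:1000` user, running the distroless variant could look like this (image tag and volume names are examples; 2022 and 8080 are the default SFTP and HTTP listeners):
```bash
# Persist the bolt database (HOME) and the users' data/backups.
docker run -d --name sftpgo \
  -p 2022:2022 -p 8080:8080 \
  -v sftpgo-home:/var/lib/sftpgo \
  -v sftpgo-data:/srv/sftpgo \
  sftpgo:distroless
```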

NOTICE

@ -1,12 +0,0 @@
Additional terms under GNU AGPL version 3 section 7.3(b) and 13.1:
If you have included SFTPGo so that it is offered through any network
interactions, including by means of an external user interface, or
any other integration, even without modifying its source code and then
SFTPGo is partially, fully or optionally configured via your frontend,
you must provide reasonable but clear attribution to the SFTPGo project
and its author(s), not imply any endorsement by or affiliation with the
SFTPGo project, and you must prominently offer all users interacting
with it remotely through a computer network an opportunity to receive
the Corresponding Source of the SFTPGo version you include by providing
a link to the Corresponding Source in the SFTPGo source code repository.

README.md

@ -1,72 +1,325 @@
# SFTPGo
[![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
[![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/main/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/main)
[![License: AGPL-3.0-only](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![License: AGPL v3](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Docker Pulls](https://img.shields.io/docker/pulls/drakkan/sftpgo)](https://hub.docker.com/r/drakkan/sftpgo)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
Full-featured and highly configurable event-driven file transfer solution.
Server protocols: SFTP, HTTP/S, FTP/S, WebDAV.
Storage backends: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, other SFTP servers.
[English](./README.md) | [简体中文](./README.zh_CN.md)
With SFTPGo you can leverage local and cloud storage backends for exchanging and storing files internally or with business partners using the same tools and processes you are already familiar with.
The WebAdmin UI allows you to easily create and manage your users, folders, groups and other resources.
The WebClient UI allows end users to change their credentials, browse and manage their files in the browser and set up two-factor authentication which works with Microsoft Authenticator, Google Authenticator, Authy and other compatible apps.
Fully featured and highly configurable SFTP server with optional HTTP/S, FTP/S and WebDAV support.
Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.
## Sponsors
We strongly believe in the Open Source software model, so we decided to make SFTPGo available to everyone, but maintaining and evolving SFTPGo takes a lot of time and work. To make development and maintenance sustainable you should consider supporting the project with a [sponsorship](https://github.com/sponsors/drakkan).
If you find SFTPGo useful please consider supporting this Open Source project.
We love doing the work and we'd like to keep doing it - your support helps make SFTPGo possible.
Maintaining and evolving SFTPGo is a lot of work for me, easily the equivalent of a full-time job.
It is important to understand that you should support SFTPGo and any other Open Source project you rely on for ongoing maintenance, even if you don't have any questions or need new features, to mitigate the business risk of a project you depend on going unmaintained, with its security and development velocity implications.
I'd like to make SFTPGo into a sustainable long term project and would not like to introduce a dual licensing option and limit some features to the proprietary version only.
### Thank you to our sponsors
If you use SFTPGo, it is in your best interest to ensure that the project you rely on stays healthy and well maintained.
This can only happen with your donations and [sponsorships](https://github.com/sponsors/drakkan) :heart:
#### Platinum sponsors
If you just take and don't give anything back, the project will die in the long run and you will be forced to pay for a similar proprietary solution.
[<img src="./img/Aledade_logo.png" alt="Aledade logo" width="202" height="70">](https://www.aledade.com/)
</br></br>
[<img src="./img/jumptrading.png" alt="Jump Trading logo" width="362" height="63">](https://www.jumptrading.com/)
</br></br>
[<img src="./img/wpengine.png" alt="WP Engine logo" width="331" height="63">](https://wpengine.com/)
More [info](https://github.com/drakkan/sftpgo/issues/452).
#### Silver sponsors
Thank you to our sponsors!
[<img src="./img/IDCS.png" alt="IDCS logo" width="212" height="51">](https://idcs.ip-paris.fr/)
[<img src="https://www.7digital.com/wp-content/themes/sevendigital/images/top_logo.png" alt="7digital logo">](https://www.7digital.com/)
#### Bronze sponsors
## Features
[<img src="./img/7digital.png" alt="7digital logo" width="178" height="56">](https://www.7digital.com/)
</br></br>
[<img src="./img/vps2day.png" alt="VPS2day logo" width="234" height="56">](https://www.vps2day.com/)
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users, for shared virtual folders you can define different quota limits for each user.
- Configurable [custom commands and/or HTTP hooks](./docs/custom-actions.md) on upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir, rmdir on SSH commands and on user add, update and delete.
- Virtual accounts stored within a "data provider".
- SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
- Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
- Per-user and per-directory virtual permissions, for each exposed path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode and modification time.
- [REST API](./docs/rest-api.md) for users and folders management, data retention, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials, manage and share their files in the browser.
- Public key and password authentication. Multiple public keys per-user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily setup a customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per-user authentication methods.
- [Two-factor authentication](./docs/howto/two-factor-authentication.md) based on time-based one time passwords (RFC 6238) which works with Authy, Google Authenticator and other compatible apps.
- Simplified user administrations using [groups](./docs/groups.md).
- Custom authentication via external programs/HTTP API.
- Web Client and Web Admin user interfaces support [OpenID Connect](https://openid.net/connect/) authentication and so they can be integrated with identity providers such as [Keycloak](https://www.keycloak.org/). You can find more details [here](./docs/oidc.md).
- [Data At Rest Encryption](./docs/dare.md).
- Dynamic user modification before login via external programs/HTTP API.
- Quota support: accounts can have individual disk quota expressed as max total size and/or max number of files.
- Bandwidth throttling, with separate settings for upload and download and overrides based on the client's IP address.
- Data transfer bandwidth limits, with total limit or separate settings for uploads and downloads and overrides based on the client's IP address. Limits can be reset using the REST API.
- Per-protocol [rate limiting](./docs/rate-limiting.md) is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
- Per-user maximum concurrent sessions.
- Per-user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per-user and per-directory shell-like pattern filters: files can be allowed, denied and optionally hidden based on shell-like patterns.
- Automatically terminating idle connections.
- Automatic blocklist management using the built-in [defender](./docs/defender.md).
- Geo-IP filtering using a [plugin](https://github.com/sftpgo/sftpgo-plugin-geoipfilter).
- Atomic uploads are configurable.
- Per-user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- ACME protocol is supported. SFTPGo can obtain and automatically renew TLS certificates for HTTPS, WebDAV and FTPS from `Let's Encrypt` or other ACME compliant certificate authorities, using the `HTTP-01` or `TLS-ALPN-01` [challenge types](https://letsencrypt.org/docs/challenge-types/).
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Per-user protocols restrictions. You can configure the allowed protocols (SSH/HTTP/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP service without losing the information about the client's address.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- Configuration format is at your choice: JSON, TOML, YAML, HCL, envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
- SFTPGo supports a [plugin system](./docs/plugins.md) and therefore can be extended using external plugins.
## Support
## Platforms
You can use SFTPGo for free, respecting the obligations of the Open Source [license](#license), but please do not ask or expect free support as well.
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using [GitHub Actions](./.github/workflows/development.yml). The test cases are regularly manually executed and passed on FreeBSD. Other *BSD variants should work too.
Use [discussions](https://github.com/drakkan/sftpgo/discussions) to ask questions and get support from the community.
## Requirements
We offer commercial support, guarantees, and advice for SFTPGo:
- Go as the only build dependency. We support the Go version(s) used in the [continuous integration workflows](./.github/workflows).
- A suitable SQL server to use as data provider: PostgreSQL 9.4+, MySQL 5.6+, SQLite 3.x, CockroachDB stable.
- The SQL server is optional: you can choose to use an embedded bolt database as a key/value store or an in-memory data provider.
- With our [plans](https://sftpgo.com/plans) you can safely install and use SFTPGo on-premise in professional environments.
- With our [SaaS offerings](https://sftpgo.com/saas) you can use SFTPGo hosted in the cloud, fully managed and supported.
## Installation
## Documentation
Binary releases for Linux, macOS, and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.
You can read more about supported features and documentation at [docs.sftpgo.com](https://docs.sftpgo.com/).
An official Docker image is available. Documentation is [here](./docker/README.md).
## Internationalization
<details>
The translations are available via [Crowdin](https://crowdin.com/project/sftpgo), who have granted us an open source license.
<summary>Some Linux distro packages are available</summary>
Before you start translating, please take a look at our contribution [guidelines](https://sftpgo.github.io/latest/web-interfaces/#internationalization).
- For Arch Linux via AUR:
- [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
- [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases, downloading the prebuilt Linux binary from GitHub. It does not require `git`, `gcc` and `go` to build.
- [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git `main` branch. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
- Void Linux provides an [official package](https://github.com/void-linux/void-packages/tree/master/srcpkgs/sftpgo).
</details>
APT and YUM repositories are [available](./docs/repo.md).
SFTPGo is also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335) and [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/prasselsrl1645470739547.sftpgo_linux), purchasing from there will help keep SFTPGo a long-term sustainable project.
<details><summary>Windows packages</summary>
- The Windows installer to install and run SFTPGo as a Windows service.
- The portable package to start SFTPGo on demand.
- The [winget](https://docs.microsoft.com/en-us/windows/package-manager/winget/install) package to install and run SFTPGo as a Windows service: `winget install SFTPGo`.
- The [Chocolatey package](https://community.chocolatey.org/packages/sftpgo) to install and run SFTPGo as a Windows service.
</details>
On macOS you can install from the Homebrew [Formula](https://formulae.brew.sh/formula/sftpgo).
On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).
On DragonFlyBSD you can install SFTPGo from [DPorts](https://github.com/DragonFlyBSD/DPorts/tree/master/ftp/sftpgo).
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
Alternately, you can [build from source](./docs/build-from-source.md).
[Getting Started Guide for the Impatient](./docs/howto/getting-started.md).
## Configuration
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).
Please make sure to [initialize the data provider](#data-provider-initialization-and-management) before running the daemon.
To start SFTPGo with the default settings, simply run:
```bash
sftpgo serve
```
Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.
### Data provider initialization and management
Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.
For the PostgreSQL, MySQL and CockroachDB providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require initialization, but they could require an update to the existing data after upgrading SFTPGo.
SFTPGo will attempt to automatically detect whether the data provider is initialized/updated and, if not, will attempt to initialize/update it on startup as needed.
Alternately, you can create/update the required data provider structures yourself using the `initprovider` command.
For example, you can simply execute the following command from the configuration directory:
```bash
sftpgo initprovider
```
Take a look at the CLI usage to learn how to specify a different configuration file:
```bash
sftpgo initprovider --help
```
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
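Since every configuration key can also be set via environment variable, the same switch can be applied without touching `sftpgo.json` (a sketch, assuming the usual `SFTPGO_<SECTION>__<KEY>` mapping):
```bash
# Equivalent to "update_mode": 1 in the data_provider section.
export SFTPGO_DATA_PROVIDER__UPDATE_MODE=1
sftpgo serve
```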
You can also reset your provider by using the `resetprovider` sub-command. Take a look at the CLI usage for more details:
```bash
sftpgo resetprovider --help
```
:warning: Please note that some data providers (e.g. MySQL and CockroachDB) do not support schema changes within a transaction; this means that you may end up with an inconsistent schema if migrations are forcibly aborted. CockroachDB doesn't support database-level locks, so make sure you don't execute migrations concurrently.
## Create the first admin
To start using SFTPGo you need to create an admin user. You can do it in several ways:
- by using the web admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
- by loading initial data
- by enabling `create_default_admin` in your configuration file and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD` (see the sketch below)
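A minimal sketch of the environment-variable option (credentials are examples):
```bash
# Requires "create_default_admin" enabled in the configuration file.
export SFTPGO_DEFAULT_ADMIN_USERNAME=admin
export SFTPGO_DEFAULT_ADMIN_PASSWORD='change-me-please'
sftpgo serve
```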
## Upgrading
SFTPGo supports upgrading from the previous release branch to the current one.
Some examples for supported upgrade paths are:
- from 1.2.x to 2.0.x
- from 2.0.x to 2.1.x and so on.
For supported upgrade paths, the data and schema are migrated automatically; alternately, you can use the `initprovider` command.
So if, for example, you want to upgrade from a version before 1.2.x to 2.0.x, you must first install version 1.2.x, update the data provider and finally install version 2.0.x. It is recommended to always install the latest available minor version, i.e. do not install 1.2.0 if 1.2.2 is available.
Loading data from a provider-independent JSON dump is supported from the previous release branch to the current one too. After upgrading SFTPGo it is advisable to regenerate the JSON dump from the new version.
## Downgrading
If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.
As for upgrading, SFTPGo supports downgrading from the previous release branch to the current one.
So, if you plan to downgrade from 2.0.x to 1.2.x, before uninstalling the 2.0.x version you can prepare your data provider by executing the following command from the configuration directory:
```shell
sftpgo revertprovider --to-version 4
```
Take a look at the CLI usage to see the supported values for the `--to-version` argument and to learn how to specify a different configuration file:
```shell
sftpgo revertprovider --help
```
The `revertprovider` command is not supported for the memory provider.
Please note that we only support the current release branch and the current main branch; if you find a bug it is better to report it than to downgrade to an older unsupported version.
## Users and folders management
After starting SFTPGo you can manage users and folders using:
- the [web based administration interface](./docs/web-admin.md)
- the [REST API](./docs/rest-api.md)
To support embedded data providers like `bolt` and `SQLite`, we can't have a CLI that directly writes users and folders to the data provider; we always have to use the REST API.
Full details for users, folders, admins and other resources are documented in the [OpenAPI](./openapi/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
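As a quick illustration of the REST API flow (a sketch, assuming the default HTTP listener on `127.0.0.1:8080`, example admin credentials and `jq` available for JSON parsing):
```bash
# Obtain a JWT for the REST API with an existing admin account.
TOKEN=$(curl -s -u admin:password http://127.0.0.1:8080/api/v2/token | jq -r .access_token)

# Create a user; see openapi/openapi.yaml for the full schema.
curl -s -X POST http://127.0.0.1:8080/api/v2/users \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"username":"user1","password":"secret","home_dir":"/srv/sftpgo/data/user1","status":1,"permissions":{"/":["*"]}}'
```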
## Tutorials
Some step-by-step tutorials can be found inside the source tree [howto](./docs/howto "How-to") directory.
## Authentication options
<details><summary> External Authentication</summary>
Custom authentication methods can easily be added. SFTPGo supports external authentication modules, and writing a new backend can be as simple as a few lines of shell script. More information can be found [here](./docs/external-auth.md).
</details>
<details><summary> Keyboard Interactive Authentication</summary>
Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.
More information can be found [here](./docs/keyboard-interactive.md).
</details>
## Dynamic user creation or modification
A user can be created or modified by an external program just before the login. More information about this can be found [here](./docs/dynamic-user-mod.md).
## Custom Actions
SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other events.
More information about custom actions can be found [here](./docs/custom-actions.md).
## Virtual folders
Directories outside the user home directory or based on a different storage provider can be exposed as virtual folders, more information [here](./docs/virtual-folders.md).
## Other hooks
You can get notified as soon as a new connection is established using the [Post-connect hook](./docs/post-connect-hook.md) and after each login using the [Post-login hook](./docs/post-login-hook.md).
You can use your own hook to [check passwords](./docs/check-password-hook.md).
## Storage backends
### S3/GCP/Azure
Each user can be mapped with a [S3 Compatible Object Storage](./docs/s3.md)/[Google Cloud Storage](./docs/google-cloud-storage.md)/[Azure Blob Storage](./docs/azure-blob-storage.md) bucket or a bucket virtual folder that is exposed over SFTP/SCP/FTP/WebDAV.
### SFTP backend
Each user can be mapped to another SFTP server account or a subfolder of it. More information can be found [here](./docs/sftpfs.md).
### Encrypted backend
Data at-rest encryption is supported via the [cryptfs backend](./docs/dare.md).
### Other Storage backends
Adding new storage backends is quite easy:
- implement the [Fs interface](./vfs/vfs.go#L28 "interface for filesystem backends").
- update the user method `GetFilesystem` to return the new backend
- update the web interface and the REST API CLI
- add the flags for the new storage backend to the `portable` mode
Anyway, some backends require a pay-per-use account (or they offer a free account for a limited time period only). To be able to add support for such backends or to review pull requests, please provide a test account. The test account must be available for enough time to be able to maintain the backend and do basic tests before each new release.
## Brute force protection
SFTPGo supports a built-in [defender](./docs/defender.md).
Alternately you can use the [connection failed logs](./docs/logs.md) for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Examples of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in the fail2ban directory.
## Account's configuration properties
Detailed information about account configuration properties can be found [here](./docs/account.md).
## Performance
SFTPGo can easily saturate a Gigabit connection on low-end hardware with no special configuration; this is generally enough for most use cases.
More in-depth analysis of performance can be found [here](./docs/performance.md).
## Release Cadence
SFTPGo releases are feature-driven, we don't have a fixed time based schedule. As a rough estimate, you can expect 1 or 2 new major releases per year and several bug fix releases.
SFTPGo releases are feature-driven, we don't have a fixed time based schedule. As a rough estimate, you can expect 1 or 2 new releases per year.
## Acknowledgements
@ -74,25 +327,8 @@ SFTPGo makes use of the third party libraries listed inside [go.mod](./go.mod).
We are very grateful to all the people who contributed with ideas and/or pull requests.
Thank you to [ysura](https://www.ysura.com/) for granting us stable access to a test AWS S3 account.
Thank you to [KeenThemes](https://keenthemes.com/) for granting us a custom license to use their amazing [Mega Bundle](https://keenthemes.com/products/templates-mega-bundle) for SFTPGo UI.
Thank you to [Crowdin](https://crowdin.com/) for granting us an Open Source License.
Thank you to [Incode](https://www.incode.it/) for helping us to improve the UI/UX.
Thank you [ysura](https://www.ysura.com/) for granting me stable access to a test AWS S3 account.
## License
SFTPGo source code is licensed under the GNU AGPL-3.0-only with [additional terms](./NOTICE).
The [theme](https://keenthemes.com/products/templates-mega-bundle) used in the WebAdmin and WebClient user interfaces is proprietary; this means:
- KeenThemes HTML/CSS/JS components are allowed for use only within the SFTPGo product and are restricted from use in a resalable HTML template that could compete with KeenThemes products.
- The SFTPGo WebAdmin and WebClient user interfaces (HTML, CSS and JS components) based on this theme are allowed for use only within the SFTPGo product and therefore cannot be used in derivative works/products without an explicit grant from the [SFTPGo Team](mailto:support@sftpgo.com).
More information about [compliance](https://sftpgo.com/compliance.html).
## Copyright
Copyright (C) 2019 Nicola Murino
GNU AGPLv3

README.zh_CN.md

@ -0,0 +1,318 @@
# SFTPGo
![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
[![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/main/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/main)
[![License: AGPL v3](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Docker Pulls](https://img.shields.io/docker/pulls/drakkan/sftpgo)](https://hub.docker.com/r/drakkan/sftpgo)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
[English](./README.md) | [简体中文](./README.zh_CN.md)
功能齐全、高度可配置化、支持自定义 HTTP/SFTP/S 和 WebDAV 的 SFTP 服务。
一些存储后端支持本地文件系统、加密本地文件系统、S3兼容对象存储Google Cloud 存储Azure Blob 存储SFTP。
## Features
- Serve local filesystem, encrypted local filesystem, S3-compatible object storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can, for example, have an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another. Virtual folders can be private or shared among multiple users, and for shared virtual folders you can define different quota limits for each user.
- Configurable [custom commands and/or HTTP hooks](./docs/custom-actions.md) on upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir and rmdir for SSH commands, and on user add, update and delete.
- Virtual accounts stored in a "data provider".
- SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (a native Go key/value store) and in-memory data providers are supported.
- Chroot isolation for local accounts. Cloud-based accounts can be restricted to a fixed base path.
- Per-user and per-directory virtual permissions: for each exposed path you can allow or deny directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, and change owner/group/file mode and modification time.
- [REST API](./docs/rest-api.md) for user and folder management, data retention, backup, restore and real-time reports of active connections, with the ability to forcibly close a connection.
- [Web-based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials and manage and share their files in the browser.
- Public key and password authentication. Multiple public keys per user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard-interactive authentication. You can easily set up customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per-user authentication methods.
- [Two-factor authentication](./docs/howto/two-factor-authentication.md) based on time-based one-time passwords (RFC 6238), which works with Authy, Google Authenticator and other compatible apps.
- Simplified user administration using [groups](./docs/groups.md).
- Custom authentication via external programs/HTTP API.
- The Web Client and Web Admin user interfaces support [OpenID Connect](https://openid.net/connect/) authentication, so they can easily be integrated with identity providers such as [Keycloak](https://www.keycloak.org/). More details are available in the [OIDC documentation](./docs/oidc.md).
- [Data at-rest encryption](./docs/dare.md).
- Dynamic user modification before login via external programs/HTTP API.
- Quota support: accounts can have individual disk quotas expressed as a maximum total size and/or a maximum number of files.
- Bandwidth throttling, with separate settings for upload and download and overrides based on the client's IP address.
- Data transfer bandwidth limits, with total limits or separate settings for upload and download and overrides based on the client's IP address. Limits can be reset using the REST API.
- Per-protocol [rate limiting](./docs/rate-limiting.md), optionally connected to the built-in defender to automatically ban hosts that repeatedly exceed the configured limit.
- Per-user maximum concurrent sessions.
- Per-user and global IP filters: logins can be restricted to specific IP ranges or individual IP addresses.
- Per-user and per-directory shell-like pattern filters: files can be allowed, denied or hidden based on shell-like patterns.
- Automatic termination of idle connections.
- Automatic blocklist management via the built-in [defender](./docs/defender.md).
- Geo-IP filtering via a [plugin](https://github.com/sftpgo/sftpgo-plugin-geoipfilter).
- Atomic uploads are configurable.
- Per-user file/directory ownership mapping: you can map all users to the system account that runs SFTPGo (all platforms are supported), or you can run SFTPGo as root and map each user, or group of users, to a different system account (\*NIX only).
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- Two-way TLS authentication, aka TLS with client certificate authentication, is supported for the REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Per-user protocol restrictions. You can configure the allowed protocols (SSH/HTTP/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for the HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP services without losing the client address information.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using the built-in [profiler](./docs/profiling.md).
- Configuration format is your choice: JSON, TOML, YAML, HCL and envfile are all supported.
- Log files are accurate and stored in the easily parsable JSON format ([more information](./docs/logs.md)).
- SFTPGo supports a [plugin system](./docs/plugins.md) and can therefore be extended using external plugins.
## Platforms
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using [GitHub Actions](./.github/workflows/development.yml). The test cases are regularly executed manually on FreeBSD; other *BSD variants should work as well.
## Requirements
- Go, as the only build dependency. We support the Go version(s) used in the [continuous integration workflows](./.github/workflows).
- A suitable SQL server to use as data provider: PostgreSQL 9.4+, MySQL 5.6+, SQLite 3.x or CockroachDB stable.
- The SQL server is optional: you can use the embedded bolt database as a key/value store, or an in-memory data provider.
## Installation
Binary releases for Linux, macOS and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.
An official Docker image is available. Documentation is [here](./docker/README.md).
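For a quick local try-out with the Docker image, a minimal invocation could look like the sketch below; the exposed ports and image tag are assumptions, so check the Docker documentation linked above for the supported tags and volumes:
```bash
# Minimal try-out sketch: expose SFTP on 2022 and the web UIs on 8080.
# Image name and port mappings are assumptions; see ./docker/README.md.
docker run --name sftpgo -p 8080:8080 -p 2022:2022 -d drakkan/sftpgo
```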
<details>
<summary>Some Linux distro packages are available</summary>
- For Arch Linux, via AUR:
  - [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
  - [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases, downloading the prebuilt Linux binary from GitHub. It does not require `git`, `gcc` or `go` to build.
  - [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest `git` main branch. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- An Ubuntu PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
- Void Linux provides an [official package](https://github.com/void-linux/void-packages/tree/master/srcpkgs/sftpgo).
</details>
SFTPGo is also available on the [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335) and the [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/prasselsrl1645470739547.sftpgo_linux); purchasing from there helps make SFTPGo a sustainable long-term project.
<details><summary>Windows packages</summary>
- The Windows installer to install and run SFTPGo as a Windows service.
- The portable package to start SFTPGo on demand.
- The [winget](https://docs.microsoft.com/en-us/windows/package-manager/winget/install) package to install and run SFTPGo as a Windows service: `winget install SFTPGo`.
- The [Chocolatey package](https://community.chocolatey.org/packages/sftpgo) to install and run SFTPGo as a Windows service.
</details>
On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).
On DragonFlyBSD you can install SFTPGo from [DPorts](https://github.com/DragonFlyBSD/DPorts/tree/master/ftp/sftpgo).
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/Actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
Alternatively, you can [build from source](./docs/build-from-source.md).
[Getting started guide for the impatient](./docs/howto/getting-started.md).
## Configuration
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).
Please make sure to [initialize the data provider](#data-provider-initialization-and-management) before running SFTPGo.
To start SFTPGo with the default settings, simply run:
```bash
sftpgo serve
```
Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.
### Data provider initialization and management
Before starting the SFTPGo server, please ensure that the configured data provider is properly initialized/updated.
For the PostgreSQL, MySQL and CockroachDB providers, you need to create the configured database. For SQLite, the configured database will be created automatically at startup. The memory and bolt data providers do not require initialization, but they may require an update to the existing data after upgrading SFTPGo.
SFTPGo tries to detect automatically whether the data provider is initialized/updated; if not, it will attempt to initialize/update it at startup.
Alternatively, you can create/update the required data provider structures yourself using the `initprovider` command.
For example, you can simply execute the following command from the configuration directory:
```bash
sftpgo initprovider
```
Take a look at the CLI usage to learn how to specify a different configuration file:
```bash
sftpgo initprovider --help
```
You can disable the automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
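As an example, assuming SFTPGo's environment variable override convention described in the full configuration guide, this could look like:
```bash
# Assumed sketch: override the data provider update_mode via an
# environment variable, then start the service. Verify the exact
# variable name in ./docs/full-configuration.md.
export SFTPGO_DATA_PROVIDER__UPDATE_MODE=1
sftpgo serve
```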
You can also reset your data provider using the `resetprovider` sub-command. Take a look at the CLI usage for more details:
```bash
sftpgo resetprovider --help
```
:warning: Please note that some data providers (e.g. MySQL and CockroachDB) do not support schema changes within a transaction. This means you may end up with an inconsistent schema if a migration is forcibly aborted or run by multiple instances at the same time.
## Create the first admin
To start using SFTPGo you need to create an admin user. You can do this in several ways:
- using the Web Admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
- by loading initial data
- by enabling `create_default_admin` in your configuration file and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD` (see the sketch after this list)
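As a sketch, the environment variable approach could look like this; the credentials are placeholders, and `create_default_admin` must be enabled in the configuration file:
```bash
# Placeholder credentials; the default admin is created on first start
# only if create_default_admin is enabled in the configuration.
export SFTPGO_DEFAULT_ADMIN_USERNAME=admin
export SFTPGO_DEFAULT_ADMIN_PASSWORD='please-change-me'
sftpgo serve
```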
## Upgrading
SFTPGo supports upgrading from the previous release branch to the current one.
Some examples of supported upgrade paths are:
- from 1.2.x to 2.0.x
- from 2.0.x to 2.1.x, and so on.
For supported upgrade paths, data and schema are migrated automatically; alternatively you can use the `initprovider` command.
So if, for example, you want to upgrade from a version before 1.2.x to 2.0.x, you must first install version 1.2.x, upgrade the data provider and finally install version 2.0.x. It is recommended to always install the latest available minor version; that is, do not install 1.2.0 if 1.2.2 is available.
Loading data from a provider-independent JSON dump is also supported from the previous release branch to the current one. After upgrading SFTPGo, it is advisable to regenerate the JSON dump from the new version.
## Downgrading
If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.
As with upgrading, SFTPGo supports downgrading from the previous release branch to the current one.
So if you plan to downgrade from 2.0.x to 1.2.x, you can prepare your data provider before uninstalling the 2.0.x version by executing the following command from the configuration directory:
```shell
sftpgo revertprovider --to-version 4
```
Take a look at the CLI usage to see the supported values for the `--to-version` argument and to learn how to specify a different configuration file:
```shell
sftpgo revertprovider --help
```
The `revertprovider` command is not supported for the memory provider.
Please note that we only support the current release branch and the current main branch; if you find a bug, it is better to report it than to downgrade to an older, unsupported version.
## Users and folders management
After starting SFTPGo, you can manage users and folders using:
- the [web-based administration interface](./docs/web-admin.md)
- the [REST API](./docs/rest-api.md)
To support embedded data providers such as `bolt` and `SQLite`, we cannot have a CLI that writes users and folders directly to the data provider; you always have to use the REST API.
Full details for users, folders, admins and other resources are documented in the [OpenAPI](./openapi/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml). An illustrative sketch follows.
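As a sketch only, creating a user via the REST API could look like the following; treat the endpoint paths, field names and values as assumptions to verify against the OpenAPI schema linked above:
```bash
# Obtain a JWT using the admin credentials, then create a user.
# Requires curl and jq; credentials and paths are placeholders.
TOKEN=$(curl -s -u admin:password "http://127.0.0.1:8080/api/v2/token" | jq -r .access_token)
curl -s -X POST "http://127.0.0.1:8080/api/v2/users" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"username":"user1","password":"secret","home_dir":"/srv/sftpgo/data/user1","status":1,"permissions":{"/":["*"]}}'
```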
## Tutorials
Some step-by-step tutorials can be found in the [howto](./docs/howto "How-to") directory of the source tree.
## Authentication options
<details><summary>External authentication</summary>
Custom authentication methods can easily be added. SFTPGo supports external authentication modules, and writing a backend can be as simple as a few lines of shell script, as the sketch below shows. More information can be found in [external authentication](./docs/external-auth.md).
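A minimal sketch of such a hook, assuming the environment variable names and the JSON contract described in the external authentication documentation:
```bash
#!/bin/sh
# Hypothetical hook: SFTPGo passes the credentials via environment
# variables and reads a JSON user definition from stdout; an empty
# username means authentication failed. Verify the exact contract in
# ./docs/external-auth.md.
if [ "$SFTPGO_AUTHD_USERNAME" = "test_user" ] && [ "$SFTPGO_AUTHD_PASSWORD" = "test_password" ]; then
  echo '{"status":1,"username":"test_user","home_dir":"/tmp/test_user","permissions":{"/":["*"]}}'
else
  echo '{"username":""}'
fi
```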
</details>
<details><summary>Keyboard-interactive authentication</summary>
Keyboard-interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.
More information can be found in [keyboard-interactive](./docs/keyboard-interactive.md).
</details>
## Dynamic user creation or modification
A user can be created or modified by an external program just before the login. More information about this can be found in [dynamic user modification](./docs/dynamic-user-mod.md).
## Custom actions
SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other operations.
More information about custom actions can be found in [custom actions](./docs/custom-actions.md); a tiny sketch follows.
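A small command-hook sketch, assuming the `SFTPGO_ACTION*` environment variables described in the custom actions documentation:
```bash
#!/bin/sh
# Hypothetical notification hook: log completed uploads to syslog.
# Variable names should be verified against ./docs/custom-actions.md.
if [ "$SFTPGO_ACTION" = "upload" ]; then
  logger "sftpgo: $SFTPGO_ACTION_USERNAME uploaded $SFTPGO_ACTION_PATH"
fi
```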
## Virtual folders
Directories outside the user's home directory, or based on a different storage provider, can be exposed as virtual folders; detailed information can be found in [virtual folders](./docs/virtual-folders.md).
## Other hooks
You can get notified as soon as a new connection is established using the [post-connect hook](./docs/post-connect-hook.md), and after each login using the [post-login hook](./docs/post-login-hook.md). You can use your own hook to [check passwords](./docs/check-password-hook.md).
## Storage backends
### S3/GCP/Azure
Each user can be mapped to an [S3-compatible object storage](./docs/s3.md)/[Google Cloud Storage](./docs/google-cloud-storage.md)/[Azure Blob Storage](./docs/azure-blob-storage.md) bucket, or to a bucket virtual folder, exposed over SFTP/SCP/FTP/WebDAV.
### SFTP backend
Each user can be mapped to another SFTP server account or to a subfolder of it. More information can be found in [sftpfs](./docs/sftpfs.md).
### Encrypted backend
Data at-rest encryption is supported via the [cryptfs backend](./docs/dare.md).
### Other storage backends
Adding new storage backends is quite easy:
- implement the [Fs interface](./vfs/vfs.go#L28 "interface for filesystem backends")
- update the user method `GetFilesystem` to return the new backend
- update the web interface and the REST API CLI
- add the flags for the new storage backend to the `portable` mode
Anyway, some backends require a pay-per-use account (or they offer free accounts for a limited time period only). To be able to add support for such backends, or to review pull requests, please provide a test account. The test account must remain available long enough to maintain the backend and to run basic tests before each new release.
## Brute force protection
SFTPGo supports a built-in [defender](./docs/defender.md).
Alternatively, you can use the [connection failed logs](./docs/logs.md) for integration with tools such as [Fail2ban](http://www.fail2ban.org/). Example [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in the fail2ban directory.
## Account's configuration properties
Detailed information about account configuration properties can be found in [account](./docs/account.md).
## Performance
SFTPGo can easily saturate a Gigabit connection on low-end hardware with no special configuration; this is generally fine for most use cases.
A more in-depth analysis of performance can be found in [performance](./docs/performance.md).
## Release cadence
SFTPGo releases are feature-driven; we don't have a fixed, time-based schedule. As a rough estimate, you can expect one or two new releases per year.
## Acknowledgements
SFTPGo makes use of the third-party libraries listed in [go.mod](./go.mod).
We are very grateful to everyone who contributed ideas and/or pull requests.
Thank you to [ysura](https://www.ysura.com/) for granting me stable access to a test AWS S3 account.
## Sponsors
I'd like to make SFTPGo a sustainable long-term project; your [sponsorship](https://github.com/sponsors/drakkan) really helps! :heart:
Thank you to our sponsors!
[<img src="https://www.7digital.com/wp-content/themes/sevendigital/images/top_logo.png" alt="7digital logo">](https://www.7digital.com/)
## License
GNU AGPLv3

View file

@ -2,9 +2,11 @@
## Supported Versions
Only the current release of the software is actively supported.
[Contact us](mailto:support@sftpgo.com) if you need early security patches and enterprise-grade security.
Only the current release of the software is actively supported. If you need
help backporting fixes into an older release, feel free to ask.
## Reporting a Vulnerability
To report (possible) security issues in SFTPGo, please either send a mail to the [SFTPGo Team](mailto:support@sftpgo.com) or use Github's [private reporting feature](https://github.com/drakkan/sftpgo/security/advisories/new).
Email your vulnerability information to SFTPGo's maintainer:
Nicola Murino <nicola.murino@gmail.com>

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package acme

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// Package acme provides automatic access to certificates from Let's Encrypt and any other ACME-based CA
// The code here is largely copied from https://github.com/go-acme/lego/tree/master/cmd
@ -30,7 +30,6 @@ import (
"net/url"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"time"
@ -44,17 +43,15 @@ import (
"github.com/go-acme/lego/v4/log"
"github.com/go-acme/lego/v4/providers/http/webroot"
"github.com/go-acme/lego/v4/registration"
"github.com/hashicorp/go-retryablehttp"
"github.com/robfig/cron/v3"
"github.com/drakkan/sftpgo/v2/internal/common"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/ftpd"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/telemetry"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/internal/webdavd"
"github.com/drakkan/sftpgo/v2/ftpd"
"github.com/drakkan/sftpgo/v2/httpd"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/telemetry"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/webdavd"
)
const (
@ -62,27 +59,12 @@ const (
)
var (
config *Configuration
initialConfig Configuration
scheduler *cron.Cron
logMode int
supportedKeyTypes = []string{
string(certcrypto.EC256),
string(certcrypto.EC384),
string(certcrypto.RSA2048),
string(certcrypto.RSA3072),
string(certcrypto.RSA4096),
string(certcrypto.RSA8192),
}
fnReloadHTTPDCerts func() error
config *Configuration
scheduler *cron.Cron
logMode int
)
// SetReloadHTTPDCertsFn set the function to call to reload HTTPD certificates
func SetReloadHTTPDCertsFn(fn func() error) {
fnReloadHTTPDCerts = fn
}
// GetCertificates tries to obtain the certificates using the global configuration
// GetCertificates tries to obtain the certificates for the configured domains
func GetCertificates() error {
if config == nil {
return errors.New("acme is disabled")
@ -90,83 +72,6 @@ func GetCertificates() error {
return config.getCertificates()
}
// GetCertificatesForConfig tries to obtain the certificates using the provided
// configuration override. This is a NOOP if we already have certificates
func GetCertificatesForConfig(c *dataprovider.ACMEConfigs, configDir string) error {
if c.Domain == "" {
acmeLog(logger.LevelDebug, "no domain configured, nothing to do")
return nil
}
config := mergeConfig(getConfiguration(), c)
if err := config.Initialize(configDir); err != nil {
return err
}
hasCerts, err := config.hasCertificates(c.Domain)
if err != nil {
return fmt.Errorf("unable to check if we already have certificates for domain %q: %w", c.Domain, err)
}
if hasCerts {
return nil
}
return config.getCertificates()
}
// GetHTTP01WebRoot returns the web root for HTTP-01 challenge
func GetHTTP01WebRoot() string {
return initialConfig.HTTP01Challenge.WebRoot
}
func mergeConfig(config Configuration, c *dataprovider.ACMEConfigs) Configuration {
config.Domains = []string{c.Domain}
config.Email = c.Email
config.HTTP01Challenge.Port = c.HTTP01Challenge.Port
config.TLSALPN01Challenge.Port = 0
return config
}
// getConfiguration returns the configuration set using config file and env vars
func getConfiguration() Configuration {
return initialConfig
}
func loadProviderConf(c Configuration) (Configuration, error) {
configs, err := dataprovider.GetConfigs()
if err != nil {
return c, fmt.Errorf("unable to load config from provider: %w", err)
}
configs.SetNilsToEmpty()
if configs.ACME.Domain == "" {
return c, nil
}
return mergeConfig(c, configs.ACME), nil
}
// Initialize validates and set the configuration
func Initialize(c Configuration, configDir string, checkRenew bool) error {
config = nil
initialConfig = c
c, err := loadProviderConf(c)
if err != nil {
return err
}
util.CertsBasePath = ""
setLogMode(checkRenew)
if err := c.Initialize(configDir); err != nil {
return err
}
if len(c.Domains) == 0 {
return nil
}
util.CertsBasePath = c.CertsPath
acmeLog(logger.LevelInfo, "configured domains: %+v, certs base path %q", c.Domains, c.CertsPath)
config = &c
if checkRenew {
return startScheduler()
}
return nil
}
// HTTP01Challenge defines the configuration for HTTP-01 challenge type
type HTTP01Challenge struct {
Port int `json:"port" mapstructure:"port"`
@ -235,55 +140,70 @@ type Configuration struct {
tempDir string
}
// Initialize validates and initialize the configuration
func (c *Configuration) Initialize(configDir string) error {
// Initialize validates and set the configuration
func (c *Configuration) Initialize(configDir string, checkRenew bool) error {
config = nil
setLogMode(checkRenew)
c.checkDomains()
if len(c.Domains) == 0 {
acmeLog(logger.LevelInfo, "no domains configured, acme disabled")
return nil
}
if c.Email == "" || !util.IsEmailValid(c.Email) {
return util.NewI18nError(
fmt.Errorf("invalid email address %q", c.Email),
util.I18nErrorInvalidEmail,
)
return fmt.Errorf("invalid email address %#v", c.Email)
}
if c.RenewDays < 1 {
return fmt.Errorf("invalid number of days remaining before renewal: %d", c.RenewDays)
}
if !slices.Contains(supportedKeyTypes, c.KeyType) {
return fmt.Errorf("invalid key type %q", c.KeyType)
supportedKeyTypes := []string{
string(certcrypto.EC256),
string(certcrypto.EC384),
string(certcrypto.RSA2048),
string(certcrypto.RSA4096),
string(certcrypto.RSA8192),
}
if !util.Contains(supportedKeyTypes, c.KeyType) {
return fmt.Errorf("invalid key type %#v", c.KeyType)
}
caURL, err := url.Parse(c.CAEndpoint)
if err != nil {
return fmt.Errorf("invalid CA endopoint: %w", err)
}
if !util.IsFileInputValid(c.CertsPath) {
return fmt.Errorf("invalid certs path %q", c.CertsPath)
return fmt.Errorf("invalid certs path %#v", c.CertsPath)
}
if !filepath.IsAbs(c.CertsPath) {
c.CertsPath = filepath.Join(configDir, c.CertsPath)
}
err = os.MkdirAll(c.CertsPath, 0700)
if err != nil {
return fmt.Errorf("unable to create certs path %q: %w", c.CertsPath, err)
return fmt.Errorf("unable to create certs path %#v: %w", c.CertsPath, err)
}
c.tempDir = filepath.Join(c.CertsPath, "temp")
err = os.MkdirAll(c.CertsPath, 0700)
if err != nil {
return fmt.Errorf("unable to create certs temp path %q: %w", c.tempDir, err)
return fmt.Errorf("unable to create certs temp path %#v: %w", c.tempDir, err)
}
serverPath := strings.NewReplacer(":", "_", "/", string(os.PathSeparator)).Replace(caURL.Host)
accountPath := filepath.Join(c.CertsPath, serverPath)
err = os.MkdirAll(accountPath, 0700)
if err != nil {
return fmt.Errorf("unable to create account path %q: %w", accountPath, err)
return fmt.Errorf("unable to create account path %#v: %w", accountPath, err)
}
c.accountConfigPath = filepath.Join(accountPath, c.Email+".json")
c.accountKeyPath = filepath.Join(accountPath, c.Email+".key")
c.lockPath = filepath.Join(c.CertsPath, "lock")
return c.validateChallenges()
if err = c.validateChallenges(); err != nil {
return err
}
acmeLog(logger.LevelInfo, "configured domains: %+v", c.Domains)
config = c
if checkRenew {
return startScheduler()
}
return nil
}
func (c *Configuration) validateChallenges() error {
@ -293,7 +213,10 @@ func (c *Configuration) validateChallenges() error {
if err := c.HTTP01Challenge.validate(); err != nil {
return err
}
return c.TLSALPN01Challenge.validate()
if err := c.TLSALPN01Challenge.validate(); err != nil {
return err
}
return nil
}
func (c *Configuration) checkDomains() {
@ -314,10 +237,10 @@ func (c *Configuration) setLockTime() error {
lockTime := fmt.Sprintf("%v", util.GetTimeAsMsSinceEpoch(time.Now()))
err := os.WriteFile(c.lockPath, []byte(lockTime), 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save lock time to %q: %v", c.lockPath, err)
acmeLog(logger.LevelError, "unable to save lock time to %#v: %v", c.lockPath, err)
return fmt.Errorf("unable to save lock time: %w", err)
}
acmeLog(logger.LevelDebug, "lock time saved: %q", lockTime)
acmeLog(logger.LevelDebug, "lock time saved: %#v", lockTime)
return nil
}
@ -325,13 +248,13 @@ func (c *Configuration) getLockTime() (time.Time, error) {
content, err := os.ReadFile(c.lockPath)
if err != nil {
if os.IsNotExist(err) {
acmeLog(logger.LevelDebug, "lock file %q not found", c.lockPath)
acmeLog(logger.LevelDebug, "lock file %#v not found", c.lockPath)
return time.Time{}, nil
}
acmeLog(logger.LevelError, "unable to read lock file %q: %v", c.lockPath, err)
acmeLog(logger.LevelError, "unable to read lock file %#v: %v", c.lockPath, err)
return time.Time{}, err
}
msec, err := strconv.ParseInt(strings.TrimSpace(util.BytesToString(content)), 10, 64)
msec, err := strconv.ParseInt(strings.TrimSpace(string(content)), 10, 64)
if err != nil {
acmeLog(logger.LevelError, "unable to parse lock time: %v", err)
return time.Time{}, fmt.Errorf("unable to parse lock time: %w", err)
@ -346,7 +269,7 @@ func (c *Configuration) saveAccount(account *account) error {
}
err = os.WriteFile(c.accountConfigPath, jsonBytes, 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save account to file %q: %v", c.accountConfigPath, err)
acmeLog(logger.LevelError, "unable to save account to file %#v: %v", c.accountConfigPath, err)
return fmt.Errorf("unable to save account: %w", err)
}
return nil
@ -361,7 +284,7 @@ func (c *Configuration) getAccount(privateKey crypto.PrivateKey) (account, error
var account account
fileBytes, err := os.ReadFile(c.accountConfigPath)
if err != nil {
acmeLog(logger.LevelError, "unable to read account from file %q: %v", c.accountConfigPath, err)
acmeLog(logger.LevelError, "unable to read account from file %#v: %v", c.accountConfigPath, err)
return account, fmt.Errorf("unable to read account from file: %w", err)
}
err = json.Unmarshal(fileBytes, &account)
@ -390,15 +313,11 @@ func (c *Configuration) getAccount(privateKey crypto.PrivateKey) (account, error
func (c *Configuration) loadPrivateKey() (crypto.PrivateKey, error) {
keyBytes, err := os.ReadFile(c.accountKeyPath)
if err != nil {
acmeLog(logger.LevelError, "unable to read account key from file %q: %v", c.accountKeyPath, err)
acmeLog(logger.LevelError, "unable to read account key from file %#v: %v", c.accountKeyPath, err)
return nil, fmt.Errorf("unable to read account key: %w", err)
}
keyBlock, _ := pem.Decode(keyBytes)
if keyBlock == nil {
acmeLog(logger.LevelError, "unable to parse private key from file %q: pem decoding failed", c.accountKeyPath)
return nil, errors.New("pem decoding failed")
}
var privateKey crypto.PrivateKey
switch keyBlock.Type {
@ -407,10 +326,10 @@ func (c *Configuration) loadPrivateKey() (crypto.PrivateKey, error) {
case "EC PRIVATE KEY":
privateKey, err = x509.ParseECPrivateKey(keyBlock.Bytes)
default:
err = fmt.Errorf("unknown private key type %q", keyBlock.Type)
err = fmt.Errorf("unknown private key type %#v", keyBlock.Type)
}
if err != nil {
acmeLog(logger.LevelError, "unable to parse private key from file %q: %v", c.accountKeyPath, err)
acmeLog(logger.LevelError, "unable to parse private key from file %#v: %v", c.accountKeyPath, err)
return privateKey, fmt.Errorf("unable to parse private key: %w", err)
}
return privateKey, nil
@ -424,7 +343,7 @@ func (c *Configuration) generatePrivateKey() (crypto.PrivateKey, error) {
}
certOut, err := os.Create(c.accountKeyPath)
if err != nil {
acmeLog(logger.LevelError, "unable to save private key to file %q: %v", c.accountKeyPath, err)
acmeLog(logger.LevelError, "unable to save private key to file %#v: %v", c.accountKeyPath, err)
return nil, fmt.Errorf("unable to save private key: %w", err)
}
defer certOut.Close()
@ -443,25 +362,25 @@ func (c *Configuration) generatePrivateKey() (crypto.PrivateKey, error) {
func (c *Configuration) getPrivateKey() (crypto.PrivateKey, error) {
_, err := os.Stat(c.accountKeyPath)
if err != nil && os.IsNotExist(err) {
acmeLog(logger.LevelDebug, "private key file %q does not exist, generating new private key", c.accountKeyPath)
acmeLog(logger.LevelDebug, "private key file %#v does not exist, generating new private key", c.accountKeyPath)
return c.generatePrivateKey()
}
acmeLog(logger.LevelDebug, "loading private key from file %q, stat error: %v", c.accountKeyPath, err)
acmeLog(logger.LevelDebug, "loading private key from file %#v, stat error: %v", c.accountKeyPath, err)
return c.loadPrivateKey()
}
func (c *Configuration) loadCertificatesForDomain(domain string) ([]*x509.Certificate, error) {
domain = util.SanitizeDomain(domain)
acmeLog(logger.LevelDebug, "loading certificates for domain %q", domain)
domain = sanitizedDomain(domain)
acmeLog(logger.LevelDebug, "loading certificates for domain %#v", domain)
content, err := os.ReadFile(filepath.Join(c.CertsPath, domain+".crt"))
if err != nil {
acmeLog(logger.LevelError, "unable to load certificates for domain %q: %v", domain, err)
return nil, fmt.Errorf("unable to load certificates for domain %q: %w", domain, err)
acmeLog(logger.LevelError, "unable to load certificates for domain %#v: %v", domain, err)
return nil, fmt.Errorf("unable to load certificates for domain %#v: %w", domain, err)
}
certs, err := certcrypto.ParsePEMBundle(content)
if err != nil {
acmeLog(logger.LevelError, "unable to parse certificates for domain %q: %v", domain, err)
return certs, fmt.Errorf("unable to parse certificates for domain %q: %w", domain, err)
acmeLog(logger.LevelError, "unable to parse certificates for domain %#v: %v", domain, err)
return certs, fmt.Errorf("unable to parse certificates for domain %#v: %w", domain, err)
}
return certs, nil
}
@ -473,7 +392,7 @@ func (c *Configuration) needRenewal(x509Cert *x509.Certificate, domain string) b
}
notAfter := int(time.Until(x509Cert.NotAfter).Hours() / 24.0)
if notAfter > c.RenewDays {
acmeLog(logger.LevelDebug, "the certificate for domain %q expires in %d days, no renewal", domain, notAfter)
acmeLog(logger.LevelDebug, "the certificate for domain %#v expires in %d days, no renewal", domain, notAfter)
return false
}
return true
@ -491,15 +410,7 @@ func (c *Configuration) setup() (*account, *lego.Client, error) {
config := lego.NewConfig(&account)
config.CADirURL = c.CAEndpoint
config.Certificate.KeyType = certcrypto.KeyType(c.KeyType)
config.Certificate.OverallRequestLimit = 6
config.UserAgent = version.GetServerVersion("/", false)
retryClient := retryablehttp.NewClient()
retryClient.RetryMax = 5
retryClient.HTTPClient = config.HTTPClient
config.HTTPClient = retryClient.StandardClient()
config.UserAgent = fmt.Sprintf("SFTPGo/%v", version.Get().Version)
client, err := lego.NewClient(config)
if err != nil {
acmeLog(logger.LevelError, "unable to get ACME client: %v", err)
@ -516,10 +427,10 @@ func (c *Configuration) setupChalleges(client *lego.Client) error {
client.Challenge.Remove(challenge.DNS01)
if c.HTTP01Challenge.isEnabled() {
if c.HTTP01Challenge.WebRoot != "" {
acmeLog(logger.LevelDebug, "configuring HTTP-01 web root challenge, path %q", c.HTTP01Challenge.WebRoot)
acmeLog(logger.LevelDebug, "configuring HTTP-01 web root challenge, path %#v", c.HTTP01Challenge.WebRoot)
providerServer, err := webroot.NewHTTPProvider(c.HTTP01Challenge.WebRoot)
if err != nil {
acmeLog(logger.LevelError, "unable to create HTTP-01 web root challenge provider from path %q: %v",
acmeLog(logger.LevelError, "unable to create HTTP-01 web root challenge provider from path %#v: %v",
c.HTTP01Challenge.WebRoot, err)
return fmt.Errorf("unable to create HTTP-01 web root challenge provider: %w", err)
}
@ -565,13 +476,7 @@ func (c *Configuration) register(client *lego.Client) (*registration.Resource, e
func (c *Configuration) tryRecoverRegistration(privateKey crypto.PrivateKey) (*registration.Resource, error) {
config := lego.NewConfig(&account{key: privateKey})
config.CADirURL = c.CAEndpoint
config.UserAgent = version.GetServerVersion("/", false)
retryClient := retryablehttp.NewClient()
retryClient.RetryMax = 5
retryClient.HTTPClient = config.HTTPClient
config.HTTPClient = retryClient.StandardClient()
config.UserAgent = fmt.Sprintf("SFTPGo/%v", version.Get().Version)
client, err := lego.NewClient(config)
if err != nil {
@ -582,20 +487,15 @@ func (c *Configuration) tryRecoverRegistration(privateKey crypto.PrivateKey) (*r
return client.Registration.ResolveAccountByKey()
}
func (c *Configuration) getCrtPath(domain string) string {
return filepath.Join(c.CertsPath, domain+".crt")
}
func (c *Configuration) getKeyPath(domain string) string {
return filepath.Join(c.CertsPath, domain+".key")
}
func (c *Configuration) getResourcePath(domain string) string {
return filepath.Join(c.CertsPath, domain+".json")
}
func (c *Configuration) obtainAndSaveCertificate(client *lego.Client, domain string) error {
domains := getDomains(domain)
var domains []string
for _, d := range strings.Split(domain, ",") {
d = strings.TrimSpace(d)
if d != "" {
domains = append(domains, d)
}
}
acmeLog(logger.LevelInfo, "requesting certificates for domains %+v", domains)
request := certificate.ObtainRequest{
Domains: domains,
@ -609,15 +509,15 @@ func (c *Configuration) obtainAndSaveCertificate(client *lego.Client, domain str
acmeLog(logger.LevelError, "unable to obtain certificates for domains %+v: %v", domains, err)
return fmt.Errorf("unable to obtain certificates: %w", err)
}
domain = util.SanitizeDomain(domain)
err = os.WriteFile(c.getCrtPath(domain), cert.Certificate, 0600)
domain = sanitizedDomain(domain)
err = os.WriteFile(filepath.Join(c.CertsPath, domain+".crt"), cert.Certificate, 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save certificate for domain %s: %v", domain, err)
acmeLog(logger.LevelError, "unable to save certificate for domain %v: %v", domain, err)
return fmt.Errorf("unable to save certificate: %w", err)
}
err = os.WriteFile(c.getKeyPath(domain), cert.PrivateKey, 0600)
err = os.WriteFile(filepath.Join(c.CertsPath, domain+".key"), cert.PrivateKey, 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save private key for domain %s: %v", domain, err)
acmeLog(logger.LevelError, "unable to save private key for domain %v: %v", domain, err)
return fmt.Errorf("unable to save private key: %w", err)
}
jsonBytes, err := json.MarshalIndent(cert, "", "\t")
@ -625,7 +525,7 @@ func (c *Configuration) obtainAndSaveCertificate(client *lego.Client, domain str
acmeLog(logger.LevelError, "unable to marshal certificate resources for domain %v: %v", domain, err)
return err
}
err = os.WriteFile(c.getResourcePath(domain), jsonBytes, 0600)
err = os.WriteFile(filepath.Join(c.CertsPath, domain+".json"), jsonBytes, 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save certificate resources for domain %v: %v", domain, err)
return fmt.Errorf("unable to save certificate resources: %w", err)
@ -635,25 +535,6 @@ func (c *Configuration) obtainAndSaveCertificate(client *lego.Client, domain str
return nil
}
// hasCertificates returns true if certificates for the specified domain has already been issued
func (c *Configuration) hasCertificates(domain string) (bool, error) {
domain = util.SanitizeDomain(domain)
if _, err := os.Stat(c.getCrtPath(domain)); err != nil {
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
if _, err := os.Stat(c.getKeyPath(domain)); err != nil {
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
return true, nil
}
// getCertificates tries to obtain the certificates for the configured domains
func (c *Configuration) getCertificates() error {
account, client, err := c.setup()
if err != nil {
@ -680,24 +561,6 @@ func (c *Configuration) getCertificates() error {
return nil
}
func (c *Configuration) notifyCertificateRenewal(domain string, err error) {
if domain == "" {
domain = strings.Join(c.Domains, ",")
}
params := common.EventParams{
Name: domain,
Event: "Certificate renewal",
Timestamp: time.Now(),
}
if err != nil {
params.Status = 2
params.AddError(err)
} else {
params.Status = 1
}
common.HandleCertificateEvent(params)
}
func (c *Configuration) renewCertificates() error {
lockTime, err := c.getLockTime()
if err != nil {
@ -710,28 +573,22 @@ func (c *Configuration) renewCertificates() error {
}
err = c.setLockTime()
if err != nil {
c.notifyCertificateRenewal("", err)
return err
}
account, client, err := c.setup()
if err != nil {
c.notifyCertificateRenewal("", err)
return err
}
if account.Registration == nil {
acmeLog(logger.LevelError, "cannot renew certificates, your account is not registered")
err = errors.New("cannot renew certificates, your account is not registered")
c.notifyCertificateRenewal("", err)
return err
return fmt.Errorf("cannot renew certificates, your account is not registered")
}
var errRenew error
needReload := false
for _, domain := range c.Domains {
certificates, err := c.loadCertificatesForDomain(domain)
if err != nil {
c.notifyCertificateRenewal(domain, err)
errRenew = err
continue
return err
}
cert := certificates[0]
if !c.needRenewal(cert, domain) {
@ -739,10 +596,8 @@ func (c *Configuration) renewCertificates() error {
}
err = c.obtainAndSaveCertificate(client, domain)
if err != nil {
c.notifyCertificateRenewal(domain, err)
errRenew = err
} else {
c.notifyCertificateRenewal(domain, nil)
needReload = true
}
}
@ -750,10 +605,8 @@ func (c *Configuration) renewCertificates() error {
// at least one certificate has been renewed, sends a reload to all services that may be using certificates
err = ftpd.ReloadCertificateMgr()
acmeLog(logger.LevelInfo, "ftpd certificate manager reloaded , error: %v", err)
if fnReloadHTTPDCerts != nil {
err = fnReloadHTTPDCerts()
acmeLog(logger.LevelInfo, "httpd certificates manager reloaded , error: %v", err)
}
err = httpd.ReloadCertificateMgr()
acmeLog(logger.LevelInfo, "httpd certificates manager reloaded , error: %v", err)
err = webdavd.ReloadCertificateMgr()
acmeLog(logger.LevelInfo, "webdav certificates manager reloaded , error: %v", err)
err = telemetry.ReloadCertificateMgr()
@ -775,21 +628,8 @@ func isDomainValid(domain string) (string, bool) {
return domain, isValid
}
func getDomains(domain string) []string {
var domains []string
delimiter := ","
if !strings.Contains(domain, ",") && strings.Contains(domain, " ") {
delimiter = " "
}
for _, d := range strings.Split(domain, delimiter) {
d = strings.TrimSpace(d)
if d != "" {
domains = append(domains, d)
}
}
return util.RemoveDuplicates(domains, false)
func sanitizedDomain(domain string) string {
return strings.NewReplacer(":", "_", "*", "_", ",", "_").Replace(domain)
}
func stopScheduler() {
@ -802,8 +642,10 @@ func stopScheduler() {
func startScheduler() error {
stopScheduler()
rand.Seed(time.Now().UnixNano())
randSecs := rand.Intn(59)
scheduler = cron.New(cron.WithLocation(time.UTC), cron.WithLogger(cron.DiscardLogger))
scheduler = cron.New()
_, err := scheduler.AddFunc(fmt.Sprintf("@every 12h0m%ds", randSecs), renewCertificates)
if err != nil {
return fmt.Errorf("unable to schedule certificates renewal: %w", err)

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -20,11 +20,10 @@ import (
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/acme"
"github.com/drakkan/sftpgo/v2/internal/config"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/acme"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@ -41,38 +40,19 @@ Certificates are saved in the configured "certs_path".
After this initial step, the certificates are automatically checked and
renewed by the SFTPGo service
`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.ErrorToConsole("Unable to initialize ACME, config load error: %v", err)
logger.ErrorToConsole("Unable to initialize data provider, config load error: %v", err)
return
}
kmsConfig := config.GetKMSConfig()
err = kmsConfig.Initialize()
if err != nil {
logger.ErrorToConsole("unable to initialize KMS: %v", err)
os.Exit(1)
}
mfaConfig := config.GetMFAConfig()
err = mfaConfig.Initialize()
if err != nil {
logger.ErrorToConsole("Unable to initialize MFA: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
err = dataprovider.Initialize(providerConf, configDir, false)
if err != nil {
logger.ErrorToConsole("error initializing data provider: %v", err)
os.Exit(1)
}
acmeConfig := config.GetACMEConfig()
err = acme.Initialize(acmeConfig, configDir, false)
err = acmeConfig.Initialize(configDir, false)
if err != nil {
logger.ErrorToConsole("Unable to initialize ACME configuration: %v", err)
os.Exit(1)
}
if err = acme.GetCertificates(); err != nil {
logger.ErrorToConsole("Cannot get certificates: %v", err)

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build awscontainer
// +build awscontainer

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !awscontainer
// +build !awscontainer
@ -21,4 +21,4 @@ import (
"github.com/spf13/cobra"
)
func addAWSContainerFlags(_ *cobra.Command) {}
func addAWSContainerFlags(cmd *cobra.Command) {}

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -53,7 +53,7 @@ MacOS:
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, _ []string) error {
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenBashCompletionV2(os.Stdout, true)
},
}
@ -79,7 +79,7 @@ macOS:
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, _ []string) error {
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenZshCompletion(os.Stdout)
},
}
@ -100,7 +100,7 @@ $ sftpgo gen completion fish > ~/.config/fish/completions/sftpgo.fish
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, _ []string) error {
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenFishCompletion(os.Stdout, true)
},
}
@ -118,7 +118,7 @@ To load completions for every new session, add the output of the above command
to your powershell profile.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, _ []string) error {
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
},
}

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -24,8 +24,8 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
)
var (
@ -38,7 +38,7 @@ command-line interface.
By default, it creates the man page files in the "man" directory under the
current directory.
`,
Run: func(cmd *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if _, err := os.Stat(manDir); errors.Is(err, fs.ErrNotExist) {

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -21,11 +21,11 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/internal/config"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@ -50,7 +50,7 @@ $ sftpgo initprovider
Any defined action is ignored.
Please take a look at the usage below to customize the options.`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
@ -65,18 +65,12 @@ Please take a look at the usage below to customize the options.`,
logger.ErrorToConsole("Unable to initialize KMS: %v", err)
os.Exit(1)
}
mfaConfig := config.GetMFAConfig()
err = mfaConfig.Initialize()
if err != nil {
logger.ErrorToConsole("Unable to initialize MFA: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
// ignore actions
providerConf.Actions.Hook = ""
providerConf.Actions.ExecuteFor = nil
providerConf.Actions.ExecuteOn = nil
logger.InfoToConsole("Initializing provider: %q config file: %q", providerConf.Driver, viper.ConfigFileUsed())
logger.InfoToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.InitializeDatabase(providerConf, configDir)
if err == nil {
logger.InfoToConsole("Data provider successfully initialized/updated")

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -21,8 +21,8 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@ -35,7 +35,7 @@ line flags simply use:
sftpgo service install
Please take a look at the usage below to customize the startup options`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
s := service.Service{
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
@ -44,7 +44,7 @@ Please take a look at the usage below to customize the startup options`,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogLevel: logLevel,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
@ -99,9 +99,8 @@ func getCustomServeFlags() []string {
result = append(result, "--"+logMaxAgeFlag)
result = append(result, strconv.Itoa(logMaxAge))
}
if logLevel != defaultLogLevel {
result = append(result, "--"+logLevelFlag)
result = append(result, logLevel)
if logVerbose != defaultLogVerbose {
result = append(result, "--"+logVerboseFlag+"=false")
}
if logUTCTime != defaultLogUTCTime {
result = append(result, "--"+logUTCTimeFlag+"=true")
@ -109,9 +108,5 @@ func getCustomServeFlags() []string {
if logCompress != defaultLogCompress {
result = append(result, "--"+logCompressFlag+"=true")
}
if graceTime != defaultGraceTime {
result = append(result, "--"+graceTimeFlag)
result = append(result, strconv.Itoa(graceTime))
}
return result
}

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !noportable
// +build !noportable
@ -27,25 +27,25 @@ import (
"github.com/sftpgo/sdk"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/common"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/kms"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/internal/sftpd"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
var (
directoryToServe string
portableSFTPDPort int
portableAdvertiseService bool
portableAdvertiseCredentials bool
portableUsername string
portablePassword string
portablePasswordFile string
portableStartDir string
portableLogFile string
portableLogLevel string
portableLogVerbose bool
portableLogUTCTime bool
portablePublicKeys []string
portablePermissions []string
@ -65,7 +65,6 @@ var (
portableS3ULPartSize int
portableS3ULConcurrency int
portableS3ForcePathStyle bool
portableS3SkipTLSVerify bool
portableGCSBucket string
portableGCSCredentialsFile string
portableGCSAutoCredentials int
@ -77,9 +76,6 @@ var (
portableWebDAVPort int
portableWebDAVCert string
portableWebDAVKey string
portableHTTPPort int
portableHTTPSCert string
portableHTTPSKey string
portableAzContainer string
portableAzAccountName string
portableAzAccountKey string
@ -110,9 +106,9 @@ use:
$ sftpgo portable
Please take a look at the usage below to customize the serving parameters`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
portableDir := directoryToServe
fsProvider := dataprovider.GetProviderFromValue(convertFsProvider())
fsProvider := sdk.GetProviderByName(portableFsProvider)
if !filepath.IsAbs(portableDir) {
if fsProvider == sdk.LocalFilesystemProvider {
portableDir, _ = filepath.Abs(portableDir)
@ -152,12 +148,12 @@ Please take a look at the usage below to customize the serving parameters`,
_, err := common.NewCertManager(keyPairs, filepath.Clean(defaultConfigDir),
"FTP portable")
if err != nil {
fmt.Printf("Unable to load FTPS key pair, cert file %q key file %q error: %v\n",
fmt.Printf("Unable to load FTPS key pair, cert file %#v key file %#v error: %v\n",
portableFTPSCert, portableFTPSKey, err)
os.Exit(1)
}
}
if portableWebDAVPort >= 0 && portableWebDAVCert != "" && portableWebDAVKey != "" {
if portableWebDAVPort > 0 && portableWebDAVCert != "" && portableWebDAVKey != "" {
keyPairs := []common.TLSKeyPair{
{
Cert: portableWebDAVCert,
@ -168,53 +164,27 @@ Please take a look at the usage below to customize the serving parameters`,
_, err := common.NewCertManager(keyPairs, filepath.Clean(defaultConfigDir),
"WebDAV portable")
if err != nil {
fmt.Printf("Unable to load WebDAV key pair, cert file %q key file %q error: %v\n",
fmt.Printf("Unable to load WebDAV key pair, cert file %#v key file %#v error: %v\n",
portableWebDAVCert, portableWebDAVKey, err)
os.Exit(1)
}
}
if portableHTTPPort >= 0 && portableHTTPSCert != "" && portableHTTPSKey != "" {
keyPairs := []common.TLSKeyPair{
{
Cert: portableHTTPSCert,
Key: portableHTTPSKey,
ID: common.DefaultTLSKeyPaidID,
},
}
_, err := common.NewCertManager(keyPairs, filepath.Clean(defaultConfigDir),
"HTTP portable")
if err != nil {
fmt.Printf("Unable to load HTTPS key pair, cert file %q key file %q error: %v\n",
portableHTTPSCert, portableHTTPSKey, err)
os.Exit(1)
}
}
pwd := portablePassword
if portablePasswordFile != "" {
content, err := os.ReadFile(portablePasswordFile)
if err != nil {
fmt.Printf("Unable to read password file %q: %v", portablePasswordFile, err)
os.Exit(1)
}
pwd = strings.TrimSpace(util.BytesToString(content))
}
service.SetGraceTime(graceTime)
service := service.Service{
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
ConfigDir: filepath.Clean(defaultConfigDir),
ConfigFile: defaultConfigFile,
LogFilePath: portableLogFile,
LogMaxSize: defaultLogMaxSize,
LogMaxBackups: defaultLogMaxBackup,
LogMaxAge: defaultLogMaxAge,
LogCompress: defaultLogCompress,
LogLevel: portableLogLevel,
LogVerbose: portableLogVerbose,
LogUTCTime: portableLogUTCTime,
Shutdown: make(chan bool),
PortableMode: 1,
PortableUser: dataprovider.User{
BaseUser: sdk.BaseUser{
Username: portableUsername,
Password: pwd,
Password: portablePassword,
PublicKeys: portablePublicKeys,
Permissions: permissions,
HomeDir: portableDir,
@ -227,7 +197,7 @@ Please take a look at the usage below to customize the serving parameters`,
},
},
FsConfig: vfs.Filesystem{
Provider: fsProvider,
Provider: sdk.GetProviderByName(portableFsProvider),
S3Config: vfs.S3FsConfig{
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: portableS3Bucket,
@ -241,7 +211,6 @@ Please take a look at the usage below to customize the serving parameters`,
UploadPartSize: int64(portableS3ULPartSize),
UploadConcurrency: portableS3ULConcurrency,
ForcePathStyle: portableS3ForcePathStyle,
SkipTLSVerify: portableS3SkipTLSVerify,
},
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
},
@ -282,17 +251,14 @@ Please take a look at the usage below to customize the serving parameters`,
DisableCouncurrentReads: portableSFTPDisableConcurrentReads,
BufferSize: portableSFTPDBufferSize,
},
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
KeyPassphrase: kms.NewEmptySecret(),
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
},
},
},
}
err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableHTTPPort,
portableSSHCommands, portableFTPSCert, portableFTPSKey, portableWebDAVCert, portableWebDAVKey,
portableHTTPSCert, portableHTTPSKey)
if err == nil {
if err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableSSHCommands, portableAdvertiseService,
portableAdvertiseCredentials, portableFTPSCert, portableFTPSKey, portableWebDAVCert, portableWebDAVKey); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
@ -319,9 +285,7 @@ path`)
< 0 disabled`)
portableCmd.Flags().IntVar(&portableWebDAVPort, "webdav-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableHTTPPort, "httpd-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().StringSliceVar(&portableSSHCommands, "ssh-commands", sftpd.GetDefaultSSHCommands(),
portableCmd.Flags().StringSliceVarP(&portableSSHCommands, "ssh-commands", "c", sftpd.GetDefaultSSHCommands(),
`SSH commands to enable.
"*" means any supported SSH command
including scp
@ -330,15 +294,8 @@ including scp
value`)
portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", `Leave empty to use an auto generated
value`)
portableCmd.Flags().StringVar(&portablePasswordFile, "password-file", "", `Read the password from the specified
file path. Leave empty to use an auto
generated value`)
portableCmd.Flags().StringVarP(&portableLogFile, logFilePathFlag, "l", "", "Leave empty to disable logging")
portableCmd.Flags().StringVar(&portableLogLevel, logLevelFlag, defaultLogLevel, `Set the log level.
Supported values:
debug, info, warn, error.
`)
portableCmd.Flags().BoolVarP(&portableLogVerbose, logVerboseFlag, "v", false, "Enable verbose logs")
portableCmd.Flags().BoolVar(&portableLogUTCTime, logUTCTimeFlag, false, "Use UTC time for logging")
portableCmd.Flags().StringSliceVarP(&portablePublicKeys, "public-key", "k", []string{}, "")
portableCmd.Flags().StringSliceVarP(&portablePermissions, "permissions", "g", []string{"list", "download"},
@ -354,6 +311,14 @@ For example: "/somedir::*.jpg,a*b?.png"`)
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
portableCmd.Flags().BoolVarP(&portableAdvertiseService, "advertise-service", "S", false,
`Advertise configured services using
multicast DNS`)
portableCmd.Flags().BoolVarP(&portableAdvertiseCredentials, "advertise-credentials", "C", false,
`If the SFTP/FTP service is
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
portableCmd.Flags().StringVarP(&portableFsProvider, "fs-provider", "f", "osfs", `osfs => local filesystem (legacy value: 0)
s3fs => AWS S3 compatible (legacy: 1)
gcsfs => Google Cloud Storage (legacy: 2)
@ -376,13 +341,6 @@ prefix and its contents`)
portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().BoolVar(&portableS3ForcePathStyle, "s3-force-path-style", false, `Force path style bucket URL`)
portableCmd.Flags().BoolVar(&portableS3SkipTLSVerify, "s3-skip-tls-verify", false, `If enabled the S3 client accepts any TLS
certificate presented by the server and
any host name in that certificate.
In this mode, TLS is susceptible to
man-in-the-middle attacks.
This should be used only for testing.
`)
portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", `Allows to restrict access to the
@ -398,10 +356,6 @@ a JSON credentials file, 1 automatic
portableCmd.Flags().StringVar(&portableWebDAVCert, "webdav-cert", "", `Path to the certificate file for WebDAV
over HTTPS`)
portableCmd.Flags().StringVar(&portableWebDAVKey, "webdav-key", "", `Path to the key file for WebDAV over
HTTPS`)
portableCmd.Flags().StringVar(&portableHTTPSCert, "httpd-cert", "", `Path to the certificate file for WebClient
over HTTPS`)
portableCmd.Flags().StringVar(&portableHTTPSKey, "httpd-key", "", `Path to the key file for WebClient over
HTTPS`)
portableCmd.Flags().StringVar(&portableAzContainer, "az-container", "", "")
portableCmd.Flags().StringVar(&portableAzAccountName, "az-account-name", "", "")
@ -445,14 +399,6 @@ multiple concurrent requests and this
allows data to be transferred at a
faster rate, over high latency networks,
by overlapping round-trip times`)
portableCmd.Flags().IntVar(&graceTime, graceTimeFlag, 0,
`This grace time defines the number of
seconds allowed for existing transfers
to get completed before shutting down.
A graceful shutdown is triggered by an
interrupt signal.
`)
addConfigFlags(portableCmd)
rootCmd.AddCommand(portableCmd)
}
@ -517,30 +463,11 @@ func getFileContents(name string) (string, error) {
return "", err
}
if fi.Size() > 1048576 {
return "", fmt.Errorf("%q is too big %v/1048576 bytes", name, fi.Size())
return "", fmt.Errorf("%#v is too big %v/1048576 bytes", name, fi.Size())
}
contents, err := os.ReadFile(name)
if err != nil {
return "", err
}
return util.BytesToString(contents), nil
}
func convertFsProvider() string {
switch portableFsProvider {
case "osfs", "6": // httpfs (6) is not supported in portable mode, so return the default
return "0"
case "s3fs":
return "1"
case "gcsfs":
return "2"
case "azblobfs":
return "3"
case "cryptfs":
return "4"
case "sftpfs":
return "5"
default:
return portableFsProvider
}
return string(contents), nil
}

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,14 +10,14 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build noportable
// +build noportable
package cmd
import "github.com/drakkan/sftpgo/v2/internal/version"
import "github.com/drakkan/sftpgo/v2/version"
func init() {
version.AddFeature("-portable")

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -20,14 +20,14 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (
reloadCmd = &cobra.Command{
Use: "reload",
Short: "Reload the SFTPGo Windows Service sending a \"paramchange\" request",
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),

View file

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -23,10 +23,10 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/internal/config"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@ -39,13 +39,13 @@ configuration file and resets the provider by deleting all data and schemas.
This command is not supported for the memory provider.
Please take a look at the usage below to customize the options.`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to load configuration: %v", err)
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
@ -56,7 +56,7 @@ Please take a look at the usage below to customize the options.`,
}
providerConf := config.GetProviderConf()
if !resetProviderForce {
logger.WarnToConsole("You are about to delete all the SFTPGo data for provider %q, config file: %q",
logger.WarnToConsole("You are about to delete all the SFTPGo data for provider %#v, config file: %#v",
providerConf.Driver, viper.ConfigFileUsed())
logger.WarnToConsole("Are you sure? (Y/n)")
reader := bufio.NewReader(os.Stdin)
@ -70,7 +70,7 @@ Please take a look at the usage below to customize the options.`,
os.Exit(1)
}
}
logger.InfoToConsole("Resetting provider: %q, config file: %q", providerConf.Driver, viper.ConfigFileUsed())
logger.InfoToConsole("Resetting provider: %#v, config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.ResetDatabase(providerConf, configDir)
if err != nil {
logger.WarnToConsole("Error resetting provider: %v", err)


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -21,10 +21,10 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/internal/config"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@ -37,17 +37,17 @@ configuration file and restore the provider schema and/or data to a previous version
This command is not supported for the memory provider.
Please take a look at the usage below to customize the options.`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if revertProviderTargetVersion != 29 {
logger.WarnToConsole("Unsupported target version, 29 is the only supported one")
if revertProviderTargetVersion != 15 {
logger.WarnToConsole("Unsupported target version, 15 is the only supported one")
os.Exit(1)
}
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to load configuration: %v", err)
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
@ -57,7 +57,7 @@ Please take a look at the usage below to customize the options.`,
os.Exit(1)
}
providerConf := config.GetProviderConf()
logger.InfoToConsole("Reverting provider: %q config file: %q target version %d", providerConf.Driver,
logger.InfoToConsole("Reverting provider: %#v config file: %#v target version %v", providerConf.Driver,
viper.ConfigFileUsed(), revertProviderTargetVersion)
err = dataprovider.RevertDatabase(providerConf, configDir, revertProviderTargetVersion)
if err != nil {
@ -71,7 +71,7 @@ Please take a look at the usage below to customize the options.`,
func init() {
addConfigFlags(revertProviderCmd)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 29, `29 means the version supported in v2.6.x`)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 15, `15 means the version supported in v2.2.x`)
rootCmd.AddCommand(revertProviderCmd)
}


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// Package cmd provides Command Line Interface support
package cmd
@ -22,7 +22,7 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/version"
)
const (
@ -40,8 +40,8 @@ const (
logMaxAgeKey = "log_max_age"
logCompressFlag = "log-compress"
logCompressKey = "log_compress"
logLevelFlag = "log-level"
logLevelKey = "log_level"
logVerboseFlag = "log-verbose"
logVerboseKey = "log_verbose"
logUTCTimeFlag = "log-utc-time"
logUTCTimeKey = "log_utc_time"
loadDataFromFlag = "loaddata-from"
@ -52,8 +52,6 @@ const (
loadDataQuotaScanKey = "loaddata_scan"
loadDataCleanFlag = "loaddata-clean"
loadDataCleanKey = "loaddata_clean"
graceTimeFlag = "grace-time"
graceTimeKey = "grace_time"
defaultConfigDir = "."
defaultConfigFile = ""
defaultLogFile = "sftpgo.log"
@ -61,13 +59,12 @@ const (
defaultLogMaxBackup = 5
defaultLogMaxAge = 28
defaultLogCompress = false
defaultLogLevel = "debug"
defaultLogVerbose = true
defaultLogUTCTime = false
defaultLoadDataFrom = ""
defaultLoadDataMode = 1
defaultLoadDataQuotaScan = 0
defaultLoadDataClean = false
defaultGraceTime = 0
)
var (
@ -78,19 +75,18 @@ var (
logMaxBackups int
logMaxAge int
logCompress bool
logLevel string
logVerbose bool
logUTCTime bool
loadDataFrom string
loadDataMode int
loadDataQuotaScan int
loadDataClean bool
graceTime int
// used if awscontainer build tag is enabled
disableAWSInstallationCode bool
rootCmd = &cobra.Command{
Use: "sftpgo",
Short: "Full-featured and highly configurable file transfer server",
Short: "Fully featured and highly configurable SFTP server",
}
)
@ -115,11 +111,11 @@ func addConfigFlags(cmd *cobra.Command) {
viper.SetDefault(configDirKey, defaultConfigDir)
viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR") //nolint:errcheck // err is not nil only if the key to bind is missing
cmd.Flags().StringVarP(&configDir, configDirFlag, "c", viper.GetString(configDirKey),
`Location of the config dir. This directory
`Location for the config dir. This directory
is used as the base for files with a relative
path, e.g. the private keys for the SFTP
server or the database file if you use a
file-based data provider.
path, e.g. the private keys for the SFTP
server or the SQLite database if you use
SQLite as data provider.
The configuration file, if not explicitly set,
is looked for in this dir. We support reading
from JSON, TOML, YAML, HCL, envfile and Java
@ -233,17 +229,13 @@ It is unused if log-file-path is empty.
`)
viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag)) //nolint:errcheck
viper.SetDefault(logLevelKey, defaultLogLevel)
viper.BindEnv(logLevelKey, "SFTPGO_LOG_LEVEL") //nolint:errcheck
cmd.Flags().StringVar(&logLevel, logLevelFlag, viper.GetString(logLevelKey),
`Set the log level. Supported values:
debug, info, warn, error.
This flag can be set
using SFTPGO_LOG_LEVEL env var too.
viper.SetDefault(logVerboseKey, defaultLogVerbose)
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logLevelKey, cmd.Flags().Lookup(logLevelFlag)) //nolint:errcheck
viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
@ -266,20 +258,4 @@ This flag can be set using SFTPGO_LOADDATA_QUOTA_SCAN
env var too.
(default 0)`)
viper.BindPFlag(loadDataQuotaScanKey, cmd.Flags().Lookup(loadDataQuotaScanFlag)) //nolint:errcheck
viper.SetDefault(graceTimeKey, defaultGraceTime)
viper.BindEnv(graceTimeKey, "SFTPGO_GRACE_TIME") //nolint:errcheck
cmd.Flags().IntVar(&graceTime, graceTimeFlag, viper.GetInt(graceTimeKey),
`Graceful shutdown is an option to initiate a
shutdown without abrupt cancellation of the
currently ongoing client-initiated transfer
sessions.
This grace time defines the number of seconds
allowed for existing transfers to get
completed before shutting down.
A graceful shutdown is triggered by an
interrupt signal.
This flag can be set using SFTPGO_GRACE_TIME env
var too. 0 means disabled. (default 0)`)
viper.BindPFlag(graceTimeKey, cmd.Flags().Lookup(graceTimeFlag)) //nolint:errcheck
}
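
The grace-time flag removed in this compare (it exists only on main) is described above in prose. A hypothetical sketch of those semantics, not SFTPGo's actual implementation:

package main

import (
	"os"
	"os/signal"
	"time"
)

// waitWithGrace blocks until an interrupt signal arrives, then waits up to
// graceTime seconds for the active transfers to drain before returning.
func waitWithGrace(graceTime int, activeTransfers func() int) {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt) // a graceful shutdown is triggered by an interrupt signal
	<-sig
	if graceTime <= 0 {
		return // 0 means disabled: shut down immediately
	}
	deadline := time.After(time.Duration(graceTime) * time.Second)
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-deadline:
			return // grace time expired, shut down anyway
		case <-ticker.C:
			if activeTransfers() == 0 {
				return // all transfers completed
			}
		}
	}
}

func main() { waitWithGrace(30, func() int { return 0 }) }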


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -20,14 +20,14 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (
rotateLogCmd = &cobra.Command{
Use: "rotatelogs",
Short: "Signal to the running service to rotate the logs",
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),

cmd/serve.go (new file, 68 lines)

@ -0,0 +1,68 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
"os"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
serveCmd = &cobra.Command{
Use: "serve",
Short: "Start the SFTPGo service",
Long: `To start SFTPGo with the default values for the command line flags simply
use:
$ sftpgo serve
Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
service := service.Service{
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
LoadDataFrom: loadDataFrom,
LoadDataMode: loadDataMode,
LoadDataQuotaScan: loadDataQuotaScan,
LoadDataClean: loadDataClean,
Shutdown: make(chan bool),
}
if err := service.Start(disableAWSInstallationCode); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
}
}
os.Exit(1)
},
}
)
func init() {
rootCmd.AddCommand(serveCmd)
addServeFlags(serveCmd)
addAWSContainerFlags(serveCmd)
}
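
With the defaults above, starting the service from a custom config dir uses the -c shorthand registered in cmd/root.go, for example:

$ sftpgo serve -c /etc/sftpgo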


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -20,11 +20,10 @@ import (
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/config"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/smtp"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@ -34,35 +33,28 @@ var (
Short: "Test the SMTP configuration",
Long: `SFTPGo will try to send a test email to the specified recipient.
If the SMTP configuration is correct you should receive this email.`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.ErrorToConsole("Unable to load configuration: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
err = dataprovider.Initialize(providerConf, configDir, false)
if err != nil {
logger.ErrorToConsole("error initializing data provider: %v", err)
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
smtpConfig.Debug = 1
err = smtpConfig.Initialize(configDir, false)
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.ErrorToConsole("unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
err = smtp.SendEmail([]string{smtpTestRecipient}, nil, "SFTPGo - Testing Email Settings", "It appears your SFTPGo email is set up correctly!",
err = smtp.SendEmail(smtpTestRecipient, "SFTPGo - Testing Email Settings", "It appears your SFTPGo email is set up correctly!",
smtp.EmailContentTypeTextPlain)
if err != nil {
logger.WarnToConsole("Error sending email: %v", err)
os.Exit(1)
}
logger.InfoToConsole("No errors were reported while sending the test email. Please check your inbox to make sure.")
logger.InfoToConsole("No errors were reported while sending an email. Please check your inbox to make sure.")
},
}
)


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,43 +10,41 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
"fmt"
"os"
"path/filepath"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
startCmd = &cobra.Command{
Use: "start",
Short: "Start the SFTPGo Windows Service",
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
configDir = util.CleanDirInput(configDir)
checkServeParamsFromEnvFiles(configDir)
service.SetGraceTime(graceTime)
if !filepath.IsAbs(logFilePath) && util.IsFileInputValid(logFilePath) {
logFilePath = filepath.Join(configDir, logFilePath)
}
s := service.Service{
ConfigDir: configDir,
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogLevel: logLevel,
LogUTCTime: logUTCTime,
LoadDataFrom: loadDataFrom,
LoadDataMode: loadDataMode,
LoadDataQuotaScan: loadDataQuotaScan,
LoadDataClean: loadDataClean,
Shutdown: make(chan bool),
ConfigDir: configDir,
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
winService := service.WindowsService{
Service: s,


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -25,13 +25,13 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/internal/common"
"github.com/drakkan/sftpgo/v2/internal/config"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/plugin"
"github.com/drakkan/sftpgo/v2/internal/sftpd"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
)
var (
@ -51,25 +51,18 @@ Subsystem sftp sftpgo startsubsys
Command-line flags should be specified in the Subsystem declaration.
`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logSender := "startsubsys"
connectionID := xid.New().String()
var zeroLogLevel zerolog.Level
switch logLevel {
case "info":
zeroLogLevel = zerolog.InfoLevel
case "warn":
zeroLogLevel = zerolog.WarnLevel
case "error":
zeroLogLevel = zerolog.ErrorLevel
default:
zeroLogLevel = zerolog.DebugLevel
logLevel := zerolog.DebugLevel
if !logVerbose {
logLevel = zerolog.InfoLevel
}
logger.SetLogTime(logUTCTime)
if logJournalD {
logger.InitJournalDLogger(zeroLogLevel)
logger.InitJournalDLogger(logLevel)
} else {
logger.InitStdErrLogger(zeroLogLevel)
logger.InitStdErrLogger(logLevel)
}
osUser, err := user.Current()
if err != nil {
@ -78,31 +71,45 @@ Command-line flags should be specified in the Subsystem declaration.
}
username := osUser.Username
homedir := osUser.HomeDir
logger.Info(logSender, connectionID, "starting SFTPGo %v as subsystem, user %q home dir %q config dir %q base home dir %q",
logger.Info(logSender, connectionID, "starting SFTPGo %v as subsystem, user %#v home dir %#v config dir %#v base home dir %#v",
version.Get(), username, homedir, configDir, baseHomeDir)
err = config.LoadConfig(configDir, configFile)
if err != nil {
logger.Error(logSender, connectionID, "unable to load configuration: %v", err)
os.Exit(1)
}
dataProviderConf := config.GetProviderConf()
commonConfig := config.GetCommonConfig()
// idle connections are managed externally
commonConfig.IdleTimeout = 0
config.SetCommonConfig(commonConfig)
if err := common.Initialize(config.GetCommonConfig(), dataProviderConf.GetShared()); err != nil {
logger.Error(logSender, connectionID, "%v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
if err := kmsConfig.Initialize(); err != nil {
logger.Error(logSender, connectionID, "unable to initialize KMS: %v", err)
os.Exit(1)
}
if err := plugin.Initialize(config.GetPluginsConfig(), logLevel); err != nil {
logger.Error(logSender, connectionID, "unable to initialize plugin system: %v", err)
os.Exit(1)
}
mfaConfig := config.GetMFAConfig()
err = mfaConfig.Initialize()
if err != nil {
logger.Error(logSender, "", "unable to initialize MFA: %v", err)
os.Exit(1)
}
dataProviderConf := config.GetProviderConf()
if err := plugin.Initialize(config.GetPluginsConfig(), logVerbose); err != nil {
logger.Error(logSender, connectionID, "unable to initialize plugin system: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
if dataProviderConf.Driver == dataprovider.SQLiteDataProviderName || dataProviderConf.Driver == dataprovider.BoltDataProviderName {
logger.Debug(logSender, connectionID, "data provider %q not supported in subsystem mode, using %q provider",
logger.Debug(logSender, connectionID, "data provider %#v not supported in subsystem mode, using %#v provider",
dataProviderConf.Driver, dataprovider.MemoryDataProviderName)
dataProviderConf.Driver = dataprovider.MemoryDataProviderName
dataProviderConf.Name = ""
@ -113,20 +120,6 @@ Command-line flags should be specified in the Subsystem declaration.
logger.Error(logSender, connectionID, "unable to initialize the data provider: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir, false)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
commonConfig := config.GetCommonConfig()
// idle connections are managed externally
commonConfig.IdleTimeout = 0
config.SetCommonConfig(commonConfig)
if err := common.Initialize(config.GetCommonConfig(), dataProviderConf.GetShared()); err != nil {
logger.Error(logSender, connectionID, "%v", err)
os.Exit(1)
}
httpConfig := config.GetHTTPConfig()
if err := httpConfig.Initialize(configDir); err != nil {
logger.Error(logSender, connectionID, "unable to initialize http client: %v", err)
@ -137,14 +130,14 @@ Command-line flags should be specified in the Subsystem declaration.
logger.Error(logSender, connectionID, "unable to initialize commands configuration: %v", err)
os.Exit(1)
}
user, err := dataprovider.UserExists(username, "")
user, err := dataprovider.UserExists(username)
if err == nil {
if user.HomeDir != filepath.Clean(homedir) && !preserveHomeDir {
// update the user
user.HomeDir = filepath.Clean(homedir)
err = dataprovider.UpdateUser(&user, dataprovider.ActionExecutorSystem, "", "")
err = dataprovider.UpdateUser(&user, dataprovider.ActionExecutorSystem, "")
if err != nil {
logger.Error(logSender, connectionID, "unable to update user %q: %v", username, err)
logger.Error(logSender, connectionID, "unable to update user %#v: %v", username, err)
os.Exit(1)
}
}
@ -155,19 +148,19 @@ Command-line flags should be specified in the Subsystem declaration.
} else {
user.HomeDir = filepath.Clean(homedir)
}
logger.Debug(logSender, connectionID, "home dir for new user %q", user.HomeDir)
logger.Debug(logSender, connectionID, "home dir for new user %#v", user.HomeDir)
user.Password = connectionID
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
err = dataprovider.AddUser(&user, dataprovider.ActionExecutorSystem, "", "")
err = dataprovider.AddUser(&user, dataprovider.ActionExecutorSystem, "")
if err != nil {
logger.Error(logSender, connectionID, "unable to add user %q: %v", username, err)
logger.Error(logSender, connectionID, "unable to add user %#v: %v", username, err)
os.Exit(1)
}
}
err = user.LoadAndApplyGroupSettings()
if err != nil {
logger.Error(logSender, connectionID, "unable to apply group settings for user %q: %v", username, err)
logger.Error(logSender, connectionID, "unable to apply group settings for user %#v: %v", username, err)
os.Exit(1)
}
err = sftpd.ServeSubSystemConnection(&user, connectionID, os.Stdin, os.Stdout)
@ -203,17 +196,13 @@ error`)
addConfigFlags(subsystemCmd)
viper.SetDefault(logLevelKey, defaultLogLevel)
viper.BindEnv(logLevelKey, "SFTPGO_LOG_LEVEL") //nolint:errcheck
subsystemCmd.Flags().StringVar(&logLevel, logLevelFlag, viper.GetString(logLevelKey),
`Set the log level. Supported values:
debug, info, warn, error.
This flag can be set
using SFTPGO_LOG_LEVEL env var too.
viper.SetDefault(logVerboseKey, defaultLogVerbose)
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
subsystemCmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logLevelKey, subsystemCmd.Flags().Lookup(logLevelFlag)) //nolint:errcheck
viper.BindPFlag(logVerboseKey, subsystemCmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -20,14 +20,14 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (
statusCmd = &cobra.Command{
Use: "status",
Short: "Retrieve the status for the SFTPGo Windows Service",
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
@ -38,7 +38,7 @@ var (
fmt.Printf("Error querying service status: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service status: %q\r\n", status.String())
fmt.Printf("Service status: %#v\r\n", status.String())
}
},
}


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -20,14 +20,14 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (
stopCmd = &cobra.Command{
Use: "stop",
Short: "Stop the SFTPGo Windows Service",
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
@ -20,14 +20,14 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (
uninstallCmd = &cobra.Command{
Use: "uninstall",
Short: "Uninstall the SFTPGo Windows Service",
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,14 +10,14 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// Package command provides command configuration for SFTPGo hooks
package command
import (
"fmt"
"slices"
"os"
"strings"
"time"
)
@ -28,25 +28,8 @@ const (
defaultTimeout = 30
)
// Supported hook names
const (
HookFsActions = "fs_actions"
HookProviderActions = "provider_actions"
HookStartup = "startup"
HookPostConnect = "post_connect"
HookPostDisconnect = "post_disconnect"
HookDataRetention = "data_retention"
HookCheckPassword = "check_password"
HookPreLogin = "pre_login"
HookPostLogin = "post_login"
HookExternalAuth = "external_auth"
HookKeyboardInteractive = "keyboard_interactive"
)
var (
config Config
supportedHooks = []string{HookFsActions, HookProviderActions, HookStartup, HookPostConnect, HookPostDisconnect,
HookDataRetention, HookCheckPassword, HookPreLogin, HookPostLogin, HookExternalAuth, HookKeyboardInteractive}
config Config
)
// Command defines the configuration for a specific command
@ -58,14 +41,10 @@ type Command struct {
// Do not use variables with the SFTPGO_ prefix to avoid conflicts with env
// vars that SFTPGo sets
Timeout int `json:"timeout" mapstructure:"timeout"`
// Env defines environment variables for the command.
// Env defines additional environment variables for the commands.
// Each entry is of the form "key=value".
// These values are added to the global environment variables if any
Env []string `json:"env" mapstructure:"env"`
// Args defines arguments to pass to the specified command
Args []string `json:"args" mapstructure:"args"`
// if not empty both command path and hook name must match
Hook string `json:"hook" mapstructure:"hook"`
}
// Config defines the configuration for external commands such as
@ -73,7 +52,7 @@ type Command struct {
type Config struct {
// Timeout specifies a global time limit, in seconds, for the external commands execution
Timeout int `json:"timeout" mapstructure:"timeout"`
// Env defines environment variables for the commands.
// Env defines additional environment variables for the commands.
// Each entry is of the form "key=value".
// Do not use variables with the SFTPGO_ prefix to avoid conflicts with env
// vars that SFTPGo sets
@ -94,30 +73,24 @@ func (c Config) Initialize() error {
return fmt.Errorf("invalid timeout %v", c.Timeout)
}
for _, env := range c.Env {
if len(strings.SplitN(env, "=", 2)) != 2 {
return fmt.Errorf("invalid env var %q", env)
if len(strings.Split(env, "=")) != 2 {
return fmt.Errorf("invalid env var %#v", env)
}
}
for idx, cmd := range c.Commands {
if cmd.Path == "" {
return fmt.Errorf("invalid path %q", cmd.Path)
return fmt.Errorf("invalid path %#v", cmd.Path)
}
if cmd.Timeout == 0 {
c.Commands[idx].Timeout = c.Timeout
} else {
if cmd.Timeout < minTimeout || cmd.Timeout > maxTimeout {
return fmt.Errorf("invalid timeout %v for command %q", cmd.Timeout, cmd.Path)
return fmt.Errorf("invalid timeout %v for command %#v", cmd.Timeout, cmd.Path)
}
}
for _, env := range cmd.Env {
if len(strings.SplitN(env, "=", 2)) != 2 {
return fmt.Errorf("invalid env var %q for command %q", env, cmd.Path)
}
}
// don't validate args, we allow passing empty arguments
if cmd.Hook != "" {
if !slices.Contains(supportedHooks, cmd.Hook) {
return fmt.Errorf("invalid hook name %q, supported values: %+v", cmd.Hook, supportedHooks)
if len(strings.Split(env, "=")) != 2 {
return fmt.Errorf("invalid env var %#v for command %#v", env, cmd.Path)
}
}
}
@ -126,21 +99,17 @@ func (c Config) Initialize() error {
}
// GetConfig returns the configuration for the specified command
func GetConfig(command, hook string) (time.Duration, []string, []string) {
env := []string{}
var args []string
func GetConfig(command string) (time.Duration, []string) {
env := os.Environ()
timeout := time.Duration(config.Timeout) * time.Second
env = append(env, config.Env...)
for _, cmd := range config.Commands {
if cmd.Path == command {
if cmd.Hook == "" || cmd.Hook == hook {
timeout = time.Duration(cmd.Timeout) * time.Second
env = append(env, cmd.Env...)
args = cmd.Args
break
}
timeout = time.Duration(cmd.Timeout) * time.Second
env = append(env, cmd.Env...)
break
}
}
return timeout, env, args
return timeout, env
}
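
Two details are worth noting in this file. First, the env validation differs between branches: main's strings.SplitN(env, "=", 2) accepts values that themselves contain "=", while 2.3.x's strings.Split rejects them. Second, GetConfig on 2.3.x returns only (timeout, env); a sketch of a caller, mirroring how common/actions.go consumes it below (the hook path is a placeholder):

package main

import (
	"context"
	"os/exec"

	"github.com/drakkan/sftpgo/v2/command"
)

// runHook executes an external hook with the timeout and environment
// configured for it, as returned by command.GetConfig (2.3.x signature).
func runHook(hookPath string) error {
	timeout, env := command.GetConfig(hookPath)
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, hookPath)
	cmd.Env = env // already os.Environ() plus any configured overrides
	return cmd.Run()
}

func main() {
	_ = runHook("/usr/local/bin/hook.sh")
}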


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package command
@ -33,17 +33,15 @@ func TestCommandConfig(t *testing.T) {
assert.Equal(t, cfg.Timeout, config.Timeout)
assert.Equal(t, cfg.Env, config.Env)
assert.Len(t, cfg.Commands, 0)
timeout, env, args := GetConfig("cmd", "")
timeout, env := GetConfig("cmd")
assert.Equal(t, time.Duration(config.Timeout)*time.Second, timeout)
assert.Contains(t, env, "a=b")
assert.Len(t, args, 0)
cfg.Commands = []Command{
{
Path: "cmd1",
Timeout: 30,
Env: []string{"c=d"},
Args: []string{"1", "", "2"},
},
{
Path: "cmd2",
@ -59,68 +57,20 @@ func TestCommandConfig(t *testing.T) {
assert.Equal(t, cfg.Commands[0].Path, config.Commands[0].Path)
assert.Equal(t, cfg.Commands[0].Timeout, config.Commands[0].Timeout)
assert.Equal(t, cfg.Commands[0].Env, config.Commands[0].Env)
assert.Equal(t, cfg.Commands[0].Args, config.Commands[0].Args)
assert.Equal(t, cfg.Commands[1].Path, config.Commands[1].Path)
assert.Equal(t, cfg.Timeout, config.Commands[1].Timeout)
assert.Equal(t, cfg.Commands[1].Env, config.Commands[1].Env)
assert.Equal(t, cfg.Commands[1].Args, config.Commands[1].Args)
}
timeout, env, args = GetConfig("cmd1", "")
timeout, env = GetConfig("cmd1")
assert.Equal(t, time.Duration(config.Commands[0].Timeout)*time.Second, timeout)
assert.Contains(t, env, "a=b")
assert.Contains(t, env, "c=d")
assert.NotContains(t, env, "e=f")
if assert.Len(t, args, 3) {
assert.Equal(t, "1", args[0])
assert.Empty(t, args[1])
assert.Equal(t, "2", args[2])
}
timeout, env, args = GetConfig("cmd2", "")
timeout, env = GetConfig("cmd2")
assert.Equal(t, time.Duration(config.Timeout)*time.Second, timeout)
assert.Contains(t, env, "a=b")
assert.NotContains(t, env, "c=d")
assert.Contains(t, env, "e=f")
assert.Len(t, args, 0)
cfg.Commands = []Command{
{
Path: "cmd1",
Timeout: 30,
Env: []string{"c=d"},
Args: []string{"1", "", "2"},
Hook: HookCheckPassword,
},
{
Path: "cmd1",
Timeout: 0,
Env: []string{"e=f"},
Hook: HookExternalAuth,
},
}
err = cfg.Initialize()
require.NoError(t, err)
timeout, env, args = GetConfig("cmd1", "")
assert.Equal(t, time.Duration(config.Timeout)*time.Second, timeout)
assert.Contains(t, env, "a=b")
assert.NotContains(t, env, "c=d")
assert.NotContains(t, env, "e=f")
assert.Len(t, args, 0)
timeout, env, args = GetConfig("cmd1", HookCheckPassword)
assert.Equal(t, time.Duration(config.Commands[0].Timeout)*time.Second, timeout)
assert.Contains(t, env, "a=b")
assert.Contains(t, env, "c=d")
assert.NotContains(t, env, "e=f")
if assert.Len(t, args, 3) {
assert.Equal(t, "1", args[0])
assert.Empty(t, args[1])
assert.Equal(t, "2", args[2])
}
timeout, env, args = GetConfig("cmd1", HookExternalAuth)
assert.Equal(t, time.Duration(cfg.Timeout)*time.Second, timeout)
assert.Contains(t, env, "a=b")
assert.NotContains(t, env, "c=d")
assert.Contains(t, env, "e=f")
assert.Len(t, args, 0)
}
func TestConfigErrors(t *testing.T) {
@ -166,16 +116,4 @@ func TestConfigErrors(t *testing.T) {
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "invalid env var")
}
c.Commands = []Command{
{
Path: "path",
Timeout: 30,
Env: []string{"a=b"},
Hook: "invali",
},
}
err = c.Initialize()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "invalid hook name")
}
}

common/actions.go (new file, 290 lines)

@ -0,0 +1,290 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/sftpgo/sdk"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/command"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
var (
errUnconfiguredAction = errors.New("no hook is configured for this action")
errNoHook = errors.New("unable to execute action, no hook defined")
errUnexpectedHTTResponse = errors.New("unexpected HTTP response code")
hooksConcurrencyGuard = make(chan struct{}, 150)
)
func startNewHook() {
hooksConcurrencyGuard <- struct{}{}
}
func hookEnded() {
<-hooksConcurrencyGuard
}
// ProtocolActions defines the action to execute on file operations and SSH commands
type ProtocolActions struct {
// Valid values are download, upload, pre-delete, delete, rename, ssh_cmd. Empty slice to disable
ExecuteOn []string `json:"execute_on" mapstructure:"execute_on"`
// Actions to be performed synchronously.
// The pre-delete action is always executed synchronously while the other ones are asynchronous.
// Executing an action synchronously means that SFTPGo will not return a result code to the client
// (which is waiting for it) until your hook has completed its execution.
ExecuteSync []string `json:"execute_sync" mapstructure:"execute_sync"`
// Absolute path to an external program or an HTTP URL
Hook string `json:"hook" mapstructure:"hook"`
}
var actionHandler ActionHandler = &defaultActionHandler{}
// InitializeActionHandler lets the user choose an action handler implementation.
//
// Do NOT call this function after application initialization.
func InitializeActionHandler(handler ActionHandler) {
actionHandler = handler
}
func handleUnconfiguredPreAction(operation string) error {
// for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
// Other pre actions will deny the operation on error, so if we have no configuration we must return
// a nil error
if operation == operationPreDelete {
return errUnconfiguredAction
}
return nil
}
// ExecutePreAction executes a pre-* action and returns the result
func ExecutePreAction(conn *BaseConnection, operation, filePath, virtualPath string, fileSize int64, openFlags int) error {
var event *notifier.FsEvent
hasNotifiersPlugin := plugin.Handler.HasNotifiers()
hasHook := util.Contains(Config.Actions.ExecuteOn, operation)
if !hasHook && !hasNotifiersPlugin {
return handleUnconfiguredPreAction(operation)
}
event = newActionNotification(&conn.User, operation, filePath, virtualPath, "", "", "",
conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, openFlags, nil)
if hasNotifiersPlugin {
plugin.Handler.NotifyFsEvent(event)
}
if !hasHook {
return handleUnconfiguredPreAction(operation)
}
return actionHandler.Handle(event)
}
// ExecuteActionNotification executes the defined hook, if any, for the specified action
func ExecuteActionNotification(conn *BaseConnection, operation, filePath, virtualPath, target, virtualTarget, sshCmd string,
fileSize int64, err error,
) {
hasNotifiersPlugin := plugin.Handler.HasNotifiers()
hasHook := util.Contains(Config.Actions.ExecuteOn, operation)
if !hasHook && !hasNotifiersPlugin {
return
}
notification := newActionNotification(&conn.User, operation, filePath, virtualPath, target, virtualTarget, sshCmd,
conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, 0, err)
if hasNotifiersPlugin {
plugin.Handler.NotifyFsEvent(notification)
}
if hasHook {
if util.Contains(Config.Actions.ExecuteSync, operation) {
actionHandler.Handle(notification) //nolint:errcheck
return
}
go func() {
startNewHook()
defer hookEnded()
actionHandler.Handle(notification) //nolint:errcheck
}()
}
}
// ActionHandler handles a notification for a Protocol Action.
type ActionHandler interface {
Handle(notification *notifier.FsEvent) error
}
func newActionNotification(
user *dataprovider.User,
operation, filePath, virtualPath, target, virtualTarget, sshCmd, protocol, ip, sessionID string,
fileSize int64,
openFlags int,
err error,
) *notifier.FsEvent {
var bucket, endpoint string
status := 1
fsConfig := user.GetFsConfigForPath(virtualPath)
switch fsConfig.Provider {
case sdk.S3FilesystemProvider:
bucket = fsConfig.S3Config.Bucket
endpoint = fsConfig.S3Config.Endpoint
case sdk.GCSFilesystemProvider:
bucket = fsConfig.GCSConfig.Bucket
case sdk.AzureBlobFilesystemProvider:
bucket = fsConfig.AzBlobConfig.Container
if fsConfig.AzBlobConfig.Endpoint != "" {
endpoint = fsConfig.AzBlobConfig.Endpoint
}
case sdk.SFTPFilesystemProvider:
endpoint = fsConfig.SFTPConfig.Endpoint
}
if err == ErrQuotaExceeded {
status = 3
} else if err != nil {
status = 2
}
return &notifier.FsEvent{
Action: operation,
Username: user.Username,
Path: filePath,
TargetPath: target,
VirtualPath: virtualPath,
VirtualTargetPath: virtualTarget,
SSHCmd: sshCmd,
FileSize: fileSize,
FsProvider: int(fsConfig.Provider),
Bucket: bucket,
Endpoint: endpoint,
Status: status,
Protocol: protocol,
IP: ip,
SessionID: sessionID,
OpenFlags: openFlags,
Timestamp: time.Now().UnixNano(),
}
}
type defaultActionHandler struct{}
func (h *defaultActionHandler) Handle(event *notifier.FsEvent) error {
if !util.Contains(Config.Actions.ExecuteOn, event.Action) {
return errUnconfiguredAction
}
if Config.Actions.Hook == "" {
logger.Warn(event.Protocol, "", "Unable to send notification, no hook is defined")
return errNoHook
}
if strings.HasPrefix(Config.Actions.Hook, "http") {
return h.handleHTTP(event)
}
return h.handleCommand(event)
}
func (h *defaultActionHandler) handleHTTP(event *notifier.FsEvent) error {
u, err := url.Parse(Config.Actions.Hook)
if err != nil {
logger.Error(event.Protocol, "", "Invalid hook %#v for operation %#v: %v",
Config.Actions.Hook, event.Action, err)
return err
}
startTime := time.Now()
respCode := 0
var b bytes.Buffer
_ = json.NewEncoder(&b).Encode(event)
resp, err := httpclient.RetryablePost(Config.Actions.Hook, "application/json", &b)
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
if respCode != http.StatusOK {
err = errUnexpectedHTTResponse
}
}
logger.Debug(event.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
event.Action, u.Redacted(), respCode, time.Since(startTime), err)
return err
}
func (h *defaultActionHandler) handleCommand(event *notifier.FsEvent) error {
if !filepath.IsAbs(Config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %#v", Config.Actions.Hook)
logger.Warn(event.Protocol, "", "unable to execute notification command: %v", err)
return err
}
timeout, env := command.GetConfig(Config.Actions.Hook)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
cmd := exec.CommandContext(ctx, Config.Actions.Hook)
cmd.Env = append(env, notificationAsEnvVars(event)...)
startTime := time.Now()
err := cmd.Run()
logger.Debug(event.Protocol, "", "executed command %#v, elapsed: %v, error: %v",
Config.Actions.Hook, time.Since(startTime), err)
return err
}
func notificationAsEnvVars(event *notifier.FsEvent) []string {
return []string{
fmt.Sprintf("SFTPGO_ACTION=%v", event.Action),
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", event.Username),
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", event.Path),
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", event.TargetPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_PATH=%v", event.VirtualPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_TARGET=%v", event.VirtualTargetPath),
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", event.SSHCmd),
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", event.FileSize),
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", event.FsProvider),
fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", event.Bucket),
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", event.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", event.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", event.Protocol),
fmt.Sprintf("SFTPGO_ACTION_IP=%v", event.IP),
fmt.Sprintf("SFTPGO_ACTION_SESSION_ID=%v", event.SessionID),
fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", event.OpenFlags),
fmt.Sprintf("SFTPGO_ACTION_TIMESTAMP=%v", event.Timestamp),
}
}
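
hooksConcurrencyGuard above is a buffered channel used as a counting semaphore: sends block once 150 hooks are in flight. A generic sketch of the same pattern:

package main

// guard caps the number of concurrently running hooks at 150,
// matching the capacity used in common/actions.go.
var guard = make(chan struct{}, 150)

func runGuarded(hook func()) {
	go func() {
		guard <- struct{}{}        // blocks while 150 hooks are already running
		defer func() { <-guard }() // free the slot when the hook returns
		hook()
	}()
}

func main() {
	done := make(chan struct{})
	runGuarded(func() { close(done) })
	<-done
}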


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
@ -22,21 +22,20 @@ import (
"path/filepath"
"runtime"
"testing"
"time"
"github.com/lithammer/shortuuid/v4"
"github.com/lithammer/shortuuid/v3"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/plugin"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/vfs"
)
func TestNewActionNotification(t *testing.T) {
user := dataprovider.User{
user := &dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
},
@ -64,62 +63,45 @@ func TestNewActionNotification(t *testing.T) {
Endpoint: "sftpendpoint",
},
}
user.FsConfig.HTTPConfig = vfs.HTTPFsConfig{
BaseHTTPFsConfig: sdk.BaseHTTPFsConfig{
Endpoint: "httpendpoint",
},
}
c := NewBaseConnection("id", ProtocolSSH, "", "", user)
sessionID := xid.New().String()
a := newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, c.getNotificationStatus(errors.New("fake error")), 0, time.Now(), nil)
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, errors.New("fake error"))
assert.Equal(t, user.Username, a.Username)
assert.Equal(t, 0, len(a.Bucket))
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 2, a.Status)
user.FsConfig.Provider = sdk.S3FilesystemProvider
a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", sessionID,
123, 0, c.getNotificationStatus(nil), 0, time.Now(), nil)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", sessionID,
123, 0, nil)
assert.Equal(t, "s3bucket", a.Bucket)
assert.Equal(t, "endpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.Provider = sdk.GCSFilesystemProvider
a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, 0, c.getNotificationStatus(ErrQuotaExceeded), 0, time.Now(), nil)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, 0, ErrQuotaExceeded)
assert.Equal(t, "gcsbucket", a.Bucket)
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 3, a.Status)
a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, 0, c.getNotificationStatus(fmt.Errorf("wrapper quota error: %w", ErrQuotaExceeded)), 0, time.Now(), nil)
assert.Equal(t, "gcsbucket", a.Bucket)
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 3, a.Status)
user.FsConfig.Provider = sdk.HTTPFilesystemProvider
a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", sessionID,
123, 0, c.getNotificationStatus(nil), 0, time.Now(), nil)
assert.Equal(t, "httpendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.Provider = sdk.AzureBlobFilesystemProvider
a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, 0, c.getNotificationStatus(nil), 0, time.Now(), nil)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, 0, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, os.O_APPEND, c.getNotificationStatus(nil), 0, time.Now(), nil)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, os.O_APPEND, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
assert.Equal(t, os.O_APPEND, a.OpenFlags)
user.FsConfig.Provider = sdk.SFTPFilesystemProvider
a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, c.getNotificationStatus(nil), 0, time.Now(), nil)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, nil)
assert.Equal(t, "sftpendpoint", a.Endpoint)
}
@ -136,22 +118,19 @@ func TestActionHTTP(t *testing.T) {
},
}
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "",
xid.New().String(), 123, 0, 1, 0, time.Now(), nil)
status, err := actionHandler.Handle(a)
xid.New().String(), 123, 0, nil)
err := actionHandler.Handle(a)
assert.NoError(t, err)
assert.Equal(t, 1, status)
Config.Actions.Hook = "http://invalid:1234"
status, err = actionHandler.Handle(a)
err = actionHandler.Handle(a)
assert.Error(t, err)
assert.Equal(t, 1, status)
Config.Actions.Hook = fmt.Sprintf("http://%v/404", httpAddr)
status, err = actionHandler.Handle(a)
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, errUnexpectedHTTResponse.Error())
}
assert.Equal(t, 1, status)
Config.Actions = actionsCopy
}
@ -176,17 +155,14 @@ func TestActionCMD(t *testing.T) {
}
sessionID := shortuuid.New()
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, 1, 0, time.Now(), map[string]string{"key": "value"})
status, err := actionHandler.Handle(a)
123, 0, nil)
err = actionHandler.Handle(a)
assert.NoError(t, err)
assert.Equal(t, 1, status)
c := NewBaseConnection("id", ProtocolSFTP, "", "", *user)
err = ExecuteActionNotification(c, OperationSSHCmd, "path", "vpath", "target", "vtarget", "sha1sum", 0, nil, 0, nil)
assert.NoError(t, err)
ExecuteActionNotification(c, OperationSSHCmd, "path", "vpath", "target", "vtarget", "sha1sum", 0, nil)
err = ExecuteActionNotification(c, operationDownload, "path", "vpath", "", "", "", 0, nil, 0, nil)
assert.NoError(t, err)
ExecuteActionNotification(c, operationDownload, "path", "vpath", "", "", "", 0, nil)
Config.Actions = actionsCopy
}
@ -209,33 +185,30 @@ func TestWrongActions(t *testing.T) {
}
a := newActionNotification(user, operationUpload, "", "", "", "", "", ProtocolSFTP, "", xid.New().String(),
123, 0, 1, 0, time.Now(), nil)
status, err := actionHandler.Handle(a)
123, 0, nil)
err := actionHandler.Handle(a)
assert.Error(t, err, "action with bad command must fail")
assert.Equal(t, 1, status)
a.Action = operationDelete
status, err = actionHandler.Handle(a)
assert.NoError(t, err)
assert.Equal(t, 0, status)
err = actionHandler.Handle(a)
assert.EqualError(t, err, errUnconfiguredAction.Error())
Config.Actions.Hook = "http://foo\x7f.com/"
a.Action = operationUpload
status, err = actionHandler.Handle(a)
err = actionHandler.Handle(a)
assert.Error(t, err, "action with bad url must fail")
assert.Equal(t, 1, status)
Config.Actions.Hook = ""
status, err = actionHandler.Handle(a)
assert.NoError(t, err)
assert.Equal(t, 0, status)
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, errNoHook.Error())
}
Config.Actions.Hook = "relative path"
status, err = actionHandler.Handle(a)
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, fmt.Sprintf("invalid notification command %q", Config.Actions.Hook))
assert.EqualError(t, err, fmt.Sprintf("invalid notification command %#v", Config.Actions.Hook))
}
assert.Equal(t, 1, status)
Config.Actions = actionsCopy
}
@ -250,7 +223,7 @@ func TestPreDeleteAction(t *testing.T) {
assert.NoError(t, err)
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationPreDelete},
Hook: "missing hook",
Hook: hookCmd,
}
homeDir := filepath.Join(os.TempDir(), "test_user")
err = os.MkdirAll(homeDir, os.ModePerm)
@ -263,7 +236,7 @@ func TestPreDeleteAction(t *testing.T) {
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("id", homeDir, "", nil)
fs := vfs.NewOsFs("id", homeDir, "")
c := NewBaseConnection("id", ProtocolSFTP, "", "", user)
testfile := filepath.Join(user.HomeDir, "testfile")
@ -272,12 +245,8 @@ func TestPreDeleteAction(t *testing.T) {
info, err := os.Stat(testfile)
assert.NoError(t, err)
err = c.RemoveFile(fs, testfile, "testfile", info)
assert.ErrorIs(t, err, c.GetPermissionDeniedError())
assert.FileExists(t, testfile)
Config.Actions.Hook = hookCmd
err = c.RemoveFile(fs, testfile, "testfile", info)
assert.NoError(t, err)
assert.NoFileExists(t, testfile)
assert.FileExists(t, testfile)
os.RemoveAll(homeDir)
@ -296,22 +265,19 @@ func TestUnconfiguredHook(t *testing.T) {
Type: "notifier",
},
}
err := plugin.Initialize(pluginsConfig, "debug")
err := plugin.Initialize(pluginsConfig, true)
assert.Error(t, err)
assert.True(t, plugin.Handler.HasNotifiers())
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
status, err := ExecutePreAction(c, OperationPreDownload, "", "", 0, 0)
err = ExecutePreAction(c, OperationPreDownload, "", "", 0, 0)
assert.NoError(t, err)
assert.Equal(t, status, 0)
status, err = ExecutePreAction(c, operationPreDelete, "", "", 0, 0)
assert.NoError(t, err)
assert.Equal(t, status, 0)
err = ExecutePreAction(c, operationPreDelete, "", "", 0, 0)
assert.ErrorIs(t, err, errUnconfiguredAction)
err = ExecuteActionNotification(c, operationDownload, "", "", "", "", "", 0, nil, 0, nil)
assert.NoError(t, err)
ExecuteActionNotification(c, operationDownload, "", "", "", "", "", 0, nil)
err = plugin.Initialize(nil, "debug")
err = plugin.Initialize(nil, true)
assert.NoError(t, err)
assert.False(t, plugin.Handler.HasNotifiers())
@ -322,10 +288,10 @@ type actionHandlerStub struct {
called bool
}
func (h *actionHandlerStub) Handle(_ *notifier.FsEvent) (int, error) {
func (h *actionHandlerStub) Handle(event *notifier.FsEvent) error {
h.called = true
return 1, nil
return nil
}
func TestInitializeActionHandler(t *testing.T) {
@ -336,8 +302,8 @@ func TestInitializeActionHandler(t *testing.T) {
InitializeActionHandler(&defaultActionHandler{})
})
status, err := actionHandler.Handle(&notifier.FsEvent{})
err := actionHandler.Handle(&notifier.FsEvent{})
assert.NoError(t, err)
assert.True(t, handler.called)
assert.Equal(t, 1, status)
}


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
@ -18,18 +18,18 @@ import (
"sync"
"sync/atomic"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/logger"
)
// clientsMap is a struct containing the map of the connected clients
type clientsMap struct {
totalConnections atomic.Int32
totalConnections int32
mu sync.RWMutex
clients map[string]int
}
func (c *clientsMap) add(source string) {
c.totalConnections.Add(1)
atomic.AddInt32(&c.totalConnections, 1)
c.mu.Lock()
defer c.mu.Unlock()
@ -42,7 +42,7 @@ func (c *clientsMap) remove(source string) {
defer c.mu.Unlock()
if val, ok := c.clients[source]; ok {
c.totalConnections.Add(-1)
atomic.AddInt32(&c.totalConnections, -1)
c.clients[source]--
if val > 1 {
return
@ -54,7 +54,7 @@ func (c *clientsMap) remove(source string) {
}
func (c *clientsMap) getTotal() int32 {
return c.totalConnections.Load()
return atomic.LoadInt32(&c.totalConnections)
}
func (c *clientsMap) getTotalFrom(source string) int {

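For context on the clients.go hunk above: main moved the connection counter to the atomic.Int32 type added in Go 1.19, while 2.3.x keeps a plain int32 with the sync/atomic helper functions. A minimal standalone sketch of the two equivalent styles (type and method names here are illustrative, not from the diff):

package main

import (
	"fmt"
	"sync/atomic"
)

// pre-Go 1.19 style, as on the 2.3.x branch: a bare int32 plus atomic helpers.
type counterOld struct{ total int32 }

func (c *counterOld) add()       { atomic.AddInt32(&c.total, 1) }
func (c *counterOld) get() int32 { return atomic.LoadInt32(&c.total) }

// Go 1.19+ style, as on main: atomic.Int32 makes non-atomic access impossible.
type counterNew struct{ total atomic.Int32 }

func (c *counterNew) add()       { c.total.Add(1) }
func (c *counterNew) get() int32 { return c.total.Load() }

func main() {
	o, n := &counterOld{}, &counterNew{}
	o.add()
	n.add()
	fmt.Println(o.get(), n.get()) // 1 1
}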

@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common

File diff suppressed because it is too large

common/common_test.go (new file, 1099 lines)

File diff suppressed because it is too large

File diff suppressed because it is too large

common/connection_test.go (new file, 493 lines)

@ -0,0 +1,493 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"os"
"path"
"path/filepath"
"runtime"
"testing"
"time"
"github.com/pkg/sftp"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
// MockOsFs mockable OsFs
type MockOsFs struct {
vfs.Fs
hasVirtualFolders bool
}
// Name returns the name for the Fs implementation
func (fs *MockOsFs) Name() string {
return "mockOsFs"
}
// HasVirtualFolders returns true if folders are emulated
func (fs *MockOsFs) HasVirtualFolders() bool {
return fs.hasVirtualFolders
}
func (fs *MockOsFs) IsUploadResumeSupported() bool {
return !fs.hasVirtualFolders
}
func (fs *MockOsFs) Chtimes(name string, atime, mtime time.Time, isUploading bool) error {
return vfs.ErrVfsUnsupported
}
func newMockOsFs(hasVirtualFolders bool, connectionID, rootDir string) vfs.Fs {
return &MockOsFs{
Fs: vfs.NewOsFs(connectionID, rootDir, ""),
hasVirtualFolders: hasVirtualFolders,
}
}
func TestRemoveErrors(t *testing.T) {
mappedPath := filepath.Join(os.TempDir(), "map")
homePath := filepath.Join(os.TempDir(), "home")
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "remove_errors_user",
HomeDir: homePath,
},
VirtualFolders: []vfs.VirtualFolder{
{
BaseVirtualFolder: vfs.BaseVirtualFolder{
Name: filepath.Base(mappedPath),
MappedPath: mappedPath,
},
VirtualPath: "/virtualpath",
},
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolFTP, "", "", user)
err := conn.IsRemoveDirAllowed(fs, mappedPath, "/virtualpath1")
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "permission denied")
}
err = conn.RemoveFile(fs, filepath.Join(homePath, "missing_file"), "/missing_file",
vfs.NewFileInfo("info", false, 100, time.Now(), false))
assert.Error(t, err)
}
func TestSetStatMode(t *testing.T) {
oldSetStatMode := Config.SetstatMode
Config.SetstatMode = 1
fakePath := "fake path"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
HomeDir: os.TempDir(),
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := newMockOsFs(true, "", user.GetHomeDir())
conn := NewBaseConnection("", ProtocolWebDAV, "", "", user)
err := conn.handleChmod(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
err = conn.handleChown(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
err = conn.handleChtimes(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
Config.SetstatMode = 2
err = conn.handleChmod(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
err = conn.handleChtimes(fs, fakePath, fakePath, &StatAttributes{
Atime: time.Now(),
Mtime: time.Now(),
})
assert.NoError(t, err)
Config.SetstatMode = oldSetStatMode
}
func TestRecursiveRenameWalkError(t *testing.T) {
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
err := conn.checkRecursiveRenameDirPermissions(fs, fs, "/source", "/target")
assert.ErrorIs(t, err, os.ErrNotExist)
}
func TestCrossRenameFsErrors(t *testing.T) {
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
res := conn.hasSpaceForCrossRename(fs, vfs.QuotaCheckResult{}, 1, "missingsource")
assert.False(t, res)
if runtime.GOOS != osWindows {
dirPath := filepath.Join(os.TempDir(), "d")
err := os.Mkdir(dirPath, os.ModePerm)
assert.NoError(t, err)
err = os.Chmod(dirPath, 0001)
assert.NoError(t, err)
res = conn.hasSpaceForCrossRename(fs, vfs.QuotaCheckResult{}, 1, dirPath)
assert.False(t, res)
err = os.Chmod(dirPath, os.ModePerm)
assert.NoError(t, err)
err = os.Remove(dirPath)
assert.NoError(t, err)
}
}
func TestRenameVirtualFolders(t *testing.T) {
vdir := "/avdir"
u := dataprovider.User{}
u.VirtualFolders = append(u.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
Name: "name",
MappedPath: "mappedPath",
},
VirtualPath: vdir,
})
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolFTP, "", "", u)
res := conn.isRenamePermitted(fs, fs, "source", "target", vdir, "vdirtarget", nil)
assert.False(t, res)
}
func TestRenamePerms(t *testing.T) {
src := "source"
target := "target"
sub := "/sub"
subTarget := sub + "/target"
u := dataprovider.User{}
u.Permissions = map[string][]string{}
u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermCreateSymlinks,
dataprovider.PermDeleteFiles}
conn := NewBaseConnection("", ProtocolSFTP, "", "", u)
assert.False(t, conn.hasRenamePerms(src, target, nil))
u.Permissions["/"] = []string{dataprovider.PermRename}
assert.True(t, conn.hasRenamePerms(src, target, nil))
u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteFiles,
dataprovider.PermDeleteDirs}
assert.False(t, conn.hasRenamePerms(src, target, nil))
info := vfs.NewFileInfo(src, true, 0, time.Now(), false)
u.Permissions["/"] = []string{dataprovider.PermRenameFiles}
assert.False(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermRenameDirs}
assert.True(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermRename}
assert.True(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermDownload, dataprovider.PermUpload, dataprovider.PermDeleteDirs}
assert.False(t, conn.hasRenamePerms(src, target, info))
// test with different permissions between source and target
u.Permissions["/"] = []string{dataprovider.PermRename}
u.Permissions[sub] = []string{dataprovider.PermRenameFiles}
assert.False(t, conn.hasRenamePerms(src, subTarget, info))
u.Permissions[sub] = []string{dataprovider.PermRenameDirs}
assert.True(t, conn.hasRenamePerms(src, subTarget, info))
// test files
info = vfs.NewFileInfo(src, false, 0, time.Now(), false)
u.Permissions["/"] = []string{dataprovider.PermRenameDirs}
assert.False(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermRenameFiles}
assert.True(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermRename}
assert.True(t, conn.hasRenamePerms(src, target, info))
// test with different permissions between source and target
u.Permissions["/"] = []string{dataprovider.PermRename}
u.Permissions[sub] = []string{dataprovider.PermRenameDirs}
assert.False(t, conn.hasRenamePerms(src, subTarget, info))
u.Permissions[sub] = []string{dataprovider.PermRenameFiles}
assert.True(t, conn.hasRenamePerms(src, subTarget, info))
}
func TestUpdateQuotaAfterRename(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
HomeDir: filepath.Join(os.TempDir(), "home"),
},
}
mappedPath := filepath.Join(os.TempDir(), "vdir")
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: mappedPath,
},
VirtualPath: "/vdir",
QuotaFiles: -1,
QuotaSize: -1,
})
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: mappedPath,
},
VirtualPath: "/vdir1",
QuotaFiles: -1,
QuotaSize: -1,
})
err := os.MkdirAll(user.GetHomeDir(), os.ModePerm)
assert.NoError(t, err)
err = os.MkdirAll(mappedPath, os.ModePerm)
assert.NoError(t, err)
fs, err := user.GetFilesystem("id")
assert.NoError(t, err)
c := NewBaseConnection("", ProtocolSFTP, "", "", user)
request := sftp.NewRequest("Rename", "/testfile")
if runtime.GOOS != osWindows {
request.Filepath = "/dir"
request.Target = path.Join("/vdir", "dir")
testDirPath := filepath.Join(mappedPath, "dir")
err := os.MkdirAll(testDirPath, os.ModePerm)
assert.NoError(t, err)
err = os.Chmod(testDirPath, 0001)
assert.NoError(t, err)
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, testDirPath, 0)
assert.Error(t, err)
err = os.Chmod(testDirPath, os.ModePerm)
assert.NoError(t, err)
}
testFile1 := "/testfile1"
request.Target = testFile1
request.Filepath = path.Join("/vdir", "file")
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 0)
assert.Error(t, err)
err = os.WriteFile(filepath.Join(mappedPath, "file"), []byte("test content"), os.ModePerm)
assert.NoError(t, err)
request.Filepath = testFile1
request.Target = path.Join("/vdir", "file")
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
assert.NoError(t, err)
err = os.WriteFile(filepath.Join(user.GetHomeDir(), "testfile1"), []byte("test content"), os.ModePerm)
assert.NoError(t, err)
request.Target = testFile1
request.Filepath = path.Join("/vdir", "file")
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
assert.NoError(t, err)
request.Target = path.Join("/vdir1", "file")
request.Filepath = path.Join("/vdir", "file")
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
assert.NoError(t, err)
err = os.RemoveAll(mappedPath)
assert.NoError(t, err)
err = os.RemoveAll(user.GetHomeDir())
assert.NoError(t, err)
}
func TestErrorsMapping(t *testing.T) {
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{BaseUser: sdk.BaseUser{HomeDir: os.TempDir()}})
osErrorsProtocols := []string{ProtocolWebDAV, ProtocolFTP, ProtocolHTTP, ProtocolHTTPShare,
ProtocolDataRetention, ProtocolOIDC}
for _, protocol := range supportedProtocols {
conn.SetProtocol(protocol)
err := conn.GetFsError(fs, os.ErrNotExist)
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxNoSuchFile)
} else if util.Contains(osErrorsProtocols, protocol) {
assert.EqualError(t, err, os.ErrNotExist.Error())
} else {
assert.EqualError(t, err, ErrNotExist.Error())
}
err = conn.GetFsError(fs, os.ErrPermission)
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxPermissionDenied.Error())
} else {
assert.EqualError(t, err, ErrPermissionDenied.Error())
}
err = conn.GetFsError(fs, os.ErrClosed)
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
assert.Contains(t, err.Error(), os.ErrClosed.Error())
} else {
assert.EqualError(t, err, ErrGenericFailure.Error())
}
err = conn.GetFsError(fs, ErrPermissionDenied)
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
assert.Contains(t, err.Error(), ErrPermissionDenied.Error())
} else {
assert.EqualError(t, err, ErrPermissionDenied.Error())
}
err = conn.GetFsError(fs, vfs.ErrVfsUnsupported)
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxOpUnsupported.Error())
} else {
assert.EqualError(t, err, ErrOpUnsupported.Error())
}
err = conn.GetFsError(fs, vfs.ErrStorageSizeUnavailable)
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxOpUnsupported)
assert.Contains(t, err.Error(), vfs.ErrStorageSizeUnavailable.Error())
} else {
assert.EqualError(t, err, vfs.ErrStorageSizeUnavailable.Error())
}
err = conn.GetQuotaExceededError()
assert.True(t, conn.IsQuotaExceededError(err))
err = conn.GetReadQuotaExceededError()
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
} else {
assert.ErrorIs(t, err, ErrReadQuotaExceeded)
}
err = conn.GetNotExistError()
assert.True(t, conn.IsNotExistError(err))
err = conn.GetFsError(fs, nil)
assert.NoError(t, err)
err = conn.GetOpUnsupportedError()
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxOpUnsupported.Error())
} else {
assert.EqualError(t, err, ErrOpUnsupported.Error())
}
}
}
func TestMaxWriteSize(t *testing.T) {
permissions := make(map[string][]string)
permissions["/"] = []string{dataprovider.PermAny}
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
},
}
fs, err := user.GetFilesystem("123")
assert.NoError(t, err)
conn := NewBaseConnection("", ProtocolFTP, "", "", user)
quotaResult := vfs.QuotaCheckResult{
HasSpace: true,
}
size, err := conn.GetMaxWriteSize(quotaResult, false, 0, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(0), size)
conn.User.Filters.MaxUploadFileSize = 100
size, err = conn.GetMaxWriteSize(quotaResult, false, 0, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(100), size)
quotaResult.QuotaSize = 1000
size, err = conn.GetMaxWriteSize(quotaResult, false, 50, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(100), size)
quotaResult.QuotaSize = 1000
quotaResult.UsedSize = 990
size, err = conn.GetMaxWriteSize(quotaResult, false, 50, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(60), size)
quotaResult.QuotaSize = 0
quotaResult.UsedSize = 0
size, err = conn.GetMaxWriteSize(quotaResult, true, 100, fs.IsUploadResumeSupported())
assert.True(t, conn.IsQuotaExceededError(err))
assert.Equal(t, int64(0), size)
size, err = conn.GetMaxWriteSize(quotaResult, true, 10, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(90), size)
fs = newMockOsFs(true, fs.ConnectionID(), user.GetHomeDir())
size, err = conn.GetMaxWriteSize(quotaResult, true, 100, fs.IsUploadResumeSupported())
assert.EqualError(t, err, ErrOpUnsupported.Error())
assert.Equal(t, int64(0), size)
}
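The expected values in this test follow from simple quota arithmetic: with QuotaSize 1000, UsedSize 990 and an existing 50-byte file being overwritten, the remaining quota is 1000 - 990 + 50 = 60 bytes, which is smaller than the MaxUploadFileSize of 100, so 60 wins. A standalone sketch of that calculation, simplified to the overwrite cases asserted above (the resume branches and error returns of the real GetMaxWriteSize are omitted):

package main

import "fmt"

// maxWriteSize mirrors, in simplified form, the quota math exercised by
// TestMaxWriteSize for overwrites: the allowed write size is the smaller of
// the per-user file size limit and the remaining quota, where the size of an
// existing file being overwritten is credited back. Zero means unlimited.
func maxWriteSize(quotaSize, usedSize, fileSize, maxUploadFileSize int64) int64 {
	remaining := int64(0) // 0 means no quota limit applies
	if quotaSize > 0 {
		remaining = quotaSize - usedSize + fileSize
	}
	if maxUploadFileSize > 0 && (remaining == 0 || maxUploadFileSize < remaining) {
		return maxUploadFileSize
	}
	return remaining
}

func main() {
	fmt.Println(maxWriteSize(1000, 990, 50, 100)) // 60, as asserted above
	fmt.Println(maxWriteSize(0, 0, 0, 100))       // 100: only the file size limit applies
}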
func TestCheckParentDirsErrors(t *testing.T) {
permissions := make(map[string][]string)
permissions["/"] = []string{dataprovider.PermAny}
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
},
FsConfig: vfs.Filesystem{
Provider: sdk.CryptedFilesystemProvider,
},
}
c := NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err := c.CheckParentDirs("/a/dir")
assert.Error(t, err)
user.FsConfig.Provider = sdk.LocalFilesystemProvider
user.VirtualFolders = nil
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
FsConfig: vfs.Filesystem{
Provider: sdk.CryptedFilesystemProvider,
},
},
VirtualPath: "/vdir",
})
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: filepath.Clean(os.TempDir()),
},
VirtualPath: "/vdir/sub",
})
c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err = c.CheckParentDirs("/vdir/sub/dir")
assert.Error(t, err)
user = dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
},
FsConfig: vfs.Filesystem{
Provider: sdk.S3FilesystemProvider,
S3Config: vfs.S3FsConfig{
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: "buck",
Region: "us-east-1",
AccessKey: "key",
},
AccessSecret: kms.NewPlainSecret("s3secret"),
},
},
}
c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err = c.CheckParentDirs("/a/dir")
assert.NoError(t, err)
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: filepath.Clean(os.TempDir()),
},
VirtualPath: "/local/dir",
})
c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err = c.CheckParentDirs("/local/dir/sub-dir")
assert.NoError(t, err)
err = os.RemoveAll(filepath.Join(os.TempDir(), "sub-dir"))
assert.NoError(t, err)
}


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
@ -18,9 +18,7 @@ import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"os"
@ -31,15 +29,12 @@ import (
"sync"
"time"
"github.com/wneessen/go-mail"
"github.com/drakkan/sftpgo/v2/internal/command"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/httpclient"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/smtp"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/command"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)
// RetentionCheckNotification defines the supported notification methods for a retention check result
@ -54,36 +49,34 @@ const (
)
var (
// RetentionChecks is the list of active retention checks
// RetentionChecks is the list of active retention checks
RetentionChecks ActiveRetentionChecks
)
// ActiveRetentionChecks holds the active retention checks
// ActiveRetentionChecks holds the active retention checks
type ActiveRetentionChecks struct {
sync.RWMutex
Checks []RetentionCheck
}
// Get returns the active retention checks
func (c *ActiveRetentionChecks) Get(role string) []RetentionCheck {
func (c *ActiveRetentionChecks) Get() []RetentionCheck {
c.RLock()
defer c.RUnlock()
checks := make([]RetentionCheck, 0, len(c.Checks))
for _, check := range c.Checks {
if role == "" || role == check.Role {
foldersCopy := make([]dataprovider.FolderRetention, len(check.Folders))
copy(foldersCopy, check.Folders)
notificationsCopy := make([]string, len(check.Notifications))
copy(notificationsCopy, check.Notifications)
checks = append(checks, RetentionCheck{
Username: check.Username,
StartTime: check.StartTime,
Notifications: notificationsCopy,
Email: check.Email,
Folders: foldersCopy,
})
}
foldersCopy := make([]FolderRetention, len(check.Folders))
copy(foldersCopy, check.Folders)
notificationsCopy := make([]string, len(check.Notifications))
copy(notificationsCopy, check.Notifications)
checks = append(checks, RetentionCheck{
Username: check.Username,
StartTime: check.StartTime,
Notifications: notificationsCopy,
Email: check.Email,
Folders: foldersCopy,
})
}
return checks
}
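Note that Get returns deep copies of the Folders and Notifications slices, so callers cannot mutate the state of a running check through the returned snapshot. A small standalone illustration of why the copy matters (names are illustrative):

package main

import "fmt"

func main() {
	internal := []string{"/dir1"}

	// Returning the slice directly aliases the backing array...
	aliased := internal
	aliased[0] = "/mutated"
	fmt.Println(internal[0]) // "/mutated": internal state changed

	// ...while a copy, as in ActiveRetentionChecks.Get, keeps it isolated.
	internal = []string{"/dir1"}
	snapshot := make([]string, len(internal))
	copy(snapshot, internal)
	snapshot[0] = "/mutated"
	fmt.Println(internal[0]) // "/dir1": unchanged
}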
@ -105,7 +98,6 @@ func (c *ActiveRetentionChecks) Add(check RetentionCheck, user *dataprovider.Use
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.Username = user.Username
check.Role = user.Role
check.StartTime = util.GetTimeAsMsSinceEpoch(time.Now())
check.conn = conn
check.updateUserPermissions()
@ -132,6 +124,37 @@ func (c *ActiveRetentionChecks) remove(username string) bool {
return false
}
// FolderRetention defines the retention policy for the specified directory path
type FolderRetention struct {
// Path is the exposed virtual directory path, if no other specific retention is defined,
// the retention applies for sub directories too. For example if retention is defined
// for the paths "/" and "/sub" then the retention for "/" is applied for any file outside
// the "/sub" directory
Path string `json:"path"`
// Retention time in hours. 0 means exclude this path
Retention int `json:"retention"`
// DeleteEmptyDirs defines if empty directories will be deleted.
// The user needs the delete permission
DeleteEmptyDirs bool `json:"delete_empty_dirs,omitempty"`
// IgnoreUserPermissions defines whether files are deleted even if the user does not have the delete permission.
// The default is "false", which means that files will be skipped if the user does not have the permission
// to delete them. This applies to sub directories too.
IgnoreUserPermissions bool `json:"ignore_user_permissions,omitempty"`
}
func (f *FolderRetention) isValid() error {
f.Path = path.Clean(f.Path)
if !path.IsAbs(f.Path) {
return util.NewValidationError(fmt.Sprintf("folder retention: invalid path %#v, please specify an absolute POSIX path",
f.Path))
}
if f.Retention < 0 {
return util.NewValidationError(fmt.Sprintf("invalid folder retention %v, it must be greater or equal to zero",
f.Retention))
}
return nil
}
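Per the comments above, retention is resolved per virtual path, applies to subtrees unless a more specific entry exists, and Retention: 0 excludes a path. A hedged sketch of a policy that keeps /logs for a week while excluding /logs/audit (the struct is mirrored locally for illustration; paths and values are made up):

package main

import (
	"encoding/json"
	"fmt"
)

// FolderRetention mirrors the struct above for illustration only.
type FolderRetention struct {
	Path                  string `json:"path"`
	Retention             int    `json:"retention"`
	DeleteEmptyDirs       bool   `json:"delete_empty_dirs,omitempty"`
	IgnoreUserPermissions bool   `json:"ignore_user_permissions,omitempty"`
}

func main() {
	// Keep files under /logs for 7 days, but never touch /logs/audit:
	// the more specific path with Retention 0 excludes that subtree.
	folders := []FolderRetention{
		{Path: "/logs", Retention: 24 * 7, DeleteEmptyDirs: true},
		{Path: "/logs/audit", Retention: 0},
	}
	out, _ := json.MarshalIndent(folders, "", "  ")
	fmt.Println(string(out))
}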
type folderRetentionCheckResult struct {
Path string `json:"path"`
Retention int `json:"retention"`
@ -149,14 +172,13 @@ type RetentionCheck struct {
// retention check start time as unix timestamp in milliseconds
StartTime int64 `json:"start_time"`
// affected folders
Folders []dataprovider.FolderRetention `json:"folders"`
Folders []FolderRetention `json:"folders"`
// how cleanup results will be notified
Notifications []RetentionCheckNotification `json:"notifications,omitempty"`
// email to use if the notification method is set to email
Email string `json:"email,omitempty"`
Role string `json:"-"`
// Cleanup results
results []folderRetentionCheckResult `json:"-"`
results []*folderRetentionCheckResult `json:"-"`
conn *BaseConnection
}
@ -166,14 +188,14 @@ func (c *RetentionCheck) Validate() error {
nothingToDo := true
for idx := range c.Folders {
f := &c.Folders[idx]
if err := f.Validate(); err != nil {
if err := f.isValid(); err != nil {
return err
}
if f.Retention > 0 {
nothingToDo = false
}
if _, ok := folderPaths[f.Path]; ok {
return util.NewValidationError(fmt.Sprintf("duplicated folder path %q", f.Path))
return util.NewValidationError(fmt.Sprintf("duplicated folder path %#v", f.Path))
}
folderPaths[f.Path] = true
}
@ -194,19 +216,21 @@ func (c *RetentionCheck) Validate() error {
return util.NewValidationError("in order to notify results via hook you must define a data_retention_hook")
}
default:
return util.NewValidationError(fmt.Sprintf("invalid notification %q", notification))
return util.NewValidationError(fmt.Sprintf("invalid notification %#v", notification))
}
}
return nil
}
func (c *RetentionCheck) updateUserPermissions() {
for k := range c.conn.User.Permissions {
c.conn.User.Permissions[k] = []string{dataprovider.PermAny}
for _, folder := range c.Folders {
if folder.IgnoreUserPermissions {
c.conn.User.Permissions[folder.Path] = []string{dataprovider.PermAny}
}
}
}
func (c *RetentionCheck) getFolderRetention(folderPath string) (dataprovider.FolderRetention, error) {
func (c *RetentionCheck) getFolderRetention(folderPath string) (FolderRetention, error) {
dirsForPath := util.GetDirsForVirtualPath(folderPath)
for _, dirPath := range dirsForPath {
for _, folder := range c.Folders {
@ -216,7 +240,7 @@ func (c *RetentionCheck) getFolderRetention(folderPath string) (dataprovider.Fol
}
}
return dataprovider.FolderRetention{}, fmt.Errorf("unable to find folder retention for %q", folderPath)
return FolderRetention{}, fmt.Errorf("unable to find folder retention for %#v", folderPath)
}
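getFolderRetention walks the virtual path from the most specific directory up to "/" and returns the first configured match, so deeper entries win. A standalone sketch of that resolution, using the folder set from TestRetentionPermissionsAndGetFolder further down (dirsForVirtualPath is a simplified stand-in for util.GetDirsForVirtualPath):

package main

import (
	"fmt"
	"path"
)

// dirsForVirtualPath returns the path itself followed by each parent up to "/".
func dirsForVirtualPath(p string) []string {
	var dirs []string
	for {
		dirs = append(dirs, p)
		if p == "/" {
			return dirs
		}
		p = path.Dir(p)
	}
}

func main() {
	// The most specific configured path wins, mirroring getFolderRetention above.
	retention := map[string]int{"/dir2": 24 * 7, "/dir2/sub1/sub": 24}
	for _, dir := range dirsForVirtualPath("/dir2/sub1/sub/nested") {
		if hours, ok := retention[dir]; ok {
			fmt.Printf("retention for %s comes from %s: %d hours\n",
				"/dir2/sub1/sub/nested", dir, hours)
			return
		}
	}
	fmt.Println("no retention configured")
}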
func (c *RetentionCheck) removeFile(virtualPath string, info os.FileInfo) error {
@ -227,130 +251,102 @@ func (c *RetentionCheck) removeFile(virtualPath string, info os.FileInfo) error
return c.conn.RemoveFile(fs, fsPath, virtualPath, info)
}
func (c *RetentionCheck) cleanupFolder(folderPath string, recursion int) error {
func (c *RetentionCheck) cleanupFolder(folderPath string) error {
deleteFilesPerms := []string{dataprovider.PermDelete, dataprovider.PermDeleteFiles}
startTime := time.Now()
result := folderRetentionCheckResult{
result := &folderRetentionCheckResult{
Path: folderPath,
}
defer func() {
c.results = append(c.results, result)
}()
if recursion >= util.MaxRecursion {
c.results = append(c.results, result)
if !c.conn.User.HasPerm(dataprovider.PermListItems, folderPath) || !c.conn.User.HasAnyPerm(deleteFilesPerms, folderPath) {
result.Elapsed = time.Since(startTime)
result.Info = "data retention check skipped: recursion too deep"
c.conn.Log(logger.LevelError, "data retention check skipped, recursion too depth for %q: %d",
folderPath, recursion)
return util.ErrRecursionTooDeep
result.Info = "data retention check skipped: no permissions"
c.conn.Log(logger.LevelInfo, "user %#v does not have permissions to check retention on %#v, retention check skipped",
c.conn.User.Username, folderPath)
return nil
}
recursion++
folderRetention, err := c.getFolderRetention(folderPath)
if err != nil {
result.Elapsed = time.Since(startTime)
result.Error = "unable to get folder retention"
c.conn.Log(logger.LevelError, "unable to get folder retention for path %q", folderPath)
c.conn.Log(logger.LevelError, "unable to get folder retention for path %#v", folderPath)
return err
}
result.Retention = folderRetention.Retention
if folderRetention.Retention == 0 {
result.Elapsed = time.Since(startTime)
result.Info = "data retention check skipped: retention is set to 0"
c.conn.Log(logger.LevelDebug, "retention check skipped for folder %q, retention is set to 0", folderPath)
c.conn.Log(logger.LevelDebug, "retention check skipped for folder %#v, retention is set to 0", folderPath)
return nil
}
c.conn.Log(logger.LevelDebug, "start retention check for folder %q, retention: %v hours, delete empty dirs? %v",
folderPath, folderRetention.Retention, folderRetention.DeleteEmptyDirs)
lister, err := c.conn.ListDir(folderPath)
c.conn.Log(logger.LevelDebug, "start retention check for folder %#v, retention: %v hours, delete empty dirs? %v, ignore user perms? %v",
folderPath, folderRetention.Retention, folderRetention.DeleteEmptyDirs, folderRetention.IgnoreUserPermissions)
files, err := c.conn.ListDir(folderPath)
if err != nil {
result.Elapsed = time.Since(startTime)
if err == c.conn.GetNotExistError() {
result.Info = "data retention check skipped, folder does not exist"
c.conn.Log(logger.LevelDebug, "folder %q does not exist, retention check skipped", folderPath)
c.conn.Log(logger.LevelDebug, "folder %#v does not exist, retention check skipped", folderPath)
return nil
}
result.Error = fmt.Sprintf("unable to get lister for directory %q", folderPath)
c.conn.Log(logger.LevelError, "%s", result.Error)
result.Error = fmt.Sprintf("unable to list directory %#v", folderPath)
c.conn.Log(logger.LevelError, result.Error)
return err
}
defer lister.Close()
for {
files, err := lister.Next(vfs.ListerBatchSize)
finished := errors.Is(err, io.EOF)
if err := lister.convertError(err); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to list directory %q", folderPath)
c.conn.Log(logger.LevelError, "unable to list dir %q: %v", folderPath, err)
return err
}
for _, info := range files {
virtualPath := path.Join(folderPath, info.Name())
if info.IsDir() {
if err := c.cleanupFolder(virtualPath, recursion); err != nil {
for _, info := range files {
virtualPath := path.Join(folderPath, info.Name())
if info.IsDir() {
if err := c.cleanupFolder(virtualPath); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to check folder: %v", err)
c.conn.Log(logger.LevelError, "unable to cleanup folder %#v: %v", virtualPath, err)
return err
}
} else {
retentionTime := info.ModTime().Add(time.Duration(folderRetention.Retention) * time.Hour)
if retentionTime.Before(time.Now()) {
if err := c.removeFile(virtualPath, info); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to check folder: %v", err)
c.conn.Log(logger.LevelError, "unable to cleanup folder %q: %v", virtualPath, err)
result.Error = fmt.Sprintf("unable to remove file %#v: %v", virtualPath, err)
c.conn.Log(logger.LevelError, "unable to remove file %#v, retention %v: %v",
virtualPath, retentionTime, err)
return err
}
} else {
retentionTime := info.ModTime().Add(time.Duration(folderRetention.Retention) * time.Hour)
if retentionTime.Before(time.Now()) {
if err := c.removeFile(virtualPath, info); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to remove file %q: %v", virtualPath, err)
c.conn.Log(logger.LevelError, "unable to remove file %q, retention %v: %v",
virtualPath, retentionTime, err)
return err
}
c.conn.Log(logger.LevelDebug, "removed file %q, modification time: %v, retention: %v hours, retention time: %v",
virtualPath, info.ModTime(), folderRetention.Retention, retentionTime)
result.DeletedFiles++
result.DeletedSize += info.Size()
}
c.conn.Log(logger.LevelDebug, "removed file %#v, modification time: %v, retention: %v hours, retention time: %v",
virtualPath, info.ModTime(), folderRetention.Retention, retentionTime)
result.DeletedFiles++
result.DeletedSize += info.Size()
}
}
if finished {
break
}
}
lister.Close()
c.checkEmptyDirRemoval(folderPath, folderRetention.DeleteEmptyDirs)
if folderRetention.DeleteEmptyDirs {
c.checkEmptyDirRemoval(folderPath)
}
result.Elapsed = time.Since(startTime)
c.conn.Log(logger.LevelDebug, "retention check completed for folder %q, deleted files: %v, deleted size: %v bytes",
c.conn.Log(logger.LevelDebug, "retention check completed for folder %#v, deleted files: %v, deleted size: %v bytes",
folderPath, result.DeletedFiles, result.DeletedSize)
return nil
}
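The deletion decision above reduces to one comparison: a file is removed when its modification time plus the retention window, expressed in hours, lies in the past. A standalone sketch of that check:

package main

import (
	"fmt"
	"time"
)

// expired reports whether a file with the given modification time has
// outlived a retention window expressed in hours, as in cleanupFolder above.
func expired(modTime time.Time, retentionHours int) bool {
	retentionTime := modTime.Add(time.Duration(retentionHours) * time.Hour)
	return retentionTime.Before(time.Now())
}

func main() {
	old := time.Now().Add(-48 * time.Hour)
	fmt.Println(expired(old, 24)) // true: 48h old, 24h retention
	fmt.Println(expired(old, 72)) // false: still within the 72h window
}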
func (c *RetentionCheck) checkEmptyDirRemoval(folderPath string, checkVal bool) {
if folderPath == "/" || !checkVal {
return
}
for _, folder := range c.Folders {
if folderPath == folder.Path {
return
}
}
if c.conn.User.HasAnyPerm([]string{
func (c *RetentionCheck) checkEmptyDirRemoval(folderPath string) {
if folderPath != "/" && c.conn.User.HasAnyPerm([]string{
dataprovider.PermDelete,
dataprovider.PermDeleteDirs,
}, path.Dir(folderPath),
) {
lister, err := c.conn.ListDir(folderPath)
if err == nil {
files, err := lister.Next(1)
lister.Close()
if len(files) == 0 && errors.Is(err, io.EOF) {
err = c.conn.RemoveDir(folderPath)
c.conn.Log(logger.LevelDebug, "tried to remove empty dir %q, error: %v", folderPath, err)
}
files, err := c.conn.ListDir(folderPath)
if err == nil && len(files) == 0 {
err = c.conn.RemoveDir(folderPath)
c.conn.Log(logger.LevelDebug, "tryed to remove empty dir %#v, error: %v", folderPath, err)
}
}
}
// Start starts the retention check
func (c *RetentionCheck) Start() error {
func (c *RetentionCheck) Start() {
c.conn.Log(logger.LevelInfo, "retention check started")
defer RetentionChecks.remove(c.conn.User.Username)
defer c.conn.CloseFS() //nolint:errcheck
@ -358,63 +354,62 @@ func (c *RetentionCheck) Start() error {
startTime := time.Now()
for _, folder := range c.Folders {
if folder.Retention > 0 {
if err := c.cleanupFolder(folder.Path, 0); err != nil {
c.conn.Log(logger.LevelError, "retention check failed, unable to cleanup folder %q", folder.Path)
if err := c.cleanupFolder(folder.Path); err != nil {
c.conn.Log(logger.LevelError, "retention check failed, unable to cleanup folder %#v", folder.Path)
c.sendNotifications(time.Since(startTime), err)
return err
return
}
}
}
c.conn.Log(logger.LevelInfo, "retention check completed")
c.sendNotifications(time.Since(startTime), nil)
return nil
}
func (c *RetentionCheck) sendNotifications(elapsed time.Duration, err error) {
for _, notification := range c.Notifications {
switch notification {
case RetentionCheckNotificationEmail:
c.sendEmailNotification(err) //nolint:errcheck
c.sendEmailNotification(elapsed, err) //nolint:errcheck
case RetentionCheckNotificationHook:
c.sendHookNotification(elapsed, err) //nolint:errcheck
}
}
}
func (c *RetentionCheck) sendEmailNotification(errCheck error) error {
params := EventParams{}
if len(c.results) > 0 || errCheck != nil {
params.retentionChecks = append(params.retentionChecks, executedRetentionCheck{
Username: c.conn.User.Username,
ActionName: "Retention check",
Results: c.results,
})
func (c *RetentionCheck) sendEmailNotification(elapsed time.Duration, errCheck error) error {
body := new(bytes.Buffer)
data := make(map[string]any)
data["Results"] = c.results
totalDeletedFiles := 0
totalDeletedSize := int64(0)
for _, result := range c.results {
totalDeletedFiles += result.DeletedFiles
totalDeletedSize += result.DeletedSize
}
var files []*mail.File
f, err := params.getRetentionReportsAsMailAttachment()
if err != nil {
c.conn.Log(logger.LevelError, "unable to get retention report as mail attachment: %v", err)
data["HumanizeSize"] = util.ByteCountIEC
data["TotalFiles"] = totalDeletedFiles
data["TotalSize"] = totalDeletedSize
data["Elapsed"] = elapsed
data["Username"] = c.conn.User.Username
data["StartTime"] = util.GetTimeFromMsecSinceEpoch(c.StartTime)
if errCheck == nil {
data["Status"] = "Succeeded"
} else {
data["Status"] = "Failed"
}
if err := smtp.RenderRetentionReportTemplate(body, data); err != nil {
c.conn.Log(logger.LevelError, "unable to render retention check template: %v", err)
return err
}
f.Name = "retention-report.zip"
files = append(files, f)
startTime := time.Now()
var subject string
if errCheck == nil {
subject = fmt.Sprintf("Successful retention check for user %q", c.conn.User.Username)
} else {
subject = fmt.Sprintf("Retention check failed for user %q", c.conn.User.Username)
}
body := "Further details attached."
err = smtp.SendEmail([]string{c.Email}, nil, subject, body, smtp.EmailContentTypeTextPlain, files...)
if err != nil {
c.conn.Log(logger.LevelError, "unable to notify retention check result via email: %v, elapsed: %s", err,
subject := fmt.Sprintf("Retention check completed for user %#v", c.conn.User.Username)
if err := smtp.SendEmail(c.Email, subject, body.String(), smtp.EmailContentTypeTextHTML); err != nil {
c.conn.Log(logger.LevelError, "unable to notify retention check result via email: %v, elapsed: %v", err,
time.Since(startTime))
return err
}
c.conn.Log(logger.LevelInfo, "retention check result successfully notified via email, elapsed: %s", time.Since(startTime))
c.conn.Log(logger.LevelInfo, "retention check result successfully notified via email, elapsed: %v", time.Since(startTime))
return nil
}
@ -448,7 +443,7 @@ func (c *RetentionCheck) sendHookNotification(elapsed time.Duration, errCheck er
var url *url.URL
url, err := url.Parse(Config.DataRetentionHook)
if err != nil {
c.conn.Log(logger.LevelError, "invalid data retention hook %q: %v", Config.DataRetentionHook, err)
c.conn.Log(logger.LevelError, "invalid data retention hook %#v: %v", Config.DataRetentionHook, err)
return err
}
respCode := 0
@ -463,26 +458,26 @@ func (c *RetentionCheck) sendHookNotification(elapsed time.Duration, errCheck er
}
}
c.conn.Log(logger.LevelDebug, "notified result to URL: %q, status code: %v, elapsed: %v err: %v",
c.conn.Log(logger.LevelDebug, "notified result to URL: %#v, status code: %v, elapsed: %v err: %v",
url.Redacted(), respCode, time.Since(startTime), err)
return err
}
if !filepath.IsAbs(Config.DataRetentionHook) {
err := fmt.Errorf("invalid data retention hook %q", Config.DataRetentionHook)
err := fmt.Errorf("invalid data retention hook %#v", Config.DataRetentionHook)
c.conn.Log(logger.LevelError, "%v", err)
return err
}
timeout, env, args := command.GetConfig(Config.DataRetentionHook, command.HookDataRetention)
timeout, env := command.GetConfig(Config.DataRetentionHook)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
cmd := exec.CommandContext(ctx, Config.DataRetentionHook, args...)
cmd := exec.CommandContext(ctx, Config.DataRetentionHook)
cmd.Env = append(env,
fmt.Sprintf("SFTPGO_DATA_RETENTION_RESULT=%s", util.BytesToString(jsonData)))
fmt.Sprintf("SFTPGO_DATA_RETENTION_RESULT=%v", string(jsonData)))
err := cmd.Run()
c.conn.Log(logger.LevelDebug, "notified result using command: %q, elapsed: %s err: %v",
c.conn.Log(logger.LevelDebug, "notified result using command: %v, elapsed: %v err: %v",
Config.DataRetentionHook, time.Since(startTime), err)
return err
}
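On the 2.3.x side the hook executable receives the JSON-encoded results in the SFTPGO_DATA_RETENTION_RESULT environment variable (main additionally resolves hook arguments via command.GetConfig). A hedged sketch of a hook that consumes it; the payload fields shown are an illustrative subset, not the full schema:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// retentionResult models a subset of the JSON the data retention hook
// receives; the field names here are illustrative, not the full payload.
type retentionResult struct {
	Username string `json:"username"`
	Status   int    `json:"status"`
}

func main() {
	payload := os.Getenv("SFTPGO_DATA_RETENTION_RESULT")
	var result retentionResult
	if err := json.Unmarshal([]byte(payload), &result); err != nil {
		fmt.Fprintf(os.Stderr, "invalid payload: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("retention check for %s finished with status %d\n",
		result.Username, result.Status)
}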


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
@ -26,24 +26,31 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/smtp"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/smtp"
)
func TestRetentionValidation(t *testing.T) {
check := RetentionCheck{}
check.Folders = []dataprovider.FolderRetention{
check.Folders = append(check.Folders, FolderRetention{
Path: "relative",
Retention: 10,
})
err := check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "please specify an absolute POSIX path")
check.Folders = []FolderRetention{
{
Path: "/",
Retention: -1,
},
}
err := check.Validate()
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid folder retention")
check.Folders = []dataprovider.FolderRetention{
check.Folders = []FolderRetention{
{
Path: "/ab/..",
Retention: 0,
@ -54,7 +61,7 @@ func TestRetentionValidation(t *testing.T) {
assert.Contains(t, err.Error(), "nothing to delete")
assert.Equal(t, "/", check.Folders[0].Path)
check.Folders = append(check.Folders, dataprovider.FolderRetention{
check.Folders = append(check.Folders, FolderRetention{
Path: "/../..",
Retention: 24,
})
@ -62,7 +69,7 @@ func TestRetentionValidation(t *testing.T) {
require.Error(t, err)
assert.Contains(t, err.Error(), `duplicated folder path "/"`)
check.Folders = []dataprovider.FolderRetention{
check.Folders = []FolderRetention{
{
Path: "/dir1",
Retention: 48,
@ -85,10 +92,9 @@ func TestRetentionValidation(t *testing.T) {
smtpCfg := smtp.Config{
Host: "mail.example.com",
Port: 25,
From: "notification@example.com",
TemplatesPath: "templates",
}
err = smtpCfg.Initialize(configDir, true)
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.Validate()
@ -100,7 +106,7 @@ func TestRetentionValidation(t *testing.T) {
assert.NoError(t, err)
smtpCfg = smtp.Config{}
err = smtpCfg.Initialize(configDir, true)
err = smtpCfg.Initialize("..")
require.NoError(t, err)
check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationHook}
@ -118,10 +124,9 @@ func TestRetentionEmailNotifications(t *testing.T) {
smtpCfg := smtp.Config{
Host: "127.0.0.1",
Port: 2525,
From: "notification@example.com",
TemplatesPath: "templates",
}
err := smtpCfg.Initialize(configDir, true)
err := smtpCfg.Initialize("..")
require.NoError(t, err)
user := dataprovider.User{
@ -134,7 +139,7 @@ func TestRetentionEmailNotifications(t *testing.T) {
check := RetentionCheck{
Notifications: []RetentionCheckNotification{RetentionCheckNotificationEmail},
Email: "notification@example.com",
results: []folderRetentionCheckResult{
results: []*folderRetentionCheckResult{
{
Path: "/",
Retention: 24,
@ -149,36 +154,21 @@ func TestRetentionEmailNotifications(t *testing.T) {
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.sendNotifications(1*time.Second, nil)
err = check.sendEmailNotification(nil)
err = check.sendEmailNotification(1*time.Second, nil)
assert.NoError(t, err)
err = check.sendEmailNotification(errors.New("test error"))
err = check.sendEmailNotification(1*time.Second, errors.New("test error"))
assert.NoError(t, err)
check.results = nil
err = check.sendEmailNotification(nil)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "no data retention report available")
}
smtpCfg.Port = 2626
err = smtpCfg.Initialize(configDir, true)
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.sendEmailNotification(nil)
err = check.sendEmailNotification(1*time.Second, nil)
assert.Error(t, err)
check.results = []folderRetentionCheckResult{
{
Path: "/",
Retention: 24,
DeletedFiles: 20,
DeletedSize: 456789,
Elapsed: 12 * time.Second,
},
}
smtpCfg = smtp.Config{}
err = smtpCfg.Initialize(configDir, true)
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.sendEmailNotification(nil)
err = check.sendEmailNotification(1*time.Second, nil)
assert.Error(t, err)
}
@ -195,7 +185,7 @@ func TestRetentionHookNotifications(t *testing.T) {
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
results: []folderRetentionCheckResult{
results: []*folderRetentionCheckResult{
{
Path: "/",
Retention: 24,
@ -250,18 +240,21 @@ func TestRetentionPermissionsAndGetFolder(t *testing.T) {
user.Permissions["/dir2/sub2"] = []string{dataprovider.PermDelete}
check := RetentionCheck{
Folders: []dataprovider.FolderRetention{
Folders: []FolderRetention{
{
Path: "/dir2",
Retention: 24 * 7,
Path: "/dir2",
Retention: 24 * 7,
IgnoreUserPermissions: true,
},
{
Path: "/dir3",
Retention: 24 * 7,
Path: "/dir3",
Retention: 24 * 7,
IgnoreUserPermissions: false,
},
{
Path: "/dir2/sub1/sub",
Retention: 24,
Path: "/dir2/sub1/sub",
Retention: 24,
IgnoreUserPermissions: true,
},
},
}
@ -271,10 +264,12 @@ func TestRetentionPermissionsAndGetFolder(t *testing.T) {
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.updateUserPermissions()
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir1"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub1"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub2"])
assert.Equal(t, []string{dataprovider.PermListItems, dataprovider.PermDelete}, conn.User.Permissions["/"])
assert.Equal(t, []string{dataprovider.PermListItems}, conn.User.Permissions["/dir1"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub1/sub"])
assert.Equal(t, []string{dataprovider.PermCreateDirs}, conn.User.Permissions["/dir2/sub1"])
assert.Equal(t, []string{dataprovider.PermDelete}, conn.User.Permissions["/dir2/sub2"])
_, err := check.getFolderRetention("/")
assert.Error(t, err)
@ -305,7 +300,7 @@ func TestRetentionCheckAddRemove(t *testing.T) {
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Folders: []dataprovider.FolderRetention{
Folders: []FolderRetention{
{
Path: "/",
Retention: 48,
@ -314,7 +309,7 @@ func TestRetentionCheckAddRemove(t *testing.T) {
Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
}
assert.NotNil(t, RetentionChecks.Add(check, &user))
checks := RetentionChecks.Get("")
checks := RetentionChecks.Get()
require.Len(t, checks, 1)
assert.Equal(t, username, checks[0].Username)
assert.Greater(t, checks[0].StartTime, int64(0))
@ -326,45 +321,10 @@ func TestRetentionCheckAddRemove(t *testing.T) {
assert.Nil(t, RetentionChecks.Add(check, &user))
assert.True(t, RetentionChecks.remove(username))
require.Len(t, RetentionChecks.Get(""), 0)
require.Len(t, RetentionChecks.Get(), 0)
assert.False(t, RetentionChecks.remove(username))
}
func TestRetentionCheckRole(t *testing.T) {
username := "retuser"
role1 := "retrole1"
role2 := "retrole2"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
Role: role1,
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Folders: []dataprovider.FolderRetention{
{
Path: "/",
Retention: 48,
},
},
Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
}
assert.NotNil(t, RetentionChecks.Add(check, &user))
checks := RetentionChecks.Get("")
require.Len(t, checks, 1)
assert.Empty(t, checks[0].Role)
checks = RetentionChecks.Get(role1)
require.Len(t, checks, 1)
checks = RetentionChecks.Get(role2)
require.Len(t, checks, 0)
user.Role = ""
assert.Nil(t, RetentionChecks.Add(check, &user))
assert.True(t, RetentionChecks.remove(username))
require.Len(t, RetentionChecks.Get(""), 0)
}
func TestCleanupErrors(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
@ -374,7 +334,7 @@ func TestCleanupErrors(t *testing.T) {
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := &RetentionCheck{
Folders: []dataprovider.FolderRetention{
Folders: []FolderRetention{
{
Path: "/path",
Retention: 48,
@ -387,11 +347,8 @@ func TestCleanupErrors(t *testing.T) {
err := check.removeFile("missing file", nil)
assert.Error(t, err)
err = check.cleanupFolder("/", 0)
err = check.cleanupFolder("/")
assert.Error(t, err)
err = check.cleanupFolder("/", 1000)
assert.ErrorIs(t, err, util.ErrRecursionTooDeep)
assert.True(t, RetentionChecks.remove(user.Username))
}

common/defender.go (new file, 342 lines)

@ -0,0 +1,342 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"encoding/json"
"fmt"
"net"
"os"
"strings"
"sync"
"time"
"github.com/yl2chen/cidranger"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// HostEvent is the enumerable for the supported host events
type HostEvent int
// Supported host events
const (
HostEventLoginFailed HostEvent = iota
HostEventUserNotFound
HostEventNoLoginTried
HostEventLimitExceeded
)
// Supported defender drivers
const (
DefenderDriverMemory = "memory"
DefenderDriverProvider = "provider"
)
var (
supportedDefenderDrivers = []string{DefenderDriverMemory, DefenderDriverProvider}
)
// Defender defines the interface that a defender must implement
type Defender interface {
GetHosts() ([]dataprovider.DefenderEntry, error)
GetHost(ip string) (dataprovider.DefenderEntry, error)
AddEvent(ip string, event HostEvent)
IsBanned(ip string) bool
GetBanTime(ip string) (*time.Time, error)
GetScore(ip string) (int, error)
DeleteHost(ip string) bool
Reload() error
}
// DefenderConfig defines the "defender" configuration
type DefenderConfig struct {
// Set to true to enable the defender
Enabled bool `json:"enabled" mapstructure:"enabled"`
// Defender implementation to use; we support "memory" and "provider".
// Using "provider" as driver you can share the defender events among
// multiple SFTPGo instances. For a single instance "memory" provider will
// be much faster
Driver string `json:"driver" mapstructure:"driver"`
// BanTime is the number of minutes that a host is banned
BanTime int `json:"ban_time" mapstructure:"ban_time"`
// Percentage increase of the ban time if a banned host tries to connect again
BanTimeIncrement int `json:"ban_time_increment" mapstructure:"ban_time_increment"`
// Threshold value for banning a client
Threshold int `json:"threshold" mapstructure:"threshold"`
// Score for invalid login attempts, eg. non-existent user accounts or
// client disconnected for inactivity without authentication attempts
ScoreInvalid int `json:"score_invalid" mapstructure:"score_invalid"`
// Score for valid login attempts, eg. user accounts that exist
ScoreValid int `json:"score_valid" mapstructure:"score_valid"`
// Score for limit exceeded events, generated from the rate limiters or for max connections
// per-host exceeded
ScoreLimitExceeded int `json:"score_limit_exceeded" mapstructure:"score_limit_exceeded"`
// Defines the time window, in minutes, for tracking client errors.
// A host is banned if it has exceeded the defined threshold during
// the last observation time minutes
ObservationTime int `json:"observation_time" mapstructure:"observation_time"`
// The number of banned IPs and host scores kept in memory will vary between the
// soft and hard limit for the "memory" driver. For the "provider" driver the
// soft limit is ignored and the hard limit is used to limit the number of entries
// to return when you request the entire host list from the defender
EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
// Path to a file containing a list of IP addresses and/or networks to never ban
SafeListFile string `json:"safelist_file" mapstructure:"safelist_file"`
// Path to a file containing a list of IP addresses and/or networks to always ban
BlockListFile string `json:"blocklist_file" mapstructure:"blocklist_file"`
// List of IP addresses and/or networks to never ban.
// For large lists prefer SafeListFile
SafeList []string `json:"safelist" mapstructure:"safelist"`
// List of IP addresses and/or networks to always ban.
// For large lists prefer BlockListFile
BlockList []string `json:"blocklist" mapstructure:"blocklist"`
}
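Taken together, the fields above mean: each negative event adds a score, and a host whose accumulated score within the ObservationTime window reaches Threshold is banned for BanTime minutes, with repeat bans extended by BanTimeIncrement percent. A hedged example configuration consistent with the validate() rules further down (every score below the threshold, positive durations, hard limit above the soft limit); the struct is mirrored locally and all values are illustrative:

package main

import "fmt"

// defenderConfig mirrors the key fields of DefenderConfig above, for
// illustration only; the real struct carries json/mapstructure tags.
type defenderConfig struct {
	Enabled            bool
	Driver             string
	BanTime            int // minutes
	BanTimeIncrement   int // percent
	Threshold          int
	ScoreInvalid       int
	ScoreValid         int
	ScoreLimitExceeded int
	ObservationTime    int // minutes
	EntriesSoftLimit   int
	EntriesHardLimit   int
}

func main() {
	// With these values, three failed logins from an existing account
	// (3 * ScoreValid) or two attempts against unknown accounts
	// (2 * ScoreInvalid) within the 30-minute window reach the threshold
	// and trigger a 15-minute ban, extended by 50% on repeat offences.
	cfg := defenderConfig{
		Enabled:            true,
		Driver:             "memory",
		BanTime:            15,
		BanTimeIncrement:   50,
		Threshold:          3,
		ScoreInvalid:       2,
		ScoreValid:         1,
		ScoreLimitExceeded: 2,
		ObservationTime:    30,
		EntriesSoftLimit:   100,
		EntriesHardLimit:   150,
	}
	fmt.Printf("%+v\n", cfg)
}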
type baseDefender struct {
config *DefenderConfig
sync.RWMutex
safeList *HostList
blockList *HostList
}
// Reload reloads block and safe lists
func (d *baseDefender) Reload() error {
blockList, err := loadHostListFromFile(d.config.BlockListFile)
if err != nil {
return err
}
blockList = addEntriesToList(d.config.BlockList, blockList, "blocklist")
d.Lock()
d.blockList = blockList
d.Unlock()
safeList, err := loadHostListFromFile(d.config.SafeListFile)
if err != nil {
return err
}
safeList = addEntriesToList(d.config.SafeList, safeList, "safelist")
d.Lock()
d.safeList = safeList
d.Unlock()
return nil
}
func (d *baseDefender) isBanned(ip string) bool {
if d.blockList != nil && d.blockList.isListed(ip) {
// permanent ban
return true
}
return false
}
func (d *baseDefender) getScore(event HostEvent) int {
var score int
switch event {
case HostEventLoginFailed:
score = d.config.ScoreValid
case HostEventLimitExceeded:
score = d.config.ScoreLimitExceeded
case HostEventUserNotFound, HostEventNoLoginTried:
score = d.config.ScoreInvalid
}
return score
}
// HostListFile defines the structure expected for safe/block list files
type HostListFile struct {
IPAddresses []string `json:"addresses"`
CIDRNetworks []string `json:"networks"`
}
// HostList defines the structure used to keep the HostListFile in memory
type HostList struct {
IPAddresses map[string]bool
Ranges cidranger.Ranger
}
func (h *HostList) isListed(ip string) bool {
if _, ok := h.IPAddresses[ip]; ok {
return true
}
ok, err := h.Ranges.Contains(net.ParseIP(ip))
if err != nil {
return false
}
return ok
}
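The files referenced by SafeListFile and BlockListFile are JSON documents in the HostListFile shape above: single addresses under "addresses", CIDR networks under "networks". A minimal sketch that emits such a file body (the struct is mirrored locally; values are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// HostListFile mirrors the struct above for illustration only.
type HostListFile struct {
	IPAddresses  []string `json:"addresses"`
	CIDRNetworks []string `json:"networks"`
}

func main() {
	list := HostListFile{
		IPAddresses:  []string{"192.168.1.10"},
		CIDRNetworks: []string{"10.8.0.0/24"},
	}
	out, _ := json.MarshalIndent(list, "", "  ")
	fmt.Println(string(out)) // write this to the file set as safelist_file/blocklist_file
}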
type hostEvent struct {
dateTime time.Time
score int
}
type hostScore struct {
TotalScore int
Events []hostEvent
}
// validate returns an error if the configuration is invalid
func (c *DefenderConfig) validate() error {
if !c.Enabled {
return nil
}
if c.ScoreInvalid >= c.Threshold {
return fmt.Errorf("score_invalid %v must be lower than threshold %v", c.ScoreInvalid, c.Threshold)
}
if c.ScoreValid >= c.Threshold {
return fmt.Errorf("score_valid %v must be lower than threshold %v", c.ScoreValid, c.Threshold)
}
if c.ScoreLimitExceeded >= c.Threshold {
return fmt.Errorf("score_limit_exceeded %v must be lower than threshold %v", c.ScoreLimitExceeded, c.Threshold)
}
if c.BanTime <= 0 {
return fmt.Errorf("invalid ban_time %v", c.BanTime)
}
if c.BanTimeIncrement <= 0 {
return fmt.Errorf("invalid ban_time_increment %v", c.BanTimeIncrement)
}
if c.ObservationTime <= 0 {
return fmt.Errorf("invalid observation_time %v", c.ObservationTime)
}
if c.EntriesSoftLimit <= 0 {
return fmt.Errorf("invalid entries_soft_limit %v", c.EntriesSoftLimit)
}
if c.EntriesHardLimit <= c.EntriesSoftLimit {
return fmt.Errorf("invalid entries_hard_limit %v must be > %v", c.EntriesHardLimit, c.EntriesSoftLimit)
}
return nil
}
func loadHostListFromFile(name string) (*HostList, error) {
if name == "" {
return nil, nil
}
if !util.IsFileInputValid(name) {
return nil, fmt.Errorf("invalid host list file name %#v", name)
}
info, err := os.Stat(name)
if err != nil {
return nil, err
}
// opinionated max size, you should avoid big host lists
if info.Size() > 1048576*5 { // 5MB
return nil, fmt.Errorf("host list file %#v is too big: %v bytes", name, info.Size())
}
content, err := os.ReadFile(name)
if err != nil {
return nil, fmt.Errorf("unable to read input file %#v: %v", name, err)
}
var hostList HostListFile
err = json.Unmarshal(content, &hostList)
if err != nil {
return nil, err
}
if len(hostList.CIDRNetworks) > 0 || len(hostList.IPAddresses) > 0 {
result := &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
ipCount := 0
cdrCount := 0
for _, ip := range hostList.IPAddresses {
if net.ParseIP(ip) == nil {
logger.Warn(logSender, "", "unable to parse IP %#v", ip)
continue
}
result.IPAddresses[ip] = true
ipCount++
}
for _, cidrNet := range hostList.CIDRNetworks {
_, network, err := net.ParseCIDR(cidrNet)
if err != nil {
logger.Warn(logSender, "", "unable to parse CIDR network %#v: %v", cidrNet, err)
continue
}
err = result.Ranges.Insert(cidranger.NewBasicRangerEntry(*network))
if err == nil {
cdrCount++
}
}
logger.Info(logSender, "", "list %#v loaded, ip addresses loaded: %v/%v networks loaded: %v/%v",
name, ipCount, len(hostList.IPAddresses), cdrCount, len(hostList.CIDRNetworks))
return result, nil
}
return nil, nil
}
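// Caller's note, based on the checks above: an empty but syntactically valid
// file yields (nil, nil), so a nil *HostList must be treated as "nothing
// listed", exactly as isBanned does with its d.blockList != nil guard:
//
//	hostList, err := loadHostListFromFile(path)
//	if err != nil {
//		return err
//	}
//	if hostList != nil && hostList.isListed(ip) {
//		// listed
//	}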
func addEntriesToList(entries []string, hostList *HostList, listName string) *HostList {
if len(entries) == 0 {
return hostList
}
if hostList == nil {
hostList = &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
}
ipCount := 0
ipLoaded := 0
cdrCount := 0
cdrLoaded := 0
for _, entry := range entries {
entry = strings.TrimSpace(entry)
if strings.LastIndex(entry, "/") > 0 {
cdrCount++
_, network, err := net.ParseCIDR(entry)
if err != nil {
logger.Warn(logSender, "", "unable to parse CIDR network %#v: %v", entry, err)
continue
}
err = hostList.Ranges.Insert(cidranger.NewBasicRangerEntry(*network))
if err == nil {
cdrLoaded++
}
} else {
ipCount++
if net.ParseIP(entry) == nil {
logger.Warn(logSender, "", "unable to parse IP %#v", entry)
continue
}
hostList.IPAddresses[entry] = true
ipLoaded++
}
}
logger.Info(logSender, "", "%s from config loaded, ip addresses loaded: %v/%v networks loaded: %v/%v",
listName, ipLoaded, ipCount, cdrLoaded, cdrCount)
return hostList
}
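// Putting the two loaders together: Reload (top of this file) builds each
// list from the optional file first, then merges the entries configured
// inline; a rough sketch for the block list side:
//
//	blockList, err := loadHostListFromFile(d.config.BlockListFile)
//	if err != nil {
//		return err
//	}
//	blockList = addEntriesToList(d.config.BlockList, blockList, "blocklist")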


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,93 +10,50 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt"
"net"
"os"
"path/filepath"
"runtime"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/yl2chen/cidranger"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
)
func TestBasicDefender(t *testing.T) {
entries := []dataprovider.IPListEntry{
{
IPOrNet: "172.16.1.1/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeDeny,
},
{
IPOrNet: "172.16.1.2/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeDeny,
},
{
IPOrNet: "10.8.0.0/24",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeDeny,
},
{
IPOrNet: "192.168.1.1/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeDeny,
},
{
IPOrNet: "192.168.1.2/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeDeny,
},
{
IPOrNet: "10.8.9.0/24",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeDeny,
},
{
IPOrNet: "172.16.1.3/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeAllow,
},
{
IPOrNet: "172.16.1.4/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeAllow,
},
{
IPOrNet: "192.168.8.0/24",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeAllow,
},
{
IPOrNet: "192.168.1.3/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeAllow,
},
{
IPOrNet: "192.168.1.4/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeAllow,
},
{
IPOrNet: "192.168.9.0/24",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeAllow,
},
bl := HostListFile{
IPAddresses: []string{"172.16.1.1", "172.16.1.2"},
CIDRNetworks: []string{"10.8.0.0/24"},
}
sl := HostListFile{
IPAddresses: []string{"172.16.1.3", "172.16.1.4"},
CIDRNetworks: []string{"192.168.8.0/24"},
}
blFile := filepath.Join(os.TempDir(), "bl.json")
slFile := filepath.Join(os.TempDir(), "sl.json")
for idx := range entries {
e := entries[idx]
err := dataprovider.AddIPListEntry(&e, "", "", "")
assert.NoError(t, err)
}
data, err := json.Marshal(bl)
assert.NoError(t, err)
err = os.WriteFile(blFile, data, os.ModePerm)
assert.NoError(t, err)
data, err = json.Marshal(sl)
assert.NoError(t, err)
err = os.WriteFile(slFile, data, os.ModePerm)
assert.NoError(t, err)
config := &DefenderConfig{
Enabled: true,
@ -105,26 +62,35 @@ func TestBasicDefender(t *testing.T) {
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreNoAuth: 2,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
SafeListFile: "slFile",
BlockListFile: "blFile",
SafeList: []string{"192.168.1.3", "192.168.1.4", "192.168.9.0/24"},
BlockList: []string{"192.168.1.1", "192.168.1.2", "10.8.9.0/24"},
}
_, err = newInMemoryDefender(config)
assert.Error(t, err)
config.BlockListFile = blFile
_, err = newInMemoryDefender(config)
assert.Error(t, err)
config.SafeListFile = slFile
d, err := newInMemoryDefender(config)
assert.NoError(t, err)
defender := d.(*memoryDefender)
assert.True(t, defender.IsBanned("172.16.1.1", ProtocolSSH))
assert.True(t, defender.IsBanned("192.168.1.1", ProtocolFTP))
assert.False(t, defender.IsBanned("172.16.1.10", ProtocolSSH))
assert.False(t, defender.IsBanned("192.168.1.10", ProtocolSSH))
assert.False(t, defender.IsBanned("10.8.2.3", ProtocolSSH))
assert.False(t, defender.IsBanned("10.9.2.3", ProtocolSSH))
assert.True(t, defender.IsBanned("10.8.0.3", ProtocolSSH))
assert.True(t, defender.IsBanned("10.8.9.3", ProtocolSSH))
assert.False(t, defender.IsBanned("invalid ip", ProtocolSSH))
assert.True(t, defender.IsBanned("172.16.1.1"))
assert.True(t, defender.IsBanned("192.168.1.1"))
assert.False(t, defender.IsBanned("172.16.1.10"))
assert.False(t, defender.IsBanned("192.168.1.10"))
assert.False(t, defender.IsBanned("10.8.2.3"))
assert.False(t, defender.IsBanned("10.9.2.3"))
assert.True(t, defender.IsBanned("10.8.0.3"))
assert.True(t, defender.IsBanned("10.8.9.3"))
assert.False(t, defender.IsBanned("invalid ip"))
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 0, defender.countHosts())
hosts, err := defender.GetHosts()
@ -133,15 +99,15 @@ func TestBasicDefender(t *testing.T) {
_, err = defender.GetHost("10.8.0.4")
assert.Error(t, err)
defender.AddEvent("172.16.1.4", ProtocolSSH, HostEventLoginFailed)
defender.AddEvent("192.168.1.4", ProtocolSSH, HostEventLoginFailed)
defender.AddEvent("192.168.8.4", ProtocolSSH, HostEventUserNotFound)
defender.AddEvent("172.16.1.3", ProtocolSSH, HostEventLimitExceeded)
defender.AddEvent("192.168.1.3", ProtocolSSH, HostEventLimitExceeded)
defender.AddEvent("172.16.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.8.4", HostEventUserNotFound)
defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
defender.AddEvent("192.168.1.3", HostEventLimitExceeded)
assert.Equal(t, 0, defender.countHosts())
testIP := "12.34.56.78"
defender.AddEvent(testIP, ProtocolSSH, HostEventLoginFailed)
defender.AddEvent(testIP, HostEventLoginFailed)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
score, err := defender.GetScore(testIP)
@ -161,7 +127,7 @@ func TestBasicDefender(t *testing.T) {
banTime, err := defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.Nil(t, banTime)
defender.AddEvent(testIP, ProtocolSSH, HostEventLimitExceeded)
defender.AddEvent(testIP, HostEventLimitExceeded)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
score, err = defender.GetScore(testIP)
@ -174,8 +140,8 @@ func TestBasicDefender(t *testing.T) {
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
defender.AddEvent(testIP, ProtocolSSH, HostEventUserNotFound)
defender.AddEvent(testIP, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, 1, defender.countBanned())
score, err = defender.GetScore(testIP)
@ -202,11 +168,11 @@ func TestBasicDefender(t *testing.T) {
testIP2 := "12.34.56.80"
testIP3 := "12.34.56.81"
defender.AddEvent(testIP1, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP2, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP1, HostEventNoLoginTried)
defender.AddEvent(testIP2, HostEventNoLoginTried)
assert.Equal(t, 2, defender.countHosts())
time.Sleep(20 * time.Millisecond)
defender.AddEvent(testIP3, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP3, HostEventNoLoginTried)
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
// testIP1 and testIP2 should be removed
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
@ -220,8 +186,8 @@ func TestBasicDefender(t *testing.T) {
assert.NoError(t, err)
assert.Equal(t, 2, score)
defender.AddEvent(testIP3, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP3, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP3, HostEventNoLoginTried)
defender.AddEvent(testIP3, HostEventNoLoginTried)
// IP3 is now banned
banTime, err = defender.GetBanTime(testIP3)
assert.NoError(t, err)
@ -230,7 +196,7 @@ func TestBasicDefender(t *testing.T) {
time.Sleep(20 * time.Millisecond)
for i := 0; i < 3; i++ {
defender.AddEvent(testIP1, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP1, HostEventNoLoginTried)
}
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, config.EntriesSoftLimit, defender.countBanned())
@ -245,9 +211,9 @@ func TestBasicDefender(t *testing.T) {
assert.NotNil(t, banTime)
for i := 0; i < 3; i++ {
defender.AddEvent(testIP, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
time.Sleep(10 * time.Millisecond)
defender.AddEvent(testIP3, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP3, HostEventNoLoginTried)
}
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countBanned())
@ -255,7 +221,7 @@ func TestBasicDefender(t *testing.T) {
banTime, err = defender.GetBanTime(testIP3)
assert.NoError(t, err)
if assert.NotNil(t, banTime) {
assert.True(t, defender.IsBanned(testIP3, ProtocolFTP))
assert.True(t, defender.IsBanned(testIP3))
// ban time should increase
newBanTime, err := defender.GetBanTime(testIP3)
assert.NoError(t, err)
@ -265,10 +231,10 @@ func TestBasicDefender(t *testing.T) {
assert.True(t, defender.DeleteHost(testIP3))
assert.False(t, defender.DeleteHost(testIP3))
for _, e := range entries {
err := dataprovider.DeleteIPListEntry(e.IPOrNet, e.Type, "", "", "")
assert.NoError(t, err)
}
err = os.Remove(slFile)
assert.NoError(t, err)
err = os.Remove(blFile)
assert.NoError(t, err)
}
func TestExpiredHostBans(t *testing.T) {
@ -298,14 +264,14 @@ func TestExpiredHostBans(t *testing.T) {
assert.NoError(t, err)
assert.Len(t, res, 0)
assert.False(t, defender.IsBanned(testIP, ProtocolFTP))
assert.False(t, defender.IsBanned(testIP))
_, err = defender.GetHost(testIP)
assert.Error(t, err)
_, ok := defender.banned[testIP]
assert.True(t, ok)
// now add an event for an expired banned ip, it should be removed
defender.AddEvent(testIP, ProtocolFTP, HostEventLoginFailed)
assert.False(t, defender.IsBanned(testIP, ProtocolFTP))
defender.AddEvent(testIP, HostEventLoginFailed)
assert.False(t, defender.IsBanned(testIP))
entry, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, testIP, entry.IP)
@ -347,6 +313,94 @@ func TestExpiredHostBans(t *testing.T) {
assert.True(t, ok)
}
func TestLoadHostListFromFile(t *testing.T) {
_, err := loadHostListFromFile(".")
assert.Error(t, err)
hostsFilePath := filepath.Join(os.TempDir(), "hostfile")
content := make([]byte, 1048576*6)
_, err = rand.Read(content)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, content, os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
hl := HostListFile{
IPAddresses: []string{},
CIDRNetworks: []string{},
}
asJSON, err := json.Marshal(hl)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err := loadHostListFromFile(hostsFilePath)
assert.NoError(t, err)
assert.Nil(t, hostList)
hl.IPAddresses = append(hl.IPAddresses, "invalidip")
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
assert.NoError(t, err)
assert.Len(t, hostList.IPAddresses, 0)
hl.IPAddresses = nil
hl.CIDRNetworks = append(hl.CIDRNetworks, "invalid net")
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
assert.NoError(t, err)
assert.NotNil(t, hostList)
assert.Len(t, hostList.IPAddresses, 0)
assert.Equal(t, 0, hostList.Ranges.Len())
if runtime.GOOS != "windows" {
err = os.Chmod(hostsFilePath, 0111)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
err = os.Chmod(hostsFilePath, 0644)
assert.NoError(t, err)
}
err = os.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
err = os.Remove(hostsFilePath)
assert.NoError(t, err)
}
func TestAddEntriesToHostList(t *testing.T) {
name := "testList"
hostlist := addEntriesToList([]string{"192.168.6.1", "10.7.0.0/25"}, nil, name)
require.NotNil(t, hostlist)
assert.True(t, hostlist.isListed("192.168.6.1"))
assert.False(t, hostlist.isListed("192.168.6.2"))
assert.True(t, hostlist.isListed("10.7.0.28"))
assert.False(t, hostlist.isListed("10.7.0.129"))
// load invalid values
hostlist = addEntriesToList([]string{"invalidip", "invalidnet/24"}, nil, name)
require.NotNil(t, hostlist)
assert.Len(t, hostlist.IPAddresses, 0)
assert.Equal(t, 0, hostlist.Ranges.Len())
}
func TestDefenderCleanup(t *testing.T) {
d := memoryDefender{
baseDefender: baseDefender{
@ -435,31 +489,6 @@ func TestDefenderCleanup(t *testing.T) {
assert.Equal(t, 0, score)
}
func TestDefenderDelay(t *testing.T) {
d := memoryDefender{
baseDefender: baseDefender{
config: &DefenderConfig{
ObservationTime: 1,
EntriesSoftLimit: 2,
EntriesHardLimit: 3,
LoginDelay: LoginDelay{
Success: 50,
PasswordFailed: 200,
},
},
},
}
startTime := time.Now()
d.DelayLogin(nil)
elapsed := time.Since(startTime)
assert.Less(t, elapsed, time.Millisecond*100)
startTime = time.Now()
d.DelayLogin(ErrInternalFailure)
elapsed = time.Since(startTime)
assert.Greater(t, elapsed, time.Millisecond*150)
}
func TestDefenderConfig(t *testing.T) {
c := DefenderConfig{}
err := c.validate()
@ -482,11 +511,6 @@ func TestDefenderConfig(t *testing.T) {
require.Error(t, err)
c.ScoreValid = 1
c.ScoreNoAuth = 10
err = c.validate()
require.Error(t, err)
c.ScoreNoAuth = 2
c.BanTime = 0
err = c.validate()
require.Error(t, err)
@ -516,20 +540,6 @@ func TestDefenderConfig(t *testing.T) {
c.EntriesHardLimit = 20
err = c.validate()
require.NoError(t, err)
c = DefenderConfig{
Enabled: true,
ScoreInvalid: -1,
ScoreLimitExceeded: -1,
ScoreNoAuth: -1,
ScoreValid: -1,
}
err = c.validate()
require.Error(t, err)
assert.Equal(t, 0, c.ScoreInvalid)
assert.Equal(t, 0, c.ScoreValid)
assert.Equal(t, 0, c.ScoreLimitExceeded)
assert.Equal(t, 0, c.ScoreNoAuth)
}
func BenchmarkDefenderBannedSearch(b *testing.B) {
@ -547,7 +557,7 @@ func BenchmarkDefenderBannedSearch(b *testing.B) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
d.IsBanned("192.168.1.1", ProtocolSSH)
d.IsBanned("192.168.1.1")
}
}
@ -563,7 +573,7 @@ func BenchmarkCleanup(b *testing.B) {
for i := 0; i < b.N; i++ {
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
d.AddEvent(ip.String(), ProtocolSSH, HostEventLoginFailed)
d.AddEvent(ip.String(), HostEventLoginFailed)
if d.countHosts() > d.config.EntriesHardLimit {
panic("too many hosts")
}
@ -574,10 +584,72 @@ func BenchmarkCleanup(b *testing.B) {
}
}
func BenchmarkDefenderBannedSearchWithBlockList(b *testing.B) {
d := getDefenderForBench()
d.blockList = &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
ip, ipnet, err := net.ParseCIDR("129.8.0.0/12") // 1048574 ip addresses
if err != nil {
panic(err)
}
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
d.blockList.IPAddresses[ip.String()] = true
}
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("10.8.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
if err := d.blockList.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
panic(err)
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
d.IsBanned("192.168.1.1")
}
}
func BenchmarkHostListSearch(b *testing.B) {
hostlist := &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
ip, ipnet, _ := net.ParseCIDR("172.16.0.0/16")
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
hostlist.IPAddresses[ip.String()] = true
}
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("10.8.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
if err := hostlist.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
panic(err)
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if hostlist.isListed("192.167.1.2") {
panic("should not be listed")
}
}
}
func BenchmarkCIDRanger(b *testing.B) {
ranger := cidranger.NewPCTrieRanger()
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("192.168.%d.1/24", i)
cidr := fmt.Sprintf("192.168.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
if err := ranger.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
panic(err)
@ -597,7 +669,7 @@ func BenchmarkCIDRanger(b *testing.B) {
func BenchmarkNetContains(b *testing.B) {
var nets []*net.IPNet
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("192.168.%d.1/24", i)
cidr := fmt.Sprintf("192.168.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
nets = append(nets, network)
}


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,22 +10,21 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"sync/atomic"
"time"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
type dbDefender struct {
baseDefender
lastCleanup atomic.Int64
lastCleanup time.Time
}
func newDBDefender(config *DefenderConfig) (Defender, error) {
@ -33,17 +32,16 @@ func newDBDefender(config *DefenderConfig) (Defender, error) {
if err != nil {
return nil, err
}
ipList, err := dataprovider.NewIPList(dataprovider.IPListTypeDefender)
if err != nil {
return nil, err
}
defender := &dbDefender{
baseDefender: baseDefender{
config: config,
ipList: ipList,
},
lastCleanup: time.Time{},
}
if err := defender.Reload(); err != nil {
return nil, err
}
defender.lastCleanup.Store(0)
return defender, nil
}
@ -61,10 +59,13 @@ func (d *dbDefender) GetHost(ip string) (dataprovider.DefenderEntry, error) {
// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects.
func (d *dbDefender) IsBanned(ip, protocol string) bool {
if d.baseDefender.isBanned(ip, protocol) {
func (d *dbDefender) IsBanned(ip string) bool {
d.RLock()
if d.baseDefender.isBanned(ip) {
d.RUnlock()
return true
}
d.RUnlock()
_, err := dataprovider.IsDefenderHostBanned(ip)
if err != nil {
@ -88,38 +89,29 @@ func (d *dbDefender) DeleteHost(ip string) bool {
}
// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned.
// Returns true if the IP is in the defender's safe list.
func (d *dbDefender) AddEvent(ip, protocol string, event HostEvent) bool {
if d.IsSafe(ip, protocol) {
return true
// This method must be called for clients not yet banned
func (d *dbDefender) AddEvent(ip string, event HostEvent) {
d.RLock()
if d.safeList != nil && d.safeList.isListed(ip) {
d.RUnlock()
return
}
d.RUnlock()
score := d.baseDefender.getScore(event)
host, err := dataprovider.AddDefenderEvent(ip, score, d.getStartObservationTime())
if err != nil {
return false
return
}
d.baseDefender.logEvent(ip, protocol, event, host.Score)
if host.Score > d.config.Threshold {
d.baseDefender.logBan(ip, protocol)
banTime := time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
err = dataprovider.SetDefenderBanTime(ip, util.GetTimeAsMsSinceEpoch(banTime))
if err == nil {
eventManager.handleIPBlockedEvent(EventParams{
Event: ipBlockedEventName,
IP: ip,
Timestamp: time.Now(),
Status: 1,
})
}
}
if err == nil {
d.cleanup()
}
return false
}
// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
@ -165,17 +157,15 @@ func (d *dbDefender) getStartObservationTime() int64 {
}
func (d *dbDefender) getLastCleanup() time.Time {
val := d.lastCleanup.Load()
if val == 0 {
return time.Time{}
}
return util.GetTimeFromMsecSinceEpoch(val)
d.RLock()
defer d.RUnlock()
return d.lastCleanup
}
func (d *dbDefender) setLastCleanup(when time.Time) {
if when.IsZero() {
d.lastCleanup.Store(0)
return
}
d.lastCleanup.Store(util.GetTimeAsMsSinceEpoch(when))
d.Lock()
defer d.Unlock()
d.lastCleanup = when
}
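// A hedged sketch of how these accessors are used by cleanup() (not shown in
// this hunk): the expensive database cleanup runs at most once per
// observation window.
//
//	if time.Since(d.getLastCleanup()) > time.Duration(d.config.ObservationTime)*time.Minute {
//		d.setLastCleanup(time.Now())
//		// ... remove expired defender events/bans via the dataprovider ...
//	}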


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,64 +10,28 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"encoding/hex"
"encoding/json"
"os"
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/util"
)
func TestBasicDbDefender(t *testing.T) {
if !isDbDefenderSupported() {
t.Skip("this test is not supported with the current database provider")
}
entries := []dataprovider.IPListEntry{
{
IPOrNet: "172.16.1.1/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeDeny,
},
{
IPOrNet: "172.16.1.2/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeDeny,
},
{
IPOrNet: "10.8.0.0/24",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeDeny,
},
{
IPOrNet: "172.16.1.3/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeAllow,
},
{
IPOrNet: "172.16.1.4/32",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeAllow,
},
{
IPOrNet: "192.168.8.0/24",
Type: dataprovider.IPListTypeDefender,
Mode: dataprovider.ListModeAllow,
},
}
for idx := range entries {
e := entries[idx]
err := dataprovider.AddIPListEntry(&e, "", "", "")
assert.NoError(t, err)
}
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
@ -75,36 +39,65 @@ func TestBasicDbDefender(t *testing.T) {
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreNoAuth: 2,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 10,
SafeListFile: "slFile",
BlockListFile: "blFile",
}
_, err := newDBDefender(config)
assert.Error(t, err)
bl := HostListFile{
IPAddresses: []string{"172.16.1.1", "172.16.1.2"},
CIDRNetworks: []string{"10.8.0.0/24"},
}
sl := HostListFile{
IPAddresses: []string{"172.16.1.3", "172.16.1.4"},
CIDRNetworks: []string{"192.168.8.0/24"},
}
blFile := filepath.Join(os.TempDir(), "bl.json")
slFile := filepath.Join(os.TempDir(), "sl.json")
data, err := json.Marshal(bl)
assert.NoError(t, err)
err = os.WriteFile(blFile, data, os.ModePerm)
assert.NoError(t, err)
data, err = json.Marshal(sl)
assert.NoError(t, err)
err = os.WriteFile(slFile, data, os.ModePerm)
assert.NoError(t, err)
config.BlockListFile = blFile
_, err = newDBDefender(config)
assert.Error(t, err)
config.SafeListFile = slFile
d, err := newDBDefender(config)
assert.NoError(t, err)
defender := d.(*dbDefender)
assert.True(t, defender.IsBanned("172.16.1.1", ProtocolFTP))
assert.False(t, defender.IsBanned("172.16.1.10", ProtocolSSH))
assert.False(t, defender.IsBanned("10.8.1.3", ProtocolHTTP))
assert.True(t, defender.IsBanned("10.8.0.4", ProtocolWebDAV))
assert.False(t, defender.IsBanned("invalid ip", ProtocolSSH))
assert.True(t, defender.IsBanned("172.16.1.1"))
assert.False(t, defender.IsBanned("172.16.1.10"))
assert.False(t, defender.IsBanned("10.8.1.3"))
assert.True(t, defender.IsBanned("10.8.0.4"))
assert.False(t, defender.IsBanned("invalid ip"))
hosts, err := defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
_, err = defender.GetHost("10.8.0.3")
assert.Error(t, err)
defender.AddEvent("172.16.1.4", ProtocolSSH, HostEventLoginFailed)
defender.AddEvent("192.168.8.4", ProtocolSSH, HostEventUserNotFound)
defender.AddEvent("172.16.1.3", ProtocolSSH, HostEventLimitExceeded)
defender.AddEvent("172.16.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.8.4", HostEventUserNotFound)
defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
assert.True(t, defender.getLastCleanup().IsZero())
testIP := "123.45.67.89"
defender.AddEvent(testIP, ProtocolSSH, HostEventLoginFailed)
defender.AddEvent(testIP, HostEventLoginFailed)
lastCleanup := defender.getLastCleanup()
assert.False(t, lastCleanup.IsZero())
score, err := defender.GetScore(testIP)
@ -124,7 +117,7 @@ func TestBasicDbDefender(t *testing.T) {
banTime, err := defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.Nil(t, banTime)
defender.AddEvent(testIP, ProtocolSSH, HostEventLimitExceeded)
defender.AddEvent(testIP, HostEventLimitExceeded)
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 4, score)
@ -135,8 +128,8 @@ func TestBasicDbDefender(t *testing.T) {
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
defender.AddEvent(testIP, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, score)
@ -156,7 +149,7 @@ func TestBasicDbDefender(t *testing.T) {
assert.Equal(t, 0, host.Score)
assert.NotEmpty(t, host.GetBanTime())
// ban time should increase
assert.True(t, defender.IsBanned(testIP, ProtocolSSH))
assert.True(t, defender.IsBanned(testIP))
newBanTime, err := defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.True(t, newBanTime.After(*banTime))
@ -168,9 +161,9 @@ func TestBasicDbDefender(t *testing.T) {
testIP2 := "123.45.67.91"
testIP3 := "123.45.67.92"
for i := 0; i < 3; i++ {
defender.AddEvent(testIP, ProtocolSSH, HostEventUserNotFound)
defender.AddEvent(testIP1, ProtocolSSH, HostEventNoLoginTried)
defender.AddEvent(testIP2, ProtocolSSH, HostEventUserNotFound)
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP1, HostEventNoLoginTried)
defender.AddEvent(testIP2, HostEventNoLoginTried)
}
hosts, err = defender.GetHosts()
assert.NoError(t, err)
@ -180,7 +173,7 @@ func TestBasicDbDefender(t *testing.T) {
assert.False(t, host.BanTime.IsZero())
assert.NotEmpty(t, host.GetBanTime())
}
defender.AddEvent(testIP3, ProtocolSSH, HostEventLoginFailed)
defender.AddEvent(testIP3, HostEventLoginFailed)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 4)
@ -254,10 +247,10 @@ func TestBasicDbDefender(t *testing.T) {
assert.NoError(t, err)
assert.Len(t, hosts, 0)
for _, e := range entries {
err := dataprovider.DeleteIPListEntry(e.IPOrNet, e.Type, "", "", "")
assert.NoError(t, err)
}
err = os.Remove(slFile)
assert.NoError(t, err)
err = os.Remove(blFile)
assert.NoError(t, err)
}
func TestDbDefenderCleanup(t *testing.T) {
@ -286,8 +279,6 @@ func TestDbDefenderCleanup(t *testing.T) {
assert.False(t, lastCleanup.IsZero())
defender.cleanup()
assert.Equal(t, lastCleanup, defender.getLastCleanup())
defender.setLastCleanup(time.Time{})
assert.True(t, defender.getLastCleanup().IsZero())
defender.setLastCleanup(time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4))
time.Sleep(20 * time.Millisecond)
defender.cleanup()
@ -297,7 +288,7 @@ func TestDbDefenderCleanup(t *testing.T) {
err = dataprovider.Close()
assert.NoError(t, err)
lastCleanup = util.GetTimeFromMsecSinceEpoch(time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4).UnixMilli())
lastCleanup = time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4)
defender.setLastCleanup(lastCleanup)
defender.cleanup()
// cleanup will fail and so last cleanup should be reset to the previous value


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,22 +10,20 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"sort"
"sync"
"time"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/util"
)
type memoryDefender struct {
baseDefender
sync.RWMutex
// IP addresses of the clients trying to connect are stored inside hosts,
// they are added to banned once the threshold is reached.
// A violation from a banned host will increase the ban time
@ -39,19 +37,18 @@ func newInMemoryDefender(config *DefenderConfig) (Defender, error) {
if err != nil {
return nil, err
}
ipList, err := dataprovider.NewIPList(dataprovider.IPListTypeDefender)
if err != nil {
return nil, err
}
defender := &memoryDefender{
baseDefender: baseDefender{
config: config,
ipList: ipList,
},
hosts: make(map[string]hostScore),
banned: make(map[string]time.Time),
}
if err := defender.Reload(); err != nil {
return nil, err
}
return defender, nil
}
@ -122,7 +119,7 @@ func (d *memoryDefender) GetHost(ip string) (dataprovider.DefenderEntry, error)
// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects.
func (d *memoryDefender) IsBanned(ip, protocol string) bool {
func (d *memoryDefender) IsBanned(ip string) bool {
d.RLock()
if banTime, ok := d.banned[ip]; ok {
@ -148,7 +145,7 @@ func (d *memoryDefender) IsBanned(ip, protocol string) bool {
defer d.RUnlock()
return d.baseDefender.isBanned(ip, protocol)
return d.baseDefender.isBanned(ip)
}
// DeleteHost removes the specified IP from the defender lists
@ -170,20 +167,19 @@ func (d *memoryDefender) DeleteHost(ip string) bool {
}
// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned.
// Returns true if the IP is in the defender's safe list.
func (d *memoryDefender) AddEvent(ip, protocol string, event HostEvent) bool {
if d.IsSafe(ip, protocol) {
return true
}
// This method must be called for clients not yet banned
func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
d.Lock()
defer d.Unlock()
if d.safeList != nil && d.safeList.isListed(ip) {
return
}
// ignore events for already banned hosts
if v, ok := d.banned[ip]; ok {
if v.After(time.Now()) {
return false
return
}
delete(d.banned, ip)
}
@ -207,32 +203,22 @@ func (d *memoryDefender) AddEvent(ip, protocol string, event HostEvent) bool {
idx++
}
}
d.baseDefender.logEvent(ip, protocol, event, hs.TotalScore)
hs.Events = hs.Events[:idx]
if hs.TotalScore >= d.config.Threshold {
d.baseDefender.logBan(ip, protocol)
d.banned[ip] = time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
delete(d.hosts, ip)
d.cleanupBanned()
eventManager.handleIPBlockedEvent(EventParams{
Event: ipBlockedEventName,
IP: ip,
Timestamp: time.Now(),
Status: 1,
})
} else {
d.hosts[ip] = hs
}
} else {
d.baseDefender.logEvent(ip, protocol, event, ev.score)
d.hosts[ip] = hostScore{
TotalScore: ev.score,
Events: []hostEvent{ev},
}
d.cleanupHosts()
}
return false
}
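// End-to-end sketch, assuming a config like the one exercised in
// defender_test.go (ScoreInvalid=2, Threshold=5): repeated events inside the
// observation window accumulate until the threshold bans the source.
//
//	d, _ := newInMemoryDefender(cfg)
//	for i := 0; i < 3; i++ {
//		d.AddEvent("203.0.113.7", HostEventNoLoginTried) // +2 each
//	}
//	d.IsBanned("203.0.113.7") // true: total score 6 >= Threshold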
func (d *memoryDefender) countBanned() int {


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
@ -24,8 +24,8 @@ import (
"github.com/GehirnInc/crypt/md5_crypt"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
const (


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common

common/protocol_test.go: new file, 3840 lines (diff suppressed because it is too large)


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,14 +10,14 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"errors"
"fmt"
"slices"
"net"
"sort"
"sync"
"sync/atomic"
@ -25,7 +25,7 @@ import (
"golang.org/x/time/rate"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@ -62,6 +62,8 @@ type RateLimiterConfig struct {
// Available protocols are: "SFTP", "FTP", "DAV".
// A rate limiter with no protocols defined is disabled
Protocols []string `json:"protocols" mapstructure:"protocols"`
// AllowList defines a list of IP addresses and IP ranges excluded from rate limiting
AllowList []string `json:"allow_list" mapstructure:"allow_list"`
// If the rate limit is exceeded, the defender is enabled, and this is a per-source limiter,
// a new defender event will be generated
GenerateDefenderEvents bool `json:"generate_defender_events" mapstructure:"generate_defender_events"`
@ -95,8 +97,8 @@ func (r *RateLimiterConfig) validate() error {
}
r.Protocols = util.RemoveDuplicates(r.Protocols, true)
for _, protocol := range r.Protocols {
if !slices.Contains(rateLimiterProtocolValues, protocol) {
return fmt.Errorf("invalid protocol %q", protocol)
if !util.Contains(rateLimiterProtocolValues, protocol) {
return fmt.Errorf("invalid protocol %#v", protocol)
}
}
return nil
@ -140,12 +142,23 @@ type rateLimiter struct {
globalBucket *rate.Limiter
buckets sourceBuckets
generateDefenderEvents bool
allowList []func(net.IP) bool
}
// Wait blocks until the limit allows one event to happen
// or returns an error if the time to wait exceeds the max
// allowed delay
func (rl *rateLimiter) Wait(source, protocol string) (time.Duration, error) {
func (rl *rateLimiter) Wait(source string) (time.Duration, error) {
if len(rl.allowList) > 0 {
ip := net.ParseIP(source)
if ip != nil {
for idx := range rl.allowList {
if rl.allowList[idx](ip) {
return 0, nil
}
}
}
}
var res *rate.Reservation
if rl.globalBucket != nil {
res = rl.globalBucket.Reserve()
@ -164,7 +177,7 @@ func (rl *rateLimiter) Wait(source, protocol string) (time.Duration, error) {
if delay > rl.maxDelay {
res.Cancel()
if rl.generateDefenderEvents && rl.globalBucket == nil {
AddDefenderEvent(source, protocol, HostEventLimitExceeded)
AddDefenderEvent(source, HostEventLimitExceeded)
}
return delay, fmt.Errorf("rate limit exceed, wait time to respect rate %v, max wait time allowed %v", delay, rl.maxDelay)
}
@ -173,16 +186,16 @@ func (rl *rateLimiter) Wait(source, protocol string) (time.Duration, error) {
}
type sourceRateLimiter struct {
lastActivity *atomic.Int64
lastActivity int64
bucket *rate.Limiter
}
func (s *sourceRateLimiter) updateLastActivity() {
s.lastActivity.Store(time.Now().UnixNano())
atomic.StoreInt64(&s.lastActivity, time.Now().UnixNano())
}
func (s *sourceRateLimiter) getLastActivity() int64 {
return s.lastActivity.Load()
return atomic.LoadInt64(&s.lastActivity)
}
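// Note on this plain-int64 variant: atomic.StoreInt64/LoadInt64 require the
// field to be 64-bit aligned, which Go only guarantees on 32-bit platforms
// for the first word of an allocated struct (lastActivity is kept first for
// that reason); the atomic.Int64 wrapper on the other side of this diff
// (Go 1.19+) takes care of alignment itself.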
type sourceBuckets struct {
@ -211,8 +224,7 @@ func (b *sourceBuckets) addAndReserve(r *rate.Limiter, source string) *rate.Rese
b.cleanup()
src := sourceRateLimiter{
lastActivity: new(atomic.Int64),
bucket: r,
bucket: r,
}
src.updateLastActivity()
b.buckets[source] = src


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
@ -20,6 +20,8 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/util"
)
func TestRateLimiterConfig(t *testing.T) {
@ -77,9 +79,9 @@ func TestRateLimiter(t *testing.T) {
Protocols: rateLimiterProtocolValues,
}
limiter := config.getLimiter()
_, err := limiter.Wait("", ProtocolFTP)
_, err := limiter.Wait("")
require.NoError(t, err)
_, err = limiter.Wait("", ProtocolSSH)
_, err = limiter.Wait("")
require.Error(t, err)
config.Type = int(rateLimiterTypeSource)
@ -89,17 +91,28 @@ func TestRateLimiter(t *testing.T) {
limiter = config.getLimiter()
source := "192.168.1.2"
_, err = limiter.Wait(source, ProtocolSSH)
_, err = limiter.Wait(source)
require.NoError(t, err)
_, err = limiter.Wait(source, ProtocolSSH)
_, err = limiter.Wait(source)
require.Error(t, err)
// a different source should work
_, err = limiter.Wait(source+"1", ProtocolSSH)
_, err = limiter.Wait(source + "1")
require.NoError(t, err)
allowList := []string{"192.168.1.0/24"}
allowFuncs, err := util.ParseAllowedIPAndRanges(allowList)
assert.NoError(t, err)
limiter.allowList = allowFuncs
for i := 0; i < 5; i++ {
_, err = limiter.Wait(source)
require.NoError(t, err)
}
_, err = limiter.Wait("not an ip")
require.NoError(t, err)
config.Burst = 0
limiter = config.getLimiter()
_, err = limiter.Wait(source, ProtocolSSH)
_, err = limiter.Wait(source)
require.ErrorIs(t, err, errReserve)
}
@ -118,10 +131,10 @@ func TestLimiterCleanup(t *testing.T) {
source2 := "10.8.0.2"
source3 := "10.8.0.3"
source4 := "10.8.0.4"
_, err := limiter.Wait(source1, ProtocolSSH)
_, err := limiter.Wait(source1)
assert.NoError(t, err)
time.Sleep(20 * time.Millisecond)
_, err = limiter.Wait(source2, ProtocolSSH)
_, err = limiter.Wait(source2)
assert.NoError(t, err)
time.Sleep(20 * time.Millisecond)
assert.Len(t, limiter.buckets.buckets, 2)
@ -129,7 +142,7 @@ func TestLimiterCleanup(t *testing.T) {
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source2]
assert.True(t, ok)
_, err = limiter.Wait(source3, ProtocolSSH)
_, err = limiter.Wait(source3)
assert.NoError(t, err)
assert.Len(t, limiter.buckets.buckets, 3)
_, ok = limiter.buckets.buckets[source1]
@ -139,7 +152,7 @@ func TestLimiterCleanup(t *testing.T) {
_, ok = limiter.buckets.buckets[source3]
assert.True(t, ok)
time.Sleep(20 * time.Millisecond)
_, err = limiter.Wait(source4, ProtocolSSH)
_, err = limiter.Wait(source4)
assert.NoError(t, err)
assert.Len(t, limiter.buckets.buckets, 2)
_, ok = limiter.buckets.buckets[source3]


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,36 +10,28 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"bytes"
"crypto/tls"
"crypto/x509"
"encoding/pem"
"crypto/x509/pkix"
"errors"
"fmt"
"io/fs"
"math/rand"
"os"
"path/filepath"
"slices"
"sync"
"time"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
const (
// DefaultTLSKeyPaidID defines the id to use for non-binding specific key pairs
DefaultTLSKeyPaidID = "default"
pemCRLType = "X509 CRL"
)
var (
pemCRLPrefix = []byte("-----BEGIN X509 CRL")
)
// TLSKeyPair defines the paths and the unique identifier for a TLS key pair
@ -57,11 +49,9 @@ type CertManager struct {
sync.RWMutex
caCertificates []string
caRevocationLists []string
monitorList []string
certs map[string]*tls.Certificate
certsInfo map[string]fs.FileInfo
rootCAs *x509.CertPool
crls []*x509.RevocationList
crls []*pkix.CertificateList
}
// Reload tries to reload certificate and CRLs
@ -87,19 +77,15 @@ func (m *CertManager) loadCertificates() error {
}
newCert, err := tls.LoadX509KeyPair(keyPair.Cert, keyPair.Key)
if err != nil {
logger.Error(m.logSender, "", "unable to load X509 key pair, cert file %q key file %q error: %v",
logger.Warn(m.logSender, "", "unable to load X509 key pair, cert file %#v key file %#v error: %v",
keyPair.Cert, keyPair.Key, err)
return err
}
if _, ok := certs[keyPair.ID]; ok {
logger.Error(m.logSender, "", "TLS certificate with id %q is duplicated", keyPair.ID)
return fmt.Errorf("TLS certificate with id %q is duplicated", keyPair.ID)
return fmt.Errorf("TLS certificate with id %#v is duplicated", keyPair.ID)
}
logger.Debug(m.logSender, "", "TLS certificate %q successfully loaded, id %v", keyPair.Cert, keyPair.ID)
logger.Debug(m.logSender, "", "TLS certificate %#v successfully loaded, id %v", keyPair.Cert, keyPair.ID)
certs[keyPair.ID] = &newCert
if !slices.Contains(m.monitorList, keyPair.Cert) {
m.monitorList = append(m.monitorList, keyPair.Cert)
}
}
m.Lock()
@ -109,25 +95,15 @@ func (m *CertManager) loadCertificates() error {
return nil
}
// HasCertificate returns true if there is a certificate for the specified certID
func (m *CertManager) HasCertificate(certID string) bool {
m.RLock()
defer m.RUnlock()
_, ok := m.certs[certID]
return ok
}
// GetCertificateFunc returns the loaded certificate
func (m *CertManager) GetCertificateFunc(certID string) func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
return func(_ *tls.ClientHelloInfo) (*tls.Certificate, error) {
return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
m.RLock()
defer m.RUnlock()
val, ok := m.certs[certID]
if !ok {
logger.Error(m.logSender, "", "no certificate for id %s", certID)
return nil, fmt.Errorf("no certificate for id %s", certID)
return nil, fmt.Errorf("no certificate for id %v", certID)
}
return val, nil
@ -140,13 +116,13 @@ func (m *CertManager) IsRevoked(crt *x509.Certificate, caCrt *x509.Certificate)
defer m.RUnlock()
if crt == nil || caCrt == nil {
logger.Error(m.logSender, "", "unable to verify crt %v, ca crt %v", crt, caCrt)
logger.Warn(m.logSender, "", "unable to verify crt %v ca crt %v", crt, caCrt)
return len(m.crls) > 0
}
for _, crl := range m.crls {
if crl.CheckSignatureFrom(caCrt) == nil {
for _, rc := range crl.RevokedCertificateEntries {
if !crl.HasExpired(time.Now()) && caCrt.CheckCRLSignature(crl) == nil {
for _, rc := range crl.TBSCertList.RevokedCertificates {
if rc.SerialNumber.Cmp(crt.SerialNumber) == 0 {
return true
}
@ -163,37 +139,28 @@ func (m *CertManager) LoadCRLs() error {
return nil
}
var crls []*x509.RevocationList
var crls []*pkix.CertificateList
for _, revocationList := range m.caRevocationLists {
if !util.IsFileInputValid(revocationList) {
return fmt.Errorf("invalid root CA revocation list %q", revocationList)
return fmt.Errorf("invalid root CA revocation list %#v", revocationList)
}
if revocationList != "" && !filepath.IsAbs(revocationList) {
revocationList = filepath.Join(m.configDir, revocationList)
}
crlBytes, err := os.ReadFile(revocationList)
if err != nil {
logger.Error(m.logSender, "", "unable to read revocation list %q", revocationList)
logger.Warn(m.logSender, "unable to read revocation list %#v", revocationList)
return err
}
if bytes.HasPrefix(crlBytes, pemCRLPrefix) {
block, _ := pem.Decode(crlBytes)
if block != nil && block.Type == pemCRLType {
crlBytes = block.Bytes
}
}
crl, err := x509.ParseRevocationList(crlBytes)
crl, err := x509.ParseCRL(crlBytes)
if err != nil {
logger.Error(m.logSender, "", "unable to parse revocation list %q", revocationList)
logger.Warn(m.logSender, "unable to parse revocation list %#v", revocationList)
return err
}
logger.Debug(m.logSender, "", "CRL %q successfully loaded", revocationList)
logger.Debug(m.logSender, "", "CRL %#v successfully loaded", revocationList)
crls = append(crls, crl)
if !slices.Contains(m.monitorList, revocationList) {
m.monitorList = append(m.monitorList, revocationList)
}
}
m.Lock()
@ -223,21 +190,20 @@ func (m *CertManager) LoadRootCAs() error {
for _, rootCA := range m.caCertificates {
if !util.IsFileInputValid(rootCA) {
return fmt.Errorf("invalid root CA certificate %q", rootCA)
return fmt.Errorf("invalid root CA certificate %#v", rootCA)
}
if rootCA != "" && !filepath.IsAbs(rootCA) {
rootCA = filepath.Join(m.configDir, rootCA)
}
crt, err := os.ReadFile(rootCA)
if err != nil {
logger.Error(m.logSender, "", "unable to read root CA from file %q: %v", rootCA, err)
return err
}
if rootCAs.AppendCertsFromPEM(crt) {
logger.Debug(m.logSender, "", "TLS certificate authority %q successfully loaded", rootCA)
logger.Debug(m.logSender, "", "TLS certificate authority %#v successfully loaded", rootCA)
} else {
err := fmt.Errorf("unable to load TLS certificate authority %q", rootCA)
logger.Error(m.logSender, "", "%v", err)
err := fmt.Errorf("unable to load TLS certificate authority %#v", rootCA)
logger.Warn(m.logSender, "", "%v", err)
return err
}
}
@ -261,56 +227,17 @@ func (m *CertManager) SetCARevocationLists(caRevocationLists []string) {
m.caRevocationLists = util.RemoveDuplicates(caRevocationLists, true)
}
func (m *CertManager) monitor() {
certsInfo := make(map[string]fs.FileInfo)
for _, crt := range m.monitorList {
info, err := os.Stat(crt)
if err != nil {
logger.Warn(m.logSender, "", "unable to stat certificate to monitor %q: %v", crt, err)
return
}
certsInfo[crt] = info
}
m.Lock()
isChanged := false
for k, oldInfo := range m.certsInfo {
newInfo, ok := certsInfo[k]
if ok {
if newInfo.Size() != oldInfo.Size() || newInfo.ModTime() != oldInfo.ModTime() {
logger.Debug(m.logSender, "", "change detected for certificate %q, reload required", k)
isChanged = true
}
}
}
m.certsInfo = certsInfo
m.Unlock()
if isChanged {
m.Reload() //nolint:errcheck
}
}
// NewCertManager creates a new certificate manager
func NewCertManager(keyPairs []TLSKeyPair, configDir, logSender string) (*CertManager, error) {
manager := &CertManager{
keyPairs: keyPairs,
certs: make(map[string]*tls.Certificate),
configDir: configDir,
logSender: logSender,
certs: make(map[string]*tls.Certificate),
certsInfo: make(map[string]fs.FileInfo),
}
err := manager.loadCertificates()
if err != nil {
return nil, err
}
randSecs := rand.Intn(59)
manager.monitor()
if eventScheduler != nil {
_, err = eventScheduler.AddFunc(fmt.Sprintf("@every 8h0m%ds", randSecs), manager.monitor)
}
return manager, err
return manager, nil
}
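// Usage sketch (hypothetical paths): wire the manager into a TLS listener so
// that certificates picked up by Reload are served without a restart.
//
//	m, err := NewCertManager([]TLSKeyPair{
//		{Cert: "/etc/sftpgo/server.crt", Key: "/etc/sftpgo/server.key", ID: DefaultTLSKeyPaidID},
//	}, "/etc/sftpgo", "cert_manager")
//	if err != nil {
//		return err
//	}
//	tlsConfig := &tls.Config{
//		GetCertificate: m.GetCertificateFunc(DefaultTLSKeyPaidID),
//	}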

common/tlsutils_test.go: new file, 461 lines

@ -0,0 +1,461 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"crypto/tls"
"crypto/x509"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
const (
serverCert = `-----BEGIN CERTIFICATE-----
MIIEIDCCAgigAwIBAgIRAPOR9zTkX35vSdeyGpF8Rn8wDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMjU1WhcNMjIwNzAyMjEz
MDUxWjARMQ8wDQYDVQQDEwZzZXJ2ZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
ggEKAoIBAQCte0PJhCTNqTiqdwk/s4JanKIMKUVWr2u94a+JYy5gJ9xYXrQ49SeN
m+fwhTAOqctP5zNVkFqxlBytJZg3pqCKqRoOOl1qVgL3F3o7JdhZGi67aw8QMLPx
tLPpYWnnrlUQoXRJdTlqkDqO8lOZl9HO5oZeidPZ7r5BVD6ZiujAC6Zg0jIc+EPt
qhaUJ1CStoAeRf1rNWKmDsLv5hEaDWoaHF9sNVzDQg6atZ3ici00qQj+uvEZo8mL
k6egg3rqsTv9ml2qlrRgFumt99J60hTt3tuQaAruHY80O9nGy3SCXC11daa7gszH
ElCRvhUVoOxRtB54YBEtJ0gEpFnTO9J1AgMBAAGjcTBvMA4GA1UdDwEB/wQEAwID
uDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFAgDXwPV
nhztNz+H20iNWgoIx8adMB8GA1UdIwQYMBaAFO1yCNAGr/zQTJIi8lw3w5OiuBvM
MA0GCSqGSIb3DQEBCwUAA4ICAQCR5kgIb4vAtrtsXD24n6RtU1yIXHPLNmDStVrH
uaMYNnHlLhRlQFCjHhjWvZ89FQC7FeNOITc3FpibJySyw7JfnsyEOGxEbcAS4uLB
2pdAiJPqdQtxIVcyi5vu53m1T5tm0sy8sBrGxU466aDQ8VGqjcjfTwNIyoFMd3p/
ezFRvg2BudwU9hqApgfHfLi4WCuI3hLO2tbmgDinyH0HI0YYNNweGpiBYbTLF4Tx
H6vHgD9USMZeu4+HX0IIsBiHQD7TTIe5ceREkPcNPd5qTpIvT3zKQ/KwwT90/zjP
aWmz6pLxBfjRu7MY/bDfxfRUqsrLYJCVBoaDVRWR9rhiPIFkC5JzoWD/4hdj2iis
N0+OOaJ77L+/ArFprE+7Fu3cSdYlfiNjV8R5kE29cAxKLI92CjAiTKrEuxKcQPKO
+taWNKIYYjEDZwVnzlkTIl007X0RBuzu9gh4w5NwJdt8ZOJAp0JV0Cq+UvG+FC/v
lYk82E6j1HKhf4CXmrjsrD1Fyu41mpVFOpa2ATiFGvms913MkXuyO8g99IllmDw1
D7/PN4Qe9N6Zm7yoKZM0IUw2v+SUMIdOAZ7dptO9ZjtYOfiAIYN3jM8R4JYgPiuD
DGSM9LJBJxCxI/DiO1y1Z3n9TcdDQYut8Gqdi/aYXw2YeqyHXosX5Od3vcK/O5zC
pOJTYQ==
-----END CERTIFICATE-----`
serverKey = `-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEArXtDyYQkzak4qncJP7OCWpyiDClFVq9rveGviWMuYCfcWF60
OPUnjZvn8IUwDqnLT+czVZBasZQcrSWYN6agiqkaDjpdalYC9xd6OyXYWRouu2sP
EDCz8bSz6WFp565VEKF0SXU5apA6jvJTmZfRzuaGXonT2e6+QVQ+mYrowAumYNIy
HPhD7aoWlCdQkraAHkX9azVipg7C7+YRGg1qGhxfbDVcw0IOmrWd4nItNKkI/rrx
GaPJi5OnoIN66rE7/Zpdqpa0YBbprffSetIU7d7bkGgK7h2PNDvZxst0glwtdXWm
u4LMxxJQkb4VFaDsUbQeeGARLSdIBKRZ0zvSdQIDAQABAoIBAF4sI8goq7HYwqIG
rEagM4rsrCrd3H4KC/qvoJJ7/JjGCp8OCddBfY8pquat5kCPe4aMgxlXm2P6evaj
CdZr5Ypf8Xz3we4PctyfKgMhsCfuRqAGpc6sIYJ8DY4LC2pxAExe2LlnoRtv39np
QeiGuaYPDbIUL6SGLVFZYgIHngFhbDYfL83q3Cb/PnivUGFvUVQCfRBUKO2d8KYq
TrVB5BWD2GrHor24ApQmci1OOqfbkIevkK6bk8HUfSZiZGI9LUQiPHMxi5k2x43J
nIwhZnW2N28dorKnWHg2vh7viGvinVRZ3MEyX150oCw/L6SYM4fqR6t2ZSBgNQHT
ZNoDtwECgYEA4lXMgtYqKuSlZ3TKfxAj03tJ/gbRdKcUCEGXEbdpY70tTu6KESZS
etid4Ut/sWEoPTJsgYiGbgJl571t1O8oR1UZYgh9hBGHLV6UEIt9n2PbExhE2vL3
SB7+LfO+tMvM4qKUBN+uy4GpU0NiyEEecw4x4S7MRSyHFRIDR7B6RV0CgYEAxDgS
mDaNUfSdfB5mXekLUJAwqeKRdL9RjXYaHbnoZ5kIwQ73tFikRwyTsLQwMhjE1l3z
MItTzIAyTf/BlK3dsp6bHTaT7hXIjHBsuKATN5qAuUpzTrg9+QaCawVSlQgNeF3a
iyfD4dVp66Bzn3gO757TWqmroBZ2e1owbAQvF/kCgYAKT/Jze6KMNcK7hfy78VZQ
imuCoXjlob8t6R8i9YJdwv7Pe9rakS5s3nXDEBePU2fr8eIzvK6zUHSoLF9WtlbV
eTEg4FYnsEzCam7AmjptCrWulwp8F1ng9ViLa3Gi9y4snU+1MSPbrdqzKnzTtvPW
Ni1bnzA7bp3w/dMcbxQDGQKBgB50hY5SiUS7LuZg4YqZ7UOn3aXAoMr6FvJZ7lvG
yyepPQ6aACBh0b2lWhcHIKPl7EdJdcGHHo6TJzusAqPNCKf8rh6upe9COkpx+K3/
SnxK4sffol4JgrTwKbXqsZKoGU8hYhZPKbwXn8UOtmN+AvN2N1/PDfBfDCzBJtrd
G2IhAoGBAN19976xAMDjKb2+wd/mQYA2fR7E8lodxdX3LDnblYmndTKY67nVo94M
FHPKZSN590HkFJ+wmChnOrqjtosY+N25CKMS7939EUIDrq+B+bYTWM/gcwdLXNUk
Rygw/078Z3ZDJamXmyez5WpeLFrrbmI8sLnBBmSjQvMb6vCEtQ2Z
-----END RSA PRIVATE KEY-----`
caCRT = `-----BEGIN CERTIFICATE-----
MIIE5jCCAs6gAwIBAgIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0
QXV0aDAeFw0yMTAxMDIyMTIwNTVaFw0yMjA3MDIyMTMwNTJaMBMxETAPBgNVBAMT
CENlcnRBdXRoMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA4Tiho5xW
AC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+sRKqC+Ti88OJWCV5saoyax/1S
CjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRRjxp/Bw9dHdiEb9MjLgu28Jro
9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgARainBkYjf0SwuWxHeu4nMqkp
Ak5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lvuU+DD2W2lym+YVUtRMGs1Env
k7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q8T1dCIyP9OQCKVILdc5aVFf1
cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n6ykecLEyKt1F1Y/MWY/nWUSI
8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZV2gX0a+eRlAVqaRbAhL3LaZe
bYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaEOsnGG9KFO6jh+W768qC0zLQI
CdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZf2fy7UIYN9ADLFZiorCXAZEh
CSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg73TlMsk1zSXEw0MKLUjtsw6c
rZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEAAaNFMEMwDgYDVR0PAQH/BAQD
AgEGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFO1yCNAGr/zQTJIi8lw3
w5OiuBvMMA0GCSqGSIb3DQEBCwUAA4ICAQA6gCNuM7r8mnx674dm31GxBjQy5ZwB
7CxDzYEvL/oiZ3Tv3HlPfN2LAAsJUfGnghh9DOytenL2CTZWjl/emP5eijzmlP+9
zva5I6CIMCf/eDDVsRdO244t0o4uG7+At0IgSDM3bpVaVb4RHZNjEziYChsEYY8d
HK6iwuRSvFniV6yhR/Vj1Ymi9yZ5xclqseLXiQnUB0PkfIk23+7s42cXB16653fH
O/FsPyKBLiKJArizLYQc12aP3QOrYoYD9+fAzIIzew7A5C0aanZCGzkuFpO6TRlD
Tb7ry9Gf0DfPpCgxraH8tOcmnqp/ka3hjqo/SRnnTk0IFrmmLdarJvjD46rKwBo4
MjyAIR1mQ5j8GTlSFBmSgETOQ/EYvO3FPLmra1Fh7L+DvaVzTpqI9fG3TuyyY+Ri
Fby4ycTOGSZOe5Fh8lqkX5Y47mCUJ3zHzOA1vUJy2eTlMRGpu47Eb1++Vm6EzPUP
2EF5aD+zwcssh+atZvQbwxpgVqVcyLt91RSkKkmZQslh0rnlTb68yxvUnD3zw7So
o6TAf9UvwVMEvdLT9NnFd6hwi2jcNte/h538GJwXeBb8EkfpqLKpTKyicnOdkamZ
7E9zY8SHNRYMwB9coQ/W8NvufbCgkvOoLyMXk5edbXofXl3PhNGOlraWbghBnzf5
r3rwjFsQOoZotA==
-----END CERTIFICATE-----`
caKey = `-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEA4Tiho5xWAC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+s
RKqC+Ti88OJWCV5saoyax/1SCjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRR
jxp/Bw9dHdiEb9MjLgu28Jro9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgA
RainBkYjf0SwuWxHeu4nMqkpAk5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lv
uU+DD2W2lym+YVUtRMGs1Envk7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q
8T1dCIyP9OQCKVILdc5aVFf1cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n
6ykecLEyKt1F1Y/MWY/nWUSI8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZ
V2gX0a+eRlAVqaRbAhL3LaZebYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaE
OsnGG9KFO6jh+W768qC0zLQICdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZ
f2fy7UIYN9ADLFZiorCXAZEhCSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg
73TlMsk1zSXEw0MKLUjtsw6crZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEA
AQKCAgAV+ElERYbaI5VyufvVnFJCH75ypPoc6sVGLEq2jbFVJJcq/5qlZCC8oP1F
Xj7YUR6wUiDzK1Hqb7EZ2SCHGjlZVrCVi+y+NYAy7UuMZ+r+mVSkdhmypPoJPUVv
GOTqZ6VB46Cn3eSl0WknvoWr7bD555yPmEuiSc5zNy74yWEJTidEKAFGyknowcTK
sG+w1tAuPLcUKQ44DGB+rgEkcHL7C5EAa7upzx0C3RmZFB+dTAVyJdkBMbFuOhTS
sB7DLeTplR7/4mp9da7EQw51ZXC1DlZOEZt++4/desXsqATNAbva1OuzrLG7mMKe
N/PCBh/aERQcsCvgUmaXqGQgqN1Jhw8kbXnjZnVd9iE7TAh7ki3VqNy1OMgTwOex
bBYWaCqHuDYIxCjeW0qLJcn0cKQ13FVYrxgInf4Jp82SQht5b/zLL3IRZEyKcLJF
kL6g1wlmTUTUX0z8eZzlM0ZCrqtExjgElMO/rV971nyNV5WU8Og3NmE8/slqMrmJ
DlrQr9q0WJsDKj1IMe46EUM6ix7bbxC5NIfJ96dgdxZDn6ghjca6iZYqqUACvmUj
cq08s3R4Ouw9/87kn11wwGBx2yDueCwrjKEGc0RKjweGbwu0nBxOrkJ8JXz6bAv7
1OKfYaX3afI9B8x4uaiuRs38oBQlg9uAYFfl4HNBPuQikGLmsQKCAQEA8VjFOsaz
y6NMZzKXi7WZ48uu3ed5x3Kf6RyDr1WvQ1jkBMv9b6b8Gp1CRnPqviRBto9L8QAg
bCXZTqnXzn//brskmW8IZgqjAlf89AWa53piucu9/hgidrHRZobs5gTqev28uJdc
zcuw1g8c3nCpY9WeTjHODzX5NXYRLFpkazLfYa6c8Q9jZR4KKrpdM+66fxL0JlOd
7dN0oQtEqEAugsd3cwkZgvWhY4oM7FGErrZoDLy273ZdJzi/vU+dThyVzfD8Ab8u
VxxuobVMT/S608zbe+uaiUdov5s96OkCl87403UNKJBH+6LNb3rjBBLE9NPN5ET9
JLQMrYd+zj8jQwKCAQEA7uU5I9MOufo9bIgJqjY4Ie1+Ex9DZEMUYFAvGNCJCVcS
mwOdGF8AWzIavTLACmEDJO7t/OrBdoo4L7IEsCNjgA3WiIwIMiWUVqveAGUMEXr6
TRI5EolV6FTqqIP6AS+BAeBq7G1ELgsTrWNHh11rW3+3kBMuOCn77PUQ8WHwcq/r
teZcZn4Ewcr6P7cBODgVvnBPhe/J8xHS0HFVCeS1CvaiNYgees5yA80Apo9IPjDJ
YWawLjmH5wUBI5yDFVp067wjqJnoKPSoKwWkZXqUk+zgFXx5KT0gh/c5yh1frASp
q6oaYnHEVC5qj2SpT1GFLonTcrQUXiSkiUudvNu1GQKCAQEAmko+5GFtRe0ihgLQ
4S76r6diJli6AKil1Fg3U1r6zZpBQ1PJtJxTJQyN9w5Z7q6tF/GqAesrzxevQdvQ
rCImAPtA3ZofC2UXawMnIjWHHx6diNvYnV1+gtUQ4nO1dSOFZ5VZFcUmPiZO6boF
oaryj3FcX+71JcJCjEvrlKhA9Es0hXUkvfMxfs5if4he1zlyHpTWYr4oA4egUugq
P0mwskikc3VIyvEO+NyjgFxo72yLPkFSzemkidN8uKDyFqKtnlfGM7OuA2CY1WZa
3+67lXWshx9KzyJIs92iCYkU8EoPxtdYzyrV6efdX7x27v60zTOut5TnJJS6WiF6
Do5MkwKCAQAxoR9IyP0DN/BwzqYrXU42Bi+t603F04W1KJNQNWpyrUspNwv41yus
xnD1o0hwH41Wq+h3JZIBfV+E0RfWO9Pc84MBJQ5C1LnHc7cQH+3s575+Km3+4tcd
CB8j2R8kBeloKWYtLdn/Mr/ownpGreqyvIq2/LUaZ+Z1aMgXTYB1YwS16mCBzmZQ
mEl62RsAwe4KfSyYJ6OtwqMoOJMxFfliiLBULK4gVykqjvk2oQeiG+KKQJoTUFJi
dRCyhD5bPkqR+qjxyt+HOqSBI4/uoROi05AOBqjpH1DVzk+MJKQOiX1yM0l98CKY
Vng+x+vAla/0Zh+ucajVkgk4mKPxazdpAoIBAQC17vWk4KYJpF2RC3pKPcQ0PdiX
bN35YNlvyhkYlSfDNdyH3aDrGiycUyW2mMXUgEDFsLRxHMTL+zPC6efqO6sTAJDY
cBptsW4drW/qo8NTx3dNOisLkW+mGGJOR/w157hREFr29ymCVMYu/Z7fVWIeSpCq
p3u8YX8WTljrxwSczlGjvpM7uJx3SfYRM4TUoy+8wU8bK74LywLa5f60bQY6Dye0
Gqd9O6OoPfgcQlwjC5MiAofeqwPJvU0hQOPoehZyNLAmOCWXTYWaTP7lxO1r6+NE
M3hGYqW3W8Ixua71OskCypBZg/HVlIP/lzjRzdx+VOB2hbWVth2Iup/Z1egW
-----END RSA PRIVATE KEY-----`
caCRL = `-----BEGIN X509 CRL-----
MIICpzCBkAIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0QXV0aBcN
MjEwMTAyMjEzNDA1WhcNMjMwMTAyMjEzNDA1WjAkMCICEQC+l04DbHWMyC3fG09k
VXf+Fw0yMTAxMDIyMTM0MDVaoCMwITAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJc
N8OTorgbzDANBgkqhkiG9w0BAQsFAAOCAgEAEJ7z+uNc8sqtxlOhSdTGDzX/xput
E857kFQkSlMnU2whQ8c+XpYrBLA5vIZJNSSwohTpM4+zVBX/bJpmu3wqqaArRO9/
YcW5mQk9Anvb4WjQW1cHmtNapMTzoC9AiYt/OWPfy+P6JCgCr4Hy6LgQyIRL6bM9
VYTalolOm1qa4Y5cIeT7iHq/91mfaqo8/6MYRjLl8DOTROpmw8OS9bCXkzGKdCat
AbAzwkQUSauyoCQ10rpX+Y64w9ng3g4Dr20aCqPf5osaqplEJ2HTK8ljDTidlslv
9anQj8ax3Su89vI8+hK+YbfVQwrThabgdSjQsn+veyx8GlP8WwHLAQ379KjZjWg+
OlOSwBeU1vTdP0QcB8X5C2gVujAyuQekbaV86xzIBOj7vZdfHZ6ee30TZ2FKiMyg
7/N2OqW0w77ChsjB4MSHJCfuTgIeg62GzuZXLM+Q2Z9LBdtm4Byg+sm/P52adOEg
gVb2Zf4KSvsAmA0PIBlu449/QXUFcMxzLFy7mwTeZj2B4Ln0Hm0szV9f9R8MwMtB
SyLYxVH+mgqaR6Jkk22Q/yYyLPaELfafX5gp/AIXG8n0zxfVaTvK3auSgb1Q6ZLS
5QH9dSIsmZHlPq7GoSXmKpMdjUL8eaky/IMteioyXgsBiATzl5L2dsw6MTX3MDF0
QbDK+MzhmbKfDxs=
-----END X509 CRL-----`
client1Crt = `-----BEGIN CERTIFICATE-----
MIIEITCCAgmgAwIBAgIRAIppZHoj1hM80D7WzTEKLuAwDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzEwWhcNMjIwNzAyMjEz
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiVbJtH
XVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd20jP
yhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1UHw4
3Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZmH859
DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0habT
cDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBSJ5GIv
zIrE4ZSQt2+CGblKTDswizAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
zDANBgkqhkiG9w0BAQsFAAOCAgEALh4f5GhvNYNou0Ab04iQBbLEdOu2RlbK1B5n
K9P/umYenBHMY/z6HT3+6tpcHsDuqE8UVdq3f3Gh4S2Gu9m8PRitT+cJ3gdo9Plm
3rD4ufn/s6rGg3ppydXcedm17492tbccUDWOBZw3IO/ASVq13WPgT0/Kev7cPq0k
sSdSNhVeXqx8Myc2/d+8GYyzbul2Kpfa7h9i24sK49E9ftnSmsIvngONo08eT1T0
3wAOyK2981LIsHaAWcneShKFLDB6LeXIT9oitOYhiykhFlBZ4M1GNlSNfhQ8IIQP
xbqMNXCLkW4/BtLhGEEcg0QVso6Kudl9rzgTfQknrdF7pHp6rS46wYUjoSyIY6dl
oLmnoAVJX36J3QPWelePI9e07X2wrTfiZWewwgw3KNRWjd6/zfPLe7GoqXnK1S2z
PT8qMfCaTwKTtUkzXuTFvQ8bAo2My/mS8FOcpkt2oQWeOsADHAUX7fz5BCoa2DL3
k/7Mh4gVT+JYZEoTwCFuYHgMWFWe98naqHi9lB4yR981p1QgXgxO7qBeipagKY1F
LlH1iwXUqZ3MZnkNA+4e1Fglsw3sa/rC+L98HnznJ/YbTfQbCP6aQ1qcOymrjMud
7MrFwqZjtd/SK4Qx1VpK6jGEAtPgWBTUS3p9ayg6lqjMBjsmySWfvRsDQbq6P5Ct
O/e3EH8=
-----END CERTIFICATE-----`
client1Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiV
bJtHXVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd
20jPyhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1
UHw43Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZm
H859DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0
habTcDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABAoIBAEBSjVFqtbsp0byR
aXvyrtLX1Ng7h++at2jca85Ihq//jyqbHTje8zPuNAKI6eNbmb0YGr5OuEa4pD9N
ssDmMsKSoG/lRwwcm7h4InkSvBWpFShvMgUaohfHAHzsBYxfnh+TfULsi0y7c2n6
t/2OZcOTRkkUDIITnXYiw93ibHHv2Mv2bBDu35kGrcK+c2dN5IL5ZjTjMRpbJTe2
44RBJbdTxHBVSgoGBnugF+s2aEma6Ehsj70oyfoVpM6Aed5kGge0A5zA1JO7WCn9
Ay/DzlULRXHjJIoRWd2NKvx5n3FNppUc9vJh2plRHalRooZ2+MjSf8HmXlvG2Hpb
ScvmWgECgYEA1G+A/2KnxWsr/7uWIJ7ClcGCiNLdk17Pv3DZ3G4qUsU2ITftfIbb
tU0Q/b19na1IY8Pjy9ptP7t74/hF5kky97cf1FA8F+nMj/k4+wO8QDI8OJfzVzh9
PwielA5vbE+xmvis5Hdp8/od1Yrc/rPSy2TKtPFhvsqXjqoUmOAjDP8CgYEAwZjH
9dt1sc2lx/rMxihlWEzQ3JPswKW9/LJAmbRBoSWF9FGNjbX7uhWtXRKJkzb8ZAwa
88azluNo2oftbDD/+jw8b2cDgaJHlLAkSD4O1D1RthW7/LKD15qZ/oFsRb13NV85
ZNKtwslXGbfVNyGKUVFm7fVA8vBAOUey+LKDFj8CgYEAg8WWstOzVdYguMTXXuyb
ruEV42FJaDyLiSirOvxq7GTAKuLSQUg1yMRBIeQEo2X1XU0JZE3dLodRVhuO4EXP
g7Dn4X7Th9HSvgvNuIacowWGLWSz4Qp9RjhGhXhezUSx2nseY6le46PmFavJYYSR
4PBofMyt4PcyA6Cknh+KHmkCgYEAnTriG7ETE0a7v4DXUpB4TpCEiMCy5Xs2o8Z5
ZNva+W+qLVUWq+MDAIyechqeFSvxK6gRM69LJ96lx+XhU58wJiFJzAhT9rK/g+jS
bsHH9WOfu0xHkuHA5hgvvV2Le9B2wqgFyva4HJy82qxMxCu/VG/SMqyfBS9OWbb7
ibQhdq0CgYAl53LUWZsFSZIth1vux2LVOsI8C3X1oiXDGpnrdlQ+K7z57hq5EsRq
GC+INxwXbvKNqp5h0z2MvmKYPDlGVTgw8f8JjM7TkN17ERLcydhdRrMONUryZpo8
1xTob+8blyJgfxZUIAKbMbMbIiU0WAF0rfD/eJJwS4htOW/Hfv4TGA==
-----END RSA PRIVATE KEY-----`
// client 2 crt is revoked
client2Crt = `-----BEGIN CERTIFICATE-----
MIIEITCCAgmgAwIBAgIRAL6XTgNsdYzILd8bT2RVd/4wDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzIwWhcNMjIwNzAyMjEz
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY+6hi
jcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN/4jQ
tNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2HkO/xG
oZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB1YFM
s8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhtsC871
nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBTB84v5
t9HqhLhMODbn6oYkEQt3KzAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
zDANBgkqhkiG9w0BAQsFAAOCAgEALGtBCve5k8tToL3oLuXp/oSik6ovIB/zq4I/
4zNMYPU31+ZWz6aahysgx1JL1yqTa3Qm8o2tu52MbnV10dM7CIw7c/cYa+c+OPcG
5LF97kp13X+r2axy+CmwM86b4ILaDGs2Qyai6VB6k7oFUve+av5o7aUrNFpqGCJz
HWdtHZSVA3JMATzy0TfWanwkzreqfdw7qH0yZ9bDURlBKAVWrqnCstva9jRuv+AI
eqxr/4Ro986TFjJdoAP3Vr16CPg7/B6GA/KmsBWJrpeJdPWq4i2gpLKvYZoy89qD
mUZf34RbzcCtV4NvV1DadGnt4us0nvLrvS5rL2+2uWD09kZYq9RbLkvgzF/cY0fz
i7I1bi5XQ+alWe0uAk5ZZL/D+GTRYUX1AWwCqwJxmHrMxcskMyO9pXvLyuSWRDLo
YNBrbX9nLcfJzVCp+X+9sntTHjs4l6Cw+fLepJIgtgqdCHtbhTiv68vSM6cgb4br
6n2xrXRKuioiWFOrTSRr+oalZh8dGJ/xvwY8IbWknZAvml9mf1VvfE7Ma5P777QM
fsbYVTq0Y3R/5hIWsC3HA5z6MIM8L1oRe/YyhP3CTmrCHkVKyDOosGXpGz+JVcyo
cfYkY5A3yFKB2HaCwZSfwFmRhxkrYWGEbHv3Cd9YkZs1J3hNhGFZyVMC9Uh0S85a
6zdDidU=
-----END CERTIFICATE-----`
client2Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY
+6hijcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN
/4jQtNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2Hk
O/xGoZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB
1YFMs8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhts
C871nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABAoIBAFatstVb1KdQXsq0
cFpui8zTKOUiduJOrDkWzTygAmlEhYtrccdfXu7OWz0x0lvBLDVGK3a0I/TGrAzj
4BuFY+FM/egxTVt9in6fmA3et4BS1OAfCryzUdfK6RV//8L+t+zJZ/qKQzWnugpy
QYjDo8ifuMFwtvEoXizaIyBNLAhEp9hnrv+Tyi2O2gahPvCHsD48zkyZRCHYRstD
NH5cIrwz9/RJgPO1KI+QsJE7Nh7stR0sbr+5TPU4fnsL2mNhMUF2TJrwIPrc1yp+
YIUjdnh3SO88j4TQT3CIrWi8i4pOy6N0dcVn3gpCRGaqAKyS2ZYUj+yVtLO4KwxZ
SZ1lNvECgYEA78BrF7f4ETfWSLcBQ3qxfLs7ibB6IYo2x25685FhZjD+zLXM1AKb
FJHEXUm3mUYrFJK6AFEyOQnyGKBOLs3S6oTAswMPbTkkZeD1Y9O6uv0AHASLZnK6
pC6ub0eSRF5LUyTQ55Jj8D7QsjXJueO8v+G5ihWhNSN9tB2UA+8NBmkCgYEA+weq
cvoeMIEMBQHnNNLy35bwfqrceGyPIRBcUIvzQfY1vk7KW6DYOUzC7u+WUzy/hA52
DjXVVhua2eMQ9qqtOav7djcMc2W9RbLowxvno7K5qiCss013MeWk64TCWy+WMp5A
AVAtOliC3hMkIKqvR2poqn+IBTh1449agUJQqTMCgYEAu06IHGq1GraV6g9XpGF5
wqoAlMzUTdnOfDabRilBf/YtSr+J++ThRcuwLvXFw7CnPZZ4TIEjDJ7xjj3HdxeE
fYYjineMmNd40UNUU556F1ZLvJfsVKizmkuCKhwvcMx+asGrmA+tlmds4p3VMS50
KzDtpKzLWlmU/p/RINWlRmkCgYBy0pHTn7aZZx2xWKqCDg+L2EXPGqZX6wgZDpu7
OBifzlfM4ctL2CmvI/5yPmLbVgkgBWFYpKUdiujsyyEiQvWTUKhn7UwjqKDHtcsk
G6p7xS+JswJrzX4885bZJ9Oi1AR2yM3sC9l0O7I4lDbNPmWIXBLeEhGMmcPKv/Kc
91Ff4wKBgQCF3ur+Vt0PSU0ucrPVHjCe7tqazm0LJaWbPXL1Aw0pzdM2EcNcW/MA
w0kqpr7MgJ94qhXCBcVcfPuFN9fBOadM3UBj1B45Cz3pptoK+ScI8XKno6jvVK/p
xr5cb9VBRBtB9aOKVfuRhpatAfS2Pzm2Htae9lFn7slGPUmu2hkjDw==
-----END RSA PRIVATE KEY-----`
)
func TestLoadCertificate(t *testing.T) {
caCrtPath := filepath.Join(os.TempDir(), "testca.crt")
caCrlPath := filepath.Join(os.TempDir(), "testcrl.crt")
certPath := filepath.Join(os.TempDir(), "test.crt")
keyPath := filepath.Join(os.TempDir(), "test.key")
err := os.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
assert.NoError(t, err)
err = os.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
assert.NoError(t, err)
err = os.WriteFile(certPath, []byte(serverCert), os.ModePerm)
assert.NoError(t, err)
err = os.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
assert.NoError(t, err)
keyPairs := []TLSKeyPair{
{
Cert: certPath,
Key: keyPath,
ID: DefaultTLSKeyPaidID,
},
{
Cert: certPath,
Key: keyPath,
ID: DefaultTLSKeyPaidID,
},
}
certManager, err := NewCertManager(keyPairs, configDir, logSenderTest)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "is duplicated")
}
assert.Nil(t, certManager)
keyPairs = []TLSKeyPair{
{
Cert: certPath,
Key: keyPath,
ID: DefaultTLSKeyPaidID,
},
}
certManager, err = NewCertManager(keyPairs, configDir, logSenderTest)
assert.NoError(t, err)
certFunc := certManager.GetCertificateFunc(DefaultTLSKeyPaidID)
if assert.NotNil(t, certFunc) {
hello := &tls.ClientHelloInfo{
ServerName: "localhost",
CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305},
}
cert, err := certFunc(hello)
assert.NoError(t, err)
assert.Equal(t, certManager.certs[DefaultTLSKeyPaidID], cert)
}
certFunc = certManager.GetCertificateFunc("unknownID")
if assert.NotNil(t, certFunc) {
hello := &tls.ClientHelloInfo{
ServerName: "localhost",
CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305},
}
_, err = certFunc(hello)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "no certificate for id unknownID")
}
}
certManager.SetCACertificates(nil)
err = certManager.LoadRootCAs()
assert.NoError(t, err)
certManager.SetCACertificates([]string{""})
err = certManager.LoadRootCAs()
assert.Error(t, err)
certManager.SetCACertificates([]string{"invalid"})
err = certManager.LoadRootCAs()
assert.Error(t, err)
// loading the key as root CA must fail
certManager.SetCACertificates([]string{keyPath})
err = certManager.LoadRootCAs()
assert.Error(t, err)
certManager.SetCACertificates([]string{certPath})
err = certManager.LoadRootCAs()
assert.NoError(t, err)
rootCa := certManager.GetRootCAs()
assert.NotNil(t, rootCa)
err = certManager.Reload()
assert.NoError(t, err)
certManager.SetCARevocationLists(nil)
err = certManager.LoadCRLs()
assert.NoError(t, err)
certManager.SetCARevocationLists([]string{""})
err = certManager.LoadCRLs()
assert.Error(t, err)
certManager.SetCARevocationLists([]string{"invalid crl"})
err = certManager.LoadCRLs()
assert.Error(t, err)
// this is not a CRL and must fail
certManager.SetCARevocationLists([]string{caCrtPath})
err = certManager.LoadCRLs()
assert.Error(t, err)
certManager.SetCARevocationLists([]string{caCrlPath})
err = certManager.LoadCRLs()
assert.NoError(t, err)
crt, err := tls.X509KeyPair([]byte(caCRT), []byte(caKey))
assert.NoError(t, err)
x509CAcrt, err := x509.ParseCertificate(crt.Certificate[0])
assert.NoError(t, err)
crt, err = tls.X509KeyPair([]byte(client1Crt), []byte(client1Key))
assert.NoError(t, err)
x509crt, err := x509.ParseCertificate(crt.Certificate[0])
if assert.NoError(t, err) {
assert.False(t, certManager.IsRevoked(x509crt, x509CAcrt))
}
crt, err = tls.X509KeyPair([]byte(client2Crt), []byte(client2Key))
assert.NoError(t, err)
x509crt, err = x509.ParseCertificate(crt.Certificate[0])
if assert.NoError(t, err) {
assert.True(t, certManager.IsRevoked(x509crt, x509CAcrt))
}
assert.True(t, certManager.IsRevoked(nil, nil))
err = os.Remove(caCrlPath)
assert.NoError(t, err)
err = certManager.Reload()
assert.Error(t, err)
err = os.Remove(certPath)
assert.NoError(t, err)
err = os.Remove(keyPath)
assert.NoError(t, err)
err = certManager.Reload()
assert.Error(t, err)
err = os.Remove(caCrtPath)
assert.NoError(t, err)
}
func TestLoadInvalidCert(t *testing.T) {
certManager, err := NewCertManager(nil, configDir, logSenderTest)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "no key pairs defined")
}
assert.Nil(t, certManager)
keyPairs := []TLSKeyPair{
{
Cert: "test.crt",
Key: "test.key",
ID: DefaultTLSKeyPaidID,
},
}
certManager, err = NewCertManager(keyPairs, configDir, logSenderTest)
assert.Error(t, err)
assert.Nil(t, certManager)
keyPairs = []TLSKeyPair{
{
Cert: "test.crt",
Key: "test.key",
},
}
certManager, err = NewCertManager(keyPairs, configDir, logSenderTest)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "TLS certificate without ID")
}
assert.Nil(t, certManager)
}
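
The two tests above exercise a certificate manager keyed by TLS key-pair ID, with reloadable root CAs and CRLs. Below is a minimal wiring sketch reusing the NewCertManager and GetCertificateFunc calls from the tests; the file paths and listen address are hypothetical, and "crypto/tls" and "log" are assumed imported:

keyPairs := []TLSKeyPair{
	{
		Cert: "/etc/sftpgo/server.crt", // hypothetical paths
		Key:  "/etc/sftpgo/server.key",
		ID:   DefaultTLSKeyPaidID,
	},
}
certManager, err := NewCertManager(keyPairs, configDir, logSenderTest)
if err != nil {
	log.Fatal(err)
}
tlsConfig := &tls.Config{
	// the callback resolves the certificate at handshake time, so a later
	// certManager.Reload() takes effect without recreating the listener
	GetCertificate: certManager.GetCertificateFunc(DefaultTLSKeyPaidID),
	MinVersion:     tls.VersionTLS12,
}
listener, err := tls.Listen("tcp", ":8443", tlsConfig)
if err != nil {
	log.Fatal(err)
}
defer listener.Close()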


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,23 +10,21 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"errors"
"fmt"
"io/fs"
"path"
"sync"
"sync/atomic"
"time"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/metric"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/vfs"
)
var (
@ -37,8 +35,8 @@ var (
// BaseTransfer contains protocols common transfer details for an upload or a download.
type BaseTransfer struct { //nolint:maligned
ID int64
BytesSent atomic.Int64
BytesReceived atomic.Int64
BytesSent int64
BytesReceived int64
Fs vfs.Fs
File vfs.File
Connection *BaseConnection
@ -54,11 +52,10 @@ type BaseTransfer struct { //nolint:maligned
truncatedSize int64
isNewFile bool
transferType int
AbortTransfer atomic.Bool
AbortTransfer int32
aTime time.Time
mTime time.Time
transferQuota dataprovider.TransferQuota
metadata map[string]string
sync.Mutex
errAbort error
ErrTransfer error
@ -82,14 +79,14 @@ func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPat
InitialSize: initialSize,
isNewFile: isNewFile,
requestPath: requestPath,
BytesSent: 0,
BytesReceived: 0,
MaxWriteSize: maxWriteSize,
AbortTransfer: 0,
truncatedSize: truncatedSize,
transferQuota: transferQuota,
Fs: fs,
}
t.AbortTransfer.Store(false)
t.BytesSent.Store(0)
t.BytesReceived.Store(0)
conn.AddTransfer(t)
return t
@ -118,19 +115,19 @@ func (t *BaseTransfer) GetType() int {
// GetSize returns the transferred size
func (t *BaseTransfer) GetSize() int64 {
if t.transferType == TransferDownload {
return t.BytesSent.Load()
return atomic.LoadInt64(&t.BytesSent)
}
return t.BytesReceived.Load()
return atomic.LoadInt64(&t.BytesReceived)
}
// GetDownloadedSize returns the transferred size
func (t *BaseTransfer) GetDownloadedSize() int64 {
return t.BytesSent.Load()
return atomic.LoadInt64(&t.BytesSent)
}
// GetUploadedSize returns the transferred size
func (t *BaseTransfer) GetUploadedSize() int64 {
return t.BytesReceived.Load()
return atomic.LoadInt64(&t.BytesReceived)
}
// GetStartTime returns the start time
@ -156,7 +153,7 @@ func (t *BaseTransfer) SignalClose(err error) {
t.Lock()
t.errAbort = err
t.Unlock()
t.AbortTransfer.Store(true)
atomic.StoreInt32(&(t.AbortTransfer), 1)
}
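On 2.3.x the Go 1.19 typed atomics used on main (atomic.Int64, atomic.Bool) are backported to plain integer fields accessed through the sync/atomic functions, as this and the surrounding hunks show. A minimal side-by-side sketch with hypothetical counter names, assuming "sync/atomic" is imported:

// Go 1.19+ style (main): the typed wrapper cannot be accessed non-atomically.
var sent atomic.Int64
sent.Store(0)
sent.Add(1024)
_ = sent.Load()

// Pre-1.19 style (2.3.x): a plain int64 always accessed through sync/atomic;
// a direct read like `n := sentLegacy` would race with concurrent writers.
var sentLegacy int64
atomic.StoreInt64(&sentLegacy, 0)
atomic.AddInt64(&sentLegacy, 1024)
_ = atomic.LoadInt64(&sentLegacy)

// atomic.Bool is backported the same way, as an int32 flag:
var abort int32
atomic.StoreInt32(&abort, 1) // Store(true)
aborted := atomic.LoadInt32(&abort) == 1
_ = aborted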
// GetTruncatedSize returns the truncated size if this is an upload overwriting
@ -201,46 +198,30 @@ func (t *BaseTransfer) SetTimes(fsPath string, atime time.Time, mtime time.Time)
// If atomic uploads are enabled this differs from fsPath
func (t *BaseTransfer) GetRealFsPath(fsPath string) string {
if fsPath == t.GetFsPath() {
if t.File != nil || vfs.IsLocalOsFs(t.Fs) {
return t.effectiveFsPath
if t.File != nil {
return t.File.Name()
}
return t.fsPath
}
return ""
}
// SetMetadata sets the metadata for the file
func (t *BaseTransfer) SetMetadata(val map[string]string) {
t.metadata = val
}
// SetCancelFn sets the cancel function for the transfer
func (t *BaseTransfer) SetCancelFn(cancelFn func()) {
t.cancelFn = cancelFn
}
// ConvertError accepts an error that occurs during a read or write and
// converts it into a more understandable form for the client if it is a
// well-known type of error
func (t *BaseTransfer) ConvertError(err error) error {
var pathError *fs.PathError
if errors.As(err, &pathError) {
return fmt.Errorf("%s %s: %s", pathError.Op, t.GetVirtualPath(), pathError.Err.Error())
}
return t.Connection.GetFsError(t.Fs, err)
}
// CheckRead returns an error if read is not allowed
func (t *BaseTransfer) CheckRead() error {
if t.transferQuota.AllowedDLSize == 0 && t.transferQuota.AllowedTotalSize == 0 {
return nil
}
if t.transferQuota.AllowedTotalSize > 0 {
if t.BytesSent.Load()+t.BytesReceived.Load() > t.transferQuota.AllowedTotalSize {
if atomic.LoadInt64(&t.BytesSent)+atomic.LoadInt64(&t.BytesReceived) > t.transferQuota.AllowedTotalSize {
return t.Connection.GetReadQuotaExceededError()
}
} else if t.transferQuota.AllowedDLSize > 0 {
if t.BytesSent.Load() > t.transferQuota.AllowedDLSize {
if atomic.LoadInt64(&t.BytesSent) > t.transferQuota.AllowedDLSize {
return t.Connection.GetReadQuotaExceededError()
}
}
@ -249,18 +230,18 @@ func (t *BaseTransfer) CheckRead() error {
// CheckWrite returns an error if write if not allowed
func (t *BaseTransfer) CheckWrite() error {
if t.MaxWriteSize > 0 && t.BytesReceived.Load() > t.MaxWriteSize {
if t.MaxWriteSize > 0 && atomic.LoadInt64(&t.BytesReceived) > t.MaxWriteSize {
return t.Connection.GetQuotaExceededError()
}
if t.transferQuota.AllowedULSize == 0 && t.transferQuota.AllowedTotalSize == 0 {
return nil
}
if t.transferQuota.AllowedTotalSize > 0 {
if t.BytesSent.Load()+t.BytesReceived.Load() > t.transferQuota.AllowedTotalSize {
if atomic.LoadInt64(&t.BytesSent)+atomic.LoadInt64(&t.BytesReceived) > t.transferQuota.AllowedTotalSize {
return t.Connection.GetQuotaExceededError()
}
} else if t.transferQuota.AllowedULSize > 0 {
if t.BytesReceived.Load() > t.transferQuota.AllowedULSize {
if atomic.LoadInt64(&t.BytesReceived) > t.transferQuota.AllowedULSize {
return t.Connection.GetQuotaExceededError()
}
}
@ -280,25 +261,24 @@ func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
if t.MaxWriteSize > 0 {
sizeDiff := initialSize - size
t.MaxWriteSize += sizeDiff
metric.TransferCompleted(t.BytesSent.Load(), t.BytesReceived.Load(),
t.transferType, t.ErrTransfer, vfs.IsSFTPFs(t.Fs))
metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
if t.transferQuota.HasSizeLimits() {
go func(ulSize, dlSize int64, user dataprovider.User) {
dataprovider.UpdateUserTransferQuota(&user, ulSize, dlSize, false) //nolint:errcheck
}(t.BytesReceived.Load(), t.BytesSent.Load(), t.Connection.User)
}(atomic.LoadInt64(&t.BytesReceived), atomic.LoadInt64(&t.BytesSent), t.Connection.User)
}
t.BytesReceived.Store(0)
atomic.StoreInt64(&t.BytesReceived, 0)
}
t.Unlock()
}
t.Connection.Log(logger.LevelDebug, "file %q truncated to size %v max write size %v new initial size %v err: %v",
t.Connection.Log(logger.LevelDebug, "file %#v truncated to size %v max write size %v new initial size %v err: %v",
fsPath, size, t.MaxWriteSize, t.InitialSize, err)
return initialSize, err
}
if size == 0 && t.BytesSent.Load() == 0 {
// for cloud providers the file is always truncated to zero, we don't support append/resume for uploads.
// For buffered SFTP and local fs we can have buffered bytes so we return an error
if !vfs.IsBufferedLocalOrSFTPFs(t.Fs) {
if size == 0 && atomic.LoadInt64(&t.BytesSent) == 0 {
// for cloud providers the file is always truncated to zero, we don't support append/resume for uploads
// for buffered SFTP we can have buffered bytes so we return an error
if !vfs.IsBufferedSFTPFs(t.Fs) {
return 0, nil
}
}
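Truncating an in-progress upload returns part of the write allowance: the truncated bytes no longer count against MaxWriteSize. A worked example of the adjustment above, with hypothetical numbers and "fmt" assumed imported:

initialSize := int64(100)  // file size before the truncate
size := int64(40)          // requested size
maxWriteSize := int64(500) // bytes the client may still write
sizeDiff := initialSize - size // 60 bytes freed by the truncate
maxWriteSize += sizeDiff       // allowance grows to 560
fmt.Println(maxWriteSize)      // 560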
@ -320,46 +300,24 @@ func (t *BaseTransfer) TransferError(err error) {
t.cancelFn()
}
elapsed := time.Since(t.start).Nanoseconds() / 1000000
t.Connection.Log(logger.LevelError, "Unexpected error for transfer, path: %q, error: \"%v\" bytes sent: %v, "+
"bytes received: %v transfer running since %v ms", t.fsPath, t.ErrTransfer, t.BytesSent.Load(),
t.BytesReceived.Load(), elapsed)
t.Connection.Log(logger.LevelError, "Unexpected error for transfer, path: %#v, error: \"%v\" bytes sent: %v, "+
"bytes received: %v transfer running since %v ms", t.fsPath, t.ErrTransfer, atomic.LoadInt64(&t.BytesSent),
atomic.LoadInt64(&t.BytesReceived), elapsed)
}
func (t *BaseTransfer) getUploadFileSize() (int64, int, error) {
func (t *BaseTransfer) getUploadFileSize() (int64, error) {
var fileSize int64
var deletedFiles int
switch dataprovider.GetQuotaTracking() {
case 0:
return fileSize, deletedFiles, errors.New("quota tracking disabled")
case 2:
if !t.Connection.User.HasQuotaRestrictions() {
vfolder, err := t.Connection.User.GetVirtualFolderForPath(path.Dir(t.requestPath))
if err != nil {
return fileSize, deletedFiles, errors.New("quota tracking disabled for this user")
}
if vfolder.IsIncludedInUserQuota() {
return fileSize, deletedFiles, errors.New("quota tracking disabled for this user and folder included in user quota")
}
}
}
info, err := t.Fs.Stat(t.fsPath)
if err == nil {
fileSize = info.Size()
}
if t.ErrTransfer != nil && vfs.IsCryptOsFs(t.Fs) {
if vfs.IsCryptOsFs(t.Fs) && t.ErrTransfer != nil {
errDelete := t.Fs.Remove(t.fsPath, false)
if errDelete != nil {
t.Connection.Log(logger.LevelWarn, "error removing partial crypto file %q: %v", t.fsPath, errDelete)
} else {
fileSize = 0
deletedFiles = 1
t.BytesReceived.Store(0)
t.MinWriteOffset = 0
t.Connection.Log(logger.LevelWarn, "error removing partial crypto file %#v: %v", t.fsPath, errDelete)
}
}
return fileSize, deletedFiles, err
return fileSize, err
}
// return 1 if the file is outside the user home dir
@ -367,17 +325,14 @@ func (t *BaseTransfer) checkUploadOutsideHomeDir(err error) int {
if err == nil {
return 0
}
if t.ErrTransfer == nil {
t.ErrTransfer = err
}
if Config.TempPath == "" {
return 0
}
err = t.Fs.Remove(t.effectiveFsPath, false)
t.Connection.Log(logger.LevelWarn, "upload in temp path cannot be renamed, delete temporary file: %q, deletion error: %v",
t.Connection.Log(logger.LevelWarn, "upload in temp path cannot be renamed, delete temporary file: %#v, deletion error: %v",
t.effectiveFsPath, err)
// the file is outside the home dir so don't update the quota
t.BytesReceived.Store(0)
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
return 1
}
@ -391,153 +346,93 @@ func (t *BaseTransfer) Close() error {
defer t.Connection.RemoveTransfer(t)
var err error
numFiles := t.getUploadedFiles()
metric.TransferCompleted(t.BytesSent.Load(), t.BytesReceived.Load(),
t.transferType, t.ErrTransfer, vfs.IsSFTPFs(t.Fs))
if t.transferQuota.HasSizeLimits() {
dataprovider.UpdateUserTransferQuota(&t.Connection.User, t.BytesReceived.Load(), //nolint:errcheck
t.BytesSent.Load(), false)
numFiles := 0
if t.isNewFile {
numFiles = 1
}
if (t.File != nil || vfs.IsLocalOsFs(t.Fs)) && t.Connection.IsQuotaExceededError(t.ErrTransfer) {
metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived),
t.transferType, t.ErrTransfer)
if t.transferQuota.HasSizeLimits() {
dataprovider.UpdateUserTransferQuota(&t.Connection.User, atomic.LoadInt64(&t.BytesReceived), //nolint:errcheck
atomic.LoadInt64(&t.BytesSent), false)
}
if t.File != nil && t.Connection.IsQuotaExceededError(t.ErrTransfer) {
// if quota is exceeded we try to remove the partial file for uploads to local filesystem
err = t.Fs.Remove(t.effectiveFsPath, false)
err = t.Fs.Remove(t.File.Name(), false)
if err == nil {
t.BytesReceived.Store(0)
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
}
t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %q, deletion error: %v",
t.effectiveFsPath, err)
} else if t.isAtomicUpload() {
if t.ErrTransfer == nil || Config.UploadMode&UploadModeAtomicWithResume != 0 {
_, _, err = t.Fs.Rename(t.effectiveFsPath, t.fsPath, 0)
t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %q -> %q, error: %v",
t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %#v, deletion error: %v",
t.File.Name(), err)
} else if t.transferType == TransferUpload && t.effectiveFsPath != t.fsPath {
if t.ErrTransfer == nil || Config.UploadMode == UploadModeAtomicWithResume {
err = t.Fs.Rename(t.effectiveFsPath, t.fsPath)
t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
t.effectiveFsPath, t.fsPath, err)
// the file must be removed if it is uploaded to a path outside the home dir and cannot be renamed
t.checkUploadOutsideHomeDir(err)
numFiles -= t.checkUploadOutsideHomeDir(err)
} else {
err = t.Fs.Remove(t.effectiveFsPath, false)
t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %q, deletion error: %v",
t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, deletion error: %v",
t.ErrTransfer, t.effectiveFsPath, err)
if err == nil {
t.BytesReceived.Store(0)
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
}
}
}
elapsed := time.Since(t.start).Nanoseconds() / 1000000
var uploadFileSize int64
if t.transferType == TransferDownload {
logger.TransferLog(downloadLogSender, t.fsPath, elapsed, t.BytesSent.Load(), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode,
t.ErrTransfer)
ExecuteActionNotification(t.Connection, operationDownload, t.fsPath, t.requestPath, "", "", "", //nolint:errcheck
t.BytesSent.Load(), t.ErrTransfer, elapsed, t.metadata)
logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
ExecuteActionNotification(t.Connection, operationDownload, t.fsPath, t.requestPath, "", "", "",
atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
} else {
statSize, deletedFiles, errStat := t.getUploadFileSize()
if errStat == nil {
uploadFileSize = statSize
} else {
uploadFileSize = t.BytesReceived.Load() + t.MinWriteOffset
if t.Fs.IsNotExist(errStat) {
uploadFileSize = 0
numFiles--
}
fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
if statSize, errStat := t.getUploadFileSize(); errStat == nil {
fileSize = statSize
}
numFiles -= deletedFiles
t.Connection.Log(logger.LevelDebug, "upload file size %d, num files %d, deleted files %d, fs path %q",
uploadFileSize, numFiles, deletedFiles, t.fsPath)
numFiles, uploadFileSize = t.executeUploadHook(numFiles, uploadFileSize, elapsed)
t.updateQuota(numFiles, uploadFileSize)
t.Connection.Log(logger.LevelDebug, "uploaded file size %v", fileSize)
t.updateQuota(numFiles, fileSize)
t.updateTimes()
logger.TransferLog(uploadLogSender, t.fsPath, elapsed, t.BytesReceived.Load(), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode,
t.ErrTransfer)
logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
ExecuteActionNotification(t.Connection, operationUpload, t.fsPath, t.requestPath, "", "", "", fileSize, t.ErrTransfer)
}
if t.ErrTransfer != nil {
t.Connection.Log(logger.LevelError, "transfer error: %v, path: %q", t.ErrTransfer, t.fsPath)
t.Connection.Log(logger.LevelError, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
if err == nil {
err = t.ErrTransfer
}
}
t.updateTransferTimestamps(uploadFileSize, elapsed)
return err
}
func (t *BaseTransfer) isAtomicUpload() bool {
return t.transferType == TransferUpload && t.effectiveFsPath != t.fsPath
}
func (t *BaseTransfer) updateTransferTimestamps(uploadFileSize, elapsed int64) {
if t.ErrTransfer != nil {
return
}
if t.transferType == TransferUpload {
if t.Connection.User.FirstUpload == 0 && !t.Connection.uploadDone.Load() {
if err := dataprovider.UpdateUserTransferTimestamps(t.Connection.User.Username, true); err == nil {
t.Connection.uploadDone.Store(true)
ExecuteActionNotification(t.Connection, operationFirstUpload, t.fsPath, t.requestPath, "", //nolint:errcheck
"", "", uploadFileSize, t.ErrTransfer, elapsed, t.metadata)
}
}
return
}
if t.Connection.User.FirstDownload == 0 && !t.Connection.downloadDone.Load() && t.BytesSent.Load() > 0 {
if err := dataprovider.UpdateUserTransferTimestamps(t.Connection.User.Username, false); err == nil {
t.Connection.downloadDone.Store(true)
ExecuteActionNotification(t.Connection, operationFirstDownload, t.fsPath, t.requestPath, "", //nolint:errcheck
"", "", t.BytesSent.Load(), t.ErrTransfer, elapsed, t.metadata)
}
}
}
func (t *BaseTransfer) executeUploadHook(numFiles int, fileSize, elapsed int64) (int, int64) {
err := ExecuteActionNotification(t.Connection, operationUpload, t.fsPath, t.requestPath, "", "", "",
fileSize, t.ErrTransfer, elapsed, t.metadata)
if err != nil {
if t.ErrTransfer == nil {
t.ErrTransfer = err
}
// try to remove the uploaded file
err = t.Fs.Remove(t.fsPath, false)
if err == nil {
numFiles--
fileSize = 0
t.BytesReceived.Store(0)
t.MinWriteOffset = 0
} else {
t.Connection.Log(logger.LevelWarn, "unable to remove path %q after upload hook failure: %v", t.fsPath, err)
}
}
return numFiles, fileSize
}
func (t *BaseTransfer) getUploadedFiles() int {
numFiles := 0
if t.isNewFile {
numFiles = 1
}
return numFiles
}
func (t *BaseTransfer) updateTimes() {
if !t.aTime.IsZero() && !t.mTime.IsZero() {
err := t.Fs.Chtimes(t.fsPath, t.aTime, t.mTime, false)
t.Connection.Log(logger.LevelDebug, "set times for file %q, atime: %v, mtime: %v, err: %v",
err := t.Fs.Chtimes(t.fsPath, t.aTime, t.mTime, true)
t.Connection.Log(logger.LevelDebug, "set times for file %#v, atime: %v, mtime: %v, err: %v",
t.fsPath, t.aTime, t.mTime, err)
}
}
func (t *BaseTransfer) updateQuota(numFiles int, fileSize int64) bool {
// Uploads on some filesystems (S3 and similar) are atomic: if there is an error nothing is uploaded
if t.File == nil && t.ErrTransfer != nil && vfs.HasImplicitAtomicUploads(t.Fs) {
// S3 uploads are atomic: if there is an error nothing is uploaded
if t.File == nil && t.ErrTransfer != nil && !t.Connection.User.HasBufferedSFTP(t.GetVirtualPath()) {
return false
}
sizeDiff := fileSize - t.InitialSize
if t.transferType == TransferUpload && (numFiles != 0 || sizeDiff != 0) {
if t.transferType == TransferUpload && (numFiles != 0 || sizeDiff > 0) {
vfolder, err := t.Connection.User.GetVirtualFolderForPath(path.Dir(t.requestPath))
if err == nil {
dataprovider.UpdateUserFolderQuota(&vfolder, &t.Connection.User, numFiles,
dataprovider.UpdateVirtualFolderQuota(&vfolder.BaseVirtualFolder, numFiles, //nolint:errcheck
sizeDiff, false)
if vfolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
}
} else {
dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
}
@ -552,10 +447,10 @@ func (t *BaseTransfer) HandleThrottle() {
var trasferredBytes int64
if t.transferType == TransferDownload {
wantedBandwidth = t.Connection.User.DownloadBandwidth
trasferredBytes = t.BytesSent.Load()
trasferredBytes = atomic.LoadInt64(&t.BytesSent)
} else {
wantedBandwidth = t.Connection.User.UploadBandwidth
trasferredBytes = t.BytesReceived.Load()
trasferredBytes = atomic.LoadInt64(&t.BytesReceived)
}
if wantedBandwidth > 0 {
// real and wanted elapsed as milliseconds, bytes as kilobytes
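The comment above summarizes the throttling arithmetic, the same formula the tests in the next file use: wanted elapsed milliseconds = 1000 * (transferred bytes / 1024) / bandwidth, with bandwidth in KB/s. A worked sketch of the idea; the sleep call is illustrative rather than the exact implementation, and "time" is assumed imported:

transferred := int64(131072) // bytes moved so far (128 KB)
bandwidth := int64(64)       // allowed bandwidth in KB/s
realElapsed := int64(500)    // milliseconds since the transfer started
wantedElapsed := 1000 * (transferred / 1024) / bandwidth // 2000 ms at 64 KB/s
if wantedElapsed > realElapsed {
	// ahead of schedule: pause to respect the bandwidth limit
	time.Sleep(time.Duration(wantedElapsed-realElapsed) * time.Millisecond)
}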


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,38 +10,37 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"errors"
"fmt"
"os"
"path/filepath"
"testing"
"time"
"github.com/pkg/sftp"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/kms"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/vfs"
)
func TestTransferUpdateQuota(t *testing.T) {
conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
Fs: vfs.NewOsFs("", os.TempDir(), "", nil),
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), ""),
}
transfer.BytesReceived.Store(123)
errFake := errors.New("fake error")
transfer.TransferError(errFake)
assert.False(t, transfer.updateQuota(1, 0))
err := transfer.Close()
if assert.Error(t, err) {
assert.EqualError(t, err, errFake.Error())
@ -57,15 +56,11 @@ func TestTransferUpdateQuota(t *testing.T) {
QuotaSize: -1,
})
transfer.ErrTransfer = nil
transfer.BytesReceived.Store(1)
transfer.BytesReceived = 1
transfer.requestPath = "/vdir/file"
assert.True(t, transfer.updateQuota(1, 0))
err = transfer.Close()
assert.NoError(t, err)
transfer.ErrTransfer = errFake
transfer.Fs = newMockOsFs(true, "", "", "S3Fs fake", nil)
assert.False(t, transfer.updateQuota(1, 0))
}
func TestTransferThrottling(t *testing.T) {
@ -76,7 +71,7 @@ func TestTransferThrottling(t *testing.T) {
DownloadBandwidth: 40,
},
}
fs := vfs.NewOsFs("", os.TempDir(), "", nil)
fs := vfs.NewOsFs("", os.TempDir(), "")
testFileSize := int64(131072)
wantedUploadElapsed := 1000 * (testFileSize / 1024) / u.UploadBandwidth
wantedDownloadElapsed := 1000 * (testFileSize / 1024) / u.DownloadBandwidth
@ -85,7 +80,7 @@ func TestTransferThrottling(t *testing.T) {
wantedDownloadElapsed -= wantedDownloadElapsed / 10
conn := NewBaseConnection("id", ProtocolSCP, "", "", u)
transfer := NewBaseTransfer(nil, conn, nil, "", "", "", TransferUpload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
transfer.BytesReceived.Store(testFileSize)
transfer.BytesReceived = testFileSize
transfer.Connection.UpdateLastActivity()
startTime := transfer.Connection.GetLastActivity()
transfer.HandleThrottle()
@ -95,7 +90,7 @@ func TestTransferThrottling(t *testing.T) {
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, "", "", "", TransferDownload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
transfer.BytesSent.Store(testFileSize)
transfer.BytesSent = testFileSize
transfer.Connection.UpdateLastActivity()
startTime = transfer.Connection.GetLastActivity()
@ -108,7 +103,7 @@ func TestTransferThrottling(t *testing.T) {
func TestRealPath(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "afile.txt")
fs := vfs.NewOsFs("123", os.TempDir(), "", nil)
fs := vfs.NewOsFs("123", os.TempDir(), "")
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user",
@ -142,7 +137,7 @@ func TestRealPath(t *testing.T) {
func TestTruncate(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("123", os.TempDir(), "", nil)
fs := vfs.NewOsFs("123", os.TempDir(), "")
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user",
@ -211,7 +206,7 @@ func TestTransferErrors(t *testing.T) {
isCancelled = true
}
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("id", os.TempDir(), "", nil)
fs := vfs.NewOsFs("id", os.TempDir(), "")
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "test",
@ -227,23 +222,11 @@ func TestTransferErrors(t *testing.T) {
conn := NewBaseConnection("id", ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload,
0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
pathError := &os.PathError{
Op: "test",
Path: testFile,
Err: os.ErrInvalid,
}
err = transfer.ConvertError(pathError)
assert.EqualError(t, err, fmt.Sprintf("%s %s: %s", pathError.Op, "/transfer_test_file", pathError.Err.Error()))
err = transfer.ConvertError(os.ErrNotExist)
assert.ErrorIs(t, err, sftp.ErrSSHFxNoSuchFile)
err = transfer.ConvertError(os.ErrPermission)
assert.ErrorIs(t, err, sftp.ErrSSHFxPermissionDenied)
assert.Nil(t, transfer.cancelFn)
assert.Equal(t, testFile, transfer.GetFsPath())
transfer.SetMetadata(map[string]string{"key": "val"})
transfer.SetCancelFn(cancelFn)
errFake := errors.New("err fake")
transfer.BytesReceived.Store(9)
transfer.BytesReceived = 9
transfer.TransferError(ErrQuotaExceeded)
assert.True(t, isCancelled)
transfer.TransferError(errFake)
@ -266,7 +249,7 @@ func TestTransferErrors(t *testing.T) {
fsPath := filepath.Join(os.TempDir(), "test_file")
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, 0, true,
fs, dataprovider.TransferQuota{})
transfer.BytesReceived.Store(9)
transfer.BytesReceived = 9
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, errFake.Error())
// the file is closed by the embedding struct before calling Close
@ -286,7 +269,7 @@ func TestTransferErrors(t *testing.T) {
}
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, 0, true,
fs, dataprovider.TransferQuota{})
transfer.BytesReceived.Store(9)
transfer.BytesReceived = 9
// the file is closed by the embedding struct before calling Close
err = file.Close()
assert.NoError(t, err)
@ -306,37 +289,32 @@ func TestRemovePartialCryptoFile(t *testing.T) {
require.NoError(t, err)
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "test",
HomeDir: os.TempDir(),
QuotaFiles: 1000000,
Username: "test",
HomeDir: os.TempDir(),
},
}
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(nil, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload,
0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
transfer.ErrTransfer = errors.New("test error")
_, _, err = transfer.getUploadFileSize()
_, err = transfer.getUploadFileSize()
assert.Error(t, err)
err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
size, deletedFiles, err := transfer.getUploadFileSize()
size, err := transfer.getUploadFileSize()
assert.NoError(t, err)
assert.Equal(t, int64(0), size)
assert.Equal(t, 1, deletedFiles)
assert.Equal(t, int64(9), size)
assert.NoFileExists(t, testFile)
err = transfer.Close()
assert.Error(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}
func TestFTPMode(t *testing.T) {
conn := NewBaseConnection("", ProtocolFTP, "", "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
Fs: vfs.NewOsFs("", os.TempDir(), "", nil),
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), ""),
}
transfer.BytesReceived.Store(123)
assert.Empty(t, transfer.ftpMode)
transfer.SetFtpMode("active")
assert.Equal(t, "active", transfer.ftpMode)
@ -345,22 +323,41 @@ func TestFTPMode(t *testing.T) {
func TestTransferQuota(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
TotalDataTransfer: 3,
UploadDataTransfer: 2,
DownloadDataTransfer: 1,
TotalDataTransfer: -1,
UploadDataTransfer: -1,
DownloadDataTransfer: -1,
},
}
ul, dl, total := user.GetDataTransferLimits()
assert.Equal(t, int64(2*1048576), ul)
assert.Equal(t, int64(1*1048576), dl)
assert.Equal(t, int64(3*1048576), total)
user.TotalDataTransfer = -1
user.UploadDataTransfer = -1
user.DownloadDataTransfer = -1
ul, dl, total = user.GetDataTransferLimits()
user.Filters.DataTransferLimits = []sdk.DataTransferLimit{
{
Sources: []string{"127.0.0.1/32", "192.168.1.0/24"},
TotalDataTransfer: 100,
UploadDataTransfer: 0,
DownloadDataTransfer: 0,
},
{
Sources: []string{"172.16.0.0/24"},
TotalDataTransfer: 0,
UploadDataTransfer: 120,
DownloadDataTransfer: 150,
},
}
ul, dl, total := user.GetDataTransferLimits("127.0.1.1")
assert.Equal(t, int64(0), ul)
assert.Equal(t, int64(0), dl)
assert.Equal(t, int64(0), total)
ul, dl, total = user.GetDataTransferLimits("127.0.0.1")
assert.Equal(t, int64(0), ul)
assert.Equal(t, int64(0), dl)
assert.Equal(t, int64(100*1048576), total)
ul, dl, total = user.GetDataTransferLimits("192.168.1.4")
assert.Equal(t, int64(0), ul)
assert.Equal(t, int64(0), dl)
assert.Equal(t, int64(100*1048576), total)
ul, dl, total = user.GetDataTransferLimits("172.16.0.2")
assert.Equal(t, int64(120*1048576), ul)
assert.Equal(t, int64(150*1048576), dl)
assert.Equal(t, int64(0), total)
transferQuota := dataprovider.TransferQuota{}
assert.True(t, transferQuota.HasDownloadSpace())
assert.True(t, transferQuota.HasUploadSpace())
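The assertions above pin down the per-source resolution rule added on 2.3.x: GetDataTransferLimits(ip) applies the first DataTransferLimit whose Sources contain the client IP, with values expressed in MB (hence the *1048576), and falls back to the user-level fields otherwise. A minimal sketch of that matching idea with a hypothetical helper, not the real implementation; "net" is assumed imported:

func matchTransferLimit(clientIP string, limits []sdk.DataTransferLimit) (sdk.DataTransferLimit, bool) {
	ip := net.ParseIP(clientIP)
	for _, limit := range limits {
		for _, source := range limit.Sources {
			if _, cidr, err := net.ParseCIDR(source); err == nil && cidr.Contains(ip) {
				return limit, true
			}
		}
	}
	return sdk.DataTransferLimit{}, false // caller falls back to user-level limits
}

So "192.168.1.4" matches "192.168.1.0/24" and inherits the 100 MB total limit, while "127.0.1.1" matches no source list and keeps the unlimited user-level settings, exactly as the assertions check.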
@ -393,7 +390,7 @@ func TestTransferQuota(t *testing.T) {
conn := NewBaseConnection("", ProtocolSFTP, "", "", user)
transfer := NewBaseTransfer(nil, conn, nil, "file.txt", "file.txt", "/transfer_test_file", TransferUpload,
0, 0, 0, 0, true, vfs.NewOsFs("", os.TempDir(), "", nil), dataprovider.TransferQuota{})
0, 0, 0, 0, true, vfs.NewOsFs("", os.TempDir(), ""), dataprovider.TransferQuota{})
err := transfer.CheckRead()
assert.NoError(t, err)
err = transfer.CheckWrite()
@ -402,14 +399,14 @@ func TestTransferQuota(t *testing.T) {
transfer.transferQuota = dataprovider.TransferQuota{
AllowedTotalSize: 10,
}
transfer.BytesReceived.Store(5)
transfer.BytesSent.Store(4)
transfer.BytesReceived = 5
transfer.BytesSent = 4
err = transfer.CheckRead()
assert.NoError(t, err)
err = transfer.CheckWrite()
assert.NoError(t, err)
transfer.BytesSent.Store(6)
transfer.BytesSent = 6
err = transfer.CheckRead()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
@ -431,18 +428,13 @@ func TestTransferQuota(t *testing.T) {
err = transfer.CheckWrite()
assert.NoError(t, err)
transfer.BytesReceived.Store(11)
transfer.BytesReceived = 11
err = transfer.CheckRead()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
}
err = transfer.CheckWrite()
assert.True(t, conn.IsQuotaExceededError(err))
err = transfer.Close()
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
assert.Equal(t, int32(0), Connections.GetTotalTransfers())
}
func TestUploadOutsideHomeRenameError(t *testing.T) {
@ -450,11 +442,11 @@ func TestUploadOutsideHomeRenameError(t *testing.T) {
conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
Fs: vfs.NewOsFs("", filepath.Join(os.TempDir(), "home"), "", nil),
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", filepath.Join(os.TempDir(), "home"), ""),
}
transfer.BytesReceived.Store(123)
fileName := filepath.Join(os.TempDir(), "_temp")
err := os.WriteFile(fileName, []byte(`data`), 0644)
@ -467,10 +459,10 @@ func TestUploadOutsideHomeRenameError(t *testing.T) {
Config.TempPath = filepath.Clean(os.TempDir())
res = transfer.checkUploadOutsideHomeDir(nil)
assert.Equal(t, 0, res)
assert.Greater(t, transfer.BytesReceived.Load(), int64(0))
assert.Greater(t, transfer.BytesReceived, int64(0))
res = transfer.checkUploadOutsideHomeDir(os.ErrPermission)
assert.Equal(t, 1, res)
assert.Equal(t, int64(0), transfer.BytesReceived.Load())
assert.Equal(t, int64(0), transfer.BytesReceived)
assert.NoFileExists(t, fileName)
Config.TempPath = oldTempPath


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
@ -19,9 +19,9 @@ import (
"sync"
"time"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
type overquotaTransfer struct {
@ -61,7 +61,7 @@ type baseTransferChecker struct {
func (t *baseTransferChecker) isDataTransferExceeded(user dataprovider.User, transfer dataprovider.ActiveTransfer, ulSize,
dlSize int64,
) bool {
ulQuota, dlQuota, totalQuota := user.GetDataTransferLimits()
ulQuota, dlQuota, totalQuota := user.GetDataTransferLimits(transfer.IP)
if totalQuota > 0 {
allowedSize := totalQuota - (user.UsedUploadDataTransfer + user.UsedDownloadDataTransfer)
if ulSize+dlSize > allowedSize {
@ -200,7 +200,7 @@ func (t *baseTransferChecker) getOverquotaTransfers(usersToFetch map[string]bool
// file will be successful
usedDiskQuota += tr.CurrentULSize - tr.TruncatedSize
}
logger.Debug(logSender, "", "username %q, folder %q, concurrent transfers: %v, remaining disk quota (bytes): %v, disk quota used in ongoing transfers (bytes): %v",
logger.Debug(logSender, "", "username %#v, folder %#v, concurrent transfers: %v, remaining disk quota (bytes): %v, disk quota used in ongoing transfers (bytes): %v",
username, folderName, len(transfers), remaningDiskQuota, usedDiskQuota)
if usedDiskQuota > remaningDiskQuota {
for _, tr := range transfers {
@ -221,7 +221,7 @@ func (t *baseTransferChecker) getOverquotaTransfers(usersToFetch map[string]bool
ulSize += tr.CurrentULSize
dlSize += tr.CurrentDLSize
}
logger.Debug(logSender, "", "username %q, concurrent transfers: %v, quota (bytes) used in ongoing transfers, ul: %v, dl: %v",
logger.Debug(logSender, "", "username %#v, concurrent transfers: %v, quota (bytes) used in ongoing transfers, ul: %v, dl: %v",
username, len(transfers), ulSize, dlSize)
for _, tr := range transfers {
if t.isDataTransferExceeded(usersMap[username], tr, ulSize, dlSize) {


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
@ -21,6 +21,7 @@ import (
"path/filepath"
"strconv"
"strings"
"sync/atomic"
"testing"
"time"
@ -28,9 +29,9 @@ import (
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
func TestTransfersCheckerDiskQuota(t *testing.T) {
@ -48,10 +49,6 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
},
},
}
folder := vfs.BaseVirtualFolder{
Name: folderName,
MappedPath: filepath.Join(os.TempDir(), folderName),
}
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
@ -66,7 +63,8 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
VirtualFolders: []vfs.VirtualFolder{
{
BaseVirtualFolder: vfs.BaseVirtualFolder{
Name: folderName,
Name: folderName,
MappedPath: filepath.Join(os.TempDir(), folderName),
},
VirtualPath: vdirPath,
QuotaSize: 100,
@ -79,16 +77,14 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
},
},
}
err := dataprovider.AddGroup(&group, "", "", "")
err := dataprovider.AddGroup(&group, "", "")
assert.NoError(t, err)
group, err = dataprovider.GroupExists(groupName)
assert.NoError(t, err)
err = dataprovider.AddFolder(&folder, "", "", "")
assert.NoError(t, err)
assert.Equal(t, int64(120), group.UserSettings.QuotaSize)
err = dataprovider.AddUser(&user, "", "", "")
err = dataprovider.AddUser(&user, "", "")
assert.NoError(t, err)
user, err = dataprovider.GetUserWithGroupSettings(username, "")
user, err = dataprovider.GetUserWithGroupSettings(username)
assert.NoError(t, err)
connID1 := xid.New().String()
@ -100,7 +96,7 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
}
transfer1 := NewBaseTransfer(nil, conn1, nil, filepath.Join(user.HomeDir, "file1"), filepath.Join(user.HomeDir, "file1"),
"/file1", TransferUpload, 0, 0, 120, 0, true, fsUser, dataprovider.TransferQuota{})
transfer1.BytesReceived.Store(150)
transfer1.BytesReceived = 150
err = Connections.Add(fakeConn1)
assert.NoError(t, err)
// the transferschecker will do nothing if there is only one ongoing transfer
@ -114,8 +110,8 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
}
transfer2 := NewBaseTransfer(nil, conn2, nil, filepath.Join(user.HomeDir, "file2"), filepath.Join(user.HomeDir, "file2"),
"/file2", TransferUpload, 0, 0, 120, 40, true, fsUser, dataprovider.TransferQuota{})
transfer1.BytesReceived.Store(50)
transfer2.BytesReceived.Store(60)
transfer1.BytesReceived = 50
transfer2.BytesReceived = 60
err = Connections.Add(fakeConn2)
assert.NoError(t, err)
@ -126,7 +122,7 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
}
transfer3 := NewBaseTransfer(nil, conn3, nil, filepath.Join(user.HomeDir, "file3"), filepath.Join(user.HomeDir, "file3"),
"/file3", TransferDownload, 0, 0, 120, 0, true, fsUser, dataprovider.TransferQuota{})
transfer3.BytesReceived.Store(60) // this value will be ignored, as this is a download
transfer3.BytesReceived = 60 // this value will be ignored, as this is a download
err = Connections.Add(fakeConn3)
assert.NoError(t, err)
@ -136,20 +132,20 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
transfer1.BytesReceived.Store(80) // truncated size will be subtracted, so we are not overquota
transfer1.BytesReceived = 80 // truncated size will be subtracted, so we are not overquota
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
transfer1.BytesReceived.Store(120)
transfer1.BytesReceived = 120
// we are now overquota
// if another check is in progress nothing is done
Connections.transfersCheckStatus.Store(true)
atomic.StoreInt32(&Connections.transfersCheckStatus, 1)
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
Connections.transfersCheckStatus.Store(false)
atomic.StoreInt32(&Connections.transfersCheckStatus, 0)
Connections.checkTransfers()
assert.True(t, conn1.IsQuotaExceededError(transfer1.errAbort), transfer1.errAbort)
@ -159,7 +155,7 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
assert.True(t, conn3.IsQuotaExceededError(transfer3.GetAbortError()))
// update the user quota size
group.UserSettings.QuotaSize = 1000
err = dataprovider.UpdateGroup(&group, []string{username}, "", "", "")
err = dataprovider.UpdateGroup(&group, []string{username}, "", "")
assert.NoError(t, err)
transfer1.errAbort = nil
transfer2.errAbort = nil
@ -169,15 +165,15 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
assert.Nil(t, transfer3.errAbort)
group.UserSettings.QuotaSize = 0
err = dataprovider.UpdateGroup(&group, []string{username}, "", "", "")
err = dataprovider.UpdateGroup(&group, []string{username}, "", "")
assert.NoError(t, err)
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
// now check a public folder
transfer1.BytesReceived.Store(0)
transfer2.BytesReceived.Store(0)
transfer1.BytesReceived = 0
transfer2.BytesReceived = 0
connID4 := xid.New().String()
fsFolder, err := user.GetFilesystemForPath(path.Join(vdirPath, "/file1"), connID4)
assert.NoError(t, err)
@ -201,12 +197,12 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
err = Connections.Add(fakeConn5)
assert.NoError(t, err)
transfer4.BytesReceived.Store(50)
transfer5.BytesReceived.Store(40)
transfer4.BytesReceived = 50
transfer5.BytesReceived = 40
Connections.checkTransfers()
assert.Nil(t, transfer4.errAbort)
assert.Nil(t, transfer5.errAbort)
transfer5.BytesReceived.Store(60)
transfer5.BytesReceived = 60
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
@ -248,20 +244,19 @@ func TestTransfersCheckerDiskQuota(t *testing.T) {
Connections.Remove(fakeConn3.GetID())
Connections.Remove(fakeConn4.GetID())
Connections.Remove(fakeConn5.GetID())
stats := Connections.GetStats("")
stats := Connections.GetStats()
assert.Len(t, stats, 0)
assert.Equal(t, int32(0), Connections.GetTotalTransfers())
err = dataprovider.DeleteUser(user.Username, "", "", "")
err = dataprovider.DeleteUser(user.Username, "", "")
assert.NoError(t, err)
err = os.RemoveAll(user.GetHomeDir())
assert.NoError(t, err)
err = dataprovider.DeleteFolder(folderName, "", "", "")
err = dataprovider.DeleteFolder(folderName, "", "")
assert.NoError(t, err)
err = os.RemoveAll(filepath.Join(os.TempDir(), folderName))
assert.NoError(t, err)
err = dataprovider.DeleteGroup(groupName, "", "", "")
err = dataprovider.DeleteGroup(groupName, "", "")
assert.NoError(t, err)
}
@ -279,7 +274,7 @@ func TestTransferCheckerTransferQuota(t *testing.T) {
},
},
}
err := dataprovider.AddUser(&user, "", "", "")
err := dataprovider.AddUser(&user, "", "")
assert.NoError(t, err)
connID1 := xid.New().String()
@ -291,7 +286,7 @@ func TestTransferCheckerTransferQuota(t *testing.T) {
}
transfer1 := NewBaseTransfer(nil, conn1, nil, filepath.Join(user.HomeDir, "file1"), filepath.Join(user.HomeDir, "file1"),
"/file1", TransferUpload, 0, 0, 0, 0, true, fsUser, dataprovider.TransferQuota{AllowedTotalSize: 100})
transfer1.BytesReceived.Store(150)
transfer1.BytesReceived = 150
err = Connections.Add(fakeConn1)
assert.NoError(t, err)
// the transferschecker will do nothing if there is only one ongoing transfer
@ -305,26 +300,26 @@ func TestTransferCheckerTransferQuota(t *testing.T) {
}
transfer2 := NewBaseTransfer(nil, conn2, nil, filepath.Join(user.HomeDir, "file2"), filepath.Join(user.HomeDir, "file2"),
"/file2", TransferUpload, 0, 0, 0, 0, true, fsUser, dataprovider.TransferQuota{AllowedTotalSize: 100})
transfer2.BytesReceived.Store(150)
transfer2.BytesReceived = 150
err = Connections.Add(fakeConn2)
assert.NoError(t, err)
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
// now test overquota
transfer1.BytesReceived.Store(1024*1024 + 1)
transfer2.BytesReceived.Store(0)
transfer1.BytesReceived = 1024*1024 + 1
transfer2.BytesReceived = 0
Connections.checkTransfers()
assert.True(t, conn1.IsQuotaExceededError(transfer1.errAbort), transfer1.errAbort)
assert.True(t, conn1.IsQuotaExceededError(transfer1.errAbort))
assert.Nil(t, transfer2.errAbort)
transfer1.errAbort = nil
transfer1.BytesReceived.Store(1024*1024 + 1)
transfer2.BytesReceived.Store(1024)
transfer1.BytesReceived = 1024*1024 + 1
transfer2.BytesReceived = 1024
Connections.checkTransfers()
assert.True(t, conn1.IsQuotaExceededError(transfer1.errAbort))
assert.True(t, conn2.IsQuotaExceededError(transfer2.errAbort))
transfer1.BytesReceived.Store(0)
transfer2.BytesReceived.Store(0)
transfer1.BytesReceived = 0
transfer2.BytesReceived = 0
transfer1.errAbort = nil
transfer2.errAbort = nil
@ -342,7 +337,7 @@ func TestTransferCheckerTransferQuota(t *testing.T) {
}
transfer3 := NewBaseTransfer(nil, conn3, nil, filepath.Join(user.HomeDir, "file1"), filepath.Join(user.HomeDir, "file1"),
"/file1", TransferDownload, 0, 0, 0, 0, true, fsUser, dataprovider.TransferQuota{AllowedDLSize: 100})
transfer3.BytesSent.Store(150)
transfer3.BytesSent = 150
err = Connections.Add(fakeConn3)
assert.NoError(t, err)
@ -353,15 +348,15 @@ func TestTransferCheckerTransferQuota(t *testing.T) {
}
transfer4 := NewBaseTransfer(nil, conn4, nil, filepath.Join(user.HomeDir, "file2"), filepath.Join(user.HomeDir, "file2"),
"/file2", TransferDownload, 0, 0, 0, 0, true, fsUser, dataprovider.TransferQuota{AllowedDLSize: 100})
transfer4.BytesSent.Store(150)
transfer4.BytesSent = 150
err = Connections.Add(fakeConn4)
assert.NoError(t, err)
Connections.checkTransfers()
assert.Nil(t, transfer3.errAbort)
assert.Nil(t, transfer4.errAbort)
transfer3.BytesSent.Store(512 * 1024)
transfer4.BytesSent.Store(512*1024 + 1)
transfer3.BytesSent = 512 * 1024
transfer4.BytesSent = 512*1024 + 1
Connections.checkTransfers()
if assert.Error(t, transfer3.errAbort) {
assert.Contains(t, transfer3.errAbort.Error(), ErrReadQuotaExceeded.Error())
@ -369,18 +364,13 @@ func TestTransferCheckerTransferQuota(t *testing.T) {
if assert.Error(t, transfer4.errAbort) {
assert.Contains(t, transfer4.errAbort.Error(), ErrReadQuotaExceeded.Error())
}
err = transfer3.Close()
assert.NoError(t, err)
err = transfer4.Close()
assert.NoError(t, err)
Connections.Remove(fakeConn3.GetID())
Connections.Remove(fakeConn4.GetID())
stats := Connections.GetStats("")
stats := Connections.GetStats()
assert.Len(t, stats, 0)
assert.Equal(t, int32(0), Connections.GetTotalTransfers())
err = dataprovider.DeleteUser(user.Username, "", "", "")
err = dataprovider.DeleteUser(user.Username, "", "")
assert.NoError(t, err)
err = os.RemoveAll(user.GetHomeDir())
assert.NoError(t, err)
@ -603,7 +593,7 @@ func TestDataTransferExceeded(t *testing.T) {
func TestGetUsersForQuotaCheck(t *testing.T) {
usersToFetch := make(map[string]bool)
for i := 0; i < 70; i++ {
for i := 0; i < 50; i++ {
usersToFetch[fmt.Sprintf("user%v", i)] = i%2 == 0
}
@ -611,11 +601,7 @@ func TestGetUsersForQuotaCheck(t *testing.T) {
assert.NoError(t, err)
assert.Len(t, users, 0)
for i := 0; i < 60; i++ {
folder := vfs.BaseVirtualFolder{
Name: fmt.Sprintf("f%v", i),
MappedPath: filepath.Join(os.TempDir(), fmt.Sprintf("f%v", i)),
}
for i := 0; i < 40; i++ {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: fmt.Sprintf("user%v", i),
@ -630,16 +616,26 @@ func TestGetUsersForQuotaCheck(t *testing.T) {
VirtualFolders: []vfs.VirtualFolder{
{
BaseVirtualFolder: vfs.BaseVirtualFolder{
Name: folder.Name,
Name: fmt.Sprintf("f%v", i),
MappedPath: filepath.Join(os.TempDir(), fmt.Sprintf("f%v", i)),
},
VirtualPath: "/vfolder",
QuotaSize: 100,
},
},
Filters: dataprovider.UserFilters{
BaseUserFilters: sdk.BaseUserFilters{
DataTransferLimits: []sdk.DataTransferLimit{
{
Sources: []string{"172.16.0.0/16"},
UploadDataTransfer: 50,
DownloadDataTransfer: 80,
},
},
},
},
}
err = dataprovider.AddFolder(&folder, "", "", "")
assert.NoError(t, err)
err = dataprovider.AddUser(&user, "", "", "")
err = dataprovider.AddUser(&user, "", "")
assert.NoError(t, err)
err = dataprovider.UpdateVirtualFolderQuota(&vfs.BaseVirtualFolder{Name: fmt.Sprintf("f%v", i)}, 1, 50, false)
assert.NoError(t, err)
@ -647,7 +643,7 @@ func TestGetUsersForQuotaCheck(t *testing.T) {
users, err = dataprovider.GetUsersForQuotaCheck(usersToFetch)
assert.NoError(t, err)
assert.Len(t, users, 60)
assert.Len(t, users, 40)
for _, user := range users {
userIdxStr := strings.Replace(user.Username, "user", "", 1)
@ -665,16 +661,20 @@ func TestGetUsersForQuotaCheck(t *testing.T) {
assert.Len(t, user.VirtualFolders, 0, user.Username)
}
}
ul, dl, total := user.GetDataTransferLimits()
ul, dl, total := user.GetDataTransferLimits("127.1.1.1")
assert.Equal(t, int64(0), ul)
assert.Equal(t, int64(0), dl)
assert.Equal(t, int64(0), total)
ul, dl, total = user.GetDataTransferLimits("172.16.2.3")
assert.Equal(t, int64(50*1024*1024), ul)
assert.Equal(t, int64(80*1024*1024), dl)
assert.Equal(t, int64(0), total)
}
for i := 0; i < 60; i++ {
err = dataprovider.DeleteUser(fmt.Sprintf("user%v", i), "", "", "")
for i := 0; i < 40; i++ {
err = dataprovider.DeleteUser(fmt.Sprintf("user%v", i), "", "")
assert.NoError(t, err)
err = dataprovider.DeleteFolder(fmt.Sprintf("f%v", i), "", "", "")
err = dataprovider.DeleteFolder(fmt.Sprintf("f%v", i), "", "")
assert.NoError(t, err)
}
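
Note: the updated assertions exercise per-source transfer limits — `GetDataTransferLimits` now takes the client IP and applies the first `DataTransferLimit` whose `Sources` networks contain it (50/80 MB for `172.16.0.0/16`, zero otherwise). A rough sketch of that matching, using simplified stand-ins for the SDK types rather than the real implementation:

```go
package main

import (
	"fmt"
	"net"
)

// DataTransferLimit mirrors the sdk struct used in the test: the limits
// apply only to clients whose IP falls inside one of the Sources networks.
type DataTransferLimit struct {
	Sources              []string // CIDR networks, e.g. "172.16.0.0/16"
	UploadDataTransfer   int64    // MB
	DownloadDataTransfer int64    // MB
}

// getDataTransferLimits is a simplified, hypothetical version of
// User.GetDataTransferLimits: it returns the matching limits in bytes.
func getDataTransferLimits(limits []DataTransferLimit, clientIP string) (ul, dl int64) {
	ip := net.ParseIP(clientIP)
	if ip == nil {
		return 0, 0
	}
	for _, l := range limits {
		for _, source := range l.Sources {
			_, ipNet, err := net.ParseCIDR(source)
			if err != nil {
				continue
			}
			if ipNet.Contains(ip) {
				return l.UploadDataTransfer * 1024 * 1024, l.DownloadDataTransfer * 1024 * 1024
			}
		}
	}
	return 0, 0
}

func main() {
	limits := []DataTransferLimit{{Sources: []string{"172.16.0.0/16"}, UploadDataTransfer: 50, DownloadDataTransfer: 80}}
	fmt.Println(getDataTransferLimits(limits, "172.16.2.3")) // 52428800 83886080
	fmt.Println(getDataTransferLimits(limits, "127.1.1.1"))  // 0 0
}
```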

File diff suppressed because it is too large.


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build darwin
// +build darwin


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !linux && !darwin
// +build !linux,!darwin


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build linux
// +build linux

File diff suppressed because it is too large.


@ -1,6 +0,0 @@
project_id_env: CROWDIN_PROJECT_ID
api_token_env: CROWDIN_PERSONAL_TOKEN
files:
- source: /static/locales/en/translation.json
translation: /static/locales/%two_letters_code%/%original_file_name%
type: i18next_json


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
@ -21,17 +21,16 @@ import (
"net/url"
"os/exec"
"path/filepath"
"slices"
"strings"
"time"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/internal/command"
"github.com/drakkan/sftpgo/v2/internal/httpclient"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/plugin"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/command"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
const (
@ -43,25 +42,19 @@ const (
)
const (
actionObjectUser = "user"
actionObjectFolder = "folder"
actionObjectGroup = "group"
actionObjectAdmin = "admin"
actionObjectAPIKey = "api_key"
actionObjectShare = "share"
actionObjectEventAction = "event_action"
actionObjectEventRule = "event_rule"
actionObjectRole = "role"
actionObjectIPListEntry = "ip_list_entry"
actionObjectConfigs = "configs"
actionObjectUser = "user"
actionObjectFolder = "folder"
actionObjectGroup = "group"
actionObjectAdmin = "admin"
actionObjectAPIKey = "api_key"
actionObjectShare = "share"
)
var (
actionsConcurrencyGuard = make(chan struct{}, 100)
reservedUsers = []string{ActionExecutorSelf, ActionExecutorSystem}
)
func executeAction(operation, executor, ip, objectType, objectName, role string, object plugin.Renderer) {
func executeAction(operation, executor, ip, objectType, objectName string, object plugin.Renderer) {
if plugin.Handler.HasNotifiers() {
plugin.Handler.NotifyProviderEvent(&notifier.ProviderEvent{
Action: operation,
@ -69,18 +62,14 @@ func executeAction(operation, executor, ip, objectType, objectName, role string,
ObjectType: objectType,
ObjectName: objectName,
IP: ip,
Role: role,
Timestamp: time.Now().UnixNano(),
}, object)
}
if fnHandleRuleForProviderEvent != nil {
fnHandleRuleForProviderEvent(operation, executor, ip, objectType, objectName, role, object)
}
if config.Actions.Hook == "" {
return
}
if !slices.Contains(config.Actions.ExecuteOn, operation) ||
!slices.Contains(config.Actions.ExecuteFor, objectType) {
if !util.Contains(config.Actions.ExecuteOn, operation) ||
!util.Contains(config.Actions.ExecuteFor, objectType) {
return
}
@ -92,14 +81,14 @@ func executeAction(operation, executor, ip, objectType, objectName, role string,
dataAsJSON, err := object.RenderAsJSON(operation != operationDelete)
if err != nil {
providerLog(logger.LevelError, "unable to serialize user as JSON for operation %q: %v", operation, err)
providerLog(logger.LevelError, "unable to serialize user as JSON for operation %#v: %v", operation, err)
return
}
if strings.HasPrefix(config.Actions.Hook, "http") {
var url *url.URL
url, err := url.Parse(config.Actions.Hook)
if err != nil {
providerLog(logger.LevelError, "Invalid http_notification_url %q for operation %q: %v",
providerLog(logger.LevelError, "Invalid http_notification_url %#v for operation %#v: %v",
config.Actions.Hook, operation, err)
return
}
@ -109,10 +98,7 @@ func executeAction(operation, executor, ip, objectType, objectName, role string,
q.Add("ip", ip)
q.Add("object_type", objectType)
q.Add("object_name", objectName)
if role != "" {
q.Add("role", role)
}
q.Add("timestamp", fmt.Sprintf("%d", time.Now().UnixNano()))
q.Add("timestamp", fmt.Sprintf("%v", time.Now().UnixNano()))
url.RawQuery = q.Encode()
startTime := time.Now()
resp, err := httpclient.RetryablePost(url.String(), "application/json", bytes.NewBuffer(dataAsJSON))
@ -121,39 +107,38 @@ func executeAction(operation, executor, ip, objectType, objectName, role string,
respCode = resp.StatusCode
resp.Body.Close()
}
providerLog(logger.LevelDebug, "notified operation %q to URL: %s status code: %d, elapsed: %s err: %v",
providerLog(logger.LevelDebug, "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
operation, url.Redacted(), respCode, time.Since(startTime), err)
return
} else {
executeNotificationCommand(operation, executor, ip, objectType, objectName, dataAsJSON) //nolint:errcheck // the error is used in test cases only
}
executeNotificationCommand(operation, executor, ip, objectType, objectName, role, dataAsJSON) //nolint:errcheck // the error is used in test cases only
}()
}
func executeNotificationCommand(operation, executor, ip, objectType, objectName, role string, objectAsJSON []byte) error {
func executeNotificationCommand(operation, executor, ip, objectType, objectName string, objectAsJSON []byte) error {
if !filepath.IsAbs(config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %q", config.Actions.Hook)
err := fmt.Errorf("invalid notification command %#v", config.Actions.Hook)
logger.Warn(logSender, "", "unable to execute notification command: %v", err)
return err
}
timeout, env, args := command.GetConfig(config.Actions.Hook, command.HookProviderActions)
timeout, env := command.GetConfig(config.Actions.Hook)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
cmd := exec.CommandContext(ctx, config.Actions.Hook, args...)
cmd := exec.CommandContext(ctx, config.Actions.Hook)
cmd.Env = append(env,
fmt.Sprintf("SFTPGO_PROVIDER_ACTION=%s", operation),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_TYPE=%s", objectType),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_NAME=%s", objectName),
fmt.Sprintf("SFTPGO_PROVIDER_USERNAME=%s", executor),
fmt.Sprintf("SFTPGO_PROVIDER_IP=%s", ip),
fmt.Sprintf("SFTPGO_PROVIDER_ROLE=%s", role),
fmt.Sprintf("SFTPGO_PROVIDER_TIMESTAMP=%d", util.GetTimeAsMsSinceEpoch(time.Now())),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT=%s", util.BytesToString(objectAsJSON)))
fmt.Sprintf("SFTPGO_PROVIDER_ACTION=%v", operation),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_TYPE=%v", objectType),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_NAME=%v", objectName),
fmt.Sprintf("SFTPGO_PROVIDER_USERNAME=%v", executor),
fmt.Sprintf("SFTPGO_PROVIDER_IP=%v", ip),
fmt.Sprintf("SFTPGO_PROVIDER_TIMESTAMP=%v", util.GetTimeAsMsSinceEpoch(time.Now())),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT=%v", string(objectAsJSON)))
startTime := time.Now()
err := cmd.Run()
providerLog(logger.LevelDebug, "executed command %q, elapsed: %s, error: %v", config.Actions.Hook,
providerLog(logger.LevelDebug, "executed command %#v, elapsed: %v, error: %v", config.Actions.Hook,
time.Since(startTime), err)
return err
}
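
Note: on the receiving end, a provider-actions hook just reads the `SFTPGO_PROVIDER_*` variables set above. A minimal, hypothetical hook program for illustration:

```go
// A minimal, hypothetical provider-actions hook: SFTPGo runs the configured
// command and passes the event through SFTPGO_PROVIDER_* environment
// variables, as set in executeNotificationCommand above.
package main

import (
	"fmt"
	"os"
)

func main() {
	action := os.Getenv("SFTPGO_PROVIDER_ACTION")           // e.g. "add", "update", "delete"
	objectType := os.Getenv("SFTPGO_PROVIDER_OBJECT_TYPE")  // e.g. "user"
	objectName := os.Getenv("SFTPGO_PROVIDER_OBJECT_NAME")
	executor := os.Getenv("SFTPGO_PROVIDER_USERNAME")
	object := os.Getenv("SFTPGO_PROVIDER_OBJECT") // the object serialized as JSON
	fmt.Printf("%s %s %q by %s (%d bytes of JSON)\n",
		action, objectType, objectName, executor, len(object))
}
```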


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,29 +10,28 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"crypto/sha256"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"net"
"os"
"slices"
"strconv"
"strings"
"github.com/alexedwards/argon2id"
"github.com/sftpgo/sdk"
passwordvalidator "github.com/wagslane/go-password-validator"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/internal/kms"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/mfa"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/util"
)
// Available permissions for SFTPGo admins
@ -45,30 +44,24 @@ const (
PermAdminViewConnections = "view_conns"
PermAdminCloseConnections = "close_conns"
PermAdminViewServerStatus = "view_status"
PermAdminManageAdmins = "manage_admins"
PermAdminManageGroups = "manage_groups"
PermAdminManageFolders = "manage_folders"
PermAdminManageAPIKeys = "manage_apikeys"
PermAdminQuotaScans = "quota_scans"
PermAdminManageSystem = "manage_system"
PermAdminManageDefender = "manage_defender"
PermAdminViewDefender = "view_defender"
PermAdminRetentionChecks = "retention_checks"
PermAdminMetadataChecks = "metadata_checks"
PermAdminViewEvents = "view_events"
PermAdminDisableMFA = "disable_mfa"
)
const (
// GroupAddToUsersAsMembership defines that the admin's group will be added as membership group for new users
GroupAddToUsersAsMembership = iota
// GroupAddToUsersAsPrimary defines that the admin's group will be added as primary group for new users
GroupAddToUsersAsPrimary
// GroupAddToUsersAsSecondary defines that the admin's group will be added as secondary group for new users
GroupAddToUsersAsSecondary
)
var (
validAdminPerms = []string{PermAdminAny, PermAdminAddUsers, PermAdminChangeUsers, PermAdminDeleteUsers,
PermAdminViewUsers, PermAdminManageFolders, PermAdminManageGroups, PermAdminViewConnections,
PermAdminCloseConnections, PermAdminViewServerStatus, PermAdminQuotaScans,
PermAdminManageDefender, PermAdminViewDefender, PermAdminViewEvents, PermAdminDisableMFA}
forbiddenPermsForRoleAdmins = []string{PermAdminAny}
PermAdminViewUsers, PermAdminManageGroups, PermAdminViewConnections, PermAdminCloseConnections,
PermAdminViewServerStatus, PermAdminManageAdmins, PermAdminManageAPIKeys, PermAdminQuotaScans,
PermAdminManageSystem, PermAdminManageDefender, PermAdminViewDefender, PermAdminRetentionChecks,
PermAdminMetadataChecks, PermAdminViewEvents}
)
// AdminTOTPConfig defines the time-based one time password configuration
@ -87,8 +80,8 @@ func (c *AdminTOTPConfig) validate(username string) error {
if c.ConfigName == "" {
return util.NewValidationError("totp: config name is mandatory")
}
if !slices.Contains(mfa.GetAvailableTOTPConfigNames(), c.ConfigName) {
return util.NewValidationError(fmt.Sprintf("totp: config name %q not found", c.ConfigName))
if !util.Contains(mfa.GetAvailableTOTPConfigNames(), c.ConfigName) {
return util.NewValidationError(fmt.Sprintf("totp: config name %#v not found", c.ConfigName))
}
if c.Secret.IsEmpty() {
return util.NewValidationError("totp: secret is mandatory")
@ -102,85 +95,6 @@ func (c *AdminTOTPConfig) validate(username string) error {
return nil
}
// AdminPreferences defines the admin preferences
type AdminPreferences struct {
// Allow to hide some sections from the user page.
// These are not security settings and are not enforced server side
// in any way. They are only intended to simplify the user page in
// the WebAdmin UI.
//
// 1 means hide groups section
// 2 means hide filesystem section, "users_base_dir" must be set in the config file otherwise this setting is ignored
// 4 means hide virtual folders section
// 8 means hide profile section
// 16 means hide ACLs section
// 32 means hide disk and bandwidth quota limits section
// 64 means hide advanced settings section
//
// The settings can be combined
HideUserPageSections int `json:"hide_user_page_sections,omitempty"`
// Defines the default expiration for newly created users as number of days.
// 0 means no expiration
DefaultUsersExpiration int `json:"default_users_expiration,omitempty"`
}
// HideGroups returns true if the groups section should be hidden
func (p *AdminPreferences) HideGroups() bool {
return p.HideUserPageSections&1 != 0
}
// HideFilesystem returns true if the filesystem section should be hidden
func (p *AdminPreferences) HideFilesystem() bool {
return config.UsersBaseDir != "" && p.HideUserPageSections&2 != 0
}
// HideVirtualFolders returns true if the virtual folder section should be hidden
func (p *AdminPreferences) HideVirtualFolders() bool {
return p.HideUserPageSections&4 != 0
}
// HideProfile returns true if the profile section should be hidden
func (p *AdminPreferences) HideProfile() bool {
return p.HideUserPageSections&8 != 0
}
// HideACLs returns true if the ACLs section should be hidden
func (p *AdminPreferences) HideACLs() bool {
return p.HideUserPageSections&16 != 0
}
// HideDiskQuotaAndBandwidthLimits returns true if the disk quota and bandwidth limits
// section should be hidden
func (p *AdminPreferences) HideDiskQuotaAndBandwidthLimits() bool {
return p.HideUserPageSections&32 != 0
}
// HideAdvancedSettings returns true if the advanced settings section should be hidden
func (p *AdminPreferences) HideAdvancedSettings() bool {
return p.HideUserPageSections&64 != 0
}
// VisibleUserPageSections returns the number of visible sections
// in the user page
func (p *AdminPreferences) VisibleUserPageSections() int {
var result int
if !p.HideProfile() {
result++
}
if !p.HideACLs() {
result++
}
if !p.HideDiskQuotaAndBandwidthLimits() {
result++
}
if !p.HideAdvancedSettings() {
result++
}
return result
}
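
Note: since `HideUserPageSections` is a bit mask, the flags documented above combine with bitwise OR. A quick sketch (the constant names here are for illustration only):

```go
package main

import "fmt"

// Bit flags for HideUserPageSections, per the field documentation above.
const (
	hideGroups         = 1
	hideFilesystem     = 2
	hideVirtualFolders = 4
	hideProfile        = 8
)

func main() {
	// Hide the groups and virtual folders sections, keep the rest visible.
	sections := hideGroups | hideVirtualFolders // 5
	fmt.Println("hide groups:", sections&hideGroups != 0)                  // true
	fmt.Println("hide filesystem:", sections&hideFilesystem != 0)          // false
	fmt.Println("hide virtual folders:", sections&hideVirtualFolders != 0) // true
	fmt.Println("hide profile:", sections&hideProfile != 0)                // false
}
```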
// AdminFilters defines additional restrictions for SFTPGo admins
// TODO: rename to AdminOptions in v3
type AdminFilters struct {
@ -190,47 +104,12 @@ type AdminFilters struct {
AllowList []string `json:"allow_list,omitempty"`
// API key auth allows to impersonate this administrator with an API key
AllowAPIKeyAuth bool `json:"allow_api_key_auth,omitempty"`
// A password change is required at the next login
RequirePasswordChange bool `json:"require_password_change,omitempty"`
// Require two factor authentication
RequireTwoFactor bool `json:"require_two_factor"`
// Time-based one time passwords configuration
TOTPConfig AdminTOTPConfig `json:"totp_config,omitempty"`
// Recovery codes to use if the user loses access to their second factor auth device.
// Each code can only be used once, you should use these codes to login and disable or
// reset 2FA for your account
RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
Preferences AdminPreferences `json:"preferences"`
}
// AdminGroupMappingOptions defines the options for admin/group mapping
type AdminGroupMappingOptions struct {
AddToUsersAs int `json:"add_to_users_as,omitempty"`
}
func (o *AdminGroupMappingOptions) validate() error {
if o.AddToUsersAs < GroupAddToUsersAsMembership || o.AddToUsersAs > GroupAddToUsersAsSecondary {
return util.NewValidationError(fmt.Sprintf("Invalid mode to add groups to new users: %d", o.AddToUsersAs))
}
return nil
}
// GetUserGroupType returns the type for the matching user group
func (o *AdminGroupMappingOptions) GetUserGroupType() int {
switch o.AddToUsersAs {
case GroupAddToUsersAsPrimary:
return sdk.GroupTypePrimary
case GroupAddToUsersAsSecondary:
return sdk.GroupTypeSecondary
default:
return sdk.GroupTypeMembership
}
}
// AdminGroupMapping defines the mapping between an SFTPGo admin and a group
type AdminGroupMapping struct {
Name string `json:"name"`
Options AdminGroupMappingOptions `json:"options"`
RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
}
// Admin defines a SFTPGo admin
@ -247,17 +126,12 @@ type Admin struct {
Filters AdminFilters `json:"filters,omitempty"`
Description string `json:"description,omitempty"`
AdditionalInfo string `json:"additional_info,omitempty"`
// Groups membership
Groups []AdminGroupMapping `json:"groups,omitempty"`
// Creation time as unix timestamp in milliseconds. It will be 0 for admins created before v2.2.0
CreatedAt int64 `json:"created_at"`
// last update time as unix timestamp in milliseconds
UpdatedAt int64 `json:"updated_at"`
// Last login as unix timestamp in milliseconds
LastLogin int64 `json:"last_login"`
// Role name. If set the admin can only administer users with the same role.
// Role admins cannot be super administrators
Role string `json:"role,omitempty"`
}
// CountUnusedRecoveryCodes returns the number of unused recovery codes
@ -275,7 +149,7 @@ func (a *Admin) hashPassword() error {
if a.Password != "" && !util.IsStringPrefixInSlice(a.Password, internalHashPwdPrefixes) {
if config.PasswordValidation.Admins.MinEntropy > 0 {
if err := passwordvalidator.Validate(a.Password, config.PasswordValidation.Admins.MinEntropy); err != nil {
return util.NewI18nError(util.NewValidationError(err.Error()), util.I18nErrorPasswordComplexity)
return util.NewValidationError(err.Error())
}
}
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
@ -283,7 +157,7 @@ func (a *Admin) hashPassword() error {
if err != nil {
return err
}
a.Password = util.BytesToString(pwd)
a.Password = string(pwd)
} else {
pwd, err := argon2id.CreateHash(a.Password, argon2Params)
if err != nil {
@ -318,47 +192,14 @@ func (a *Admin) validateRecoveryCodes() error {
func (a *Admin) validatePermissions() error {
a.Permissions = util.RemoveDuplicates(a.Permissions, false)
if len(a.Permissions) == 0 {
return util.NewI18nError(
util.NewValidationError("please grant some permissions to this admin"),
util.I18nErrorPermissionsRequired,
)
return util.NewValidationError("please grant some permissions to this admin")
}
if slices.Contains(a.Permissions, PermAdminAny) {
if util.Contains(a.Permissions, PermAdminAny) {
a.Permissions = []string{PermAdminAny}
}
for _, perm := range a.Permissions {
if !slices.Contains(validAdminPerms, perm) {
return util.NewValidationError(fmt.Sprintf("invalid permission: %q", perm))
}
if a.Role != "" {
if slices.Contains(forbiddenPermsForRoleAdmins, perm) {
return util.NewI18nError(
util.NewValidationError("a role admin cannot be a super admin"),
util.I18nErrorRoleAdminPerms,
)
}
}
}
return nil
}
func (a *Admin) validateGroups() error {
hasPrimary := false
for _, g := range a.Groups {
if g.Name == "" {
return util.NewValidationError("group name is mandatory")
}
if err := g.Options.validate(); err != nil {
return err
}
if g.Options.AddToUsersAs == GroupAddToUsersAsPrimary {
if hasPrimary {
return util.NewI18nError(
util.NewValidationError("only one primary group is allowed"),
util.I18nErrorPrimaryGroup,
)
}
hasPrimary = true
if !util.Contains(validAdminPerms, perm) {
return util.NewValidationError(fmt.Sprintf("invalid permission: %#v", perm))
}
}
return nil
@ -367,28 +208,22 @@ func (a *Admin) validateGroups() error {
func (a *Admin) validate() error {
a.SetEmptySecretsIfNil()
if a.Username == "" {
return util.NewI18nError(util.NewValidationError("username is mandatory"), util.I18nErrorUsernameRequired)
}
if err := checkReservedUsernames(a.Username); err != nil {
return util.NewI18nError(err, util.I18nErrorReservedUsername)
return util.NewValidationError("username is mandatory")
}
if a.Password == "" {
return util.NewI18nError(util.NewValidationError("please set a password"), util.I18nErrorPasswordRequired)
return util.NewValidationError("please set a password")
}
if a.hasRedactedSecret() {
return util.NewValidationError("cannot save an admin with a redacted secret")
}
if err := a.Filters.TOTPConfig.validate(a.Username); err != nil {
return util.NewI18nError(err, util.I18nError2FAInvalid)
return err
}
if err := a.validateRecoveryCodes(); err != nil {
return util.NewI18nError(err, util.I18nErrorRecoveryCodesInvalid)
return err
}
if config.NamingRules&1 == 0 && !usernameRegex.MatchString(a.Username) {
return util.NewI18nError(
util.NewValidationError(fmt.Sprintf("username %q is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username)),
util.I18nErrorInvalidUser,
)
return util.NewValidationError(fmt.Sprintf("username %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username))
}
if err := a.hashPassword(); err != nil {
return err
@ -397,50 +232,31 @@ func (a *Admin) validate() error {
return err
}
if a.Email != "" && !util.IsEmailValid(a.Email) {
return util.NewI18nError(
util.NewValidationError(fmt.Sprintf("email %q is not valid", a.Email)),
util.I18nErrorInvalidEmail,
)
return util.NewValidationError(fmt.Sprintf("email %#v is not valid", a.Email))
}
a.Filters.AllowList = util.RemoveDuplicates(a.Filters.AllowList, false)
for _, IPMask := range a.Filters.AllowList {
_, _, err := net.ParseCIDR(IPMask)
if err != nil {
return util.NewI18nError(
util.NewValidationError(fmt.Sprintf("could not parse allow list entry %q : %v", IPMask, err)),
util.I18nErrorInvalidIPMask,
)
return util.NewValidationError(fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err))
}
}
return a.validateGroups()
return nil
}
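
Note: on main, these validation failures are additionally wrapped with `util.NewI18nError` so the WebAdmin can localize them while keeping the underlying message. A bare sketch of that wrapping pattern, with a hypothetical type rather than the real `util` implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// I18nError pairs an underlying error with a translation key, loosely
// mirroring what util.NewI18nError does on main (hypothetical shape).
type I18nError struct {
	err     error
	Message string // translation key, e.g. "username_required"
}

func (e *I18nError) Error() string { return e.err.Error() }
func (e *I18nError) Unwrap() error { return e.err }

func newI18nError(err error, key string) *I18nError {
	return &I18nError{err: err, Message: key}
}

func main() {
	base := errors.New("username is mandatory")
	err := newI18nError(base, "username_required")
	fmt.Println(err) // username is mandatory
	var i18n *I18nError
	if errors.As(err, &i18n) {
		fmt.Println("translation key:", i18n.Message)
	}
}
```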
// CheckPassword verifies the admin password
func (a *Admin) CheckPassword(password string) (bool, error) {
if config.PasswordCaching {
found, match := cachedAdminPasswords.Check(a.Username, password, a.Password)
if found {
if !match {
return false, ErrInvalidCredentials
}
return match, nil
}
}
if strings.HasPrefix(a.Password, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(a.Password), []byte(password)); err != nil {
return false, ErrInvalidCredentials
}
cachedAdminPasswords.Add(a.Username, password, a.Password)
return true, nil
}
match, err := argon2id.ComparePasswordAndHash(password, a.Password)
if !match || err != nil {
return false, ErrInvalidCredentials
}
if match {
cachedAdminPasswords.Add(a.Username, password, a.Password)
}
return match, err
}
@ -469,7 +285,7 @@ func (a *Admin) CanLoginFromIP(ip string) bool {
// CanLogin returns an error if the login is not allowed
func (a *Admin) CanLogin(ip string) error {
if a.Status != 1 {
return fmt.Errorf("admin %q is disabled", a.Username)
return fmt.Errorf("admin %#v is disabled", a.Username)
}
if !a.CanLoginFromIP(ip) {
return fmt.Errorf("login from IP %v not allowed", ip)
@ -541,20 +357,23 @@ func (a *Admin) SetNilSecretsIfEmpty() {
// HasPermission returns true if the admin has the specified permission
func (a *Admin) HasPermission(perm string) bool {
if slices.Contains(a.Permissions, PermAdminAny) {
if util.Contains(a.Permissions, PermAdminAny) {
return true
}
return slices.Contains(a.Permissions, perm)
return util.Contains(a.Permissions, perm)
}
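
Note: 2.3.x predates the standard `slices` package, so membership tests go through a generic `util.Contains` helper instead of `slices.Contains`. A plausible shape for such a helper (an assumption, not the exact upstream code):

```go
package main

import "fmt"

// Contains reports whether v is present in s. The standard library's
// slices.Contains (Go 1.21+) replaces helpers like this on main.
func Contains[T comparable](s []T, v T) bool {
	for _, item := range s {
		if item == v {
			return true
		}
	}
	return false
}

func main() {
	perms := []string{"view_users", "quota_scans"}
	fmt.Println(Contains(perms, "quota_scans")) // true
	fmt.Println(Contains(perms, "*"))           // false
}
```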
// HasPermissions returns true if the admin has all the specified permissions
func (a *Admin) HasPermissions(perms ...string) bool {
for _, perm := range perms {
if !a.HasPermission(perm) {
return false
}
// GetPermissionsAsString returns permission as string
func (a *Admin) GetPermissionsAsString() string {
return strings.Join(a.Permissions, ", ")
}
// GetLastLoginAsString returns the last login as string
func (a *Admin) GetLastLoginAsString() string {
if a.LastLogin > 0 {
return util.GetTimeFromMsecSinceEpoch(a.LastLogin).UTC().Format(iso8601UTCFormat)
}
return len(perms) > 0
return ""
}
// GetAllowedIPAsString returns the allowed IP as comma separated string
@ -573,9 +392,12 @@ func (a *Admin) CanManageMFA() bool {
}
// GetSignature returns a signature for this admin.
// It will change after an update
// It could change after an update
func (a *Admin) GetSignature() string {
return strconv.FormatInt(a.UpdatedAt, 10)
data := []byte(a.Username)
data = append(data, []byte(a.Password)...)
signature := sha256.Sum256(data)
return base64.StdEncoding.EncodeToString(signature[:])
}
func (a *Admin) getACopy() Admin {
@ -585,8 +407,6 @@ func (a *Admin) getACopy() Admin {
filters := AdminFilters{}
filters.AllowList = make([]string, len(a.Filters.AllowList))
filters.AllowAPIKeyAuth = a.Filters.AllowAPIKeyAuth
filters.RequirePasswordChange = a.Filters.RequirePasswordChange
filters.RequireTwoFactor = a.Filters.RequireTwoFactor
filters.TOTPConfig.Enabled = a.Filters.TOTPConfig.Enabled
filters.TOTPConfig.ConfigName = a.Filters.TOTPConfig.ConfigName
filters.TOTPConfig.Secret = a.Filters.TOTPConfig.Secret.Clone()
@ -601,19 +421,6 @@ func (a *Admin) getACopy() Admin {
Used: code.Used,
})
}
filters.Preferences = AdminPreferences{
HideUserPageSections: a.Filters.Preferences.HideUserPageSections,
DefaultUsersExpiration: a.Filters.Preferences.DefaultUsersExpiration,
}
groups := make([]AdminGroupMapping, 0, len(a.Groups))
for _, g := range a.Groups {
groups = append(groups, AdminGroupMapping{
Name: g.Name,
Options: AdminGroupMappingOptions{
AddToUsersAs: g.Options.AddToUsersAs,
},
})
}
return Admin{
ID: a.ID,
@ -622,14 +429,12 @@ func (a *Admin) getACopy() Admin {
Password: a.Password,
Email: a.Email,
Permissions: permissions,
Groups: groups,
Filters: filters,
AdditionalInfo: a.AdditionalInfo,
Description: a.Description,
LastLogin: a.LastLogin,
CreatedAt: a.CreatedAt,
UpdatedAt: a.UpdatedAt,
Role: a.Role,
}
}


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
@ -23,8 +23,8 @@ import (
"github.com/alexedwards/argon2id"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// APIKeyScope defines the supported API key scopes
@ -118,7 +118,7 @@ func (k *APIKey) hashKey() error {
if err != nil {
return err
}
k.Key = util.BytesToString(hashed)
k.Key = string(hashed)
} else {
hashed, err := argon2id.CreateHash(k.Key, argon2Params)
if err != nil {
@ -165,7 +165,7 @@ func (k *APIKey) validate() error {
k.Admin = ""
}
if k.User != "" {
_, err := provider.userExists(k.User, "")
_, err := provider.userExists(k.User)
if err != nil {
return util.NewValidationError(fmt.Sprintf("unable to check API key user %v: %v", k.User, err))
}
@ -182,18 +182,9 @@ func (k *APIKey) validate() error {
// Authenticate tries to authenticate the provided plain key
func (k *APIKey) Authenticate(plainKey string) error {
if k.ExpiresAt > 0 && k.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return fmt.Errorf("API key %q is expired, expiration timestamp: %v current timestamp: %v", k.KeyID,
return fmt.Errorf("API key %#v is expired, expiration timestamp: %v current timestamp: %v", k.KeyID,
k.ExpiresAt, util.GetTimeAsMsSinceEpoch(time.Now()))
}
if config.PasswordCaching {
found, match := cachedAPIKeys.Check(k.KeyID, plainKey, k.Key)
if found {
if !match {
return ErrInvalidCredentials
}
return nil
}
}
if strings.HasPrefix(k.Key, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(k.Key), []byte(plainKey)); err != nil {
return ErrInvalidCredentials
@ -205,6 +196,5 @@ func (k *APIKey) Authenticate(plainKey string) error {
}
}
cachedAPIKeys.Add(k.KeyID, plainKey, k.Key)
return nil
}
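
Note: the expiration guard at the top of `Authenticate` compares millisecond epochs. A tiny self-contained sketch of the same check, assuming the `util.GetTimeAsMsSinceEpoch` convention of unix time in milliseconds:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// msSinceEpoch matches util.GetTimeAsMsSinceEpoch: unix time in milliseconds.
func msSinceEpoch(t time.Time) int64 { return t.UnixMilli() }

// checkExpiration is a hypothetical extract of the guard in
// APIKey.Authenticate: a zero ExpiresAt means the key never expires.
func checkExpiration(expiresAt int64) error {
	if expiresAt > 0 && expiresAt < msSinceEpoch(time.Now()) {
		return errors.New("API key is expired")
	}
	return nil
}

func main() {
	fmt.Println(checkExpiration(0))                                        // <nil>: no expiration
	fmt.Println(checkExpiration(msSinceEpoch(time.Now().Add(-time.Hour)))) // expired
	fmt.Println(checkExpiration(msSinceEpoch(time.Now().Add(time.Hour))))  // <nil>
}
```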

File diff suppressed because it is too large.


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build nobolt
// +build nobolt
@ -20,13 +20,13 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {
version.AddFeature("-bolt")
}
func initializeBoltProvider(_ string) error {
func initializeBoltProvider(basePath string) error {
return errors.New("bolt disabled at build time")
}


@ -0,0 +1,76 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"sync"
)
var cachedPasswords passwordsCache
func init() {
cachedPasswords = passwordsCache{
cache: make(map[string]string),
}
}
type passwordsCache struct {
sync.RWMutex
cache map[string]string
}
func (c *passwordsCache) Add(username, password string) {
if !config.PasswordCaching || username == "" || password == "" {
return
}
c.Lock()
defer c.Unlock()
c.cache[username] = password
}
func (c *passwordsCache) Remove(username string) {
if !config.PasswordCaching {
return
}
c.Lock()
defer c.Unlock()
delete(c.cache, username)
}
// Check returns whether the user is found and whether the passwords match
func (c *passwordsCache) Check(username, password string) (bool, bool) {
if username == "" || password == "" {
return false, false
}
c.RLock()
defer c.RUnlock()
pwd, ok := c.cache[username]
if !ok {
return false, false
}
return true, pwd == password
}
// CheckCachedPassword is a utility method used only in test cases
func CheckCachedPassword(username, password string) (bool, bool) {
return cachedPasswords.Check(username, password)
}
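
Note: the new cache stores plain-text passwords keyed by username so repeated logins can skip the expensive hash verification. A hedged usage sketch — the surrounding login plumbing is hypothetical, and a trimmed copy of the cache is declared locally so the example compiles on its own:

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/crypto/bcrypt"
)

// pwdCache is a trimmed copy of the passwordsCache above.
type pwdCache struct {
	sync.RWMutex
	cache map[string]string
}

func (c *pwdCache) Add(username, password string) {
	c.Lock()
	defer c.Unlock()
	c.cache[username] = password
}

func (c *pwdCache) Check(username, password string) (found, match bool) {
	c.RLock()
	defer c.RUnlock()
	pwd, ok := c.cache[username]
	return ok, ok && pwd == password
}

var cached = pwdCache{cache: make(map[string]string)}

// checkPassword is a hypothetical login path: consult the cache first,
// fall back to the expensive bcrypt comparison, cache on success.
func checkPassword(username, password, storedHash string) bool {
	if found, match := cached.Check(username, password); found {
		return match
	}
	if bcrypt.CompareHashAndPassword([]byte(storedHash), []byte(password)) != nil {
		return false
	}
	cached.Add(username, password)
	return true
}

func main() {
	hash, _ := bcrypt.GenerateFromPassword([]byte("secret"), bcrypt.MinCost)
	fmt.Println(checkPassword("alice", "secret", string(hash))) // true: slow path, then cached
	fmt.Println(checkPassword("alice", "secret", string(hash))) // true: served from the cache
	fmt.Println(checkPassword("alice", "wrong", string(hash)))  // false: cached entry does not match
}
```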


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
@ -18,10 +18,10 @@ import (
"sync"
"time"
"github.com/drakkan/webdav"
"golang.org/x/net/webdav"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@ -77,7 +77,7 @@ func (cache *usersCache) updateLastLogin(username string) {
// swapWebDAVUser updates an existing cached user with the specified one
// preserving the lock fs if possible
// FIXME: this could be racy in rare cases
func (cache *usersCache) swap(userRef *User, plainPassword string) {
func (cache *usersCache) swap(userRef *User) {
user := userRef.getACopy()
err := user.LoadAndApplyGroupSettings()
@ -85,32 +85,28 @@ func (cache *usersCache) swap(userRef *User, plainPassword string) {
defer cache.Unlock()
if cachedUser, ok := cache.users[user.Username]; ok {
if cachedUser.User.Password != user.Password {
providerLog(logger.LevelDebug, "current password different from the cached one for user %#v, removing from cache",
user.Username)
// the password changed, the cached user is no longer valid
delete(cache.users, user.Username)
return
}
if err != nil {
providerLog(logger.LevelDebug, "unable to load group settings, for user %q, removing from cache, err :%v",
providerLog(logger.LevelDebug, "unable to load group settings, for user %#v, removing from cache, err :%v",
user.Username, err)
delete(cache.users, user.Username)
return
}
if plainPassword != "" {
cachedUser.Password = plainPassword
} else {
if cachedUser.User.Password != user.Password {
providerLog(logger.LevelDebug, "current password different from the cached one for user %q, removing from cache",
user.Username)
// the password changed, the cached user is no longer valid
delete(cache.users, user.Username)
return
}
}
if cachedUser.User.isFsEqual(&user) {
// the updated user has the same fs as the cached one, we can preserve the lock filesystem
providerLog(logger.LevelDebug, "current password and fs unchanged for for user %q, swap cached one",
providerLog(logger.LevelDebug, "current password and fs unchanged for for user %#v, swap cached one",
user.Username)
cachedUser.User = user
cache.users[user.Username] = cachedUser
} else {
// filesystem changed, the cached user is no longer valid
providerLog(logger.LevelDebug, "current fs different from the cached one for user %q, removing from cache",
providerLog(logger.LevelDebug, "current fs different from the cached one for user %#v, removing from cache",
user.Username)
delete(cache.users, user.Username)
}
@ -158,10 +154,7 @@ func (cache *usersCache) get(username string) (*CachedUser, bool) {
defer cache.RUnlock()
cachedUser, ok := cache.users[username]
if !ok {
return nil, false
}
return &cachedUser, true
return &cachedUser, ok
}
// CacheWebDAVUser add a user to the WebDAV cache


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
@ -22,10 +22,10 @@ import (
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/plugin"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
// GroupUserSettings defines the settings to apply to users
@ -135,16 +135,13 @@ func (g *Group) hasRedactedSecret() bool {
func (g *Group) validate() error {
g.SetEmptySecretsIfNil()
if g.Name == "" {
return util.NewI18nError(util.NewValidationError("name is mandatory"), util.I18nErrorNameRequired)
return util.NewValidationError("name is mandatory")
}
if config.NamingRules&1 == 0 && !usernameRegex.MatchString(g.Name) {
return util.NewI18nError(
util.NewValidationError(fmt.Sprintf("name %q is not valid, the following characters are allowed: a-zA-Z0-9-_.~", g.Name)),
util.I18nErrorInvalidName,
)
return util.NewValidationError(fmt.Sprintf("name %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", g.Name))
}
if g.hasRedactedSecret() {
return util.NewValidationError("cannot save a group with a redacted secret")
return util.NewValidationError("cannot save a user with a redacted secret")
}
vfolders, err := validateAssociatedVirtualFolders(g.VirtualFolders)
if err != nil {
@ -158,10 +155,8 @@ func (g *Group) validateUserSettings() error {
if g.UserSettings.HomeDir != "" {
g.UserSettings.HomeDir = filepath.Clean(g.UserSettings.HomeDir)
if !filepath.IsAbs(g.UserSettings.HomeDir) {
return util.NewI18nError(
util.NewValidationError(fmt.Sprintf("home_dir must be an absolute path, actual value: %v", g.UserSettings.HomeDir)),
util.I18nErrorInvalidHomeDir,
)
return util.NewValidationError(fmt.Sprintf("home_dir must be an absolute path, actual value: %v",
g.UserSettings.HomeDir))
}
}
if err := g.UserSettings.FsConfig.Validate(g.GetEncryptionAdditionalData()); err != nil {
@ -175,11 +170,10 @@ func (g *Group) validateUserSettings() error {
if len(g.UserSettings.Permissions) > 0 {
permissions, err := validateUserPermissions(g.UserSettings.Permissions)
if err != nil {
return util.NewI18nError(err, util.I18nErrorGenericPermission)
return err
}
g.UserSettings.Permissions = permissions
}
g.UserSettings.Filters.TLSCerts = nil
if err := validateBaseFilters(&g.UserSettings.Filters); err != nil {
return err
}
@ -193,8 +187,6 @@ func (g *Group) validateUserSettings() error {
func (g *Group) getACopy() Group {
users := make([]string, len(g.Users))
copy(users, g.Users)
admins := make([]string, len(g.Admins))
copy(admins, g.Admins)
virtualFolders := make([]vfs.VirtualFolder, 0, len(g.VirtualFolders))
for idx := range g.VirtualFolders {
vfolder := g.VirtualFolders[idx].GetACopy()
@ -215,7 +207,6 @@ func (g *Group) getACopy() Group {
CreatedAt: g.CreatedAt,
UpdatedAt: g.UpdatedAt,
Users: users,
Admins: admins,
},
UserSettings: GroupUserSettings{
BaseGroupUserSettings: sdk.BaseGroupUserSettings{
@ -229,7 +220,6 @@ func (g *Group) getACopy() Group {
UploadDataTransfer: g.UserSettings.UploadDataTransfer,
DownloadDataTransfer: g.UserSettings.DownloadDataTransfer,
TotalDataTransfer: g.UserSettings.TotalDataTransfer,
ExpiresIn: g.UserSettings.ExpiresIn,
Filters: copyBaseUserFilters(g.UserSettings.Filters),
},
FsConfig: g.UserSettings.FsConfig.GetACopy(),
@ -237,3 +227,8 @@ func (g *Group) getACopy() Group {
VirtualFolders: virtualFolders,
}
}
// GetUsersAsString returns the list of users as comma separated string
func (g *Group) GetUsersAsString() string {
return strings.Join(g.Users, ",")
}

dataprovider/memory.go — normal file, 2058 lines

File diff suppressed because it is too large.


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !nomysql
// +build !nomysql
@ -25,16 +25,14 @@ import (
"errors"
"fmt"
"os"
"path/filepath"
"strings"
"time"
"github.com/go-sql-driver/mysql"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
@ -42,7 +40,6 @@ const (
"DROP TABLE IF EXISTS `{{folders_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{users_folders_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{users_groups_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{admins_groups_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{groups_folders_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{admins}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{folders}}` CASCADE;" +
@ -53,24 +50,11 @@ const (
"DROP TABLE IF EXISTS `{{defender_hosts}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{active_transfers}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{shared_sessions}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{rules_actions_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{events_actions}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{events_rules}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{tasks}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{nodes}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{roles}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{ip_lists}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{configs}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{schema_version}}` CASCADE;"
mysqlInitialSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);" +
"CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`description` varchar(512) NULL, `password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, " +
"`permissions` longtext NOT NULL, `filters` longtext NULL, `additional_info` longtext NULL, `last_login` bigint NOT NULL, " +
"`role_id` integer NULL, `created_at` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
"CREATE TABLE `{{active_transfers}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`connection_id` varchar(100) NOT NULL, `transfer_id` bigint NOT NULL, `transfer_type` integer NOT NULL, " +
"`username` varchar(255) NOT NULL, `folder_name` varchar(255) NULL, `ip` varchar(50) NOT NULL, " +
"`truncated_size` bigint NOT NULL, `current_ul_size` bigint NOT NULL, `current_dl_size` bigint NOT NULL, " +
"`created_at` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
"CREATE TABLE `{{defender_hosts}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`ip` varchar(50) NOT NULL UNIQUE, `ban_time` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
@ -81,11 +65,6 @@ const (
"CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL UNIQUE, " +
"`description` varchar(512) NULL, `path` longtext NULL, `used_quota_size` bigint NOT NULL, " +
"`used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, `filesystem` longtext NULL);" +
"CREATE TABLE `{{groups}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`name` varchar(255) NOT NULL UNIQUE, `description` varchar(512) NULL, `created_at` bigint NOT NULL, " +
"`updated_at` bigint NOT NULL, `user_settings` longtext NULL);" +
"CREATE TABLE `{{shared_sessions}}` (`key` varchar(128) NOT NULL PRIMARY KEY, " +
"`data` longtext NOT NULL, `type` integer NOT NULL, `timestamp` bigint NOT NULL);" +
"CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`status` integer NOT NULL, `expiration_date` bigint NOT NULL, `description` varchar(512) NULL, `password` longtext NULL, " +
"`public_keys` longtext NULL, `home_dir` longtext NOT NULL, `uid` bigint NOT NULL, `gid` bigint NOT NULL, " +
@ -93,34 +72,12 @@ const (
"`permissions` longtext NOT NULL, `used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, " +
"`last_quota_update` bigint NOT NULL, `upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, " +
"`last_login` bigint NOT NULL, `filters` longtext NULL, `filesystem` longtext NULL, `additional_info` longtext NULL, " +
"`created_at` bigint NOT NULL, `updated_at` bigint NOT NULL, `email` varchar(255) NULL, " +
"`upload_data_transfer` integer NOT NULL, `download_data_transfer` integer NOT NULL, " +
"`total_data_transfer` integer NOT NULL, `used_upload_data_transfer` bigint NOT NULL, " +
"`used_download_data_transfer` bigint NOT NULL, `deleted_at` bigint NOT NULL, `first_download` bigint NOT NULL, " +
"`first_upload` bigint NOT NULL, `last_password_change` bigint NOT NULL, `role_id` integer NULL);" +
"CREATE TABLE `{{groups_folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`group_id` integer NOT NULL, `folder_id` integer NOT NULL, " +
"`virtual_path` longtext NOT NULL, `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL);" +
"CREATE TABLE `{{users_groups_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`user_id` integer NOT NULL, `group_id` integer NOT NULL, `group_type` integer NOT NULL);" +
"CREATE TABLE `{{users_folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` longtext NOT NULL, " +
"`created_at` bigint NOT NULL, `updated_at` bigint NOT NULL, `email` varchar(255) NULL);" +
"CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` longtext NOT NULL, " +
"`quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `folder_id` integer NOT NULL, `user_id` integer NOT NULL);" +
"ALTER TABLE `{{users_folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_user_folder_mapping` " +
"UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{users_folders_mapping}}` ADD CONSTRAINT `{{prefix}}users_folders_mapping_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{users_folders_mapping}}` ADD CONSTRAINT `{{prefix}}users_folders_mapping_folder_id_fk_folders_id` " +
"FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{users_groups_mapping}}` ADD CONSTRAINT `{{prefix}}unique_user_group_mapping` UNIQUE (`user_id`, `group_id`);" +
"ALTER TABLE `{{groups_folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_group_folder_mapping` UNIQUE (`group_id`, `folder_id`);" +
"ALTER TABLE `{{users_groups_mapping}}` ADD CONSTRAINT `{{prefix}}users_groups_mapping_group_id_fk_groups_id` " +
"FOREIGN KEY (`group_id`) REFERENCES `{{groups}}` (`id`) ON DELETE NO ACTION;" +
"ALTER TABLE `{{users_groups_mapping}}` ADD CONSTRAINT `{{prefix}}users_groups_mapping_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE; " +
"ALTER TABLE `{{groups_folders_mapping}}` ADD CONSTRAINT `{{prefix}}groups_folders_mapping_folder_id_fk_folders_id` " +
"FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{groups_folders_mapping}}` ADD CONSTRAINT `{{prefix}}groups_folders_mapping_group_id_fk_groups_id` " +
"FOREIGN KEY (`group_id`) REFERENCES `{{groups}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"CREATE TABLE `{{shares}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`share_id` varchar(60) NOT NULL UNIQUE, `name` varchar(255) NOT NULL, `description` varchar(512) NULL, " +
"`scope` integer NOT NULL, `paths` longtext NOT NULL, `created_at` bigint NOT NULL, " +
@ -134,66 +91,87 @@ const (
"`expires_at` bigint NOT NULL, `description` longtext NULL, `admin_id` integer NULL, `user_id` integer NULL);" +
"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_admin_id_fk_admins_id` FOREIGN KEY (`admin_id`) REFERENCES `{{admins}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"CREATE TABLE `{{events_rules}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`name` varchar(255) NOT NULL UNIQUE, `status` integer NOT NULL, `description` varchar(512) NULL, `created_at` bigint NOT NULL, " +
"`updated_at` bigint NOT NULL, `trigger` integer NOT NULL, `conditions` longtext NOT NULL, `deleted_at` bigint NOT NULL);" +
"CREATE TABLE `{{events_actions}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`name` varchar(255) NOT NULL UNIQUE, `description` varchar(512) NULL, `type` integer NOT NULL, " +
"`options` longtext NOT NULL);" +
"CREATE TABLE `{{rules_actions_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`rule_id` integer NOT NULL, `action_id` integer NOT NULL, `order` integer NOT NULL, `options` longtext NOT NULL);" +
"CREATE TABLE `{{tasks}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL UNIQUE, " +
"`updated_at` bigint NOT NULL, `version` bigint NOT NULL);" +
"ALTER TABLE `{{rules_actions_mapping}}` ADD CONSTRAINT `{{prefix}}unique_rule_action_mapping` UNIQUE (`rule_id`, `action_id`);" +
"ALTER TABLE `{{rules_actions_mapping}}` ADD CONSTRAINT `{{prefix}}rules_actions_mapping_rule_id_fk_events_rules_id` " +
"FOREIGN KEY (`rule_id`) REFERENCES `{{events_rules}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{rules_actions_mapping}}` ADD CONSTRAINT `{{prefix}}rules_actions_mapping_action_id_fk_events_targets_id` " +
"FOREIGN KEY (`action_id`) REFERENCES `{{events_actions}}` (`id`) ON DELETE NO ACTION;" +
"CREATE TABLE `{{admins_groups_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
" `admin_id` integer NOT NULL, `group_id` integer NOT NULL, `options` longtext NOT NULL);" +
"ALTER TABLE `{{admins_groups_mapping}}` ADD CONSTRAINT `{{prefix}}unique_admin_group_mapping` " +
"UNIQUE (`admin_id`, `group_id`);" +
"ALTER TABLE `{{admins_groups_mapping}}` ADD CONSTRAINT `{{prefix}}admins_groups_mapping_admin_id_fk_admins_id` " +
"FOREIGN KEY (`admin_id`) REFERENCES `{{admins}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{admins_groups_mapping}}` ADD CONSTRAINT `{{prefix}}admins_groups_mapping_group_id_fk_groups_id` " +
"FOREIGN KEY (`group_id`) REFERENCES `{{groups}}` (`id`) ON DELETE CASCADE;" +
"CREATE TABLE `{{nodes}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`name` varchar(255) NOT NULL UNIQUE, `data` longtext NOT NULL, `created_at` bigint NOT NULL, " +
"`updated_at` bigint NOT NULL);" +
"CREATE TABLE `{{roles}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL UNIQUE, " +
"`description` varchar(512) NULL, `created_at` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
"ALTER TABLE `{{admins}}` ADD CONSTRAINT `{{prefix}}admins_role_id_fk_roles_id` FOREIGN KEY (`role_id`) " +
"REFERENCES `{{roles}}`(`id`) ON DELETE NO ACTION;" +
"ALTER TABLE `{{users}}` ADD CONSTRAINT `{{prefix}}users_role_id_fk_roles_id` FOREIGN KEY (`role_id`) " +
"REFERENCES `{{roles}}`(`id`) ON DELETE SET NULL;" +
"CREATE TABLE `{{ip_lists}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, `type` integer NOT NULL, " +
"`ipornet` varchar(50) NOT NULL, `mode` integer NOT NULL, `description` varchar(512) NULL, " +
"`first` VARBINARY(16) NOT NULL, `last` VARBINARY(16) NOT NULL, `ip_type` integer NOT NULL, `protocols` integer NOT NULL, " +
"`created_at` bigint NOT NULL, `updated_at` bigint NOT NULL, `deleted_at` bigint NOT NULL);" +
"ALTER TABLE `{{ip_lists}}` ADD CONSTRAINT `{{prefix}}unique_ipornet_type_mapping` UNIQUE (`type`, `ipornet`);" +
"CREATE TABLE `{{configs}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `configs` longtext NOT NULL);" +
"INSERT INTO {{configs}} (configs) VALUES ('{}');" +
"CREATE INDEX `{{prefix}}users_updated_at_idx` ON `{{users}}` (`updated_at`);" +
"CREATE INDEX `{{prefix}}users_deleted_at_idx` ON `{{users}}` (`deleted_at`);" +
"CREATE INDEX `{{prefix}}defender_hosts_updated_at_idx` ON `{{defender_hosts}}` (`updated_at`);" +
"CREATE INDEX `{{prefix}}defender_hosts_ban_time_idx` ON `{{defender_hosts}}` (`ban_time`);" +
"CREATE INDEX `{{prefix}}defender_events_date_time_idx` ON `{{defender_events}}` (`date_time`);" +
"INSERT INTO {{schema_version}} (version) VALUES (15);"
mysqlV16SQL = "ALTER TABLE `{{users}}` ADD COLUMN `download_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `download_data_transfer` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `total_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `total_data_transfer` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `upload_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `upload_data_transfer` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `used_download_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `used_download_data_transfer` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `used_upload_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `used_upload_data_transfer` DROP DEFAULT;" +
"CREATE TABLE `{{active_transfers}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`connection_id` varchar(100) NOT NULL, `transfer_id` bigint NOT NULL, `transfer_type` integer NOT NULL, " +
"`username` varchar(255) NOT NULL, `folder_name` varchar(255) NULL, `ip` varchar(50) NOT NULL, " +
"`truncated_size` bigint NOT NULL, `current_ul_size` bigint NOT NULL, `current_dl_size` bigint NOT NULL, " +
"`created_at` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
"CREATE INDEX `{{prefix}}active_transfers_connection_id_idx` ON `{{active_transfers}}` (`connection_id`);" +
"CREATE INDEX `{{prefix}}active_transfers_transfer_id_idx` ON `{{active_transfers}}` (`transfer_id`);" +
"CREATE INDEX `{{prefix}}active_transfers_updated_at_idx` ON `{{active_transfers}}` (`updated_at`);" +
"CREATE INDEX `{{prefix}}active_transfers_updated_at_idx` ON `{{active_transfers}}` (`updated_at`);"
mysqlV16DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `used_upload_data_transfer`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `used_download_data_transfer`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `upload_data_transfer`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `total_data_transfer`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `download_data_transfer`;" +
"DROP TABLE `{{active_transfers}}` CASCADE;"
mysqlV17SQL = "CREATE TABLE `{{groups}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`name` varchar(255) NOT NULL UNIQUE, `description` varchar(512) NULL, `created_at` bigint NOT NULL, " +
"`updated_at` bigint NOT NULL, `user_settings` longtext NULL);" +
"CREATE TABLE `{{groups_folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`group_id` integer NOT NULL, `folder_id` integer NOT NULL, " +
"`virtual_path` longtext NOT NULL, `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL);" +
"CREATE TABLE `{{users_groups_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`user_id` integer NOT NULL, `group_id` integer NOT NULL, `group_type` integer NOT NULL);" +
"ALTER TABLE `{{folders_mapping}}` DROP FOREIGN KEY `{{prefix}}folders_mapping_folder_id_fk_folders_id`;" +
"ALTER TABLE `{{folders_mapping}}` DROP FOREIGN KEY `{{prefix}}folders_mapping_user_id_fk_users_id`;" +
"ALTER TABLE `{{folders_mapping}}` DROP INDEX `{{prefix}}unique_mapping`;" +
"RENAME TABLE `{{folders_mapping}}` TO `{{users_folders_mapping}}`;" +
"ALTER TABLE `{{users_folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_user_folder_mapping` " +
"UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{users_folders_mapping}}` ADD CONSTRAINT `{{prefix}}users_folders_mapping_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{users_folders_mapping}}` ADD CONSTRAINT `{{prefix}}users_folders_mapping_folder_id_fk_folders_id` " +
"FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{users_groups_mapping}}` ADD CONSTRAINT `{{prefix}}unique_user_group_mapping` UNIQUE (`user_id`, `group_id`);" +
"ALTER TABLE `{{groups_folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_group_folder_mapping` UNIQUE (`group_id`, `folder_id`);" +
"ALTER TABLE `{{users_groups_mapping}}` ADD CONSTRAINT `{{prefix}}users_groups_mapping_group_id_fk_groups_id` " +
"FOREIGN KEY (`group_id`) REFERENCES `{{groups}}` (`id`) ON DELETE NO ACTION;" +
"ALTER TABLE `{{users_groups_mapping}}` ADD CONSTRAINT `{{prefix}}users_groups_mapping_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{groups_folders_mapping}}` ADD CONSTRAINT `{{prefix}}groups_folders_mapping_folder_id_fk_folders_id` " +
"FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{groups_folders_mapping}}` ADD CONSTRAINT `{{prefix}}groups_folders_mapping_group_id_fk_groups_id` " +
"FOREIGN KEY (`group_id`) REFERENCES `{{groups}}` (`id`) ON DELETE CASCADE;" +
"CREATE INDEX `{{prefix}}groups_updated_at_idx` ON `{{groups}}` (`updated_at`);"
mysqlV17DownSQL = "ALTER TABLE `{{groups_folders_mapping}}` DROP FOREIGN KEY `{{prefix}}groups_folders_mapping_group_id_fk_groups_id`;" +
"ALTER TABLE `{{groups_folders_mapping}}` DROP FOREIGN KEY `{{prefix}}groups_folders_mapping_folder_id_fk_folders_id`;" +
"ALTER TABLE `{{users_groups_mapping}}` DROP FOREIGN KEY `{{prefix}}users_groups_mapping_user_id_fk_users_id`;" +
"ALTER TABLE `{{users_groups_mapping}}` DROP FOREIGN KEY `{{prefix}}users_groups_mapping_group_id_fk_groups_id`;" +
"ALTER TABLE `{{groups_folders_mapping}}` DROP INDEX `{{prefix}}unique_group_folder_mapping`;" +
"ALTER TABLE `{{users_groups_mapping}}` DROP INDEX `{{prefix}}unique_user_group_mapping`;" +
"DROP TABLE `{{users_groups_mapping}}` CASCADE;" +
"DROP TABLE `{{groups_folders_mapping}}` CASCADE;" +
"DROP TABLE `{{groups}}` CASCADE;" +
"ALTER TABLE `{{users_folders_mapping}}` DROP FOREIGN KEY `{{prefix}}users_folders_mapping_folder_id_fk_folders_id`;" +
"ALTER TABLE `{{users_folders_mapping}}` DROP FOREIGN KEY `{{prefix}}users_folders_mapping_user_id_fk_users_id`;" +
"ALTER TABLE `{{users_folders_mapping}}` DROP INDEX `{{prefix}}unique_user_folder_mapping`;" +
"RENAME TABLE `{{users_folders_mapping}}` TO `{{folders_mapping}}`;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_folder_id_fk_folders_id` " +
"FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;"
mysqlV19SQL = "CREATE TABLE `{{shared_sessions}}` (`key` varchar(128) NOT NULL PRIMARY KEY, " +
"`data` longtext NOT NULL, `type` integer NOT NULL, `timestamp` bigint NOT NULL);" +
"CREATE INDEX `{{prefix}}shared_sessions_type_idx` ON `{{shared_sessions}}` (`type`);" +
"CREATE INDEX `{{prefix}}shared_sessions_timestamp_idx` ON `{{shared_sessions}}` (`timestamp`);" +
"CREATE INDEX `{{prefix}}events_rules_updated_at_idx` ON `{{events_rules}}` (`updated_at`);" +
"CREATE INDEX `{{prefix}}events_rules_deleted_at_idx` ON `{{events_rules}}` (`deleted_at`);" +
"CREATE INDEX `{{prefix}}events_rules_trigger_idx` ON `{{events_rules}}` (`trigger`);" +
"CREATE INDEX `{{prefix}}rules_actions_mapping_order_idx` ON `{{rules_actions_mapping}}` (`order`);" +
"CREATE INDEX `{{prefix}}ip_lists_type_idx` ON `{{ip_lists}}` (`type`);" +
"CREATE INDEX `{{prefix}}ip_lists_ipornet_idx` ON `{{ip_lists}}` (`ipornet`);" +
"CREATE INDEX `{{prefix}}ip_lists_ip_type_idx` ON `{{ip_lists}}` (`ip_type`);" +
"CREATE INDEX `{{prefix}}ip_lists_updated_at_idx` ON `{{ip_lists}}` (`updated_at`);" +
"CREATE INDEX `{{prefix}}ip_lists_deleted_at_idx` ON `{{ip_lists}}` (`deleted_at`);" +
"CREATE INDEX `{{prefix}}ip_lists_first_last_idx` ON `{{ip_lists}}` (`first`, `last`);" +
"INSERT INTO {{schema_version}} (version) VALUES (29);"
"CREATE INDEX `{{prefix}}shared_sessions_timestamp_idx` ON `{{shared_sessions}}` (`timestamp`);"
mysqlV19DownSQL = "DROP TABLE `{{shared_sessions}}` CASCADE;"
)
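// The {{name}} placeholders in the SQL constants above are expanded with the
// configured table names and prefix via strings.ReplaceAll (see
// initializeDatabase and the updateMySQLDatabaseFromXXToYY helpers below)
// before each script is split on ";" and executed statement by statement.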
// MySQLProvider defines the auth provider for MySQL/MariaDB database
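// A minimal sketch (an assumption: the type definition is elided from this
// diff, but every method below goes through p.dbHandle):
type MySQLProvider struct {
	dbHandle *sql.DB
}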
@@ -206,6 +184,8 @@ func init() {
}
func initializeMySQLProvider() error {
var err error
connString, err := getMySQLConnectionString(false)
if err != nil {
return err
@@ -215,27 +195,22 @@ func initializeMySQLProvider() error {
return err
}
dbHandle, err := sql.Open("mysql", connString)
if err != nil {
providerLog(logger.LevelError, "error creating mysql database handler, connection string: %q, error: %v",
redactedConnString, err)
return err
}
providerLog(logger.LevelDebug, "mysql database handle created, connection string: %q, pool size: %v",
redactedConnString, config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
if config.PoolSize > 0 {
dbHandle.SetMaxIdleConns(config.PoolSize)
if err == nil {
providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
redactedConnString, config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
if config.PoolSize > 0 {
dbHandle.SetMaxIdleConns(config.PoolSize)
} else {
dbHandle.SetMaxIdleConns(2)
}
dbHandle.SetConnMaxLifetime(240 * time.Second)
provider = &MySQLProvider{dbHandle: dbHandle}
} else {
dbHandle.SetMaxIdleConns(2)
providerLog(logger.LevelError, "error creating mysql database handler, connection string: %#v, error: %v",
redactedConnString, err)
}
dbHandle.SetConnMaxLifetime(240 * time.Second)
dbHandle.SetConnMaxIdleTime(120 * time.Second)
provider = &MySQLProvider{dbHandle: dbHandle}
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return dbHandle.PingContext(ctx)
return err
}
func getMySQLConnectionString(redactedPwd bool) (string, error) {
var connectionString string
@@ -246,11 +221,40 @@ func getMySQLConnectionString(redactedPwd bool) (string, error) {
}
sslMode := getSSLMode()
if sslMode == "custom" && !redactedPwd {
if err := registerMySQLCustomTLSConfig(); err != nil {
return "", err
tlsConfig := &tls.Config{}
if config.RootCert != "" {
rootCAs, err := x509.SystemCertPool()
if err != nil {
rootCAs = x509.NewCertPool()
}
rootCrt, err := os.ReadFile(config.RootCert)
if err != nil {
return "", fmt.Errorf("unable to load root certificate %#v: %v", config.RootCert, err)
}
if !rootCAs.AppendCertsFromPEM(rootCrt) {
return "", fmt.Errorf("unable to parse root certificate %#v", config.RootCert)
}
tlsConfig.RootCAs = rootCAs
}
if config.ClientCert != "" && config.ClientKey != "" {
clientCert := make([]tls.Certificate, 0, 1)
tlsCert, err := tls.LoadX509KeyPair(config.ClientCert, config.ClientKey)
if err != nil {
return "", fmt.Errorf("unable to load key pair %#v, %#v: %v", config.ClientCert, config.ClientKey, err)
}
clientCert = append(clientCert, tlsCert)
tlsConfig.Certificates = clientCert
}
if config.SSLMode == 2 {
tlsConfig.InsecureSkipVerify = true
}
providerLog(logger.LevelInfo, "registering custom TLS config, root cert %#v, client cert %#v, client key %#v",
config.RootCert, config.ClientCert, config.ClientKey)
if err := mysql.RegisterTLSConfig("custom", tlsConfig); err != nil {
return "", fmt.Errorf("unable to register tls config: %v", err)
}
}
connectionString = fmt.Sprintf("%s:%s@tcp([%s]:%d)/%s?collation=utf8mb4_unicode_ci&interpolateParams=true&timeout=10s&parseTime=true&clientFoundRows=true&tls=%s&writeTimeout=60s&readTimeout=60s",
connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true&tls=%v&writeTimeout=60s&readTimeout=60s",
config.Username, password, config.Host, config.Port, config.Name, sslMode)
} else {
connectionString = config.ConnectionString
@@ -258,45 +262,6 @@ func getMySQLConnectionString(redactedPwd bool) (string, error) {
return connectionString, nil
}
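// Hedged example (illustrative values only, not taken from the source): with
// the 2.3.x format string above and sslMode set to "custom", the generated
// DSN looks like
//   sftpgo:secret@tcp([127.0.0.1]:3306)/sftpgo?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true&tls=custom&writeTimeout=60s&readTimeout=60s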
func registerMySQLCustomTLSConfig() error {
tlsConfig := &tls.Config{}
if config.RootCert != "" {
rootCAs, err := x509.SystemCertPool()
if err != nil {
rootCAs = x509.NewCertPool()
}
rootCrt, err := os.ReadFile(config.RootCert)
if err != nil {
return fmt.Errorf("unable to load root certificate %q: %v", config.RootCert, err)
}
if !rootCAs.AppendCertsFromPEM(rootCrt) {
return fmt.Errorf("unable to parse root certificate %q", config.RootCert)
}
tlsConfig.RootCAs = rootCAs
}
if config.ClientCert != "" && config.ClientKey != "" {
clientCert := make([]tls.Certificate, 0, 1)
tlsCert, err := tls.LoadX509KeyPair(config.ClientCert, config.ClientKey)
if err != nil {
return fmt.Errorf("unable to load key pair %q, %q: %v", config.ClientCert, config.ClientKey, err)
}
clientCert = append(clientCert, tlsCert)
tlsConfig.Certificates = clientCert
}
if config.SSLMode == 2 || config.SSLMode == 3 {
tlsConfig.InsecureSkipVerify = true
}
if !filepath.IsAbs(config.Host) && !config.DisableSNI {
tlsConfig.ServerName = config.Host
}
providerLog(logger.LevelInfo, "registering custom TLS config, root cert %q, client cert %q, client key %q, disable SNI? %v",
config.RootCert, config.ClientCert, config.ClientKey, config.DisableSNI)
if err := mysql.RegisterTLSConfig("custom", tlsConfig); err != nil {
return fmt.Errorf("unable to register tls config: %v", err)
}
return nil
}
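// Hedged note on ordering (go-sql-driver/mysql behavior): a named TLS config
// registered with mysql.RegisterTLSConfig must be in place before the driver
// parses a DSN that references it via tls=custom, i.e. before the first
// connection is opened; that is why this helper runs while the connection
// string is being built.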
func (p *MySQLProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
@@ -325,14 +290,6 @@ func (p *MySQLProvider) getUsedQuota(username string) (int, int64, int64, int64,
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p *MySQLProvider) getAdminSignature(username string) (string, error) {
return sqlCommonGetAdminSignature(username, p.dbHandle)
}
func (p *MySQLProvider) getUserSignature(username string) (string, error) {
return sqlCommonGetUserSignature(username, p.dbHandle)
}
func (p *MySQLProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
@@ -345,20 +302,20 @@ func (p *MySQLProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *MySQLProvider) userExists(username, role string) (User, error) {
return sqlCommonGetUserByUsername(username, role, p.dbHandle)
func (p *MySQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
func (p *MySQLProvider) addUser(user *User) error {
return p.normalizeError(sqlCommonAddUser(user, p.dbHandle), fieldUsername)
return sqlCommonAddUser(user, p.dbHandle)
}
func (p *MySQLProvider) updateUser(user *User) error {
return p.normalizeError(sqlCommonUpdateUser(user, p.dbHandle), -1)
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p *MySQLProvider) deleteUser(user User, softDelete bool) error {
return sqlCommonDeleteUser(user, softDelete, p.dbHandle)
func (p *MySQLProvider) deleteUser(user User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p *MySQLProvider) updateUserPassword(username, password string) error {
@@ -373,8 +330,8 @@ func (p *MySQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p *MySQLProvider) getUsers(limit int, offset int, order, role string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, role, p.dbHandle)
func (p *MySQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) getUsersForQuotaCheck(toFetch map[string]bool) ([]User, error) {
@@ -396,7 +353,7 @@ func (p *MySQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, err
}
func (p *MySQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
return p.normalizeError(sqlCommonAddFolder(folder, p.dbHandle), fieldName)
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p *MySQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
@@ -432,7 +389,7 @@ func (p *MySQLProvider) groupExists(name string) (Group, error) {
}
func (p *MySQLProvider) addGroup(group *Group) error {
return p.normalizeError(sqlCommonAddGroup(group, p.dbHandle), fieldName)
return sqlCommonAddGroup(group, p.dbHandle)
}
func (p *MySQLProvider) updateGroup(group *Group) error {
@@ -452,11 +409,11 @@ func (p *MySQLProvider) adminExists(username string) (Admin, error) {
}
func (p *MySQLProvider) addAdmin(admin *Admin) error {
return p.normalizeError(sqlCommonAddAdmin(admin, p.dbHandle), fieldUsername)
return sqlCommonAddAdmin(admin, p.dbHandle)
}
func (p *MySQLProvider) updateAdmin(admin *Admin) error {
return p.normalizeError(sqlCommonUpdateAdmin(admin, p.dbHandle), -1)
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *MySQLProvider) deleteAdmin(admin Admin) error {
@@ -480,11 +437,11 @@ func (p *MySQLProvider) apiKeyExists(keyID string) (APIKey, error) {
}
func (p *MySQLProvider) addAPIKey(apiKey *APIKey) error {
return p.normalizeError(sqlCommonAddAPIKey(apiKey, p.dbHandle), -1)
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) updateAPIKey(apiKey *APIKey) error {
return p.normalizeError(sqlCommonUpdateAPIKey(apiKey, p.dbHandle), -1)
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) deleteAPIKey(apiKey APIKey) error {
@@ -508,11 +465,11 @@ func (p *MySQLProvider) shareExists(shareID, username string) (Share, error) {
}
func (p *MySQLProvider) addShare(share *Share) error {
return p.normalizeError(sqlCommonAddShare(share, p.dbHandle), fieldName)
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *MySQLProvider) updateShare(share *Share) error {
return p.normalizeError(sqlCommonUpdateShare(share, p.dbHandle), -1)
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *MySQLProvider) deleteShare(share Share) error {
@@ -599,170 +556,6 @@ func (p *MySQLProvider) cleanupSharedSessions(sessionType SessionType, before in
return sqlCommonCleanupSessions(sessionType, before, p.dbHandle)
}
func (p *MySQLProvider) getEventActions(limit, offset int, order string, minimal bool) ([]BaseEventAction, error) {
return sqlCommonGetEventActions(limit, offset, order, minimal, p.dbHandle)
}
func (p *MySQLProvider) dumpEventActions() ([]BaseEventAction, error) {
return sqlCommonDumpEventActions(p.dbHandle)
}
func (p *MySQLProvider) eventActionExists(name string) (BaseEventAction, error) {
return sqlCommonGetEventActionByName(name, p.dbHandle)
}
func (p *MySQLProvider) addEventAction(action *BaseEventAction) error {
return p.normalizeError(sqlCommonAddEventAction(action, p.dbHandle), fieldName)
}
func (p *MySQLProvider) updateEventAction(action *BaseEventAction) error {
return sqlCommonUpdateEventAction(action, p.dbHandle)
}
func (p *MySQLProvider) deleteEventAction(action BaseEventAction) error {
return sqlCommonDeleteEventAction(action, p.dbHandle)
}
func (p *MySQLProvider) getEventRules(limit, offset int, order string) ([]EventRule, error) {
return sqlCommonGetEventRules(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) dumpEventRules() ([]EventRule, error) {
return sqlCommonDumpEventRules(p.dbHandle)
}
func (p *MySQLProvider) getRecentlyUpdatedRules(after int64) ([]EventRule, error) {
return sqlCommonGetRecentlyUpdatedRules(after, p.dbHandle)
}
func (p *MySQLProvider) eventRuleExists(name string) (EventRule, error) {
return sqlCommonGetEventRuleByName(name, p.dbHandle)
}
func (p *MySQLProvider) addEventRule(rule *EventRule) error {
return p.normalizeError(sqlCommonAddEventRule(rule, p.dbHandle), fieldName)
}
func (p *MySQLProvider) updateEventRule(rule *EventRule) error {
return sqlCommonUpdateEventRule(rule, p.dbHandle)
}
func (p *MySQLProvider) deleteEventRule(rule EventRule, softDelete bool) error {
return sqlCommonDeleteEventRule(rule, softDelete, p.dbHandle)
}
func (p *MySQLProvider) getTaskByName(name string) (Task, error) {
return sqlCommonGetTaskByName(name, p.dbHandle)
}
func (p *MySQLProvider) addTask(name string) error {
return sqlCommonAddTask(name, p.dbHandle)
}
func (p *MySQLProvider) updateTask(name string, version int64) error {
return sqlCommonUpdateTask(name, version, p.dbHandle)
}
func (p *MySQLProvider) updateTaskTimestamp(name string) error {
return sqlCommonUpdateTaskTimestamp(name, p.dbHandle)
}
func (p *MySQLProvider) addNode() error {
return sqlCommonAddNode(p.dbHandle)
}
func (p *MySQLProvider) getNodeByName(name string) (Node, error) {
return sqlCommonGetNodeByName(name, p.dbHandle)
}
func (p *MySQLProvider) getNodes() ([]Node, error) {
return sqlCommonGetNodes(p.dbHandle)
}
func (p *MySQLProvider) updateNodeTimestamp() error {
return sqlCommonUpdateNodeTimestamp(p.dbHandle)
}
func (p *MySQLProvider) cleanupNodes() error {
return sqlCommonCleanupNodes(p.dbHandle)
}
func (p *MySQLProvider) roleExists(name string) (Role, error) {
return sqlCommonGetRoleByName(name, p.dbHandle)
}
func (p *MySQLProvider) addRole(role *Role) error {
return p.normalizeError(sqlCommonAddRole(role, p.dbHandle), fieldName)
}
func (p *MySQLProvider) updateRole(role *Role) error {
return sqlCommonUpdateRole(role, p.dbHandle)
}
func (p *MySQLProvider) deleteRole(role Role) error {
return sqlCommonDeleteRole(role, p.dbHandle)
}
func (p *MySQLProvider) getRoles(limit int, offset int, order string, minimal bool) ([]Role, error) {
return sqlCommonGetRoles(limit, offset, order, minimal, p.dbHandle)
}
func (p *MySQLProvider) dumpRoles() ([]Role, error) {
return sqlCommonDumpRoles(p.dbHandle)
}
func (p *MySQLProvider) ipListEntryExists(ipOrNet string, listType IPListType) (IPListEntry, error) {
return sqlCommonGetIPListEntry(ipOrNet, listType, p.dbHandle)
}
func (p *MySQLProvider) addIPListEntry(entry *IPListEntry) error {
return p.normalizeError(sqlCommonAddIPListEntry(entry, p.dbHandle), fieldIPNet)
}
func (p *MySQLProvider) updateIPListEntry(entry *IPListEntry) error {
return sqlCommonUpdateIPListEntry(entry, p.dbHandle)
}
func (p *MySQLProvider) deleteIPListEntry(entry IPListEntry, softDelete bool) error {
return sqlCommonDeleteIPListEntry(entry, softDelete, p.dbHandle)
}
func (p *MySQLProvider) getIPListEntries(listType IPListType, filter, from, order string, limit int) ([]IPListEntry, error) {
return sqlCommonGetIPListEntries(listType, filter, from, order, limit, p.dbHandle)
}
func (p *MySQLProvider) getRecentlyUpdatedIPListEntries(after int64) ([]IPListEntry, error) {
return sqlCommonGetRecentlyUpdatedIPListEntries(after, p.dbHandle)
}
func (p *MySQLProvider) dumpIPListEntries() ([]IPListEntry, error) {
return sqlCommonDumpIPListEntries(p.dbHandle)
}
func (p *MySQLProvider) countIPListEntries(listType IPListType) (int64, error) {
return sqlCommonCountIPListEntries(listType, p.dbHandle)
}
func (p *MySQLProvider) getListEntriesForIP(ip string, listType IPListType) ([]IPListEntry, error) {
return sqlCommonGetListEntriesForIP(ip, listType, p.dbHandle)
}
func (p *MySQLProvider) getConfigs() (Configs, error) {
return sqlCommonGetConfigs(p.dbHandle)
}
func (p *MySQLProvider) setConfigs(configs *Configs) error {
return sqlCommonSetConfigs(configs, p.dbHandle)
}
func (p *MySQLProvider) setFirstDownloadTimestamp(username string) error {
return sqlCommonSetFirstDownloadTimestamp(username, p.dbHandle)
}
func (p *MySQLProvider) setFirstUploadTimestamp(username string) error {
return sqlCommonSetFirstUploadTimestamp(username, p.dbHandle)
}
func (p *MySQLProvider) close() error {
return p.dbHandle.Close()
}
@@ -780,14 +573,23 @@ func (p *MySQLProvider) initializeDatabase() error {
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
logger.InfoToConsole("creating initial database schema, version 29")
providerLog(logger.LevelInfo, "creating initial database schema, version 29")
initialSQL := sqlReplaceAll(mysqlInitialSQL)
logger.InfoToConsole("creating initial database schema, version 15")
providerLog(logger.LevelInfo, "creating initial database schema, version 15")
initialSQL := strings.ReplaceAll(mysqlInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{api_keys}}", sqlTableAPIKeys)
initialSQL = strings.ReplaceAll(initialSQL, "{{shares}}", sqlTableShares)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_events}}", sqlTableDefenderEvents)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_hosts}}", sqlTableDefenderHosts)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 29, true)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 15, true)
}
func (p *MySQLProvider) migrateDatabase() error {
func (p *MySQLProvider) migrateDatabase() error { //nolint:dupl
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
@@ -795,22 +597,30 @@ func (p *MySQLProvider) migrateDatabase() error {
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %d", version)
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
case version < 29:
err = errSchemaVersionTooOld(version)
case version < 15:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 15:
return updateMySQLDatabaseFromV15(p.dbHandle)
case version == 16:
return updateMySQLDatabaseFromV16(p.dbHandle)
case version == 17:
return updateMySQLDatabaseFromV17(p.dbHandle)
case version == 18:
return updateMySQLDatabaseFromV18(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelError, "database schema version %d is newer than the supported one: %d", version,
providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database schema version %d is newer than the supported one: %d", version,
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("database schema version not handled: %d", version)
return fmt.Errorf("database version not handled: %v", version)
}
}
@@ -824,8 +634,16 @@ func (p *MySQLProvider) revertDatabase(targetVersion int) error {
}
switch dbVersion.Version {
case 16:
return downgradeMySQLDatabaseFromV16(p.dbHandle)
case 17:
return downgradeMySQLDatabaseFromV17(p.dbHandle)
case 18:
return downgradeMySQLDatabaseFromV18(p.dbHandle)
case 19:
return downgradeMySQLDatabaseFromV19(p.dbHandle)
default:
return fmt.Errorf("database schema version not handled: %d", dbVersion.Version)
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
@@ -834,30 +652,127 @@ func (p *MySQLProvider) resetDatabase() error {
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(sql, ";"), 0, false)
}
func (p *MySQLProvider) normalizeError(err error, fieldType int) error {
if err == nil {
return nil
func updateMySQLDatabaseFromV15(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom15To16(dbHandle); err != nil {
return err
}
var mysqlErr *mysql.MySQLError
if errors.As(err, &mysqlErr) {
switch mysqlErr.Number {
case 1062:
var message string
switch fieldType {
case fieldUsername:
message = util.I18nErrorDuplicatedUsername
case fieldIPNet:
message = util.I18nErrorDuplicatedIPNet
default:
message = util.I18nErrorDuplicatedName
}
return util.NewI18nError(
fmt.Errorf("%w: %s", ErrDuplicatedKey, err.Error()),
message,
)
case 1452:
return fmt.Errorf("%w: %s", ErrForeignKeyViolated, err.Error())
}
}
return err
return updateMySQLDatabaseFromV16(dbHandle)
}
func updateMySQLDatabaseFromV16(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom16To17(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV17(dbHandle)
}
func updateMySQLDatabaseFromV17(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom17To18(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV18(dbHandle)
}
func updateMySQLDatabaseFromV18(dbHandle *sql.DB) error {
return updateMySQLDatabaseFrom18To19(dbHandle)
}
func downgradeMySQLDatabaseFromV16(dbHandle *sql.DB) error {
return downgradeMySQLDatabaseFrom16To15(dbHandle)
}
func downgradeMySQLDatabaseFromV17(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom17To16(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV16(dbHandle)
}
func downgradeMySQLDatabaseFromV18(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom18To17(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV17(dbHandle)
}
func downgradeMySQLDatabaseFromV19(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom19To18(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV18(dbHandle)
}
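// The helpers above form a chain: each updateMySQLDatabaseFromVn applies the
// n -> n+1 migration script and then delegates to
// updateMySQLDatabaseFromV(n+1), so a database at any supported version is
// walked step by step up to the latest schema; the downgrade helpers mirror
// the same pattern in reverse.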
func updateMySQLDatabaseFrom15To16(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 15 -> 16")
providerLog(logger.LevelInfo, "updating database version: 15 -> 16")
sql := strings.ReplaceAll(mysqlV16SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 16, true)
}
func updateMySQLDatabaseFrom16To17(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 16 -> 17")
providerLog(logger.LevelInfo, "updating database version: 16 -> 17")
sql := strings.ReplaceAll(mysqlV17SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 17, true)
}
func updateMySQLDatabaseFrom17To18(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 17 -> 18")
providerLog(logger.LevelInfo, "updating database version: 17 -> 18")
if err := importGCSCredentials(); err != nil {
return err
}
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 18, true)
}
func updateMySQLDatabaseFrom18To19(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 18 -> 19")
providerLog(logger.LevelInfo, "updating database version: 18 -> 19")
sql := strings.ReplaceAll(mysqlV19SQL, "{{shared_sessions}}", sqlTableSharedSessions)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 19, true)
}
func downgradeMySQLDatabaseFrom16To15(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 16 -> 15")
providerLog(logger.LevelInfo, "downgrading database version: 16 -> 15")
sql := strings.ReplaceAll(mysqlV16DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 15, false)
}
func downgradeMySQLDatabaseFrom17To16(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 17 -> 16")
providerLog(logger.LevelInfo, "downgrading database version: 17 -> 16")
sql := strings.ReplaceAll(mysqlV17DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 16, false)
}
func downgradeMySQLDatabaseFrom18To17(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 18 -> 17")
providerLog(logger.LevelInfo, "downgrading database version: 18 -> 17")
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 17, false)
}
func downgradeMySQLDatabaseFrom19To18(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 19 -> 18")
providerLog(logger.LevelInfo, "downgrading database version: 19 -> 18")
sql := strings.ReplaceAll(mysqlV19DownSQL, "{{shared_sessions}}", sqlTableSharedSessions)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 18, false)
}


@@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build nomysql
// +build nomysql
@@ -20,7 +20,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {


@@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !nopgsql
// +build !nopgsql
@@ -23,20 +23,15 @@ import (
"database/sql"
"errors"
"fmt"
"net"
"slices"
"strconv"
"strings"
"time"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgconn"
"github.com/jackc/pgx/v5/stdlib"
// we import lib/pq here to be able to disable PostgreSQL support using a build tag
_ "github.com/lib/pq"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
@@ -44,7 +39,6 @@ const (
DROP TABLE IF EXISTS "{{folders_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{users_folders_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{users_groups_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{admins_groups_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{groups_folders_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{admins}}" CASCADE;
DROP TABLE IF EXISTS "{{folders}}" CASCADE;
@@ -55,61 +49,37 @@ DROP TABLE IF EXISTS "{{defender_hosts}}" CASCADE;
DROP TABLE IF EXISTS "{{defender_hosts}}" CASCADE;
DROP TABLE IF EXISTS "{{active_transfers}}" CASCADE;
DROP TABLE IF EXISTS "{{shared_sessions}}" CASCADE;
DROP TABLE IF EXISTS "{{rules_actions_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{events_actions}}" CASCADE;
DROP TABLE IF EXISTS "{{events_rules}}" CASCADE;
DROP TABLE IF EXISTS "{{tasks}}" CASCADE;
DROP TABLE IF EXISTS "{{nodes}}" CASCADE;
DROP TABLE IF EXISTS "{{roles}}" CASCADE;
DROP TABLE IF EXISTS "{{ip_lists}}" CASCADE;
DROP TABLE IF EXISTS "{{configs}}" CASCADE;
DROP TABLE IF EXISTS "{{schema_version}}" CASCADE;
`
pgsqlInitial = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "username" varchar(255) NOT NULL UNIQUE,
pgsqlInitial = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL, "last_login" bigint NOT NULL,
"role_id" integer NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{active_transfers}}" ("id" bigint NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "connection_id" varchar(100) NOT NULL,
"transfer_id" bigint NOT NULL, "transfer_type" integer NOT NULL, "username" varchar(255) NOT NULL,
"folder_name" varchar(255) NULL, "ip" varchar(50) NOT NULL, "truncated_size" bigint NOT NULL,
"current_ul_size" bigint NOT NULL, "current_dl_size" bigint NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_hosts}}" ("id" bigint NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "ip" varchar(50) NOT NULL UNIQUE,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_hosts}}" ("id" bigserial NOT NULL PRIMARY KEY, "ip" varchar(50) NOT NULL UNIQUE,
"ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_events}}" ("id" bigint NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "date_time" bigint NOT NULL, "score" integer NOT NULL,
CREATE TABLE "{{defender_events}}" ("id" bigserial NOT NULL PRIMARY KEY, "date_time" bigint NOT NULL, "score" integer NOT NULL,
"host_id" bigint NOT NULL);
ALTER TABLE "{{defender_events}}" ADD CONSTRAINT "{{prefix}}defender_events_host_id_fk_defender_hosts_id" FOREIGN KEY
("host_id") REFERENCES "{{defender_hosts}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "name" varchar(255) NOT NULL UNIQUE, "description" varchar(512) NULL,
CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE, "description" varchar(512) NULL,
"path" text NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"filesystem" text NULL);
CREATE TABLE "{{groups}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "user_settings" text NULL);
CREATE TABLE "{{shared_sessions}}" ("key" varchar(128) NOT NULL PRIMARY KEY,
"data" text NOT NULL, "type" integer NOT NULL, "timestamp" bigint NOT NULL);
CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "username" varchar(255) NOT NULL UNIQUE, "status" integer NOT NULL,
CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE, "status" integer NOT NULL,
"expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL, "public_keys" text NULL,
"home_dir" text NOT NULL, "uid" bigint NOT NULL, "gid" bigint NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "email" varchar(255) NULL,
"upload_data_transfer" integer NOT NULL, "download_data_transfer" integer NOT NULL, "total_data_transfer" integer NOT NULL,
"used_upload_data_transfer" bigint NOT NULL, "used_download_data_transfer" bigint NOT NULL, "deleted_at" bigint NOT NULL,
"first_download" bigint NOT NULL, "first_upload" bigint NOT NULL, "last_password_change" bigint NOT NULL, "role_id" integer NULL);
CREATE TABLE "{{groups_folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "group_id" integer NOT NULL,
"folder_id" integer NOT NULL, "virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL);
CREATE TABLE "{{users_groups_mapping}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "user_id" integer NOT NULL,
"group_id" integer NOT NULL, "group_type" integer NOT NULL);
CREATE TABLE "{{users_folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "virtual_path" text NOT NULL,
"additional_info" text NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "email" varchar(255) NULL);
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" text NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{users_folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_user_folder_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{users_folders_mapping}}" ADD CONSTRAINT "{{prefix}}users_folders_mapping_folder_id_fk_folders_id"
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_folder_id_fk_folders_id"
FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
ALTER TABLE "{{users_folders_mapping}}" ADD CONSTRAINT "{{prefix}}users_folders_mapping_user_id_fk_users_id"
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_user_id_fk_users_id"
FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE TABLE "{{shares}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
CREATE TABLE "{{shares}}" ("id" serial NOT NULL PRIMARY KEY,
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL,
@@ -117,7 +87,7 @@ CREATE TABLE "{{shares}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS
"user_id" integer NOT NULL);
ALTER TABLE "{{shares}}" ADD CONSTRAINT "{{prefix}}shares_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE TABLE "{{api_keys}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "name" varchar(255) NOT NULL,
CREATE TABLE "{{api_keys}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL,"expires_at" bigint NOT NULL,
"description" text NULL, "admin_id" integer NULL, "user_id" integer NULL);
@@ -125,6 +95,57 @@ ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_admin_id_fk_admins
REFERENCES "{{admins}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
INSERT INTO {{schema_version}} (version) VALUES (15);
`
pgsqlV16SQL = `ALTER TABLE "{{users}}" ADD COLUMN "download_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "download_data_transfer" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "total_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "total_data_transfer" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "upload_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "upload_data_transfer" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "used_download_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "used_download_data_transfer" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "used_upload_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "used_upload_data_transfer" DROP DEFAULT;
CREATE TABLE "{{active_transfers}}" ("id" bigserial NOT NULL PRIMARY KEY, "connection_id" varchar(100) NOT NULL,
"transfer_id" bigint NOT NULL, "transfer_type" integer NOT NULL, "username" varchar(255) NOT NULL,
"folder_name" varchar(255) NULL, "ip" varchar(50) NOT NULL, "truncated_size" bigint NOT NULL,
"current_ul_size" bigint NOT NULL, "current_dl_size" bigint NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL);
CREATE INDEX "{{prefix}}active_transfers_connection_id_idx" ON "{{active_transfers}}" ("connection_id");
CREATE INDEX "{{prefix}}active_transfers_transfer_id_idx" ON "{{active_transfers}}" ("transfer_id");
CREATE INDEX "{{prefix}}active_transfers_updated_at_idx" ON "{{active_transfers}}" ("updated_at");
`
pgsqlV16DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "used_upload_data_transfer" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "used_download_data_transfer" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "upload_data_transfer" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "total_data_transfer" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "download_data_transfer" CASCADE;
DROP TABLE "{{active_transfers}}" CASCADE;
`
pgsqlV17SQL = `CREATE TABLE "{{groups}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "user_settings" text NULL);
CREATE TABLE "{{groups_folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "group_id" integer NOT NULL,
"folder_id" integer NOT NULL, "virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL);
CREATE TABLE "{{users_groups_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "user_id" integer NOT NULL,
"group_id" integer NOT NULL, "group_type" integer NOT NULL);
DROP INDEX "{{prefix}}folders_mapping_folder_id_idx";
DROP INDEX "{{prefix}}folders_mapping_user_id_idx";
ALTER TABLE "{{folders_mapping}}" DROP CONSTRAINT "{{prefix}}unique_mapping";
ALTER TABLE "{{folders_mapping}}" RENAME TO "{{users_folders_mapping}}";
ALTER TABLE "{{users_folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_user_folder_mapping" UNIQUE ("user_id", "folder_id");
CREATE INDEX "{{prefix}}users_folders_mapping_folder_id_idx" ON "{{users_folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}users_folders_mapping_user_id_idx" ON "{{users_folders_mapping}}" ("user_id");
ALTER TABLE "{{users_groups_mapping}}" ADD CONSTRAINT "{{prefix}}unique_user_group_mapping" UNIQUE ("user_id", "group_id");
ALTER TABLE "{{groups_folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_group_folder_mapping" UNIQUE ("group_id", "folder_id");
CREATE INDEX "{{prefix}}users_groups_mapping_group_id_idx" ON "{{users_groups_mapping}}" ("group_id");
@@ -139,81 +160,24 @@ FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE
CREATE INDEX "{{prefix}}groups_folders_mapping_group_id_idx" ON "{{groups_folders_mapping}}" ("group_id");
ALTER TABLE "{{groups_folders_mapping}}" ADD CONSTRAINT "{{prefix}}groups_folders_mapping_group_id_fk_groups_id"
FOREIGN KEY ("group_id") REFERENCES "{{groups}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE TABLE "{{events_rules}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "name" varchar(255) NOT NULL UNIQUE,
"status" integer NOT NULL, "description" varchar(512) NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"trigger" integer NOT NULL, "conditions" text NOT NULL, "deleted_at" bigint NOT NULL);
CREATE TABLE "{{events_actions}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "type" integer NOT NULL, "options" text NOT NULL);
CREATE TABLE "{{rules_actions_mapping}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "rule_id" integer NOT NULL,
"action_id" integer NOT NULL, "order" integer NOT NULL, "options" text NOT NULL);
CREATE TABLE "{{tasks}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "name" varchar(255) NOT NULL UNIQUE, "updated_at" bigint NOT NULL,
"version" bigint NOT NULL);
ALTER TABLE "{{rules_actions_mapping}}" ADD CONSTRAINT "{{prefix}}unique_rule_action_mapping" UNIQUE ("rule_id", "action_id");
ALTER TABLE "{{rules_actions_mapping}}" ADD CONSTRAINT "{{prefix}}rules_actions_mapping_rule_id_fk_events_rules_id"
FOREIGN KEY ("rule_id") REFERENCES "{{events_rules}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
ALTER TABLE "{{rules_actions_mapping}}" ADD CONSTRAINT "{{prefix}}rules_actions_mapping_action_id_fk_events_targets_id"
FOREIGN KEY ("action_id") REFERENCES "{{events_actions}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION;
CREATE TABLE "{{admins_groups_mapping}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
"admin_id" integer NOT NULL, "group_id" integer NOT NULL, "options" text NOT NULL);
ALTER TABLE "{{admins_groups_mapping}}" ADD CONSTRAINT "{{prefix}}unique_admin_group_mapping" UNIQUE ("admin_id", "group_id");
ALTER TABLE "{{admins_groups_mapping}}" ADD CONSTRAINT "{{prefix}}admins_groups_mapping_admin_id_fk_admins_id"
FOREIGN KEY ("admin_id") REFERENCES "{{admins}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
ALTER TABLE "{{admins_groups_mapping}}" ADD CONSTRAINT "{{prefix}}admins_groups_mapping_group_id_fk_groups_id"
FOREIGN KEY ("group_id") REFERENCES "{{groups}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE TABLE "{{nodes}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "name" varchar(255) NOT NULL UNIQUE,
"data" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{roles}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL);
ALTER TABLE "{{admins}}" ADD CONSTRAINT "{{prefix}}admins_role_id_fk_roles_id" FOREIGN KEY ("role_id")
REFERENCES "{{roles}}" ("id") ON DELETE NO ACTION;
ALTER TABLE "{{users}}" ADD CONSTRAINT "{{prefix}}users_role_id_fk_roles_id" FOREIGN KEY ("role_id")
REFERENCES "{{roles}}" ("id") ON DELETE SET NULL;
CREATE TABLE "{{ip_lists}}" ("id" bigint NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "type" integer NOT NULL,
"ipornet" varchar(50) NOT NULL, "mode" integer NOT NULL, "description" varchar(512) NULL, "first" inet NOT NULL,
"last" inet NOT NULL, "ip_type" integer NOT NULL, "protocols" integer NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL, "deleted_at" bigint NOT NULL);
ALTER TABLE "{{ip_lists}}" ADD CONSTRAINT "{{prefix}}unique_ipornet_type_mapping" UNIQUE ("type", "ipornet");
CREATE TABLE "{{configs}}" ("id" integer NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "configs" text NOT NULL);
INSERT INTO {{configs}} (configs) VALUES ('{}');
CREATE INDEX "{{prefix}}users_folders_mapping_folder_id_idx" ON "{{users_folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}users_folders_mapping_user_id_idx" ON "{{users_folders_mapping}}" ("user_id");
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
CREATE INDEX "{{prefix}}users_deleted_at_idx" ON "{{users}}" ("deleted_at");
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
CREATE INDEX "{{prefix}}active_transfers_connection_id_idx" ON "{{active_transfers}}" ("connection_id");
CREATE INDEX "{{prefix}}active_transfers_transfer_id_idx" ON "{{active_transfers}}" ("transfer_id");
CREATE INDEX "{{prefix}}active_transfers_updated_at_idx" ON "{{active_transfers}}" ("updated_at");
CREATE INDEX "{{prefix}}shared_sessions_type_idx" ON "{{shared_sessions}}" ("type");
CREATE INDEX "{{prefix}}shared_sessions_timestamp_idx" ON "{{shared_sessions}}" ("timestamp");
CREATE INDEX "{{prefix}}events_rules_updated_at_idx" ON "{{events_rules}}" ("updated_at");
CREATE INDEX "{{prefix}}events_rules_deleted_at_idx" ON "{{events_rules}}" ("deleted_at");
CREATE INDEX "{{prefix}}events_rules_trigger_idx" ON "{{events_rules}}" ("trigger");
CREATE INDEX "{{prefix}}rules_actions_mapping_rule_id_idx" ON "{{rules_actions_mapping}}" ("rule_id");
CREATE INDEX "{{prefix}}rules_actions_mapping_action_id_idx" ON "{{rules_actions_mapping}}" ("action_id");
CREATE INDEX "{{prefix}}rules_actions_mapping_order_idx" ON "{{rules_actions_mapping}}" ("order");
CREATE INDEX "{{prefix}}admins_groups_mapping_admin_id_idx" ON "{{admins_groups_mapping}}" ("admin_id");
CREATE INDEX "{{prefix}}admins_groups_mapping_group_id_idx" ON "{{admins_groups_mapping}}" ("group_id");
CREATE INDEX "{{prefix}}admins_role_id_idx" ON "{{admins}}" ("role_id");
CREATE INDEX "{{prefix}}users_role_id_idx" ON "{{users}}" ("role_id");
CREATE INDEX "{{prefix}}ip_lists_type_idx" ON "{{ip_lists}}" ("type");
CREATE INDEX "{{prefix}}ip_lists_ipornet_idx" ON "{{ip_lists}}" ("ipornet");
CREATE INDEX "{{prefix}}ip_lists_updated_at_idx" ON "{{ip_lists}}" ("updated_at");
CREATE INDEX "{{prefix}}ip_lists_deleted_at_idx" ON "{{ip_lists}}" ("deleted_at");
CREATE INDEX "{{prefix}}ip_lists_first_last_idx" ON "{{ip_lists}}" ("first", "last");
INSERT INTO {{schema_version}} (version) VALUES (29);
CREATE INDEX "{{prefix}}groups_updated_at_idx" ON "{{groups}}" ("updated_at");
`
// not supported in CockroachDB
ipListsLikeIndex = `CREATE INDEX "{{prefix}}ip_lists_ipornet_like_idx" ON "{{ip_lists}}" ("ipornet" varchar_pattern_ops);`
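// (varchar_pattern_ops lets PostgreSQL use this index for LIKE 'prefix%'
// lookups regardless of the database collation; CockroachDB does not
// implement that operator class, hence the separate constant noted above.)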
)
var (
pgSQLTargetSessionAttrs = []string{"any", "read-write", "read-only", "primary", "standby", "prefer-standby"}
pgsqlV17DownSQL = `DROP TABLE "{{users_groups_mapping}}" CASCADE;
DROP TABLE "{{groups_folders_mapping}}" CASCADE;
DROP TABLE "{{groups}}" CASCADE;
DROP INDEX "{{prefix}}users_folders_mapping_folder_id_idx";
DROP INDEX "{{prefix}}users_folders_mapping_user_id_idx";
ALTER TABLE "{{users_folders_mapping}}" DROP CONSTRAINT "{{prefix}}unique_user_folder_mapping";
ALTER TABLE "{{users_folders_mapping}}" RENAME TO "{{folders_mapping}}";
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id");
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
pgsqlV19SQL = `CREATE TABLE "{{shared_sessions}}" ("key" varchar(128) NOT NULL PRIMARY KEY,
"data" text NOT NULL, "type" integer NOT NULL, "timestamp" bigint NOT NULL);
CREATE INDEX "{{prefix}}shared_sessions_type_idx" ON "{{shared_sessions}}" ("type");
CREATE INDEX "{{prefix}}shared_sessions_timestamp_idx" ON "{{shared_sessions}}" ("timestamp");`
pgsqlV19DownSQL = `DROP TABLE "{{shared_sessions}}" CASCADE;`
)
// PGSQLProvider defines the auth provider for PostgreSQL database
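// A minimal sketch (an assumption mirroring the MySQL provider: the type
// definition is elided from this diff, but initializePGSQLProvider assigns
// &PGSQLProvider{dbHandle: dbHandle}):
type PGSQLProvider struct {
	dbHandle *sql.DB
}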
@@ -226,65 +190,24 @@ func init() {
}
func initializePGSQLProvider() error {
var dbHandle *sql.DB
if config.TargetSessionAttrs == "any" {
pgxConfig, err := pgx.ParseConfig(getPGSQLConnectionString(false))
if err != nil {
providerLog(logger.LevelError, "error parsing postgres configuration, connection string: %q, error: %v",
getPGSQLConnectionString(true), err)
return err
}
dbHandle = stdlib.OpenDB(*pgxConfig, stdlib.OptionBeforeConnect(stdlib.RandomizeHostOrderFunc))
} else {
var err error
dbHandle, err = sql.Open("pgx", getPGSQLConnectionString(false))
if err != nil {
providerLog(logger.LevelError, "error creating postgres database handler, connection string: %q, error: %v",
getPGSQLConnectionString(true), err)
return err
}
}
providerLog(logger.LevelDebug, "postgres database handle created, connection string: %q, pool size: %d",
getPGSQLConnectionString(true), config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
if config.PoolSize > 0 {
dbHandle.SetMaxIdleConns(config.PoolSize)
} else {
dbHandle.SetMaxIdleConns(2)
}
dbHandle.SetConnMaxLifetime(240 * time.Second)
dbHandle.SetConnMaxIdleTime(120 * time.Second)
provider = &PGSQLProvider{dbHandle: dbHandle}
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return dbHandle.PingContext(ctx)
var err error
dbHandle, err := sql.Open("postgres", getPGSQLConnectionString(false))
if err == nil {
	providerLog(logger.LevelDebug, "postgres database handle created, connection string: %#v, pool size: %v",
		getPGSQLConnectionString(true), config.PoolSize)
	dbHandle.SetMaxOpenConns(config.PoolSize)
	if config.PoolSize > 0 {
		dbHandle.SetMaxIdleConns(config.PoolSize)
	} else {
		dbHandle.SetMaxIdleConns(2)
	}
	dbHandle.SetConnMaxLifetime(240 * time.Second)
	provider = &PGSQLProvider{dbHandle: dbHandle}
} else {
	providerLog(logger.LevelError, "error creating postgres database handler, connection string: %#v, error: %v",
		getPGSQLConnectionString(true), err)
}
return err
}
func getPGSQLHostsAndPorts(configHost string, configPort int) (string, string) {
	var hosts, ports []string
	defaultPort := strconv.Itoa(configPort)
	if defaultPort == "0" {
		defaultPort = "5432"
	}
	for _, hostport := range strings.Split(configHost, ",") {
		hostport = strings.TrimSpace(hostport)
		if hostport == "" {
			continue
		}
		host, port, err := net.SplitHostPort(hostport)
		if err == nil {
			hosts = append(hosts, host)
			ports = append(ports, port)
		} else {
			hosts = append(hosts, hostport)
			ports = append(ports, defaultPort)
		}
	}
	return strings.Join(hosts, ","), strings.Join(ports, ",")
}
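A worked example with hypothetical values: getPGSQLHostsAndPorts("db1:6432, db2", 5432) returns ("db1,db2", "6432,5432"); entries without an explicit port fall back to the configured port, or to 5432 when the configured port is 0.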
func getPGSQLConnectionString(redactedPwd bool) string {
@ -294,20 +217,13 @@ func getPGSQLConnectionString(redactedPwd bool) string {
if redactedPwd && password != "" {
password = "[redacted]"
}
host, port := getPGSQLHostsAndPorts(config.Host, config.Port)
connectionString = fmt.Sprintf("host='%s' port='%s' dbname='%s' user='%s' password='%s' sslmode=%s connect_timeout=10",
host, port, config.Name, config.Username, password, getSSLMode())
connectionString = fmt.Sprintf("host='%v' port=%v dbname='%v' user='%v' password='%v' sslmode=%v connect_timeout=10",
config.Host, config.Port, config.Name, config.Username, password, getSSLMode())
if config.RootCert != "" {
connectionString += fmt.Sprintf(" sslrootcert='%s'", config.RootCert)
connectionString += fmt.Sprintf(" sslrootcert='%v'", config.RootCert)
}
if config.ClientCert != "" && config.ClientKey != "" {
connectionString += fmt.Sprintf(" sslcert='%s' sslkey='%s'", config.ClientCert, config.ClientKey)
}
if config.DisableSNI {
connectionString += " sslsni=0"
}
if slices.Contains(pgSQLTargetSessionAttrs, config.TargetSessionAttrs) {
connectionString += fmt.Sprintf(" target_session_attrs='%s'", config.TargetSessionAttrs)
connectionString += fmt.Sprintf(" sslcert='%v' sslkey='%v'", config.ClientCert, config.ClientKey)
}
} else {
connectionString = config.ConnectionString
@ -343,14 +259,6 @@ func (p *PGSQLProvider) getUsedQuota(username string) (int, int64, int64, int64,
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p *PGSQLProvider) getAdminSignature(username string) (string, error) {
return sqlCommonGetAdminSignature(username, p.dbHandle)
}
func (p *PGSQLProvider) getUserSignature(username string) (string, error) {
return sqlCommonGetUserSignature(username, p.dbHandle)
}
func (p *PGSQLProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
@ -363,20 +271,20 @@ func (p *PGSQLProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *PGSQLProvider) userExists(username, role string) (User, error) {
return sqlCommonGetUserByUsername(username, role, p.dbHandle)
func (p *PGSQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
func (p *PGSQLProvider) addUser(user *User) error {
return p.normalizeError(sqlCommonAddUser(user, p.dbHandle), fieldUsername)
return sqlCommonAddUser(user, p.dbHandle)
}
func (p *PGSQLProvider) updateUser(user *User) error {
return p.normalizeError(sqlCommonUpdateUser(user, p.dbHandle), -1)
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p *PGSQLProvider) deleteUser(user User, softDelete bool) error {
return sqlCommonDeleteUser(user, softDelete, p.dbHandle)
func (p *PGSQLProvider) deleteUser(user User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p *PGSQLProvider) updateUserPassword(username, password string) error {
@ -391,8 +299,8 @@ func (p *PGSQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p *PGSQLProvider) getUsers(limit int, offset int, order, role string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, role, p.dbHandle)
func (p *PGSQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) getUsersForQuotaCheck(toFetch map[string]bool) ([]User, error) {
@ -414,7 +322,7 @@ func (p *PGSQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, err
}
func (p *PGSQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
return p.normalizeError(sqlCommonAddFolder(folder, p.dbHandle), fieldName)
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p *PGSQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
@ -450,7 +358,7 @@ func (p *PGSQLProvider) groupExists(name string) (Group, error) {
}
func (p *PGSQLProvider) addGroup(group *Group) error {
return p.normalizeError(sqlCommonAddGroup(group, p.dbHandle), fieldName)
return sqlCommonAddGroup(group, p.dbHandle)
}
func (p *PGSQLProvider) updateGroup(group *Group) error {
@ -470,11 +378,11 @@ func (p *PGSQLProvider) adminExists(username string) (Admin, error) {
}
func (p *PGSQLProvider) addAdmin(admin *Admin) error {
return p.normalizeError(sqlCommonAddAdmin(admin, p.dbHandle), fieldUsername)
return sqlCommonAddAdmin(admin, p.dbHandle)
}
func (p *PGSQLProvider) updateAdmin(admin *Admin) error {
return p.normalizeError(sqlCommonUpdateAdmin(admin, p.dbHandle), -1)
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *PGSQLProvider) deleteAdmin(admin Admin) error {
@ -498,11 +406,11 @@ func (p *PGSQLProvider) apiKeyExists(keyID string) (APIKey, error) {
}
func (p *PGSQLProvider) addAPIKey(apiKey *APIKey) error {
return p.normalizeError(sqlCommonAddAPIKey(apiKey, p.dbHandle), -1)
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) updateAPIKey(apiKey *APIKey) error {
return p.normalizeError(sqlCommonUpdateAPIKey(apiKey, p.dbHandle), -1)
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) deleteAPIKey(apiKey APIKey) error {
@ -526,11 +434,11 @@ func (p *PGSQLProvider) shareExists(shareID, username string) (Share, error) {
}
func (p *PGSQLProvider) addShare(share *Share) error {
return p.normalizeError(sqlCommonAddShare(share, p.dbHandle), fieldName)
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *PGSQLProvider) updateShare(share *Share) error {
return p.normalizeError(sqlCommonUpdateShare(share, p.dbHandle), -1)
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *PGSQLProvider) deleteShare(share Share) error {
@ -617,170 +525,6 @@ func (p *PGSQLProvider) cleanupSharedSessions(sessionType SessionType, before in
return sqlCommonCleanupSessions(sessionType, before, p.dbHandle)
}
func (p *PGSQLProvider) getEventActions(limit, offset int, order string, minimal bool) ([]BaseEventAction, error) {
return sqlCommonGetEventActions(limit, offset, order, minimal, p.dbHandle)
}
func (p *PGSQLProvider) dumpEventActions() ([]BaseEventAction, error) {
return sqlCommonDumpEventActions(p.dbHandle)
}
func (p *PGSQLProvider) eventActionExists(name string) (BaseEventAction, error) {
return sqlCommonGetEventActionByName(name, p.dbHandle)
}
func (p *PGSQLProvider) addEventAction(action *BaseEventAction) error {
return p.normalizeError(sqlCommonAddEventAction(action, p.dbHandle), fieldName)
}
func (p *PGSQLProvider) updateEventAction(action *BaseEventAction) error {
return sqlCommonUpdateEventAction(action, p.dbHandle)
}
func (p *PGSQLProvider) deleteEventAction(action BaseEventAction) error {
return sqlCommonDeleteEventAction(action, p.dbHandle)
}
func (p *PGSQLProvider) getEventRules(limit, offset int, order string) ([]EventRule, error) {
return sqlCommonGetEventRules(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) dumpEventRules() ([]EventRule, error) {
return sqlCommonDumpEventRules(p.dbHandle)
}
func (p *PGSQLProvider) getRecentlyUpdatedRules(after int64) ([]EventRule, error) {
return sqlCommonGetRecentlyUpdatedRules(after, p.dbHandle)
}
func (p *PGSQLProvider) eventRuleExists(name string) (EventRule, error) {
return sqlCommonGetEventRuleByName(name, p.dbHandle)
}
func (p *PGSQLProvider) addEventRule(rule *EventRule) error {
return p.normalizeError(sqlCommonAddEventRule(rule, p.dbHandle), fieldName)
}
func (p *PGSQLProvider) updateEventRule(rule *EventRule) error {
return sqlCommonUpdateEventRule(rule, p.dbHandle)
}
func (p *PGSQLProvider) deleteEventRule(rule EventRule, softDelete bool) error {
return sqlCommonDeleteEventRule(rule, softDelete, p.dbHandle)
}
func (p *PGSQLProvider) getTaskByName(name string) (Task, error) {
return sqlCommonGetTaskByName(name, p.dbHandle)
}
func (p *PGSQLProvider) addTask(name string) error {
return sqlCommonAddTask(name, p.dbHandle)
}
func (p *PGSQLProvider) updateTask(name string, version int64) error {
return sqlCommonUpdateTask(name, version, p.dbHandle)
}
func (p *PGSQLProvider) updateTaskTimestamp(name string) error {
return sqlCommonUpdateTaskTimestamp(name, p.dbHandle)
}
func (p *PGSQLProvider) addNode() error {
return sqlCommonAddNode(p.dbHandle)
}
func (p *PGSQLProvider) getNodeByName(name string) (Node, error) {
return sqlCommonGetNodeByName(name, p.dbHandle)
}
func (p *PGSQLProvider) getNodes() ([]Node, error) {
return sqlCommonGetNodes(p.dbHandle)
}
func (p *PGSQLProvider) updateNodeTimestamp() error {
return sqlCommonUpdateNodeTimestamp(p.dbHandle)
}
func (p *PGSQLProvider) cleanupNodes() error {
return sqlCommonCleanupNodes(p.dbHandle)
}
func (p *PGSQLProvider) roleExists(name string) (Role, error) {
return sqlCommonGetRoleByName(name, p.dbHandle)
}
func (p *PGSQLProvider) addRole(role *Role) error {
return p.normalizeError(sqlCommonAddRole(role, p.dbHandle), fieldName)
}
func (p *PGSQLProvider) updateRole(role *Role) error {
return sqlCommonUpdateRole(role, p.dbHandle)
}
func (p *PGSQLProvider) deleteRole(role Role) error {
return sqlCommonDeleteRole(role, p.dbHandle)
}
func (p *PGSQLProvider) getRoles(limit int, offset int, order string, minimal bool) ([]Role, error) {
return sqlCommonGetRoles(limit, offset, order, minimal, p.dbHandle)
}
func (p *PGSQLProvider) dumpRoles() ([]Role, error) {
return sqlCommonDumpRoles(p.dbHandle)
}
func (p *PGSQLProvider) ipListEntryExists(ipOrNet string, listType IPListType) (IPListEntry, error) {
return sqlCommonGetIPListEntry(ipOrNet, listType, p.dbHandle)
}
func (p *PGSQLProvider) addIPListEntry(entry *IPListEntry) error {
return p.normalizeError(sqlCommonAddIPListEntry(entry, p.dbHandle), fieldIPNet)
}
func (p *PGSQLProvider) updateIPListEntry(entry *IPListEntry) error {
return sqlCommonUpdateIPListEntry(entry, p.dbHandle)
}
func (p *PGSQLProvider) deleteIPListEntry(entry IPListEntry, softDelete bool) error {
return sqlCommonDeleteIPListEntry(entry, softDelete, p.dbHandle)
}
func (p *PGSQLProvider) getIPListEntries(listType IPListType, filter, from, order string, limit int) ([]IPListEntry, error) {
return sqlCommonGetIPListEntries(listType, filter, from, order, limit, p.dbHandle)
}
func (p *PGSQLProvider) getRecentlyUpdatedIPListEntries(after int64) ([]IPListEntry, error) {
return sqlCommonGetRecentlyUpdatedIPListEntries(after, p.dbHandle)
}
func (p *PGSQLProvider) dumpIPListEntries() ([]IPListEntry, error) {
return sqlCommonDumpIPListEntries(p.dbHandle)
}
func (p *PGSQLProvider) countIPListEntries(listType IPListType) (int64, error) {
return sqlCommonCountIPListEntries(listType, p.dbHandle)
}
func (p *PGSQLProvider) getListEntriesForIP(ip string, listType IPListType) ([]IPListEntry, error) {
return sqlCommonGetListEntriesForIP(ip, listType, p.dbHandle)
}
func (p *PGSQLProvider) getConfigs() (Configs, error) {
return sqlCommonGetConfigs(p.dbHandle)
}
func (p *PGSQLProvider) setConfigs(configs *Configs) error {
return sqlCommonSetConfigs(configs, p.dbHandle)
}
func (p *PGSQLProvider) setFirstDownloadTimestamp(username string) error {
return sqlCommonSetFirstDownloadTimestamp(username, p.dbHandle)
}
func (p *PGSQLProvider) setFirstUploadTimestamp(username string) error {
return sqlCommonSetFirstUploadTimestamp(username, p.dbHandle)
}
func (p *PGSQLProvider) close() error {
return p.dbHandle.Close()
}
@ -798,17 +542,26 @@ func (p *PGSQLProvider) initializeDatabase() error {
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
logger.InfoToConsole("creating initial database schema, version 29")
providerLog(logger.LevelInfo, "creating initial database schema, version 29")
var initialSQL string
logger.InfoToConsole("creating initial database schema, version 15")
providerLog(logger.LevelInfo, "creating initial database schema, version 15")
initialSQL := strings.ReplaceAll(pgsqlInitial, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{api_keys}}", sqlTableAPIKeys)
initialSQL = strings.ReplaceAll(initialSQL, "{{shares}}", sqlTableShares)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_events}}", sqlTableDefenderEvents)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_hosts}}", sqlTableDefenderHosts)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
if config.Driver == CockroachDataProviderName {
initialSQL = sqlReplaceAll(pgsqlInitial)
initialSQL = strings.ReplaceAll(initialSQL, "GENERATED ALWAYS AS IDENTITY", "DEFAULT unordered_unique_rowid()")
} else {
initialSQL = sqlReplaceAll(pgsqlInitial + ipListsLikeIndex)
// Cockroach does not support deferrable constraint validation and we don't need it;
// we keep these definitions for the PostgreSQL driver to avoid schema changes for
// users upgrading from old SFTPGo versions
initialSQL = strings.ReplaceAll(initialSQL, "DEFERRABLE INITIALLY DEFERRED", "")
}
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 29, true)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 15, true)
}
func (p *PGSQLProvider) migrateDatabase() error { //nolint:dupl
@ -819,22 +572,30 @@ func (p *PGSQLProvider) migrateDatabase() error { //nolint:dupl
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %d", version)
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
case version < 29:
err = errSchemaVersionTooOld(version)
case version < 15:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 15:
return updatePGSQLDatabaseFromV15(p.dbHandle)
case version == 16:
return updatePGSQLDatabaseFromV16(p.dbHandle)
case version == 17:
return updatePGSQLDatabaseFromV17(p.dbHandle)
case version == 18:
return updatePGSQLDatabaseFromV18(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelError, "database schema version %d is newer than the supported one: %d", version,
providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database schema version %d is newer than the supported one: %d", version,
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("database schema version not handled: %d", version)
return fmt.Errorf("database version not handled: %v", version)
}
}
@ -848,8 +609,16 @@ func (p *PGSQLProvider) revertDatabase(targetVersion int) error {
}
switch dbVersion.Version {
case 16:
return downgradePGSQLDatabaseFromV16(p.dbHandle)
case 17:
return downgradePGSQLDatabaseFromV17(p.dbHandle)
case 18:
return downgradePGSQLDatabaseFromV18(p.dbHandle)
case 19:
return downgradePGSQLDatabaseFromV19(p.dbHandle)
default:
return fmt.Errorf("database schema version not handled: %d", dbVersion.Version)
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
@ -858,30 +627,153 @@ func (p *PGSQLProvider) resetDatabase() error {
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0, false)
}
func (p *PGSQLProvider) normalizeError(err error, fieldType int) error {
	if err == nil {
		return nil
	}
	var pgsqlErr *pgconn.PgError
	if errors.As(err, &pgsqlErr) {
		switch pgsqlErr.Code {
		case "23505":
			var message string
			switch fieldType {
			case fieldUsername:
				message = util.I18nErrorDuplicatedUsername
			case fieldIPNet:
				message = util.I18nErrorDuplicatedIPNet
			default:
				message = util.I18nErrorDuplicatedName
			}
			return util.NewI18nError(
				fmt.Errorf("%w: %s", ErrDuplicatedKey, err.Error()),
				message,
			)
		case "23503":
			return fmt.Errorf("%w: %s", ErrForeignKeyViolated, err.Error())
		}
	}
	return err
}
func updatePGSQLDatabaseFromV15(dbHandle *sql.DB) error {
	if err := updatePGSQLDatabaseFrom15To16(dbHandle); err != nil {
		return err
	}
	return updatePGSQLDatabaseFromV16(dbHandle)
}
func updatePGSQLDatabaseFromV16(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom16To17(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV17(dbHandle)
}
func updatePGSQLDatabaseFromV17(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom17To18(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV18(dbHandle)
}
func updatePGSQLDatabaseFromV18(dbHandle *sql.DB) error {
return updatePGSQLDatabaseFrom18To19(dbHandle)
}
func downgradePGSQLDatabaseFromV16(dbHandle *sql.DB) error {
return downgradePGSQLDatabaseFrom16To15(dbHandle)
}
func downgradePGSQLDatabaseFromV17(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom17To16(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV16(dbHandle)
}
func downgradePGSQLDatabaseFromV18(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom18To17(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV17(dbHandle)
}
func downgradePGSQLDatabaseFromV19(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom19To18(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV18(dbHandle)
}
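The upgrade helpers above chain one schema step at a time: a database found at version 16 runs 16 -> 17, then 17 -> 18, then 18 -> 19, so each intermediate migration is applied exactly once whatever the starting version; the downgrade helpers walk the same chain in reverse.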
func updatePGSQLDatabaseFrom15To16(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 15 -> 16")
providerLog(logger.LevelInfo, "updating database version: 15 -> 16")
sql := strings.ReplaceAll(pgsqlV16SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
if config.Driver == CockroachDataProviderName {
// Cockroach does not allow running this schema migration within a transaction
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
for _, q := range strings.Split(sql, ";") {
if strings.TrimSpace(q) == "" {
continue
}
_, err := dbHandle.ExecContext(ctx, q)
if err != nil {
return err
}
}
return sqlCommonUpdateDatabaseVersion(ctx, dbHandle, 16)
}
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 16, true)
}
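CockroachDB rejects some schema changes inside explicit transactions, so the 15 -> 16 migration above splits the batch on ";" and runs each statement through its own ExecContext before bumping the schema version, instead of using the transactional sqlCommonExecSQLAndUpdateDBVersion path.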
func updatePGSQLDatabaseFrom16To17(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 16 -> 17")
providerLog(logger.LevelInfo, "updating database version: 16 -> 17")
sql := pgsqlV17SQL
if config.Driver == CockroachDataProviderName {
sql = strings.ReplaceAll(sql, `ALTER TABLE "{{folders_mapping}}" DROP CONSTRAINT "{{prefix}}unique_mapping";`,
`DROP INDEX "{{prefix}}unique_mapping" CASCADE;`)
}
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 17, true)
}
func updatePGSQLDatabaseFrom17To18(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 17 -> 18")
providerLog(logger.LevelInfo, "updating database version: 17 -> 18")
if err := importGCSCredentials(); err != nil {
return err
}
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 18, true)
}
func updatePGSQLDatabaseFrom18To19(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 18 -> 19")
providerLog(logger.LevelInfo, "updating database version: 18 -> 19")
sql := strings.ReplaceAll(pgsqlV19SQL, "{{shared_sessions}}", sqlTableSharedSessions)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 19, true)
}
func downgradePGSQLDatabaseFrom16To15(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 16 -> 15")
providerLog(logger.LevelInfo, "downgrading database version: 16 -> 15")
sql := strings.ReplaceAll(pgsqlV16DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15, false)
}
func downgradePGSQLDatabaseFrom17To16(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 17 -> 16")
providerLog(logger.LevelInfo, "downgrading database version: 17 -> 16")
sql := pgsqlV17DownSQL
if config.Driver == CockroachDataProviderName {
sql = strings.ReplaceAll(sql, `ALTER TABLE "{{users_folders_mapping}}" DROP CONSTRAINT "{{prefix}}unique_user_folder_mapping";`,
`DROP INDEX "{{prefix}}unique_user_folder_mapping" CASCADE;`)
}
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 16, false)
}
func downgradePGSQLDatabaseFrom18To17(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 18 -> 17")
providerLog(logger.LevelInfo, "downgrading database version: 18 -> 17")
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 17, false)
}
func downgradePGSQLDatabaseFrom19To18(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 19 -> 18")
providerLog(logger.LevelInfo, "downgrading database version: 19 -> 18")
sql := strings.ReplaceAll(pgsqlV19DownSQL, "{{shared_sessions}}", sqlTableSharedSessions)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 18, false)
}


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build nopgsql
// +build nopgsql
@ -20,7 +20,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
@ -18,7 +18,7 @@ import (
"sync"
"time"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/logger"
)
var delayedQuotaUpdater quotaUpdater
@ -224,7 +224,7 @@ func (q *quotaUpdater) storeUsersQuota() {
if size != 0 || files != 0 {
err := provider.updateQuota(username, files, size, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota delayed for user %q: %v", username, err)
providerLog(logger.LevelWarn, "unable to update quota delayed for user %#v: %v", username, err)
continue
}
q.updateUserQuota(username, -files, -size)
@ -238,7 +238,7 @@ func (q *quotaUpdater) storeFoldersQuota() {
if size != 0 || files != 0 {
err := provider.updateFolderQuota(name, files, size, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota delayed for folder %q: %v", name, err)
providerLog(logger.LevelWarn, "unable to update quota delayed for folder %#v: %v", name, err)
continue
}
q.updateFolderQuota(name, -files, -size)
@ -252,7 +252,7 @@ func (q *quotaUpdater) storeUsersTransferQuota() {
if ulSize != 0 || dlSize != 0 {
err := provider.updateTransferQuota(username, ulSize, dlSize, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update transfer quota delayed for user %q: %v", username, err)
providerLog(logger.LevelWarn, "unable to update transfer quota delayed for user %#v: %v", username, err)
continue
}
q.updateUserTransferQuota(username, -ulSize, -dlSize)

dataprovider/scheduler.go (new file, 110 lines)

@ -0,0 +1,110 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"fmt"
"sync/atomic"
"time"
"github.com/robfig/cron/v3"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/util"
)
var (
scheduler *cron.Cron
lastCachesUpdate int64
// used for bolt and memory providers, so we avoid iterating all users
// to find recently modified ones
lastUserUpdate int64
)
func stopScheduler() {
if scheduler != nil {
scheduler.Stop()
scheduler = nil
}
}
func startScheduler() error {
stopScheduler()
scheduler = cron.New()
_, err := scheduler.AddFunc("@every 30s", checkDataprovider)
if err != nil {
return fmt.Errorf("unable to schedule dataprovider availability check: %w", err)
}
if config.AutoBackup.Enabled {
spec := fmt.Sprintf("0 %v * * %v", config.AutoBackup.Hour, config.AutoBackup.DayOfWeek)
_, err = scheduler.AddFunc(spec, config.doBackup)
if err != nil {
return fmt.Errorf("unable to schedule auto backup: %w", err)
}
}
err = addScheduledCacheUpdates()
if err != nil {
return err
}
scheduler.Start()
return nil
}
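For reference, the auto backup schedule splices config.AutoBackup.Hour and config.AutoBackup.DayOfWeek into a standard five-field cron expression. A minimal sketch, assuming robfig/cron/v3 and hypothetical config values:
package main

import (
	"fmt"

	"github.com/robfig/cron/v3"
)

func main() {
	// Hour=2 and DayOfWeek="*" (hypothetical values) yield "0 2 * * *",
	// i.e. a backup every day at 02:00
	spec := fmt.Sprintf("0 %v * * %v", 2, "*")
	if _, err := cron.ParseStandard(spec); err != nil {
		fmt.Println("invalid spec:", err)
		return
	}
	fmt.Println("valid cron spec:", spec)
}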
func addScheduledCacheUpdates() error {
lastCachesUpdate = util.GetTimeAsMsSinceEpoch(time.Now())
_, err := scheduler.AddFunc("@every 10m", checkCacheUpdates)
if err != nil {
return fmt.Errorf("unable to schedule cache updates: %w", err)
}
return nil
}
func checkDataprovider() {
err := provider.checkAvailability()
if err != nil {
providerLog(logger.LevelError, "check availability error: %v", err)
}
metric.UpdateDataProviderAvailability(err)
}
func checkCacheUpdates() {
providerLog(logger.LevelDebug, "start caches check, update time %v", util.GetTimeFromMsecSinceEpoch(lastCachesUpdate))
checkTime := util.GetTimeAsMsSinceEpoch(time.Now())
users, err := provider.getRecentlyUpdatedUsers(lastCachesUpdate)
if err != nil {
providerLog(logger.LevelError, "unable to get recently updated users: %v", err)
return
}
for _, user := range users {
providerLog(logger.LevelDebug, "invalidate caches for user %#v", user.Username)
webDAVUsersCache.swap(&user)
cachedPasswords.Remove(user.Username)
}
lastCachesUpdate = checkTime
providerLog(logger.LevelDebug, "end caches check, new update time %v", util.GetTimeFromMsecSinceEpoch(lastCachesUpdate))
}
func setLastUserUpdate() {
atomic.StoreInt64(&lastUserUpdate, util.GetTimeAsMsSinceEpoch(time.Now()))
}
func getLastUserUpdate() int64 {
return atomic.LoadInt64(&lastUserUpdate)
}


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
@ -27,9 +27,6 @@ const (
SessionTypeOIDCAuth SessionType = iota + 1
SessionTypeOIDCToken
SessionTypeResetCode
SessionTypeOAuth2Auth
SessionTypeInvalidToken
SessionTypeWebTask
)
// Session defines a shared session persisted in the data provider
@ -44,7 +41,7 @@ func (s *Session) validate() error {
if s.Key == "" {
return errors.New("unable to save a session with an empty key")
}
if s.Type < SessionTypeOIDCAuth || s.Type > SessionTypeWebTask {
if s.Type < SessionTypeOIDCAuth || s.Type > SessionTypeResetCode {
return fmt.Errorf("invalid session type: %v", s.Type)
}
return nil


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,7 +10,7 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
@ -22,11 +22,10 @@ import (
"time"
"github.com/alexedwards/argon2id"
passwordvalidator "github.com/wagslane/go-password-validator"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// ShareScope defines the supported share scopes
@ -76,6 +75,19 @@ type Share struct {
IsRestore bool `json:"-"`
}
// GetScopeAsString returns the share's scope as string.
// Used in web pages
func (s *Share) GetScopeAsString() string {
switch s.Scope {
case ShareScopeWrite:
return "Write"
case ShareScopeReadWrite:
return "Read/Write"
default:
return "Read"
}
}
// IsExpired returns true if the share is expired
func (s *Share) IsExpired() bool {
if s.ExpiresAt > 0 {
@ -84,16 +96,36 @@ func (s *Share) IsExpired() bool {
return false
}
// GetInfoString returns share's info as string.
func (s *Share) GetInfoString() string {
var result strings.Builder
if s.ExpiresAt > 0 {
t := util.GetTimeFromMsecSinceEpoch(s.ExpiresAt)
result.WriteString(fmt.Sprintf("Expiration: %v. ", t.Format("2006-01-02 15:04"))) // YYYY-MM-DD HH:MM
}
if s.LastUseAt > 0 {
t := util.GetTimeFromMsecSinceEpoch(s.LastUseAt)
result.WriteString(fmt.Sprintf("Last use: %v. ", t.Format("2006-01-02 15:04")))
}
if s.MaxTokens > 0 {
result.WriteString(fmt.Sprintf("Usage: %v/%v. ", s.UsedTokens, s.MaxTokens))
} else {
result.WriteString(fmt.Sprintf("Used tokens: %v. ", s.UsedTokens))
}
if len(s.AllowFrom) > 0 {
result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(s.AllowFrom)))
}
if s.Password != "" {
result.WriteString("Password protected.")
}
return result.String()
}
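For a hypothetical share with an expiration set, MaxTokens=3 and UsedTokens=1, GetInfoString returns something like "Expiration: 2022-12-31 15:04. Usage: 1/3. "; with MaxTokens=0 the usage fragment becomes "Used tokens: 1. ".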
// GetAllowedFromAsString returns the allowed IP as comma separated string
func (s *Share) GetAllowedFromAsString() string {
return strings.Join(s.AllowFrom, ",")
}
// IsPasswordHashed returns true if the password is hashed
func (s *Share) IsPasswordHashed() bool {
return util.IsStringPrefixInSlice(s.Password, hashPwdPrefixes)
}
func (s *Share) getACopy() Share {
allowFrom := make([]string, len(s.AllowFrom))
copy(allowFrom, s.AllowFrom)
@ -146,21 +178,12 @@ func (s *Share) HasRedactedPassword() bool {
func (s *Share) hashPassword() error {
if s.Password != "" && !util.IsStringPrefixInSlice(s.Password, internalHashPwdPrefixes) {
user, err := UserExists(s.Username, "")
if err != nil {
return util.NewGenericError(fmt.Sprintf("unable to validate user: %v", err))
}
if minEntropy := user.getMinPasswordEntropy(); minEntropy > 0 {
if err := passwordvalidator.Validate(s.Password, minEntropy); err != nil {
return util.NewI18nError(util.NewValidationError(err.Error()), util.I18nErrorPasswordComplexity)
}
}
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
hashed, err := bcrypt.GenerateFromPassword([]byte(s.Password), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
s.Password = util.BytesToString(hashed)
s.Password = string(hashed)
} else {
hashed, err := argon2id.CreateHash(s.Password, argon2Params)
if err != nil {
@ -175,20 +198,21 @@ func (s *Share) hashPassword() error {
func (s *Share) validatePaths() error {
var paths []string
for _, p := range s.Paths {
if strings.TrimSpace(p) != "" {
p = strings.TrimSpace(p)
if p != "" {
paths = append(paths, p)
}
}
s.Paths = paths
if len(s.Paths) == 0 {
return util.NewI18nError(util.NewValidationError("at least a shared path is required"), util.I18nErrorSharePathRequired)
return util.NewValidationError("at least a shared path is required")
}
for idx := range s.Paths {
s.Paths[idx] = util.CleanPath(s.Paths[idx])
}
s.Paths = util.RemoveDuplicates(s.Paths, false)
if s.Scope >= ShareScopeWrite && len(s.Paths) != 1 {
return util.NewI18nError(util.NewValidationError("the write share scope requires exactly one path"), util.I18nErrorShareWriteScope)
return util.NewValidationError("the write share scope requires exactly one path")
}
// check nested paths
if len(s.Paths) > 1 {
@ -197,8 +221,8 @@ func (s *Share) validatePaths() error {
if idx == innerIdx {
continue
}
if s.Paths[idx] == "/" || s.Paths[innerIdx] == "/" || util.IsDirOverlapped(s.Paths[idx], s.Paths[innerIdx], true, "/") {
return util.NewI18nError(util.NewGenericError("shared paths cannot be nested"), util.I18nErrorShareNestedPaths)
if isVirtualDirOverlapped(s.Paths[idx], s.Paths[innerIdx], true) {
return util.NewGenericError("shared paths cannot be nested")
}
}
}
@ -211,26 +235,26 @@ func (s *Share) validate() error {
return util.NewValidationError("share_id is mandatory")
}
if s.Name == "" {
return util.NewI18nError(util.NewValidationError("name is mandatory"), util.I18nErrorNameRequired)
return util.NewValidationError("name is mandatory")
}
if s.Scope < ShareScopeRead || s.Scope > ShareScopeReadWrite {
return util.NewI18nError(util.NewValidationError(fmt.Sprintf("invalid scope: %v", s.Scope)), util.I18nErrorShareScope)
return util.NewValidationError(fmt.Sprintf("invalid scope: %v", s.Scope))
}
if err := s.validatePaths(); err != nil {
return err
}
if s.ExpiresAt > 0 {
if !s.IsRestore && s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return util.NewI18nError(util.NewValidationError("expiration must be in the future"), util.I18nErrorShareExpirationPast)
return util.NewValidationError("expiration must be in the future")
}
} else {
s.ExpiresAt = 0
}
if s.MaxTokens < 0 {
return util.NewI18nError(util.NewValidationError("invalid max tokens"), util.I18nErrorShareMaxTokens)
return util.NewValidationError("invalid max tokens")
}
if s.Username == "" {
return util.NewI18nError(util.NewValidationError("username is mandatory"), util.I18nErrorUsernameRequired)
return util.NewValidationError("username is mandatory")
}
if s.HasRedactedPassword() {
return util.NewValidationError("cannot save a share with a redacted password")
@ -242,21 +266,21 @@ func (s *Share) validate() error {
for _, IPMask := range s.AllowFrom {
_, _, err := net.ParseCIDR(IPMask)
if err != nil {
return util.NewI18nError(
util.NewValidationError(fmt.Sprintf("could not parse allow from entry %q : %v", IPMask, err)),
util.I18nErrorInvalidIPMask,
)
return util.NewValidationError(fmt.Sprintf("could not parse allow from entry %#v : %v", IPMask, err))
}
}
return nil
}
// CheckCredentials verifies the share credentials if a password if set
func (s *Share) CheckCredentials(password string) (bool, error) {
func (s *Share) CheckCredentials(username, password string) (bool, error) {
if s.Password == "" {
return true, nil
}
if password == "" {
if username == "" || password == "" {
return false, ErrInvalidCredentials
}
if username != s.Username {
return false, ErrInvalidCredentials
}
if strings.HasPrefix(s.Password, bcryptPwdPrefix) {
@ -283,11 +307,11 @@ func (s *Share) GetRelativePath(name string) string {
// IsUsable checks if the share is usable from the specified IP
func (s *Share) IsUsable(ip string) (bool, error) {
if s.MaxTokens > 0 && s.UsedTokens >= s.MaxTokens {
return false, util.NewI18nError(util.NewRecordNotFoundError("max share usage exceeded"), util.I18nErrorShareUsage)
return false, util.NewRecordNotFoundError("max share usage exceeded")
}
if s.ExpiresAt > 0 {
if s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return false, util.NewI18nError(util.NewRecordNotFoundError("share expired"), util.I18nErrorShareExpired)
return false, util.NewRecordNotFoundError("share expired")
}
}
if len(s.AllowFrom) == 0 {
@ -295,7 +319,7 @@ func (s *Share) IsUsable(ip string) (bool, error) {
}
parsedIP := net.ParseIP(ip)
if parsedIP == nil {
return false, util.NewI18nError(ErrLoginNotAllowedFromIP, util.I18nErrorLoginFromIPDenied)
return false, ErrLoginNotAllowedFromIP
}
for _, ipMask := range s.AllowFrom {
_, network, err := net.ParseCIDR(ipMask)
@ -306,5 +330,5 @@ func (s *Share) IsUsable(ip string) (bool, error) {
return true, nil
}
}
return false, util.NewI18nError(ErrLoginNotAllowedFromIP, util.I18nErrorLoginFromIPDenied)
return false, ErrLoginNotAllowedFromIP
}
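The AllowFrom check above is plain CIDR containment. A standalone sketch of the same logic, with a hypothetical helper and values:
package main

import (
	"fmt"
	"net"
)

// allowed reports whether ip falls inside any of the allowFrom CIDR masks,
// mirroring the loop in Share.IsUsable
func allowed(ip string, allowFrom []string) bool {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false
	}
	for _, mask := range allowFrom {
		if _, network, err := net.ParseCIDR(mask); err == nil && network.Contains(parsed) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(allowed("192.168.1.10", []string{"192.168.1.0/24"})) // true
	fmt.Println(allowed("10.0.0.1", []string{"192.168.1.0/24"}))     // false
}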


@ -1,4 +1,4 @@
// Copyright (C) 2019 Nicola Murino
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
@ -10,10 +10,10 @@
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !nosqlite && cgo
// +build !nosqlite,cgo
//go:build !nosqlite
// +build !nosqlite
package dataprovider
@ -24,14 +24,16 @@ import (
"errors"
"fmt"
"path/filepath"
"strings"
"time"
"github.com/mattn/go-sqlite3"
// we import go-sqlite3 here to be able to disable SQLite support using a build tag
_ "github.com/mattn/go-sqlite3"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
@ -39,7 +41,6 @@ const (
DROP TABLE IF EXISTS "{{folders_mapping}}";
DROP TABLE IF EXISTS "{{users_folders_mapping}}";
DROP TABLE IF EXISTS "{{users_groups_mapping}}";
DROP TABLE IF EXISTS "{{admins_groups_mapping}}";
DROP TABLE IF EXISTS "{{groups_folders_mapping}}";
DROP TABLE IF EXISTS "{{admins}}";
DROP TABLE IF EXISTS "{{folders}}";
@ -50,136 +51,125 @@ DROP TABLE IF EXISTS "{{defender_events}}";
DROP TABLE IF EXISTS "{{defender_hosts}}";
DROP TABLE IF EXISTS "{{active_transfers}}";
DROP TABLE IF EXISTS "{{shared_sessions}}";
DROP TABLE IF EXISTS "{{rules_actions_mapping}}";
DROP TABLE IF EXISTS "{{events_rules}}";
DROP TABLE IF EXISTS "{{events_actions}}";
DROP TABLE IF EXISTS "{{tasks}}";
DROP TABLE IF EXISTS "{{roles}}";
DROP TABLE IF EXISTS "{{ip_lists}}";
DROP TABLE IF EXISTS "{{configs}}";
DROP TABLE IF EXISTS "{{schema_version}}";
`
sqliteInitialSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY, "version" integer NOT NULL);
CREATE TABLE "{{roles}}" ("id" integer NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
sqliteInitialSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL, "last_login" bigint NOT NULL,
"role_id" integer NULL REFERENCES "{{roles}}" ("id") ON DELETE NO ACTION, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL);
CREATE TABLE "{{active_transfers}}" ("id" integer NOT NULL PRIMARY KEY, "connection_id" varchar(100) NOT NULL,
"transfer_id" bigint NOT NULL, "transfer_type" integer NOT NULL, "username" varchar(255) NOT NULL,
"folder_name" varchar(255) NULL, "ip" varchar(50) NOT NULL, "truncated_size" bigint NOT NULL,
"current_ul_size" bigint NOT NULL, "current_dl_size" bigint NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_hosts}}" ("id" integer NOT NULL PRIMARY KEY, "ip" varchar(50) NOT NULL UNIQUE,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_hosts}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "ip" varchar(50) NOT NULL UNIQUE,
"ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_events}}" ("id" integer NOT NULL PRIMARY KEY, "date_time" bigint NOT NULL,
CREATE TABLE "{{defender_events}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "date_time" bigint NOT NULL,
"score" integer NOT NULL, "host_id" integer NOT NULL REFERENCES "{{defender_hosts}}" ("id") ON DELETE CASCADE
DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE,
CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "path" text NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "filesystem" text NULL);
CREATE TABLE "{{groups}}" ("id" integer NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "user_settings" text NULL);
CREATE TABLE "{{shared_sessions}}" ("key" varchar(128) NOT NULL PRIMARY KEY, "data" text NOT NULL,
"type" integer NOT NULL, "timestamp" bigint NOT NULL);
CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"status" integer NOT NULL, "expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL,
"public_keys" text NULL, "home_dir" text NOT NULL, "uid" bigint NOT NULL, "gid" bigint NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL,
"filters" text NULL, "filesystem" text NULL, "additional_info" text NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL, "email" varchar(255) NULL, "upload_data_transfer" integer NOT NULL,
"download_data_transfer" integer NOT NULL, "total_data_transfer" integer NOT NULL, "used_upload_data_transfer" bigint NOT NULL,
"used_download_data_transfer" bigint NOT NULL, "deleted_at" bigint NOT NULL, "first_download" bigint NOT NULL,
"first_upload" bigint NOT NULL, "last_password_change" bigint NOT NULL, "role_id" integer NULL REFERENCES "{{roles}}" ("id") ON DELETE SET NULL);
CREATE TABLE "{{groups_folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY,
"folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"group_id" integer NOT NULL REFERENCES "{{groups}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
CONSTRAINT "{{prefix}}unique_group_folder_mapping" UNIQUE ("group_id", "folder_id"));
CREATE TABLE "{{users_groups_mapping}}" ("id" integer NOT NULL PRIMARY KEY,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"group_id" integer NOT NULL REFERENCES "{{groups}}" ("id") ON DELETE NO ACTION,
"group_type" integer NOT NULL, CONSTRAINT "{{prefix}}unique_user_group_mapping" UNIQUE ("user_id", "group_id"));
CREATE TABLE "{{users_folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
CONSTRAINT "{{prefix}}unique_user_folder_mapping" UNIQUE ("user_id", "folder_id"));
CREATE TABLE "{{shares}}" ("id" integer NOT NULL PRIMARY KEY, "share_id" varchar(60) NOT NULL UNIQUE,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL,
"filesystem" text NULL, "additional_info" text NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"email" varchar(255) NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" text NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id")
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, "user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE TABLE "{{shares}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "share_id" varchar(60) NOT NULL UNIQUE,
"name" varchar(255) NOT NULL, "description" varchar(512) NULL, "scope" integer NOT NULL, "paths" text NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL,
"password" text NULL, "max_tokens" integer NOT NULL, "used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE "{{api_keys}}" ("id" integer NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL,
CREATE TABLE "{{api_keys}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL,
"description" text NULL, "admin_id" integer NULL REFERENCES "{{admins}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"user_id" integer NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE "{{events_rules}}" ("id" integer NOT NULL PRIMARY KEY,
"name" varchar(255) NOT NULL UNIQUE, "status" integer NOT NULL, "description" varchar(512) NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL, "trigger" integer NOT NULL, "conditions" text NOT NULL, "deleted_at" bigint NOT NULL);
CREATE TABLE "{{events_actions}}" ("id" integer NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "type" integer NOT NULL, "options" text NOT NULL);
CREATE TABLE "{{rules_actions_mapping}}" ("id" integer NOT NULL PRIMARY KEY,
"rule_id" integer NOT NULL REFERENCES "{{events_rules}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"action_id" integer NOT NULL REFERENCES "{{events_actions}}" ("id") ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED,
"order" integer NOT NULL, "options" text NOT NULL,
CONSTRAINT "{{prefix}}unique_rule_action_mapping" UNIQUE ("rule_id", "action_id"));
CREATE TABLE "{{tasks}}" ("id" integer NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE,
"updated_at" bigint NOT NULL, "version" bigint NOT NULL);
CREATE TABLE "{{admins_groups_mapping}}" ("id" integer NOT NULL PRIMARY KEY,
"admin_id" integer NOT NULL REFERENCES "{{admins}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
INSERT INTO {{schema_version}} (version) VALUES (15);
`
sqliteV16SQL = `ALTER TABLE "{{users}}" ADD COLUMN "download_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "total_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "upload_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "used_download_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "used_upload_data_transfer" integer DEFAULT 0 NOT NULL;
CREATE TABLE "{{active_transfers}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "connection_id" varchar(100) NOT NULL,
"transfer_id" bigint NOT NULL, "transfer_type" integer NOT NULL, "username" varchar(255) NOT NULL,
"folder_name" varchar(255) NULL, "ip" varchar(50) NOT NULL, "truncated_size" bigint NOT NULL,
"current_ul_size" bigint NOT NULL, "current_dl_size" bigint NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL);
CREATE INDEX "{{prefix}}active_transfers_connection_id_idx" ON "{{active_transfers}}" ("connection_id");
CREATE INDEX "{{prefix}}active_transfers_transfer_id_idx" ON "{{active_transfers}}" ("transfer_id");
CREATE INDEX "{{prefix}}active_transfers_updated_at_idx" ON "{{active_transfers}}" ("updated_at");
`
sqliteV16DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "used_upload_data_transfer";
ALTER TABLE "{{users}}" DROP COLUMN "used_download_data_transfer";
ALTER TABLE "{{users}}" DROP COLUMN "upload_data_transfer";
ALTER TABLE "{{users}}" DROP COLUMN "total_data_transfer";
ALTER TABLE "{{users}}" DROP COLUMN "download_data_transfer";
DROP TABLE "{{active_transfers}}";
`
sqliteV17SQL = `CREATE TABLE "{{groups}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "user_settings" text NULL);
CREATE TABLE "{{groups_folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"group_id" integer NOT NULL REFERENCES "{{groups}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"options" text NOT NULL, CONSTRAINT "{{prefix}}unique_admin_group_mapping" UNIQUE ("admin_id", "group_id"));
CREATE TABLE "{{ip_lists}}" ("id" integer NOT NULL PRIMARY KEY,
"type" integer NOT NULL, "ipornet" varchar(50) NOT NULL, "mode" integer NOT NULL, "description" varchar(512) NULL,
"first" BLOB NOT NULL, "last" BLOB NOT NULL, "ip_type" integer NOT NULL, "protocols" integer NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "deleted_at" bigint NOT NULL,
CONSTRAINT "{{prefix}}unique_ipornet_type_mapping" UNIQUE ("type", "ipornet"));
CREATE TABLE "{{configs}}" ("id" integer NOT NULL PRIMARY KEY, "configs" text NOT NULL);
INSERT INTO {{configs}} (configs) VALUES ('{}');
"virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
CONSTRAINT "{{prefix}}unique_group_folder_mapping" UNIQUE ("group_id", "folder_id"));
CREATE TABLE "{{users_groups_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"group_id" integer NOT NULL REFERENCES "{{groups}}" ("id") ON DELETE NO ACTION,
"group_type" integer NOT NULL, CONSTRAINT "{{prefix}}unique_user_group_mapping" UNIQUE ("user_id", "group_id"));
CREATE TABLE "new__folders_mapping" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
CONSTRAINT "{{prefix}}unique_user_folder_mapping" UNIQUE ("user_id", "folder_id"));
INSERT INTO "new__folders_mapping" ("id", "virtual_path", "quota_size", "quota_files", "folder_id", "user_id") SELECT "id",
"virtual_path", "quota_size", "quota_files", "folder_id", "user_id" FROM "{{folders_mapping}}";
DROP TABLE "{{folders_mapping}}";
ALTER TABLE "new__folders_mapping" RENAME TO "{{users_folders_mapping}}";
CREATE INDEX "{{prefix}}groups_updated_at_idx" ON "{{groups}}" ("updated_at");
CREATE INDEX "{{prefix}}users_folders_mapping_folder_id_idx" ON "{{users_folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}users_folders_mapping_user_id_idx" ON "{{users_folders_mapping}}" ("user_id");
CREATE INDEX "{{prefix}}users_groups_mapping_group_id_idx" ON "{{users_groups_mapping}}" ("group_id");
CREATE INDEX "{{prefix}}users_groups_mapping_user_id_idx" ON "{{users_groups_mapping}}" ("user_id");
CREATE INDEX "{{prefix}}groups_folders_mapping_folder_id_idx" ON "{{groups_folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}groups_folders_mapping_group_id_idx" ON "{{groups_folders_mapping}}" ("group_id");
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
CREATE INDEX "{{prefix}}users_deleted_at_idx" ON "{{users}}" ("deleted_at");
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
CREATE INDEX "{{prefix}}active_transfers_connection_id_idx" ON "{{active_transfers}}" ("connection_id");
CREATE INDEX "{{prefix}}active_transfers_transfer_id_idx" ON "{{active_transfers}}" ("transfer_id");
CREATE INDEX "{{prefix}}active_transfers_updated_at_idx" ON "{{active_transfers}}" ("updated_at");
`
sqliteV17DownSQL = `DROP TABLE "{{users_groups_mapping}}";
DROP TABLE "{{groups_folders_mapping}}";
DROP TABLE "{{groups}}";
CREATE TABLE "new__folders_mapping" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
CONSTRAINT "{{prefix}}unique_folder_mapping" UNIQUE ("user_id", "folder_id"));
INSERT INTO "new__folders_mapping" ("id", "virtual_path", "quota_size", "quota_files", "folder_id", "user_id") SELECT "id",
"virtual_path", "quota_size", "quota_files", "folder_id", "user_id" FROM "{{users_folders_mapping}}";
DROP TABLE "{{users_folders_mapping}}";
ALTER TABLE "new__folders_mapping" RENAME TO "{{folders_mapping}}";
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
sqliteV19SQL = `CREATE TABLE "{{shared_sessions}}" ("key" varchar(128) NOT NULL PRIMARY KEY, "data" text NOT NULL,
"type" integer NOT NULL, "timestamp" bigint NOT NULL);
CREATE INDEX "{{prefix}}shared_sessions_type_idx" ON "{{shared_sessions}}" ("type");
CREATE INDEX "{{prefix}}shared_sessions_timestamp_idx" ON "{{shared_sessions}}" ("timestamp");
CREATE INDEX "{{prefix}}events_rules_updated_at_idx" ON "{{events_rules}}" ("updated_at");
CREATE INDEX "{{prefix}}events_rules_deleted_at_idx" ON "{{events_rules}}" ("deleted_at");
CREATE INDEX "{{prefix}}events_rules_trigger_idx" ON "{{events_rules}}" ("trigger");
CREATE INDEX "{{prefix}}rules_actions_mapping_rule_id_idx" ON "{{rules_actions_mapping}}" ("rule_id");
CREATE INDEX "{{prefix}}rules_actions_mapping_action_id_idx" ON "{{rules_actions_mapping}}" ("action_id");
CREATE INDEX "{{prefix}}rules_actions_mapping_order_idx" ON "{{rules_actions_mapping}}" ("order");
CREATE INDEX "{{prefix}}admins_groups_mapping_admin_id_idx" ON "{{admins_groups_mapping}}" ("admin_id");
CREATE INDEX "{{prefix}}admins_groups_mapping_group_id_idx" ON "{{admins_groups_mapping}}" ("group_id");
CREATE INDEX "{{prefix}}users_role_id_idx" ON "{{users}}" ("role_id");
CREATE INDEX "{{prefix}}admins_role_id_idx" ON "{{admins}}" ("role_id");
CREATE INDEX "{{prefix}}ip_lists_type_idx" ON "{{ip_lists}}" ("type");
CREATE INDEX "{{prefix}}ip_lists_ipornet_idx" ON "{{ip_lists}}" ("ipornet");
CREATE INDEX "{{prefix}}ip_lists_ip_type_idx" ON "{{ip_lists}}" ("ip_type");
CREATE INDEX "{{prefix}}ip_lists_ip_updated_at_idx" ON "{{ip_lists}}" ("updated_at");
CREATE INDEX "{{prefix}}ip_lists_ip_deleted_at_idx" ON "{{ip_lists}}" ("deleted_at");
CREATE INDEX "{{prefix}}ip_lists_first_last_idx" ON "{{ip_lists}}" ("first", "last");
INSERT INTO {{schema_version}} (version) VALUES (29);
`
`
sqliteV19DownSQL = `DROP TABLE "{{shared_sessions}}";`
)
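The "{{name}}" tokens in the SQL constants above are placeholders that are swapped for the configured table names and prefix before the statements run. A minimal sketch of that substitution, using a hypothetical replacePlaceholders helper rather than the project's own function:

package main

import (
	"fmt"
	"strings"
)

// replacePlaceholders swaps each {{token}} for its concrete table name
// before the SQL is executed; the map contents are illustrative.
func replacePlaceholders(sql string, tables map[string]string) string {
	for token, name := range tables {
		sql = strings.ReplaceAll(sql, "{{"+token+"}}", name)
	}
	return sql
}

func main() {
	stmt := `CREATE INDEX "{{prefix}}groups_updated_at_idx" ON "{{groups}}" ("updated_at");`
	fmt.Println(replacePlaceholders(stmt, map[string]string{
		"prefix": "sftpgo_",
		"groups": "sftpgo_groups",
	}))
}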
// SQLiteProvider defines the auth provider for SQLite database
@@ -192,30 +182,31 @@ func init() {
}
func initializeSQLiteProvider(basePath string) error {
var err error
var connectionString string
if config.ConnectionString == "" {
dbPath := config.Name
if !util.IsFileInputValid(dbPath) {
return fmt.Errorf("invalid database path: %q", dbPath)
return fmt.Errorf("invalid database path: %#v", dbPath)
}
if !filepath.IsAbs(dbPath) {
dbPath = filepath.Join(basePath, dbPath)
}
connectionString = fmt.Sprintf("file:%s?cache=shared&_foreign_keys=1", dbPath)
connectionString = fmt.Sprintf("file:%v?cache=shared&_foreign_keys=1", dbPath)
} else {
connectionString = config.ConnectionString
}
dbHandle, err := sql.Open("sqlite3", connectionString)
if err != nil {
providerLog(logger.LevelError, "error creating sqlite database handler, connection string: %q, error: %v",
connectionString, err)
return err
}
providerLog(logger.LevelDebug, "sqlite database handle created, connection string: %q", connectionString)
dbHandle.SetMaxOpenConns(1)
provider = &SQLiteProvider{dbHandle: dbHandle}
return executePragmaOptimize(dbHandle)
if err == nil {
providerLog(logger.LevelDebug, "sqlite database handle created, connection string: %#v", connectionString)
dbHandle.SetMaxOpenConns(1)
provider = &SQLiteProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelError, "error creating sqlite database handler, connection string: %#v, error: %v",
connectionString, err)
}
return err
}
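initializeSQLiteProvider builds a DSN of the form file:<path>?cache=shared&_foreign_keys=1 and caps the pool at a single connection, since SQLite allows only one writer at a time. A self-contained sketch of the same setup, assuming the mattn/go-sqlite3 driver that registers itself as "sqlite3"; the database path is an example:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	// Shared cache plus foreign key enforcement on every new connection,
	// matching the DSN shape shown above.
	dsn := fmt.Sprintf("file:%s?cache=shared&_foreign_keys=1", "/tmp/example.db")
	db, err := sql.Open("sqlite3", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// SQLite permits one writer; a single pooled connection avoids
	// "database is locked" errors under concurrent use.
	db.SetMaxOpenConns(1)
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}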
func (p *SQLiteProvider) checkAvailability() error {
@@ -246,14 +237,6 @@ func (p *SQLiteProvider) getUsedQuota(username string) (int, int64, int64, int64
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p *SQLiteProvider) getAdminSignature(username string) (string, error) {
return sqlCommonGetAdminSignature(username, p.dbHandle)
}
func (p *SQLiteProvider) getUserSignature(username string) (string, error) {
return sqlCommonGetUserSignature(username, p.dbHandle)
}
func (p *SQLiteProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
@@ -266,20 +249,20 @@ func (p *SQLiteProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *SQLiteProvider) userExists(username, role string) (User, error) {
return sqlCommonGetUserByUsername(username, role, p.dbHandle)
func (p *SQLiteProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
func (p *SQLiteProvider) addUser(user *User) error {
return p.normalizeError(sqlCommonAddUser(user, p.dbHandle), fieldUsername)
return sqlCommonAddUser(user, p.dbHandle)
}
func (p *SQLiteProvider) updateUser(user *User) error {
return p.normalizeError(sqlCommonUpdateUser(user, p.dbHandle), -1)
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p *SQLiteProvider) deleteUser(user User, softDelete bool) error {
return sqlCommonDeleteUser(user, softDelete, p.dbHandle)
func (p *SQLiteProvider) deleteUser(user User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p *SQLiteProvider) updateUserPassword(username, password string) error {
@@ -294,8 +277,8 @@ func (p *SQLiteProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p *SQLiteProvider) getUsers(limit int, offset int, order, role string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, role, p.dbHandle)
func (p *SQLiteProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) getUsersForQuotaCheck(toFetch map[string]bool) ([]User, error) {
@@ -317,7 +300,7 @@ func (p *SQLiteProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, er
}
func (p *SQLiteProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
return p.normalizeError(sqlCommonAddFolder(folder, p.dbHandle), fieldName)
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p *SQLiteProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
@@ -353,7 +336,7 @@ func (p *SQLiteProvider) groupExists(name string) (Group, error) {
}
func (p *SQLiteProvider) addGroup(group *Group) error {
return p.normalizeError(sqlCommonAddGroup(group, p.dbHandle), fieldName)
return sqlCommonAddGroup(group, p.dbHandle)
}
func (p *SQLiteProvider) updateGroup(group *Group) error {
@@ -373,11 +356,11 @@ func (p *SQLiteProvider) adminExists(username string) (Admin, error) {
}
func (p *SQLiteProvider) addAdmin(admin *Admin) error {
return p.normalizeError(sqlCommonAddAdmin(admin, p.dbHandle), fieldUsername)
return sqlCommonAddAdmin(admin, p.dbHandle)
}
func (p *SQLiteProvider) updateAdmin(admin *Admin) error {
return p.normalizeError(sqlCommonUpdateAdmin(admin, p.dbHandle), -1)
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *SQLiteProvider) deleteAdmin(admin Admin) error {
@@ -401,11 +384,11 @@ func (p *SQLiteProvider) apiKeyExists(keyID string) (APIKey, error) {
}
func (p *SQLiteProvider) addAPIKey(apiKey *APIKey) error {
return p.normalizeError(sqlCommonAddAPIKey(apiKey, p.dbHandle), -1)
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) updateAPIKey(apiKey *APIKey) error {
return p.normalizeError(sqlCommonUpdateAPIKey(apiKey, p.dbHandle), -1)
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) deleteAPIKey(apiKey APIKey) error {
@@ -429,11 +412,11 @@ func (p *SQLiteProvider) shareExists(shareID, username string) (Share, error) {
}
func (p *SQLiteProvider) addShare(share *Share) error {
return p.normalizeError(sqlCommonAddShare(share, p.dbHandle), fieldName)
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *SQLiteProvider) updateShare(share *Share) error {
return p.normalizeError(sqlCommonUpdateShare(share, p.dbHandle), -1)
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *SQLiteProvider) deleteShare(share Share) error {
@@ -520,170 +503,6 @@ func (p *SQLiteProvider) cleanupSharedSessions(sessionType SessionType, before i
return sqlCommonCleanupSessions(sessionType, before, p.dbHandle)
}
func (p *SQLiteProvider) getEventActions(limit, offset int, order string, minimal bool) ([]BaseEventAction, error) {
return sqlCommonGetEventActions(limit, offset, order, minimal, p.dbHandle)
}
func (p *SQLiteProvider) dumpEventActions() ([]BaseEventAction, error) {
return sqlCommonDumpEventActions(p.dbHandle)
}
func (p *SQLiteProvider) eventActionExists(name string) (BaseEventAction, error) {
return sqlCommonGetEventActionByName(name, p.dbHandle)
}
func (p *SQLiteProvider) addEventAction(action *BaseEventAction) error {
return p.normalizeError(sqlCommonAddEventAction(action, p.dbHandle), fieldName)
}
func (p *SQLiteProvider) updateEventAction(action *BaseEventAction) error {
return sqlCommonUpdateEventAction(action, p.dbHandle)
}
func (p *SQLiteProvider) deleteEventAction(action BaseEventAction) error {
return sqlCommonDeleteEventAction(action, p.dbHandle)
}
func (p *SQLiteProvider) getEventRules(limit, offset int, order string) ([]EventRule, error) {
return sqlCommonGetEventRules(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) dumpEventRules() ([]EventRule, error) {
return sqlCommonDumpEventRules(p.dbHandle)
}
func (p *SQLiteProvider) getRecentlyUpdatedRules(after int64) ([]EventRule, error) {
return sqlCommonGetRecentlyUpdatedRules(after, p.dbHandle)
}
func (p *SQLiteProvider) eventRuleExists(name string) (EventRule, error) {
return sqlCommonGetEventRuleByName(name, p.dbHandle)
}
func (p *SQLiteProvider) addEventRule(rule *EventRule) error {
return p.normalizeError(sqlCommonAddEventRule(rule, p.dbHandle), fieldName)
}
func (p *SQLiteProvider) updateEventRule(rule *EventRule) error {
return sqlCommonUpdateEventRule(rule, p.dbHandle)
}
func (p *SQLiteProvider) deleteEventRule(rule EventRule, softDelete bool) error {
return sqlCommonDeleteEventRule(rule, softDelete, p.dbHandle)
}
func (p *SQLiteProvider) getTaskByName(name string) (Task, error) {
return sqlCommonGetTaskByName(name, p.dbHandle)
}
func (p *SQLiteProvider) addTask(name string) error {
return sqlCommonAddTask(name, p.dbHandle)
}
func (p *SQLiteProvider) updateTask(name string, version int64) error {
return sqlCommonUpdateTask(name, version, p.dbHandle)
}
func (p *SQLiteProvider) updateTaskTimestamp(name string) error {
return sqlCommonUpdateTaskTimestamp(name, p.dbHandle)
}
func (*SQLiteProvider) addNode() error {
return ErrNotImplemented
}
func (*SQLiteProvider) getNodeByName(_ string) (Node, error) {
return Node{}, ErrNotImplemented
}
func (*SQLiteProvider) getNodes() ([]Node, error) {
return nil, ErrNotImplemented
}
func (*SQLiteProvider) updateNodeTimestamp() error {
return ErrNotImplemented
}
func (*SQLiteProvider) cleanupNodes() error {
return ErrNotImplemented
}
func (p *SQLiteProvider) roleExists(name string) (Role, error) {
return sqlCommonGetRoleByName(name, p.dbHandle)
}
func (p *SQLiteProvider) addRole(role *Role) error {
return p.normalizeError(sqlCommonAddRole(role, p.dbHandle), fieldName)
}
func (p *SQLiteProvider) updateRole(role *Role) error {
return sqlCommonUpdateRole(role, p.dbHandle)
}
func (p *SQLiteProvider) deleteRole(role Role) error {
return sqlCommonDeleteRole(role, p.dbHandle)
}
func (p *SQLiteProvider) getRoles(limit int, offset int, order string, minimal bool) ([]Role, error) {
return sqlCommonGetRoles(limit, offset, order, minimal, p.dbHandle)
}
func (p *SQLiteProvider) dumpRoles() ([]Role, error) {
return sqlCommonDumpRoles(p.dbHandle)
}
func (p *SQLiteProvider) ipListEntryExists(ipOrNet string, listType IPListType) (IPListEntry, error) {
return sqlCommonGetIPListEntry(ipOrNet, listType, p.dbHandle)
}
func (p *SQLiteProvider) addIPListEntry(entry *IPListEntry) error {
return p.normalizeError(sqlCommonAddIPListEntry(entry, p.dbHandle), fieldIPNet)
}
func (p *SQLiteProvider) updateIPListEntry(entry *IPListEntry) error {
return sqlCommonUpdateIPListEntry(entry, p.dbHandle)
}
func (p *SQLiteProvider) deleteIPListEntry(entry IPListEntry, softDelete bool) error {
return sqlCommonDeleteIPListEntry(entry, softDelete, p.dbHandle)
}
func (p *SQLiteProvider) getIPListEntries(listType IPListType, filter, from, order string, limit int) ([]IPListEntry, error) {
return sqlCommonGetIPListEntries(listType, filter, from, order, limit, p.dbHandle)
}
func (p *SQLiteProvider) getRecentlyUpdatedIPListEntries(after int64) ([]IPListEntry, error) {
return sqlCommonGetRecentlyUpdatedIPListEntries(after, p.dbHandle)
}
func (p *SQLiteProvider) dumpIPListEntries() ([]IPListEntry, error) {
return sqlCommonDumpIPListEntries(p.dbHandle)
}
func (p *SQLiteProvider) countIPListEntries(listType IPListType) (int64, error) {
return sqlCommonCountIPListEntries(listType, p.dbHandle)
}
func (p *SQLiteProvider) getListEntriesForIP(ip string, listType IPListType) ([]IPListEntry, error) {
return sqlCommonGetListEntriesForIP(ip, listType, p.dbHandle)
}
func (p *SQLiteProvider) getConfigs() (Configs, error) {
return sqlCommonGetConfigs(p.dbHandle)
}
func (p *SQLiteProvider) setConfigs(configs *Configs) error {
return sqlCommonSetConfigs(configs, p.dbHandle)
}
func (p *SQLiteProvider) setFirstDownloadTimestamp(username string) error {
return sqlCommonSetFirstDownloadTimestamp(username, p.dbHandle)
}
func (p *SQLiteProvider) setFirstUploadTimestamp(username string) error {
return sqlCommonSetFirstUploadTimestamp(username, p.dbHandle)
}
func (p *SQLiteProvider) close() error {
return p.dbHandle.Close()
}
@@ -701,10 +520,20 @@ func (p *SQLiteProvider) initializeDatabase() error {
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
logger.InfoToConsole("creating initial database schema, version 29")
providerLog(logger.LevelInfo, "creating initial database schema, version 29")
sql := sqlReplaceAll(sqliteInitialSQL)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 29, true)
logger.InfoToConsole("creating initial database schema, version 15")
providerLog(logger.LevelInfo, "creating initial database schema, version 15")
initialSQL := strings.ReplaceAll(sqliteInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{api_keys}}", sqlTableAPIKeys)
initialSQL = strings.ReplaceAll(initialSQL, "{{shares}}", sqlTableShares)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_events}}", sqlTableDefenderEvents)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_hosts}}", sqlTableDefenderHosts)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 15, true)
}
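On this branch the initial schema is recorded as version 15 (main writes 29), so the migration code below always knows its starting point. A hedged sketch of the version bookkeeping this implies, assuming a one-row version table; the table and helper names here are illustrative, not the project's:

package dbversion

import (
	"context"
	"database/sql"
)

// getSchemaVersion reads the single row the migrations maintain; the table
// name stands in for whatever sqlTableSchemaVersion resolves to.
func getSchemaVersion(ctx context.Context, db *sql.DB) (int, error) {
	var version int
	err := db.QueryRowContext(ctx, "SELECT version FROM schema_version LIMIT 1").Scan(&version)
	return version, err
}

// setSchemaVersion records the new version once a migration step succeeds.
func setSchemaVersion(ctx context.Context, db *sql.DB, version int) error {
	_, err := db.ExecContext(ctx, "UPDATE schema_version SET version = ?", version)
	return err
}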
func (p *SQLiteProvider) migrateDatabase() error { //nolint:dupl
@@ -715,22 +544,30 @@ func (p *SQLiteProvider) migrateDatabase() error { //nolint:dupl
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %d", version)
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
case version < 29:
err = errSchemaVersionTooOld(version)
case version < 15:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 15:
return updateSQLiteDatabaseFromV15(p.dbHandle)
case version == 16:
return updateSQLiteDatabaseFromV16(p.dbHandle)
case version == 17:
return updateSQLiteDatabaseFromV17(p.dbHandle)
case version == 18:
return updateSQLiteDatabaseFromV18(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelError, "database schema version %d is newer than the supported one: %d", version,
providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database schema version %d is newer than the supported one: %d", version,
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("database schema version not handled: %d", version)
return fmt.Errorf("database version not handled: %v", version)
}
}
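migrateDatabase dispatches on the stored version, and each updateSQLiteDatabaseFromVN helper below applies one step and then delegates to the next, so any supported starting version walks forward to the latest schema. The same idea can be phrased as a data-driven loop; a sketch under assumed names, not the project's code:

package migrate

import "database/sql"

// step applies the schema change that moves the database from version v to v+1.
type step func(*sql.DB) error

// upgrade runs every step between the current and target versions in order,
// stopping at the first failure so the recorded version stays consistent.
func upgrade(db *sql.DB, current, target int, steps map[int]step) error {
	for v := current; v < target; v++ {
		if err := steps[v](db); err != nil {
			return err
		}
	}
	return nil
}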
@@ -744,8 +581,16 @@ func (p *SQLiteProvider) revertDatabase(targetVersion int) error {
}
switch dbVersion.Version {
case 16:
return downgradeSQLiteDatabaseFromV16(p.dbHandle)
case 17:
return downgradeSQLiteDatabaseFromV17(p.dbHandle)
case 18:
return downgradeSQLiteDatabaseFromV18(p.dbHandle)
case 19:
return downgradeSQLiteDatabaseFromV19(p.dbHandle)
default:
return fmt.Errorf("database schema version not handled: %d", dbVersion.Version)
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
@@ -754,42 +599,145 @@ func (p *SQLiteProvider) resetDatabase() error {
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0, false)
}
func (p *SQLiteProvider) normalizeError(err error, fieldType int) error {
if err == nil {
return nil
}
if e, ok := err.(sqlite3.Error); ok {
switch e.ExtendedCode {
case 1555, 2067:
var message string
switch fieldType {
case fieldUsername:
message = util.I18nErrorDuplicatedUsername
case fieldIPNet:
message = util.I18nErrorDuplicatedIPNet
default:
message = util.I18nErrorDuplicatedName
}
return util.NewI18nError(
fmt.Errorf("%w: %s", ErrDuplicatedKey, err.Error()),
message,
)
case 787:
return fmt.Errorf("%w: %s", ErrForeignKeyViolated, err.Error())
}
}
return err
}
func updateSQLiteDatabaseFromV15(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom15To16(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV16(dbHandle)
}
func executePragmaOptimize(dbHandle *sql.DB) error {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
_, err := dbHandle.ExecContext(ctx, "PRAGMA optimize;")
return err
}
func updateSQLiteDatabaseFromV16(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom16To17(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV17(dbHandle)
}
func updateSQLiteDatabaseFromV17(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom17To18(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV18(dbHandle)
}
func updateSQLiteDatabaseFromV18(dbHandle *sql.DB) error {
return updateSQLiteDatabaseFrom18To19(dbHandle)
}
func downgradeSQLiteDatabaseFromV16(dbHandle *sql.DB) error {
return downgradeSQLiteDatabaseFrom16To15(dbHandle)
}
func downgradeSQLiteDatabaseFromV17(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom17To16(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV16(dbHandle)
}
func downgradeSQLiteDatabaseFromV18(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom18To17(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV17(dbHandle)
}
func downgradeSQLiteDatabaseFromV19(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom19To18(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV18(dbHandle)
}
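Each downgrade helper mirrors its upgrade counterpart, so a database at version 19 can be walked back to 15 one step at a time. A quick sanity check of that pairing, sketched against the helpers defined in this file and meant for a scratch database, not production data:

// roundTrip upgrades a version-15 database to 19 and immediately walks it
// back down; both calls should succeed if the up/down steps stay in sync.
func roundTrip(dbHandle *sql.DB) error {
	if err := updateSQLiteDatabaseFromV15(dbHandle); err != nil {
		return err
	}
	return downgradeSQLiteDatabaseFromV19(dbHandle)
}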
func updateSQLiteDatabaseFrom15To16(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 15 -> 16")
providerLog(logger.LevelInfo, "updating database version: 15 -> 16")
sql := strings.ReplaceAll(sqliteV16SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 16, true)
}
func updateSQLiteDatabaseFrom16To17(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 16 -> 17")
providerLog(logger.LevelInfo, "updating database version: 16 -> 17")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV17SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 17, true); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
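The 16 -> 17 step turns foreign key enforcement off before running the rebuild SQL and back on afterwards, because the migration drops and renames tables that other tables reference. The rebuild itself follows SQLite's standard pattern for changing constraints: create a replacement table, copy the rows, drop the original, rename. A generic sketch of that pattern with placeholder names:

// rebuildTable demonstrates the create/copy/drop/rename sequence used by
// the v17 SQL above; "mapping" and its columns are placeholders.
func rebuildTable(db *sql.DB) error {
	stmts := []string{
		`CREATE TABLE "new__mapping" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "user_id" integer NOT NULL)`,
		`INSERT INTO "new__mapping" ("id", "user_id") SELECT "id", "user_id" FROM "mapping"`,
		`DROP TABLE "mapping"`,
		`ALTER TABLE "new__mapping" RENAME TO "mapping"`,
	}
	for _, stmt := range stmts {
		if _, err := db.Exec(stmt); err != nil {
			return err
		}
	}
	return nil
}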
func updateSQLiteDatabaseFrom17To18(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 17 -> 18")
providerLog(logger.LevelInfo, "updating database version: 17 -> 18")
if err := importGCSCredentials(); err != nil {
return err
}
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 18, true)
}
func updateSQLiteDatabaseFrom18To19(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 18 -> 19")
providerLog(logger.LevelInfo, "updating database version: 18 -> 19")
sql := strings.ReplaceAll(sqliteV19SQL, "{{shared_sessions}}", sqlTableSharedSessions)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 19, true)
}
func downgradeSQLiteDatabaseFrom16To15(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 16 -> 15")
providerLog(logger.LevelInfo, "downgrading database version: 16 -> 15")
sql := strings.ReplaceAll(sqliteV16DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15, false)
}
func downgradeSQLiteDatabaseFrom17To16(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 17 -> 16")
providerLog(logger.LevelInfo, "downgrading database version: 17 -> 16")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV17DownSQL, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 16, false); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
func downgradeSQLiteDatabaseFrom18To17(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 18 -> 17")
providerLog(logger.LevelInfo, "downgrading database version: 18 -> 17")
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 17, false)
}
func downgradeSQLiteDatabaseFrom19To18(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 19 -> 18")
providerLog(logger.LevelInfo, "downgrading database version: 19 -> 18")
sql := strings.ReplaceAll(sqliteV19DownSQL, "{{shared_sessions}}", sqlTableSharedSessions)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 18, false)
}
func setPragmaFK(dbHandle *sql.DB, value string) error {
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
sql := fmt.Sprintf("PRAGMA foreign_keys=%v;", value)
_, err := dbHandle.ExecContext(ctx, sql)
return err
}
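setPragmaFK simply executes PRAGMA foreign_keys=<value> on the handle. Note that SQLite treats this pragma as a no-op inside an open transaction, which is why the migrations toggle it before starting a batch and restore it afterwards rather than inside one. A small usage sketch built on the function above:

// runWithFKDisabled wraps a migration step with the pragma toggling used
// by the 16<->17 steps; the step is expected to manage its own transaction.
func runWithFKDisabled(dbHandle *sql.DB, step func(*sql.DB) error) error {
	if err := setPragmaFK(dbHandle, "OFF"); err != nil {
		return err
	}
	if err := step(dbHandle); err != nil {
		return err
	}
	return setPragmaFK(dbHandle, "ON")
}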

Some files were not shown because too many files have changed in this diff.