Compare commits
41 commits
Author | SHA1 | Date
---|---|---
| | 2e9919d1a5 | |
| | c40a48c6f3 | |
| | c7073f90cb | |
| | 80c8486d24 | |
| | cf9d081495 | |
| | 05ed7b6aa4 | |
| | 68a4bbd10c | |
| | 1b21c19a78 | |
| | ee600c716b | |
| | 6b77b55068 | |
| | 5a45af76f3 | |
| | 7959737442 | |
| | d3fee39388 | |
| | 97122ef06c | |
| | 8a6c2265a4 | |
| | b65dae89e8 | |
| | 4ed6e96c7b | |
| | 6d3ff5a8ad | |
| | a7921500f5 | |
| | c3188a2b5a | |
| | 3f38f44d42 | |
| | 0a3122f03e | |
| | 8cd9e886f3 | |
| | 016e285745 | |
| | 467708dc1c | |
| | ef626befb1 | |
| | f61456ce87 | |
| | ba3548c2c3 | |
| | 0e2d673889 | |
| | bf03eb2a88 | |
| | 3603493146 | |
| | 6a20e7411b | |
| | 0e1d8fc4d9 | |
| | 08a7f08d6e | |
| | 2c8968b5dc | |
| | f65c973c99 | |
| | 85c2d474d9 | |
| | 6c6a6e3d16 | |
| | 92122bd962 | |
| | 112306b9a2 | |
| | 92af6efc0c | |
646 changed files with 83,253 additions and 147,669 deletions

.cirrus.yml (31 changes)

@@ -1,31 +0,0 @@
-freebsd_task:
-  name: FreeBSD
-
-  matrix:
-    - name: FreeBSD 14.0
-      freebsd_instance:
-        image_family: freebsd-14-0
-
-  pkginstall_script:
-    - pkg update -f
-    - pkg install -y go122
-    - pkg install -y git
-
-  setup_script:
-    - ln -s /usr/local/bin/go122 /usr/local/bin/go
-    - pw groupadd sftpgo
-    - pw useradd sftpgo -g sftpgo -w none -m
-    - mkdir /home/sftpgo/sftpgo
-    - cp -R . /home/sftpgo/sftpgo
-    - chown -R sftpgo:sftpgo /home/sftpgo/sftpgo
-
-  compile_script:
-    - su sftpgo -c 'cd ~/sftpgo && go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo'
-    - su sftpgo -c 'cd ~/sftpgo/tests/eventsearcher && go build -trimpath -ldflags "-s -w" -o eventsearcher'
-    - su sftpgo -c 'cd ~/sftpgo/tests/ipfilter && go build -trimpath -ldflags "-s -w" -o ipfilter'
-
-  check_script:
-    - su sftpgo -c 'cd ~/sftpgo && ./sftpgo initprovider && ./sftpgo resetprovider --force'
-
-  test_script:
-    - su sftpgo -c 'cd ~/sftpgo && go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 20m ./... -coverprofile=coverage.txt -covermode=atomic'
.github/ISSUE_TEMPLATE/bug_report.yml (108 changes, vendored)

@@ -1,108 +0,0 @@
-name: Open Source Bug Report
-description: "Submit a report and help us improve SFTPGo"
-title: "[Bug]: "
-labels: ["bug"]
-body:
-  - type: markdown
-    attributes:
-      value: |
-        ### 👍 Thank you for contributing to our project!
-        Before asking for help please check the [support policy](https://github.com/drakkan/sftpgo#support-policy).
-        If you are a commercial user or a project sponsor please contact us using the dedicated [email address](mailto:support@sftpgo.com).
-  - type: checkboxes
-    id: before-posting
-    attributes:
-      label: "⚠️ This issue respects the following points: ⚠️"
-      description: All conditions are **required**.
-      options:
-        - label: This is a **bug**, not a question or a configuration issue.
-          required: true
-        - label: This issue is **not** already reported on Github _(I've searched it)_.
-          required: true
-  - type: textarea
-    id: bug-description
-    attributes:
-      label: Bug description
-      description: |
-        Provide a description of the bug you're experiencing.
-        Don't just expect someone will guess what your specific problem is and provide full details.
-    validations:
-      required: true
-  - type: textarea
-    id: reproduce
-    attributes:
-      label: Steps to reproduce
-      description: |
-        Describe the steps to reproduce the bug.
-        The better your description is the fastest you'll get an _(accurate)_ answer.
-      value: |
-        1.
-        2.
-        3.
-    validations:
-      required: true
-  - type: textarea
-    id: expected-behavior
-    attributes:
-      label: Expected behavior
-      description: Describe what you expected to happen instead.
-    validations:
-      required: true
-  - type: input
-    id: version
-    attributes:
-      label: SFTPGo version
-    validations:
-      required: true
-  - type: input
-    id: data-provider
-    attributes:
-      label: Data provider
-    validations:
-      required: true
-  - type: dropdown
-    id: install-method
-    attributes:
-      label: Installation method
-      description: |
-        Select installation method you've used.
-        _Describe the method in the "Additional info" section if you chose "Other"._
-      options:
-        - "Community Docker image"
-        - "Community Deb package"
-        - "Community RPM package"
-        - "Other"
-    validations:
-      required: true
-  - type: textarea
-    attributes:
-      label: Configuration
-      description: "Describe your customizations to the configuration: both config file changes and overrides via environment variables"
-      value: config
-    validations:
-      required: true
-  - type: textarea
-    id: logs
-    attributes:
-      label: Relevant log output
-      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
-      render: shell
-  - type: dropdown
-    id: usecase
-    attributes:
-      label: What are you using SFTPGo for?
-      description: We'd like to understand your SFTPGo usecase more
-      multiple: true
-      options:
-        - "Private user, home usecase (home backup/VPS)"
-        - "Professional user, 1 person business"
-        - "Small business (3-person firm with file exchange?)"
-        - "Medium business"
-        - "Enterprise"
-    validations:
-      required: true
-  - type: textarea
-    id: additional-info
-    attributes:
-      label: Additional info
-      description: Any additional information related to the issue.
.github/ISSUE_TEMPLATE/config.yml (9 changes, vendored)

@@ -1,9 +0,0 @@
-blank_issues_enabled: false
-contact_links:
-  - name: Commercial Support
-    url: https://sftpgo.com/
-    about: >
-      If you need Professional support, so your reports are prioritized and resolved more quickly.
-  - name: GitHub Community Discussions
-    url: https://github.com/drakkan/sftpgo/discussions
-    about: Please ask and answer questions here.
.github/ISSUE_TEMPLATE/feature_request.yml (42 changes, vendored)

@@ -1,42 +0,0 @@
-name: 🚀 Feature request
-description: Suggest an idea for SFTPGo
-labels: ["suggestion"]
-body:
-  - type: textarea
-    attributes:
-      label: Is your feature request related to a problem? Please describe.
-      description: A clear and concise description of what the problem is.
-    validations:
-      required: false
-  - type: textarea
-    attributes:
-      label: Describe the solution you'd like
-      description: A clear and concise description of what you want to happen.
-    validations:
-      required: true
-  - type: textarea
-    attributes:
-      label: Describe alternatives you've considered
-      description: A clear and concise description of any alternative solutions or features you've considered.
-    validations:
-      required: false
-  - type: dropdown
-    id: usecase
-    attributes:
-      label: What are you using SFTPGo for?
-      description: We'd like to understand your SFTPGo usecase more
-      multiple: true
-      options:
-        - "Private user, home usecase (home backup/VPS)"
-        - "Professional user, 1 person business"
-        - "Small business (3-person firm with file exchange?)"
-        - "Medium business"
-        - "Enterprise"
-    validations:
-      required: true
-  - type: textarea
-    attributes:
-      label: Additional context
-      description: Add any other context or screenshots about the feature request here.
-    validations:
-      required: false
.github/PULL_REQUEST_TEMPLATE.md (5 changes, vendored)

@@ -1,5 +0,0 @@
-# Checklist for Pull Requests
-
-- [ ] Have you signed the [Contributor License Agreement](https://sftpgo.com/cla.html)?
-
----
.github/dependabot.yml (10 changes, vendored)

@@ -1,11 +1,11 @@
 version: 2

 updates:
-  #- package-ecosystem: "gomod"
-  #  directory: "/"
-  #  schedule:
-  #    interval: "weekly"
-  #  open-pull-requests-limit: 2
+  - package-ecosystem: "gomod"
+    directory: "/"
+    schedule:
+      interval: "weekly"
+    open-pull-requests-limit: 2

   - package-ecosystem: "docker"
     directory: "/"
.github/workflows/codeql.yml (36 changes, vendored)

@@ -1,36 +0,0 @@
-name: "Code scanning - action"
-
-on:
-  push:
-  pull_request:
-  schedule:
-    - cron: '30 1 * * 6'
-
-jobs:
-  CodeQL-Build:
-    runs-on: ubuntu-latest
-
-    permissions:
-      security-events: write
-
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-
-      - name: Set up Go
-        uses: actions/setup-go@v5
-        with:
-          go-version: '1.22'
-
-      - name: Initialize CodeQL
-        uses: github/codeql-action/init@v3
-        with:
-          languages: go
-
-      - name: Autobuild
-        uses: github/codeql-action/autobuild@v3
-
-      - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v3
.github/workflows/development.yml (233 changes, vendored)

@@ -2,7 +2,7 @@ name: CI

 on:
   push:
-    branches: [main]
+    branches: [2.2.x]
   pull_request:

 jobs:
@@ -11,45 +11,40 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        go: ['1.22']
-        os: [ubuntu-latest, macos-latest]
+        go: [1.17]
+        os: [ubuntu-18.04, macos-10.15]
         upload-coverage: [true]
         include:
-          - go: '1.22'
-            os: windows-latest
+          - go: 1.17
+            os: windows-2019
            upload-coverage: false

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Set up Go
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.go }}

      - name: Build for Linux/macOS x86_64
        if: startsWith(matrix.os, 'windows-') != true
        run: |
-          go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
+          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
          cd tests/eventsearcher
          go build -trimpath -ldflags "-s -w" -o eventsearcher
          cd -
          cd tests/ipfilter
          go build -trimpath -ldflags "-s -w" -o ipfilter
          cd -
          ./sftpgo initprovider
          ./sftpgo resetprovider --force

      - name: Build for macOS arm64
        if: startsWith(matrix.os, 'macos-') == true
-        run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
+        run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64

      - name: Build for Windows
        if: startsWith(matrix.os, 'windows-')
        run: |
-          $GIT_COMMIT = (git describe --always --abbrev=8 --dirty) | Out-String
+          $GIT_COMMIT = (git describe --always --dirty) | Out-String
          $DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
          $LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
          $REV_LIST=$LATEST_TAG+"..HEAD"
@@ -57,55 +52,50 @@ jobs:
          $FILE_VERSION = $LATEST_TAG.substring(1) + "." + $COMMITS_FROM_TAG
          go install github.com/tc-hib/go-winres@latest
          go-winres simply --arch amd64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
-          go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o sftpgo.exe
+          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
          cd tests/eventsearcher
          go build -trimpath -ldflags "-s -w" -o eventsearcher.exe
          cd ../..
          cd tests/ipfilter
          go build -trimpath -ldflags "-s -w" -o ipfilter.exe
          cd ../..
          mkdir arm64
          $Env:CGO_ENABLED='0'
          $Env:GOOS='windows'
          $Env:GOARCH='arm64'
          go-winres simply --arch arm64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
-          go build -trimpath -tags nopgxregisterdefaulttypes,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
+          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
          mkdir x86
          $Env:GOARCH='386'
          go-winres simply --arch 386 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
-          go build -trimpath -tags nopgxregisterdefaulttypes,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
+          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
          Remove-Item Env:\CGO_ENABLED
          Remove-Item Env:\GOOS
          Remove-Item Env:\GOARCH

      - name: Run test cases using SQLite provider
-        run: go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic
+        run: go test -v -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic

      - name: Upload coverage to Codecov
        if: ${{ matrix.upload-coverage }}
-        uses: codecov/codecov-action@v4
+        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.txt
          fail_ci_if_error: false
          token: ${{ secrets.CODECOV_TOKEN }}

      - name: Run test cases using bolt provider
        run: |
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 2m ./internal/config -covermode=atomic
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 5m ./internal/common -covermode=atomic
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 5m ./internal/httpd -covermode=atomic
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 8m ./internal/sftpd -covermode=atomic
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 5m ./internal/ftpd -covermode=atomic
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 5m ./internal/webdavd -covermode=atomic
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 2m ./internal/telemetry -covermode=atomic
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 2m ./internal/mfa -covermode=atomic
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 2m ./internal/command -covermode=atomic
+          go test -v -p 1 -timeout 2m ./config -covermode=atomic
+          go test -v -p 1 -timeout 5m ./common -covermode=atomic
+          go test -v -p 1 -timeout 5m ./httpd -covermode=atomic
+          go test -v -p 1 -timeout 8m ./sftpd -covermode=atomic
+          go test -v -p 1 -timeout 5m ./ftpd -covermode=atomic
+          go test -v -p 1 -timeout 5m ./webdavd -covermode=atomic
+          go test -v -p 1 -timeout 2m ./telemetry -covermode=atomic
+          go test -v -p 1 -timeout 2m ./mfa -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: bolt
          SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'

      - name: Run test cases using memory provider
-        run: go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
+        run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: memory
          SFTPGO_DATA_PROVIDER__NAME: ''
@@ -149,10 +139,10 @@ jobs:
          [IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
          certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
          rm "$CERT_PATH"
-          & 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
-          & 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
-          & 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
-          $INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
+          & 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
+          & 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
+          & 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
+          $INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
          iscc "$INNO_S" .\windows-installer\sftpgo.iss

          rm .\output\sftpgo.exe
@@ -178,21 +168,21 @@ jobs:

      - name: Upload Windows installer x86_64 artifact
        if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
        with:
          name: sftpgo_windows_installer_x86_64
          path: ./sftpgo_windows_x86_64.exe

      - name: Upload Windows installer arm64 artifact
        if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
        with:
          name: sftpgo_windows_installer_arm64
          path: ./sftpgo_windows_arm64.exe

      - name: Upload Windows installer x86 artifact
        if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
        with:
          name: sftpgo_windows_installer_x86
          path: ./sftpgo_windows_x86.exe
@@ -218,34 +208,41 @@ jobs:

      - name: Upload build artifact
        if: startsWith(matrix.os, 'ubuntu-') != true
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
        with:
          name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
          path: output

-  test-build-flags:
-    name: Test build flags
-    runs-on: ubuntu-latest
+  test-goarch-386:
+    name: Run test cases on 32-bit arch
+    runs-on: ubuntu-18.04

    steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3

      - name: Set up Go
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v3
        with:
-          go-version: '1.22'
+          go-version: 1.17

      - name: Build
        run: |
-          go build -trimpath -tags nopgxregisterdefaulttypes,nogcs,nos3,noportable,nobolt,nomysql,nopgsql,nosqlite,nometrics,noazblob,unixcrypt -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
-          ./sftpgo -v
-          cp -r openapi static templates internal/bundle/
-          go build -trimpath -tags nopgxregisterdefaulttypes,bundle -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
-          ./sftpgo -v
+          cd tests/eventsearcher
+          go build -trimpath -ldflags "-s -w" -o eventsearcher
+          cd -
+        env:
+          GOARCH: 386
+
+      - name: Run test cases
+        run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
+        env:
+          SFTPGO_DATA_PROVIDER__DRIVER: memory
+          SFTPGO_DATA_PROVIDER__NAME: ''
+          GOARCH: 386

  test-postgresql-mysql-crdb:
    name: Test with PgSQL/MySQL/Cockroach
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-18.04

    services:
      postgres:
@@ -269,64 +266,30 @@ jobs:
          MYSQL_USER: sftpgo
          MYSQL_PASSWORD: sftpgo
        options: >-
-          --health-cmd "mariadb-admin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
+          --health-cmd "mysqladmin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 6
        ports:
          - 3307:3306

-      mysql:
-        image: mysql:latest
-        env:
-          MYSQL_ROOT_PASSWORD: mysql
-          MYSQL_DATABASE: sftpgo
-          MYSQL_USER: sftpgo
-          MYSQL_PASSWORD: sftpgo
-        options: >-
-          --health-cmd "mysqladmin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
-          --health-interval 10s
-          --health-timeout 5s
-          --health-retries 6
-        ports:
-          - 3308:3306
-
    steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3

      - name: Set up Go
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v3
        with:
-          go-version: '1.22'
+          go-version: 1.17

      - name: Build
        run: |
-          go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
          cd tests/eventsearcher
          go build -trimpath -ldflags "-s -w" -o eventsearcher
          cd -
          cd tests/ipfilter
          go build -trimpath -ldflags "-s -w" -o ipfilter
          cd -

-      - name: Run tests using MySQL provider
-        run: |
-          ./sftpgo initprovider
-          ./sftpgo resetprovider --force
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
-        env:
-          SFTPGO_DATA_PROVIDER__DRIVER: mysql
-          SFTPGO_DATA_PROVIDER__NAME: sftpgo
-          SFTPGO_DATA_PROVIDER__HOST: localhost
-          SFTPGO_DATA_PROVIDER__PORT: 3308
-          SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
-          SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
-
      - name: Run tests using PostgreSQL provider
        run: |
          ./sftpgo initprovider
          ./sftpgo resetprovider --force
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
+          go test -v -p 1 -timeout 15m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: postgresql
          SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -335,11 +298,9 @@ jobs:
          SFTPGO_DATA_PROVIDER__USERNAME: postgres
          SFTPGO_DATA_PROVIDER__PASSWORD: postgres

-      - name: Run tests using MariaDB provider
+      - name: Run tests using MySQL provider
        run: |
-          ./sftpgo initprovider
-          ./sftpgo resetprovider --force
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
+          go test -v -p 1 -timeout 15m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: mysql
          SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -347,16 +308,12 @@ jobs:
          SFTPGO_DATA_PROVIDER__PORT: 3307
          SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
          SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
-          SFTPGO_DATA_PROVIDER__SQL_TABLES_PREFIX: prefix_

      - name: Run tests using CockroachDB provider
        run: |
-          docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr :26257
-          sleep 10
+          docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr 0.0.0.0:26257
          docker exec crdb cockroach sql --insecure -e 'create database "sftpgo"'
-          ./sftpgo initprovider
-          ./sftpgo resetprovider --force
-          go test -v -tags nopgxregisterdefaulttypes -p 1 -timeout 15m ./... -covermode=atomic
+          go test -v -p 1 -timeout 15m ./... -covermode=atomic
          docker stop crdb
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: cockroachdb
@@ -365,65 +322,42 @@ jobs:
          SFTPGO_DATA_PROVIDER__PORT: 26257
          SFTPGO_DATA_PROVIDER__USERNAME: root
          SFTPGO_DATA_PROVIDER__PASSWORD:
-          SFTPGO_DATA_PROVIDER__TARGET_SESSION_ATTRS: any
          SFTPGO_DATA_PROVIDER__SQL_TABLES_PREFIX: prefix_

  build-linux-packages:
    name: Build Linux packages
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-18.04
    strategy:
      matrix:
        include:
          - arch: amd64
-            distro: ubuntu:18.04
-            go: latest
+            go: 1.17
            go-arch: amd64
          - arch: aarch64
            distro: ubuntu18.04
-            go: latest
+            go: go1.17.9
            go-arch: arm64
          - arch: ppc64le
            distro: ubuntu18.04
-            go: latest
+            go: go1.17.9
            go-arch: ppc64le
          - arch: armv7
            distro: ubuntu18.04
-            go: latest
+            go: go1.17.9
            go-arch: arm7
    steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

-      - name: Get commit SHA
-        id: get_commit
-        run: echo "COMMIT=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
-        shell: bash
+      - name: Set up Go
+        if: ${{ matrix.arch == 'amd64' }}
+        uses: actions/setup-go@v3
+        with:
+          go-version: ${{ matrix.go }}

      - name: Build on amd64
        if: ${{ matrix.arch == 'amd64' }}
        run: |
-          echo '#!/bin/bash' > build.sh
-          echo '' >> build.sh
-          echo 'set -e' >> build.sh
-          echo 'apt-get update -q -y' >> build.sh
-          echo 'apt-get install -q -y curl gcc' >> build.sh
-          if [ ${{ matrix.go }} == 'latest' ]
-          then
-            echo 'GO_VERSION=$(curl -L https://go.dev/VERSION?m=text | head -n 1)' >> build.sh
-          else
-            echo 'GO_VERSION=${{ matrix.go }}' >> build.sh
-          fi
-          echo 'GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}' >> build.sh
-          echo 'curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/${GO_VERSION}.linux-${GO_DOWNLOAD_ARCH}.tar.gz' >> build.sh
-          echo 'tar -C /usr/local -xzf go.tar.gz' >> build.sh
-          echo 'export PATH=$PATH:/usr/local/go/bin' >> build.sh
-          echo 'go version' >> build.sh
-          echo 'cd /usr/local/src' >> build.sh
-          echo 'go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_commit.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo' >> build.sh
-
-          chmod 755 build.sh
-          docker run --rm --name ubuntu-build --mount type=bind,source=`pwd`,target=/usr/local/src ${{ matrix.distro }} /usr/local/src/build.sh
+          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
          mkdir -p output/{init,bash_completion,zsh_completion}
          cp sftpgo.json output/
          cp -r templates output/
@@ -450,10 +384,10 @@ jobs:
          shell: /bin/bash
          install: |
            apt-get update -q -y
-            apt-get install -q -y curl gcc
+            apt-get install -q -y curl gcc git
            if [ ${{ matrix.go }} == 'latest' ]
            then
-              GO_VERSION=$(curl -L https://go.dev/VERSION?m=text | head -n 1)
+              GO_VERSION=$(curl -L https://go.dev/VERSION?m=text)
            else
              GO_VERSION=${{ matrix.go }}
            fi
@@ -466,12 +400,11 @@ jobs:
            tar -C /usr/local -xzf go.tar.gz
          run: |
            export PATH=$PATH:/usr/local/go/bin
            go version
            if [ ${{ matrix.arch}} == 'armv7' ]
            then
              export GOARM=7
            fi
-            go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_commit.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
+            go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
            mkdir -p output/{init,bash_completion,zsh_completion}
            cp sftpgo.json output/
            cp -r templates output/
@@ -485,7 +418,7 @@ jobs:
            cp sftpgo output/

      - name: Upload build artifact
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
        with:
          name: sftpgo-linux-${{ matrix.arch }}-go-${{ matrix.go }}
          path: output
@@ -497,16 +430,16 @@ jobs:
          cd pkgs
          ./build.sh
          PKG_VERSION=$(cat dist/version)
-          echo "pkg-version=${PKG_VERSION}" >> $GITHUB_OUTPUT
+          echo "::set-output name=pkg-version::${PKG_VERSION}"

      - name: Upload Debian Package
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-deb
          path: pkgs/dist/deb/*

      - name: Upload RPM Package
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-rpm
          path: pkgs/dist/rpm/*
@@ -516,11 +449,11 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Go
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v3
        with:
-          go-version: '1.22'
-      - uses: actions/checkout@v4
+          go-version: 1.17
+      - uses: actions/checkout@v3
      - name: Run golangci-lint
-        uses: golangci/golangci-lint-action@v6
+        uses: golangci/golangci-lint-action@v3
        with:
          version: latest
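Every build command in these workflows embeds version metadata through Go's `-X` link flag; on the main side the target package moves under `internal/` and the commit hash is shortened with `--abbrev=8`. A minimal standalone sketch of that pattern, using only flags and package paths that appear in the diff above (assumes it runs inside a checkout of the Go module):

```sh
# Sketch of the version-stamping pattern from the workflows above (main-branch side).
# -X sets a package-level string variable at link time; -s -w strip symbol/debug tables.
COMMIT=$(git describe --always --abbrev=8 --dirty)
DATE=$(date -u +%FT%TZ)
go build -trimpath -tags nopgxregisterdefaulttypes \
  -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT} -X github.com/drakkan/sftpgo/v2/internal/version.date=${DATE}" \
  -o sftpgo
```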
.github/workflows/docker.yml (64 changes, vendored)

@@ -5,7 +5,7 @@ on:
   # - cron: '0 4 * * *' # everyday at 4:00 AM UTC
   push:
     branches:
-      - main
+      - 2.2.x
     tags:
       - v*
   pull_request:
@@ -28,12 +28,9 @@ jobs:
          - os: ubuntu-latest
            docker_pkg: distroless
            optional_deps: false
-          - os: ubuntu-latest
-            docker_pkg: debian-plugins
-            optional_deps: true
    steps:
      - name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3

      - name: Gather image information
        id: info
@@ -42,7 +39,6 @@ jobs:
          DOCKERFILE=Dockerfile
          MINOR=""
          MAJOR=""
-          FEATURES="nopgxregisterdefaulttypes"
          if [ "${{ github.event_name }}" = "schedule" ]; then
            VERSION=nightly
          elif [[ $GITHUB_REF == refs/tags/* ]]; then
@@ -68,13 +64,6 @@ jobs:
            VERSION="${VERSION}-distroless"
            VERSION_SLIM="${VERSION}-slim"
            DOCKERFILE=Dockerfile.distroless
-            FEATURES="${FEATURES},nosqlite"
-          elif [[ $DOCKER_PKG == debian-plugins ]]; then
-            VERSION="${VERSION}-plugins"
-            VERSION_SLIM="${VERSION}-slim"
-            FEATURES="${FEATURES},unixcrypt"
-          elif [[ $DOCKER_PKG == debian ]]; then
-            FEATURES="${FEATURES},unixcrypt"
          fi
          DOCKER_IMAGES=("drakkan/sftpgo" "ghcr.io/drakkan/sftpgo")
          TAGS="${DOCKER_IMAGES[0]}:${VERSION}"
@@ -100,13 +89,6 @@ jobs:
            fi
            TAGS="${TAGS},${DOCKER_IMAGE}:distroless"
            TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:distroless-slim"
-          elif [[ $DOCKER_PKG == debian-plugins ]]; then
-            if [[ -n $MAJOR && -n $MINOR ]]; then
-              TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-plugins,${DOCKER_IMAGE}:${MAJOR}-plugins"
-              TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-plugins-slim,${DOCKER_IMAGE}:${MAJOR}-plugins-slim"
-            fi
-            TAGS="${TAGS},${DOCKER_IMAGE}:plugins"
-            TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:plugins-slim"
          else
            if [[ -n $MAJOR && -n $MINOR ]]; then
              TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-alpine,${DOCKER_IMAGE}:${MAJOR}-alpine"
@@ -119,43 +101,37 @@ jobs:
          done

          if [[ $OPTIONAL_DEPS == true ]]; then
-            echo "version=${VERSION}" >> $GITHUB_OUTPUT
-            echo "tags=${TAGS}" >> $GITHUB_OUTPUT
-            echo "full=true" >> $GITHUB_OUTPUT
+            echo ::set-output name=version::${VERSION}
+            echo ::set-output name=tags::${TAGS}
+            echo ::set-output name=full::true
          else
-            echo "version=${VERSION_SLIM}" >> $GITHUB_OUTPUT
-            echo "tags=${TAGS_SLIM}" >> $GITHUB_OUTPUT
-            echo "full=false" >> $GITHUB_OUTPUT
+            echo ::set-output name=version::${VERSION_SLIM}
+            echo ::set-output name=tags::${TAGS_SLIM}
+            echo ::set-output name=full::false
          fi
-          if [[ $DOCKER_PKG == debian-plugins ]]; then
-            echo "plugins=true" >> $GITHUB_OUTPUT
-          else
-            echo "plugins=false" >> $GITHUB_OUTPUT
-          fi
-          echo "dockerfile=${DOCKERFILE}" >> $GITHUB_OUTPUT
-          echo "features=${FEATURES}" >> $GITHUB_OUTPUT
-          echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
-          echo "sha=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
+          echo ::set-output name=dockerfile::${DOCKERFILE}
+          echo ::set-output name=created::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
+          echo ::set-output name=sha::${GITHUB_SHA::8}
        env:
          DOCKER_PKG: ${{ matrix.docker_pkg }}
          OPTIONAL_DEPS: ${{ matrix.optional_deps }}

      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
+        uses: docker/setup-qemu-action@v1

      - name: Set up builder
-        uses: docker/setup-buildx-action@v3
+        uses: docker/setup-buildx-action@v1
        id: builder

      - name: Login to Docker Hub
-        uses: docker/login-action@v3
+        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
        if: ${{ github.event_name != 'pull_request' }}

      - name: Login to GitHub Container Registry
-        uses: docker/login-action@v3
+        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
@@ -163,26 +139,24 @@ jobs:
        if: ${{ github.event_name != 'pull_request' }}

      - name: Build and push
-        uses: docker/build-push-action@v6
+        uses: docker/build-push-action@v2
        with:
          context: .
          builder: ${{ steps.builder.outputs.name }}
          file: ./${{ steps.info.outputs.dockerfile }}
-          platforms: linux/amd64,linux/arm64,linux/ppc64le,linux/arm/v7
+          platforms: linux/amd64,linux/arm64,linux/ppc64le
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.info.outputs.tags }}
          build-args: |
            COMMIT_SHA=${{ steps.info.outputs.sha }}
            INSTALL_OPTIONAL_PACKAGES=${{ steps.info.outputs.full }}
-            DOWNLOAD_PLUGINS=${{ steps.info.outputs.plugins }}
-            FEATURES=${{ steps.info.outputs.features }}
          labels: |
            org.opencontainers.image.title=SFTPGo
-            org.opencontainers.image.description=Full-featured and highly configurable file transfer server: SFTP, HTTP/S, FTP/S, WebDAV
+            org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support
            org.opencontainers.image.url=https://github.com/drakkan/sftpgo
            org.opencontainers.image.documentation=https://github.com/drakkan/sftpgo/blob/${{ github.sha }}/docker/README.md
            org.opencontainers.image.source=https://github.com/drakkan/sftpgo
            org.opencontainers.image.version=${{ steps.info.outputs.version }}
            org.opencontainers.image.created=${{ steps.info.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}
-            org.opencontainers.image.licenses=AGPL-3.0-only
+            org.opencontainers.image.licenses=AGPL-3.0
158
.github/workflows/release.yml
vendored
158
.github/workflows/release.yml
vendored
|
@ -5,34 +5,33 @@ on:
|
|||
tags: 'v*'
|
||||
|
||||
env:
|
||||
GO_VERSION: 1.22.4
|
||||
GO_VERSION: 1.17.9
|
||||
|
||||
jobs:
|
||||
prepare-sources-with-deps:
|
||||
name: Prepare sources with deps
|
||||
runs-on: ubuntu-latest
|
||||
runs-on: ubuntu-18.04
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v3
|
||||
- name: Set up Go
|
||||
uses: actions/setup-go@v5
|
||||
uses: actions/setup-go@v3
|
||||
with:
|
||||
go-version: ${{ env.GO_VERSION }}
|
||||
|
||||
- name: Get SFTPGo version
|
||||
id: get_version
|
||||
run: echo "VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
|
||||
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
|
||||
|
||||
- name: Prepare release
|
||||
run: |
|
||||
go mod vendor
|
||||
echo "${SFTPGO_VERSION}" > VERSION.txt
|
||||
echo "${GITHUB_SHA::8}" >> VERSION.txt
|
||||
tar cJvf sftpgo_${SFTPGO_VERSION}_src_with_deps.tar.xz *
|
||||
env:
|
||||
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
|
||||
|
||||
- name: Upload build artifact
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
|
||||
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
|
||||
|
@ -43,18 +42,18 @@ jobs:
|
|||
runs-on: ${{ matrix.os }}
|
||||
strategy:
|
||||
matrix:
|
||||
os: [macos-12, windows-2022]
|
||||
os: [macos-10.15, windows-2019]
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v3
|
||||
- name: Set up Go
|
||||
uses: actions/setup-go@v5
|
||||
uses: actions/setup-go@v3
|
||||
with:
|
||||
go-version: ${{ env.GO_VERSION }}
|
||||
|
||||
- name: Get SFTPGo version
|
||||
id: get_version
|
||||
run: echo "VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
|
||||
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
|
||||
shell: bash
|
||||
|
||||
- name: Get OS name
|
||||
|
@ -62,9 +61,9 @@ jobs:
|
|||
run: |
|
||||
if [[ $MATRIX_OS =~ ^macos.* ]]
|
||||
then
|
||||
echo "OS=macOS" >> $GITHUB_OUTPUT
|
||||
echo ::set-output name=OS::macOS
|
||||
else
|
||||
echo "OS=windows" >> $GITHUB_OUTPUT
|
||||
echo ::set-output name=OS::windows
|
||||
fi
|
||||
shell: bash
|
||||
env:
|
||||
|
@ -72,31 +71,31 @@ jobs:
|
|||
|
||||
- name: Build for macOS x86_64
|
||||
if: startsWith(matrix.os, 'windows-') != true
|
||||
run: go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
|
||||
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
|
||||
|
||||
- name: Build for macOS arm64
|
||||
if: startsWith(matrix.os, 'macos-') == true
|
||||
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
|
||||
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
|
||||
|
||||
- name: Build for Windows
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
run: |
|
||||
$GIT_COMMIT = (git describe --always --abbrev=8 --dirty) | Out-String
|
||||
$GIT_COMMIT = (git describe --always --dirty) | Out-String
|
||||
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
|
||||
$FILE_VERSION = $Env:SFTPGO_VERSION.substring(1) + ".0"
|
||||
go install github.com/tc-hib/go-winres@latest
|
||||
go-winres simply --arch amd64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
|
||||
go build -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o sftpgo.exe
|
||||
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
|
||||
mkdir arm64
|
||||
$Env:CGO_ENABLED='0'
|
||||
$Env:GOOS='windows'
|
||||
$Env:GOARCH='arm64'
|
||||
go-winres simply --arch arm64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
|
||||
go build -trimpath -tags nopgxregisterdefaulttypes,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
|
||||
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
|
||||
mkdir x86
|
||||
$Env:GOARCH='386'
|
||||
go-winres simply --arch 386 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
|
||||
go build -trimpath -tags nopgxregisterdefaulttypes,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/internal/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
|
||||
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
|
||||
Remove-Item Env:\CGO_ENABLED
|
||||
Remove-Item Env:\GOOS
|
||||
Remove-Item Env:\GOARCH
|
||||
|
@ -155,10 +154,10 @@ jobs:
|
|||
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
|
||||
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
|
||||
rm "$CERT_PATH"
|
||||
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
|
||||
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
|
||||
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
|
||||
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
|
||||
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
|
||||
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
|
||||
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
|
||||
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
|
||||
iscc "$INNO_S" .\windows-installer\sftpgo.iss
|
||||
|
||||
rm .\output\sftpgo.exe
|
||||
|
@ -207,7 +206,7 @@ jobs:
|
|||
|
||||
- name: Upload macOS x86_64 artifact
|
||||
if: startsWith(matrix.os, 'macos-')
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
|
||||
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
|
||||
|
@ -215,7 +214,7 @@ jobs:
|
|||
|
||||
- name: Upload macOS arm64 artifact
|
||||
if: startsWith(matrix.os, 'macos-')
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
|
||||
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
|
||||
|
@ -223,7 +222,7 @@ jobs:
|
|||
|
||||
- name: Upload Windows installer x86_64 artifact
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
|
||||
path: ./sftpgo_windows_x86_64.exe
|
||||
|
@ -231,7 +230,7 @@ jobs:
|
|||
|
||||
- name: Upload Windows installer arm64 artifact
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.exe
|
||||
path: ./sftpgo_windows_arm64.exe
|
||||
|
@ -239,7 +238,7 @@ jobs:
|
|||
|
||||
- name: Upload Windows installer x86 artifact
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86.exe
|
||||
path: ./sftpgo_windows_x86.exe
|
||||
|
@ -247,7 +246,7 @@ jobs:
|
|||
|
||||
- name: Upload Windows portable artifact
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable.zip
|
||||
path: ./sftpgo_portable.zip
|
||||
|
@ -255,12 +254,11 @@ jobs:
|
|||
|
||||
prepare-linux:
|
||||
name: Prepare Linux binaries
|
||||
runs-on: ubuntu-latest
|
||||
runs-on: ubuntu-18.04
|
||||
strategy:
|
||||
matrix:
|
||||
include:
|
||||
- arch: amd64
|
||||
distro: ubuntu:18.04
|
||||
go-arch: amd64
|
||||
deb-arch: amd64
|
||||
rpm-arch: x86_64
|
||||
|
@ -285,14 +283,18 @@ jobs:
|
|||
tar-arch: armv7
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v3
|
||||
- name: Set up Go
|
||||
if: ${{ matrix.arch == 'amd64' }}
|
||||
uses: actions/setup-go@v3
|
||||
with:
|
||||
go-version: ${{ env.GO_VERSION }}
|
||||
|
||||
- name: Get versions
|
||||
id: get_version
|
||||
run: |
|
||||
echo "SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
|
||||
echo "GO_VERSION=${GO_VERSION}" >> $GITHUB_OUTPUT
|
||||
echo "COMMIT=${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
|
||||
echo ::set-output name=SFTPGO_VERSION::${GITHUB_REF/refs\/tags\//}
|
||||
echo ::set-output name=GO_VERSION::${GO_VERSION}
|
||||
shell: bash
|
||||
env:
|
||||
GO_VERSION: ${{ env.GO_VERSION }}
|
||||
|
@ -300,20 +302,7 @@ jobs:
|
|||
- name: Build on amd64
|
||||
if: ${{ matrix.arch == 'amd64' }}
|
||||
run: |
|
||||
echo '#!/bin/bash' > build.sh
|
||||
echo '' >> build.sh
|
||||
echo 'set -e' >> build.sh
|
||||
echo 'apt-get update -q -y' >> build.sh
|
||||
echo 'apt-get install -q -y curl gcc' >> build.sh
|
||||
echo 'curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/go${{ steps.get_version.outputs.GO_VERSION }}.linux-${{ matrix.go-arch }}.tar.gz' >> build.sh
|
||||
echo 'tar -C /usr/local -xzf go.tar.gz' >> build.sh
|
||||
echo 'export PATH=$PATH:/usr/local/go/bin' >> build.sh
|
||||
echo 'go version' >> build.sh
|
||||
echo 'cd /usr/local/src' >> build.sh
|
||||
echo 'go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_version.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo' >> build.sh
|
||||
|
||||
chmod 755 build.sh
|
||||
docker run --rm --name ubuntu-build --mount type=bind,source=`pwd`,target=/usr/local/src ${{ matrix.distro }} /usr/local/src/build.sh
|
||||
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
|
||||
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
|
||||
echo "For documentation please take a look here:" > output/README.txt
|
||||
echo "" >> output/README.txt
|
||||
|
@ -351,7 +340,7 @@ jobs:
|
|||
shell: /bin/bash
|
||||
install: |
|
||||
apt-get update -q -y
|
||||
apt-get install -q -y curl gcc xz-utils
|
||||
apt-get install -q -y curl gcc git xz-utils
|
||||
GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
|
||||
if [ ${{ matrix.arch}} == 'armv7' ]
|
||||
then
|
||||
|
@ -361,8 +350,7 @@ jobs:
|
|||
tar -C /usr/local -xzf go.tar.gz
|
||||
run: |
|
||||
export PATH=$PATH:/usr/local/go/bin
|
||||
go version
|
||||
go build -buildvcs=false -trimpath -tags nopgxregisterdefaulttypes -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${{ steps.get_version.outputs.COMMIT }} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
|
||||
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
|
||||
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
|
||||
echo "For documentation please take a look here:" > output/README.txt
|
||||
echo "" >> output/README.txt
|
||||
|
@ -385,7 +373,7 @@ jobs:
|
|||
cd ..
|
||||
|
||||
- name: Upload build artifact for ${{ matrix.arch }}
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
|
||||
path: ./output/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
|
||||
|
@ -398,19 +386,19 @@ jobs:
|
|||
cd pkgs
|
||||
./build.sh
|
||||
PKG_VERSION=${SFTPGO_VERSION:1}
|
||||
echo "pkg-version=${PKG_VERSION}" >> $GITHUB_OUTPUT
|
||||
echo "::set-output name=pkg-version::${PKG_VERSION}"
|
||||
env:
|
||||
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
|
||||
|
||||
- name: Upload Deb Package
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
|
||||
path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
|
||||
retention-days: 1
|
||||
|
||||
- name: Upload RPM Package
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
|
||||
path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
|
||||
|
@ -425,26 +413,26 @@ jobs:
|
|||
- name: Get versions
|
||||
id: get_version
|
||||
run: |
|
||||
echo "SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
|
||||
echo ::set-output name=SFTPGO_VERSION::${GITHUB_REF/refs\/tags\//}
|
||||
shell: bash
|
||||
|
||||
- name: Download amd64 artifact
|
||||
uses: actions/download-artifact@v4
|
||||
uses: actions/download-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
|
||||
|
||||
- name: Download arm64 artifact
|
||||
uses: actions/download-artifact@v4
|
||||
uses: actions/download-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
|
||||
|
||||
- name: Download ppc64le artifact
|
||||
uses: actions/download-artifact@v4
|
||||
uses: actions/download-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
|
||||
|
||||
- name: Download armv7 artifact
|
||||
uses: actions/download-artifact@v4
|
||||
uses: actions/download-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
|
||||
|
||||
|
@ -467,7 +455,7 @@ jobs:
|
|||
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
|
||||
|
||||
- name: Upload Linux bundle
|
||||
uses: actions/upload-artifact@v4
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
|
||||
path: ./bundle/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
|
||||
|
@@ -479,113 +467,113 @@ jobs:
    runs-on: ubuntu-latest

    steps:
-     - uses: actions/checkout@v4
+     - uses: actions/checkout@v3
      - name: Get versions
        id: get_version
        run: |
          SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}
          PKG_VERSION=${SFTPGO_VERSION:1}
-         echo "SFTPGO_VERSION=${SFTPGO_VERSION}" >> $GITHUB_OUTPUT
-         echo "PKG_VERSION=${PKG_VERSION}" >> $GITHUB_OUTPUT
+         echo ::set-output name=SFTPGO_VERSION::${SFTPGO_VERSION}
+         echo "::set-output name=PKG_VERSION::${PKG_VERSION}"
        shell: bash

      - name: Download amd64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz

      - name: Download arm64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz

      - name: Download ppc64le artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz

      - name: Download armv7 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz

      - name: Download Linux bundle artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz

      - name: Download Deb amd64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_amd64.deb

      - name: Download Deb arm64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_arm64.deb

      - name: Download Deb ppc64le artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_ppc64el.deb

      - name: Download Deb armv7 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_armhf.deb

      - name: Download RPM x86_64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.x86_64.rpm

      - name: Download RPM aarch64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.aarch64.rpm

      - name: Download RPM ppc64le artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.ppc64le.rpm

      - name: Download RPM armv7 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.armv7hl.rpm

      - name: Download macOS x86_64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_x86_64.tar.xz

      - name: Download macOS arm64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_arm64.tar.xz

      - name: Download Windows installer x86_64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86_64.exe

      - name: Download Windows installer arm64 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_arm64.exe

      - name: Download Windows installer x86 artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86.exe

      - name: Download Windows portable artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_portable.zip

      - name: Download source with deps artifact
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_src_with_deps.tar.xz
@@ -1,5 +1,5 @@
  run:
-   timeout: 10m
+   timeout: 5m
    issues-exit-code: 1
    tests: true
@@ -1 +0,0 @@
- * @drakkan
@@ -1,128 +0,0 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
support@sftpgo.com.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
28 Dockerfile

@@ -1,9 +1,7 @@
- FROM golang:1.22-bookworm as builder
+ FROM golang:1.17-bullseye as builder

  ENV GOFLAGS="-mod=readonly"

- RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*

  RUN mkdir -p /workspace
  WORKDIR /workspace

@@ -22,20 +20,15 @@ ARG FEATURES
  COPY . .

  RUN set -xe && \
-     export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --abbrev=8 --dirty)} && \
-     go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -v -o sftpgo
+     export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
+     go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo

- # Set to "true" to download the "official" plugins in /usr/local/bin
- ARG DOWNLOAD_PLUGINS=false

- RUN if [ "${DOWNLOAD_PLUGINS}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y curl && ./docker/scripts/download-plugins.sh; fi

- FROM debian:bookworm-slim
+ FROM debian:bullseye-slim

  # Set to "true" to install jq and the optional git and rsync dependencies
  ARG INSTALL_OPTIONAL_PACKAGES=false

- RUN apt-get update && apt-get -y upgrade && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*
+ RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*

  RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y jq git rsync && rm -rf /var/lib/apt/lists/*; fi

@@ -50,14 +43,19 @@ COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
  COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
  COPY --from=builder /workspace/static /usr/share/sftpgo/static
  COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
- COPY --from=builder /workspace/sftpgo /usr/local/bin/sftpgo-plugin-* /usr/local/bin/
+ COPY --from=builder /workspace/sftpgo /usr/local/bin/

  # Log to the stdout so the logs will be available using docker logs
  ENV SFTPGO_LOG_FILE_PATH=""
+ # templates and static paths are inside the container
+ ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
+ ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
+ ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
+ ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi

  # Modify the default configuration file
- RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' /etc/sftpgo/sftpgo.json && \
-     sed -i 's|"backups"|"/srv/sftpgo/backups"|' /etc/sftpgo/sftpgo.json
+ RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
+     sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json

  RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups
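Both sides of the `go build` change above stamp the commit and build date into the binary via `-ldflags "-X ..."`; only the import path of the `version` package moved (`internal/version` versus `version`). A minimal sketch of what such a package looks like, with names and layout illustrative rather than copied from the real source:

```go
// Illustrative version package; the real one lives at
// github.com/drakkan/sftpgo/v2/internal/version or
// github.com/drakkan/sftpgo/v2/version depending on the release.
package version

import "fmt"

// Overwritten at build time, e.g.:
//   go build -ldflags "-s -w \
//     -X <module>/version.commit=$(git describe --always --dirty) \
//     -X <module>/version.date=$(date -u +%FT%TZ)"
var (
    commit = "unknown"
    date   = "unknown"
)

// GetAsString returns a human readable build identifier.
func GetAsString() string {
    return fmt.Sprintf("commit: %s, build date: %s", commit, date)
}
```

Note that `-X` can only set package-level string variables, which is why both fields are plain strings with fallback values.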
@@ -1,8 +1,8 @@
- FROM golang:1.22-alpine3.20 AS builder
+ FROM golang:1.17-alpine3.15 AS builder

  ENV GOFLAGS="-mod=readonly"

- RUN apk -U upgrade --no-cache && apk add --update --no-cache bash ca-certificates curl git gcc g++
+ RUN apk add --update --no-cache bash ca-certificates curl git gcc g++

  RUN mkdir -p /workspace
  WORKDIR /workspace

@@ -22,18 +22,23 @@ ARG FEATURES
  COPY . .

  RUN set -xe && \
-     export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --abbrev=8 --dirty)} && \
-     go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -v -o sftpgo
+     export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
+     go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo

- FROM alpine:3.20
+ FROM alpine:3.15

  # Set to "true" to install jq and the optional git and rsync dependencies
  ARG INSTALL_OPTIONAL_PACKAGES=false

- RUN apk -U upgrade --no-cache && apk add --update --no-cache ca-certificates tzdata mailcap
+ RUN apk add --update --no-cache ca-certificates tzdata mailcap

  RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache jq git rsync; fi

+ # set up nsswitch.conf for Go's "netgo" implementation
+ # https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457
+ RUN test ! -e /etc/nsswitch.conf && echo 'hosts: files dns' > /etc/nsswitch.conf

  RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups

  RUN addgroup -g 1000 -S sftpgo && \

@@ -47,10 +52,15 @@ COPY --from=builder /workspace/sftpgo /usr/local/bin/

  # Log to the stdout so the logs will be available using docker logs
  ENV SFTPGO_LOG_FILE_PATH=""
+ # templates and static paths are inside the container
+ ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
+ ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
+ ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
+ ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi

  # Modify the default configuration file
- RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' /etc/sftpgo/sftpgo.json && \
-     sed -i 's|"backups"|"/srv/sftpgo/backups"|' /etc/sftpgo/sftpgo.json
+ RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
+     sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json

  RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups
@@ -1,9 +1,7 @@
- FROM golang:1.22-bookworm as builder
+ FROM golang:1.17-bullseye as builder

  ENV CGO_ENABLED=0 GOFLAGS="-mod=readonly"

- RUN apt-get update && apt-get -y upgrade && apt-get install --no-install-recommends -y media-types && rm -rf /var/lib/apt/lists/*

  RUN mkdir -p /workspace
  WORKDIR /workspace

@@ -17,22 +15,24 @@ ARG COMMIT_SHA
  # This ARG allows to disable some optional features and it might be useful if you build the image yourself.
  # For this variant we disable SQLite support since it requires CGO and so a C runtime which is not installed
  # in distroless/static-* images
- ARG FEATURES
+ ARG FEATURES=nosqlite

  COPY . .

  RUN set -xe && \
-     export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --abbrev=8 --dirty)} && \
-     go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -v -o sftpgo
+     export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
+     go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo

  # Modify the default configuration file
- RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' sftpgo.json && \
-     sed -i 's|"backups"|"/srv/sftpgo/backups"|' sftpgo.json && \
-     sed -i 's|"sqlite"|"bolt"|' sftpgo.json
+ RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" sftpgo.json && \
+     sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" sftpgo.json && \
+     sed -i "s|\"sqlite\"|\"bolt\"|" sftpgo.json

+ RUN apt-get update && apt-get install --no-install-recommends -y media-types && rm -rf /var/lib/apt/lists/*

  RUN mkdir /etc/sftpgo /var/lib/sftpgo /srv/sftpgo

- FROM gcr.io/distroless/static-debian12
+ FROM gcr.io/distroless/static-debian11

  COPY --from=builder --chown=1000:1000 /etc/sftpgo /etc/sftpgo
  COPY --from=builder --chown=1000:1000 /srv/sftpgo /srv/sftpgo

@@ -46,6 +46,11 @@ COPY --from=builder /etc/mime.types /etc/mime.types

  # Log to the stdout so the logs will be available using docker logs
  ENV SFTPGO_LOG_FILE_PATH=""
+ # templates and static paths are inside the container
+ ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
+ ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
+ ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
+ ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi
  # These env vars are required to avoid the following error when calling user.Current():
  # unable to get the current user: user: Current requires cgo or $USER set in environment
  ENV USER=sftpgo

@@ -54,4 +59,4 @@ ENV HOME=/var/lib/sftpgo
  WORKDIR /var/lib/sftpgo
  USER 1000:1000

- CMD ["sftpgo", "serve"]
+ CMD ["sftpgo", "serve"]
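The `USER`/`HOME` settings in the distroless variant exist because Go's `os/user` package falls back to environment variables when cgo is unavailable, as in this `CGO_ENABLED=0` build. A minimal Go sketch of the failure mode the Dockerfile comment quotes:

```go
package main

import (
    "fmt"
    "os/user"
)

func main() {
    // In a static (CGO_ENABLED=0) binary with no $USER set, this returns:
    //   "user: Current requires cgo or $USER set in environment"
    // The distroless image above avoids it by exporting USER and HOME.
    u, err := user.Current()
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    fmt.Println("current user:", u.Username, "home:", u.HomeDir)
}
```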
313 README.md

@@ -1,67 +1,289 @@
  # SFTPGo

- [![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
+ ![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
  [![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/main/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/main)
- [![License: AGPL-3.0-only](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
+ [![License: AGPL v3](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
  [![Docker Pulls](https://img.shields.io/docker/pulls/drakkan/sftpgo)](https://hub.docker.com/r/drakkan/sftpgo)
  [![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)

- Full-featured and highly configurable event-driven file transfer solution.
- Server protocols: SFTP, HTTP/S, FTP/S, WebDAV.
- Storage backends: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, other SFTP servers.
+ Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support.
+ Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.

- With SFTPGo you can leverage local and cloud storage backends for exchanging and storing files internally or with business partners using the same tools and processes you are already familiar with.
+ ## Features

- The WebAdmin UI allows to easily create and manage your users, folders, groups and other resources.
+ - Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
+ - Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users, for shared virtual folders you can define different quota limits for each user.
+ - Configurable [custom commands and/or HTTP hooks](./docs/custom-actions.md) on file upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir, rmdir, on SSH commands and on user add, update and delete.
+ - Virtual accounts stored within a "data provider".
+ - SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
+ - Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
+ - Per user and per directory virtual permissions, for each exposed path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode.
+ - [REST API](./docs/rest-api.md) for users and folders management, data retention, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
+ - [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
+ - [Web client interface](./docs/web-client.md) so that end users can change their credentials, manage and share their files.
+ - Public key and password authentication. Multiple public keys per user are supported.
+ - SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
+ - Keyboard interactive authentication. You can easily setup a customizable multi-factor authentication.
+ - Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
+ - Per user authentication methods.
+ - [Two-factor authentication](./docs/howto/two-factor-authentication.md) based on time-based one time passwords (RFC 6238) which works with Authy, Google Authenticator and other compatible apps.
+ - Custom authentication via external programs/HTTP API.
+ - [Data At Rest Encryption](./docs/dare.md).
+ - Dynamic user modification before login via external programs/HTTP API.
+ - Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
+ - Bandwidth throttling, with distinct settings for upload and download and overrides based on the client IP address.
+ - Per-protocol [rate limiting](./docs/rate-limiting.md) is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
+ - Per user maximum concurrent sessions.
+ - Per user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
+ - Per user and per directory shell like patterns filters: files can be allowed or denied based on shell like patterns.
+ - Automatically terminating idle connections.
+ - Automatic blocklist management using the built-in [defender](./docs/defender.md).
+ - Atomic uploads are configurable.
+ - Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
+ - Support for Git repositories over SSH.
+ - SCP and rsync are supported.
+ - FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
+ - [WebDAV](./docs/webdav.md) is supported.
+ - Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
+ - Per user protocols restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
+ - [Prometheus metrics](./docs/metrics.md) are exposed.
+ - Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
+ - Easy [migration](./examples/convertusers) from Linux system user accounts.
+ - [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
+ - [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
+ - Performance analysis using built-in [profiler](./docs/profiling.md).
+ - Configuration format is at your choice: JSON, TOML, YAML, HCL, envfile are supported.
+ - Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
+ - SFTPGo supports a [plugin system](./docs/plugins.md) and therefore can be extended using external plugins.

- The WebClient UI allows end users to change their credentials, browse and manage their files in the browser and setup two-factor authentication which works with Microsoft Authenticator, Google Authenticator, Authy and other compatible apps.
+ ## Platforms

- ## Sponsors
+ SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using a [GitHub Action](./.github/workflows/development.yml). The test cases are regularly manually executed and passed on FreeBSD. Other *BSD variants should work too.

- We strongly believe in Open Source software model, so we decided to make SFTPGo available to everyone, but maintaining and evolving SFTPGo takes a lot of time and work. To make development and maintenance sustainable you should consider to support the project with a [sponsorship](https://github.com/sponsors/drakkan).
+ ## Requirements

- We also provide [professional services](https://sftpgo.com/#pricing) to support you in using SFTPGo to the fullest.
+ - Go as build only dependency. We support the Go version(s) used in [continuous integration workflows](./tree/main/.github/workflows).
+ - A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or CockroachDB stable.
+ - The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in memory data provider.

- The open source license grants you freedom but not assurance of help. So why would you rely on free software without support or any guarantee it will stay healthy and maintained for the upcoming years?
+ ## Installation

- Supporting the project benefits businesses and the community because if the project is financially sustainable, using this business model, we don't have to restrict features and/or switch to an [Open-core](https://en.wikipedia.org/wiki/Open-core_model) model. The technology stays truly open source. Everyone wins.
+ Binary releases for Linux, macOS, and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.

- It is important to understand that you should support SFTPGo and any other Open Source project you rely on for ongoing maintenance, even if you don't have any questions or need new features, to mitigate the business risk of a project you depend on going unmaintained, with its security and development velocity implications.
+ An official Docker image is available. Documentation is [here](./docker/README.md).

- ### Thank you to our sponsors
+ Some Linux distro packages are available:

- #### Platinum sponsors
+ - For Arch Linux via AUR:
+   - [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
+   - [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases downloading the prebuilt linux binary from GitHub. It does not require `git`, `gcc` and `go` to build.
+   - [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git `main` branch. It requires `git`, `gcc` and `go` to build.
+ - Deb and RPM packages are built after each commit and for each release.
+ - For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).

- [<img src="./img/Aledade_logo.png" alt="Aledade logo" width="202" height="70">](https://www.aledade.com/)
- </br></br>
- [<img src="./img/jumptrading.png" alt="Jump Trading logo" width="362" height="63">](https://www.jumptrading.com/)
- </br></br>
- [<img src="./img/wpengine.png" alt="WP Engine logo" width="331" height="63">](https://wpengine.com/)
+ SFTPGo is also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335), purchasing from there will help keep SFTPGo a long-term sustainable project.

- #### Silver sponsors
+ On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).

- [<img src="./img/IDCS.png" alt="IDCS logo" width="212" height="51">](https://idcs.ip-paris.fr/)
+ On Windows you can use:

- #### Bronze sponsors
+ - The Windows installer to install and run SFTPGo as a Windows service.
+ - The portable package to start SFTPGo on demand.
+ - The [Chocolatey package](https://community.chocolatey.org/packages/sftpgo) to install and run SFTPGo as a Windows service.

- [<img src="./img/7digital.png" alt="7digital logo" width="178" height="56">](https://www.7digital.com/)
- </br></br>
- [<img src="./img/vps2day.png" alt="VPS2day logo" width="234" height="56">](https://www.vps2day.com/)
+ You can easily test new features selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.

- ## Support policy
+ Alternately, you can [build from source](./docs/build-from-source.md).

- You can use SFTPGo for free, respecting the obligations of the Open Source license, but please do not ask or expect free support as well.
+ [Getting Started Guide for the Impatient](./docs/howto/getting-started.md).

- Use [discussions](https://github.com/drakkan/sftpgo/discussions) to ask questions and get support from the community.
+ ## Configuration

- If you report an invalid issue and/or ask for step-by-step support, your issue will be closed as invalid without further explanation and/or the "support request" label will be added. Invalid bug reports may confuse other users. Thanks for understanding.
+ A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).

- ## Documentation
+ Please make sure to [initialize the data provider](#data-provider-initialization-and-management) before running the daemon.

- You can read more about supported features and documentation at [sftpgo.github.io](https://sftpgo.github.io/).
+ To start SFTPGo with the default settings, simply run:

+ ```bash
+ sftpgo serve
+ ```

+ Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.

+ ### Data provider initialization and management

+ Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.

+ For PostgreSQL, MySQL and CockroachDB providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require an initialization but they could require an update to the existing data after upgrading SFTPGo.

+ SFTPGo will attempt to automatically detect if the data provider is initialized/updated and if not, will attempt to initialize/update it on startup as needed.

+ Alternately, you can create/update the required data provider structures yourself using the `initprovider` command.

+ For example, you can simply execute the following command from the configuration directory:

+ ```bash
+ sftpgo initprovider
+ ```

+ Take a look at the CLI usage to learn how to specify a different configuration file:

+ ```bash
+ sftpgo initprovider --help
+ ```

+ You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
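For reference, a hedged sketch of the corresponding `sftpgo.json` fragment; the `data_provider` section and key names follow the full-configuration document linked above, and the driver value is just an example:

```json
{
  "data_provider": {
    "driver": "sqlite",
    "update_mode": 1
  }
}
```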
+ You can also reset your provider by using the `resetprovider` sub-command. Take a look at the CLI usage for more details:

+ ```bash
+ sftpgo resetprovider --help
+ ```

+ ## Create the first admin

+ To start using SFTPGo you need to create an admin user. You can do it in several ways:

+ - by using the web admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
+ - by loading initial data
+ - by enabling `create_default_admin` in your configuration file and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD`
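A minimal sketch of the third option above, assuming `create_default_admin` is already enabled in the `data_provider` section of the configuration file:

```bash
export SFTPGO_DEFAULT_ADMIN_USERNAME=admin
export SFTPGO_DEFAULT_ADMIN_PASSWORD='choose-a-strong-password'
sftpgo serve
# The default admin is created at first startup; log in at
# http://127.0.0.1:8080/web/admin and change the password if needed.
```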
+ ## Upgrading

+ SFTPGo supports upgrading from the previous release branch to the current one.
+ Some examples of supported upgrade paths are:

+ - from 1.2.x to 2.0.x
+ - from 2.0.x to 2.1.x and so on.

+ For supported upgrade paths, the data and schema are migrated automatically; alternately, you can use the `initprovider` command.

+ So if, for example, you want to upgrade from a version before 1.2.x to 2.0.x, you must first install version 1.2.x, update the data provider and finally install version 2.0.x. It is recommended to always install the latest available minor version, i.e. do not install 1.2.0 if 1.2.2 is available.

+ Loading data from a provider independent JSON dump is supported from the previous release branch to the current one too. After upgrading SFTPGo it is advisable to regenerate the JSON dump from the new version.

+ ## Downgrading

+ If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.

+ As for upgrading, SFTPGo supports downgrading from the previous release branch to the current one.

+ So, if you plan to downgrade from 2.0.x to 1.2.x, before uninstalling the 2.0.x version, you can prepare your data provider by executing the following command from the configuration directory:

+ ```shell
+ sftpgo revertprovider --to-version 4
+ ```

+ Take a look at the CLI usage to see the supported parameters for the `--to-version` argument and to learn how to specify a different configuration file:

+ ```shell
+ sftpgo revertprovider --help
+ ```

+ The `revertprovider` command is not supported for the memory provider.

+ Please note that we only support the current release branch and the current main branch; if you find a bug it is better to report it rather than downgrading to an older unsupported version.

+ ## Users and folders management

+ After starting SFTPGo you can manage users and folders using:

+ - the [web based administration interface](./docs/web-admin.md)
+ - the [REST API](./docs/rest-api.md)

+ To support embedded data providers like `bolt` and `SQLite` we can't have a CLI that directly writes users and folders to the data provider, we always have to use the REST API.

+ Full details for users, folders, admins and other resources are documented in the [OpenAPI](/openapi/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).

+ ## Tutorials

+ Some step-by-step tutorials can be found inside the source tree [howto](./docs/howto "How-to") directory.

+ ## Authentication options

+ ### External Authentication

+ Custom authentication methods can easily be added. SFTPGo supports external authentication modules, and writing a new backend can be as simple as a few lines of shell script, as the sketch below shows. More information can be found [here](./docs/external-auth.md).
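As an illustration of how small such a module can be, a hedged shell sketch following the hook contract described in docs/external-auth.md (credentials arrive in `SFTPGO_AUTHD_*` environment variables; the hook prints a JSON user, with an empty username meaning authentication failed):

```bash
#!/bin/sh
# Minimal external authentication hook (illustrative values).
if [ "$SFTPGO_AUTHD_USERNAME" = "test_user" ] && [ "$SFTPGO_AUTHD_PASSWORD" = "test_password" ]; then
  echo '{"status":1,"username":"test_user","expiration_date":0,"home_dir":"/tmp/test_user","uid":0,"gid":0,"permissions":{"/":["*"]}}'
else
  echo '{"username":""}'
fi
```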
+ ### Keyboard Interactive Authentication

+ Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
+ This authentication method is typically used for multi-factor authentication.

+ More information can be found [here](./docs/keyboard-interactive.md).

+ ## Dynamic user creation or modification

+ A user can be created or modified by an external program just before the login. More information about this can be found [here](./docs/dynamic-user-mod.md).

+ ## Custom Actions

+ SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other events, as in the sketch below.

+ More information about custom actions can be found [here](./docs/custom-actions.md).
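For illustration, a hedged sketch of such a command hook; the `SFTPGO_ACTION*` variable names follow docs/custom-actions.md, and the log path is arbitrary:

```bash
#!/bin/sh
# Append one line per notified event; invoked by SFTPGo after events
# such as upload, download, delete and rename.
echo "$(date -u +%FT%TZ) action=${SFTPGO_ACTION} user=${SFTPGO_ACTION_USERNAME} path=${SFTPGO_ACTION_PATH}" >> /tmp/sftpgo-actions.log
```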
+ ## Virtual folders

+ Directories outside the user home directory or based on a different storage provider can be exposed as virtual folders, more information [here](./docs/virtual-folders.md).

+ ## Other hooks

+ You can get notified as soon as a new connection is established using the [Post-connect hook](./docs/post-connect-hook.md) and after each login using the [Post-login hook](./docs/post-login-hook.md).
+ You can use your own hook to [check passwords](./docs/check-password-hook.md).

+ ## Storage backends

+ ### S3 Compatible Object Storage backends

+ Each user can be mapped to the whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about S3 integration can be found [here](./docs/s3.md).

+ ### Google Cloud Storage backend

+ Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).

+ ### Azure Blob Storage backend

+ Each user can be mapped with an Azure Blob Storage container or a container virtual folder. This way, the mapped container/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Azure Blob Storage integration can be found [here](./docs/azure-blob-storage.md).

+ ### SFTP backend

+ Each user can be mapped to another SFTP server account or a subfolder of it. More information can be found [here](./docs/sftpfs.md).

+ ### Encrypted backend

+ Data at-rest encryption is supported via the [cryptfs backend](./docs/dare.md).

+ ### Other Storage backends

+ Adding new storage backends is quite easy:

+ - implement the [Fs interface](./vfs/vfs.go#L28 "interface for filesystem backends")
+ - update the user method `GetFilesystem` to return the new backend
+ - update the web interface and the REST API CLI
+ - add the flags for the new storage backend to the `portable` mode

+ Anyway, some backends require a pay per use account (or they offer a free account for a limited time period only). To be able to add support for such backends or to review pull requests, please provide a test account. The test account must be available for enough time to be able to maintain the backend and do basic tests before each new release.

+ ## Brute force protection

+ The [connection failed logs](./docs/logs.md) can be used for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Examples of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in the fail2ban directory; a minimal jail is sketched below.

+ You can also use the built-in [defender](./docs/defender.md).
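Purely as a sketch of the shape of such a jail (the option names are standard Fail2ban ones; the `sftpgo` filter name and the port are assumptions, matching the filters shipped in ./fail2ban/filters and SFTPGo's default SFTP port):

```ini
[sftpgo]
enabled  = true
backend  = systemd
filter   = sftpgo
maxretry = 3
findtime = 600
bantime  = 3600
port     = 2022
```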
+ ## Account's configuration properties

+ Detailed information about account configuration properties can be found [here](./docs/account.md).

+ ## Performance

+ SFTPGo can easily saturate a Gigabit connection on low end hardware with no special configuration; this is generally enough for most use cases.

+ More in-depth analysis of performance can be found [here](./docs/performance.md).

  ## Release Cadence

- SFTPGo releases are feature-driven, we don't have a fixed time based schedule. As a rough estimate, you can expect 1 or 2 new major releases per year and several bug fix releases.
+ SFTPGo releases are feature-driven, we don't have a fixed time based schedule. As a rough estimate, you can expect 1 or 2 new releases per year.

  ## Acknowledgements

@@ -69,25 +291,16 @@ SFTPGo makes use of the third party libraries listed inside [go.mod](./go.mod).

  We are very grateful to all the people who contributed with ideas and/or pull requests.

- Thank you to [ysura](https://www.ysura.com/) for granting us stable access to a test AWS S3 account.
+ Thank you [ysura](https://www.ysura.com/) for granting me stable access to a test AWS S3 account.

- Thank you to [KeenThemes](https://keenthemes.com/) for granting us a custom license to use their amazing [Mega Bundle](https://keenthemes.com/products/templates-mega-bundle) for SFTPGo UI.
+ ## Sponsors

- Thank you to [Crowdin](https://crowdin.com/) for granting us an Open Source License.
+ I'd like to make SFTPGo into a sustainable long term project and your [sponsorship](https://github.com/sponsors/drakkan) will really help :heart:

- Thank you to [Incode](https://www.incode.it/) for helping us to improve the UI/UX.
+ Thank you to our sponsors!

+ [<img src="https://www.7digital.com/wp-content/themes/sevendigital/images/top_logo.png" alt="7digital logo">](https://www.7digital.com/)

  ## License

- SFTPGo source code is licensed under the GNU AGPL-3.0-only.
-
- The [theme](https://keenthemes.com/products/templates-mega-bundle) used in WebAdmin and WebClient user interfaces is proprietary, this means:
-
- - KeenThemes HTML/CSS/JS components are allowed for use only within the SFTPGo product and restricted to be used in a resealable HTML template that can compete with KeenThemes products anyhow.
- - The SFTPGo WebAdmin and WebClient user interfaces (HTML, CSS and JS components) based on this theme are allowed for use only within the SFTPGo product and therefore cannot be used in derivative works/products without an explicit grant from the [SFTPGo Team](mailto:support@sftpgo.com).
-
- More information about [compliance](https://sftpgo.com/compliance.html).
-
- ## Copyright
-
- Copyright (C) 2019 Nicola Murino
+ GNU AGPLv3
@@ -2,9 +2,11 @@

  ## Supported Versions

- Only the current release of the software is actively supported.
- [Contact us](mailto:support@sftpgo.com) if you need early security patches and enterprise-grade security.
+ Only the current release of the software is actively supported. If you need
+ help backporting fixes into an older release, feel free to ask.

  ## Reporting a Vulnerability

- To report (possible) security issues in SFTPGo, please either send a mail to the [SFTPGo Team](mailto:support@sftpgo.com) or use Github's [private reporting feature](https://github.com/drakkan/sftpgo/security/advisories/new).
+ Email your vulnerability information to SFTPGo's maintainer:
+
+ Nicola Murino <nicola.murino@gmail.com>
12 cmd/gen.go (new file)

@@ -0,0 +1,12 @@
package cmd

import "github.com/spf13/cobra"

var genCmd = &cobra.Command{
    Use:   "gen",
    Short: "A collection of useful generators",
}

func init() {
    rootCmd.AddCommand(genCmd)
}
@@ -1,17 +1,3 @@
- // Copyright (C) 2019 Nicola Murino
- //
- // This program is free software: you can redistribute it and/or modify
- // it under the terms of the GNU Affero General Public License as published
- // by the Free Software Foundation, version 3.
- //
- // This program is distributed in the hope that it will be useful,
- // but WITHOUT ANY WARRANTY; without even the implied warranty of
- // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- // GNU Affero General Public License for more details.
- //
- // You should have received a copy of the GNU Affero General Public License
- // along with this program. If not, see <https://www.gnu.org/licenses/>.
-
  package cmd

  import (

@@ -53,7 +39,7 @@ MacOS:
  You will need to start a new shell for this setup to take effect.
  `,
      DisableFlagsInUseLine: true,
-     RunE: func(cmd *cobra.Command, _ []string) error {
+     RunE: func(cmd *cobra.Command, args []string) error {
          return cmd.Root().GenBashCompletionV2(os.Stdout, true)
      },
  }

@@ -79,7 +65,7 @@ macOS:
  You will need to start a new shell for this setup to take effect.
  `,
      DisableFlagsInUseLine: true,
-     RunE: func(cmd *cobra.Command, _ []string) error {
+     RunE: func(cmd *cobra.Command, args []string) error {
          return cmd.Root().GenZshCompletion(os.Stdout)
      },
  }

@@ -100,7 +86,7 @@ $ sftpgo gen completion fish > ~/.config/fish/completions/sftpgo.fish
  You will need to start a new shell for this setup to take effect.
  `,
      DisableFlagsInUseLine: true,
-     RunE: func(cmd *cobra.Command, _ []string) error {
+     RunE: func(cmd *cobra.Command, args []string) error {
          return cmd.Root().GenFishCompletion(os.Stdout, true)
      },
  }

@@ -118,7 +104,7 @@ To load completions for every new session, add the output of the above command
  to your powershell profile.
  `,
      DisableFlagsInUseLine: true,
-     RunE: func(cmd *cobra.Command, _ []string) error {
+     RunE: func(cmd *cobra.Command, args []string) error {
          return cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
      },
  }
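The four RunE changes above only rename the unused positional-arguments parameter to the blank identifier `_`; the cobra signature itself is unchanged. A minimal self-contained sketch of the pattern, with the command name and output purely illustrative:

```go
package main

import (
    "fmt"

    "github.com/spf13/cobra"
)

func main() {
    cmd := &cobra.Command{
        Use: "demo",
        // The blank identifier documents that positional args are unused;
        // the signature must stay func(*cobra.Command, []string) error.
        RunE: func(cmd *cobra.Command, _ []string) error {
            fmt.Println("running", cmd.Name())
            return nil
        },
    }
    _ = cmd.Execute()
}
```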
@@ -1,31 +1,15 @@
- // Copyright (C) 2019 Nicola Murino
- //
- // This program is free software: you can redistribute it and/or modify
- // it under the terms of the GNU Affero General Public License as published
- // by the Free Software Foundation, version 3.
- //
- // This program is distributed in the hope that it will be useful,
- // but WITHOUT ANY WARRANTY; without even the implied warranty of
- // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- // GNU Affero General Public License for more details.
- //
- // You should have received a copy of the GNU Affero General Public License
- // along with this program. If not, see <https://www.gnu.org/licenses/>.
-
  package cmd

  import (
-     "errors"
      "fmt"
-     "io/fs"
      "os"

      "github.com/rs/zerolog"
      "github.com/spf13/cobra"
      "github.com/spf13/cobra/doc"

-     "github.com/drakkan/sftpgo/v2/internal/logger"
-     "github.com/drakkan/sftpgo/v2/internal/version"
+     "github.com/drakkan/sftpgo/v2/logger"
+     "github.com/drakkan/sftpgo/v2/version"
  )

  var (

@@ -38,10 +22,10 @@ command-line interface.
By default, it creates the man page files in the "man" directory under the
current directory.
`,
-     Run: func(cmd *cobra.Command, _ []string) {
+     Run: func(cmd *cobra.Command, args []string) {
          logger.DisableLogger()
          logger.EnableConsoleLogger(zerolog.DebugLevel)
-         if _, err := os.Stat(manDir); errors.Is(err, fs.ErrNotExist) {
+         if _, err := os.Stat(manDir); os.IsNotExist(err) {
              err = os.MkdirAll(manDir, os.ModePerm)
              if err != nil {
                  logger.WarnToConsole("Unable to generate man page files: %v", err)
70 cmd/initprovider.go (new file)

@@ -0,0 +1,70 @@
package cmd

import (
    "os"

    "github.com/rs/zerolog"
    "github.com/spf13/cobra"
    "github.com/spf13/viper"

    "github.com/drakkan/sftpgo/v2/config"
    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/util"
)

var (
    initProviderCmd = &cobra.Command{
        Use:   "initprovider",
        Short: "Initialize and/or updates the configured data provider",
        Long: `This command reads the data provider connection details from the specified
configuration file and creates the initial structure or update the existing one,
as needed.

Some data providers such as bolt and memory does not require an initialization
but they could require an update to the existing data after upgrading SFTPGo.

For SQLite/bolt providers the database file will be auto-created if missing.

For PostgreSQL and MySQL providers you need to create the configured database,
this command will create/update the required tables as needed.

To initialize/update the data provider from the configuration directory simply use:

$ sftpgo initprovider

Please take a look at the usage below to customize the options.`,
        Run: func(cmd *cobra.Command, args []string) {
            logger.DisableLogger()
            logger.EnableConsoleLogger(zerolog.DebugLevel)
            configDir = util.CleanDirInput(configDir)
            err := config.LoadConfig(configDir, configFile)
            if err != nil {
                logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
                return
            }
            kmsConfig := config.GetKMSConfig()
            err = kmsConfig.Initialize()
            if err != nil {
                logger.ErrorToConsole("unable to initialize KMS: %v", err)
                os.Exit(1)
            }
            providerConf := config.GetProviderConf()
            logger.InfoToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
            err = dataprovider.InitializeDatabase(providerConf, configDir)
            if err == nil {
                logger.InfoToConsole("Data provider successfully initialized/updated")
            } else if err == dataprovider.ErrNoInitRequired {
                logger.InfoToConsole("%v", err.Error())
            } else {
                logger.WarnToConsole("Unable to initialize/update the data provider: %v", err)
                os.Exit(1)
            }
        },
    }
)

func init() {
    rootCmd.AddCommand(initProviderCmd)
    addConfigFlags(initProviderCmd)
}
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package cmd

import (
@@ -21,8 +7,8 @@ import (
"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)

var (
@@ -35,7 +21,7 @@ line flags simply use:
sftpgo service install

Please take a look at the usage below to customize the startup options`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
s := service.Service{
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
@@ -44,7 +30,7 @@ Please take a look at the usage below to customize the startup options`,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogLevel: logLevel,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
@@ -99,9 +85,8 @@ func getCustomServeFlags() []string {
result = append(result, "--"+logMaxAgeFlag)
result = append(result, strconv.Itoa(logMaxAge))
}
if logLevel != defaultLogLevel {
result = append(result, "--"+logLevelFlag)
result = append(result, logLevel)
if logVerbose != defaultLogVerbose {
result = append(result, "--"+logVerboseFlag+"=false")
}
if logUTCTime != defaultLogUTCTime {
result = append(result, "--"+logUTCTimeFlag+"=true")
@@ -109,9 +94,5 @@ func getCustomServeFlags() []string {
if logCompress != defaultLogCompress {
result = append(result, "--"+logCompressFlag+"=true")
}
if graceTime != defaultGraceTime {
result = append(result, "--"+graceTimeFlag)
result = append(result, strconv.Itoa(graceTime))
}
return result
}
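getCustomServeFlags rebuilds a command line from whichever flags differ from their defaults, so the installed Windows service is registered with exactly the options the user passed to "sftpgo service install". A hedged sketch of that re-serialization pattern follows; the function and flag names are illustrative, only the defaults mirror the defaultLogLevel/defaultGraceTime constants shown later in this diff:

package cmd // sketch only: the pattern behind getCustomServeFlags

import "strconv"

// buildArgs re-serializes only the non-default flags, keeping the
// generated service command line minimal
func buildArgs(logLevel string, graceTime int) []string {
	var result []string
	if logLevel != "debug" { // defaultLogLevel
		result = append(result, "--log-level", logLevel)
	}
	if graceTime != 0 { // defaultGraceTime
		result = append(result, "--grace-time", strconv.Itoa(graceTime))
	}
	return result
}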
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

//go:build !noportable
// +build !noportable

@@ -27,25 +13,24 @@ import (
"github.com/sftpgo/sdk"
"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/internal/common"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/kms"
"github.com/drakkan/sftpgo/v2/internal/service"
"github.com/drakkan/sftpgo/v2/internal/sftpd"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/internal/vfs"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)

var (
directoryToServe string
portableSFTPDPort int
portableAdvertiseService bool
portableAdvertiseCredentials bool
portableUsername string
portablePassword string
portablePasswordFile string
portableStartDir string
portableLogFile string
portableLogLevel string
portableLogVerbose bool
portableLogUTCTime bool
portablePublicKeys []string
portablePermissions []string
@@ -57,7 +42,6 @@ var (
portableS3Region string
portableS3AccessKey string
portableS3AccessSecret string
portableS3RoleARN string
portableS3Endpoint string
portableS3StorageClass string
portableS3ACL string
@@ -65,7 +49,6 @@ var (
portableS3ULPartSize int
portableS3ULConcurrency int
portableS3ForcePathStyle bool
portableS3SkipTLSVerify bool
portableGCSBucket string
portableGCSCredentialsFile string
portableGCSAutoCredentials int
@@ -77,9 +60,6 @@ var (
portableWebDAVPort int
portableWebDAVCert string
portableWebDAVKey string
portableHTTPPort int
portableHTTPSCert string
portableHTTPSKey string
portableAzContainer string
portableAzAccountName string
portableAzAccountKey string
@@ -89,8 +69,6 @@ var (
portableAzKeyPrefix string
portableAzULPartSize int
portableAzULConcurrency int
portableAzDLPartSize int
portableAzDLConcurrency int
portableAzUseEmulator bool
portableCryptPassphrase string
portableSFTPEndpoint string
@@ -110,9 +88,9 @@ use:
$ sftpgo portable

Please take a look at the usage below to customize the serving parameters`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
portableDir := directoryToServe
fsProvider := dataprovider.GetProviderFromValue(convertFsProvider())
fsProvider := sdk.GetProviderByName(portableFsProvider)
if !filepath.IsAbs(portableDir) {
if fsProvider == sdk.LocalFilesystemProvider {
portableDir, _ = filepath.Abs(portableDir)
@@ -141,80 +119,40 @@ Please take a look at the usage below to customize the serving parameters`,
}
portableSFTPPrivateKey = contents
}
if portableFTPDPort >= 0 && portableFTPSCert != "" && portableFTPSKey != "" {
keyPairs := []common.TLSKeyPair{
{
Cert: portableFTPSCert,
Key: portableFTPSKey,
ID: common.DefaultTLSKeyPaidID,
},
}
_, err := common.NewCertManager(keyPairs, filepath.Clean(defaultConfigDir),
if portableFTPDPort >= 0 && len(portableFTPSCert) > 0 && len(portableFTPSKey) > 0 {
_, err := common.NewCertManager(portableFTPSCert, portableFTPSKey, filepath.Clean(defaultConfigDir),
"FTP portable")
if err != nil {
fmt.Printf("Unable to load FTPS key pair, cert file %q key file %q error: %v\n",
fmt.Printf("Unable to load FTPS key pair, cert file %#v key file %#v error: %v\n",
portableFTPSCert, portableFTPSKey, err)
os.Exit(1)
}
}
if portableWebDAVPort >= 0 && portableWebDAVCert != "" && portableWebDAVKey != "" {
keyPairs := []common.TLSKeyPair{
{
Cert: portableWebDAVCert,
Key: portableWebDAVKey,
ID: common.DefaultTLSKeyPaidID,
},
}
_, err := common.NewCertManager(keyPairs, filepath.Clean(defaultConfigDir),
if portableWebDAVPort > 0 && len(portableWebDAVCert) > 0 && len(portableWebDAVKey) > 0 {
_, err := common.NewCertManager(portableWebDAVCert, portableWebDAVKey, filepath.Clean(defaultConfigDir),
"WebDAV portable")
if err != nil {
fmt.Printf("Unable to load WebDAV key pair, cert file %q key file %q error: %v\n",
fmt.Printf("Unable to load WebDAV key pair, cert file %#v key file %#v error: %v\n",
portableWebDAVCert, portableWebDAVKey, err)
os.Exit(1)
}
}
if portableHTTPPort >= 0 && portableHTTPSCert != "" && portableHTTPSKey != "" {
keyPairs := []common.TLSKeyPair{
{
Cert: portableHTTPSCert,
Key: portableHTTPSKey,
ID: common.DefaultTLSKeyPaidID,
},
}
_, err := common.NewCertManager(keyPairs, filepath.Clean(defaultConfigDir),
"HTTP portable")
if err != nil {
fmt.Printf("Unable to load HTTPS key pair, cert file %q key file %q error: %v\n",
portableHTTPSCert, portableHTTPSKey, err)
os.Exit(1)
}
}
pwd := portablePassword
if portablePasswordFile != "" {
content, err := os.ReadFile(portablePasswordFile)
if err != nil {
fmt.Printf("Unable to read password file %q: %v", portablePasswordFile, err)
os.Exit(1)
}
pwd = strings.TrimSpace(util.BytesToString(content))
}
service.SetGraceTime(graceTime)
service := service.Service{
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
ConfigDir: filepath.Clean(defaultConfigDir),
ConfigFile: defaultConfigFile,
LogFilePath: portableLogFile,
LogMaxSize: defaultLogMaxSize,
LogMaxBackups: defaultLogMaxBackup,
LogMaxAge: defaultLogMaxAge,
LogCompress: defaultLogCompress,
LogLevel: portableLogLevel,
LogVerbose: portableLogVerbose,
LogUTCTime: portableLogUTCTime,
Shutdown: make(chan bool),
PortableMode: 1,
PortableUser: dataprovider.User{
BaseUser: sdk.BaseUser{
Username: portableUsername,
Password: pwd,
Password: portablePassword,
PublicKeys: portablePublicKeys,
Permissions: permissions,
HomeDir: portableDir,
@@ -222,18 +160,16 @@ Please take a look at the usage below to customize the serving parameters`,
},
Filters: dataprovider.UserFilters{
BaseUserFilters: sdk.BaseUserFilters{
FilePatterns: parsePatternsFilesFilters(),
StartDirectory: portableStartDir,
FilePatterns: parsePatternsFilesFilters(),
},
},
FsConfig: vfs.Filesystem{
Provider: fsProvider,
Provider: sdk.GetProviderByName(portableFsProvider),
S3Config: vfs.S3FsConfig{
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
RoleARN: portableS3RoleARN,
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
ACL: portableS3ACL,
@@ -241,7 +177,6 @@ Please take a look at the usage below to customize the serving parameters`,
UploadPartSize: int64(portableS3ULPartSize),
UploadConcurrency: portableS3ULConcurrency,
ForcePathStyle: portableS3ForcePathStyle,
SkipTLSVerify: portableS3SkipTLSVerify,
},
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
},
@@ -256,16 +191,14 @@ Please take a look at the usage below to customize the serving parameters`,
},
AzBlobConfig: vfs.AzBlobFsConfig{
BaseAzBlobFsConfig: sdk.BaseAzBlobFsConfig{
Container: portableAzContainer,
AccountName: portableAzAccountName,
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
DownloadPartSize: int64(portableAzDLPartSize),
DownloadConcurrency: portableAzDLConcurrency,
Container: portableAzContainer,
AccountName: portableAzAccountName,
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
},
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
SASURL: kms.NewPlainSecret(portableAzSASURL),
@@ -282,17 +215,14 @@ Please take a look at the usage below to customize the serving parameters`,
DisableCouncurrentReads: portableSFTPDisableConcurrentReads,
BufferSize: portableSFTPDBufferSize,
},
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
KeyPassphrase: kms.NewEmptySecret(),
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
},
},
},
}
err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableHTTPPort,
portableSSHCommands, portableFTPSCert, portableFTPSKey, portableWebDAVCert, portableWebDAVKey,
portableHTTPSCert, portableHTTPSKey)
if err == nil {
if err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableSSHCommands, portableAdvertiseService,
portableAdvertiseCredentials, portableFTPSCert, portableFTPSKey, portableWebDAVCert, portableWebDAVKey); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
@@ -310,18 +240,13 @@ func init() {
This can be an absolute path or a path
relative to the current directory
`)
portableCmd.Flags().StringVar(&portableStartDir, "start-directory", "/", `Alternate start directory.
This is a virtual path not a filesystem
path`)
portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableFTPDPort, "ftpd-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableWebDAVPort, "webdav-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableHTTPPort, "httpd-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().StringSliceVar(&portableSSHCommands, "ssh-commands", sftpd.GetDefaultSSHCommands(),
portableCmd.Flags().StringSliceVarP(&portableSSHCommands, "ssh-commands", "c", sftpd.GetDefaultSSHCommands(),
`SSH commands to enable.
"*" means any supported SSH command
including scp
@@ -330,15 +255,8 @@ including scp
value`)
portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", `Leave empty to use an auto generated
value`)
portableCmd.Flags().StringVar(&portablePasswordFile, "password-file", "", `Read the password from the specified
file path. Leave empty to use an auto
generated value`)
portableCmd.Flags().StringVarP(&portableLogFile, logFilePathFlag, "l", "", "Leave empty to disable logging")
portableCmd.Flags().StringVar(&portableLogLevel, logLevelFlag, defaultLogLevel, `Set the log level.
Supported values:

debug, info, warn, error.
`)
portableCmd.Flags().BoolVarP(&portableLogVerbose, logVerboseFlag, "v", false, "Enable verbose logs")
portableCmd.Flags().BoolVar(&portableLogUTCTime, logUTCTimeFlag, false, "Use UTC time for logging")
portableCmd.Flags().StringSliceVarP(&portablePublicKeys, "public-key", "k", []string{}, "")
portableCmd.Flags().StringSliceVarP(&portablePermissions, "permissions", "g", []string{"list", "download"},
@@ -354,6 +272,14 @@ For example: "/somedir::*.jpg,a*b?.png"`)
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
portableCmd.Flags().BoolVarP(&portableAdvertiseService, "advertise-service", "S", false,
`Advertise configured services using
multicast DNS`)
portableCmd.Flags().BoolVarP(&portableAdvertiseCredentials, "advertise-credentials", "C", false,
`If the SFTP/FTP service is
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
portableCmd.Flags().StringVarP(&portableFsProvider, "fs-provider", "f", "osfs", `osfs => local filesystem (legacy value: 0)
s3fs => AWS S3 compatible (legacy: 1)
gcsfs => Google Cloud Storage (legacy: 2)
@@ -364,7 +290,6 @@ sftpfs => SFTP (legacy: 5)`)
portableCmd.Flags().StringVar(&portableS3Region, "s3-region", "", "")
portableCmd.Flags().StringVar(&portableS3AccessKey, "s3-access-key", "", "")
portableCmd.Flags().StringVar(&portableS3AccessSecret, "s3-access-secret", "", "")
portableCmd.Flags().StringVar(&portableS3RoleARN, "s3-role-arn", "", "")
portableCmd.Flags().StringVar(&portableS3Endpoint, "s3-endpoint", "", "")
portableCmd.Flags().StringVar(&portableS3StorageClass, "s3-storage-class", "", "")
portableCmd.Flags().StringVar(&portableS3ACL, "s3-acl", "", "")
@@ -376,13 +301,6 @@ prefix and its contents`)
portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().BoolVar(&portableS3ForcePathStyle, "s3-force-path-style", false, `Force path style bucket URL`)
portableCmd.Flags().BoolVar(&portableS3SkipTLSVerify, "s3-skip-tls-verify", false, `If enabled the S3 client accepts any TLS
certificate presented by the server and
any host name in that certificate.
In this mode, TLS is susceptible to
man-in-the-middle attacks.
This should be used only for testing.
`)
portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", `Allows to restrict access to the
@@ -398,10 +316,6 @@ a JSON credentials file, 1 automatic
portableCmd.Flags().StringVar(&portableWebDAVCert, "webdav-cert", "", `Path to the certificate file for WebDAV
over HTTPS`)
portableCmd.Flags().StringVar(&portableWebDAVKey, "webdav-key", "", `Path to the key file for WebDAV over
HTTPS`)
portableCmd.Flags().StringVar(&portableHTTPSCert, "httpd-cert", "", `Path to the certificate file for WebClient
over HTTPS`)
portableCmd.Flags().StringVar(&portableHTTPSKey, "httpd-key", "", `Path to the key file for WebClient over
HTTPS`)
portableCmd.Flags().StringVar(&portableAzContainer, "az-container", "", "")
portableCmd.Flags().StringVar(&portableAzAccountName, "az-account-name", "", "")
@@ -414,13 +328,9 @@ container setting`)
portableCmd.Flags().StringVar(&portableAzKeyPrefix, "az-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().IntVar(&portableAzULPartSize, "az-upload-part-size", 5, `The buffer size for multipart uploads
portableCmd.Flags().IntVar(&portableAzULPartSize, "az-upload-part-size", 4, `The buffer size for multipart uploads
(MB)`)
portableCmd.Flags().IntVar(&portableAzULConcurrency, "az-upload-concurrency", 5, `How many parts are uploaded in
parallel`)
portableCmd.Flags().IntVar(&portableAzDLPartSize, "az-download-part-size", 5, `The buffer size for multipart downloads
(MB)`)
portableCmd.Flags().IntVar(&portableAzDLConcurrency, "az-download-concurrency", 5, `How many parts are downloaded in
portableCmd.Flags().IntVar(&portableAzULConcurrency, "az-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().BoolVar(&portableAzUseEmulator, "az-use-emulator", false, "")
portableCmd.Flags().StringVar(&portableCryptPassphrase, "crypto-passphrase", "", `Passphrase for encryption/decryption`)
@@ -445,14 +355,6 @@ multiple concurrent requests and this
allows data to be transferred at a
faster rate, over high latency networks,
by overlapping round-trip times`)
portableCmd.Flags().IntVar(&graceTime, graceTimeFlag, 0,
`This grace time defines the number of
seconds allowed for existing transfers
to get completed before shutting down.
A graceful shutdown is triggered by an
interrupt signal.
`)
addConfigFlags(portableCmd)
rootCmd.AddCommand(portableCmd)
}

@@ -517,30 +419,11 @@ func getFileContents(name string) (string, error) {
return "", err
}
if fi.Size() > 1048576 {
return "", fmt.Errorf("%q is too big %v/1048576 bytes", name, fi.Size())
return "", fmt.Errorf("%#v is too big %v/1048576 bytes", name, fi.Size())
}
contents, err := os.ReadFile(name)
if err != nil {
return "", err
}
return util.BytesToString(contents), nil
}

func convertFsProvider() string {
switch portableFsProvider {
case "osfs", "6": // httpfs (6) is not supported in portable mode, so return the default
return "0"
case "s3fs":
return "1"
case "gcsfs":
return "2"
case "azblobfs":
return "3"
case "cryptfs":
return "4"
case "sftpfs":
return "5"
default:
return portableFsProvider
}
return string(contents), nil
}
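Portable mode assembles a one-off in-memory user from the command line flags and starts only the requested services. A minimal sketch of the newer call sequence shown above, with SFTP on a random unprivileged port and every other service disabled (per the flag help text, 0 means a random port and a negative value disables the listener); the user fields are placeholders:

// sketch only: the portable-mode startup shape from this diff
svc := service.Service{
	ConfigDir:    ".",
	Shutdown:     make(chan bool),
	PortableMode: 1,
	PortableUser: dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: "demo",       // placeholder
			HomeDir:  "/srv/share", // placeholder
		},
	},
}
// newer signature from this diff: SFTP, FTP, WebDAV and HTTP ports,
// SSH commands, then the FTPS/WebDAV/HTTPS cert and key paths
if err := svc.StartPortableMode(0, -1, -1, -1, nil, "", "", "", "", "", ""); err == nil {
	svc.Wait()
}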
10 cmd/portable_disabled.go Normal file

@@ -0,0 +1,10 @@
//go:build noportable
// +build noportable

package cmd

import "github.com/drakkan/sftpgo/v2/version"

func init() {
version.AddFeature("-portable")
}
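This stub is the noportable counterpart of portable.go above (tagged !noportable): whichever variant is compiled in registers its feature string, so the version output reflects the build. A sketch of the presumed complementary registration; the "+portable" call is an assumption based on this convention, it is not shown in the excerpt:

//go:build !noportable
// +build !noportable

package cmd

import "github.com/drakkan/sftpgo/v2/version"

func init() {
	// assumption: the portable build registers the positive feature flag
	version.AddFeature("+portable")
}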
35 cmd/reload_windows.go Normal file

@@ -0,0 +1,35 @@
package cmd

import (
"fmt"
"os"

"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/service"
)

var (
reloadCmd = &cobra.Command{
Use: "reload",
Short: "Reload the SFTPGo Windows Service sending a \"paramchange\" request",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
err := s.Reload()
if err != nil {
fmt.Printf("Error sending reload signal: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Reload signal sent!\r\n")
}
},
}
)

func init() {
serviceCmd.AddCommand(reloadCmd)
}
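reload_windows.go and the other Windows service commands added below (rotatelogs, status, stop, uninstall) all follow one template: wrap a bare service.Service in service.WindowsService and call a single control method. A condensed sketch of that shared shape, using only calls visible in these files:

// shared shape of the Windows service control subcommands in this diff;
// only the method called (Reload, RotateLogFile, Status, Stop, Uninstall)
// changes per command
func runServiceControl() {
	s := service.WindowsService{
		Service: service.Service{Shutdown: make(chan bool)},
	}
	if err := s.Reload(); err != nil {
		fmt.Printf("Error sending reload signal: %v\r\n", err)
		os.Exit(1)
	}
	fmt.Printf("Reload signal sent!\r\n")
}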
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package cmd

import (
@@ -21,10 +7,10 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"

"github.com/drakkan/sftpgo/v2/internal/config"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)

var (
@@ -39,13 +25,13 @@ configuration file and resets the provider by deleting all data and schemas.
This command is not supported for the memory provider.

Please take a look at the usage below to customize the options.`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to load configuration: %v", err)
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
@@ -56,7 +42,7 @@ Please take a look at the usage below to customize the options.`,
}
providerConf := config.GetProviderConf()
if !resetProviderForce {
logger.WarnToConsole("You are about to delete all the SFTPGo data for provider %q, config file: %q",
logger.WarnToConsole("You are about to delete all the SFTPGo data for provider %#v, config file: %#v",
providerConf.Driver, viper.ConfigFileUsed())
logger.WarnToConsole("Are you sure? (Y/n)")
reader := bufio.NewReader(os.Stdin)
@@ -70,7 +56,7 @@ Please take a look at the usage below to customize the options.`,
os.Exit(1)
}
}
logger.InfoToConsole("Resetting provider: %q, config file: %q", providerConf.Driver, viper.ConfigFileUsed())
logger.InfoToConsole("Resetting provider: %#v, config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.ResetDatabase(providerConf, configDir)
if err != nil {
logger.WarnToConsole("Error resetting provider: %v", err)
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package cmd

import (
@@ -21,10 +7,10 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"

"github.com/drakkan/sftpgo/v2/internal/config"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/util"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)

var (
@@ -37,17 +23,17 @@ configuration file and restore the provider schema and/or data to a previous ver
This command is not supported for the memory provider.

Please take a look at the usage below to customize the options.`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if revertProviderTargetVersion != 29 {
logger.WarnToConsole("Unsupported target version, 29 is the only supported one")
if revertProviderTargetVersion != 10 {
logger.WarnToConsole("Unsupported target version, 10 is the only supported one")
os.Exit(1)
}
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to load configuration: %v", err)
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
@@ -57,7 +43,7 @@ Please take a look at the usage below to customize the options.`,
os.Exit(1)
}
providerConf := config.GetProviderConf()
logger.InfoToConsole("Reverting provider: %q config file: %q target version %d", providerConf.Driver,
logger.InfoToConsole("Reverting provider: %#v config file: %#v target version %v", providerConf.Driver,
viper.ConfigFileUsed(), revertProviderTargetVersion)
err = dataprovider.RevertDatabase(providerConf, configDir, revertProviderTargetVersion)
if err != nil {
@@ -71,7 +57,7 @@ Please take a look at the usage below to customize the options.`,

func init() {
addConfigFlags(revertProviderCmd)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 29, `29 means the version supported in v2.6.x`)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 10, `10 means the version supported in v2.1.x`)

rootCmd.AddCommand(revertProviderCmd)
}
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

// Package cmd provides Command Line Interface support
package cmd

@@ -22,7 +8,7 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"

"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/version"
)

const (
@@ -40,8 +26,8 @@ const (
logMaxAgeKey = "log_max_age"
logCompressFlag = "log-compress"
logCompressKey = "log_compress"
logLevelFlag = "log-level"
logLevelKey = "log_level"
logVerboseFlag = "log-verbose"
logVerboseKey = "log_verbose"
logUTCTimeFlag = "log-utc-time"
logUTCTimeKey = "log_utc_time"
loadDataFromFlag = "loaddata-from"
@@ -52,8 +38,6 @@ const (
loadDataQuotaScanKey = "loaddata_scan"
loadDataCleanFlag = "loaddata-clean"
loadDataCleanKey = "loaddata_clean"
graceTimeFlag = "grace-time"
graceTimeKey = "grace_time"
defaultConfigDir = "."
defaultConfigFile = ""
defaultLogFile = "sftpgo.log"
@@ -61,13 +45,12 @@ const (
defaultLogMaxBackup = 5
defaultLogMaxAge = 28
defaultLogCompress = false
defaultLogLevel = "debug"
defaultLogVerbose = true
defaultLogUTCTime = false
defaultLoadDataFrom = ""
defaultLoadDataMode = 1
defaultLoadDataQuotaScan = 0
defaultLoadDataClean = false
defaultGraceTime = 0
)

var (
@@ -78,19 +61,16 @@ var (
logMaxBackups int
logMaxAge int
logCompress bool
logLevel string
logVerbose bool
logUTCTime bool
loadDataFrom string
loadDataMode int
loadDataQuotaScan int
loadDataClean bool
graceTime int
// used if awscontainer build tag is enabled
disableAWSInstallationCode bool

rootCmd = &cobra.Command{
Use: "sftpgo",
Short: "Full-featured and highly configurable file transfer server",
Short: "Fully featured and highly configurable SFTP server",
}
)

@@ -115,11 +95,11 @@ func addConfigFlags(cmd *cobra.Command) {
viper.SetDefault(configDirKey, defaultConfigDir)
viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR") //nolint:errcheck // err is not nil only if the key to bind is missing
cmd.Flags().StringVarP(&configDir, configDirFlag, "c", viper.GetString(configDirKey),
`Location of the config dir. This directory
`Location for the config dir. This directory
is used as the base for files with a relative
path, e.g. the private keys for the SFTP
server or the database file if you use a
file-based data provider.
path, eg. the private keys for the SFTP
server or the SQLite database if you use
SQLite as data provider.
The configuration file, if not explicitly set,
is looked for in this dir. We support reading
from JSON, TOML, YAML, HCL, envfile and Java
@@ -146,43 +126,6 @@ env var too.`)
viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag)) //nolint:errcheck
}

func addBaseLoadDataFlags(cmd *cobra.Command) {
viper.SetDefault(loadDataFromKey, defaultLoadDataFrom)
viper.BindEnv(loadDataFromKey, "SFTPGO_LOADDATA_FROM") //nolint:errcheck
cmd.Flags().StringVar(&loadDataFrom, loadDataFromFlag, viper.GetString(loadDataFromKey),
`Load users and folders from this file.
The file must be specified as absolute path
and it must contain a backup obtained using
the "dumpdata" REST API or compatible content.
This flag can be set using SFTPGO_LOADDATA_FROM
env var too.
`)
viper.BindPFlag(loadDataFromKey, cmd.Flags().Lookup(loadDataFromFlag)) //nolint:errcheck

viper.SetDefault(loadDataModeKey, defaultLoadDataMode)
viper.BindEnv(loadDataModeKey, "SFTPGO_LOADDATA_MODE") //nolint:errcheck
cmd.Flags().IntVar(&loadDataMode, loadDataModeFlag, viper.GetInt(loadDataModeKey),
`Restore mode for data to load:
0 - new users are added, existing users are
updated
1 - New users are added, existing users are
not modified
This flag can be set using SFTPGO_LOADDATA_MODE
env var too.
`)
viper.BindPFlag(loadDataModeKey, cmd.Flags().Lookup(loadDataModeFlag)) //nolint:errcheck

viper.SetDefault(loadDataCleanKey, defaultLoadDataClean)
viper.BindEnv(loadDataCleanKey, "SFTPGO_LOADDATA_CLEAN") //nolint:errcheck
cmd.Flags().BoolVar(&loadDataClean, loadDataCleanFlag, viper.GetBool(loadDataCleanKey),
`Determine if the loaddata-from file should
be removed after a successful load. This flag
can be set using SFTPGO_LOADDATA_CLEAN env var
too. (default "false")
`)
viper.BindPFlag(loadDataCleanKey, cmd.Flags().Lookup(loadDataCleanFlag)) //nolint:errcheck
}

func addServeFlags(cmd *cobra.Command) {
addConfigFlags(cmd)

@@ -233,17 +176,13 @@ It is unused if log-file-path is empty.
`)
viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag)) //nolint:errcheck

viper.SetDefault(logLevelKey, defaultLogLevel)
viper.BindEnv(logLevelKey, "SFTPGO_LOG_LEVEL") //nolint:errcheck
cmd.Flags().StringVar(&logLevel, logLevelFlag, viper.GetString(logLevelKey),
`Set the log level. Supported values:

debug, info, warn, error.

This flag can be set
using SFTPGO_LOG_LEVEL env var too.
viper.SetDefault(logVerboseKey, defaultLogVerbose)
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logLevelKey, cmd.Flags().Lookup(logLevelFlag)) //nolint:errcheck
viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck

viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
@@ -253,7 +192,30 @@ using SFTPGO_LOG_UTC_TIME env var too.
`)
viper.BindPFlag(logUTCTimeKey, cmd.Flags().Lookup(logUTCTimeFlag)) //nolint:errcheck

addBaseLoadDataFlags(cmd)
viper.SetDefault(loadDataFromKey, defaultLoadDataFrom)
viper.BindEnv(loadDataFromKey, "SFTPGO_LOADDATA_FROM") //nolint:errcheck
cmd.Flags().StringVar(&loadDataFrom, loadDataFromFlag, viper.GetString(loadDataFromKey),
`Load users and folders from this file.
The file must be specified as absolute path
and it must contain a backup obtained using
the "dumpdata" REST API or compatible content.
This flag can be set using SFTPGO_LOADDATA_FROM
env var too.
`)
viper.BindPFlag(loadDataFromKey, cmd.Flags().Lookup(loadDataFromFlag)) //nolint:errcheck

viper.SetDefault(loadDataModeKey, defaultLoadDataMode)
viper.BindEnv(loadDataModeKey, "SFTPGO_LOADDATA_MODE") //nolint:errcheck
cmd.Flags().IntVar(&loadDataMode, loadDataModeFlag, viper.GetInt(loadDataModeKey),
`Restore mode for data to load:
0 - new users are added, existing users are
updated
1 - New users are added, existing users are
not modified
This flag can be set using SFTPGO_LOADDATA_MODE
env var too.
`)
viper.BindPFlag(loadDataModeKey, cmd.Flags().Lookup(loadDataModeFlag)) //nolint:errcheck

viper.SetDefault(loadDataQuotaScanKey, defaultLoadDataQuotaScan)
viper.BindEnv(loadDataQuotaScanKey, "SFTPGO_LOADDATA_QUOTA_SCAN") //nolint:errcheck
@@ -267,19 +229,13 @@ env var too.
(default 0)`)
viper.BindPFlag(loadDataQuotaScanKey, cmd.Flags().Lookup(loadDataQuotaScanFlag)) //nolint:errcheck

viper.SetDefault(graceTimeKey, defaultGraceTime)
viper.BindEnv(graceTimeKey, "SFTPGO_GRACE_TIME") //nolint:errcheck
cmd.Flags().IntVar(&graceTime, graceTimeFlag, viper.GetInt(graceTimeKey),
`Graceful shutdown is an option to initiate a
shutdown without abrupt cancellation of the
currently ongoing client-initiated transfer
sessions.
This grace time defines the number of seconds
allowed for existing transfers to get
completed before shutting down.
A graceful shutdown is triggered by an
interrupt signal.
This flag can be set using SFTPGO_GRACE_TIME env
var too. 0 means disabled. (default 0)`)
viper.BindPFlag(graceTimeKey, cmd.Flags().Lookup(graceTimeFlag)) //nolint:errcheck
viper.SetDefault(loadDataCleanKey, defaultLoadDataClean)
viper.BindEnv(loadDataCleanKey, "SFTPGO_LOADDATA_CLEAN") //nolint:errcheck
cmd.Flags().BoolVar(&loadDataClean, loadDataCleanFlag, viper.GetBool(loadDataCleanKey),
`Determine if the loaddata-from file should
be removed after a successful load. This flag
can be set using SFTPGO_LOADDATA_CLEAN env var
too. (default "false")
`)
viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag)) //nolint:errcheck
}
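The flag helpers above repeat a four-step viper idiom for every option: set a default, bind an environment variable, register the cobra flag seeded with viper's current value, then bind the flag back so precedence is flag > env var > default. A condensed sketch of one such option, built from the calls shown above (cobra and viper imports assumed):

// condensed sketch of the recurring viper/cobra binding idiom
func addLogLevelFlag(cmd *cobra.Command, logLevel *string) {
	viper.SetDefault("log_level", "debug")
	viper.BindEnv("log_level", "SFTPGO_LOG_LEVEL") //nolint:errcheck
	cmd.Flags().StringVar(logLevel, "log-level", viper.GetString("log_level"),
		"Set the log level")
	viper.BindPFlag("log_level", cmd.Flags().Lookup("log-level")) //nolint:errcheck
}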
35 cmd/rotatelogs_windows.go Normal file

@@ -0,0 +1,35 @@
package cmd

import (
"fmt"
"os"

"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/service"
)

var (
rotateLogCmd = &cobra.Command{
Use: "rotatelogs",
Short: "Signal to the running service to rotate the logs",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
err := s.RotateLogFile()
if err != nil {
fmt.Printf("Error sending rotate log file signal to the service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Rotate log file signal sent!\r\n")
}
},
}
)

func init() {
serviceCmd.AddCommand(rotateLogCmd)
}
53 cmd/serve.go Normal file

@@ -0,0 +1,53 @@
package cmd

import (
"os"

"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)

var (
serveCmd = &cobra.Command{
Use: "serve",
Short: "Start the SFTPGo service",
Long: `To start the SFTPGo with the default values for the command line flags simply
use:

$ sftpgo serve

Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
service := service.Service{
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
LoadDataFrom: loadDataFrom,
LoadDataMode: loadDataMode,
LoadDataQuotaScan: loadDataQuotaScan,
LoadDataClean: loadDataClean,
Shutdown: make(chan bool),
}
if err := service.Start(); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
}
}
os.Exit(1)
},
}
)

func init() {
rootCmd.AddCommand(serveCmd)
addServeFlags(serveCmd)
}
16 cmd/service_windows.go Normal file

@@ -0,0 +1,16 @@
package cmd

import (
"github.com/spf13/cobra"
)

var (
serviceCmd = &cobra.Command{
Use: "service",
Short: "Manage the SFTPGo Windows Service",
}
)

func init() {
rootCmd.AddCommand(serviceCmd)
}
54 cmd/smtptest.go Normal file

@@ -0,0 +1,54 @@
package cmd

import (
"os"

"github.com/rs/zerolog"
"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)

var (
smtpTestRecipient string
smtpTestCmd = &cobra.Command{
Use: "smtptest",
Short: "Test the SMTP configuration",
Long: `SFTPGo will try to send a test email to the specified recipient.
If the SMTP configuration is correct you should receive this email.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.ErrorToConsole("unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
err = smtp.SendEmail(smtpTestRecipient, "SFTPGo - Testing Email Settings", "It appears your SFTPGo email is setup correctly!",
smtp.EmailContentTypeTextPlain)
if err != nil {
logger.WarnToConsole("Error sending email: %v", err)
os.Exit(1)
}
logger.InfoToConsole("No errors were reported while sending an email. Please check your inbox to make sure.")
},
}
)

func init() {
addConfigFlags(smtpTestCmd)
smtpTestCmd.Flags().StringVar(&smtpTestRecipient, "recipient", "", `email address to send the test e-mail to`)
smtpTestCmd.MarkFlagRequired("recipient") //nolint:errcheck

rootCmd.AddCommand(smtpTestCmd)
}
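Once the configuration and SMTP subsystem are initialized as above, the test message itself is a single call. A minimal sketch reusing the exact call from smtptest.go; the recipient address is a placeholder:

// sketch: send the same test message programmatically, after
// config.LoadConfig and smtpConfig.Initialize(configDir) as shown above
func sendTestMail() error {
	return smtp.SendEmail("ops@example.com", "SFTPGo - Testing Email Settings",
		"It appears your SFTPGo email is setup correctly!", smtp.EmailContentTypeTextPlain)
}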
52 cmd/start_windows.go Normal file

@@ -0,0 +1,52 @@
package cmd

import (
"fmt"
"os"
"path/filepath"

"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)

var (
startCmd = &cobra.Command{
Use: "start",
Short: "Start the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
configDir = util.CleanDirInput(configDir)
if !filepath.IsAbs(logFilePath) && util.IsFileInputValid(logFilePath) {
logFilePath = filepath.Join(configDir, logFilePath)
}
s := service.Service{
ConfigDir: configDir,
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
winService := service.WindowsService{
Service: s,
}
err := winService.RunService()
if err != nil {
fmt.Printf("Error starting service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service started!\r\n")
}
},
}
)

func init() {
serviceCmd.AddCommand(startCmd)
addServeFlags(startCmd)
}
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package cmd

import (
@@ -25,13 +11,13 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"

"github.com/drakkan/sftpgo/v2/internal/common"
"github.com/drakkan/sftpgo/v2/internal/config"
"github.com/drakkan/sftpgo/v2/internal/dataprovider"
"github.com/drakkan/sftpgo/v2/internal/logger"
"github.com/drakkan/sftpgo/v2/internal/plugin"
"github.com/drakkan/sftpgo/v2/internal/sftpd"
"github.com/drakkan/sftpgo/v2/internal/version"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
)

var (
@@ -51,25 +37,18 @@ Subsystem sftp sftpgo startsubsys

Command-line flags should be specified in the Subsystem declaration.
`,
Run: func(_ *cobra.Command, _ []string) {
Run: func(cmd *cobra.Command, args []string) {
logSender := "startsubsys"
connectionID := xid.New().String()
var zeroLogLevel zerolog.Level
switch logLevel {
case "info":
zeroLogLevel = zerolog.InfoLevel
case "warn":
zeroLogLevel = zerolog.WarnLevel
case "error":
zeroLogLevel = zerolog.ErrorLevel
default:
zeroLogLevel = zerolog.DebugLevel
logLevel := zerolog.DebugLevel
if !logVerbose {
logLevel = zerolog.InfoLevel
}
logger.SetLogTime(logUTCTime)
if logJournalD {
logger.InitJournalDLogger(zeroLogLevel)
logger.InitJournalDLogger(logLevel)
} else {
logger.InitStdErrLogger(zeroLogLevel)
logger.InitStdErrLogger(logLevel)
}
osUser, err := user.Current()
if err != nil {
@@ -78,13 +57,21 @@ Command-line flags should be specified in the Subsystem declaration.
}
username := osUser.Username
homedir := osUser.HomeDir
logger.Info(logSender, connectionID, "starting SFTPGo %v as subsystem, user %q home dir %q config dir %q base home dir %q",
logger.Info(logSender, connectionID, "starting SFTPGo %v as subsystem, user %#v home dir %#v config dir %#v base home dir %#v",
version.Get(), username, homedir, configDir, baseHomeDir)
err = config.LoadConfig(configDir, configFile)
if err != nil {
logger.Error(logSender, connectionID, "unable to load configuration: %v", err)
os.Exit(1)
}
commonConfig := config.GetCommonConfig()
// idle connection are managed externally
commonConfig.IdleTimeout = 0
config.SetCommonConfig(commonConfig)
if err := common.Initialize(config.GetCommonConfig()); err != nil {
logger.Error(logSender, connectionID, "%v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
if err := kmsConfig.Initialize(); err != nil {
logger.Error(logSender, connectionID, "unable to initialize KMS: %v", err)
@@ -96,12 +83,23 @@ Command-line flags should be specified in the Subsystem declaration.
logger.Error(logSender, "", "unable to initialize MFA: %v", err)
os.Exit(1)
}
if err := plugin.Initialize(config.GetPluginsConfig(), logVerbose); err != nil {
logger.Error(logSender, connectionID, "unable to initialize plugin system: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
dataProviderConf := config.GetProviderConf()
if dataProviderConf.Driver == dataprovider.SQLiteDataProviderName || dataProviderConf.Driver == dataprovider.BoltDataProviderName {
logger.Debug(logSender, connectionID, "data provider %q not supported in subsystem mode, using %q provider",
logger.Debug(logSender, connectionID, "data provider %#v not supported in subsystem mode, using %#v provider",
dataProviderConf.Driver, dataprovider.MemoryDataProviderName)
dataProviderConf.Driver = dataprovider.MemoryDataProviderName
dataProviderConf.Name = ""
dataProviderConf.PreferDatabaseCredentials = true
}
config.SetProviderConf(dataProviderConf)
err = dataprovider.Initialize(dataProviderConf, configDir, false)
@@ -109,42 +107,19 @@ Command-line flags should be specified in the Subsystem declaration.
logger.Error(logSender, connectionID, "unable to initialize the data provider: %v", err)
os.Exit(1)
}
if err := plugin.Initialize(config.GetPluginsConfig(), logLevel); err != nil {
logger.Error(logSender, connectionID, "unable to initialize plugin system: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir, false)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
commonConfig := config.GetCommonConfig()
// idle connection are managed externally
commonConfig.IdleTimeout = 0
config.SetCommonConfig(commonConfig)
if err := common.Initialize(config.GetCommonConfig(), dataProviderConf.GetShared()); err != nil {
logger.Error(logSender, connectionID, "%v", err)
os.Exit(1)
}
httpConfig := config.GetHTTPConfig()
if err := httpConfig.Initialize(configDir); err != nil {
logger.Error(logSender, connectionID, "unable to initialize http client: %v", err)
os.Exit(1)
}
commandConfig := config.GetCommandConfig()
if err := commandConfig.Initialize(); err != nil {
logger.Error(logSender, connectionID, "unable to initialize commands configuration: %v", err)
os.Exit(1)
}
user, err := dataprovider.UserExists(username, "")
user, err := dataprovider.UserExists(username)
if err == nil {
if user.HomeDir != filepath.Clean(homedir) && !preserveHomeDir {
// update the user
user.HomeDir = filepath.Clean(homedir)
err = dataprovider.UpdateUser(&user, dataprovider.ActionExecutorSystem, "", "")
err = dataprovider.UpdateUser(&user, dataprovider.ActionExecutorSystem, "")
if err != nil {
logger.Error(logSender, connectionID, "unable to update user %q: %v", username, err)
logger.Error(logSender, connectionID, "unable to update user %#v: %v", username, err)
os.Exit(1)
}
}
@@ -155,21 +130,16 @@ Command-line flags should be specified in the Subsystem declaration.
} else {
user.HomeDir = filepath.Clean(homedir)
}
logger.Debug(logSender, connectionID, "home dir for new user %q", user.HomeDir)
logger.Debug(logSender, connectionID, "home dir for new user %#v", user.HomeDir)
user.Password = connectionID
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
err = dataprovider.AddUser(&user, dataprovider.ActionExecutorSystem, "", "")
err = dataprovider.AddUser(&user, dataprovider.ActionExecutorSystem, "")
if err != nil {
logger.Error(logSender, connectionID, "unable to add user %q: %v", username, err)
logger.Error(logSender, connectionID, "unable to add user %#v: %v", username, err)
os.Exit(1)
}
}
err = user.LoadAndApplyGroupSettings()
if err != nil {
logger.Error(logSender, connectionID, "unable to apply group settings for user %q: %v", username, err)
os.Exit(1)
}
err = sftpd.ServeSubSystemConnection(&user, connectionID, os.Stdin, os.Stdout)
if err != nil && err != io.EOF {
logger.Warn(logSender, connectionID, "serving subsystem finished with error: %v", err)
@@ -203,17 +173,13 @@ error`)

addConfigFlags(subsystemCmd)

viper.SetDefault(logLevelKey, defaultLogLevel)
viper.BindEnv(logLevelKey, "SFTPGO_LOG_LEVEL") //nolint:errcheck
subsystemCmd.Flags().StringVar(&logLevel, logLevelFlag, viper.GetString(logLevelKey),
`Set the log level. Supported values:

debug, info, warn, error.

This flag can be set
using SFTPGO_LOG_LEVEL env var too.
viper.SetDefault(logVerboseKey, defaultLogVerbose)
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
subsystemCmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logLevelKey, subsystemCmd.Flags().Lookup(logLevelFlag)) //nolint:errcheck
viper.BindPFlag(logVerboseKey, subsystemCmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck

viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
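With the "Subsystem sftp sftpgo startsubsys" declaration shown above, OpenSSH spawns one sftpgo process per SFTP connection and hands it the session on stdin/stdout. A fragment-level sketch of the per-connection flow, using only calls visible in this diff (UserExists here uses the single-argument signature from one side of the diff):

// per-connection flow from startsubsys: look up or create the OS user's
// SFTPGo account, then serve exactly one session over the stdio pipes
user, err := dataprovider.UserExists(username)
if err != nil {
	// user-creation path omitted; see the Run handler above
}
err = sftpd.ServeSubSystemConnection(&user, connectionID, os.Stdin, os.Stdout)
if err != nil && err != io.EOF {
	logger.Warn(logSender, connectionID, "serving subsystem finished with error: %v", err)
}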
35 cmd/status_windows.go Normal file

@@ -0,0 +1,35 @@
package cmd

import (
"fmt"
"os"

"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/service"
)

var (
statusCmd = &cobra.Command{
Use: "status",
Short: "Retrieve the status for the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
status, err := s.Status()
if err != nil {
fmt.Printf("Error querying service status: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service status: %#v\r\n", status.String())
}
},
}
)

func init() {
serviceCmd.AddCommand(statusCmd)
}
35 cmd/stop_windows.go Normal file

@@ -0,0 +1,35 @@
package cmd

import (
"fmt"
"os"

"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/service"
)

var (
stopCmd = &cobra.Command{
Use: "stop",
Short: "Stop the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
err := s.Stop()
if err != nil {
fmt.Printf("Error stopping service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service stopped!\r\n")
}
},
}
)

func init() {
serviceCmd.AddCommand(stopCmd)
}
35 cmd/uninstall_windows.go Normal file

@@ -0,0 +1,35 @@
package cmd

import (
"fmt"
"os"

"github.com/spf13/cobra"

"github.com/drakkan/sftpgo/v2/service"
)

var (
uninstallCmd = &cobra.Command{
Use: "uninstall",
Short: "Uninstall the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
err := s.Uninstall()
if err != nil {
fmt.Printf("Error removing service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service uninstalled\r\n")
}
},
}
)

func init() {
serviceCmd.AddCommand(uninstallCmd)
}
261 common/actions.go Normal file
@ -0,0 +1,261 @@
package common

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
	"net/url"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	"github.com/sftpgo/sdk"
	"github.com/sftpgo/sdk/plugin/notifier"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/httpclient"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/plugin"
	"github.com/drakkan/sftpgo/v2/util"
)

var (
	errUnconfiguredAction    = errors.New("no hook is configured for this action")
	errNoHook                = errors.New("unable to execute action, no hook defined")
	errUnexpectedHTTResponse = errors.New("unexpected HTTP response code")
)

// ProtocolActions defines the actions to execute on file operations and SSH commands
type ProtocolActions struct {
	// Valid values are download, upload, pre-delete, delete, rename, ssh_cmd. Empty slice to disable
	ExecuteOn []string `json:"execute_on" mapstructure:"execute_on"`
	// Actions to be performed synchronously.
	// The pre-delete action is always executed synchronously while the other ones are asynchronous.
	// Executing an action synchronously means that SFTPGo will not return a result code to the client
	// (which is waiting for it) until your hook has completed its execution.
	ExecuteSync []string `json:"execute_sync" mapstructure:"execute_sync"`
	// Absolute path to an external program or an HTTP URL
	Hook string `json:"hook" mapstructure:"hook"`
}
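For context, these fields are typically populated from the "actions" section of the configuration file. A minimal in-code sketch, mirroring how the test suite below assigns Config.Actions (the hook path is a hypothetical placeholder, not taken from the source):

	// Illustrative configuration sketch: notify upload and pre-delete events,
	// and wait for the hook to finish before answering upload requests.
	Config.Actions = ProtocolActions{
		ExecuteOn:   []string{"upload", "pre-delete"},
		ExecuteSync: []string{"upload"},
		Hook:        "/usr/local/bin/sftpgo-hook", // assumed path, for illustration only
	}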
var actionHandler ActionHandler = &defaultActionHandler{}

// InitializeActionHandler lets the user choose an action handler implementation.
//
// Do NOT call this function after application initialization.
func InitializeActionHandler(handler ActionHandler) {
	actionHandler = handler
}

func handleUnconfiguredPreAction(operation string) error {
	// for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
	// Other pre actions will deny the operation on error, so if we have no configuration we must return
	// a nil error
	if operation == operationPreDelete {
		return errUnconfiguredAction
	}
	return nil
}

// ExecutePreAction executes a pre-* action and returns the result
func ExecutePreAction(conn *BaseConnection, operation, filePath, virtualPath string, fileSize int64, openFlags int) error {
	var event *notifier.FsEvent
	hasNotifiersPlugin := plugin.Handler.HasNotifiers()
	hasHook := util.IsStringInSlice(operation, Config.Actions.ExecuteOn)
	if !hasHook && !hasNotifiersPlugin {
		return handleUnconfiguredPreAction(operation)
	}
	event = newActionNotification(&conn.User, operation, filePath, virtualPath, "", "", "",
		conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, openFlags, nil)
	if hasNotifiersPlugin {
		plugin.Handler.NotifyFsEvent(event)
	}
	if !hasHook {
		return handleUnconfiguredPreAction(operation)
	}
	return actionHandler.Handle(event)
}

// ExecuteActionNotification executes the defined hook, if any, for the specified action
func ExecuteActionNotification(conn *BaseConnection, operation, filePath, virtualPath, target, virtualTarget, sshCmd string,
	fileSize int64, err error,
) {
	hasNotifiersPlugin := plugin.Handler.HasNotifiers()
	hasHook := util.IsStringInSlice(operation, Config.Actions.ExecuteOn)
	if !hasHook && !hasNotifiersPlugin {
		return
	}
	notification := newActionNotification(&conn.User, operation, filePath, virtualPath, target, virtualTarget, sshCmd,
		conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, 0, err)
	if hasNotifiersPlugin {
		plugin.Handler.NotifyFsEvent(notification)
	}

	if hasHook {
		if util.IsStringInSlice(operation, Config.Actions.ExecuteSync) {
			actionHandler.Handle(notification) //nolint:errcheck
			return
		}

		go actionHandler.Handle(notification) //nolint:errcheck
	}
}

// ActionHandler handles a notification for a Protocol Action.
type ActionHandler interface {
	Handle(notification *notifier.FsEvent) error
}
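Because the handler is pluggable, replacing the default hook dispatch only requires satisfying this one-method interface and registering the implementation during startup. A minimal sketch under those assumptions (the type name and log message are illustrative, not part of the source):

	// loggingActionHandler is a hypothetical ActionHandler that only records events.
	type loggingActionHandler struct{}

	func (h *loggingActionHandler) Handle(event *notifier.FsEvent) error {
		logger.Debug(event.Protocol, "", "action %v on %v by %v",
			event.Action, event.VirtualPath, event.Username)
		return nil
	}

	// Register it once, before the servers start accepting connections
	// (from another package: common.InitializeActionHandler):
	// InitializeActionHandler(&loggingActionHandler{})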
func newActionNotification(
	user *dataprovider.User,
	operation, filePath, virtualPath, target, virtualTarget, sshCmd, protocol, ip, sessionID string,
	fileSize int64,
	openFlags int,
	err error,
) *notifier.FsEvent {
	var bucket, endpoint string
	status := 1

	fsConfig := user.GetFsConfigForPath(virtualPath)

	switch fsConfig.Provider {
	case sdk.S3FilesystemProvider:
		bucket = fsConfig.S3Config.Bucket
		endpoint = fsConfig.S3Config.Endpoint
	case sdk.GCSFilesystemProvider:
		bucket = fsConfig.GCSConfig.Bucket
	case sdk.AzureBlobFilesystemProvider:
		bucket = fsConfig.AzBlobConfig.Container
		if fsConfig.AzBlobConfig.Endpoint != "" {
			endpoint = fsConfig.AzBlobConfig.Endpoint
		}
	case sdk.SFTPFilesystemProvider:
		endpoint = fsConfig.SFTPConfig.Endpoint
	}

	if err == ErrQuotaExceeded {
		status = 3
	} else if err != nil {
		status = 2
	}

	return &notifier.FsEvent{
		Action:            operation,
		Username:          user.Username,
		Path:              filePath,
		TargetPath:        target,
		VirtualPath:       virtualPath,
		VirtualTargetPath: virtualTarget,
		SSHCmd:            sshCmd,
		FileSize:          fileSize,
		FsProvider:        int(fsConfig.Provider),
		Bucket:            bucket,
		Endpoint:          endpoint,
		Status:            status,
		Protocol:          protocol,
		IP:                ip,
		SessionID:         sessionID,
		OpenFlags:         openFlags,
		Timestamp:         time.Now().UnixNano(),
	}
}

type defaultActionHandler struct{}

func (h *defaultActionHandler) Handle(event *notifier.FsEvent) error {
	if !util.IsStringInSlice(event.Action, Config.Actions.ExecuteOn) {
		return errUnconfiguredAction
	}

	if Config.Actions.Hook == "" {
		logger.Warn(event.Protocol, "", "Unable to send notification, no hook is defined")

		return errNoHook
	}

	if strings.HasPrefix(Config.Actions.Hook, "http") {
		return h.handleHTTP(event)
	}

	return h.handleCommand(event)
}
func (h *defaultActionHandler) handleHTTP(event *notifier.FsEvent) error {
	u, err := url.Parse(Config.Actions.Hook)
	if err != nil {
		logger.Error(event.Protocol, "", "Invalid hook %#v for operation %#v: %v",
			Config.Actions.Hook, event.Action, err)
		return err
	}

	startTime := time.Now()
	respCode := 0

	var b bytes.Buffer
	_ = json.NewEncoder(&b).Encode(event)

	resp, err := httpclient.RetryablePost(Config.Actions.Hook, "application/json", &b)
	if err == nil {
		respCode = resp.StatusCode
		resp.Body.Close()

		if respCode != http.StatusOK {
			err = errUnexpectedHTTResponse
		}
	}

	logger.Debug(event.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
		event.Action, u.Redacted(), respCode, time.Since(startTime), err)

	return err
}
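An HTTP hook therefore only has to accept the JSON POST and reply with status 200; anything else is reported as a failure. A minimal receiver sketch using only the standard library (the listen address is an assumption; decoding into a generic map avoids guessing the exact FsEvent JSON keys, which live in the sdk):

	// Hypothetical hook endpoint for the JSON POST issued above.
	package main

	import (
		"encoding/json"
		"log"
		"net/http"
	)

	func main() {
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			var event map[string]any
			if err := json.NewDecoder(r.Body).Decode(&event); err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			log.Printf("received fs event: %v", event)
			w.WriteHeader(http.StatusOK) // any other status makes SFTPGo report an error
		})
		log.Fatal(http.ListenAndServe("127.0.0.1:8000", nil))
	}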
func (h *defaultActionHandler) handleCommand(event *notifier.FsEvent) error {
	if !filepath.IsAbs(Config.Actions.Hook) {
		err := fmt.Errorf("invalid notification command %#v", Config.Actions.Hook)
		logger.Warn(event.Protocol, "", "unable to execute notification command: %v", err)

		return err
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, Config.Actions.Hook)
	cmd.Env = append(os.Environ(), notificationAsEnvVars(event)...)

	startTime := time.Now()
	err := cmd.Run()

	logger.Debug(event.Protocol, "", "executed command %#v, elapsed: %v, error: %v",
		Config.Actions.Hook, time.Since(startTime), err)

	return err
}
func notificationAsEnvVars(event *notifier.FsEvent) []string {
	return []string{
		fmt.Sprintf("SFTPGO_ACTION=%v", event.Action),
		fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", event.Username),
		fmt.Sprintf("SFTPGO_ACTION_PATH=%v", event.Path),
		fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", event.TargetPath),
		fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_PATH=%v", event.VirtualPath),
		fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_TARGET=%v", event.VirtualTargetPath),
		fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", event.SSHCmd),
		fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", event.FileSize),
		fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", event.FsProvider),
		fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", event.Bucket),
		fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", event.Endpoint),
		fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", event.Status),
		fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", event.Protocol),
		fmt.Sprintf("SFTPGO_ACTION_IP=%v", event.IP),
		fmt.Sprintf("SFTPGO_ACTION_SESSION_ID=%v", event.SessionID),
		fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", event.OpenFlags),
		fmt.Sprintf("SFTPGO_ACTION_TIMESTAMP=%v", event.Timestamp),
	}
}
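A command hook receives the same event through these environment variables. A sketch of a hook executable consuming a few of them (illustrative only; the 30-second limit comes from the context timeout above, and the exit status is what a synchronous hook reports back as success or failure):

	// Hypothetical command hook reading the SFTPGO_ACTION_* environment set above.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		fmt.Printf("action=%s user=%s vpath=%s status=%s\n",
			os.Getenv("SFTPGO_ACTION"),
			os.Getenv("SFTPGO_ACTION_USERNAME"),
			os.Getenv("SFTPGO_ACTION_VIRTUAL_PATH"),
			os.Getenv("SFTPGO_ACTION_STATUS"))
		os.Exit(0) // non-zero would mark the action as failed; must finish within 30s
	}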
@ -1,17 +1,3 @@
-// Copyright (C) 2019 Nicola Murino
-//
-// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published
-// by the Free Software Foundation, version 3.
-//
-// This program is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
-//
-// You should have received a copy of the GNU Affero General Public License
-// along with this program. If not, see <https://www.gnu.org/licenses/>.
-
 package common

 import (
@ -29,13 +15,13 @@ import (
 	"github.com/sftpgo/sdk/plugin/notifier"
 	"github.com/stretchr/testify/assert"

-	"github.com/drakkan/sftpgo/v2/internal/dataprovider"
-	"github.com/drakkan/sftpgo/v2/internal/plugin"
-	"github.com/drakkan/sftpgo/v2/internal/vfs"
+	"github.com/drakkan/sftpgo/v2/dataprovider"
+	"github.com/drakkan/sftpgo/v2/plugin"
+	"github.com/drakkan/sftpgo/v2/vfs"
 )

 func TestNewActionNotification(t *testing.T) {
-	user := dataprovider.User{
+	user := &dataprovider.User{
 		BaseUser: sdk.BaseUser{
 			Username: "username",
 		},
@ -63,62 +49,45 @@ func TestNewActionNotification(t *testing.T) {
 			Endpoint: "sftpendpoint",
 		},
 	}
-	user.FsConfig.HTTPConfig = vfs.HTTPFsConfig{
-		BaseHTTPFsConfig: sdk.BaseHTTPFsConfig{
-			Endpoint: "httpendpoint",
-		},
-	}
-	c := NewBaseConnection("id", ProtocolSSH, "", "", user)
 	sessionID := xid.New().String()
-	a := newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
-		123, 0, c.getNotificationStatus(errors.New("fake error")), 0, nil)
+	a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
+		123, 0, errors.New("fake error"))
 	assert.Equal(t, user.Username, a.Username)
 	assert.Equal(t, 0, len(a.Bucket))
 	assert.Equal(t, 0, len(a.Endpoint))
 	assert.Equal(t, 2, a.Status)

 	user.FsConfig.Provider = sdk.S3FilesystemProvider
-	a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", sessionID,
-		123, 0, c.getNotificationStatus(nil), 0, nil)
+	a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", sessionID,
+		123, 0, nil)
 	assert.Equal(t, "s3bucket", a.Bucket)
 	assert.Equal(t, "endpoint", a.Endpoint)
 	assert.Equal(t, 1, a.Status)

 	user.FsConfig.Provider = sdk.GCSFilesystemProvider
-	a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
-		123, 0, c.getNotificationStatus(ErrQuotaExceeded), 0, nil)
+	a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
+		123, 0, ErrQuotaExceeded)
 	assert.Equal(t, "gcsbucket", a.Bucket)
 	assert.Equal(t, 0, len(a.Endpoint))
 	assert.Equal(t, 3, a.Status)
-	a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
-		123, 0, c.getNotificationStatus(fmt.Errorf("wrapper quota error: %w", ErrQuotaExceeded)), 0, nil)
-	assert.Equal(t, "gcsbucket", a.Bucket)
-	assert.Equal(t, 0, len(a.Endpoint))
-	assert.Equal(t, 3, a.Status)
-
-	user.FsConfig.Provider = sdk.HTTPFilesystemProvider
-	a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", sessionID,
-		123, 0, c.getNotificationStatus(nil), 0, nil)
-	assert.Equal(t, "httpendpoint", a.Endpoint)
-	assert.Equal(t, 1, a.Status)

 	user.FsConfig.Provider = sdk.AzureBlobFilesystemProvider
-	a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
-		123, 0, c.getNotificationStatus(nil), 0, nil)
+	a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
+		123, 0, nil)
 	assert.Equal(t, "azcontainer", a.Bucket)
 	assert.Equal(t, "azendpoint", a.Endpoint)
 	assert.Equal(t, 1, a.Status)

-	a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
-		123, os.O_APPEND, c.getNotificationStatus(nil), 0, nil)
+	a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
+		123, os.O_APPEND, nil)
 	assert.Equal(t, "azcontainer", a.Bucket)
 	assert.Equal(t, "azendpoint", a.Endpoint)
 	assert.Equal(t, 1, a.Status)
 	assert.Equal(t, os.O_APPEND, a.OpenFlags)

 	user.FsConfig.Provider = sdk.SFTPFilesystemProvider
-	a = newActionNotification(&user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
-		123, 0, c.getNotificationStatus(nil), 0, nil)
+	a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
+		123, 0, nil)
 	assert.Equal(t, "sftpendpoint", a.Endpoint)
 }
@ -135,22 +104,19 @@ func TestActionHTTP(t *testing.T) {
 		},
 	}
 	a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "",
-		xid.New().String(), 123, 0, 1, 0, nil)
-	status, err := actionHandler.Handle(a)
+		xid.New().String(), 123, 0, nil)
+	err := actionHandler.Handle(a)
 	assert.NoError(t, err)
-	assert.Equal(t, 1, status)

 	Config.Actions.Hook = "http://invalid:1234"
-	status, err = actionHandler.Handle(a)
+	err = actionHandler.Handle(a)
 	assert.Error(t, err)
-	assert.Equal(t, 1, status)

 	Config.Actions.Hook = fmt.Sprintf("http://%v/404", httpAddr)
-	status, err = actionHandler.Handle(a)
+	err = actionHandler.Handle(a)
 	if assert.Error(t, err) {
 		assert.EqualError(t, err, errUnexpectedHTTResponse.Error())
 	}
-	assert.Equal(t, 1, status)

 	Config.Actions = actionsCopy
 }
@ -175,17 +141,14 @@ func TestActionCMD(t *testing.T) {
 	}
 	sessionID := shortuuid.New()
 	a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
-		123, 0, 1, 0, map[string]string{"key": "value"})
-	status, err := actionHandler.Handle(a)
+		123, 0, nil)
+	err = actionHandler.Handle(a)
 	assert.NoError(t, err)
-	assert.Equal(t, 1, status)

 	c := NewBaseConnection("id", ProtocolSFTP, "", "", *user)
-	err = ExecuteActionNotification(c, OperationSSHCmd, "path", "vpath", "target", "vtarget", "sha1sum", 0, nil, 0, nil)
-	assert.NoError(t, err)
+	ExecuteActionNotification(c, OperationSSHCmd, "path", "vpath", "target", "vtarget", "sha1sum", 0, nil)

-	err = ExecuteActionNotification(c, operationDownload, "path", "vpath", "", "", "", 0, nil, 0, nil)
-	assert.NoError(t, err)
+	ExecuteActionNotification(c, operationDownload, "path", "vpath", "", "", "", 0, nil)

 	Config.Actions = actionsCopy
 }
@ -208,33 +171,30 @@ func TestWrongActions(t *testing.T) {
 	}

 	a := newActionNotification(user, operationUpload, "", "", "", "", "", ProtocolSFTP, "", xid.New().String(),
-		123, 0, 1, 0, nil)
-	status, err := actionHandler.Handle(a)
+		123, 0, nil)
+	err := actionHandler.Handle(a)
 	assert.Error(t, err, "action with bad command must fail")
-	assert.Equal(t, 1, status)

 	a.Action = operationDelete
-	status, err = actionHandler.Handle(a)
-	assert.NoError(t, err)
-	assert.Equal(t, 0, status)
+	err = actionHandler.Handle(a)
+	assert.EqualError(t, err, errUnconfiguredAction.Error())

 	Config.Actions.Hook = "http://foo\x7f.com/"
 	a.Action = operationUpload
-	status, err = actionHandler.Handle(a)
+	err = actionHandler.Handle(a)
 	assert.Error(t, err, "action with bad url must fail")
-	assert.Equal(t, 1, status)

 	Config.Actions.Hook = ""
-	status, err = actionHandler.Handle(a)
-	assert.NoError(t, err)
-	assert.Equal(t, 0, status)
+	err = actionHandler.Handle(a)
+	if assert.Error(t, err) {
+		assert.EqualError(t, err, errNoHook.Error())
+	}

 	Config.Actions.Hook = "relative path"
-	status, err = actionHandler.Handle(a)
+	err = actionHandler.Handle(a)
 	if assert.Error(t, err) {
-		assert.EqualError(t, err, fmt.Sprintf("invalid notification command %q", Config.Actions.Hook))
+		assert.EqualError(t, err, fmt.Sprintf("invalid notification command %#v", Config.Actions.Hook))
 	}
-	assert.Equal(t, 1, status)

 	Config.Actions = actionsCopy
 }
@ -249,7 +209,7 @@ func TestPreDeleteAction(t *testing.T) {
 	assert.NoError(t, err)
 	Config.Actions = ProtocolActions{
 		ExecuteOn: []string{operationPreDelete},
-		Hook:      "missing hook",
+		Hook:      hookCmd,
 	}
 	homeDir := filepath.Join(os.TempDir(), "test_user")
 	err = os.MkdirAll(homeDir, os.ModePerm)
@ -262,7 +222,7 @@ func TestPreDeleteAction(t *testing.T) {
 	}
 	user.Permissions = make(map[string][]string)
 	user.Permissions["/"] = []string{dataprovider.PermAny}
-	fs := vfs.NewOsFs("id", homeDir, "", nil)
+	fs := vfs.NewOsFs("id", homeDir, "")
 	c := NewBaseConnection("id", ProtocolSFTP, "", "", user)

 	testfile := filepath.Join(user.HomeDir, "testfile")
@ -271,12 +231,8 @@ func TestPreDeleteAction(t *testing.T) {
 	info, err := os.Stat(testfile)
 	assert.NoError(t, err)
 	err = c.RemoveFile(fs, testfile, "testfile", info)
-	assert.ErrorIs(t, err, c.GetPermissionDeniedError())
-	assert.FileExists(t, testfile)
-	Config.Actions.Hook = hookCmd
-	err = c.RemoveFile(fs, testfile, "testfile", info)
 	assert.NoError(t, err)
-	assert.NoFileExists(t, testfile)
+	assert.FileExists(t, testfile)

 	os.RemoveAll(homeDir)
@ -295,22 +251,19 @@ func TestUnconfiguredHook(t *testing.T) {
 			Type: "notifier",
 		},
 	}
-	err := plugin.Initialize(pluginsConfig, "debug")
+	err := plugin.Initialize(pluginsConfig, true)
 	assert.Error(t, err)
 	assert.True(t, plugin.Handler.HasNotifiers())

 	c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
-	status, err := ExecutePreAction(c, OperationPreDownload, "", "", 0, 0)
+	err = ExecutePreAction(c, OperationPreDownload, "", "", 0, 0)
 	assert.NoError(t, err)
-	assert.Equal(t, status, 0)
-	status, err = ExecutePreAction(c, operationPreDelete, "", "", 0, 0)
-	assert.NoError(t, err)
-	assert.Equal(t, status, 0)
+	err = ExecutePreAction(c, operationPreDelete, "", "", 0, 0)
+	assert.ErrorIs(t, err, errUnconfiguredAction)

-	err = ExecuteActionNotification(c, operationDownload, "", "", "", "", "", 0, nil, 0, nil)
-	assert.NoError(t, err)
+	ExecuteActionNotification(c, operationDownload, "", "", "", "", "", 0, nil)

-	err = plugin.Initialize(nil, "debug")
+	err = plugin.Initialize(nil, true)
 	assert.NoError(t, err)
 	assert.False(t, plugin.Handler.HasNotifiers())
@ -321,10 +274,10 @@ type actionHandlerStub struct {
 	called bool
 }

-func (h *actionHandlerStub) Handle(_ *notifier.FsEvent) (int, error) {
+func (h *actionHandlerStub) Handle(event *notifier.FsEvent) error {
 	h.called = true

-	return 1, nil
+	return nil
 }

 func TestInitializeActionHandler(t *testing.T) {
@ -335,8 +288,8 @@ func TestInitializeActionHandler(t *testing.T) {
 		InitializeActionHandler(&defaultActionHandler{})
 	})

-	status, err := actionHandler.Handle(&notifier.FsEvent{})
+	err := actionHandler.Handle(&notifier.FsEvent{})

 	assert.NoError(t, err)
 	assert.True(t, handler.called)
-	assert.Equal(t, 1, status)
 }
51 common/clientsmap.go Normal file
@ -0,0 +1,51 @@
package common

import (
	"sync"
	"sync/atomic"

	"github.com/drakkan/sftpgo/v2/logger"
)

// clientsMap is a struct containing the map of the connected clients
type clientsMap struct {
	totalConnections int32
	mu               sync.RWMutex
	clients          map[string]int
}

func (c *clientsMap) add(source string) {
	atomic.AddInt32(&c.totalConnections, 1)

	c.mu.Lock()
	defer c.mu.Unlock()

	c.clients[source]++
}

func (c *clientsMap) remove(source string) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if val, ok := c.clients[source]; ok {
		atomic.AddInt32(&c.totalConnections, -1)
		c.clients[source]--
		if val > 1 {
			return
		}
		delete(c.clients, source)
	} else {
		logger.Warn(logSender, "", "cannot remove client %v it is not mapped", source)
	}
}

func (c *clientsMap) getTotal() int32 {
	return atomic.LoadInt32(&c.totalConnections)
}

func (c *clientsMap) getTotalFrom(source string) int {
	c.mu.RLock()
	defer c.mu.RUnlock()

	return c.clients[source]
}
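The map keeps a per-source connection count while the global total is tracked with an atomic counter. A short in-package sketch of the intended semantics (illustrative only; the type is unexported, so this only works from within package common):

	// Hypothetical in-package usage of clientsMap.
	m := clientsMap{clients: make(map[string]int)}
	m.add("192.168.1.10")
	m.add("192.168.1.10")
	m.add("10.0.0.3")
	total := m.getTotal()                   // 3 connections overall
	perIP := m.getTotalFrom("192.168.1.10") // 2 from the same source
	m.remove("192.168.1.10")                // decrements; the key is deleted once its count reaches zero
	_, _ = total, perIP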
@ -1,17 +1,3 @@
-// Copyright (C) 2019 Nicola Murino
-//
-// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published
-// by the Free Software Foundation, version 3.
-//
-// This program is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
-//
-// You should have received a copy of the GNU Affero General Public License
-// along with this program. If not, see <https://www.gnu.org/licenses/>.
-
 package common

 import (
1073 common/common.go Normal file
File diff suppressed because it is too large
910 common/common_test.go Normal file
@ -0,0 +1,910 @@
package common

import (
	"encoding/json"
	"fmt"
	"net"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strings"
	"sync/atomic"
	"testing"
	"time"

	"github.com/alexedwards/argon2id"
	"github.com/sftpgo/sdk"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"golang.org/x/crypto/bcrypt"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/kms"
	"github.com/drakkan/sftpgo/v2/util"
	"github.com/drakkan/sftpgo/v2/vfs"
)

const (
	logSenderTest    = "common_test"
	httpAddr         = "127.0.0.1:9999"
	configDir        = ".."
	osWindows        = "windows"
	userTestUsername = "common_test_username"
)

type fakeConnection struct {
	*BaseConnection
	command string
}

func (c *fakeConnection) AddUser(user dataprovider.User) error {
	_, err := user.GetFilesystem(c.GetID())
	if err != nil {
		return err
	}
	c.BaseConnection.User = user
	return nil
}

func (c *fakeConnection) Disconnect() error {
	Connections.Remove(c.GetID())
	return nil
}

func (c *fakeConnection) GetClientVersion() string {
	return ""
}

func (c *fakeConnection) GetCommand() string {
	return c.command
}

func (c *fakeConnection) GetLocalAddress() string {
	return ""
}

func (c *fakeConnection) GetRemoteAddress() string {
	return ""
}

type customNetConn struct {
	net.Conn
	id       string
	isClosed bool
}

func (c *customNetConn) Close() error {
	Connections.RemoveSSHConnection(c.id)
	c.isClosed = true
	return c.Conn.Close()
}

func TestSSHConnections(t *testing.T) {
	conn1, conn2 := net.Pipe()
	now := time.Now()
	sshConn1 := NewSSHConnection("id1", conn1)
	sshConn2 := NewSSHConnection("id2", conn2)
	sshConn3 := NewSSHConnection("id3", conn2)
	assert.Equal(t, "id1", sshConn1.GetID())
	assert.Equal(t, "id2", sshConn2.GetID())
	assert.Equal(t, "id3", sshConn3.GetID())
	sshConn1.UpdateLastActivity()
	assert.GreaterOrEqual(t, sshConn1.GetLastActivity().UnixNano(), now.UnixNano())
	Connections.AddSSHConnection(sshConn1)
	Connections.AddSSHConnection(sshConn2)
	Connections.AddSSHConnection(sshConn3)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 3)
	Connections.RUnlock()
	Connections.RemoveSSHConnection(sshConn1.id)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 2)
	assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
	assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
	Connections.RUnlock()
	Connections.RemoveSSHConnection(sshConn1.id)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 2)
	assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
	assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
	Connections.RUnlock()
	Connections.RemoveSSHConnection(sshConn2.id)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 1)
	assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
	Connections.RUnlock()
	Connections.RemoveSSHConnection(sshConn3.id)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 0)
	Connections.RUnlock()
	assert.NoError(t, sshConn1.Close())
	assert.NoError(t, sshConn2.Close())
	assert.NoError(t, sshConn3.Close())
}
func TestDefenderIntegration(t *testing.T) {
	// by default defender is nil
	configCopy := Config

	ip := "127.1.1.1"

	assert.Nil(t, ReloadDefender())

	AddDefenderEvent(ip, HostEventNoLoginTried)
	assert.False(t, IsBanned(ip))

	banTime, err := GetDefenderBanTime(ip)
	assert.NoError(t, err)
	assert.Nil(t, banTime)
	assert.False(t, DeleteDefenderHost(ip))
	score, err := GetDefenderScore(ip)
	assert.NoError(t, err)
	assert.Equal(t, 0, score)
	_, err = GetDefenderHost(ip)
	assert.Error(t, err)
	hosts, err := GetDefenderHosts()
	assert.NoError(t, err)
	assert.Nil(t, hosts)

	Config.DefenderConfig = DefenderConfig{
		Enabled:          true,
		Driver:           DefenderDriverProvider,
		BanTime:          10,
		BanTimeIncrement: 50,
		Threshold:        0,
		ScoreInvalid:     2,
		ScoreValid:       1,
		ObservationTime:  15,
		EntriesSoftLimit: 100,
		EntriesHardLimit: 150,
	}
	err = Initialize(Config)
	// ScoreInvalid cannot be greater than threshold
	assert.Error(t, err)
	Config.DefenderConfig.Driver = "unsupported"
	err = Initialize(Config)
	if assert.Error(t, err) {
		assert.Contains(t, err.Error(), "unsupported defender driver")
	}
	Config.DefenderConfig.Driver = DefenderDriverMemory
	err = Initialize(Config)
	// ScoreInvalid cannot be greater than threshold
	assert.Error(t, err)
	Config.DefenderConfig.Threshold = 3
	err = Initialize(Config)
	assert.NoError(t, err)
	assert.Nil(t, ReloadDefender())

	AddDefenderEvent(ip, HostEventNoLoginTried)
	assert.False(t, IsBanned(ip))
	score, err = GetDefenderScore(ip)
	assert.NoError(t, err)
	assert.Equal(t, 2, score)
	entry, err := GetDefenderHost(ip)
	assert.NoError(t, err)
	asJSON, err := json.Marshal(&entry)
	assert.NoError(t, err)
	assert.Equal(t, `{"id":"3132372e312e312e31","ip":"127.1.1.1","score":2}`, string(asJSON), "entry %v", entry)
	assert.True(t, DeleteDefenderHost(ip))
	banTime, err = GetDefenderBanTime(ip)
	assert.NoError(t, err)
	assert.Nil(t, banTime)

	AddDefenderEvent(ip, HostEventLoginFailed)
	AddDefenderEvent(ip, HostEventNoLoginTried)
	assert.True(t, IsBanned(ip))
	score, err = GetDefenderScore(ip)
	assert.NoError(t, err)
	assert.Equal(t, 0, score)
	banTime, err = GetDefenderBanTime(ip)
	assert.NoError(t, err)
	assert.NotNil(t, banTime)
	hosts, err = GetDefenderHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 1)
	entry, err = GetDefenderHost(ip)
	assert.NoError(t, err)
	assert.False(t, entry.BanTime.IsZero())
	assert.True(t, DeleteDefenderHost(ip))
	hosts, err = GetDefenderHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 0)
	banTime, err = GetDefenderBanTime(ip)
	assert.NoError(t, err)
	assert.Nil(t, banTime)
	assert.False(t, DeleteDefenderHost(ip))

	Config = configCopy
}
func TestRateLimitersIntegration(t *testing.T) {
	// by default no rate limiter is configured
	configCopy := Config

	Config.RateLimitersConfig = []RateLimiterConfig{
		{
			Average:   100,
			Period:    10,
			Burst:     5,
			Type:      int(rateLimiterTypeGlobal),
			Protocols: rateLimiterProtocolValues,
		},
		{
			Average:                1,
			Period:                 1000,
			Burst:                  1,
			Type:                   int(rateLimiterTypeSource),
			Protocols:              []string{ProtocolWebDAV, ProtocolWebDAV, ProtocolFTP},
			GenerateDefenderEvents: true,
			EntriesSoftLimit:       100,
			EntriesHardLimit:       150,
		},
	}
	err := Initialize(Config)
	assert.Error(t, err)
	Config.RateLimitersConfig[0].Period = 1000
	Config.RateLimitersConfig[0].AllowList = []string{"1.1.1", "1.1.1.2"}
	err = Initialize(Config)
	if assert.Error(t, err) {
		assert.Contains(t, err.Error(), "unable to parse rate limiter allow list")
	}
	Config.RateLimitersConfig[0].AllowList = []string{"172.16.24.7"}
	Config.RateLimitersConfig[1].AllowList = []string{"172.16.0.0/16"}

	err = Initialize(Config)
	assert.NoError(t, err)

	assert.Len(t, rateLimiters, 4)
	assert.Len(t, rateLimiters[ProtocolSSH], 1)
	assert.Len(t, rateLimiters[ProtocolFTP], 2)
	assert.Len(t, rateLimiters[ProtocolWebDAV], 2)
	assert.Len(t, rateLimiters[ProtocolHTTP], 1)

	source1 := "127.1.1.1"
	source2 := "127.1.1.2"
	source3 := "172.16.24.7" // whitelisted

	_, err = LimitRate(ProtocolSSH, source1)
	assert.NoError(t, err)
	_, err = LimitRate(ProtocolFTP, source1)
	assert.NoError(t, err)
	// sleep to allow the configured burst to be added back to the token bucket.
	// This sleep is not enough to replenish the per-source burst
	time.Sleep(20 * time.Millisecond)
	_, err = LimitRate(ProtocolWebDAV, source2)
	assert.NoError(t, err)
	_, err = LimitRate(ProtocolFTP, source1)
	assert.Error(t, err)
	_, err = LimitRate(ProtocolWebDAV, source2)
	assert.Error(t, err)
	_, err = LimitRate(ProtocolSSH, source1)
	assert.NoError(t, err)
	_, err = LimitRate(ProtocolSSH, source2)
	assert.NoError(t, err)
	for i := 0; i < 10; i++ {
		_, err = LimitRate(ProtocolWebDAV, source3)
		assert.NoError(t, err)
	}

	Config = configCopy
}
func TestMaxConnections(t *testing.T) {
	oldValue := Config.MaxTotalConnections
	perHost := Config.MaxPerHostConnections

	Config.MaxPerHostConnections = 0

	ipAddr := "192.168.7.8"
	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))

	Config.MaxTotalConnections = 1
	Config.MaxPerHostConnections = perHost

	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
	c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	Connections.Add(fakeConn)
	assert.Len(t, Connections.GetStats(), 1)
	assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))

	res := Connections.Close(fakeConn.GetID())
	assert.True(t, res)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)

	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
	Connections.AddClientConnection(ipAddr)
	Connections.AddClientConnection(ipAddr)
	assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
	Connections.RemoveClientConnection(ipAddr)
	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
	Connections.RemoveClientConnection(ipAddr)

	Config.MaxTotalConnections = oldValue
}

func TestMaxConnectionPerHost(t *testing.T) {
	oldValue := Config.MaxPerHostConnections

	Config.MaxPerHostConnections = 2

	ipAddr := "192.168.9.9"
	Connections.AddClientConnection(ipAddr)
	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))

	Connections.AddClientConnection(ipAddr)
	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))

	Connections.AddClientConnection(ipAddr)
	assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
	assert.Equal(t, int32(3), Connections.GetClientConnections())

	Connections.RemoveClientConnection(ipAddr)
	Connections.RemoveClientConnection(ipAddr)
	Connections.RemoveClientConnection(ipAddr)

	assert.Equal(t, int32(0), Connections.GetClientConnections())

	Config.MaxPerHostConnections = oldValue
}

func TestIdleConnections(t *testing.T) {
	configCopy := Config

	Config.IdleTimeout = 1
	err := Initialize(Config)
	assert.NoError(t, err)

	conn1, conn2 := net.Pipe()
	customConn1 := &customNetConn{
		Conn: conn1,
		id:   "id1",
	}
	customConn2 := &customNetConn{
		Conn: conn2,
		id:   "id2",
	}
	sshConn1 := NewSSHConnection(customConn1.id, customConn1)
	sshConn2 := NewSSHConnection(customConn2.id, customConn2)

	username := "test_user"
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: username,
		},
	}
	c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, "", "", user)
	c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	// both ssh connections are expired but they should get removed only
	// if there is no associated connection
	sshConn1.lastActivity = c.lastActivity
	sshConn2.lastActivity = c.lastActivity
	Connections.AddSSHConnection(sshConn1)
	Connections.Add(fakeConn)
	assert.Equal(t, Connections.GetActiveSessions(username), 1)
	c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, "", "", user)
	fakeConn = &fakeConnection{
		BaseConnection: c,
	}
	Connections.AddSSHConnection(sshConn2)
	Connections.Add(fakeConn)
	assert.Equal(t, Connections.GetActiveSessions(username), 2)

	cFTP := NewBaseConnection("id2", ProtocolFTP, "", "", dataprovider.User{})
	cFTP.lastActivity = time.Now().UnixNano()
	fakeConn = &fakeConnection{
		BaseConnection: cFTP,
	}
	Connections.Add(fakeConn)
	assert.Equal(t, Connections.GetActiveSessions(username), 2)
	assert.Len(t, Connections.GetStats(), 3)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 2)
	Connections.RUnlock()

	startIdleTimeoutTicker(100 * time.Millisecond)
	assert.Eventually(t, func() bool { return Connections.GetActiveSessions(username) == 1 }, 1*time.Second, 200*time.Millisecond)
	assert.Eventually(t, func() bool {
		Connections.RLock()
		defer Connections.RUnlock()
		return len(Connections.sshConnections) == 1
	}, 1*time.Second, 200*time.Millisecond)
	stopIdleTimeoutTicker()
	assert.Len(t, Connections.GetStats(), 2)
	c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
	cFTP.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
	sshConn2.lastActivity = c.lastActivity
	startIdleTimeoutTicker(100 * time.Millisecond)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 1*time.Second, 200*time.Millisecond)
	assert.Eventually(t, func() bool {
		Connections.RLock()
		defer Connections.RUnlock()
		return len(Connections.sshConnections) == 0
	}, 1*time.Second, 200*time.Millisecond)
	assert.Equal(t, int32(0), Connections.GetClientConnections())
	stopIdleTimeoutTicker()
	assert.True(t, customConn1.isClosed)
	assert.True(t, customConn2.isClosed)

	Config = configCopy
}
func TestCloseConnection(t *testing.T) {
	c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	assert.True(t, Connections.IsNewConnectionAllowed("127.0.0.1"))
	Connections.Add(fakeConn)
	assert.Len(t, Connections.GetStats(), 1)
	res := Connections.Close(fakeConn.GetID())
	assert.True(t, res)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
	res = Connections.Close(fakeConn.GetID())
	assert.False(t, res)
	Connections.Remove(fakeConn.GetID())
}

func TestSwapConnection(t *testing.T) {
	c := NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{})
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	Connections.Add(fakeConn)
	if assert.Len(t, Connections.GetStats(), 1) {
		assert.Equal(t, "", Connections.GetStats()[0].Username)
	}
	c = NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: userTestUsername,
		},
	})
	fakeConn = &fakeConnection{
		BaseConnection: c,
	}
	err := Connections.Swap(fakeConn)
	assert.NoError(t, err)
	if assert.Len(t, Connections.GetStats(), 1) {
		assert.Equal(t, userTestUsername, Connections.GetStats()[0].Username)
	}
	res := Connections.Close(fakeConn.GetID())
	assert.True(t, res)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
	err = Connections.Swap(fakeConn)
	assert.Error(t, err)
}

func TestAtomicUpload(t *testing.T) {
	configCopy := Config

	Config.UploadMode = UploadModeStandard
	assert.False(t, Config.IsAtomicUploadEnabled())
	Config.UploadMode = UploadModeAtomic
	assert.True(t, Config.IsAtomicUploadEnabled())
	Config.UploadMode = UploadModeAtomicWithResume
	assert.True(t, Config.IsAtomicUploadEnabled())

	Config = configCopy
}

func TestConnectionStatus(t *testing.T) {
	username := "test_user"
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: username,
		},
	}
	fs := vfs.NewOsFs("", os.TempDir(), "")
	c1 := NewBaseConnection("id1", ProtocolSFTP, "", "", user)
	fakeConn1 := &fakeConnection{
		BaseConnection: c1,
	}
	t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
	t1.BytesReceived = 123
	t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
	t2.BytesSent = 456
	c2 := NewBaseConnection("id2", ProtocolSSH, "", "", user)
	fakeConn2 := &fakeConnection{
		BaseConnection: c2,
		command:        "md5sum",
	}
	c3 := NewBaseConnection("id3", ProtocolWebDAV, "", "", user)
	fakeConn3 := &fakeConnection{
		BaseConnection: c3,
		command:        "PROPFIND",
	}
	t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
	Connections.Add(fakeConn1)
	Connections.Add(fakeConn2)
	Connections.Add(fakeConn3)

	stats := Connections.GetStats()
	assert.Len(t, stats, 3)
	for _, stat := range stats {
		assert.Equal(t, stat.Username, username)
		assert.True(t, strings.HasPrefix(stat.GetConnectionInfo(), stat.Protocol))
		assert.True(t, strings.HasPrefix(stat.GetConnectionDuration(), "00:"))
		if stat.ConnectionID == "SFTP_id1" {
			assert.Len(t, stat.Transfers, 2)
			assert.Greater(t, len(stat.GetTransfersAsString()), 0)
			for _, tr := range stat.Transfers {
				if tr.OperationType == operationDownload {
					assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "DL"))
				} else if tr.OperationType == operationUpload {
					assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "UL"))
				}
			}
		} else if stat.ConnectionID == "DAV_id3" {
			assert.Len(t, stat.Transfers, 1)
			assert.Greater(t, len(stat.GetTransfersAsString()), 0)
		} else {
			assert.Equal(t, 0, len(stat.GetTransfersAsString()))
		}
	}

	err := t1.Close()
	assert.NoError(t, err)
	err = t2.Close()
	assert.NoError(t, err)

	err = fakeConn3.SignalTransfersAbort()
	assert.NoError(t, err)
	assert.Equal(t, int32(1), atomic.LoadInt32(&t3.AbortTransfer))
	err = t3.Close()
	assert.NoError(t, err)
	err = fakeConn3.SignalTransfersAbort()
	assert.Error(t, err)

	Connections.Remove(fakeConn1.GetID())
	stats = Connections.GetStats()
	assert.Len(t, stats, 2)
	assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
	assert.Equal(t, fakeConn2.GetID(), stats[1].ConnectionID)
	Connections.Remove(fakeConn2.GetID())
	stats = Connections.GetStats()
	assert.Len(t, stats, 1)
	assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
	Connections.Remove(fakeConn3.GetID())
	stats = Connections.GetStats()
	assert.Len(t, stats, 0)
}
func TestQuotaScans(t *testing.T) {
	username := "username"
	assert.True(t, QuotaScans.AddUserQuotaScan(username))
	assert.False(t, QuotaScans.AddUserQuotaScan(username))
	usersScans := QuotaScans.GetUsersQuotaScans()
	if assert.Len(t, usersScans, 1) {
		assert.Equal(t, usersScans[0].Username, username)
		assert.Equal(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
		QuotaScans.UserScans[0].StartTime = 0
		assert.NotEqual(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
	}

	assert.True(t, QuotaScans.RemoveUserQuotaScan(username))
	assert.False(t, QuotaScans.RemoveUserQuotaScan(username))
	assert.Len(t, QuotaScans.GetUsersQuotaScans(), 0)
	assert.Len(t, usersScans, 1)

	folderName := "folder"
	assert.True(t, QuotaScans.AddVFolderQuotaScan(folderName))
	assert.False(t, QuotaScans.AddVFolderQuotaScan(folderName))
	if assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 1) {
		assert.Equal(t, QuotaScans.GetVFoldersQuotaScans()[0].Name, folderName)
	}

	assert.True(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
	assert.False(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
	assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 0)
}

func TestProxyProtocolVersion(t *testing.T) {
	c := Configuration{
		ProxyProtocol: 0,
	}
	_, err := c.GetProxyListener(nil)
	if assert.Error(t, err) {
		assert.Contains(t, err.Error(), "proxy protocol not configured")
	}
	c.ProxyProtocol = 1
	proxyListener, err := c.GetProxyListener(nil)
	assert.NoError(t, err)
	assert.Nil(t, proxyListener.Policy)

	c.ProxyProtocol = 2
	proxyListener, err = c.GetProxyListener(nil)
	assert.NoError(t, err)
	assert.NotNil(t, proxyListener.Policy)

	c.ProxyProtocol = 1
	c.ProxyAllowed = []string{"invalid"}
	_, err = c.GetProxyListener(nil)
	assert.Error(t, err)

	c.ProxyProtocol = 2
	_, err = c.GetProxyListener(nil)
	assert.Error(t, err)
}

func TestStartupHook(t *testing.T) {
	Config.StartupHook = ""

	assert.NoError(t, Config.ExecuteStartupHook())

	Config.StartupHook = "http://foo\x7f.com/startup"
	assert.Error(t, Config.ExecuteStartupHook())

	Config.StartupHook = "http://invalid:5678/"
	assert.Error(t, Config.ExecuteStartupHook())

	Config.StartupHook = fmt.Sprintf("http://%v", httpAddr)
	assert.NoError(t, Config.ExecuteStartupHook())

	Config.StartupHook = "invalidhook"
	assert.Error(t, Config.ExecuteStartupHook())

	if runtime.GOOS != osWindows {
		hookCmd, err := exec.LookPath("true")
		assert.NoError(t, err)
		Config.StartupHook = hookCmd
		assert.NoError(t, Config.ExecuteStartupHook())
	}

	Config.StartupHook = ""
}
func TestPostDisconnectHook(t *testing.T) {
	Config.PostDisconnectHook = "http://127.0.0.1/"

	remoteAddr := "127.0.0.1:80"
	Config.checkPostDisconnectHook(remoteAddr, ProtocolHTTP, "", "", time.Now())
	Config.checkPostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

	Config.PostDisconnectHook = "http://bar\x7f.com/"
	Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

	Config.PostDisconnectHook = fmt.Sprintf("http://%v", httpAddr)
	Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

	Config.PostDisconnectHook = "relativePath"
	Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

	if runtime.GOOS == osWindows {
		Config.PostDisconnectHook = "C:\\a\\bad\\command"
		Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
	} else {
		Config.PostDisconnectHook = "/invalid/path"
		Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

		hookCmd, err := exec.LookPath("true")
		assert.NoError(t, err)
		Config.PostDisconnectHook = hookCmd
		Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
	}
	Config.PostDisconnectHook = ""
}

func TestPostConnectHook(t *testing.T) {
	Config.PostConnectHook = ""

	ipAddr := "127.0.0.1"

	assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))

	Config.PostConnectHook = "http://foo\x7f.com/"
	assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))

	Config.PostConnectHook = "http://invalid:1234/"
	assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))

	Config.PostConnectHook = fmt.Sprintf("http://%v/404", httpAddr)
	assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))

	Config.PostConnectHook = fmt.Sprintf("http://%v", httpAddr)
	assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))

	Config.PostConnectHook = "invalid"
	assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))

	if runtime.GOOS == osWindows {
		Config.PostConnectHook = "C:\\bad\\command"
		assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
	} else {
		Config.PostConnectHook = "/invalid/path"
		assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))

		hookCmd, err := exec.LookPath("true")
		assert.NoError(t, err)
		Config.PostConnectHook = hookCmd
		assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
	}

	Config.PostConnectHook = ""
}
func TestCryptoConvertFileInfo(t *testing.T) {
	name := "name"
	fs, err := vfs.NewCryptFs("connID1", os.TempDir(), "", vfs.CryptFsConfig{
		Passphrase: kms.NewPlainSecret("secret"),
	})
	require.NoError(t, err)
	cryptFs := fs.(*vfs.CryptFs)
	info := vfs.NewFileInfo(name, true, 48, time.Now(), false)
	assert.Equal(t, info, cryptFs.ConvertFileInfo(info))
	info = vfs.NewFileInfo(name, false, 48, time.Now(), false)
	assert.NotEqual(t, info.Size(), cryptFs.ConvertFileInfo(info).Size())
	info = vfs.NewFileInfo(name, false, 33, time.Now(), false)
	assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
	info = vfs.NewFileInfo(name, false, 1, time.Now(), false)
	assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
}

func TestFolderCopy(t *testing.T) {
	folder := vfs.BaseVirtualFolder{
		ID:              1,
		Name:            "name",
		MappedPath:      filepath.Clean(os.TempDir()),
		UsedQuotaSize:   4096,
		UsedQuotaFiles:  2,
		LastQuotaUpdate: util.GetTimeAsMsSinceEpoch(time.Now()),
		Users:           []string{"user1", "user2"},
	}
	folderCopy := folder.GetACopy()
	folder.ID = 2
	folder.Users = []string{"user3"}
	require.Len(t, folderCopy.Users, 2)
	require.True(t, util.IsStringInSlice("user1", folderCopy.Users))
	require.True(t, util.IsStringInSlice("user2", folderCopy.Users))
	require.Equal(t, int64(1), folderCopy.ID)
	require.Equal(t, folder.Name, folderCopy.Name)
	require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
	require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
	require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
	require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)

	folder.FsConfig = vfs.Filesystem{
		CryptConfig: vfs.CryptFsConfig{
			Passphrase: kms.NewPlainSecret("crypto secret"),
		},
	}
	folderCopy = folder.GetACopy()
	folder.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
	require.Len(t, folderCopy.Users, 1)
	require.True(t, util.IsStringInSlice("user3", folderCopy.Users))
	require.Equal(t, int64(2), folderCopy.ID)
	require.Equal(t, folder.Name, folderCopy.Name)
	require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
	require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
	require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
	require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
	require.Equal(t, "crypto secret", folderCopy.FsConfig.CryptConfig.Passphrase.GetPayload())
}

func TestCachedFs(t *testing.T) {
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			HomeDir: filepath.Clean(os.TempDir()),
		},
	}
	conn := NewBaseConnection("id", ProtocolSFTP, "", "", user)
	// changing the user should not affect the connection
	user.HomeDir = filepath.Join(os.TempDir(), "temp")
	err := os.Mkdir(user.HomeDir, os.ModePerm)
	assert.NoError(t, err)
	fs, err := user.GetFilesystem("")
	assert.NoError(t, err)
	p, err := fs.ResolvePath("/")
	assert.NoError(t, err)
	assert.Equal(t, user.GetHomeDir(), p)

	_, p, err = conn.GetFsAndResolvedPath("/")
	assert.NoError(t, err)
	assert.Equal(t, filepath.Clean(os.TempDir()), p)
	user.FsConfig.Provider = sdk.S3FilesystemProvider
	_, err = user.GetFilesystem("")
	assert.Error(t, err)
	conn.User.FsConfig.Provider = sdk.S3FilesystemProvider
	_, p, err = conn.GetFsAndResolvedPath("/")
	assert.NoError(t, err)
	assert.Equal(t, filepath.Clean(os.TempDir()), p)
	err = os.Remove(user.HomeDir)
	assert.NoError(t, err)
}

func TestParseAllowedIPAndRanges(t *testing.T) {
	_, err := util.ParseAllowedIPAndRanges([]string{"1.1.1.1", "not an ip"})
	assert.Error(t, err)
	_, err = util.ParseAllowedIPAndRanges([]string{"1.1.1.5", "192.168.1.0/240"})
	assert.Error(t, err)
	allow, err := util.ParseAllowedIPAndRanges([]string{"192.168.1.2", "172.16.0.0/24"})
	assert.NoError(t, err)
	assert.True(t, allow[0](net.ParseIP("192.168.1.2")))
	assert.False(t, allow[0](net.ParseIP("192.168.2.2")))
	assert.True(t, allow[1](net.ParseIP("172.16.0.1")))
	assert.False(t, allow[1](net.ParseIP("172.16.1.1")))
}

func TestHideConfidentialData(t *testing.T) {
	for _, provider := range []sdk.FilesystemProvider{sdk.LocalFilesystemProvider,
		sdk.CryptedFilesystemProvider, sdk.S3FilesystemProvider, sdk.GCSFilesystemProvider,
		sdk.AzureBlobFilesystemProvider, sdk.SFTPFilesystemProvider,
	} {
		u := dataprovider.User{
			FsConfig: vfs.Filesystem{
				Provider: provider,
			},
		}
		u.PrepareForRendering()
		f := vfs.BaseVirtualFolder{
			FsConfig: vfs.Filesystem{
				Provider: provider,
			},
		}
		f.PrepareForRendering()
	}
	a := dataprovider.Admin{}
	a.HideConfidentialData()
}
func TestUserPerms(t *testing.T) {
	u := dataprovider.User{}
	u.Permissions = make(map[string][]string)
	u.Permissions["/"] = []string{dataprovider.PermUpload, dataprovider.PermDelete}
	assert.True(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermDelete}, "/"))
	assert.False(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermCreateDirs}, "/"))
	u.Permissions["/"] = []string{dataprovider.PermDelete, dataprovider.PermCreateDirs}
	assert.True(t, u.HasPermsDeleteAll("/"))
	assert.False(t, u.HasPermsRenameAll("/"))
	u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermDeleteFiles, dataprovider.PermRenameDirs}
	assert.True(t, u.HasPermsDeleteAll("/"))
	assert.False(t, u.HasPermsRenameAll("/"))
	u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermRenameFiles, dataprovider.PermRenameDirs}
	assert.False(t, u.HasPermsDeleteAll("/"))
	assert.True(t, u.HasPermsRenameAll("/"))
}

func BenchmarkBcryptHashing(b *testing.B) {
	bcryptPassword := "bcryptpassword"
	for i := 0; i < b.N; i++ {
		_, err := bcrypt.GenerateFromPassword([]byte(bcryptPassword), 10)
		if err != nil {
			panic(err)
		}
	}
}

func BenchmarkCompareBcryptPassword(b *testing.B) {
	bcryptPassword := "$2a$10$lPDdnDimJZ7d5/GwL6xDuOqoZVRXok6OHHhivCnanWUtcgN0Zafki"
	for i := 0; i < b.N; i++ {
		err := bcrypt.CompareHashAndPassword([]byte(bcryptPassword), []byte("password"))
		if err != nil {
			panic(err)
		}
	}
}

func BenchmarkArgon2Hashing(b *testing.B) {
	argonPassword := "argon2password"
	for i := 0; i < b.N; i++ {
		_, err := argon2id.CreateHash(argonPassword, argon2id.DefaultParams)
		if err != nil {
			panic(err)
		}
	}
}

func BenchmarkCompareArgon2Password(b *testing.B) {
	argon2Password := "$argon2id$v=19$m=65536,t=1,p=2$aOoAOdAwvzhOgi7wUFjXlw$wn/y37dBWdKHtPXHR03nNaKHWKPXyNuVXOknaU+YZ+s"
	for i := 0; i < b.N; i++ {
		_, err := argon2id.ComparePasswordAndHash("password", argon2Password)
		if err != nil {
			panic(err)
		}
	}
}
1263  common/connection.go  Normal file
File diff suppressed because it is too large.

447  common/connection_test.go  Normal file
@@ -0,0 +1,447 @@
package common

import (
    "os"
    "path"
    "path/filepath"
    "runtime"
    "testing"
    "time"

    "github.com/pkg/sftp"
    "github.com/rs/xid"
    "github.com/sftpgo/sdk"
    "github.com/stretchr/testify/assert"

    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/kms"
    "github.com/drakkan/sftpgo/v2/vfs"
)

// MockOsFs mockable OsFs
type MockOsFs struct {
    vfs.Fs
    hasVirtualFolders bool
}

// Name returns the name for the Fs implementation
func (fs *MockOsFs) Name() string {
    return "mockOsFs"
}

// HasVirtualFolders returns true if folders are emulated
func (fs *MockOsFs) HasVirtualFolders() bool {
    return fs.hasVirtualFolders
}

func (fs *MockOsFs) IsUploadResumeSupported() bool {
    return !fs.hasVirtualFolders
}

func (fs *MockOsFs) Chtimes(name string, atime, mtime time.Time, isUploading bool) error {
    return vfs.ErrVfsUnsupported
}

func newMockOsFs(hasVirtualFolders bool, connectionID, rootDir string) vfs.Fs {
    return &MockOsFs{
        Fs:                vfs.NewOsFs(connectionID, rootDir, ""),
        hasVirtualFolders: hasVirtualFolders,
    }
}

func TestRemoveErrors(t *testing.T) {
    mappedPath := filepath.Join(os.TempDir(), "map")
    homePath := filepath.Join(os.TempDir(), "home")

    user := dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username: "remove_errors_user",
            HomeDir:  homePath,
        },
        VirtualFolders: []vfs.VirtualFolder{
            {
                BaseVirtualFolder: vfs.BaseVirtualFolder{
                    Name:       filepath.Base(mappedPath),
                    MappedPath: mappedPath,
                },
                VirtualPath: "/virtualpath",
            },
        },
    }
    user.Permissions = make(map[string][]string)
    user.Permissions["/"] = []string{dataprovider.PermAny}
    fs := vfs.NewOsFs("", os.TempDir(), "")
    conn := NewBaseConnection("", ProtocolFTP, "", "", user)
    err := conn.IsRemoveDirAllowed(fs, mappedPath, "/virtualpath1")
    if assert.Error(t, err) {
        assert.Contains(t, err.Error(), "permission denied")
    }
    err = conn.RemoveFile(fs, filepath.Join(homePath, "missing_file"), "/missing_file",
        vfs.NewFileInfo("info", false, 100, time.Now(), false))
    assert.Error(t, err)
}

func TestSetStatMode(t *testing.T) {
    oldSetStatMode := Config.SetstatMode
    Config.SetstatMode = 1

    fakePath := "fake path"
    user := dataprovider.User{
        BaseUser: sdk.BaseUser{
            HomeDir: os.TempDir(),
        },
    }
    user.Permissions = make(map[string][]string)
    user.Permissions["/"] = []string{dataprovider.PermAny}
    fs := newMockOsFs(true, "", user.GetHomeDir())
    conn := NewBaseConnection("", ProtocolWebDAV, "", "", user)
    err := conn.handleChmod(fs, fakePath, fakePath, nil)
    assert.NoError(t, err)
    err = conn.handleChown(fs, fakePath, fakePath, nil)
    assert.NoError(t, err)
    err = conn.handleChtimes(fs, fakePath, fakePath, nil)
    assert.NoError(t, err)

    Config.SetstatMode = 2
    err = conn.handleChmod(fs, fakePath, fakePath, nil)
    assert.NoError(t, err)
    err = conn.handleChtimes(fs, fakePath, fakePath, &StatAttributes{
        Atime: time.Now(),
        Mtime: time.Now(),
    })
    assert.NoError(t, err)

    Config.SetstatMode = oldSetStatMode
}

func TestRecursiveRenameWalkError(t *testing.T) {
    fs := vfs.NewOsFs("", os.TempDir(), "")
    conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
    err := conn.checkRecursiveRenameDirPermissions(fs, fs, "/source", "/target")
    assert.ErrorIs(t, err, os.ErrNotExist)
}

func TestCrossRenameFsErrors(t *testing.T) {
    fs := vfs.NewOsFs("", os.TempDir(), "")
    conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
    res := conn.hasSpaceForCrossRename(fs, vfs.QuotaCheckResult{}, 1, "missingsource")
    assert.False(t, res)
    if runtime.GOOS != osWindows {
        dirPath := filepath.Join(os.TempDir(), "d")
        err := os.Mkdir(dirPath, os.ModePerm)
        assert.NoError(t, err)
        err = os.Chmod(dirPath, 0001)
        assert.NoError(t, err)

        res = conn.hasSpaceForCrossRename(fs, vfs.QuotaCheckResult{}, 1, dirPath)
        assert.False(t, res)

        err = os.Chmod(dirPath, os.ModePerm)
        assert.NoError(t, err)
        err = os.Remove(dirPath)
        assert.NoError(t, err)
    }
}

func TestRenameVirtualFolders(t *testing.T) {
    vdir := "/avdir"
    u := dataprovider.User{}
    u.VirtualFolders = append(u.VirtualFolders, vfs.VirtualFolder{
        BaseVirtualFolder: vfs.BaseVirtualFolder{
            Name:       "name",
            MappedPath: "mappedPath",
        },
        VirtualPath: vdir,
    })
    fs := vfs.NewOsFs("", os.TempDir(), "")
    conn := NewBaseConnection("", ProtocolFTP, "", "", u)
    res := conn.isRenamePermitted(fs, fs, "source", "target", vdir, "vdirtarget", nil)
    assert.False(t, res)
}

func TestRenamePerms(t *testing.T) {
    src := "source"
    target := "target"
    u := dataprovider.User{}
    u.Permissions = map[string][]string{}
    u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermCreateSymlinks,
        dataprovider.PermDeleteFiles}
    conn := NewBaseConnection("", ProtocolSFTP, "", "", u)
    assert.False(t, conn.hasRenamePerms(src, target, nil))
    u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermCreateSymlinks,
        dataprovider.PermDeleteFiles, dataprovider.PermDeleteDirs}
    assert.True(t, conn.hasRenamePerms(src, target, nil))
    u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteFiles,
        dataprovider.PermDeleteDirs}
    assert.False(t, conn.hasRenamePerms(src, target, nil))

    info := vfs.NewFileInfo(src, true, 0, time.Now(), false)
    u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteFiles}
    assert.False(t, conn.hasRenamePerms(src, target, info))
    u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteDirs}
    assert.True(t, conn.hasRenamePerms(src, target, info))
    u.Permissions["/"] = []string{dataprovider.PermDownload, dataprovider.PermUpload, dataprovider.PermDeleteDirs}
    assert.False(t, conn.hasRenamePerms(src, target, info))
}

func TestUpdateQuotaAfterRename(t *testing.T) {
    user := dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username: userTestUsername,
            HomeDir:  filepath.Join(os.TempDir(), "home"),
        },
    }
    mappedPath := filepath.Join(os.TempDir(), "vdir")
    user.Permissions = make(map[string][]string)
    user.Permissions["/"] = []string{dataprovider.PermAny}
    user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
        BaseVirtualFolder: vfs.BaseVirtualFolder{
            MappedPath: mappedPath,
        },
        VirtualPath: "/vdir",
        QuotaFiles:  -1,
        QuotaSize:   -1,
    })
    user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
        BaseVirtualFolder: vfs.BaseVirtualFolder{
            MappedPath: mappedPath,
        },
        VirtualPath: "/vdir1",
        QuotaFiles:  -1,
        QuotaSize:   -1,
    })
    err := os.MkdirAll(user.GetHomeDir(), os.ModePerm)
    assert.NoError(t, err)
    err = os.MkdirAll(mappedPath, os.ModePerm)
    assert.NoError(t, err)
    fs, err := user.GetFilesystem("id")
    assert.NoError(t, err)
    c := NewBaseConnection("", ProtocolSFTP, "", "", user)
    request := sftp.NewRequest("Rename", "/testfile")
    if runtime.GOOS != osWindows {
        request.Filepath = "/dir"
        request.Target = path.Join("/vdir", "dir")
        testDirPath := filepath.Join(mappedPath, "dir")
        err := os.MkdirAll(testDirPath, os.ModePerm)
        assert.NoError(t, err)
        err = os.Chmod(testDirPath, 0001)
        assert.NoError(t, err)
        err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, testDirPath, 0)
        assert.Error(t, err)
        err = os.Chmod(testDirPath, os.ModePerm)
        assert.NoError(t, err)
    }
    testFile1 := "/testfile1"
    request.Target = testFile1
    request.Filepath = path.Join("/vdir", "file")
    err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 0)
    assert.Error(t, err)
    err = os.WriteFile(filepath.Join(mappedPath, "file"), []byte("test content"), os.ModePerm)
    assert.NoError(t, err)
    request.Filepath = testFile1
    request.Target = path.Join("/vdir", "file")
    err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
    assert.NoError(t, err)
    err = os.WriteFile(filepath.Join(user.GetHomeDir(), "testfile1"), []byte("test content"), os.ModePerm)
    assert.NoError(t, err)
    request.Target = testFile1
    request.Filepath = path.Join("/vdir", "file")
    err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
    assert.NoError(t, err)
    request.Target = path.Join("/vdir1", "file")
    request.Filepath = path.Join("/vdir", "file")
    err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
    assert.NoError(t, err)

    err = os.RemoveAll(mappedPath)
    assert.NoError(t, err)
    err = os.RemoveAll(user.GetHomeDir())
    assert.NoError(t, err)
}

func TestErrorsMapping(t *testing.T) {
    fs := vfs.NewOsFs("", os.TempDir(), "")
    conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{BaseUser: sdk.BaseUser{HomeDir: os.TempDir()}})
    for _, protocol := range supportedProtocols {
        conn.SetProtocol(protocol)
        err := conn.GetFsError(fs, os.ErrNotExist)
        if protocol == ProtocolSFTP {
            assert.ErrorIs(t, err, sftp.ErrSSHFxNoSuchFile)
        } else if protocol == ProtocolWebDAV || protocol == ProtocolFTP || protocol == ProtocolHTTP ||
            protocol == ProtocolHTTPShare || protocol == ProtocolDataRetention {
            assert.EqualError(t, err, os.ErrNotExist.Error())
        } else {
            assert.EqualError(t, err, ErrNotExist.Error())
        }
        err = conn.GetFsError(fs, os.ErrPermission)
        if protocol == ProtocolSFTP {
            assert.EqualError(t, err, sftp.ErrSSHFxPermissionDenied.Error())
        } else {
            assert.EqualError(t, err, ErrPermissionDenied.Error())
        }
        err = conn.GetFsError(fs, os.ErrClosed)
        if protocol == ProtocolSFTP {
            assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
            assert.Contains(t, err.Error(), os.ErrClosed.Error())
        } else {
            assert.EqualError(t, err, ErrGenericFailure.Error())
        }
        err = conn.GetFsError(fs, ErrPermissionDenied)
        if protocol == ProtocolSFTP {
            assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
            assert.Contains(t, err.Error(), ErrPermissionDenied.Error())
        } else {
            assert.EqualError(t, err, ErrPermissionDenied.Error())
        }
        err = conn.GetFsError(fs, vfs.ErrVfsUnsupported)
        if protocol == ProtocolSFTP {
            assert.EqualError(t, err, sftp.ErrSSHFxOpUnsupported.Error())
        } else {
            assert.EqualError(t, err, ErrOpUnsupported.Error())
        }
        err = conn.GetFsError(fs, vfs.ErrStorageSizeUnavailable)
        if protocol == ProtocolSFTP {
            assert.ErrorIs(t, err, sftp.ErrSSHFxOpUnsupported)
            assert.Contains(t, err.Error(), vfs.ErrStorageSizeUnavailable.Error())
        } else {
            assert.EqualError(t, err, vfs.ErrStorageSizeUnavailable.Error())
        }
        err = conn.GetQuotaExceededError()
        assert.True(t, conn.IsQuotaExceededError(err))
        err = conn.GetNotExistError()
        assert.True(t, conn.IsNotExistError(err))
        err = conn.GetFsError(fs, nil)
        assert.NoError(t, err)
        err = conn.GetOpUnsupportedError()
        if protocol == ProtocolSFTP {
            assert.EqualError(t, err, sftp.ErrSSHFxOpUnsupported.Error())
        } else {
            assert.EqualError(t, err, ErrOpUnsupported.Error())
        }
    }
}

func TestMaxWriteSize(t *testing.T) {
    permissions := make(map[string][]string)
    permissions["/"] = []string{dataprovider.PermAny}
    user := dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username:    userTestUsername,
            Permissions: permissions,
            HomeDir:     filepath.Clean(os.TempDir()),
        },
    }
    fs, err := user.GetFilesystem("123")
    assert.NoError(t, err)
    conn := NewBaseConnection("", ProtocolFTP, "", "", user)
    quotaResult := vfs.QuotaCheckResult{
        HasSpace: true,
    }
    size, err := conn.GetMaxWriteSize(quotaResult, false, 0, fs.IsUploadResumeSupported())
    assert.NoError(t, err)
    assert.Equal(t, int64(0), size)

    conn.User.Filters.MaxUploadFileSize = 100
    size, err = conn.GetMaxWriteSize(quotaResult, false, 0, fs.IsUploadResumeSupported())
    assert.NoError(t, err)
    assert.Equal(t, int64(100), size)

    quotaResult.QuotaSize = 1000
    size, err = conn.GetMaxWriteSize(quotaResult, false, 50, fs.IsUploadResumeSupported())
    assert.NoError(t, err)
    assert.Equal(t, int64(100), size)

    quotaResult.QuotaSize = 1000
    quotaResult.UsedSize = 990
    size, err = conn.GetMaxWriteSize(quotaResult, false, 50, fs.IsUploadResumeSupported())
    assert.NoError(t, err)
    assert.Equal(t, int64(60), size)

    quotaResult.QuotaSize = 0
    quotaResult.UsedSize = 0
    size, err = conn.GetMaxWriteSize(quotaResult, true, 100, fs.IsUploadResumeSupported())
    assert.True(t, conn.IsQuotaExceededError(err))
    assert.Equal(t, int64(0), size)

    size, err = conn.GetMaxWriteSize(quotaResult, true, 10, fs.IsUploadResumeSupported())
    assert.NoError(t, err)
    assert.Equal(t, int64(90), size)

    fs = newMockOsFs(true, fs.ConnectionID(), user.GetHomeDir())
    size, err = conn.GetMaxWriteSize(quotaResult, true, 100, fs.IsUploadResumeSupported())
    assert.EqualError(t, err, ErrOpUnsupported.Error())
    assert.Equal(t, int64(0), size)
}

func TestCheckParentDirsErrors(t *testing.T) {
    permissions := make(map[string][]string)
    permissions["/"] = []string{dataprovider.PermAny}
    user := dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username:    userTestUsername,
            Permissions: permissions,
            HomeDir:     filepath.Clean(os.TempDir()),
        },
        FsConfig: vfs.Filesystem{
            Provider: sdk.CryptedFilesystemProvider,
        },
    }
    c := NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
    err := c.CheckParentDirs("/a/dir")
    assert.Error(t, err)

    user.FsConfig.Provider = sdk.LocalFilesystemProvider
    user.VirtualFolders = nil
    user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
        BaseVirtualFolder: vfs.BaseVirtualFolder{
            FsConfig: vfs.Filesystem{
                Provider: sdk.CryptedFilesystemProvider,
            },
        },
        VirtualPath: "/vdir",
    })
    user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
        BaseVirtualFolder: vfs.BaseVirtualFolder{
            MappedPath: filepath.Clean(os.TempDir()),
        },
        VirtualPath: "/vdir/sub",
    })
    c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
    err = c.CheckParentDirs("/vdir/sub/dir")
    assert.Error(t, err)

    user = dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username:    userTestUsername,
            Permissions: permissions,
            HomeDir:     filepath.Clean(os.TempDir()),
        },
        FsConfig: vfs.Filesystem{
            Provider: sdk.S3FilesystemProvider,
            S3Config: vfs.S3FsConfig{
                BaseS3FsConfig: sdk.BaseS3FsConfig{
                    Bucket:    "buck",
                    Region:    "us-east-1",
                    AccessKey: "key",
                },
                AccessSecret: kms.NewPlainSecret("s3secret"),
            },
        },
    }
    c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
    err = c.CheckParentDirs("/a/dir")
    assert.NoError(t, err)

    user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
        BaseVirtualFolder: vfs.BaseVirtualFolder{
            MappedPath: filepath.Clean(os.TempDir()),
        },
        VirtualPath: "/local/dir",
    })

    c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
    err = c.CheckParentDirs("/local/dir/sub-dir")
    assert.NoError(t, err)
    err = os.RemoveAll(filepath.Join(os.TempDir(), "sub-dir"))
    assert.NoError(t, err)
}
@@ -1,26 +1,10 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package common

import (
    "bytes"
    "context"
    "encoding/json"
    "errors"
    "fmt"
    "io"
    "net/http"
    "net/url"
    "os"

@@ -31,15 +15,11 @@ import (
    "sync"
    "time"

    "github.com/wneessen/go-mail"

    "github.com/drakkan/sftpgo/v2/internal/command"
    "github.com/drakkan/sftpgo/v2/internal/dataprovider"
    "github.com/drakkan/sftpgo/v2/internal/httpclient"
    "github.com/drakkan/sftpgo/v2/internal/logger"
    "github.com/drakkan/sftpgo/v2/internal/smtp"
    "github.com/drakkan/sftpgo/v2/internal/util"
    "github.com/drakkan/sftpgo/v2/internal/vfs"
    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/httpclient"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/smtp"
    "github.com/drakkan/sftpgo/v2/util"
)

// RetentionCheckNotification defines the supported notification methods for a retention check result

@@ -54,36 +34,34 @@ const (
)

var (
    // RetentionChecks is the list of active retention checks
    // RetentionChecks is the list of active quota scans
    RetentionChecks ActiveRetentionChecks
)

// ActiveRetentionChecks holds the active retention checks
// ActiveRetentionChecks holds the active quota scans
type ActiveRetentionChecks struct {
    sync.RWMutex
    Checks []RetentionCheck
}

// Get returns the active retention checks
func (c *ActiveRetentionChecks) Get(role string) []RetentionCheck {
func (c *ActiveRetentionChecks) Get() []RetentionCheck {
    c.RLock()
    defer c.RUnlock()

    checks := make([]RetentionCheck, 0, len(c.Checks))
    for _, check := range c.Checks {
        if role == "" || role == check.Role {
            foldersCopy := make([]dataprovider.FolderRetention, len(check.Folders))
            copy(foldersCopy, check.Folders)
            notificationsCopy := make([]string, len(check.Notifications))
            copy(notificationsCopy, check.Notifications)
            checks = append(checks, RetentionCheck{
                Username:      check.Username,
                StartTime:     check.StartTime,
                Notifications: notificationsCopy,
                Email:         check.Email,
                Folders:       foldersCopy,
            })
        }
        foldersCopy := make([]FolderRetention, len(check.Folders))
        copy(foldersCopy, check.Folders)
        notificationsCopy := make([]string, len(check.Notifications))
        copy(notificationsCopy, check.Notifications)
        checks = append(checks, RetentionCheck{
            Username:      check.Username,
            StartTime:     check.StartTime,
            Notifications: notificationsCopy,
            Email:         check.Email,
            Folders:       foldersCopy,
        })
    }
    return checks
}

@@ -105,7 +83,6 @@ func (c *ActiveRetentionChecks) Add(check RetentionCheck, user *dataprovider.Use
    conn.SetProtocol(ProtocolDataRetention)
    conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
    check.Username = user.Username
    check.Role = user.Role
    check.StartTime = util.GetTimeAsMsSinceEpoch(time.Now())
    check.conn = conn
    check.updateUserPermissions()

@@ -132,6 +109,37 @@ func (c *ActiveRetentionChecks) remove(username string) bool {
    return false
}

// FolderRetention defines the retention policy for the specified directory path
type FolderRetention struct {
    // Path is the exposed virtual directory path. If no other specific retention is defined,
    // the retention applies to sub directories too. For example, if retention is defined
    // for the paths "/" and "/sub", then the retention for "/" is applied to any file outside
    // the "/sub" directory
    Path string `json:"path"`
    // Retention time in hours. 0 means exclude this path
    Retention int `json:"retention"`
    // DeleteEmptyDirs defines if empty directories will be deleted.
    // The user needs the delete permission
    DeleteEmptyDirs bool `json:"delete_empty_dirs,omitempty"`
    // IgnoreUserPermissions defines whether to delete files even if the user does not have the delete permission.
    // The default is "false", which means that files will be skipped if the user does not have the permission
    // to delete them. This applies to sub directories too.
    IgnoreUserPermissions bool `json:"ignore_user_permissions,omitempty"`
}
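The comments on FolderRetention above describe how policies are resolved: the most specific configured path wins, and a retention of 0 excludes a subtree. A minimal sketch of a policy built from those rules; the helper and its values are illustrative only, not part of this changeset:

package common

// exampleRetentionFolders is an illustrative policy: files under "/"
// are kept for a week, "/sub" is excluded entirely, and empty
// directories are pruned after cleanup.
func exampleRetentionFolders() []FolderRetention {
    return []FolderRetention{
        {
            Path:            "/",
            Retention:       168, // hours: 7 days
            DeleteEmptyDirs: true,
        },
        {
            Path:      "/sub",
            Retention: 0, // 0 excludes this path and its subtree
        },
    }
}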
func (f *FolderRetention) isValid() error {
    f.Path = path.Clean(f.Path)
    if !path.IsAbs(f.Path) {
        return util.NewValidationError(fmt.Sprintf("folder retention: invalid path %#v, please specify an absolute POSIX path",
            f.Path))
    }
    if f.Retention < 0 {
        return util.NewValidationError(fmt.Sprintf("invalid folder retention %v, it must be greater or equal to zero",
            f.Retention))
    }
    return nil
}

type folderRetentionCheckResult struct {
    Path      string `json:"path"`
    Retention int    `json:"retention"`

@@ -149,14 +157,13 @@ type RetentionCheck struct {
    // retention check start time as unix timestamp in milliseconds
    StartTime int64 `json:"start_time"`
    // affected folders
    Folders []dataprovider.FolderRetention `json:"folders"`
    Folders []FolderRetention `json:"folders"`
    // how cleanup results will be notified
    Notifications []RetentionCheckNotification `json:"notifications,omitempty"`
    // email to use if the notification method is set to email
    Email string `json:"email,omitempty"`
    Role  string `json:"-"`
    // Cleanup results
    results []folderRetentionCheckResult `json:"-"`
    results []*folderRetentionCheckResult `json:"-"`
    conn    *BaseConnection
}

@@ -166,14 +173,14 @@ func (c *RetentionCheck) Validate() error {
    nothingToDo := true
    for idx := range c.Folders {
        f := &c.Folders[idx]
        if err := f.Validate(); err != nil {
        if err := f.isValid(); err != nil {
            return err
        }
        if f.Retention > 0 {
            nothingToDo = false
        }
        if _, ok := folderPaths[f.Path]; ok {
            return util.NewValidationError(fmt.Sprintf("duplicated folder path %q", f.Path))
            return util.NewValidationError(fmt.Sprintf("duplicated folder path %#v", f.Path))
        }
        folderPaths[f.Path] = true
    }

@@ -194,19 +201,21 @@ func (c *RetentionCheck) Validate() error {
            return util.NewValidationError("in order to notify results via hook you must define a data_retention_hook")
        }
    default:
        return util.NewValidationError(fmt.Sprintf("invalid notification %q", notification))
        return util.NewValidationError(fmt.Sprintf("invalid notification %#v", notification))
        }
    }
    return nil
}

func (c *RetentionCheck) updateUserPermissions() {
    for k := range c.conn.User.Permissions {
        c.conn.User.Permissions[k] = []string{dataprovider.PermAny}
    for _, folder := range c.Folders {
        if folder.IgnoreUserPermissions {
            c.conn.User.Permissions[folder.Path] = []string{dataprovider.PermAny}
        }
    }
}

func (c *RetentionCheck) getFolderRetention(folderPath string) (dataprovider.FolderRetention, error) {
func (c *RetentionCheck) getFolderRetention(folderPath string) (FolderRetention, error) {
    dirsForPath := util.GetDirsForVirtualPath(folderPath)
    for _, dirPath := range dirsForPath {
        for _, folder := range c.Folders {

@@ -216,7 +225,7 @@ func (c *RetentionCheck) getFolderRetention(folderPath string) (dataprovider.Fol
        }
    }

    return dataprovider.FolderRetention{}, fmt.Errorf("unable to find folder retention for %q", folderPath)
    return FolderRetention{}, fmt.Errorf("unable to find folder retention for %#v", folderPath)
}

func (c *RetentionCheck) removeFile(virtualPath string, info os.FileInfo) error {

@@ -227,130 +236,102 @@ func (c *RetentionCheck) removeFile(virtualPath string, info os.FileInfo) error
    return c.conn.RemoveFile(fs, fsPath, virtualPath, info)
}

func (c *RetentionCheck) cleanupFolder(folderPath string, recursion int) error {
func (c *RetentionCheck) cleanupFolder(folderPath string) error {
    deleteFilesPerms := []string{dataprovider.PermDelete, dataprovider.PermDeleteFiles}
    startTime := time.Now()
    result := folderRetentionCheckResult{
    result := &folderRetentionCheckResult{
        Path: folderPath,
    }
    defer func() {
        c.results = append(c.results, result)
    }()
    if recursion >= util.MaxRecursion {
    c.results = append(c.results, result)
    if !c.conn.User.HasPerm(dataprovider.PermListItems, folderPath) || !c.conn.User.HasAnyPerm(deleteFilesPerms, folderPath) {
        result.Elapsed = time.Since(startTime)
        result.Info = "data retention check skipped: recursion too deep"
        c.conn.Log(logger.LevelError, "data retention check skipped, recursion too depth for %q: %d",
            folderPath, recursion)
        return util.ErrRecursionTooDeep
        result.Info = "data retention check skipped: no permissions"
        c.conn.Log(logger.LevelInfo, "user %#v does not have permissions to check retention on %#v, retention check skipped",
            c.conn.User, folderPath)
        return nil
    }
    recursion++

    folderRetention, err := c.getFolderRetention(folderPath)
    if err != nil {
        result.Elapsed = time.Since(startTime)
        result.Error = "unable to get folder retention"
        c.conn.Log(logger.LevelError, "unable to get folder retention for path %q", folderPath)
        c.conn.Log(logger.LevelError, "unable to get folder retention for path %#v", folderPath)
        return err
    }
    result.Retention = folderRetention.Retention
    if folderRetention.Retention == 0 {
        result.Elapsed = time.Since(startTime)
        result.Info = "data retention check skipped: retention is set to 0"
        c.conn.Log(logger.LevelDebug, "retention check skipped for folder %q, retention is set to 0", folderPath)
        c.conn.Log(logger.LevelDebug, "retention check skipped for folder %#v, retention is set to 0", folderPath)
        return nil
    }
    c.conn.Log(logger.LevelDebug, "start retention check for folder %q, retention: %v hours, delete empty dirs? %v",
        folderPath, folderRetention.Retention, folderRetention.DeleteEmptyDirs)
    lister, err := c.conn.ListDir(folderPath)
    c.conn.Log(logger.LevelDebug, "start retention check for folder %#v, retention: %v hours, delete empty dirs? %v, ignore user perms? %v",
        folderPath, folderRetention.Retention, folderRetention.DeleteEmptyDirs, folderRetention.IgnoreUserPermissions)
    files, err := c.conn.ListDir(folderPath)
    if err != nil {
        result.Elapsed = time.Since(startTime)
        if err == c.conn.GetNotExistError() {
            result.Info = "data retention check skipped, folder does not exist"
            c.conn.Log(logger.LevelDebug, "folder %q does not exist, retention check skipped", folderPath)
            c.conn.Log(logger.LevelDebug, "folder %#v does not exist, retention check skipped", folderPath)
            return nil
        }
        result.Error = fmt.Sprintf("unable to get lister for directory %q", folderPath)
        result.Error = fmt.Sprintf("unable to list directory %#v", folderPath)
        c.conn.Log(logger.LevelError, result.Error)
        return err
    }
    defer lister.Close()

    for {
        files, err := lister.Next(vfs.ListerBatchSize)
        finished := errors.Is(err, io.EOF)
        if err := lister.convertError(err); err != nil {
            result.Elapsed = time.Since(startTime)
            result.Error = fmt.Sprintf("unable to list directory %q", folderPath)
            c.conn.Log(logger.LevelError, "unable to list dir %q: %v", folderPath, err)
            return err
        }
        for _, info := range files {
            virtualPath := path.Join(folderPath, info.Name())
            if info.IsDir() {
                if err := c.cleanupFolder(virtualPath, recursion); err != nil {
    for _, info := range files {
        virtualPath := path.Join(folderPath, info.Name())
        if info.IsDir() {
            if err := c.cleanupFolder(virtualPath); err != nil {
                result.Elapsed = time.Since(startTime)
                result.Error = fmt.Sprintf("unable to check folder: %v", err)
                c.conn.Log(logger.LevelError, "unable to cleanup folder %#v: %v", virtualPath, err)
                return err
            }
        } else {
            retentionTime := info.ModTime().Add(time.Duration(folderRetention.Retention) * time.Hour)
            if retentionTime.Before(time.Now()) {
                if err := c.removeFile(virtualPath, info); err != nil {
                    result.Elapsed = time.Since(startTime)
                    result.Error = fmt.Sprintf("unable to check folder: %v", err)
                    c.conn.Log(logger.LevelError, "unable to cleanup folder %q: %v", virtualPath, err)
                    result.Error = fmt.Sprintf("unable to remove file %#v: %v", virtualPath, err)
                    c.conn.Log(logger.LevelError, "unable to remove file %#v, retention %v: %v",
                        virtualPath, retentionTime, err)
                    return err
                }
            } else {
                retentionTime := info.ModTime().Add(time.Duration(folderRetention.Retention) * time.Hour)
                if retentionTime.Before(time.Now()) {
                    if err := c.removeFile(virtualPath, info); err != nil {
                        result.Elapsed = time.Since(startTime)
                        result.Error = fmt.Sprintf("unable to remove file %q: %v", virtualPath, err)
                        c.conn.Log(logger.LevelError, "unable to remove file %q, retention %v: %v",
                            virtualPath, retentionTime, err)
                        return err
                    }
                    c.conn.Log(logger.LevelDebug, "removed file %q, modification time: %v, retention: %v hours, retention time: %v",
                        virtualPath, info.ModTime(), folderRetention.Retention, retentionTime)
                    result.DeletedFiles++
                    result.DeletedSize += info.Size()
                }
                c.conn.Log(logger.LevelDebug, "removed file %#v, modification time: %v, retention: %v hours, retention time: %v",
                    virtualPath, info.ModTime(), folderRetention.Retention, retentionTime)
                result.DeletedFiles++
                result.DeletedSize += info.Size()
            }
        }
        if finished {
            break
        }
    }

    lister.Close()
    c.checkEmptyDirRemoval(folderPath, folderRetention.DeleteEmptyDirs)
    if folderRetention.DeleteEmptyDirs {
        c.checkEmptyDirRemoval(folderPath)
    }
    result.Elapsed = time.Since(startTime)
    c.conn.Log(logger.LevelDebug, "retention check completed for folder %q, deleted files: %v, deleted size: %v bytes",
    c.conn.Log(logger.LevelDebug, "retention check completed for folder %#v, deleted files: %v, deleted size: %v bytes",
        folderPath, result.DeletedFiles, result.DeletedSize)

    return nil
}

func (c *RetentionCheck) checkEmptyDirRemoval(folderPath string, checkVal bool) {
    if folderPath == "/" || !checkVal {
        return
    }
    for _, folder := range c.Folders {
        if folderPath == folder.Path {
            return
        }
    }
    if c.conn.User.HasAnyPerm([]string{
func (c *RetentionCheck) checkEmptyDirRemoval(folderPath string) {
    if folderPath != "/" && c.conn.User.HasAnyPerm([]string{
        dataprovider.PermDelete,
        dataprovider.PermDeleteDirs,
    }, path.Dir(folderPath),
    ) {
        lister, err := c.conn.ListDir(folderPath)
        if err == nil {
            files, err := lister.Next(1)
            lister.Close()
            if len(files) == 0 && errors.Is(err, io.EOF) {
                err = c.conn.RemoveDir(folderPath)
                c.conn.Log(logger.LevelDebug, "tried to remove empty dir %q, error: %v", folderPath, err)
            }
        files, err := c.conn.ListDir(folderPath)
        if err == nil && len(files) == 0 {
            err = c.conn.RemoveDir(folderPath)
            c.conn.Log(logger.LevelDebug, "tryed to remove empty dir %#v, error: %v", folderPath, err)
        }
    }
}
// Start starts the retention check
func (c *RetentionCheck) Start() error {
func (c *RetentionCheck) Start() {
    c.conn.Log(logger.LevelInfo, "retention check started")
    defer RetentionChecks.remove(c.conn.User.Username)
    defer c.conn.CloseFS() //nolint:errcheck

@@ -358,71 +339,67 @@ func (c *RetentionCheck) Start() error {
    startTime := time.Now()
    for _, folder := range c.Folders {
        if folder.Retention > 0 {
            if err := c.cleanupFolder(folder.Path, 0); err != nil {
                c.conn.Log(logger.LevelError, "retention check failed, unable to cleanup folder %q", folder.Path)
        if err := c.cleanupFolder(folder.Path); err != nil {
            c.conn.Log(logger.LevelError, "retention check failed, unable to cleanup folder %#v", folder.Path)
            c.sendNotifications(time.Since(startTime), err)
            return err
            return
        }
        }
    }

    c.conn.Log(logger.LevelInfo, "retention check completed")
    c.sendNotifications(time.Since(startTime), nil)
    return nil
}

func (c *RetentionCheck) sendNotifications(elapsed time.Duration, err error) {
    for _, notification := range c.Notifications {
        switch notification {
        case RetentionCheckNotificationEmail:
            c.sendEmailNotification(err) //nolint:errcheck
            c.sendEmailNotification(elapsed, err) //nolint:errcheck
        case RetentionCheckNotificationHook:
            c.sendHookNotification(elapsed, err) //nolint:errcheck
        }
    }
}

func (c *RetentionCheck) sendEmailNotification(errCheck error) error {
    params := EventParams{}
    if len(c.results) > 0 || errCheck != nil {
        params.retentionChecks = append(params.retentionChecks, executedRetentionCheck{
            Username:   c.conn.User.Username,
            ActionName: "Retention check",
            Results:    c.results,
        })
func (c *RetentionCheck) sendEmailNotification(elapsed time.Duration, errCheck error) error {
    body := new(bytes.Buffer)
    data := make(map[string]interface{})
    data["Results"] = c.results
    totalDeletedFiles := 0
    totalDeletedSize := int64(0)
    for _, result := range c.results {
        totalDeletedFiles += result.DeletedFiles
        totalDeletedSize += result.DeletedSize
    }
    var files []*mail.File
    f, err := params.getRetentionReportsAsMailAttachment()
    if err != nil {
        c.conn.Log(logger.LevelError, "unable to get retention report as mail attachment: %v", err)
    data["HumanizeSize"] = util.ByteCountIEC
    data["TotalFiles"] = totalDeletedFiles
    data["TotalSize"] = totalDeletedSize
    data["Elapsed"] = elapsed
    data["Username"] = c.conn.User.Username
    data["StartTime"] = util.GetTimeFromMsecSinceEpoch(c.StartTime)
    if errCheck == nil {
        data["Status"] = "Succeeded"
    } else {
        data["Status"] = "Failed"
    }
    if err := smtp.RenderRetentionReportTemplate(body, data); err != nil {
        c.conn.Log(logger.LevelError, "unable to render retention check template: %v", err)
        return err
    }
    f.Name = "retention-report.zip"
    files = append(files, f)

    startTime := time.Now()
    var subject string
    if errCheck == nil {
        subject = fmt.Sprintf("Successful retention check for user %q", c.conn.User.Username)
    } else {
        subject = fmt.Sprintf("Retention check failed for user %q", c.conn.User.Username)
    }
    body := "Further details attached."
    err = smtp.SendEmail([]string{c.Email}, nil, subject, body, smtp.EmailContentTypeTextPlain, files...)
    if err != nil {
        c.conn.Log(logger.LevelError, "unable to notify retention check result via email: %v, elapsed: %s", err,
    subject := fmt.Sprintf("Retention check completed for user %#v", c.conn.User.Username)
    if err := smtp.SendEmail(c.Email, subject, body.String(), smtp.EmailContentTypeTextHTML); err != nil {
        c.conn.Log(logger.LevelError, "unable to notify retention check result via email: %v, elapsed: %v", err,
            time.Since(startTime))
        return err
    }
    c.conn.Log(logger.LevelInfo, "retention check result successfully notified via email, elapsed: %s", time.Since(startTime))
    c.conn.Log(logger.LevelInfo, "retention check result successfully notified via email, elapsed: %v", time.Since(startTime))
    return nil
}

func (c *RetentionCheck) sendHookNotification(elapsed time.Duration, errCheck error) error {
    startNewHook()
    defer hookEnded()

    data := make(map[string]any)
    data := make(map[string]interface{})
    totalDeletedFiles := 0
    totalDeletedSize := int64(0)
    for _, result := range c.results {

@@ -448,7 +425,7 @@ func (c *RetentionCheck) sendHookNotification(elapsed time.Duration, errCheck er
        var url *url.URL
        url, err := url.Parse(Config.DataRetentionHook)
        if err != nil {
            c.conn.Log(logger.LevelError, "invalid data retention hook %q: %v", Config.DataRetentionHook, err)
            c.conn.Log(logger.LevelError, "invalid data retention hook %#v: %v", Config.DataRetentionHook, err)
            return err
        }
        respCode := 0

@@ -463,26 +440,25 @@ func (c *RetentionCheck) sendHookNotification(elapsed time.Duration, errCheck er
            }
        }

        c.conn.Log(logger.LevelDebug, "notified result to URL: %q, status code: %v, elapsed: %v err: %v",
        c.conn.Log(logger.LevelDebug, "notified result to URL: %#v, status code: %v, elapsed: %v err: %v",
            url.Redacted(), respCode, time.Since(startTime), err)

        return err
    }
    if !filepath.IsAbs(Config.DataRetentionHook) {
        err := fmt.Errorf("invalid data retention hook %q", Config.DataRetentionHook)
        err := fmt.Errorf("invalid data retention hook %#v", Config.DataRetentionHook)
        c.conn.Log(logger.LevelError, "%v", err)
        return err
    }
    timeout, env, args := command.GetConfig(Config.DataRetentionHook, command.HookDataRetention)
    ctx, cancel := context.WithTimeout(context.Background(), timeout)
    ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
    defer cancel()

    cmd := exec.CommandContext(ctx, Config.DataRetentionHook, args...)
    cmd.Env = append(env,
        fmt.Sprintf("SFTPGO_DATA_RETENTION_RESULT=%s", util.BytesToString(jsonData)))
    cmd := exec.CommandContext(ctx, Config.DataRetentionHook)
    cmd.Env = append(os.Environ(),
        fmt.Sprintf("SFTPGO_DATA_RETENTION_RESULT=%v", string(jsonData)))
    err := cmd.Run()

    c.conn.Log(logger.LevelDebug, "notified result using command: %q, elapsed: %s err: %v",
    c.conn.Log(logger.LevelDebug, "notified result using command: %v, elapsed: %v err: %v",
        Config.DataRetentionHook, time.Since(startTime), err)
    return err
}
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package common

import (

@@ -26,24 +12,31 @@ import (
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"

    "github.com/drakkan/sftpgo/v2/internal/dataprovider"
    "github.com/drakkan/sftpgo/v2/internal/smtp"
    "github.com/drakkan/sftpgo/v2/internal/util"
    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/smtp"
)

func TestRetentionValidation(t *testing.T) {
    check := RetentionCheck{}
    check.Folders = []dataprovider.FolderRetention{
    check.Folders = append(check.Folders, FolderRetention{
        Path:      "relative",
        Retention: 10,
    })
    err := check.Validate()
    require.Error(t, err)
    assert.Contains(t, err.Error(), "please specify an absolute POSIX path")

    check.Folders = []FolderRetention{
        {
            Path:      "/",
            Retention: -1,
        },
    }
    err := check.Validate()
    err = check.Validate()
    require.Error(t, err)
    assert.Contains(t, err.Error(), "invalid folder retention")

    check.Folders = []dataprovider.FolderRetention{
    check.Folders = []FolderRetention{
        {
            Path:      "/ab/..",
            Retention: 0,

@@ -54,7 +47,7 @@ func TestRetentionValidation(t *testing.T) {
    assert.Contains(t, err.Error(), "nothing to delete")
    assert.Equal(t, "/", check.Folders[0].Path)

    check.Folders = append(check.Folders, dataprovider.FolderRetention{
    check.Folders = append(check.Folders, FolderRetention{
        Path:      "/../..",
        Retention: 24,
    })

@@ -62,7 +55,7 @@ func TestRetentionValidation(t *testing.T) {
    require.Error(t, err)
    assert.Contains(t, err.Error(), `duplicated folder path "/"`)

    check.Folders = []dataprovider.FolderRetention{
    check.Folders = []FolderRetention{
        {
            Path:      "/dir1",
            Retention: 48,

@@ -85,10 +78,9 @@ func TestRetentionValidation(t *testing.T) {
    smtpCfg := smtp.Config{
        Host:          "mail.example.com",
        Port:          25,
        From:          "notification@example.com",
        TemplatesPath: "templates",
    }
    err = smtpCfg.Initialize(configDir, true)
    err = smtpCfg.Initialize("..")
    require.NoError(t, err)

    err = check.Validate()

@@ -100,7 +92,7 @@ func TestRetentionValidation(t *testing.T) {
    assert.NoError(t, err)

    smtpCfg = smtp.Config{}
    err = smtpCfg.Initialize(configDir, true)
    err = smtpCfg.Initialize("..")
    require.NoError(t, err)

    check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationHook}

@@ -118,10 +110,9 @@ func TestRetentionEmailNotifications(t *testing.T) {
    smtpCfg := smtp.Config{
        Host:          "127.0.0.1",
        Port:          2525,
        From:          "notification@example.com",
        TemplatesPath: "templates",
    }
    err := smtpCfg.Initialize(configDir, true)
    err := smtpCfg.Initialize("..")
    require.NoError(t, err)

    user := dataprovider.User{

@@ -134,7 +125,7 @@ func TestRetentionEmailNotifications(t *testing.T) {
    check := RetentionCheck{
        Notifications: []RetentionCheckNotification{RetentionCheckNotificationEmail},
        Email:         "notification@example.com",
        results: []folderRetentionCheckResult{
        results: []*folderRetentionCheckResult{
            {
                Path:      "/",
                Retention: 24,

@@ -149,36 +140,21 @@ func TestRetentionEmailNotifications(t *testing.T) {
    conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
    check.conn = conn
    check.sendNotifications(1*time.Second, nil)
    err = check.sendEmailNotification(nil)
    err = check.sendEmailNotification(1*time.Second, nil)
    assert.NoError(t, err)
    err = check.sendEmailNotification(errors.New("test error"))
    err = check.sendEmailNotification(1*time.Second, errors.New("test error"))
    assert.NoError(t, err)

    check.results = nil
    err = check.sendEmailNotification(nil)
    if assert.Error(t, err) {
        assert.Contains(t, err.Error(), "no data retention report available")
    }

    smtpCfg.Port = 2626
    err = smtpCfg.Initialize(configDir, true)
    err = smtpCfg.Initialize("..")
    require.NoError(t, err)
    err = check.sendEmailNotification(nil)
    err = check.sendEmailNotification(1*time.Second, nil)
    assert.Error(t, err)
    check.results = []folderRetentionCheckResult{
        {
            Path:         "/",
            Retention:    24,
            DeletedFiles: 20,
            DeletedSize:  456789,
            Elapsed:      12 * time.Second,
        },
    }

    smtpCfg = smtp.Config{}
    err = smtpCfg.Initialize(configDir, true)
    err = smtpCfg.Initialize("..")
    require.NoError(t, err)
    err = check.sendEmailNotification(nil)
    err = check.sendEmailNotification(1*time.Second, nil)
    assert.Error(t, err)
}

@@ -195,7 +171,7 @@ func TestRetentionHookNotifications(t *testing.T) {
    user.Permissions["/"] = []string{dataprovider.PermAny}
    check := RetentionCheck{
        Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
        results: []folderRetentionCheckResult{
        results: []*folderRetentionCheckResult{
            {
                Path:      "/",
                Retention: 24,

@@ -250,18 +226,21 @@ func TestRetentionPermissionsAndGetFolder(t *testing.T) {
    user.Permissions["/dir2/sub2"] = []string{dataprovider.PermDelete}

    check := RetentionCheck{
        Folders: []dataprovider.FolderRetention{
        Folders: []FolderRetention{
            {
                Path:      "/dir2",
                Retention: 24 * 7,
                Path:                  "/dir2",
                Retention:             24 * 7,
                IgnoreUserPermissions: true,
            },
            {
                Path:      "/dir3",
                Retention: 24 * 7,
                Path:                  "/dir3",
                Retention:             24 * 7,
                IgnoreUserPermissions: false,
            },
            {
                Path:      "/dir2/sub1/sub",
                Retention: 24,
                Path:                  "/dir2/sub1/sub",
                Retention:             24,
                IgnoreUserPermissions: true,
            },
        },
    }

@@ -271,10 +250,12 @@ func TestRetentionPermissionsAndGetFolder(t *testing.T) {
    conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
    check.conn = conn
    check.updateUserPermissions()
    assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/"])
    assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir1"])
    assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub1"])
    assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub2"])
    assert.Equal(t, []string{dataprovider.PermListItems, dataprovider.PermDelete}, conn.User.Permissions["/"])
    assert.Equal(t, []string{dataprovider.PermListItems}, conn.User.Permissions["/dir1"])
    assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2"])
    assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub1/sub"])
    assert.Equal(t, []string{dataprovider.PermCreateDirs}, conn.User.Permissions["/dir2/sub1"])
    assert.Equal(t, []string{dataprovider.PermDelete}, conn.User.Permissions["/dir2/sub2"])

    _, err := check.getFolderRetention("/")
    assert.Error(t, err)

@@ -305,7 +286,7 @@ func TestRetentionCheckAddRemove(t *testing.T) {
    user.Permissions = make(map[string][]string)
    user.Permissions["/"] = []string{dataprovider.PermAny}
    check := RetentionCheck{
        Folders: []dataprovider.FolderRetention{
        Folders: []FolderRetention{
            {
                Path:      "/",
                Retention: 48,

@@ -314,7 +295,7 @@ func TestRetentionCheckAddRemove(t *testing.T) {
        Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
    }
    assert.NotNil(t, RetentionChecks.Add(check, &user))
    checks := RetentionChecks.Get("")
    checks := RetentionChecks.Get()
    require.Len(t, checks, 1)
    assert.Equal(t, username, checks[0].Username)
    assert.Greater(t, checks[0].StartTime, int64(0))

@@ -326,45 +307,10 @@ func TestRetentionCheckAddRemove(t *testing.T) {

    assert.Nil(t, RetentionChecks.Add(check, &user))
    assert.True(t, RetentionChecks.remove(username))
    require.Len(t, RetentionChecks.Get(""), 0)
    require.Len(t, RetentionChecks.Get(), 0)
    assert.False(t, RetentionChecks.remove(username))
}

func TestRetentionCheckRole(t *testing.T) {
    username := "retuser"
    role1 := "retrole1"
    role2 := "retrole2"
    user := dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username: username,
            Role:     role1,
        },
    }
    user.Permissions = make(map[string][]string)
    user.Permissions["/"] = []string{dataprovider.PermAny}
    check := RetentionCheck{
        Folders: []dataprovider.FolderRetention{
            {
                Path:      "/",
                Retention: 48,
            },
        },
        Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
    }
    assert.NotNil(t, RetentionChecks.Add(check, &user))
    checks := RetentionChecks.Get("")
    require.Len(t, checks, 1)
    assert.Empty(t, checks[0].Role)
    checks = RetentionChecks.Get(role1)
    require.Len(t, checks, 1)
    checks = RetentionChecks.Get(role2)
    require.Len(t, checks, 0)
    user.Role = ""
    assert.Nil(t, RetentionChecks.Add(check, &user))
    assert.True(t, RetentionChecks.remove(username))
    require.Len(t, RetentionChecks.Get(""), 0)
}

func TestCleanupErrors(t *testing.T) {
    user := dataprovider.User{
        BaseUser: sdk.BaseUser{

@@ -374,7 +320,7 @@ func TestCleanupErrors(t *testing.T) {
    user.Permissions = make(map[string][]string)
    user.Permissions["/"] = []string{dataprovider.PermAny}
    check := &RetentionCheck{
        Folders: []dataprovider.FolderRetention{
        Folders: []FolderRetention{
            {
                Path:      "/path",
                Retention: 48,

@@ -387,11 +333,8 @@ func TestCleanupErrors(t *testing.T) {
    err := check.removeFile("missing file", nil)
    assert.Error(t, err)

    err = check.cleanupFolder("/", 0)
    err = check.cleanupFolder("/")
    assert.Error(t, err)

    err = check.cleanupFolder("/", 1000)
    assert.ErrorIs(t, err, util.ErrRecursionTooDeep)

    assert.True(t, RetentionChecks.remove(user.Username))
}
274  common/defender.go  Normal file
@@ -0,0 +1,274 @@
package common

import (
    "encoding/json"
    "fmt"
    "net"
    "os"
    "sync"
    "time"

    "github.com/yl2chen/cidranger"

    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/util"
)

// HostEvent is the enumerable for the supported host events
type HostEvent int

// Supported host events
const (
    HostEventLoginFailed HostEvent = iota
    HostEventUserNotFound
    HostEventNoLoginTried
    HostEventLimitExceeded
)

// Supported defender drivers
const (
    DefenderDriverMemory   = "memory"
    DefenderDriverProvider = "provider"
)

var (
    supportedDefenderDrivers = []string{DefenderDriverMemory, DefenderDriverProvider}
)

// Defender defines the interface that a defender must implement
type Defender interface {
    GetHosts() ([]*dataprovider.DefenderEntry, error)
    GetHost(ip string) (*dataprovider.DefenderEntry, error)
    AddEvent(ip string, event HostEvent)
    IsBanned(ip string) bool
    GetBanTime(ip string) (*time.Time, error)
    GetScore(ip string) (int, error)
    DeleteHost(ip string) bool
    Reload() error
}
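The Defender interface above is what the protocol servers consult: they report suspicious activity with AddEvent and check IsBanned before serving a host. A hedged sketch of that flow; shouldAccept is a hypothetical name, not part of this file:

package common

// shouldAccept is a hypothetical illustration: reject hosts that are
// already banned and record a failed login so repeated failures
// eventually push the host's score past the configured threshold.
func shouldAccept(d Defender, ip string, loginOK bool) bool {
    if d.IsBanned(ip) {
        return false
    }
    if !loginOK {
        d.AddEvent(ip, HostEventLoginFailed)
    }
    return true
}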
|
||||
// DefenderConfig defines the "defender" configuration
|
||||
type DefenderConfig struct {
|
||||
// Set to true to enable the defender
|
||||
Enabled bool `json:"enabled" mapstructure:"enabled"`
|
||||
// Defender implementation to use, we support "memory" and "provider".
|
||||
// Using "provider" as driver you can share the defender events among
|
||||
// multiple SFTPGo instances. For a single instance "memory" provider will
|
||||
// be much faster
|
||||
Driver string `json:"driver" mapstructure:"driver"`
|
||||
// BanTime is the number of minutes that a host is banned
|
||||
BanTime int `json:"ban_time" mapstructure:"ban_time"`
|
||||
// Percentage increase of the ban time if a banned host tries to connect again
|
||||
BanTimeIncrement int `json:"ban_time_increment" mapstructure:"ban_time_increment"`
|
||||
// Threshold value for banning a client
|
||||
Threshold int `json:"threshold" mapstructure:"threshold"`
|
||||
// Score for invalid login attempts, eg. non-existent user accounts or
|
||||
// client disconnected for inactivity without authentication attempts
|
||||
ScoreInvalid int `json:"score_invalid" mapstructure:"score_invalid"`
|
||||
// Score for valid login attempts, eg. user accounts that exist
|
||||
ScoreValid int `json:"score_valid" mapstructure:"score_valid"`
|
||||
// Score for limit exceeded events, generated from the rate limiters or for max connections
|
||||
// per-host exceeded
|
||||
ScoreLimitExceeded int `json:"score_limit_exceeded" mapstructure:"score_limit_exceeded"`
|
||||
// Defines the time window, in minutes, for tracking client errors.
|
||||
// A host is banned if it has exceeded the defined threshold during
|
||||
// the last observation time minutes
|
||||
ObservationTime int `json:"observation_time" mapstructure:"observation_time"`
|
||||
// The number of banned IPs and host scores kept in memory will vary between the
|
||||
// soft and hard limit for the "memory" driver. For the "provider" driver the
|
||||
// soft limit is ignored and the hard limit is used to limit the number of entries
|
||||
// to return when you request for the entire host list from the defender
|
||||
EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
|
||||
EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
|
||||
// Path to a file containing a list of ip addresses and/or networks to never ban
|
||||
SafeListFile string `json:"safelist_file" mapstructure:"safelist_file"`
|
||||
// Path to a file containing a list of ip addresses and/or networks to always ban
|
||||
BlockListFile string `json:"blocklist_file" mapstructure:"blocklist_file"`
|
||||
}
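
As a worked example of the scoring arithmetic: with ScoreInvalid set to 2 and Threshold set to 5, two invalid attempts leave a host at score 4, and a third within the observation window (2+2+2 = 6) crosses the threshold, banning the host for BanTime minutes; a further violation while banned extends the ban by BanTimeIncrement percent. The literal below uses illustrative values that mirror the tests later in this compare; they are not SFTPGo defaults.

// exampleDefenderConfig holds illustrative values only; they mirror the
// tests in this compare and are not SFTPGo defaults.
var exampleDefenderConfig = &DefenderConfig{
	Enabled:            true,
	Driver:             DefenderDriverMemory,
	BanTime:            30, // minutes
	BanTimeIncrement:   50, // percent
	Threshold:          5,
	ScoreInvalid:       2,
	ScoreValid:         1,
	ScoreLimitExceeded: 3,
	ObservationTime:    15, // minutes
	EntriesSoftLimit:   100,
	EntriesHardLimit:   150,
}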

type baseDefender struct {
	config *DefenderConfig
	sync.RWMutex
	safeList  *HostList
	blockList *HostList
}

// Reload reloads block and safe lists
func (d *baseDefender) Reload() error {
	blockList, err := loadHostListFromFile(d.config.BlockListFile)
	if err != nil {
		return err
	}

	d.Lock()
	d.blockList = blockList
	d.Unlock()

	safeList, err := loadHostListFromFile(d.config.SafeListFile)
	if err != nil {
		return err
	}

	d.Lock()
	d.safeList = safeList
	d.Unlock()

	return nil
}

func (d *baseDefender) isBanned(ip string) bool {
	if d.blockList != nil && d.blockList.isListed(ip) {
		// permanent ban
		return true
	}

	return false
}

func (d *baseDefender) getScore(event HostEvent) int {
	var score int

	switch event {
	case HostEventLoginFailed:
		score = d.config.ScoreValid
	case HostEventLimitExceeded:
		score = d.config.ScoreLimitExceeded
	case HostEventUserNotFound, HostEventNoLoginTried:
		score = d.config.ScoreInvalid
	}
	return score
}

// HostListFile defines the structure expected for safe/block list files
type HostListFile struct {
	IPAddresses  []string `json:"addresses"`
	CIDRNetworks []string `json:"networks"`
}
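
Given the json tags above, a safe or block list file is a small JSON document. This illustrative safe list mirrors the addresses used by the tests later in this compare:

{
  "addresses": ["172.16.1.3", "172.16.1.4"],
  "networks": ["192.168.8.0/24"]
}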

// HostList defines the structure used to keep the HostListFile in memory
type HostList struct {
	IPAddresses map[string]bool
	Ranges      cidranger.Ranger
}

func (h *HostList) isListed(ip string) bool {
	if _, ok := h.IPAddresses[ip]; ok {
		return true
	}

	ok, err := h.Ranges.Contains(net.ParseIP(ip))
	if err != nil {
		return false
	}

	return ok
}

type hostEvent struct {
	dateTime time.Time
	score    int
}

type hostScore struct {
	TotalScore int
	Events     []hostEvent
}

// validate returns an error if the configuration is invalid
func (c *DefenderConfig) validate() error {
	if !c.Enabled {
		return nil
	}
	if c.ScoreInvalid >= c.Threshold {
		return fmt.Errorf("score_invalid %v cannot be greater than threshold %v", c.ScoreInvalid, c.Threshold)
	}
	if c.ScoreValid >= c.Threshold {
		return fmt.Errorf("score_valid %v cannot be greater than threshold %v", c.ScoreValid, c.Threshold)
	}
	if c.ScoreLimitExceeded >= c.Threshold {
		return fmt.Errorf("score_limit_exceeded %v cannot be greater than threshold %v", c.ScoreLimitExceeded, c.Threshold)
	}
	if c.BanTime <= 0 {
		return fmt.Errorf("invalid ban_time %v", c.BanTime)
	}
	if c.BanTimeIncrement <= 0 {
		return fmt.Errorf("invalid ban_time_increment %v", c.BanTimeIncrement)
	}
	if c.ObservationTime <= 0 {
		return fmt.Errorf("invalid observation_time %v", c.ObservationTime)
	}
	if c.EntriesSoftLimit <= 0 {
		return fmt.Errorf("invalid entries_soft_limit %v", c.EntriesSoftLimit)
	}
	if c.EntriesHardLimit <= c.EntriesSoftLimit {
		return fmt.Errorf("invalid entries_hard_limit %v must be > %v", c.EntriesHardLimit, c.EntriesSoftLimit)
	}

	return nil
}

func loadHostListFromFile(name string) (*HostList, error) {
	if name == "" {
		return nil, nil
	}
	if !util.IsFileInputValid(name) {
		return nil, fmt.Errorf("invalid host list file name %#v", name)
	}

	info, err := os.Stat(name)
	if err != nil {
		return nil, err
	}

	// opinionated max size, you should avoid big host lists
	if info.Size() > 1048576*5 { // 5MB
		return nil, fmt.Errorf("host list file %#v is too big: %v bytes", name, info.Size())
	}

	content, err := os.ReadFile(name)
	if err != nil {
		return nil, fmt.Errorf("unable to read input file %#v: %v", name, err)
	}

	var hostList HostListFile

	err = json.Unmarshal(content, &hostList)
	if err != nil {
		return nil, err
	}

	if len(hostList.CIDRNetworks) > 0 || len(hostList.IPAddresses) > 0 {
		result := &HostList{
			IPAddresses: make(map[string]bool),
			Ranges:      cidranger.NewPCTrieRanger(),
		}
		ipCount := 0
		cdrCount := 0
		for _, ip := range hostList.IPAddresses {
			if net.ParseIP(ip) == nil {
				logger.Warn(logSender, "", "unable to parse IP %#v", ip)
				continue
			}
			result.IPAddresses[ip] = true
			ipCount++
		}
		for _, cidrNet := range hostList.CIDRNetworks {
			_, network, err := net.ParseCIDR(cidrNet)
			if err != nil {
				logger.Warn(logSender, "", "unable to parse CIDR network %#v", cidrNet)
				continue
			}
			err = result.Ranges.Insert(cidranger.NewBasicRangerEntry(*network))
			if err == nil {
				cdrCount++
			}
		}

		logger.Info(logSender, "", "list %#v loaded, ip addresses loaded: %v/%v networks loaded: %v/%v",
			name, ipCount, len(hostList.IPAddresses), cdrCount, len(hostList.CIDRNetworks))
		return result, nil
	}

	return nil, nil
}
@@ -1,102 +1,45 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package common

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"net"
	"os"
	"path/filepath"
	"runtime"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/yl2chen/cidranger"

	"github.com/drakkan/sftpgo/v2/internal/dataprovider"
)

func TestBasicDefender(t *testing.T) {
	entries := []dataprovider.IPListEntry{
		{
			IPOrNet: "172.16.1.1/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeDeny,
		},
		{
			IPOrNet: "172.16.1.2/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeDeny,
		},
		{
			IPOrNet: "10.8.0.0/24",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeDeny,
		},
		{
			IPOrNet: "192.168.1.1/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeDeny,
		},
		{
			IPOrNet: "192.168.1.2/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeDeny,
		},
		{
			IPOrNet: "10.8.9.0/24",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeDeny,
		},
		{
			IPOrNet: "172.16.1.3/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeAllow,
		},
		{
			IPOrNet: "172.16.1.4/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeAllow,
		},
		{
			IPOrNet: "192.168.8.0/24",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeAllow,
		},
		{
			IPOrNet: "192.168.1.3/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeAllow,
		},
		{
			IPOrNet: "192.168.1.4/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeAllow,
		},
		{
			IPOrNet: "192.168.9.0/24",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeAllow,
		},
	}
	bl := HostListFile{
		IPAddresses:  []string{"172.16.1.1", "172.16.1.2"},
		CIDRNetworks: []string{"10.8.0.0/24"},
	}
	sl := HostListFile{
		IPAddresses:  []string{"172.16.1.3", "172.16.1.4"},
		CIDRNetworks: []string{"192.168.8.0/24"},
	}
	blFile := filepath.Join(os.TempDir(), "bl.json")
	slFile := filepath.Join(os.TempDir(), "sl.json")

	for idx := range entries {
		e := entries[idx]
		err := dataprovider.AddIPListEntry(&e, "", "", "")
		assert.NoError(t, err)
	}
	data, err := json.Marshal(bl)
	assert.NoError(t, err)

	err = os.WriteFile(blFile, data, os.ModePerm)
	assert.NoError(t, err)

	data, err = json.Marshal(sl)
	assert.NoError(t, err)

	err = os.WriteFile(slFile, data, os.ModePerm)
	assert.NoError(t, err)

	config := &DefenderConfig{
		Enabled: true,
@@ -105,26 +48,29 @@ func TestBasicDefender(t *testing.T) {
		Threshold:          5,
		ScoreInvalid:       2,
		ScoreValid:         1,
		ScoreNoAuth:        2,
		ScoreLimitExceeded: 3,
		ObservationTime:    15,
		EntriesSoftLimit:   1,
		EntriesHardLimit:   2,
		SafeListFile:       "slFile",
		BlockListFile:      "blFile",
	}

	_, err = newInMemoryDefender(config)
	assert.Error(t, err)
	config.BlockListFile = blFile
	_, err = newInMemoryDefender(config)
	assert.Error(t, err)
	config.SafeListFile = slFile
	d, err := newInMemoryDefender(config)
	assert.NoError(t, err)

	defender := d.(*memoryDefender)
	assert.True(t, defender.IsBanned("172.16.1.1", ProtocolSSH))
	assert.True(t, defender.IsBanned("192.168.1.1", ProtocolFTP))
	assert.False(t, defender.IsBanned("172.16.1.10", ProtocolSSH))
	assert.False(t, defender.IsBanned("192.168.1.10", ProtocolSSH))
	assert.False(t, defender.IsBanned("10.8.2.3", ProtocolSSH))
	assert.False(t, defender.IsBanned("10.9.2.3", ProtocolSSH))
	assert.True(t, defender.IsBanned("10.8.0.3", ProtocolSSH))
	assert.True(t, defender.IsBanned("10.8.9.3", ProtocolSSH))
	assert.False(t, defender.IsBanned("invalid ip", ProtocolSSH))
	assert.True(t, defender.IsBanned("172.16.1.1"))
	assert.False(t, defender.IsBanned("172.16.1.10"))
	assert.False(t, defender.IsBanned("10.8.2.3"))
	assert.True(t, defender.IsBanned("10.8.0.3"))
	assert.False(t, defender.IsBanned("invalid ip"))
	assert.Equal(t, 0, defender.countBanned())
	assert.Equal(t, 0, defender.countHosts())
	hosts, err := defender.GetHosts()

@@ -133,15 +79,13 @@ func TestBasicDefender(t *testing.T) {
	_, err = defender.GetHost("10.8.0.4")
	assert.Error(t, err)

	defender.AddEvent("172.16.1.4", ProtocolSSH, HostEventLoginFailed)
	defender.AddEvent("192.168.1.4", ProtocolSSH, HostEventLoginFailed)
	defender.AddEvent("192.168.8.4", ProtocolSSH, HostEventUserNotFound)
	defender.AddEvent("172.16.1.3", ProtocolSSH, HostEventLimitExceeded)
	defender.AddEvent("192.168.1.3", ProtocolSSH, HostEventLimitExceeded)
	defender.AddEvent("172.16.1.4", HostEventLoginFailed)
	defender.AddEvent("192.168.8.4", HostEventUserNotFound)
	defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
	assert.Equal(t, 0, defender.countHosts())

	testIP := "12.34.56.78"
	defender.AddEvent(testIP, ProtocolSSH, HostEventLoginFailed)
	defender.AddEvent(testIP, HostEventLoginFailed)
	assert.Equal(t, 1, defender.countHosts())
	assert.Equal(t, 0, defender.countBanned())
	score, err := defender.GetScore(testIP)

@@ -161,7 +105,7 @@ func TestBasicDefender(t *testing.T) {
	banTime, err := defender.GetBanTime(testIP)
	assert.NoError(t, err)
	assert.Nil(t, banTime)
	defender.AddEvent(testIP, ProtocolSSH, HostEventLimitExceeded)
	defender.AddEvent(testIP, HostEventLimitExceeded)
	assert.Equal(t, 1, defender.countHosts())
	assert.Equal(t, 0, defender.countBanned())
	score, err = defender.GetScore(testIP)

@@ -174,8 +118,8 @@ func TestBasicDefender(t *testing.T) {
	assert.True(t, hosts[0].BanTime.IsZero())
	assert.Empty(t, hosts[0].GetBanTime())
	}
	defender.AddEvent(testIP, ProtocolSSH, HostEventUserNotFound)
	defender.AddEvent(testIP, ProtocolSSH, HostEventNoLoginTried)
	defender.AddEvent(testIP, HostEventNoLoginTried)
	defender.AddEvent(testIP, HostEventNoLoginTried)
	assert.Equal(t, 0, defender.countHosts())
	assert.Equal(t, 1, defender.countBanned())
	score, err = defender.GetScore(testIP)

@@ -202,11 +146,11 @@ func TestBasicDefender(t *testing.T) {
	testIP2 := "12.34.56.80"
	testIP3 := "12.34.56.81"

	defender.AddEvent(testIP1, ProtocolSSH, HostEventNoLoginTried)
	defender.AddEvent(testIP2, ProtocolSSH, HostEventNoLoginTried)
	defender.AddEvent(testIP1, HostEventNoLoginTried)
	defender.AddEvent(testIP2, HostEventNoLoginTried)
	assert.Equal(t, 2, defender.countHosts())
	time.Sleep(20 * time.Millisecond)
	defender.AddEvent(testIP3, ProtocolSSH, HostEventNoLoginTried)
	defender.AddEvent(testIP3, HostEventNoLoginTried)
	assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
	// testIP1 and testIP2 should be removed
	assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())

@@ -220,8 +164,8 @@ func TestBasicDefender(t *testing.T) {
	assert.NoError(t, err)
	assert.Equal(t, 2, score)

	defender.AddEvent(testIP3, ProtocolSSH, HostEventNoLoginTried)
	defender.AddEvent(testIP3, ProtocolSSH, HostEventNoLoginTried)
	defender.AddEvent(testIP3, HostEventNoLoginTried)
	defender.AddEvent(testIP3, HostEventNoLoginTried)
	// IP3 is now banned
	banTime, err = defender.GetBanTime(testIP3)
	assert.NoError(t, err)

@@ -230,7 +174,7 @@ func TestBasicDefender(t *testing.T) {

	time.Sleep(20 * time.Millisecond)
	for i := 0; i < 3; i++ {
		defender.AddEvent(testIP1, ProtocolSSH, HostEventNoLoginTried)
		defender.AddEvent(testIP1, HostEventNoLoginTried)
	}
	assert.Equal(t, 0, defender.countHosts())
	assert.Equal(t, config.EntriesSoftLimit, defender.countBanned())

@@ -245,9 +189,9 @@ func TestBasicDefender(t *testing.T) {
	assert.NotNil(t, banTime)

	for i := 0; i < 3; i++ {
		defender.AddEvent(testIP, ProtocolSSH, HostEventNoLoginTried)
		defender.AddEvent(testIP, HostEventNoLoginTried)
		time.Sleep(10 * time.Millisecond)
		defender.AddEvent(testIP3, ProtocolSSH, HostEventNoLoginTried)
		defender.AddEvent(testIP3, HostEventNoLoginTried)
	}
	assert.Equal(t, 0, defender.countHosts())
	assert.Equal(t, defender.config.EntriesSoftLimit, defender.countBanned())

@@ -255,7 +199,7 @@ func TestBasicDefender(t *testing.T) {
	banTime, err = defender.GetBanTime(testIP3)
	assert.NoError(t, err)
	if assert.NotNil(t, banTime) {
		assert.True(t, defender.IsBanned(testIP3, ProtocolFTP))
		assert.True(t, defender.IsBanned(testIP3))
		// ban time should increase
		newBanTime, err := defender.GetBanTime(testIP3)
		assert.NoError(t, err)

@@ -265,10 +209,10 @@ func TestBasicDefender(t *testing.T) {
	assert.True(t, defender.DeleteHost(testIP3))
	assert.False(t, defender.DeleteHost(testIP3))

	for _, e := range entries {
		err := dataprovider.DeleteIPListEntry(e.IPOrNet, e.Type, "", "", "")
		assert.NoError(t, err)
	}
	err = os.Remove(slFile)
	assert.NoError(t, err)
	err = os.Remove(blFile)
	assert.NoError(t, err)
}

func TestExpiredHostBans(t *testing.T) {

@@ -298,14 +242,14 @@ func TestExpiredHostBans(t *testing.T) {
	assert.NoError(t, err)
	assert.Len(t, res, 0)

	assert.False(t, defender.IsBanned(testIP, ProtocolFTP))
	assert.False(t, defender.IsBanned(testIP))
	_, err = defender.GetHost(testIP)
	assert.Error(t, err)
	_, ok := defender.banned[testIP]
	assert.True(t, ok)
	// now add an event for an expired banned ip, it should be removed
	defender.AddEvent(testIP, ProtocolFTP, HostEventLoginFailed)
	assert.False(t, defender.IsBanned(testIP, ProtocolFTP))
	defender.AddEvent(testIP, HostEventLoginFailed)
	assert.False(t, defender.IsBanned(testIP))
	entry, err := defender.GetHost(testIP)
	assert.NoError(t, err)
	assert.Equal(t, testIP, entry.IP)

@@ -347,6 +291,79 @@ func TestExpiredHostBans(t *testing.T) {
	assert.True(t, ok)
}

func TestLoadHostListFromFile(t *testing.T) {
	_, err := loadHostListFromFile(".")
	assert.Error(t, err)

	hostsFilePath := filepath.Join(os.TempDir(), "hostfile")
	content := make([]byte, 1048576*6)
	_, err = rand.Read(content)
	assert.NoError(t, err)

	err = os.WriteFile(hostsFilePath, content, os.ModePerm)
	assert.NoError(t, err)

	_, err = loadHostListFromFile(hostsFilePath)
	assert.Error(t, err)

	hl := HostListFile{
		IPAddresses:  []string{},
		CIDRNetworks: []string{},
	}

	asJSON, err := json.Marshal(hl)
	assert.NoError(t, err)
	err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
	assert.NoError(t, err)

	hostList, err := loadHostListFromFile(hostsFilePath)
	assert.NoError(t, err)
	assert.Nil(t, hostList)

	hl.IPAddresses = append(hl.IPAddresses, "invalidip")
	asJSON, err = json.Marshal(hl)
	assert.NoError(t, err)
	err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
	assert.NoError(t, err)

	hostList, err = loadHostListFromFile(hostsFilePath)
	assert.NoError(t, err)
	assert.Len(t, hostList.IPAddresses, 0)

	hl.IPAddresses = nil
	hl.CIDRNetworks = append(hl.CIDRNetworks, "invalid net")

	asJSON, err = json.Marshal(hl)
	assert.NoError(t, err)
	err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
	assert.NoError(t, err)

	hostList, err = loadHostListFromFile(hostsFilePath)
	assert.NoError(t, err)
	assert.NotNil(t, hostList)
	assert.Len(t, hostList.IPAddresses, 0)
	assert.Equal(t, 0, hostList.Ranges.Len())

	if runtime.GOOS != "windows" {
		err = os.Chmod(hostsFilePath, 0111)
		assert.NoError(t, err)

		_, err = loadHostListFromFile(hostsFilePath)
		assert.Error(t, err)

		err = os.Chmod(hostsFilePath, 0644)
		assert.NoError(t, err)
	}

	err = os.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
	assert.NoError(t, err)
	_, err = loadHostListFromFile(hostsFilePath)
	assert.Error(t, err)

	err = os.Remove(hostsFilePath)
	assert.NoError(t, err)
}

func TestDefenderCleanup(t *testing.T) {
	d := memoryDefender{
		baseDefender: baseDefender{

@@ -435,31 +452,6 @@ func TestDefenderCleanup(t *testing.T) {
	assert.Equal(t, 0, score)
}

func TestDefenderDelay(t *testing.T) {
	d := memoryDefender{
		baseDefender: baseDefender{
			config: &DefenderConfig{
				ObservationTime:  1,
				EntriesSoftLimit: 2,
				EntriesHardLimit: 3,
				LoginDelay: LoginDelay{
					Success:        50,
					PasswordFailed: 200,
				},
			},
		},
	}
	startTime := time.Now()
	d.DelayLogin(nil)
	elapsed := time.Since(startTime)
	assert.Less(t, elapsed, time.Millisecond*100)

	startTime = time.Now()
	d.DelayLogin(ErrInternalFailure)
	elapsed = time.Since(startTime)
	assert.Greater(t, elapsed, time.Millisecond*150)
}

func TestDefenderConfig(t *testing.T) {
	c := DefenderConfig{}
	err := c.validate()

@@ -482,11 +474,6 @@ func TestDefenderConfig(t *testing.T) {
	require.Error(t, err)

	c.ScoreValid = 1
	c.ScoreNoAuth = 10
	err = c.validate()
	require.Error(t, err)

	c.ScoreNoAuth = 2
	c.BanTime = 0
	err = c.validate()
	require.Error(t, err)

@@ -516,20 +503,6 @@ func TestDefenderConfig(t *testing.T) {
	c.EntriesHardLimit = 20
	err = c.validate()
	require.NoError(t, err)

	c = DefenderConfig{
		Enabled:            true,
		ScoreInvalid:       -1,
		ScoreLimitExceeded: -1,
		ScoreNoAuth:        -1,
		ScoreValid:         -1,
	}
	err = c.validate()
	require.Error(t, err)
	assert.Equal(t, 0, c.ScoreInvalid)
	assert.Equal(t, 0, c.ScoreValid)
	assert.Equal(t, 0, c.ScoreLimitExceeded)
	assert.Equal(t, 0, c.ScoreNoAuth)
}

func BenchmarkDefenderBannedSearch(b *testing.B) {

@@ -547,7 +520,7 @@ func BenchmarkDefenderBannedSearch(b *testing.B) {
	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		d.IsBanned("192.168.1.1", ProtocolSSH)
		d.IsBanned("192.168.1.1")
	}
}

@@ -563,7 +536,7 @@ func BenchmarkCleanup(b *testing.B) {

	for i := 0; i < b.N; i++ {
		for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
			d.AddEvent(ip.String(), ProtocolSSH, HostEventLoginFailed)
			d.AddEvent(ip.String(), HostEventLoginFailed)
			if d.countHosts() > d.config.EntriesHardLimit {
				panic("too many hosts")
			}

@@ -574,10 +547,72 @@ func BenchmarkCleanup(b *testing.B) {
	}
}

func BenchmarkDefenderBannedSearchWithBlockList(b *testing.B) {
	d := getDefenderForBench()

	d.blockList = &HostList{
		IPAddresses: make(map[string]bool),
		Ranges:      cidranger.NewPCTrieRanger(),
	}

	ip, ipnet, err := net.ParseCIDR("129.8.0.0/12") // 1048574 ip addresses
	if err != nil {
		panic(err)
	}

	for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
		d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
		d.blockList.IPAddresses[ip.String()] = true
	}

	for i := 0; i < 255; i++ {
		cidr := fmt.Sprintf("10.8.%v.1/24", i)
		_, network, _ := net.ParseCIDR(cidr)
		if err := d.blockList.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
			panic(err)
		}
	}

	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		d.IsBanned("192.168.1.1")
	}
}

func BenchmarkHostListSearch(b *testing.B) {
	hostlist := &HostList{
		IPAddresses: make(map[string]bool),
		Ranges:      cidranger.NewPCTrieRanger(),
	}

	ip, ipnet, _ := net.ParseCIDR("172.16.0.0/16")

	for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
		hostlist.IPAddresses[ip.String()] = true
	}

	for i := 0; i < 255; i++ {
		cidr := fmt.Sprintf("10.8.%v.1/24", i)
		_, network, _ := net.ParseCIDR(cidr)
		if err := hostlist.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
			panic(err)
		}
	}

	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		if hostlist.isListed("192.167.1.2") {
			panic("should not be listed")
		}
	}
}

func BenchmarkCIDRanger(b *testing.B) {
	ranger := cidranger.NewPCTrieRanger()
	for i := 0; i < 255; i++ {
		cidr := fmt.Sprintf("192.168.%d.1/24", i)
		cidr := fmt.Sprintf("192.168.%v.1/24", i)
		_, network, _ := net.ParseCIDR(cidr)
		if err := ranger.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
			panic(err)

@@ -597,7 +632,7 @@ func BenchmarkCIDRanger(b *testing.B) {
func BenchmarkNetContains(b *testing.B) {
	var nets []*net.IPNet
	for i := 0; i < 255; i++ {
		cidr := fmt.Sprintf("192.168.%d.1/24", i)
		cidr := fmt.Sprintf("192.168.%v.1/24", i)
		_, network, _ := net.ParseCIDR(cidr)
		nets = append(nets, network)
	}
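
These benchmarks run with Go's standard tooling; the command below assumes you are at the repository root and that the files live under common/, as the file headers in this compare indicate.

# from the repository root; -run '^$' skips the unit tests
go test -run '^$' -bench . -benchmem ./common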
@@ -1,31 +1,16 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package common

import (
	"sync/atomic"
	"time"

	"github.com/drakkan/sftpgo/v2/internal/dataprovider"
	"github.com/drakkan/sftpgo/v2/internal/logger"
	"github.com/drakkan/sftpgo/v2/internal/util"
	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

type dbDefender struct {
	baseDefender
	lastCleanup atomic.Int64
	lastCleanup time.Time
}

func newDBDefender(config *DefenderConfig) (Defender, error) {

@@ -33,38 +18,40 @@ func newDBDefender(config *DefenderConfig) (Defender, error) {
	if err != nil {
		return nil, err
	}
	ipList, err := dataprovider.NewIPList(dataprovider.IPListTypeDefender)
	if err != nil {
		return nil, err
	}
	defender := &dbDefender{
		baseDefender: baseDefender{
			config: config,
			ipList: ipList,
		},
		lastCleanup: time.Time{},
	}

	if err := defender.Reload(); err != nil {
		return nil, err
	}
	defender.lastCleanup.Store(0)

	return defender, nil
}

// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *dbDefender) GetHosts() ([]dataprovider.DefenderEntry, error) {
func (d *dbDefender) GetHosts() ([]*dataprovider.DefenderEntry, error) {
	return dataprovider.GetDefenderHosts(d.getStartObservationTime(), d.config.EntriesHardLimit)
}

// GetHost returns a defender host by ip, if any
func (d *dbDefender) GetHost(ip string) (dataprovider.DefenderEntry, error) {
func (d *dbDefender) GetHost(ip string) (*dataprovider.DefenderEntry, error) {
	return dataprovider.GetDefenderHostByIP(ip, d.getStartObservationTime())
}

// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *dbDefender) IsBanned(ip, protocol string) bool {
	if d.baseDefender.isBanned(ip, protocol) {
func (d *dbDefender) IsBanned(ip string) bool {
	d.RLock()
	if d.baseDefender.isBanned(ip) {
		d.RUnlock()
		return true
	}
	d.RUnlock()

	_, err := dataprovider.IsDefenderHostBanned(ip)
	if err != nil {

@@ -88,38 +75,29 @@ func (d *dbDefender) DeleteHost(ip string) bool {
}

// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned.
// Returns true if the IP is in the defender's safe list.
func (d *dbDefender) AddEvent(ip, protocol string, event HostEvent) bool {
	if d.IsSafe(ip, protocol) {
		return true
// This method must be called for clients not yet banned
func (d *dbDefender) AddEvent(ip string, event HostEvent) {
	d.RLock()
	if d.safeList != nil && d.safeList.isListed(ip) {
		d.RUnlock()
		return
	}
	d.RUnlock()

	score := d.baseDefender.getScore(event)

	host, err := dataprovider.AddDefenderEvent(ip, score, d.getStartObservationTime())
	if err != nil {
		return false
		return
	}
	d.baseDefender.logEvent(ip, protocol, event, host.Score)
	if host.Score > d.config.Threshold {
		d.baseDefender.logBan(ip, protocol)
		banTime := time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
		err = dataprovider.SetDefenderBanTime(ip, util.GetTimeAsMsSinceEpoch(banTime))
		if err == nil {
			eventManager.handleIPBlockedEvent(EventParams{
				Event:     ipBlockedEventName,
				IP:        ip,
				Timestamp: time.Now().UnixNano(),
				Status:    1,
			})
		}
	}

	if err == nil {
		d.cleanup()
	}
	return false
}

// GetBanTime returns the ban time for the given IP or nil if the IP is not banned

@@ -165,17 +143,15 @@ func (d *dbDefender) getStartObservationTime() int64 {
}

func (d *dbDefender) getLastCleanup() time.Time {
	val := d.lastCleanup.Load()
	if val == 0 {
		return time.Time{}
	}
	return util.GetTimeFromMsecSinceEpoch(val)
	d.RLock()
	defer d.RUnlock()

	return d.lastCleanup
}

func (d *dbDefender) setLastCleanup(when time.Time) {
	if when.IsZero() {
		d.lastCleanup.Store(0)
		return
	}
	d.lastCleanup.Store(util.GetTimeAsMsSinceEpoch(when))
	d.Lock()
	defer d.Unlock()

	d.lastCleanup = when
}
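
The two sides of the hunks above guard lastCleanup differently: one packs the timestamp into an atomic.Int64 as milliseconds since the epoch (zero meaning "never"), the other keeps a time.Time behind the embedded RWMutex. A self-contained sketch of the atomic variant follows; the function name is hypothetical and only the pattern comes from the diff.

// demoAtomicTimestamp is illustrative only; it shows the
// atomic.Int64-as-timestamp pattern from one side of the hunk above.
func demoAtomicTimestamp() time.Time {
	var lastCleanup atomic.Int64
	lastCleanup.Store(time.Now().UnixMilli()) // setLastCleanup equivalent
	v := lastCleanup.Load()                   // getLastCleanup equivalent
	if v == 0 {
		return time.Time{} // no cleanup has run yet
	}
	return time.UnixMilli(v)
}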
@@ -1,73 +1,23 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package common

import (
	"encoding/hex"
	"encoding/json"
	"os"
	"path/filepath"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"

	"github.com/drakkan/sftpgo/v2/internal/dataprovider"
	"github.com/drakkan/sftpgo/v2/internal/util"
	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/util"
)

func TestBasicDbDefender(t *testing.T) {
	if !isDbDefenderSupported() {
		t.Skip("this test is not supported with the current database provider")
	}
	entries := []dataprovider.IPListEntry{
		{
			IPOrNet: "172.16.1.1/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeDeny,
		},
		{
			IPOrNet: "172.16.1.2/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeDeny,
		},
		{
			IPOrNet: "10.8.0.0/24",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeDeny,
		},
		{
			IPOrNet: "172.16.1.3/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeAllow,
		},
		{
			IPOrNet: "172.16.1.4/32",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeAllow,
		},
		{
			IPOrNet: "192.168.8.0/24",
			Type:    dataprovider.IPListTypeDefender,
			Mode:    dataprovider.ListModeAllow,
		},
	}

	for idx := range entries {
		e := entries[idx]
		err := dataprovider.AddIPListEntry(&e, "", "", "")
		assert.NoError(t, err)
	}

	config := &DefenderConfig{
		Enabled: true,
		BanTime: 10,

@@ -75,36 +25,65 @@ func TestBasicDbDefender(t *testing.T) {
		Threshold:          5,
		ScoreInvalid:       2,
		ScoreValid:         1,
		ScoreNoAuth:        2,
		ScoreLimitExceeded: 3,
		ObservationTime:    15,
		EntriesSoftLimit:   1,
		EntriesHardLimit:   10,
		SafeListFile:       "slFile",
		BlockListFile:      "blFile",
	}
	_, err := newDBDefender(config)
	assert.Error(t, err)

	bl := HostListFile{
		IPAddresses:  []string{"172.16.1.1", "172.16.1.2"},
		CIDRNetworks: []string{"10.8.0.0/24"},
	}
	sl := HostListFile{
		IPAddresses:  []string{"172.16.1.3", "172.16.1.4"},
		CIDRNetworks: []string{"192.168.8.0/24"},
	}
	blFile := filepath.Join(os.TempDir(), "bl.json")
	slFile := filepath.Join(os.TempDir(), "sl.json")

	data, err := json.Marshal(bl)
	assert.NoError(t, err)
	err = os.WriteFile(blFile, data, os.ModePerm)
	assert.NoError(t, err)

	data, err = json.Marshal(sl)
	assert.NoError(t, err)
	err = os.WriteFile(slFile, data, os.ModePerm)
	assert.NoError(t, err)

	config.BlockListFile = blFile
	_, err = newDBDefender(config)
	assert.Error(t, err)
	config.SafeListFile = slFile
	d, err := newDBDefender(config)
	assert.NoError(t, err)
	defender := d.(*dbDefender)
	assert.True(t, defender.IsBanned("172.16.1.1", ProtocolFTP))
	assert.False(t, defender.IsBanned("172.16.1.10", ProtocolSSH))
	assert.False(t, defender.IsBanned("10.8.1.3", ProtocolHTTP))
	assert.True(t, defender.IsBanned("10.8.0.4", ProtocolWebDAV))
	assert.False(t, defender.IsBanned("invalid ip", ProtocolSSH))
	assert.True(t, defender.IsBanned("172.16.1.1"))
	assert.False(t, defender.IsBanned("172.16.1.10"))
	assert.False(t, defender.IsBanned("10.8.1.3"))
	assert.True(t, defender.IsBanned("10.8.0.4"))
	assert.False(t, defender.IsBanned("invalid ip"))
	hosts, err := defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 0)
	_, err = defender.GetHost("10.8.0.3")
	assert.Error(t, err)

	defender.AddEvent("172.16.1.4", ProtocolSSH, HostEventLoginFailed)
	defender.AddEvent("192.168.8.4", ProtocolSSH, HostEventUserNotFound)
	defender.AddEvent("172.16.1.3", ProtocolSSH, HostEventLimitExceeded)
	defender.AddEvent("172.16.1.4", HostEventLoginFailed)
	defender.AddEvent("192.168.8.4", HostEventUserNotFound)
	defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 0)
	assert.True(t, defender.getLastCleanup().IsZero())

	testIP := "123.45.67.89"
	defender.AddEvent(testIP, ProtocolSSH, HostEventLoginFailed)
	defender.AddEvent(testIP, HostEventLoginFailed)
	lastCleanup := defender.getLastCleanup()
	assert.False(t, lastCleanup.IsZero())
	score, err := defender.GetScore(testIP)

@@ -124,7 +103,7 @@ func TestBasicDbDefender(t *testing.T) {
	banTime, err := defender.GetBanTime(testIP)
	assert.NoError(t, err)
	assert.Nil(t, banTime)
	defender.AddEvent(testIP, ProtocolSSH, HostEventLimitExceeded)
	defender.AddEvent(testIP, HostEventLimitExceeded)
	score, err = defender.GetScore(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 4, score)

@@ -135,8 +114,8 @@ func TestBasicDbDefender(t *testing.T) {
	assert.True(t, hosts[0].BanTime.IsZero())
	assert.Empty(t, hosts[0].GetBanTime())
	}
	defender.AddEvent(testIP, ProtocolSSH, HostEventNoLoginTried)
	defender.AddEvent(testIP, ProtocolSSH, HostEventNoLoginTried)
	defender.AddEvent(testIP, HostEventNoLoginTried)
	defender.AddEvent(testIP, HostEventNoLoginTried)
	score, err = defender.GetScore(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 0, score)

@@ -156,7 +135,7 @@ func TestBasicDbDefender(t *testing.T) {
	assert.Equal(t, 0, host.Score)
	assert.NotEmpty(t, host.GetBanTime())
	// ban time should increase
	assert.True(t, defender.IsBanned(testIP, ProtocolSSH))
	assert.True(t, defender.IsBanned(testIP))
	newBanTime, err := defender.GetBanTime(testIP)
	assert.NoError(t, err)
	assert.True(t, newBanTime.After(*banTime))

@@ -168,9 +147,9 @@ func TestBasicDbDefender(t *testing.T) {
	testIP2 := "123.45.67.91"
	testIP3 := "123.45.67.92"
	for i := 0; i < 3; i++ {
		defender.AddEvent(testIP, ProtocolSSH, HostEventUserNotFound)
		defender.AddEvent(testIP1, ProtocolSSH, HostEventNoLoginTried)
		defender.AddEvent(testIP2, ProtocolSSH, HostEventUserNotFound)
		defender.AddEvent(testIP, HostEventNoLoginTried)
		defender.AddEvent(testIP1, HostEventNoLoginTried)
		defender.AddEvent(testIP2, HostEventNoLoginTried)
	}
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)

@@ -180,7 +159,7 @@ func TestBasicDbDefender(t *testing.T) {
	assert.False(t, host.BanTime.IsZero())
	assert.NotEmpty(t, host.GetBanTime())
	}
	defender.AddEvent(testIP3, ProtocolSSH, HostEventLoginFailed)
	defender.AddEvent(testIP3, HostEventLoginFailed)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 4)

@@ -254,10 +233,10 @@ func TestBasicDbDefender(t *testing.T) {
	assert.NoError(t, err)
	assert.Len(t, hosts, 0)

	for _, e := range entries {
		err := dataprovider.DeleteIPListEntry(e.IPOrNet, e.Type, "", "", "")
		assert.NoError(t, err)
	}
	err = os.Remove(slFile)
	assert.NoError(t, err)
	err = os.Remove(blFile)
	assert.NoError(t, err)
}

func TestDbDefenderCleanup(t *testing.T) {

@@ -286,8 +265,6 @@ func TestDbDefenderCleanup(t *testing.T) {
	assert.False(t, lastCleanup.IsZero())
	defender.cleanup()
	assert.Equal(t, lastCleanup, defender.getLastCleanup())
	defender.setLastCleanup(time.Time{})
	assert.True(t, defender.getLastCleanup().IsZero())
	defender.setLastCleanup(time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4))
	time.Sleep(20 * time.Millisecond)
	defender.cleanup()

@@ -297,7 +274,7 @@ func TestDbDefenderCleanup(t *testing.T) {
	err = dataprovider.Close()
	assert.NoError(t, err)

	lastCleanup = util.GetTimeFromMsecSinceEpoch(time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4).UnixMilli())
	lastCleanup = time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4)
	defender.setLastCleanup(lastCleanup)
	defender.cleanup()
	// cleanup will fail and so last cleanup should be reset to the previous value
@@ -1,31 +1,15 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package common

import (
	"sort"
	"sync"
	"time"

	"github.com/drakkan/sftpgo/v2/internal/dataprovider"
	"github.com/drakkan/sftpgo/v2/internal/util"
	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/util"
)

type memoryDefender struct {
	baseDefender
	sync.RWMutex
	// IP addresses of the clients trying to connect are stored inside hosts,
	// they are added to banned once the threshold is reached.
	// A violation from a banned host will increase the ban time

@@ -39,31 +23,30 @@ func newInMemoryDefender(config *DefenderConfig) (Defender, error) {
	if err != nil {
		return nil, err
	}
	ipList, err := dataprovider.NewIPList(dataprovider.IPListTypeDefender)
	if err != nil {
		return nil, err
	}
	defender := &memoryDefender{
		baseDefender: baseDefender{
			config: config,
			ipList: ipList,
		},
		hosts:  make(map[string]hostScore),
		banned: make(map[string]time.Time),
	}

	if err := defender.Reload(); err != nil {
		return nil, err
	}

	return defender, nil
}

// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *memoryDefender) GetHosts() ([]dataprovider.DefenderEntry, error) {
func (d *memoryDefender) GetHosts() ([]*dataprovider.DefenderEntry, error) {
	d.RLock()
	defer d.RUnlock()

	var result []dataprovider.DefenderEntry
	var result []*dataprovider.DefenderEntry
	for k, v := range d.banned {
		if v.After(time.Now()) {
			result = append(result, dataprovider.DefenderEntry{
			result = append(result, &dataprovider.DefenderEntry{
				IP:      k,
				BanTime: v,
			})

@@ -77,7 +60,7 @@ func (d *memoryDefender) GetHosts() ([]dataprovider.DefenderEntry, error) {
		}
	}
	if score > 0 {
		result = append(result, dataprovider.DefenderEntry{
		result = append(result, &dataprovider.DefenderEntry{
			IP:    k,
			Score: score,
		})

@@ -88,13 +71,13 @@ func (d *memoryDefender) GetHosts() ([]dataprovider.DefenderEntry, error) {
}

// GetHost returns a defender host by ip, if any
func (d *memoryDefender) GetHost(ip string) (dataprovider.DefenderEntry, error) {
func (d *memoryDefender) GetHost(ip string) (*dataprovider.DefenderEntry, error) {
	d.RLock()
	defer d.RUnlock()

	if banTime, ok := d.banned[ip]; ok {
		if banTime.After(time.Now()) {
			return dataprovider.DefenderEntry{
			return &dataprovider.DefenderEntry{
				IP:      ip,
				BanTime: banTime,
			}, nil

@@ -109,20 +92,20 @@ func (d *memoryDefender) GetHost(ip string) (dataprovider.DefenderEntry, error)
		}
	}
	if score > 0 {
		return dataprovider.DefenderEntry{
		return &dataprovider.DefenderEntry{
			IP:    ip,
			Score: score,
		}, nil
	}
	}

	return dataprovider.DefenderEntry{}, util.NewRecordNotFoundError("host not found")
	return nil, util.NewRecordNotFoundError("host not found")
}

// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *memoryDefender) IsBanned(ip, protocol string) bool {
func (d *memoryDefender) IsBanned(ip string) bool {
	d.RLock()

	if banTime, ok := d.banned[ip]; ok {

@@ -148,7 +131,7 @@ func (d *memoryDefender) IsBanned(ip, protocol string) bool {

	defer d.RUnlock()

	return d.baseDefender.isBanned(ip, protocol)
	return d.baseDefender.isBanned(ip)
}

// DeleteHost removes the specified IP from the defender lists

@@ -170,20 +153,19 @@ func (d *memoryDefender) DeleteHost(ip string) bool {
}

// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned.
// Returns true if the IP is in the defender's safe list.
func (d *memoryDefender) AddEvent(ip, protocol string, event HostEvent) bool {
	if d.IsSafe(ip, protocol) {
		return true
	}

// This method must be called for clients not yet banned
func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
	d.Lock()
	defer d.Unlock()

	if d.safeList != nil && d.safeList.isListed(ip) {
		return
	}

	// ignore events for already banned hosts
	if v, ok := d.banned[ip]; ok {
		if v.After(time.Now()) {
			return false
			return
		}
		delete(d.banned, ip)
	}

@@ -207,32 +189,22 @@ func (d *memoryDefender) AddEvent(ip, protocol string, event HostEvent) bool {
			idx++
		}
	}
	d.baseDefender.logEvent(ip, protocol, event, hs.TotalScore)

	hs.Events = hs.Events[:idx]
	if hs.TotalScore >= d.config.Threshold {
		d.baseDefender.logBan(ip, protocol)
		d.banned[ip] = time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
		delete(d.hosts, ip)
		d.cleanupBanned()
		eventManager.handleIPBlockedEvent(EventParams{
			Event:     ipBlockedEventName,
			IP:        ip,
			Timestamp: time.Now().UnixNano(),
			Status:    1,
		})
	} else {
		d.hosts[ip] = hs
	}
	} else {
		d.baseDefender.logEvent(ip, protocol, event, ev.score)
		d.hosts[ip] = hostScore{
			TotalScore: ev.score,
			Events:     []hostEvent{ev},
		}
		d.cleanupHosts()
	}
	return false
}

func (d *memoryDefender) countBanned() int {
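
AddEvent above maintains a per-host sliding window: events older than the observation time are pruned and only the surviving scores are summed before comparing against the threshold. The standalone sketch below mirrors that bookkeeping; the helper name is hypothetical, while hostEvent is the struct defined earlier in this compare.

// pruneAndScore is an illustrative helper, not SFTPGo code: it drops
// events outside the observation window and sums the remaining scores,
// mirroring what AddEvent does before checking the ban threshold.
func pruneAndScore(events []hostEvent, window time.Duration) ([]hostEvent, int) {
	cutoff := time.Now().Add(-window)
	kept := events[:0]
	total := 0
	for _, ev := range events {
		if ev.dateTime.After(cutoff) {
			kept = append(kept, ev)
			total += ev.score
		}
	}
	return kept, total
}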
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package common

import (

@@ -24,8 +10,8 @@ import (
	"github.com/GehirnInc/crypt/md5_crypt"
	"golang.org/x/crypto/bcrypt"

	"github.com/drakkan/sftpgo/v2/internal/logger"
	"github.com/drakkan/sftpgo/v2/internal/util"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

const (
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package common

import (
3373	common/protocol_test.go	Normal file
File diff suppressed because it is too large
@@ -1,23 +1,9 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package common

import (
	"errors"
	"fmt"
	"slices"
	"net"
	"sort"
	"sync"
	"sync/atomic"

@@ -25,7 +11,7 @@ import (

	"golang.org/x/time/rate"

	"github.com/drakkan/sftpgo/v2/internal/util"
	"github.com/drakkan/sftpgo/v2/util"
)

var (

@@ -62,6 +48,8 @@ type RateLimiterConfig struct {
	// Available protocols are: "SFTP", "FTP", "DAV".
	// A rate limiter with no protocols defined is disabled
	Protocols []string `json:"protocols" mapstructure:"protocols"`
	// AllowList defines a list of IP addresses and IP ranges excluded from rate limiting
	AllowList []string `json:"allow_list" mapstructure:"allow_list"`
	// If the rate limit is exceeded, the defender is enabled, and this is a per-source limiter,
	// a new defender event will be generated
	GenerateDefenderEvents bool `json:"generate_defender_events" mapstructure:"generate_defender_events"`

@@ -93,10 +81,10 @@ func (r *RateLimiterConfig) validate() error {
		return fmt.Errorf("invalid entries_hard_limit %v must be > %v", r.EntriesHardLimit, r.EntriesSoftLimit)
	}
	}
	r.Protocols = util.RemoveDuplicates(r.Protocols, true)
	r.Protocols = util.RemoveDuplicates(r.Protocols)
	for _, protocol := range r.Protocols {
		if !slices.Contains(rateLimiterProtocolValues, protocol) {
			return fmt.Errorf("invalid protocol %q", protocol)
		if !util.IsStringInSlice(protocol, rateLimiterProtocolValues) {
			return fmt.Errorf("invalid protocol %#v", protocol)
		}
	}
	return nil

@@ -140,12 +128,23 @@ type rateLimiter struct {
	globalBucket           *rate.Limiter
	buckets                sourceBuckets
	generateDefenderEvents bool
	allowList              []func(net.IP) bool
}

// Wait blocks until the limit allows one event to happen
// or returns an error if the time to wait exceeds the max
// allowed delay
func (rl *rateLimiter) Wait(source, protocol string) (time.Duration, error) {
func (rl *rateLimiter) Wait(source string) (time.Duration, error) {
	if len(rl.allowList) > 0 {
		ip := net.ParseIP(source)
		if ip != nil {
			for idx := range rl.allowList {
				if rl.allowList[idx](ip) {
					return 0, nil
				}
			}
		}
	}
	var res *rate.Reservation
	if rl.globalBucket != nil {
		res = rl.globalBucket.Reserve()

@@ -164,7 +163,7 @@ func (rl *rateLimiter) Wait(source, protocol string) (time.Duration, error) {
	if delay > rl.maxDelay {
		res.Cancel()
		if rl.generateDefenderEvents && rl.globalBucket == nil {
			AddDefenderEvent(source, protocol, HostEventLimitExceeded)
			AddDefenderEvent(source, HostEventLimitExceeded)
		}
		return delay, fmt.Errorf("rate limit exceed, wait time to respect rate %v, max wait time allowed %v", delay, rl.maxDelay)
	}

@@ -173,16 +172,16 @@ func (rl *rateLimiter) Wait(source, protocol string) (time.Duration, error) {
}

type sourceRateLimiter struct {
	lastActivity *atomic.Int64
	lastActivity int64
	bucket       *rate.Limiter
}

func (s *sourceRateLimiter) updateLastActivity() {
	s.lastActivity.Store(time.Now().UnixNano())
	atomic.StoreInt64(&s.lastActivity, time.Now().UnixNano())
}

func (s *sourceRateLimiter) getLastActivity() int64 {
	return s.lastActivity.Load()
	return atomic.LoadInt64(&s.lastActivity)
}

type sourceBuckets struct {

@@ -211,8 +210,7 @@ func (b *sourceBuckets) addAndReserve(r *rate.Limiter, source string) *rate.Rese
	b.cleanup()

	src := sourceRateLimiter{
		lastActivity: new(atomic.Int64),
		bucket:       r,
		bucket: r,
	}
	src.updateLastActivity()
	b.buckets[source] = src
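
On the caller side, Wait (in either signature shown above) returns a non-nil error when honoring the rate would require waiting longer than the configured max delay, so the request should be dropped rather than served. A minimal sketch with a hypothetical wrapper:

// serveIfAllowed is an illustrative wrapper, not SFTPGo code; it uses the
// newer single-argument Wait signature from the diff above.
func serveIfAllowed(rl *rateLimiter, source string, serve func()) {
	if _, err := rl.Wait(source); err != nil {
		return // rate limit exceeded for this source
	}
	serve()
}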
|
@@ -1,17 +1,3 @@
-// Copyright (C) 2019 Nicola Murino
-//
-// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published
-// by the Free Software Foundation, version 3.
-//
-// This program is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
-//
-// You should have received a copy of the GNU Affero General Public License
-// along with this program. If not, see <https://www.gnu.org/licenses/>.
-
 package common
 
 import (
@@ -20,6 +6,8 @@ import (
 
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
+
+	"github.com/drakkan/sftpgo/v2/util"
 )
 
 func TestRateLimiterConfig(t *testing.T) {
@@ -77,9 +65,9 @@ func TestRateLimiter(t *testing.T) {
 		Protocols: rateLimiterProtocolValues,
 	}
 	limiter := config.getLimiter()
-	_, err := limiter.Wait("", ProtocolFTP)
+	_, err := limiter.Wait("")
 	require.NoError(t, err)
-	_, err = limiter.Wait("", ProtocolSSH)
+	_, err = limiter.Wait("")
 	require.Error(t, err)
 
 	config.Type = int(rateLimiterTypeSource)
@@ -89,17 +77,28 @@ func TestRateLimiter(t *testing.T) {
 	limiter = config.getLimiter()
 
 	source := "192.168.1.2"
-	_, err = limiter.Wait(source, ProtocolSSH)
+	_, err = limiter.Wait(source)
 	require.NoError(t, err)
-	_, err = limiter.Wait(source, ProtocolSSH)
+	_, err = limiter.Wait(source)
 	require.Error(t, err)
 	// a different source should work
-	_, err = limiter.Wait(source+"1", ProtocolSSH)
+	_, err = limiter.Wait(source + "1")
 	require.NoError(t, err)
 
+	allowList := []string{"192.168.1.0/24"}
+	allowFuncs, err := util.ParseAllowedIPAndRanges(allowList)
+	assert.NoError(t, err)
+	limiter.allowList = allowFuncs
+	for i := 0; i < 5; i++ {
+		_, err = limiter.Wait(source)
+		require.NoError(t, err)
+	}
+	_, err = limiter.Wait("not an ip")
+	require.NoError(t, err)
+
 	config.Burst = 0
 	limiter = config.getLimiter()
-	_, err = limiter.Wait(source, ProtocolSSH)
+	_, err = limiter.Wait(source)
 	require.ErrorIs(t, err, errReserve)
 }
 
@@ -118,10 +117,10 @@ func TestLimiterCleanup(t *testing.T) {
 	source2 := "10.8.0.2"
 	source3 := "10.8.0.3"
 	source4 := "10.8.0.4"
-	_, err := limiter.Wait(source1, ProtocolSSH)
+	_, err := limiter.Wait(source1)
 	assert.NoError(t, err)
 	time.Sleep(20 * time.Millisecond)
-	_, err = limiter.Wait(source2, ProtocolSSH)
+	_, err = limiter.Wait(source2)
 	assert.NoError(t, err)
 	time.Sleep(20 * time.Millisecond)
 	assert.Len(t, limiter.buckets.buckets, 2)
@@ -129,7 +128,7 @@ func TestLimiterCleanup(t *testing.T) {
 	assert.True(t, ok)
 	_, ok = limiter.buckets.buckets[source2]
 	assert.True(t, ok)
-	_, err = limiter.Wait(source3, ProtocolSSH)
+	_, err = limiter.Wait(source3)
 	assert.NoError(t, err)
 	assert.Len(t, limiter.buckets.buckets, 3)
 	_, ok = limiter.buckets.buckets[source1]
@@ -139,7 +138,7 @@ func TestLimiterCleanup(t *testing.T) {
 	_, ok = limiter.buckets.buckets[source3]
 	assert.True(t, ok)
 	time.Sleep(20 * time.Millisecond)
-	_, err = limiter.Wait(source4, ProtocolSSH)
+	_, err = limiter.Wait(source4)
 	assert.NoError(t, err)
 	assert.Len(t, limiter.buckets.buckets, 2)
 	_, ok = limiter.buckets.buckets[source3]
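`util.ParseAllowedIPAndRanges` itself is not part of this compare. Judging only from the test above, it turns entries such as `"192.168.1.0/24"` into `[]func(net.IP) bool` matchers. A hypothetical reimplementation, for illustration only (this is a guess at the helper's shape, not SFTPGo's code):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseAllowedIPAndRanges builds one matcher per entry: CIDR ranges via
// (*net.IPNet).Contains and single addresses via net.IP.Equal.
func parseAllowedIPAndRanges(allowed []string) ([]func(net.IP) bool, error) {
	res := make([]func(net.IP) bool, 0, len(allowed))
	for _, v := range allowed {
		if strings.Contains(v, "/") {
			_, ipNet, err := net.ParseCIDR(v)
			if err != nil {
				return nil, err
			}
			res = append(res, ipNet.Contains)
		} else {
			ip := net.ParseIP(v)
			if ip == nil {
				return nil, fmt.Errorf("invalid IP %q", v)
			}
			res = append(res, ip.Equal)
		}
	}
	return res, nil
}

func main() {
	matchers, err := parseAllowedIPAndRanges([]string{"192.168.1.0/24"})
	if err != nil {
		panic(err)
	}
	fmt.Println(matchers[0](net.ParseIP("192.168.1.2"))) // true
	fmt.Println(matchers[0](net.ParseIP("10.0.0.1")))    // false
}
```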
200  common/tlsutils.go  Normal file
@@ -0,0 +1,200 @@
+package common
+
+import (
+	"crypto/tls"
+	"crypto/x509"
+	"crypto/x509/pkix"
+	"fmt"
+	"os"
+	"path/filepath"
+	"sync"
+	"time"
+
+	"github.com/drakkan/sftpgo/v2/logger"
+	"github.com/drakkan/sftpgo/v2/util"
+)
+
+// CertManager defines a TLS certificate manager
+type CertManager struct {
+	certPath  string
+	keyPath   string
+	configDir string
+	logSender string
+	sync.RWMutex
+	caCertificates    []string
+	caRevocationLists []string
+	cert              *tls.Certificate
+	rootCAs           *x509.CertPool
+	crls              []*pkix.CertificateList
+}
+
+// Reload tries to reload certificate and CRLs
+func (m *CertManager) Reload() error {
+	errCrt := m.loadCertificate()
+	errCRLs := m.LoadCRLs()
+
+	if errCrt != nil {
+		return errCrt
+	}
+	return errCRLs
+}
+
+// LoadCertificate loads the configured x509 key pair
+func (m *CertManager) loadCertificate() error {
+	newCert, err := tls.LoadX509KeyPair(m.certPath, m.keyPath)
+	if err != nil {
+		logger.Warn(m.logSender, "", "unable to load X509 key pair, cert file %#v key file %#v error: %v",
+			m.certPath, m.keyPath, err)
+		return err
+	}
+	logger.Debug(m.logSender, "", "TLS certificate %#v successfully loaded", m.certPath)
+
+	m.Lock()
+	defer m.Unlock()
+
+	m.cert = &newCert
+	return nil
+}
+
+// GetCertificateFunc returns the loaded certificate
+func (m *CertManager) GetCertificateFunc() func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
+	return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
+		m.RLock()
+		defer m.RUnlock()
+
+		return m.cert, nil
+	}
+}
+
+// IsRevoked returns true if the specified certificate has been revoked
+func (m *CertManager) IsRevoked(crt *x509.Certificate, caCrt *x509.Certificate) bool {
+	m.RLock()
+	defer m.RUnlock()
+
+	if crt == nil || caCrt == nil {
+		logger.Warn(m.logSender, "", "unable to verify crt %v ca crt %v", crt, caCrt)
+		return len(m.crls) > 0
+	}
+
+	for _, crl := range m.crls {
+		if !crl.HasExpired(time.Now()) && caCrt.CheckCRLSignature(crl) == nil {
+			for _, rc := range crl.TBSCertList.RevokedCertificates {
+				if rc.SerialNumber.Cmp(crt.SerialNumber) == 0 {
+					return true
+				}
+			}
+		}
+	}
+
+	return false
+}
+
+// LoadCRLs tries to load certificate revocation lists from the given paths
+func (m *CertManager) LoadCRLs() error {
+	if len(m.caRevocationLists) == 0 {
+		return nil
+	}
+
+	var crls []*pkix.CertificateList
+
+	for _, revocationList := range m.caRevocationLists {
+		if !util.IsFileInputValid(revocationList) {
+			return fmt.Errorf("invalid root CA revocation list %#v", revocationList)
+		}
+		if revocationList != "" && !filepath.IsAbs(revocationList) {
+			revocationList = filepath.Join(m.configDir, revocationList)
+		}
+		crlBytes, err := os.ReadFile(revocationList)
+		if err != nil {
+			logger.Warn(m.logSender, "unable to read revocation list %#v", revocationList)
+			return err
+		}
+		crl, err := x509.ParseCRL(crlBytes)
+		if err != nil {
+			logger.Warn(m.logSender, "unable to parse revocation list %#v", revocationList)
+			return err
+		}
+
+		logger.Debug(m.logSender, "", "CRL %#v successfully loaded", revocationList)
+		crls = append(crls, crl)
+	}
+
+	m.Lock()
+	defer m.Unlock()
+
+	m.crls = crls
+
+	return nil
+}
+
+// GetRootCAs returns the set of root certificate authorities that servers
+// use if required to verify a client certificate
+func (m *CertManager) GetRootCAs() *x509.CertPool {
+	m.RLock()
+	defer m.RUnlock()
+
+	return m.rootCAs
+}
+
+// LoadRootCAs tries to load root CA certificate authorities from the given paths
+func (m *CertManager) LoadRootCAs() error {
+	if len(m.caCertificates) == 0 {
+		return nil
+	}
+
+	rootCAs := x509.NewCertPool()
+
+	for _, rootCA := range m.caCertificates {
+		if !util.IsFileInputValid(rootCA) {
+			return fmt.Errorf("invalid root CA certificate %#v", rootCA)
+		}
+		if rootCA != "" && !filepath.IsAbs(rootCA) {
+			rootCA = filepath.Join(m.configDir, rootCA)
+		}
+		crt, err := os.ReadFile(rootCA)
+		if err != nil {
+			return err
+		}
+		if rootCAs.AppendCertsFromPEM(crt) {
+			logger.Debug(m.logSender, "", "TLS certificate authority %#v successfully loaded", rootCA)
+		} else {
+			err := fmt.Errorf("unable to load TLS certificate authority %#v", rootCA)
+			logger.Warn(m.logSender, "", "%v", err)
+			return err
+		}
+	}
+
+	m.Lock()
+	defer m.Unlock()
+
+	m.rootCAs = rootCAs
+	return nil
+}
+
+// SetCACertificates sets the root CA authorities file paths.
+// This should not be changed at runtime
+func (m *CertManager) SetCACertificates(caCertificates []string) {
+	m.caCertificates = caCertificates
+}
+
+// SetCARevocationLists sets the CA revocation lists file paths.
+// This should not be changed at runtime
+func (m *CertManager) SetCARevocationLists(caRevocationLists []string) {
+	m.caRevocationLists = caRevocationLists
+}
+
+// NewCertManager creates a new certificate manager
+func NewCertManager(certificateFile, certificateKeyFile, configDir, logSender string) (*CertManager, error) {
+	manager := &CertManager{
+		cert:      nil,
+		certPath:  certificateFile,
+		keyPath:   certificateKeyFile,
+		configDir: configDir,
+		logSender: logSender,
+	}
+	err := manager.loadCertificate()
+	if err != nil {
+		return nil, err
+	}
+	return manager, nil
+}
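A hedged usage sketch for the manager above: the file paths, the log sender and the SIGHUP-triggered `Reload` are assumptions about how a caller might wire it up, not something this compare shows. Because the served certificate comes from `GetCertificate`, a `Reload` takes effect without restarting the listener:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"

	"github.com/drakkan/sftpgo/v2/common" // import path per this fork's layout
)

func main() {
	// illustrative paths; relative paths are resolved against configDir
	certMgr, err := common.NewCertManager("server.crt", "server.key", "/etc/sftpgo", "web")
	if err != nil {
		log.Fatal(err)
	}
	certMgr.SetCACertificates([]string{"ca.crt"})
	if err := certMgr.LoadRootCAs(); err != nil {
		log.Fatal(err)
	}
	certMgr.SetCARevocationLists([]string{"ca.crl"})
	if err := certMgr.LoadCRLs(); err != nil {
		log.Fatal(err)
	}

	tlsConfig := &tls.Config{
		GetCertificate: certMgr.GetCertificateFunc(), // served cert can change after Reload
		ClientCAs:      certMgr.GetRootCAs(),
		ClientAuth:     tls.VerifyClientCertIfGiven,
		MinVersion:     tls.VersionTLS12,
	}

	// reload certificate and CRLs on SIGHUP without restarting
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGHUP)
	go func() {
		for range sigs {
			if err := certMgr.Reload(); err != nil {
				log.Printf("reload error: %v", err)
			}
		}
	}()

	srv := &http.Server{Addr: ":8443", TLSConfig: tlsConfig}
	log.Fatal(srv.ListenAndServeTLS("", "")) // cert comes from GetCertificate
}
```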
386  common/tlsutils_test.go  Normal file
@@ -0,0 +1,386 @@
+package common
+
+import (
+	"crypto/tls"
+	"crypto/x509"
+	"os"
+	"path/filepath"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+)
+
+const (
+	serverCert = `-----BEGIN CERTIFICATE-----
+MIIEIDCCAgigAwIBAgIRAPOR9zTkX35vSdeyGpF8Rn8wDQYJKoZIhvcNAQELBQAw
+EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMjU1WhcNMjIwNzAyMjEz
+MDUxWjARMQ8wDQYDVQQDEwZzZXJ2ZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
+ggEKAoIBAQCte0PJhCTNqTiqdwk/s4JanKIMKUVWr2u94a+JYy5gJ9xYXrQ49SeN
+m+fwhTAOqctP5zNVkFqxlBytJZg3pqCKqRoOOl1qVgL3F3o7JdhZGi67aw8QMLPx
+tLPpYWnnrlUQoXRJdTlqkDqO8lOZl9HO5oZeidPZ7r5BVD6ZiujAC6Zg0jIc+EPt
+qhaUJ1CStoAeRf1rNWKmDsLv5hEaDWoaHF9sNVzDQg6atZ3ici00qQj+uvEZo8mL
+k6egg3rqsTv9ml2qlrRgFumt99J60hTt3tuQaAruHY80O9nGy3SCXC11daa7gszH
+ElCRvhUVoOxRtB54YBEtJ0gEpFnTO9J1AgMBAAGjcTBvMA4GA1UdDwEB/wQEAwID
+uDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFAgDXwPV
+nhztNz+H20iNWgoIx8adMB8GA1UdIwQYMBaAFO1yCNAGr/zQTJIi8lw3w5OiuBvM
+MA0GCSqGSIb3DQEBCwUAA4ICAQCR5kgIb4vAtrtsXD24n6RtU1yIXHPLNmDStVrH
+uaMYNnHlLhRlQFCjHhjWvZ89FQC7FeNOITc3FpibJySyw7JfnsyEOGxEbcAS4uLB
+2pdAiJPqdQtxIVcyi5vu53m1T5tm0sy8sBrGxU466aDQ8VGqjcjfTwNIyoFMd3p/
+ezFRvg2BudwU9hqApgfHfLi4WCuI3hLO2tbmgDinyH0HI0YYNNweGpiBYbTLF4Tx
+H6vHgD9USMZeu4+HX0IIsBiHQD7TTIe5ceREkPcNPd5qTpIvT3zKQ/KwwT90/zjP
+aWmz6pLxBfjRu7MY/bDfxfRUqsrLYJCVBoaDVRWR9rhiPIFkC5JzoWD/4hdj2iis
+N0+OOaJ77L+/ArFprE+7Fu3cSdYlfiNjV8R5kE29cAxKLI92CjAiTKrEuxKcQPKO
++taWNKIYYjEDZwVnzlkTIl007X0RBuzu9gh4w5NwJdt8ZOJAp0JV0Cq+UvG+FC/v
+lYk82E6j1HKhf4CXmrjsrD1Fyu41mpVFOpa2ATiFGvms913MkXuyO8g99IllmDw1
+D7/PN4Qe9N6Zm7yoKZM0IUw2v+SUMIdOAZ7dptO9ZjtYOfiAIYN3jM8R4JYgPiuD
+DGSM9LJBJxCxI/DiO1y1Z3n9TcdDQYut8Gqdi/aYXw2YeqyHXosX5Od3vcK/O5zC
+pOJTYQ==
+-----END CERTIFICATE-----`
+	serverKey = `-----BEGIN RSA PRIVATE KEY-----
+MIIEowIBAAKCAQEArXtDyYQkzak4qncJP7OCWpyiDClFVq9rveGviWMuYCfcWF60
+OPUnjZvn8IUwDqnLT+czVZBasZQcrSWYN6agiqkaDjpdalYC9xd6OyXYWRouu2sP
+EDCz8bSz6WFp565VEKF0SXU5apA6jvJTmZfRzuaGXonT2e6+QVQ+mYrowAumYNIy
+HPhD7aoWlCdQkraAHkX9azVipg7C7+YRGg1qGhxfbDVcw0IOmrWd4nItNKkI/rrx
+GaPJi5OnoIN66rE7/Zpdqpa0YBbprffSetIU7d7bkGgK7h2PNDvZxst0glwtdXWm
+u4LMxxJQkb4VFaDsUbQeeGARLSdIBKRZ0zvSdQIDAQABAoIBAF4sI8goq7HYwqIG
+rEagM4rsrCrd3H4KC/qvoJJ7/JjGCp8OCddBfY8pquat5kCPe4aMgxlXm2P6evaj
+CdZr5Ypf8Xz3we4PctyfKgMhsCfuRqAGpc6sIYJ8DY4LC2pxAExe2LlnoRtv39np
+QeiGuaYPDbIUL6SGLVFZYgIHngFhbDYfL83q3Cb/PnivUGFvUVQCfRBUKO2d8KYq
+TrVB5BWD2GrHor24ApQmci1OOqfbkIevkK6bk8HUfSZiZGI9LUQiPHMxi5k2x43J
+nIwhZnW2N28dorKnWHg2vh7viGvinVRZ3MEyX150oCw/L6SYM4fqR6t2ZSBgNQHT
+ZNoDtwECgYEA4lXMgtYqKuSlZ3TKfxAj03tJ/gbRdKcUCEGXEbdpY70tTu6KESZS
+etid4Ut/sWEoPTJsgYiGbgJl571t1O8oR1UZYgh9hBGHLV6UEIt9n2PbExhE2vL3
+SB7+LfO+tMvM4qKUBN+uy4GpU0NiyEEecw4x4S7MRSyHFRIDR7B6RV0CgYEAxDgS
+mDaNUfSdfB5mXekLUJAwqeKRdL9RjXYaHbnoZ5kIwQ73tFikRwyTsLQwMhjE1l3z
+MItTzIAyTf/BlK3dsp6bHTaT7hXIjHBsuKATN5qAuUpzTrg9+QaCawVSlQgNeF3a
+iyfD4dVp66Bzn3gO757TWqmroBZ2e1owbAQvF/kCgYAKT/Jze6KMNcK7hfy78VZQ
+imuCoXjlob8t6R8i9YJdwv7Pe9rakS5s3nXDEBePU2fr8eIzvK6zUHSoLF9WtlbV
+eTEg4FYnsEzCam7AmjptCrWulwp8F1ng9ViLa3Gi9y4snU+1MSPbrdqzKnzTtvPW
+Ni1bnzA7bp3w/dMcbxQDGQKBgB50hY5SiUS7LuZg4YqZ7UOn3aXAoMr6FvJZ7lvG
+yyepPQ6aACBh0b2lWhcHIKPl7EdJdcGHHo6TJzusAqPNCKf8rh6upe9COkpx+K3/
+SnxK4sffol4JgrTwKbXqsZKoGU8hYhZPKbwXn8UOtmN+AvN2N1/PDfBfDCzBJtrd
+G2IhAoGBAN19976xAMDjKb2+wd/mQYA2fR7E8lodxdX3LDnblYmndTKY67nVo94M
+FHPKZSN590HkFJ+wmChnOrqjtosY+N25CKMS7939EUIDrq+B+bYTWM/gcwdLXNUk
+Rygw/078Z3ZDJamXmyez5WpeLFrrbmI8sLnBBmSjQvMb6vCEtQ2Z
+-----END RSA PRIVATE KEY-----`
+	caCRT = `-----BEGIN CERTIFICATE-----
+MIIE5jCCAs6gAwIBAgIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0
+QXV0aDAeFw0yMTAxMDIyMTIwNTVaFw0yMjA3MDIyMTMwNTJaMBMxETAPBgNVBAMT
+CENlcnRBdXRoMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA4Tiho5xW
+AC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+sRKqC+Ti88OJWCV5saoyax/1S
+CjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRRjxp/Bw9dHdiEb9MjLgu28Jro
+9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgARainBkYjf0SwuWxHeu4nMqkp
+Ak5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lvuU+DD2W2lym+YVUtRMGs1Env
+k7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q8T1dCIyP9OQCKVILdc5aVFf1
+cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n6ykecLEyKt1F1Y/MWY/nWUSI
+8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZV2gX0a+eRlAVqaRbAhL3LaZe
+bYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaEOsnGG9KFO6jh+W768qC0zLQI
+CdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZf2fy7UIYN9ADLFZiorCXAZEh
+CSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg73TlMsk1zSXEw0MKLUjtsw6c
+rZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEAAaNFMEMwDgYDVR0PAQH/BAQD
+AgEGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFO1yCNAGr/zQTJIi8lw3
+w5OiuBvMMA0GCSqGSIb3DQEBCwUAA4ICAQA6gCNuM7r8mnx674dm31GxBjQy5ZwB
+7CxDzYEvL/oiZ3Tv3HlPfN2LAAsJUfGnghh9DOytenL2CTZWjl/emP5eijzmlP+9
+zva5I6CIMCf/eDDVsRdO244t0o4uG7+At0IgSDM3bpVaVb4RHZNjEziYChsEYY8d
+HK6iwuRSvFniV6yhR/Vj1Ymi9yZ5xclqseLXiQnUB0PkfIk23+7s42cXB16653fH
+O/FsPyKBLiKJArizLYQc12aP3QOrYoYD9+fAzIIzew7A5C0aanZCGzkuFpO6TRlD
+Tb7ry9Gf0DfPpCgxraH8tOcmnqp/ka3hjqo/SRnnTk0IFrmmLdarJvjD46rKwBo4
+MjyAIR1mQ5j8GTlSFBmSgETOQ/EYvO3FPLmra1Fh7L+DvaVzTpqI9fG3TuyyY+Ri
+Fby4ycTOGSZOe5Fh8lqkX5Y47mCUJ3zHzOA1vUJy2eTlMRGpu47Eb1++Vm6EzPUP
+2EF5aD+zwcssh+atZvQbwxpgVqVcyLt91RSkKkmZQslh0rnlTb68yxvUnD3zw7So
+o6TAf9UvwVMEvdLT9NnFd6hwi2jcNte/h538GJwXeBb8EkfpqLKpTKyicnOdkamZ
+7E9zY8SHNRYMwB9coQ/W8NvufbCgkvOoLyMXk5edbXofXl3PhNGOlraWbghBnzf5
+r3rwjFsQOoZotA==
+-----END CERTIFICATE-----`
+	caKey = `-----BEGIN RSA PRIVATE KEY-----
+MIIJKQIBAAKCAgEA4Tiho5xWAC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+s
+RKqC+Ti88OJWCV5saoyax/1SCjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRR
+jxp/Bw9dHdiEb9MjLgu28Jro9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgA
+RainBkYjf0SwuWxHeu4nMqkpAk5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lv
+uU+DD2W2lym+YVUtRMGs1Envk7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q
+8T1dCIyP9OQCKVILdc5aVFf1cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n
+6ykecLEyKt1F1Y/MWY/nWUSI8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZ
+V2gX0a+eRlAVqaRbAhL3LaZebYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaE
+OsnGG9KFO6jh+W768qC0zLQICdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZ
+f2fy7UIYN9ADLFZiorCXAZEhCSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg
+73TlMsk1zSXEw0MKLUjtsw6crZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEA
+AQKCAgAV+ElERYbaI5VyufvVnFJCH75ypPoc6sVGLEq2jbFVJJcq/5qlZCC8oP1F
+Xj7YUR6wUiDzK1Hqb7EZ2SCHGjlZVrCVi+y+NYAy7UuMZ+r+mVSkdhmypPoJPUVv
+GOTqZ6VB46Cn3eSl0WknvoWr7bD555yPmEuiSc5zNy74yWEJTidEKAFGyknowcTK
+sG+w1tAuPLcUKQ44DGB+rgEkcHL7C5EAa7upzx0C3RmZFB+dTAVyJdkBMbFuOhTS
+sB7DLeTplR7/4mp9da7EQw51ZXC1DlZOEZt++4/desXsqATNAbva1OuzrLG7mMKe
+N/PCBh/aERQcsCvgUmaXqGQgqN1Jhw8kbXnjZnVd9iE7TAh7ki3VqNy1OMgTwOex
+bBYWaCqHuDYIxCjeW0qLJcn0cKQ13FVYrxgInf4Jp82SQht5b/zLL3IRZEyKcLJF
+kL6g1wlmTUTUX0z8eZzlM0ZCrqtExjgElMO/rV971nyNV5WU8Og3NmE8/slqMrmJ
+DlrQr9q0WJsDKj1IMe46EUM6ix7bbxC5NIfJ96dgdxZDn6ghjca6iZYqqUACvmUj
+cq08s3R4Ouw9/87kn11wwGBx2yDueCwrjKEGc0RKjweGbwu0nBxOrkJ8JXz6bAv7
+1OKfYaX3afI9B8x4uaiuRs38oBQlg9uAYFfl4HNBPuQikGLmsQKCAQEA8VjFOsaz
+y6NMZzKXi7WZ48uu3ed5x3Kf6RyDr1WvQ1jkBMv9b6b8Gp1CRnPqviRBto9L8QAg
+bCXZTqnXzn//brskmW8IZgqjAlf89AWa53piucu9/hgidrHRZobs5gTqev28uJdc
+zcuw1g8c3nCpY9WeTjHODzX5NXYRLFpkazLfYa6c8Q9jZR4KKrpdM+66fxL0JlOd
+7dN0oQtEqEAugsd3cwkZgvWhY4oM7FGErrZoDLy273ZdJzi/vU+dThyVzfD8Ab8u
+VxxuobVMT/S608zbe+uaiUdov5s96OkCl87403UNKJBH+6LNb3rjBBLE9NPN5ET9
+JLQMrYd+zj8jQwKCAQEA7uU5I9MOufo9bIgJqjY4Ie1+Ex9DZEMUYFAvGNCJCVcS
+mwOdGF8AWzIavTLACmEDJO7t/OrBdoo4L7IEsCNjgA3WiIwIMiWUVqveAGUMEXr6
+TRI5EolV6FTqqIP6AS+BAeBq7G1ELgsTrWNHh11rW3+3kBMuOCn77PUQ8WHwcq/r
+teZcZn4Ewcr6P7cBODgVvnBPhe/J8xHS0HFVCeS1CvaiNYgees5yA80Apo9IPjDJ
+YWawLjmH5wUBI5yDFVp067wjqJnoKPSoKwWkZXqUk+zgFXx5KT0gh/c5yh1frASp
+q6oaYnHEVC5qj2SpT1GFLonTcrQUXiSkiUudvNu1GQKCAQEAmko+5GFtRe0ihgLQ
+4S76r6diJli6AKil1Fg3U1r6zZpBQ1PJtJxTJQyN9w5Z7q6tF/GqAesrzxevQdvQ
+rCImAPtA3ZofC2UXawMnIjWHHx6diNvYnV1+gtUQ4nO1dSOFZ5VZFcUmPiZO6boF
+oaryj3FcX+71JcJCjEvrlKhA9Es0hXUkvfMxfs5if4he1zlyHpTWYr4oA4egUugq
+P0mwskikc3VIyvEO+NyjgFxo72yLPkFSzemkidN8uKDyFqKtnlfGM7OuA2CY1WZa
+3+67lXWshx9KzyJIs92iCYkU8EoPxtdYzyrV6efdX7x27v60zTOut5TnJJS6WiF6
+Do5MkwKCAQAxoR9IyP0DN/BwzqYrXU42Bi+t603F04W1KJNQNWpyrUspNwv41yus
+xnD1o0hwH41Wq+h3JZIBfV+E0RfWO9Pc84MBJQ5C1LnHc7cQH+3s575+Km3+4tcd
+CB8j2R8kBeloKWYtLdn/Mr/ownpGreqyvIq2/LUaZ+Z1aMgXTYB1YwS16mCBzmZQ
+mEl62RsAwe4KfSyYJ6OtwqMoOJMxFfliiLBULK4gVykqjvk2oQeiG+KKQJoTUFJi
+dRCyhD5bPkqR+qjxyt+HOqSBI4/uoROi05AOBqjpH1DVzk+MJKQOiX1yM0l98CKY
+Vng+x+vAla/0Zh+ucajVkgk4mKPxazdpAoIBAQC17vWk4KYJpF2RC3pKPcQ0PdiX
+bN35YNlvyhkYlSfDNdyH3aDrGiycUyW2mMXUgEDFsLRxHMTL+zPC6efqO6sTAJDY
+cBptsW4drW/qo8NTx3dNOisLkW+mGGJOR/w157hREFr29ymCVMYu/Z7fVWIeSpCq
+p3u8YX8WTljrxwSczlGjvpM7uJx3SfYRM4TUoy+8wU8bK74LywLa5f60bQY6Dye0
+Gqd9O6OoPfgcQlwjC5MiAofeqwPJvU0hQOPoehZyNLAmOCWXTYWaTP7lxO1r6+NE
+M3hGYqW3W8Ixua71OskCypBZg/HVlIP/lzjRzdx+VOB2hbWVth2Iup/Z1egW
+-----END RSA PRIVATE KEY-----`
+	caCRL = `-----BEGIN X509 CRL-----
+MIICpzCBkAIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0QXV0aBcN
+MjEwMTAyMjEzNDA1WhcNMjMwMTAyMjEzNDA1WjAkMCICEQC+l04DbHWMyC3fG09k
+VXf+Fw0yMTAxMDIyMTM0MDVaoCMwITAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJc
+N8OTorgbzDANBgkqhkiG9w0BAQsFAAOCAgEAEJ7z+uNc8sqtxlOhSdTGDzX/xput
+E857kFQkSlMnU2whQ8c+XpYrBLA5vIZJNSSwohTpM4+zVBX/bJpmu3wqqaArRO9/
+YcW5mQk9Anvb4WjQW1cHmtNapMTzoC9AiYt/OWPfy+P6JCgCr4Hy6LgQyIRL6bM9
+VYTalolOm1qa4Y5cIeT7iHq/91mfaqo8/6MYRjLl8DOTROpmw8OS9bCXkzGKdCat
+AbAzwkQUSauyoCQ10rpX+Y64w9ng3g4Dr20aCqPf5osaqplEJ2HTK8ljDTidlslv
+9anQj8ax3Su89vI8+hK+YbfVQwrThabgdSjQsn+veyx8GlP8WwHLAQ379KjZjWg+
+OlOSwBeU1vTdP0QcB8X5C2gVujAyuQekbaV86xzIBOj7vZdfHZ6ee30TZ2FKiMyg
+7/N2OqW0w77ChsjB4MSHJCfuTgIeg62GzuZXLM+Q2Z9LBdtm4Byg+sm/P52adOEg
+gVb2Zf4KSvsAmA0PIBlu449/QXUFcMxzLFy7mwTeZj2B4Ln0Hm0szV9f9R8MwMtB
+SyLYxVH+mgqaR6Jkk22Q/yYyLPaELfafX5gp/AIXG8n0zxfVaTvK3auSgb1Q6ZLS
+5QH9dSIsmZHlPq7GoSXmKpMdjUL8eaky/IMteioyXgsBiATzl5L2dsw6MTX3MDF0
+QbDK+MzhmbKfDxs=
+-----END X509 CRL-----`
+	client1Crt = `-----BEGIN CERTIFICATE-----
+MIIEITCCAgmgAwIBAgIRAIppZHoj1hM80D7WzTEKLuAwDQYJKoZIhvcNAQELBQAw
+EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzEwWhcNMjIwNzAyMjEz
+MDUxWjASMRAwDgYDVQQDEwdjbGllbnQxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
+MIIBCgKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiVbJtH
+XVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd20jP
+yhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1UHw4
+3Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZmH859
+DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0habT
+cDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
+A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBSJ5GIv
+zIrE4ZSQt2+CGblKTDswizAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
+zDANBgkqhkiG9w0BAQsFAAOCAgEALh4f5GhvNYNou0Ab04iQBbLEdOu2RlbK1B5n
+K9P/umYenBHMY/z6HT3+6tpcHsDuqE8UVdq3f3Gh4S2Gu9m8PRitT+cJ3gdo9Plm
+3rD4ufn/s6rGg3ppydXcedm17492tbccUDWOBZw3IO/ASVq13WPgT0/Kev7cPq0k
+sSdSNhVeXqx8Myc2/d+8GYyzbul2Kpfa7h9i24sK49E9ftnSmsIvngONo08eT1T0
+3wAOyK2981LIsHaAWcneShKFLDB6LeXIT9oitOYhiykhFlBZ4M1GNlSNfhQ8IIQP
+xbqMNXCLkW4/BtLhGEEcg0QVso6Kudl9rzgTfQknrdF7pHp6rS46wYUjoSyIY6dl
+oLmnoAVJX36J3QPWelePI9e07X2wrTfiZWewwgw3KNRWjd6/zfPLe7GoqXnK1S2z
+PT8qMfCaTwKTtUkzXuTFvQ8bAo2My/mS8FOcpkt2oQWeOsADHAUX7fz5BCoa2DL3
+k/7Mh4gVT+JYZEoTwCFuYHgMWFWe98naqHi9lB4yR981p1QgXgxO7qBeipagKY1F
+LlH1iwXUqZ3MZnkNA+4e1Fglsw3sa/rC+L98HnznJ/YbTfQbCP6aQ1qcOymrjMud
+7MrFwqZjtd/SK4Qx1VpK6jGEAtPgWBTUS3p9ayg6lqjMBjsmySWfvRsDQbq6P5Ct
+O/e3EH8=
+-----END CERTIFICATE-----`
+	client1Key = `-----BEGIN RSA PRIVATE KEY-----
+MIIEpAIBAAKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiV
+bJtHXVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd
+20jPyhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1
+UHw43Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZm
+H859DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0
+habTcDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABAoIBAEBSjVFqtbsp0byR
+aXvyrtLX1Ng7h++at2jca85Ihq//jyqbHTje8zPuNAKI6eNbmb0YGr5OuEa4pD9N
+ssDmMsKSoG/lRwwcm7h4InkSvBWpFShvMgUaohfHAHzsBYxfnh+TfULsi0y7c2n6
+t/2OZcOTRkkUDIITnXYiw93ibHHv2Mv2bBDu35kGrcK+c2dN5IL5ZjTjMRpbJTe2
+44RBJbdTxHBVSgoGBnugF+s2aEma6Ehsj70oyfoVpM6Aed5kGge0A5zA1JO7WCn9
+Ay/DzlULRXHjJIoRWd2NKvx5n3FNppUc9vJh2plRHalRooZ2+MjSf8HmXlvG2Hpb
+ScvmWgECgYEA1G+A/2KnxWsr/7uWIJ7ClcGCiNLdk17Pv3DZ3G4qUsU2ITftfIbb
+tU0Q/b19na1IY8Pjy9ptP7t74/hF5kky97cf1FA8F+nMj/k4+wO8QDI8OJfzVzh9
+PwielA5vbE+xmvis5Hdp8/od1Yrc/rPSy2TKtPFhvsqXjqoUmOAjDP8CgYEAwZjH
+9dt1sc2lx/rMxihlWEzQ3JPswKW9/LJAmbRBoSWF9FGNjbX7uhWtXRKJkzb8ZAwa
+88azluNo2oftbDD/+jw8b2cDgaJHlLAkSD4O1D1RthW7/LKD15qZ/oFsRb13NV85
+ZNKtwslXGbfVNyGKUVFm7fVA8vBAOUey+LKDFj8CgYEAg8WWstOzVdYguMTXXuyb
+ruEV42FJaDyLiSirOvxq7GTAKuLSQUg1yMRBIeQEo2X1XU0JZE3dLodRVhuO4EXP
+g7Dn4X7Th9HSvgvNuIacowWGLWSz4Qp9RjhGhXhezUSx2nseY6le46PmFavJYYSR
+4PBofMyt4PcyA6Cknh+KHmkCgYEAnTriG7ETE0a7v4DXUpB4TpCEiMCy5Xs2o8Z5
+ZNva+W+qLVUWq+MDAIyechqeFSvxK6gRM69LJ96lx+XhU58wJiFJzAhT9rK/g+jS
+bsHH9WOfu0xHkuHA5hgvvV2Le9B2wqgFyva4HJy82qxMxCu/VG/SMqyfBS9OWbb7
+ibQhdq0CgYAl53LUWZsFSZIth1vux2LVOsI8C3X1oiXDGpnrdlQ+K7z57hq5EsRq
+GC+INxwXbvKNqp5h0z2MvmKYPDlGVTgw8f8JjM7TkN17ERLcydhdRrMONUryZpo8
+1xTob+8blyJgfxZUIAKbMbMbIiU0WAF0rfD/eJJwS4htOW/Hfv4TGA==
+-----END RSA PRIVATE KEY-----`
+	// client 2 crt is revoked
+	client2Crt = `-----BEGIN CERTIFICATE-----
+MIIEITCCAgmgAwIBAgIRAL6XTgNsdYzILd8bT2RVd/4wDQYJKoZIhvcNAQELBQAw
+EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzIwWhcNMjIwNzAyMjEz
+MDUxWjASMRAwDgYDVQQDEwdjbGllbnQyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
+MIIBCgKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY+6hi
+jcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN/4jQ
+tNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2HkO/xG
+oZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB1YFM
+s8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhtsC871
+nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
+A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBTB84v5
+t9HqhLhMODbn6oYkEQt3KzAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
+zDANBgkqhkiG9w0BAQsFAAOCAgEALGtBCve5k8tToL3oLuXp/oSik6ovIB/zq4I/
+4zNMYPU31+ZWz6aahysgx1JL1yqTa3Qm8o2tu52MbnV10dM7CIw7c/cYa+c+OPcG
+5LF97kp13X+r2axy+CmwM86b4ILaDGs2Qyai6VB6k7oFUve+av5o7aUrNFpqGCJz
+HWdtHZSVA3JMATzy0TfWanwkzreqfdw7qH0yZ9bDURlBKAVWrqnCstva9jRuv+AI
+eqxr/4Ro986TFjJdoAP3Vr16CPg7/B6GA/KmsBWJrpeJdPWq4i2gpLKvYZoy89qD
+mUZf34RbzcCtV4NvV1DadGnt4us0nvLrvS5rL2+2uWD09kZYq9RbLkvgzF/cY0fz
+i7I1bi5XQ+alWe0uAk5ZZL/D+GTRYUX1AWwCqwJxmHrMxcskMyO9pXvLyuSWRDLo
+YNBrbX9nLcfJzVCp+X+9sntTHjs4l6Cw+fLepJIgtgqdCHtbhTiv68vSM6cgb4br
+6n2xrXRKuioiWFOrTSRr+oalZh8dGJ/xvwY8IbWknZAvml9mf1VvfE7Ma5P777QM
+fsbYVTq0Y3R/5hIWsC3HA5z6MIM8L1oRe/YyhP3CTmrCHkVKyDOosGXpGz+JVcyo
+cfYkY5A3yFKB2HaCwZSfwFmRhxkrYWGEbHv3Cd9YkZs1J3hNhGFZyVMC9Uh0S85a
+6zdDidU=
+-----END CERTIFICATE-----`
+	client2Key = `-----BEGIN RSA PRIVATE KEY-----
+MIIEpAIBAAKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY
++6hijcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN
+/4jQtNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2Hk
+O/xGoZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB
+1YFMs8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhts
+C871nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABAoIBAFatstVb1KdQXsq0
+cFpui8zTKOUiduJOrDkWzTygAmlEhYtrccdfXu7OWz0x0lvBLDVGK3a0I/TGrAzj
+4BuFY+FM/egxTVt9in6fmA3et4BS1OAfCryzUdfK6RV//8L+t+zJZ/qKQzWnugpy
+QYjDo8ifuMFwtvEoXizaIyBNLAhEp9hnrv+Tyi2O2gahPvCHsD48zkyZRCHYRstD
+NH5cIrwz9/RJgPO1KI+QsJE7Nh7stR0sbr+5TPU4fnsL2mNhMUF2TJrwIPrc1yp+
+YIUjdnh3SO88j4TQT3CIrWi8i4pOy6N0dcVn3gpCRGaqAKyS2ZYUj+yVtLO4KwxZ
+SZ1lNvECgYEA78BrF7f4ETfWSLcBQ3qxfLs7ibB6IYo2x25685FhZjD+zLXM1AKb
+FJHEXUm3mUYrFJK6AFEyOQnyGKBOLs3S6oTAswMPbTkkZeD1Y9O6uv0AHASLZnK6
+pC6ub0eSRF5LUyTQ55Jj8D7QsjXJueO8v+G5ihWhNSN9tB2UA+8NBmkCgYEA+weq
+cvoeMIEMBQHnNNLy35bwfqrceGyPIRBcUIvzQfY1vk7KW6DYOUzC7u+WUzy/hA52
+DjXVVhua2eMQ9qqtOav7djcMc2W9RbLowxvno7K5qiCss013MeWk64TCWy+WMp5A
+AVAtOliC3hMkIKqvR2poqn+IBTh1449agUJQqTMCgYEAu06IHGq1GraV6g9XpGF5
+wqoAlMzUTdnOfDabRilBf/YtSr+J++ThRcuwLvXFw7CnPZZ4TIEjDJ7xjj3HdxeE
+fYYjineMmNd40UNUU556F1ZLvJfsVKizmkuCKhwvcMx+asGrmA+tlmds4p3VMS50
+KzDtpKzLWlmU/p/RINWlRmkCgYBy0pHTn7aZZx2xWKqCDg+L2EXPGqZX6wgZDpu7
+OBifzlfM4ctL2CmvI/5yPmLbVgkgBWFYpKUdiujsyyEiQvWTUKhn7UwjqKDHtcsk
+G6p7xS+JswJrzX4885bZJ9Oi1AR2yM3sC9l0O7I4lDbNPmWIXBLeEhGMmcPKv/Kc
+91Ff4wKBgQCF3ur+Vt0PSU0ucrPVHjCe7tqazm0LJaWbPXL1Aw0pzdM2EcNcW/MA
+w0kqpr7MgJ94qhXCBcVcfPuFN9fBOadM3UBj1B45Cz3pptoK+ScI8XKno6jvVK/p
+xr5cb9VBRBtB9aOKVfuRhpatAfS2Pzm2Htae9lFn7slGPUmu2hkjDw==
+-----END RSA PRIVATE KEY-----`
+)
+
+func TestLoadCertificate(t *testing.T) {
+	caCrtPath := filepath.Join(os.TempDir(), "testca.crt")
+	caCrlPath := filepath.Join(os.TempDir(), "testcrl.crt")
+	certPath := filepath.Join(os.TempDir(), "test.crt")
+	keyPath := filepath.Join(os.TempDir(), "test.key")
+	err := os.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
+	assert.NoError(t, err)
+	err = os.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
+	assert.NoError(t, err)
+	err = os.WriteFile(certPath, []byte(serverCert), os.ModePerm)
+	assert.NoError(t, err)
+	err = os.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
+	assert.NoError(t, err)
+	certManager, err := NewCertManager(certPath, keyPath, configDir, logSenderTest)
+	assert.NoError(t, err)
+	certFunc := certManager.GetCertificateFunc()
+	if assert.NotNil(t, certFunc) {
+		hello := &tls.ClientHelloInfo{
+			ServerName:   "localhost",
+			CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305},
+		}
+		cert, err := certFunc(hello)
+		assert.NoError(t, err)
+		assert.Equal(t, certManager.cert, cert)
+	}
+
+	certManager.SetCACertificates(nil)
+	err = certManager.LoadRootCAs()
+	assert.NoError(t, err)
+
+	certManager.SetCACertificates([]string{""})
+	err = certManager.LoadRootCAs()
+	assert.Error(t, err)
+
+	certManager.SetCACertificates([]string{"invalid"})
+	err = certManager.LoadRootCAs()
+	assert.Error(t, err)
+
+	// laoding the key as root CA must fail
+	certManager.SetCACertificates([]string{keyPath})
+	err = certManager.LoadRootCAs()
+	assert.Error(t, err)
+
+	certManager.SetCACertificates([]string{certPath})
+	err = certManager.LoadRootCAs()
+	assert.NoError(t, err)
+
+	rootCa := certManager.GetRootCAs()
+	assert.NotNil(t, rootCa)
+
+	err = certManager.Reload()
+	assert.NoError(t, err)
+
+	certManager.SetCARevocationLists(nil)
+	err = certManager.LoadCRLs()
+	assert.NoError(t, err)
+
+	certManager.SetCARevocationLists([]string{""})
+	err = certManager.LoadCRLs()
+	assert.Error(t, err)
+
+	certManager.SetCARevocationLists([]string{"invalid crl"})
+	err = certManager.LoadCRLs()
+	assert.Error(t, err)
+
+	// this is not a crl and must fail
+	certManager.SetCARevocationLists([]string{caCrtPath})
+	err = certManager.LoadCRLs()
+	assert.Error(t, err)
+
+	certManager.SetCARevocationLists([]string{caCrlPath})
+	err = certManager.LoadCRLs()
+	assert.NoError(t, err)
+
+	crt, err := tls.X509KeyPair([]byte(caCRT), []byte(caKey))
+	assert.NoError(t, err)
+
+	x509CAcrt, err := x509.ParseCertificate(crt.Certificate[0])
+	assert.NoError(t, err)
+
+	crt, err = tls.X509KeyPair([]byte(client1Crt), []byte(client1Key))
+	assert.NoError(t, err)
+	x509crt, err := x509.ParseCertificate(crt.Certificate[0])
+	if assert.NoError(t, err) {
+		assert.False(t, certManager.IsRevoked(x509crt, x509CAcrt))
+	}
+
+	crt, err = tls.X509KeyPair([]byte(client2Crt), []byte(client2Key))
+	assert.NoError(t, err)
+	x509crt, err = x509.ParseCertificate(crt.Certificate[0])
+	if assert.NoError(t, err) {
+		assert.True(t, certManager.IsRevoked(x509crt, x509CAcrt))
+	}
+
+	assert.True(t, certManager.IsRevoked(nil, nil))
+
+	err = os.Remove(caCrlPath)
+	assert.NoError(t, err)
+	err = certManager.Reload()
+	assert.Error(t, err)
+
+	err = os.Remove(certPath)
+	assert.NoError(t, err)
+	err = os.Remove(keyPath)
+	assert.NoError(t, err)
+	err = certManager.Reload()
+	assert.Error(t, err)
+
+	err = os.Remove(caCrtPath)
+	assert.NoError(t, err)
+}
+
+func TestLoadInvalidCert(t *testing.T) {
+	certManager, err := NewCertManager("test.crt", "test.key", configDir, logSenderTest)
+	assert.Error(t, err)
+	assert.Nil(t, certManager)
+}
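The tests above exercise `IsRevoked` directly. One plausible way a server could combine `GetRootCAs` and `IsRevoked` for mutual TLS is a `VerifyPeerCertificate` callback; the callback shape is standard crypto/tls, the rest is an assumption for illustration, not code from this compare:

```go
// Package mtlsexample sketches one way the pieces above could be combined;
// it is illustrative, not part of SFTPGo.
package mtlsexample

import (
	"crypto/tls"
	"crypto/x509"
	"errors"

	"github.com/drakkan/sftpgo/v2/common"
)

// NewMutualTLSConfig requires and verifies a client certificate against the
// manager's root CAs and rejects any verified chain containing a certificate
// that IsRevoked reports against its direct issuer.
func NewMutualTLSConfig(certMgr *common.CertManager) *tls.Config {
	return &tls.Config{
		GetCertificate: certMgr.GetCertificateFunc(),
		ClientCAs:      certMgr.GetRootCAs(),
		ClientAuth:     tls.RequireAndVerifyClientCert,
		MinVersion:     tls.VersionTLS12,
		VerifyPeerCertificate: func(_ [][]byte, verifiedChains [][]*x509.Certificate) error {
			for _, chain := range verifiedChains {
				// chain[0] is the leaf; chain[i+1] is the issuer of chain[i]
				for i := 0; i < len(chain)-1; i++ {
					if certMgr.IsRevoked(chain[i], chain[i+1]) {
						return errors.New("client certificate has been revoked")
					}
				}
			}
			return nil
		},
	}
}
```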
332  common/transfer.go  Normal file
@@ -0,0 +1,332 @@
+package common
+
+import (
+	"errors"
+	"path"
+	"sync"
+	"sync/atomic"
+	"time"
+
+	"github.com/drakkan/sftpgo/v2/dataprovider"
+	"github.com/drakkan/sftpgo/v2/logger"
+	"github.com/drakkan/sftpgo/v2/metric"
+	"github.com/drakkan/sftpgo/v2/vfs"
+)
+
+var (
+	// ErrTransferClosed defines the error returned for a closed transfer
+	ErrTransferClosed = errors.New("transfer already closed")
+)
+
+// BaseTransfer contains protocols common transfer details for an upload or a download.
+type BaseTransfer struct { //nolint:maligned
+	ID              uint64
+	BytesSent       int64
+	BytesReceived   int64
+	Fs              vfs.Fs
+	File            vfs.File
+	Connection      *BaseConnection
+	cancelFn        func()
+	fsPath          string
+	effectiveFsPath string
+	requestPath     string
+	ftpMode         string
+	start           time.Time
+	MaxWriteSize    int64
+	MinWriteOffset  int64
+	InitialSize     int64
+	isNewFile       bool
+	transferType    int
+	AbortTransfer   int32
+	aTime           time.Time
+	mTime           time.Time
+	sync.Mutex
+	ErrTransfer error
+}
+
+// NewBaseTransfer returns a new BaseTransfer and adds it to the given connection
+func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, effectiveFsPath, requestPath string,
+	transferType int, minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
+	t := &BaseTransfer{
+		ID:              conn.GetTransferID(),
+		File:            file,
+		Connection:      conn,
+		cancelFn:        cancelFn,
+		fsPath:          fsPath,
+		effectiveFsPath: effectiveFsPath,
+		start:           time.Now(),
+		transferType:    transferType,
+		MinWriteOffset:  minWriteOffset,
+		InitialSize:     initialSize,
+		isNewFile:       isNewFile,
+		requestPath:     requestPath,
+		BytesSent:       0,
+		BytesReceived:   0,
+		MaxWriteSize:    maxWriteSize,
+		AbortTransfer:   0,
+		Fs:              fs,
+	}
+
+	conn.AddTransfer(t)
+	return t
+}
+
+// SetFtpMode sets the FTP mode for the current transfer
+func (t *BaseTransfer) SetFtpMode(mode string) {
+	t.ftpMode = mode
+}
+
+// GetID returns the transfer ID
+func (t *BaseTransfer) GetID() uint64 {
+	return t.ID
+}
+
+// GetType returns the transfer type
+func (t *BaseTransfer) GetType() int {
+	return t.transferType
+}
+
+// GetSize returns the transferred size
+func (t *BaseTransfer) GetSize() int64 {
+	if t.transferType == TransferDownload {
+		return atomic.LoadInt64(&t.BytesSent)
+	}
+	return atomic.LoadInt64(&t.BytesReceived)
+}
+
+// GetStartTime returns the start time
+func (t *BaseTransfer) GetStartTime() time.Time {
+	return t.start
+}
+
+// SignalClose signals that the transfer should be closed.
+// For same protocols, for example WebDAV, we have no
+// access to the network connection, so we use this method
+// to make the next read or write to fail
+func (t *BaseTransfer) SignalClose() {
+	atomic.StoreInt32(&(t.AbortTransfer), 1)
+}
+
+// GetVirtualPath returns the transfer virtual path
+func (t *BaseTransfer) GetVirtualPath() string {
+	return t.requestPath
+}
+
+// GetFsPath returns the transfer filesystem path
+func (t *BaseTransfer) GetFsPath() string {
+	return t.fsPath
+}
+
+// SetTimes stores access and modification times if fsPath matches the current file
+func (t *BaseTransfer) SetTimes(fsPath string, atime time.Time, mtime time.Time) bool {
+	if fsPath == t.GetFsPath() {
+		t.aTime = atime
+		t.mTime = mtime
+		return true
+	}
+	return false
+}
+
+// GetRealFsPath returns the real transfer filesystem path.
+// If atomic uploads are enabled this differ from fsPath
+func (t *BaseTransfer) GetRealFsPath(fsPath string) string {
+	if fsPath == t.GetFsPath() {
+		if t.File != nil {
+			return t.File.Name()
+		}
+		return t.fsPath
+	}
+	return ""
+}
+
+// SetCancelFn sets the cancel function for the transfer
+func (t *BaseTransfer) SetCancelFn(cancelFn func()) {
+	t.cancelFn = cancelFn
+}
+
+// Truncate changes the size of the opened file.
+// Supported for local fs only
+func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
+	if fsPath == t.GetFsPath() {
+		if t.File != nil {
+			initialSize := t.InitialSize
+			err := t.File.Truncate(size)
+			if err == nil {
+				t.Lock()
+				t.InitialSize = size
+				if t.MaxWriteSize > 0 {
+					sizeDiff := initialSize - size
+					t.MaxWriteSize += sizeDiff
+					metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
+					atomic.StoreInt64(&t.BytesReceived, 0)
+				}
+				t.Unlock()
+			}
+			t.Connection.Log(logger.LevelDebug, "file %#v truncated to size %v max write size %v new initial size %v err: %v",
+				fsPath, size, t.MaxWriteSize, t.InitialSize, err)
+			return initialSize, err
+		}
+		if size == 0 && atomic.LoadInt64(&t.BytesSent) == 0 {
+			// for cloud providers the file is always truncated to zero, we don't support append/resume for uploads
+			// for buffered SFTP we can have buffered bytes so we returns an error
+			if !vfs.IsBufferedSFTPFs(t.Fs) {
+				return 0, nil
+			}
+		}
+		return 0, vfs.ErrVfsUnsupported
+	}
+	return 0, errTransferMismatch
+}
+
+// TransferError is called if there is an unexpected error.
+// For example network or client issues
+func (t *BaseTransfer) TransferError(err error) {
+	t.Lock()
+	defer t.Unlock()
+	if t.ErrTransfer != nil {
+		return
+	}
+	t.ErrTransfer = err
+	if t.cancelFn != nil {
+		t.cancelFn()
+	}
+	elapsed := time.Since(t.start).Nanoseconds() / 1000000
+	t.Connection.Log(logger.LevelError, "Unexpected error for transfer, path: %#v, error: \"%v\" bytes sent: %v, "+
+		"bytes received: %v transfer running since %v ms", t.fsPath, t.ErrTransfer, atomic.LoadInt64(&t.BytesSent),
+		atomic.LoadInt64(&t.BytesReceived), elapsed)
+}
+
+func (t *BaseTransfer) getUploadFileSize() (int64, error) {
+	var fileSize int64
+	info, err := t.Fs.Stat(t.fsPath)
+	if err == nil {
+		fileSize = info.Size()
+	}
+	if vfs.IsCryptOsFs(t.Fs) && t.ErrTransfer != nil {
+		errDelete := t.Fs.Remove(t.fsPath, false)
+		if errDelete != nil {
+			t.Connection.Log(logger.LevelWarn, "error removing partial crypto file %#v: %v", t.fsPath, errDelete)
+		}
+	}
+	return fileSize, err
+}
+
+// Close it is called when the transfer is completed.
+// It logs the transfer info, updates the user quota (for uploads)
+// and executes any defined action.
+// If there is an error no action will be executed and, in atomic mode,
+// we try to delete the temporary file
+func (t *BaseTransfer) Close() error {
+	defer t.Connection.RemoveTransfer(t)
+
+	var err error
+	numFiles := 0
+	if t.isNewFile {
+		numFiles = 1
+	}
+	metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
+	if t.File != nil && t.Connection.IsQuotaExceededError(t.ErrTransfer) {
+		// if quota is exceeded we try to remove the partial file for uploads to local filesystem
+		err = t.Fs.Remove(t.File.Name(), false)
+		if err == nil {
+			numFiles--
+			atomic.StoreInt64(&t.BytesReceived, 0)
+			t.MinWriteOffset = 0
+		}
+		t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %#v, deletion error: %v",
+			t.File.Name(), err)
+	} else if t.transferType == TransferUpload && t.effectiveFsPath != t.fsPath {
+		if t.ErrTransfer == nil || Config.UploadMode == UploadModeAtomicWithResume {
+			err = t.Fs.Rename(t.effectiveFsPath, t.fsPath)
+			t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
+				t.effectiveFsPath, t.fsPath, err)
+		} else {
+			err = t.Fs.Remove(t.effectiveFsPath, false)
+			t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, "+
+				"deletion error: %v", t.ErrTransfer, t.effectiveFsPath, err)
+			if err == nil {
+				numFiles--
+				atomic.StoreInt64(&t.BytesReceived, 0)
+				t.MinWriteOffset = 0
+			}
+		}
+	}
+	elapsed := time.Since(t.start).Nanoseconds() / 1000000
+	if t.transferType == TransferDownload {
+		logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
+			t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
+		ExecuteActionNotification(t.Connection, operationDownload, t.fsPath, t.requestPath, "", "", "",
+			atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
+	} else {
+		fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
+		if statSize, err := t.getUploadFileSize(); err == nil {
+			fileSize = statSize
+		}
+		t.Connection.Log(logger.LevelDebug, "uploaded file size %v", fileSize)
+		t.updateQuota(numFiles, fileSize)
+		t.updateTimes()
+		logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
+			t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
+		ExecuteActionNotification(t.Connection, operationUpload, t.fsPath, t.requestPath, "", "", "", fileSize, t.ErrTransfer)
+	}
+	if t.ErrTransfer != nil {
+		t.Connection.Log(logger.LevelError, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
+		if err == nil {
+			err = t.ErrTransfer
+		}
+	}
+	return err
+}
+
+func (t *BaseTransfer) updateTimes() {
+	if !t.aTime.IsZero() && !t.mTime.IsZero() {
+		err := t.Fs.Chtimes(t.fsPath, t.aTime, t.mTime, true)
+		t.Connection.Log(logger.LevelDebug, "set times for file %#v, atime: %v, mtime: %v, err: %v",
+			t.fsPath, t.aTime, t.mTime, err)
+	}
+}
+
+func (t *BaseTransfer) updateQuota(numFiles int, fileSize int64) bool {
+	// S3 uploads are atomic, if there is an error nothing is uploaded
+	if t.File == nil && t.ErrTransfer != nil && !t.Connection.User.HasBufferedSFTP(t.GetVirtualPath()) {
+		return false
+	}
+	sizeDiff := fileSize - t.InitialSize
+	if t.transferType == TransferUpload && (numFiles != 0 || sizeDiff > 0) {
+		vfolder, err := t.Connection.User.GetVirtualFolderForPath(path.Dir(t.requestPath))
+		if err == nil {
+			dataprovider.UpdateVirtualFolderQuota(&vfolder.BaseVirtualFolder, numFiles, //nolint:errcheck
+				sizeDiff, false)
+			if vfolder.IsIncludedInUserQuota() {
+				dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
+			}
+		} else {
+			dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
+		}
+		return true
+	}
+	return false
+}
+
+// HandleThrottle manage bandwidth throttling
+func (t *BaseTransfer) HandleThrottle() {
+	var wantedBandwidth int64
+	var trasferredBytes int64
+	if t.transferType == TransferDownload {
+		wantedBandwidth = t.Connection.User.DownloadBandwidth
+		trasferredBytes = atomic.LoadInt64(&t.BytesSent)
+	} else {
+		wantedBandwidth = t.Connection.User.UploadBandwidth
+		trasferredBytes = atomic.LoadInt64(&t.BytesReceived)
+	}
+	if wantedBandwidth > 0 {
+		// real and wanted elapsed as milliseconds, bytes as kilobytes
+		realElapsed := time.Since(t.start).Nanoseconds() / 1000000
+		// trasferredBytes / 1024 = KB/s, we multiply for 1000 to get milliseconds
+		wantedElapsed := 1000 * (trasferredBytes / 1024) / wantedBandwidth
+		if wantedElapsed > realElapsed {
+			toSleep := time.Duration(wantedElapsed - realElapsed)
+			time.Sleep(toSleep * time.Millisecond)
+		}
+	}
+}
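HandleThrottle's arithmetic is easy to check by hand: bandwidth is configured in KB/s and times are tracked in milliseconds, so the wanted elapsed time is 1000 * (bytes / 1024) / bandwidth. With the figures TestTransferThrottling uses below (131072 bytes at 40 KB/s) that is 1000 * 128 / 40 = 3200 ms. A standalone sketch of the same computation (the function name is invented):

```go
package main

import (
	"fmt"
	"time"
)

// throttleDelay reproduces HandleThrottle's arithmetic in isolation:
// bandwidth is expressed in KB/s, elapsed times in milliseconds.
func throttleDelay(transferredBytes, bandwidthKBs int64, start time.Time) time.Duration {
	realElapsed := time.Since(start).Nanoseconds() / 1000000
	// transferredBytes/1024 = KB moved; *1000 converts seconds to milliseconds
	wantedElapsed := 1000 * (transferredBytes / 1024) / bandwidthKBs
	if wantedElapsed > realElapsed {
		return time.Duration(wantedElapsed-realElapsed) * time.Millisecond
	}
	return 0
}

func main() {
	// 131072 bytes at 40 KB/s should take 1000 * 128 / 40 = 3200 ms overall;
	// pretend 200 ms have already passed, so ~3000 ms of sleep remain
	start := time.Now().Add(-200 * time.Millisecond)
	fmt.Println(throttleDelay(131072, 40, start))
}
```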
@@ -1,47 +1,32 @@
-// Copyright (C) 2019 Nicola Murino
-//
-// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published
-// by the Free Software Foundation, version 3.
-//
-// This program is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
-//
-// You should have received a copy of the GNU Affero General Public License
-// along with this program. If not, see <https://www.gnu.org/licenses/>.
-
 package common
 
 import (
 	"errors"
-	"fmt"
 	"os"
 	"path/filepath"
 	"testing"
 	"time"
 
-	"github.com/pkg/sftp"
 	"github.com/sftpgo/sdk"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 
-	"github.com/drakkan/sftpgo/v2/internal/dataprovider"
-	"github.com/drakkan/sftpgo/v2/internal/kms"
-	"github.com/drakkan/sftpgo/v2/internal/vfs"
+	"github.com/drakkan/sftpgo/v2/dataprovider"
+	"github.com/drakkan/sftpgo/v2/kms"
+	"github.com/drakkan/sftpgo/v2/vfs"
 )
 
 func TestTransferUpdateQuota(t *testing.T) {
 	conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{})
 	transfer := BaseTransfer{
-		Connection:   conn,
-		transferType: TransferUpload,
-		Fs:           vfs.NewOsFs("", os.TempDir(), "", nil),
+		Connection:    conn,
+		transferType:  TransferUpload,
+		BytesReceived: 123,
+		Fs:            vfs.NewOsFs("", os.TempDir(), ""),
 	}
-	transfer.BytesReceived.Store(123)
 	errFake := errors.New("fake error")
 	transfer.TransferError(errFake)
 	assert.False(t, transfer.updateQuota(1, 0))
 	err := transfer.Close()
 	if assert.Error(t, err) {
 		assert.EqualError(t, err, errFake.Error())
@@ -57,15 +42,11 @@ func TestTransferUpdateQuota(t *testing.T) {
 		QuotaSize: -1,
 	})
 	transfer.ErrTransfer = nil
-	transfer.BytesReceived.Store(1)
+	transfer.BytesReceived = 1
 	transfer.requestPath = "/vdir/file"
 	assert.True(t, transfer.updateQuota(1, 0))
 	err = transfer.Close()
 	assert.NoError(t, err)
-
-	transfer.ErrTransfer = errFake
-	transfer.Fs = newMockOsFs(true, "", "", "S3Fs fake", nil)
-	assert.False(t, transfer.updateQuota(1, 0))
 }
 
 func TestTransferThrottling(t *testing.T) {
@@ -76,7 +57,7 @@ func TestTransferThrottling(t *testing.T) {
 			DownloadBandwidth: 40,
 		},
 	}
-	fs := vfs.NewOsFs("", os.TempDir(), "", nil)
+	fs := vfs.NewOsFs("", os.TempDir(), "")
 	testFileSize := int64(131072)
 	wantedUploadElapsed := 1000 * (testFileSize / 1024) / u.UploadBandwidth
 	wantedDownloadElapsed := 1000 * (testFileSize / 1024) / u.DownloadBandwidth
@@ -84,8 +65,8 @@ func TestTransferThrottling(t *testing.T) {
 	wantedUploadElapsed -= wantedDownloadElapsed / 10
 	wantedDownloadElapsed -= wantedDownloadElapsed / 10
 	conn := NewBaseConnection("id", ProtocolSCP, "", "", u)
-	transfer := NewBaseTransfer(nil, conn, nil, "", "", "", TransferUpload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
-	transfer.BytesReceived.Store(testFileSize)
+	transfer := NewBaseTransfer(nil, conn, nil, "", "", "", TransferUpload, 0, 0, 0, true, fs)
+	transfer.BytesReceived = testFileSize
 	transfer.Connection.UpdateLastActivity()
 	startTime := transfer.Connection.GetLastActivity()
 	transfer.HandleThrottle()
@@ -94,8 +75,8 @@ func TestTransferThrottling(t *testing.T) {
 	err := transfer.Close()
 	assert.NoError(t, err)
 
-	transfer = NewBaseTransfer(nil, conn, nil, "", "", "", TransferDownload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
-	transfer.BytesSent.Store(testFileSize)
+	transfer = NewBaseTransfer(nil, conn, nil, "", "", "", TransferDownload, 0, 0, 0, true, fs)
+	transfer.BytesSent = testFileSize
 	transfer.Connection.UpdateLastActivity()
 	startTime = transfer.Connection.GetLastActivity()
 
@@ -108,7 +89,7 @@ func TestTransferThrottling(t *testing.T) {
 
 func TestRealPath(t *testing.T) {
 	testFile := filepath.Join(os.TempDir(), "afile.txt")
-	fs := vfs.NewOsFs("123", os.TempDir(), "", nil)
+	fs := vfs.NewOsFs("123", os.TempDir(), "")
 	u := dataprovider.User{
 		BaseUser: sdk.BaseUser{
 			Username: "user",
@@ -120,8 +101,7 @@ func TestRealPath(t *testing.T) {
 	file, err := os.Create(testFile)
 	require.NoError(t, err)
 	conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
-	transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file",
-		TransferUpload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
+	transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
 	rPath := transfer.GetRealFsPath(testFile)
 	assert.Equal(t, testFile, rPath)
 	rPath = conn.getRealFsPath(testFile)
@@ -142,7 +122,7 @@ func TestRealPath(t *testing.T) {
 
 func TestTruncate(t *testing.T) {
 	testFile := filepath.Join(os.TempDir(), "transfer_test_file")
-	fs := vfs.NewOsFs("123", os.TempDir(), "", nil)
+	fs := vfs.NewOsFs("123", os.TempDir(), "")
 	u := dataprovider.User{
 		BaseUser: sdk.BaseUser{
 			Username: "user",
@@ -158,8 +138,7 @@ func TestTruncate(t *testing.T) {
 	_, err = file.Write([]byte("hello"))
 	assert.NoError(t, err)
 	conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
-	transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 5,
-		100, 0, false, fs, dataprovider.TransferQuota{})
+	transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
 
 	err = conn.SetStat("/transfer_test_file", &StatAttributes{
 		Size: 2,
@@ -176,8 +155,7 @@ func TestTruncate(t *testing.T) {
 		assert.Equal(t, int64(2), fi.Size())
 	}
 
-	transfer = NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0,
-		100, 0, true, fs, dataprovider.TransferQuota{})
+	transfer = NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
 	// file.Stat will fail on a closed file
 	err = conn.SetStat("/transfer_test_file", &StatAttributes{
 		Size: 2,
@@ -187,8 +165,7 @@ func TestTruncate(t *testing.T) {
 	err = transfer.Close()
 	assert.NoError(t, err)
 
-	transfer = NewBaseTransfer(nil, conn, nil, testFile, testFile, "", TransferUpload, 0, 0, 0, 0, true,
-		fs, dataprovider.TransferQuota{})
+	transfer = NewBaseTransfer(nil, conn, nil, testFile, testFile, "", TransferUpload, 0, 0, 0, true, fs)
 	_, err = transfer.Truncate("mismatch", 0)
 	assert.EqualError(t, err, errTransferMismatch.Error())
 	_, err = transfer.Truncate(testFile, 0)
@@ -211,7 +188,7 @@ func TestTransferErrors(t *testing.T) {
 		isCancelled = true
 	}
 	testFile := filepath.Join(os.TempDir(), "transfer_test_file")
-	fs := vfs.NewOsFs("id", os.TempDir(), "", nil)
+	fs := vfs.NewOsFs("id", os.TempDir(), "")
 	u := dataprovider.User{
 		BaseUser: sdk.BaseUser{
 			Username: "test",
@@ -225,25 +202,12 @@ func TestTransferErrors(t *testing.T) {
 		assert.FailNow(t, "unable to open test file")
 	}
 	conn := NewBaseConnection("id", ProtocolSFTP, "", "", u)
-	transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload,
-		0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
-	pathError := &os.PathError{
-		Op:   "test",
-		Path: testFile,
-		Err:  os.ErrInvalid,
-	}
-	err = transfer.ConvertError(pathError)
-	assert.EqualError(t, err, fmt.Sprintf("%s %s: %s", pathError.Op, "/transfer_test_file", pathError.Err.Error()))
-	err = transfer.ConvertError(os.ErrNotExist)
-	assert.ErrorIs(t, err, sftp.ErrSSHFxNoSuchFile)
-	err = transfer.ConvertError(os.ErrPermission)
-	assert.ErrorIs(t, err, sftp.ErrSSHFxPermissionDenied)
+	transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
 	assert.Nil(t, transfer.cancelFn)
 	assert.Equal(t, testFile, transfer.GetFsPath())
-	transfer.SetMetadata(map[string]string{"key": "val"})
 	transfer.SetCancelFn(cancelFn)
 	errFake := errors.New("err fake")
-	transfer.BytesReceived.Store(9)
+	transfer.BytesReceived = 9
 	transfer.TransferError(ErrQuotaExceeded)
 	assert.True(t, isCancelled)
 	transfer.TransferError(errFake)
@@ -264,9 +228,8 @@ func TestTransferErrors(t *testing.T) {
 		assert.FailNow(t, "unable to open test file")
 	}
 	fsPath := filepath.Join(os.TempDir(), "test_file")
-	transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, 0, true,
-		fs, dataprovider.TransferQuota{})
-	transfer.BytesReceived.Store(9)
+	transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
+	transfer.BytesReceived = 9
 	transfer.TransferError(errFake)
 	assert.Error(t, transfer.ErrTransfer, errFake.Error())
 	// the file is closed from the embedding struct before to call close
@@ -284,9 +247,8 @@ func TestTransferErrors(t *testing.T) {
 	if !assert.NoError(t, err) {
 		assert.FailNow(t, "unable to open test file")
 	}
-	transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, 0, true,
-		fs, dataprovider.TransferQuota{})
-	transfer.BytesReceived.Store(9)
+	transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
+	transfer.BytesReceived = 9
 	// the file is closed from the embedding struct before to call close
 	err = file.Close()
 	assert.NoError(t, err)
@@ -311,158 +273,27 @@ func TestRemovePartialCryptoFile(t *testing.T) {
 		},
 	}
 	conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
-	transfer := NewBaseTransfer(nil, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload,
-		0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
+	transfer := NewBaseTransfer(nil, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
 	transfer.ErrTransfer = errors.New("test error")
-	_, _, err = transfer.getUploadFileSize()
+	_, err = transfer.getUploadFileSize()
 	assert.Error(t, err)
 	err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
 	assert.NoError(t, err)
-	size, deletedFiles, err := transfer.getUploadFileSize()
+	size, err := transfer.getUploadFileSize()
 	assert.NoError(t, err)
-	assert.Equal(t, int64(0), size)
-	assert.Equal(t, 1, deletedFiles)
+	assert.Equal(t, int64(9), size)
 	assert.NoFileExists(t, testFile)
 }
 
 func TestFTPMode(t *testing.T) {
 	conn := NewBaseConnection("", ProtocolFTP, "", "", dataprovider.User{})
 	transfer := BaseTransfer{
-		Connection:   conn,
-		transferType: TransferUpload,
-		Fs:           vfs.NewOsFs("", os.TempDir(), "", nil),
+		Connection:    conn,
+		transferType:  TransferUpload,
+		BytesReceived: 123,
+		Fs:            vfs.NewOsFs("", os.TempDir(), ""),
 	}
-	transfer.BytesReceived.Store(123)
 	assert.Empty(t, transfer.ftpMode)
 	transfer.SetFtpMode("active")
 	assert.Equal(t, "active", transfer.ftpMode)
 }
-
-func TestTransferQuota(t *testing.T) {
-	user := dataprovider.User{
-		BaseUser: sdk.BaseUser{
-			TotalDataTransfer:    3,
-			UploadDataTransfer:   2,
-			DownloadDataTransfer: 1,
-		},
-	}
-	ul, dl, total := user.GetDataTransferLimits()
-	assert.Equal(t, int64(2*1048576), ul)
-	assert.Equal(t, int64(1*1048576), dl)
-	assert.Equal(t, int64(3*1048576), total)
-	user.TotalDataTransfer = -1
-	user.UploadDataTransfer = -1
-	user.DownloadDataTransfer = -1
-	ul, dl, total = user.GetDataTransferLimits()
-	assert.Equal(t, int64(0), ul)
-	assert.Equal(t, int64(0), dl)
-	assert.Equal(t, int64(0), total)
-	transferQuota := dataprovider.TransferQuota{}
-	assert.True(t, transferQuota.HasDownloadSpace())
-	assert.True(t, transferQuota.HasUploadSpace())
-	transferQuota.TotalSize = -1
-	transferQuota.ULSize = -1
-	transferQuota.DLSize = -1
-	assert.True(t, transferQuota.HasDownloadSpace())
-	assert.True(t, transferQuota.HasUploadSpace())
-	transferQuota.TotalSize = 100
-	transferQuota.AllowedTotalSize = 10
-	assert.True(t, transferQuota.HasDownloadSpace())
-	assert.True(t, transferQuota.HasUploadSpace())
-	transferQuota.AllowedTotalSize = 0
-	assert.False(t, transferQuota.HasDownloadSpace())
-	assert.False(t, transferQuota.HasUploadSpace())
-	transferQuota.TotalSize = 0
-	transferQuota.DLSize = 100
-	transferQuota.ULSize = 50
-	transferQuota.AllowedTotalSize = 0
-	assert.False(t, transferQuota.HasDownloadSpace())
-	assert.False(t, transferQuota.HasUploadSpace())
-	transferQuota.AllowedDLSize = 1
-	transferQuota.AllowedULSize = 1
-	assert.True(t, transferQuota.HasDownloadSpace())
-	assert.True(t, transferQuota.HasUploadSpace())
-	transferQuota.AllowedDLSize = -10
-	transferQuota.AllowedULSize = -1
-	assert.False(t, transferQuota.HasDownloadSpace())
-	assert.False(t, transferQuota.HasUploadSpace())
-
-	conn := NewBaseConnection("", ProtocolSFTP, "", "", user)
-	transfer := NewBaseTransfer(nil, conn, nil, "file.txt", "file.txt", "/transfer_test_file", TransferUpload,
-		0, 0, 0, 0, true, vfs.NewOsFs("", os.TempDir(), "", nil), dataprovider.TransferQuota{})
-	err := transfer.CheckRead()
-	assert.NoError(t, err)
-	err = transfer.CheckWrite()
-	assert.NoError(t, err)
-
-	transfer.transferQuota = dataprovider.TransferQuota{
-		AllowedTotalSize: 10,
-	}
-	transfer.BytesReceived.Store(5)
-	transfer.BytesSent.Store(4)
-	err = transfer.CheckRead()
-	assert.NoError(t, err)
-	err = transfer.CheckWrite()
-	assert.NoError(t, err)
-
-	transfer.BytesSent.Store(6)
-	err = transfer.CheckRead()
-	if assert.Error(t, err) {
-		assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
-	}
-	err = transfer.CheckWrite()
-	assert.True(t, conn.IsQuotaExceededError(err))
-
-	transferQuota = dataprovider.TransferQuota{
-		AllowedTotalSize: 0,
-		AllowedULSize:    10,
-		AllowedDLSize:    5,
-	}
-	transfer.transferQuota = transferQuota
-	assert.Equal(t, transferQuota, transfer.GetTransferQuota())
-	err = transfer.CheckRead()
-	if assert.Error(t, err) {
-		assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
-	}
-	err = transfer.CheckWrite()
-	assert.NoError(t, err)
-
-	transfer.BytesReceived.Store(11)
-	err = transfer.CheckRead()
-	if assert.Error(t, err) {
-		assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
-	}
-	err = transfer.CheckWrite()
-	assert.True(t, conn.IsQuotaExceededError(err))
-}
-
-func TestUploadOutsideHomeRenameError(t *testing.T) {
-	oldTempPath := Config.TempPath
-
-	conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{})
-	transfer := BaseTransfer{
-		Connection:   conn,
-		transferType: TransferUpload,
-		Fs:           vfs.NewOsFs("", filepath.Join(os.TempDir(), "home"), "", nil),
-	}
-	transfer.BytesReceived.Store(123)
-
-	fileName := filepath.Join(os.TempDir(), "_temp")
-	err := os.WriteFile(fileName, []byte(`data`), 0644)
-	assert.NoError(t, err)
-
-	transfer.effectiveFsPath = fileName
|
||||
res := transfer.checkUploadOutsideHomeDir(os.ErrPermission)
|
||||
assert.Equal(t, 0, res)
|
||||
|
||||
Config.TempPath = filepath.Clean(os.TempDir())
|
||||
res = transfer.checkUploadOutsideHomeDir(nil)
|
||||
assert.Equal(t, 0, res)
|
||||
assert.Greater(t, transfer.BytesReceived.Load(), int64(0))
|
||||
res = transfer.checkUploadOutsideHomeDir(os.ErrPermission)
|
||||
assert.Equal(t, 1, res)
|
||||
assert.Equal(t, int64(0), transfer.BytesReceived.Load())
|
||||
assert.NoFileExists(t, fileName)
|
||||
|
||||
Config.TempPath = oldTempPath
|
||||
}
|
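The churn in these test hunks comes down to one API difference in BaseTransfer: on one side of the compare BytesReceived is a plain int64 assigned directly, on the other it is an atomic counter driven through Store/Load. A minimal, self-contained sketch of the atomic variant (the transfer struct below is illustrative, not SFTPGo's):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // transfer mimics a struct whose byte counter is updated by the
    // upload loop while progress reporters read it concurrently.
    type transfer struct {
        BytesReceived atomic.Int64
    }

    func main() {
        t := &transfer{}
        t.BytesReceived.Store(9) // replaces the plain assignment `t.BytesReceived = 9`
        t.BytesReceived.Add(114) // the upload loop adds as chunks arrive
        fmt.Println(t.BytesReceived.Load()) // 123
    }

The atomic form trades direct field access for race-free reads without taking a lock on every progress query.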
[File diff suppressed because it is too large]
config/config_linux.go (new file, 12 lines)

@@ -0,0 +1,12 @@
//go:build linux
// +build linux

package config

import "github.com/spf13/viper"

// linux specific config search path
func setViperAdditionalConfigPaths() {
    viper.AddConfigPath("$HOME/.config/sftpgo")
    viper.AddConfigPath("/etc/sftpgo")
}
config/config_nolinux.go (new file, 6 lines)

@@ -0,0 +1,6 @@
//go:build !linux
// +build !linux

package config

func setViperAdditionalConfigPaths() {}
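These two new files are a build-tag pair: both declare setViperAdditionalConfigPaths, and the mutually exclusive //go:build linux / !linux constraints guarantee exactly one implementation is compiled in, so callers need no runtime OS check. A sketch of how such a hook is typically consumed with viper (the main wrapper below is illustrative, not the suppressed config.go):

    package main

    import (
        "fmt"

        "github.com/spf13/viper"
    )

    // Stand-in for the Linux variant above; in the real package the build
    // tags pick exactly one implementation at compile time.
    func setViperAdditionalConfigPaths() {
        viper.AddConfigPath("$HOME/.config/sftpgo")
        viper.AddConfigPath("/etc/sftpgo")
    }

    func main() {
        viper.SetConfigName("sftpgo")
        viper.AddConfigPath(".") // common path registered by the caller
        setViperAdditionalConfigPaths()
        if err := viper.ReadInConfig(); err != nil {
            fmt.Println("no config file found:", err)
            return
        }
        fmt.Println("loaded:", viper.ConfigFileUsed())
    }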
[File diff suppressed because it is too large]

crowdin.yml (deleted)

@@ -1,6 +0,0 @@
project_id_env: CROWDIN_PROJECT_ID
api_token_env: CROWDIN_PERSONAL_TOKEN
files:
  - source: /static/locales/en/translation.json
    translation: /static/locales/%two_letters_code%/%original_file_name%
    type: i18next_json
dataprovider/actions.go (new file, 118 lines)

@@ -0,0 +1,118 @@
package dataprovider

import (
    "bytes"
    "context"
    "fmt"
    "net/url"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
    "time"

    "github.com/sftpgo/sdk/plugin/notifier"

    "github.com/drakkan/sftpgo/v2/httpclient"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/plugin"
    "github.com/drakkan/sftpgo/v2/util"
)

const (
    // ActionExecutorSelf is used as username for self action, for example a user/admin that updates itself
    ActionExecutorSelf = "__self__"
    // ActionExecutorSystem is used as username for actions with no explicit executor associated, for example
    // adding/updating a user/admin by loading initial data
    ActionExecutorSystem = "__system__"
)

const (
    actionObjectUser   = "user"
    actionObjectAdmin  = "admin"
    actionObjectAPIKey = "api_key"
    actionObjectShare  = "share"
)

func executeAction(operation, executor, ip, objectType, objectName string, object plugin.Renderer) {
    if plugin.Handler.HasNotifiers() {
        plugin.Handler.NotifyProviderEvent(&notifier.ProviderEvent{
            Action:     operation,
            Username:   executor,
            ObjectType: objectType,
            ObjectName: objectName,
            IP:         ip,
            Timestamp:  time.Now().UnixNano(),
        }, object)
    }
    if config.Actions.Hook == "" {
        return
    }
    if !util.IsStringInSlice(operation, config.Actions.ExecuteOn) ||
        !util.IsStringInSlice(objectType, config.Actions.ExecuteFor) {
        return
    }

    go func() {
        dataAsJSON, err := object.RenderAsJSON(operation != operationDelete)
        if err != nil {
            providerLog(logger.LevelError, "unable to serialize user as JSON for operation %#v: %v", operation, err)
            return
        }
        if strings.HasPrefix(config.Actions.Hook, "http") {
            var url *url.URL
            url, err := url.Parse(config.Actions.Hook)
            if err != nil {
                providerLog(logger.LevelError, "Invalid http_notification_url %#v for operation %#v: %v",
                    config.Actions.Hook, operation, err)
                return
            }
            q := url.Query()
            q.Add("action", operation)
            q.Add("username", executor)
            q.Add("ip", ip)
            q.Add("object_type", objectType)
            q.Add("object_name", objectName)
            q.Add("timestamp", fmt.Sprintf("%v", time.Now().UnixNano()))
            url.RawQuery = q.Encode()
            startTime := time.Now()
            resp, err := httpclient.RetryablePost(url.String(), "application/json", bytes.NewBuffer(dataAsJSON))
            respCode := 0
            if err == nil {
                respCode = resp.StatusCode
                resp.Body.Close()
            }
            providerLog(logger.LevelDebug, "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
                operation, url.Redacted(), respCode, time.Since(startTime), err)
        } else {
            executeNotificationCommand(operation, executor, ip, objectType, objectName, dataAsJSON) //nolint:errcheck // the error is used in test cases only
        }
    }()
}

func executeNotificationCommand(operation, executor, ip, objectType, objectName string, objectAsJSON []byte) error {
    if !filepath.IsAbs(config.Actions.Hook) {
        err := fmt.Errorf("invalid notification command %#v", config.Actions.Hook)
        logger.Warn(logSender, "", "unable to execute notification command: %v", err)
        return err
    }

    ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
    defer cancel()

    cmd := exec.CommandContext(ctx, config.Actions.Hook)
    cmd.Env = append(os.Environ(),
        fmt.Sprintf("SFTPGO_PROVIDER_ACTION=%v", operation),
        fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_TYPE=%v", objectType),
        fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_NAME=%v", objectName),
        fmt.Sprintf("SFTPGO_PROVIDER_USERNAME=%v", executor),
        fmt.Sprintf("SFTPGO_PROVIDER_IP=%v", ip),
        fmt.Sprintf("SFTPGO_PROVIDER_TIMESTAMP=%v", util.GetTimeAsMsSinceEpoch(time.Now())),
        fmt.Sprintf("SFTPGO_PROVIDER_OBJECT=%v", string(objectAsJSON)))

    startTime := time.Now()
    err := cmd.Run()
    providerLog(logger.LevelDebug, "executed command %#v, elapsed: %v, error: %v", config.Actions.Hook,
        time.Since(startTime), err)
    return err
}
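executeNotificationCommand above hands the event to the hook process entirely through SFTPGO_PROVIDER_* environment variables and gives it 15 seconds to finish. A minimal standalone hook binary consuming those variables might look like this (the log file path is an arbitrary choice for the sketch):

    package main

    import (
        "fmt"
        "os"
    )

    // A tiny provider-actions hook: SFTPGo sets the SFTPGO_PROVIDER_*
    // variables shown in the diff above before running this binary.
    func main() {
        f, err := os.OpenFile("/var/log/sftpgo-actions.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
        if err != nil {
            os.Exit(1)
        }
        defer f.Close()
        // action (add/update/delete), object type (user/admin/...), and name
        fmt.Fprintf(f, "%s %s %s by %s from %s at %s\n",
            os.Getenv("SFTPGO_PROVIDER_ACTION"),
            os.Getenv("SFTPGO_PROVIDER_OBJECT_TYPE"),
            os.Getenv("SFTPGO_PROVIDER_OBJECT_NAME"),
            os.Getenv("SFTPGO_PROVIDER_USERNAME"),
            os.Getenv("SFTPGO_PROVIDER_IP"),
            os.Getenv("SFTPGO_PROVIDER_TIMESTAMP"))
        // SFTPGO_PROVIDER_OBJECT holds the full object as JSON if needed.
    }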
dataprovider/admin.go

@@ -1,38 +1,24 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package dataprovider

import (
    "crypto/sha256"
    "encoding/base64"
    "encoding/json"
    "errors"
    "fmt"
    "net"
    "os"
    "slices"
    "strconv"
    "regexp"
    "strings"

    "github.com/alexedwards/argon2id"
    "github.com/sftpgo/sdk"
    passwordvalidator "github.com/wagslane/go-password-validator"
    "golang.org/x/crypto/bcrypt"

    "github.com/drakkan/sftpgo/v2/internal/kms"
    "github.com/drakkan/sftpgo/v2/internal/logger"
    "github.com/drakkan/sftpgo/v2/internal/mfa"
    "github.com/drakkan/sftpgo/v2/internal/util"
    "github.com/drakkan/sftpgo/v2/kms"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/mfa"
    "github.com/drakkan/sftpgo/v2/util"
)

// Available permissions for SFTPGo admins

@@ -46,39 +32,23 @@ const (
    PermAdminCloseConnections = "close_conns"
    PermAdminViewServerStatus = "view_status"
    PermAdminManageAdmins     = "manage_admins"
    PermAdminManageGroups     = "manage_groups"
    PermAdminManageFolders    = "manage_folders"
    PermAdminManageAPIKeys    = "manage_apikeys"
    PermAdminQuotaScans       = "quota_scans"
    PermAdminManageSystem     = "manage_system"
    PermAdminManageDefender   = "manage_defender"
    PermAdminViewDefender     = "view_defender"
    PermAdminRetentionChecks  = "retention_checks"
    PermAdminMetadataChecks   = "metadata_checks"
    PermAdminViewEvents       = "view_events"
    PermAdminManageEventRules = "manage_event_rules"
    PermAdminManageRoles      = "manage_roles"
    PermAdminManageIPLists    = "manage_ip_lists"
    PermAdminDisableMFA       = "disable_mfa"
)

const (
    // GroupAddToUsersAsMembership defines that the admin's group will be added as membership group for new users
    GroupAddToUsersAsMembership = iota
    // GroupAddToUsersAsPrimary defines that the admin's group will be added as primary group for new users
    GroupAddToUsersAsPrimary
    // GroupAddToUsersAsSecondary defines that the admin's group will be added as secondary group for new users
    GroupAddToUsersAsSecondary
)

var (
    emailRegex = regexp.MustCompile("^(?:(?:(?:(?:[a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(?:\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|(?:(?:\\x22)(?:(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(?:\\x20|\\x09)+)?(?:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(\\x20|\\x09)+)?(?:\\x22))))@(?:(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$")
    validAdminPerms = []string{PermAdminAny, PermAdminAddUsers, PermAdminChangeUsers, PermAdminDeleteUsers,
        PermAdminViewUsers, PermAdminManageFolders, PermAdminManageGroups, PermAdminViewConnections,
        PermAdminCloseConnections, PermAdminViewServerStatus, PermAdminManageAdmins, PermAdminManageRoles,
        PermAdminManageEventRules, PermAdminManageAPIKeys, PermAdminQuotaScans, PermAdminManageSystem,
        PermAdminManageDefender, PermAdminViewDefender, PermAdminManageIPLists, PermAdminRetentionChecks,
        PermAdminViewEvents, PermAdminDisableMFA}
    forbiddenPermsForRoleAdmins = []string{PermAdminAny, PermAdminManageAdmins, PermAdminManageSystem,
        PermAdminManageEventRules, PermAdminManageIPLists, PermAdminManageRoles}
        PermAdminViewUsers, PermAdminViewConnections, PermAdminCloseConnections, PermAdminViewServerStatus,
        PermAdminManageAdmins, PermAdminManageAPIKeys, PermAdminQuotaScans, PermAdminManageSystem,
        PermAdminManageDefender, PermAdminViewDefender, PermAdminRetentionChecks, PermAdminMetadataChecks,
        PermAdminViewEvents}
)

// AdminTOTPConfig defines the time-based one time password configuration

@@ -97,8 +67,8 @@ func (c *AdminTOTPConfig) validate(username string) error {
    if c.ConfigName == "" {
        return util.NewValidationError("totp: config name is mandatory")
    }
    if !slices.Contains(mfa.GetAvailableTOTPConfigNames(), c.ConfigName) {
        return util.NewValidationError(fmt.Sprintf("totp: config name %q not found", c.ConfigName))
    if !util.IsStringInSlice(c.ConfigName, mfa.GetAvailableTOTPConfigNames()) {
        return util.NewValidationError(fmt.Sprintf("totp: config name %#v not found", c.ConfigName))
    }
    if c.Secret.IsEmpty() {
        return util.NewValidationError("totp: secret is mandatory")
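A pattern that repeats throughout this diff, visible in the hunk just above: one side uses the standard library's generic slices.Contains (Go 1.21+), the other the project helper util.IsStringInSlice, and the argument order is swapped between the two. A self-contained sketch of the equivalence (isStringInSlice here is a local stand-in for the project helper, which is not shown in this diff):

    package main

    import (
        "fmt"
        "slices"
    )

    // isStringInSlice mirrors the helper style: the candidate comes
    // first, the list second, the reverse of slices.Contains.
    func isStringInSlice(obj string, list []string) bool {
        for i := 0; i < len(list); i++ {
            if list[i] == obj {
                return true
            }
        }
        return false
    }

    func main() {
        names := []string{"totp-sha1", "totp-sha256"}
        fmt.Println(slices.Contains(names, "totp-sha256")) // true
        fmt.Println(isStringInSlice("totp-sha256", names)) // true
    }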
@@ -112,85 +82,6 @@ func (c *AdminTOTPConfig) validate(username string) error {
    return nil
}

// AdminPreferences defines the admin preferences
type AdminPreferences struct {
    // Allow to hide some sections from the user page.
    // These are not security settings and are not enforced server side
    // in any way. They are only intended to simplify the user page in
    // the WebAdmin UI.
    //
    // 1 means hide groups section
    // 2 means hide filesystem section, "users_base_dir" must be set in the config file otherwise this setting is ignored
    // 4 means hide virtual folders section
    // 8 means hide profile section
    // 16 means hide ACLs section
    // 32 means hide disk and bandwidth quota limits section
    // 64 means hide advanced settings section
    //
    // The settings can be combined
    HideUserPageSections int `json:"hide_user_page_sections,omitempty"`
    // Defines the default expiration for newly created users as number of days.
    // 0 means no expiration
    DefaultUsersExpiration int `json:"default_users_expiration,omitempty"`
}

// HideGroups returns true if the groups section should be hidden
func (p *AdminPreferences) HideGroups() bool {
    return p.HideUserPageSections&1 != 0
}

// HideFilesystem returns true if the filesystem section should be hidden
func (p *AdminPreferences) HideFilesystem() bool {
    return config.UsersBaseDir != "" && p.HideUserPageSections&2 != 0
}

// HideVirtualFolders returns true if the virtual folder section should be hidden
func (p *AdminPreferences) HideVirtualFolders() bool {
    return p.HideUserPageSections&4 != 0
}

// HideProfile returns true if the profile section should be hidden
func (p *AdminPreferences) HideProfile() bool {
    return p.HideUserPageSections&8 != 0
}

// HideACLs returns true if the ACLs section should be hidden
func (p *AdminPreferences) HideACLs() bool {
    return p.HideUserPageSections&16 != 0
}

// HideDiskQuotaAndBandwidthLimits returns true if the disk quota and bandwidth limits
// section should be hidden
func (p *AdminPreferences) HideDiskQuotaAndBandwidthLimits() bool {
    return p.HideUserPageSections&32 != 0
}

// HideAdvancedSettings returns true if the advanced settings section should be hidden
func (p *AdminPreferences) HideAdvancedSettings() bool {
    return p.HideUserPageSections&64 != 0
}

// VisibleUserPageSections returns the number of visible sections
// in the user page
func (p *AdminPreferences) VisibleUserPageSections() int {
    var result int

    if !p.HideProfile() {
        result++
    }
    if !p.HideACLs() {
        result++
    }
    if !p.HideDiskQuotaAndBandwidthLimits() {
        result++
    }
    if !p.HideAdvancedSettings() {
        result++
    }

    return result
}

// AdminFilters defines additional restrictions for SFTPGo admins
// TODO: rename to AdminOptions in v3
type AdminFilters struct {
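The HideUserPageSections field removed in the hunk above is a plain bit mask: each documented power of two toggles one WebAdmin section, and values are OR-ed together. A small sketch of the arithmetic (the constant names are mine; the numeric values come from the comments in the diff):

    package main

    import "fmt"

    // One bit per WebAdmin user-page section, per the comments above.
    const (
        hideGroups      = 1
        hideFilesystem  = 2
        hideVirtualDirs = 4
        hideProfile     = 8
    )

    func main() {
        prefs := hideGroups | hideProfile // store 9 in hide_user_page_sections
        fmt.Println(prefs&hideGroups != 0)     // true: groups hidden
        fmt.Println(prefs&hideFilesystem != 0) // false: filesystem visible
        fmt.Println(prefs&hideProfile != 0)    // true: profile hidden
    }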
@@ -200,47 +91,12 @@ type AdminFilters struct {
    AllowList []string `json:"allow_list,omitempty"`
    // API key auth allows to impersonate this administrator with an API key
    AllowAPIKeyAuth bool `json:"allow_api_key_auth,omitempty"`
    // A password change is required at the next login
    RequirePasswordChange bool `json:"require_password_change,omitempty"`
    // Require two factor authentication
    RequireTwoFactor bool `json:"require_two_factor"`
    // Time-based one time passwords configuration
    TOTPConfig AdminTOTPConfig `json:"totp_config,omitempty"`
    // Recovery codes to use if the user loses access to their second factor auth device.
    // Each code can only be used once, you should use these codes to login and disable or
    // reset 2FA for your account
    RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
    Preferences AdminPreferences `json:"preferences"`
}

// AdminGroupMappingOptions defines the options for admin/group mapping
type AdminGroupMappingOptions struct {
    AddToUsersAs int `json:"add_to_users_as,omitempty"`
}

func (o *AdminGroupMappingOptions) validate() error {
    if o.AddToUsersAs < GroupAddToUsersAsMembership || o.AddToUsersAs > GroupAddToUsersAsSecondary {
        return util.NewValidationError(fmt.Sprintf("Invalid mode to add groups to new users: %d", o.AddToUsersAs))
    }
    return nil
}

// GetUserGroupType returns the type for the matching user group
func (o *AdminGroupMappingOptions) GetUserGroupType() int {
    switch o.AddToUsersAs {
    case GroupAddToUsersAsPrimary:
        return sdk.GroupTypePrimary
    case GroupAddToUsersAsSecondary:
        return sdk.GroupTypeSecondary
    default:
        return sdk.GroupTypeMembership
    }
}

// AdminGroupMapping defines the mapping between an SFTPGo admin and a group
type AdminGroupMapping struct {
    Name    string                   `json:"name"`
    Options AdminGroupMappingOptions `json:"options"`
    RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
}

// Admin defines a SFTPGo admin

@@ -257,22 +113,12 @@ type Admin struct {
    Filters        AdminFilters `json:"filters,omitempty"`
    Description    string       `json:"description,omitempty"`
    AdditionalInfo string       `json:"additional_info,omitempty"`
    // Groups membership
    Groups []AdminGroupMapping `json:"groups,omitempty"`
    // Creation time as unix timestamp in milliseconds. It will be 0 for admins created before v2.2.0
    CreatedAt int64 `json:"created_at"`
    // last update time as unix timestamp in milliseconds
    UpdatedAt int64 `json:"updated_at"`
    // Last login as unix timestamp in milliseconds
    LastLogin int64 `json:"last_login"`
    // Role name. If set the admin can only administer users with the same role.
    // Role admins cannot have the following permissions:
    // - manage_admins
    // - manage_apikeys
    // - manage_system
    // - manage_event_rules
    // - manage_roles
    Role string `json:"role,omitempty"`
}

// CountUnusedRecoveryCodes returns the number of unused recovery codes

@@ -290,7 +136,7 @@ func (a *Admin) hashPassword() error {
    if a.Password != "" && !util.IsStringPrefixInSlice(a.Password, internalHashPwdPrefixes) {
        if config.PasswordValidation.Admins.MinEntropy > 0 {
            if err := passwordvalidator.Validate(a.Password, config.PasswordValidation.Admins.MinEntropy); err != nil {
                return util.NewI18nError(util.NewValidationError(err.Error()), util.I18nErrorPasswordComplexity)
                return util.NewValidationError(err.Error())
            }
        }
        if config.PasswordHashing.Algo == HashingAlgoBcrypt {

@@ -298,7 +144,7 @@ func (a *Admin) hashPassword() error {
            if err != nil {
                return err
            }
            a.Password = util.BytesToString(pwd)
            a.Password = string(pwd)
        } else {
            pwd, err := argon2id.CreateHash(a.Password, argon2Params)
            if err != nil {

@@ -331,53 +177,16 @@ func (a *Admin) validateRecoveryCodes() error {
}

func (a *Admin) validatePermissions() error {
    a.Permissions = util.RemoveDuplicates(a.Permissions, false)
    a.Permissions = util.RemoveDuplicates(a.Permissions)
    if len(a.Permissions) == 0 {
        return util.NewI18nError(
            util.NewValidationError("please grant some permissions to this admin"),
            util.I18nErrorPermissionsRequired,
        )
        return util.NewValidationError("please grant some permissions to this admin")
    }
    if slices.Contains(a.Permissions, PermAdminAny) {
    if util.IsStringInSlice(PermAdminAny, a.Permissions) {
        a.Permissions = []string{PermAdminAny}
    }
    for _, perm := range a.Permissions {
        if !slices.Contains(validAdminPerms, perm) {
            return util.NewValidationError(fmt.Sprintf("invalid permission: %q", perm))
        }
        if a.Role != "" {
            if slices.Contains(forbiddenPermsForRoleAdmins, perm) {
                deniedPerms := strings.Join(forbiddenPermsForRoleAdmins, ",")
                return util.NewI18nError(
                    util.NewValidationError(fmt.Sprintf("a role admin cannot have the following permissions: %q", deniedPerms)),
                    util.I18nErrorRoleAdminPerms,
                    util.I18nErrorArgs(map[string]any{
                        "val": deniedPerms,
                    }),
                )
            }
        }
    }
    return nil
}

func (a *Admin) validateGroups() error {
    hasPrimary := false
    for _, g := range a.Groups {
        if g.Name == "" {
            return util.NewValidationError("group name is mandatory")
        }
        if err := g.Options.validate(); err != nil {
            return err
        }
        if g.Options.AddToUsersAs == GroupAddToUsersAsPrimary {
            if hasPrimary {
                return util.NewI18nError(
                    util.NewValidationError("only one primary group is allowed"),
                    util.I18nErrorPrimaryGroup,
                )
            }
            hasPrimary = true
        if !util.IsStringInSlice(perm, validAdminPerms) {
            return util.NewValidationError(fmt.Sprintf("invalid permission: %#v", perm))
        }
    }
    return nil

@@ -386,28 +195,22 @@ func (a *Admin) validateGroups() error {
func (a *Admin) validate() error {
    a.SetEmptySecretsIfNil()
    if a.Username == "" {
        return util.NewI18nError(util.NewValidationError("username is mandatory"), util.I18nErrorUsernameRequired)
    }
    if err := checkReservedUsernames(a.Username); err != nil {
        return util.NewI18nError(err, util.I18nErrorReservedUsername)
        return util.NewValidationError("username is mandatory")
    }
    if a.Password == "" {
        return util.NewI18nError(util.NewValidationError("please set a password"), util.I18nErrorPasswordRequired)
        return util.NewValidationError("please set a password")
    }
    if a.hasRedactedSecret() {
        return util.NewValidationError("cannot save an admin with a redacted secret")
    }
    if err := a.Filters.TOTPConfig.validate(a.Username); err != nil {
        return util.NewI18nError(err, util.I18nError2FAInvalid)
        return err
    }
    if err := a.validateRecoveryCodes(); err != nil {
        return util.NewI18nError(err, util.I18nErrorRecoveryCodesInvalid)
        return err
    }
    if config.NamingRules&1 == 0 && !usernameRegex.MatchString(a.Username) {
        return util.NewI18nError(
            util.NewValidationError(fmt.Sprintf("username %q is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username)),
            util.I18nErrorInvalidUser,
        )
    if !config.SkipNaturalKeysValidation && !usernameRegex.MatchString(a.Username) {
        return util.NewValidationError(fmt.Sprintf("username %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username))
    }
    if err := a.hashPassword(); err != nil {
        return err

@@ -415,51 +218,32 @@ func (a *Admin) validate() error {
    if err := a.validatePermissions(); err != nil {
        return err
    }
    if a.Email != "" && !util.IsEmailValid(a.Email) {
        return util.NewI18nError(
            util.NewValidationError(fmt.Sprintf("email %q is not valid", a.Email)),
            util.I18nErrorInvalidEmail,
        )
    if a.Email != "" && !emailRegex.MatchString(a.Email) {
        return util.NewValidationError(fmt.Sprintf("email %#v is not valid", a.Email))
    }
    a.Filters.AllowList = util.RemoveDuplicates(a.Filters.AllowList, false)
    a.Filters.AllowList = util.RemoveDuplicates(a.Filters.AllowList)
    for _, IPMask := range a.Filters.AllowList {
        _, _, err := net.ParseCIDR(IPMask)
        if err != nil {
            return util.NewI18nError(
                util.NewValidationError(fmt.Sprintf("could not parse allow list entry %q : %v", IPMask, err)),
                util.I18nErrorInvalidIPMask,
            )
            return util.NewValidationError(fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err))
        }
    }

    return a.validateGroups()
    return nil
}

// CheckPassword verifies the admin password
func (a *Admin) CheckPassword(password string) (bool, error) {
    if config.PasswordCaching {
        found, match := cachedAdminPasswords.Check(a.Username, password, a.Password)
        if found {
            if !match {
                return false, ErrInvalidCredentials
            }
            return match, nil
        }
    }
    if strings.HasPrefix(a.Password, bcryptPwdPrefix) {
        if err := bcrypt.CompareHashAndPassword([]byte(a.Password), []byte(password)); err != nil {
            return false, ErrInvalidCredentials
        }
        cachedAdminPasswords.Add(a.Username, password, a.Password)
        return true, nil
    }
    match, err := argon2id.ComparePasswordAndHash(password, a.Password)
    if !match || err != nil {
        return false, ErrInvalidCredentials
    }
    if match {
        cachedAdminPasswords.Add(a.Username, password, a.Password)
    }
    return match, err
}
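CheckPassword above dispatches on the stored hash's own format: bcrypt hashes are recognized by their prefix, anything else is treated as argon2id, and a hit in the optional password cache short-circuits both. A reduced sketch of that prefix dispatch (using the well-known "$2" bcrypt prefix; SFTPGo's bcryptPwdPrefix constant is not shown in this diff, so treat it as an assumption):

    package main

    import (
        "fmt"
        "strings"

        "github.com/alexedwards/argon2id"
        "golang.org/x/crypto/bcrypt"
    )

    // checkPassword verifies a plaintext password against a stored hash,
    // choosing the algorithm from the hash encoding itself.
    func checkPassword(plain, stored string) (bool, error) {
        if strings.HasPrefix(stored, "$2") { // bcrypt hashes start with $2a$/$2b$...
            err := bcrypt.CompareHashAndPassword([]byte(stored), []byte(plain))
            return err == nil, nil
        }
        return argon2id.ComparePasswordAndHash(plain, stored) // $argon2id$...
    }

    func main() {
        hash, _ := argon2id.CreateHash("secret", argon2id.DefaultParams)
        match, _ := checkPassword("secret", hash)
        fmt.Println(match) // true
    }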
@@ -488,7 +272,7 @@ func (a *Admin) CanLoginFromIP(ip string) bool {
// CanLogin returns an error if the login is not allowed
func (a *Admin) CanLogin(ip string) error {
    if a.Status != 1 {
        return fmt.Errorf("admin %q is disabled", a.Username)
        return fmt.Errorf("admin %#v is disabled", a.Username)
    }
    if !a.CanLoginFromIP(ip) {
        return fmt.Errorf("login from IP %v not allowed", ip)

@@ -560,10 +344,15 @@ func (a *Admin) SetNilSecretsIfEmpty() {

// HasPermission returns true if the admin has the specified permission
func (a *Admin) HasPermission(perm string) bool {
    if slices.Contains(a.Permissions, PermAdminAny) {
    if util.IsStringInSlice(PermAdminAny, a.Permissions) {
        return true
    }
    return slices.Contains(a.Permissions, perm)
    return util.IsStringInSlice(perm, a.Permissions)
}

// GetPermissionsAsString returns permission as string
func (a *Admin) GetPermissionsAsString() string {
    return strings.Join(a.Permissions, ", ")
}

// GetAllowedIPAsString returns the allowed IP as comma separated string

@@ -576,15 +365,30 @@ func (a *Admin) GetValidPerms() []string {
    return validAdminPerms
}

// GetInfoString returns admin's info as string.
func (a *Admin) GetInfoString() string {
    var result strings.Builder
    if a.Email != "" {
        result.WriteString(fmt.Sprintf("Email: %v. ", a.Email))
    }
    if len(a.Filters.AllowList) > 0 {
        result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(a.Filters.AllowList)))
    }
    return result.String()
}

// CanManageMFA returns true if the admin can add a multi-factor authentication configuration
func (a *Admin) CanManageMFA() bool {
    return len(mfa.GetAvailableTOTPConfigs()) > 0
}

// GetSignature returns a signature for this admin.
// It will change after an update
// It could change after an update
func (a *Admin) GetSignature() string {
    return strconv.FormatInt(a.UpdatedAt, 10)
    data := []byte(a.Username)
    data = append(data, []byte(a.Password)...)
    signature := sha256.Sum256(data)
    return base64.StdEncoding.EncodeToString(signature[:])
}

func (a *Admin) getACopy() Admin {

@@ -594,8 +398,6 @@ func (a *Admin) getACopy() Admin {
    filters := AdminFilters{}
    filters.AllowList = make([]string, len(a.Filters.AllowList))
    filters.AllowAPIKeyAuth = a.Filters.AllowAPIKeyAuth
    filters.RequirePasswordChange = a.Filters.RequirePasswordChange
    filters.RequireTwoFactor = a.Filters.RequireTwoFactor
    filters.TOTPConfig.Enabled = a.Filters.TOTPConfig.Enabled
    filters.TOTPConfig.ConfigName = a.Filters.TOTPConfig.ConfigName
    filters.TOTPConfig.Secret = a.Filters.TOTPConfig.Secret.Clone()

@@ -610,19 +412,6 @@ func (a *Admin) getACopy() Admin {
            Used:   code.Used,
        })
    }
    filters.Preferences = AdminPreferences{
        HideUserPageSections:   a.Filters.Preferences.HideUserPageSections,
        DefaultUsersExpiration: a.Filters.Preferences.DefaultUsersExpiration,
    }
    groups := make([]AdminGroupMapping, 0, len(a.Groups))
    for _, g := range a.Groups {
        groups = append(groups, AdminGroupMapping{
            Name: g.Name,
            Options: AdminGroupMappingOptions{
                AddToUsersAs: g.Options.AddToUsersAs,
            },
        })
    }

    return Admin{
        ID: a.ID,

@@ -631,14 +420,12 @@ func (a *Admin) getACopy() Admin {
        Password:       a.Password,
        Email:          a.Email,
        Permissions:    permissions,
        Groups:         groups,
        Filters:        filters,
        AdditionalInfo: a.AdditionalInfo,
        Description:    a.Description,
        LastLogin:      a.LastLogin,
        CreatedAt:      a.CreatedAt,
        UpdatedAt:      a.UpdatedAt,
        Role:           a.Role,
    }
}
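The two GetSignature implementations above differ in what invalidates the signature: one is simply UpdatedAt rendered as a decimal string, the other a base64-encoded SHA-256 of username plus password hash, so it only changes when the credentials change. A standalone sketch of the hash-based variant (inputs are dummy values):

    package main

    import (
        "crypto/sha256"
        "encoding/base64"
        "fmt"
    )

    // signature reproduces the sha256(username || passwordHash) scheme
    // shown in the diff above.
    func signature(username, passwordHash string) string {
        data := []byte(username)
        data = append(data, []byte(passwordHash)...)
        sum := sha256.Sum256(data)
        return base64.StdEncoding.EncodeToString(sum[:])
    }

    func main() {
        fmt.Println(signature("admin", "$2a$10$example")) // stable until either input changes
    }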
dataprovider/apikey.go

@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package dataprovider

import (

@@ -23,8 +9,8 @@ import (
    "github.com/alexedwards/argon2id"
    "golang.org/x/crypto/bcrypt"

    "github.com/drakkan/sftpgo/v2/internal/logger"
    "github.com/drakkan/sftpgo/v2/internal/util"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/util"
)

// APIKeyScope defines the supported API key scopes

@@ -118,7 +104,7 @@ func (k *APIKey) hashKey() error {
        if err != nil {
            return err
        }
        k.Key = util.BytesToString(hashed)
        k.Key = string(hashed)
    } else {
        hashed, err := argon2id.CreateHash(k.Key, argon2Params)
        if err != nil {

@@ -165,7 +151,7 @@ func (k *APIKey) validate() error {
        k.Admin = ""
    }
    if k.User != "" {
        _, err := provider.userExists(k.User, "")
        _, err := provider.userExists(k.User)
        if err != nil {
            return util.NewValidationError(fmt.Sprintf("unable to check API key user %v: %v", k.User, err))
        }

@@ -182,18 +168,9 @@
// Authenticate tries to authenticate the provided plain key
func (k *APIKey) Authenticate(plainKey string) error {
    if k.ExpiresAt > 0 && k.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
        return fmt.Errorf("API key %q is expired, expiration timestamp: %v current timestamp: %v", k.KeyID,
        return fmt.Errorf("API key %#v is expired, expiration timestamp: %v current timestamp: %v", k.KeyID,
            k.ExpiresAt, util.GetTimeAsMsSinceEpoch(time.Now()))
    }
    if config.PasswordCaching {
        found, match := cachedAPIKeys.Check(k.KeyID, plainKey, k.Key)
        if found {
            if !match {
                return ErrInvalidCredentials
            }
            return nil
        }
    }
    if strings.HasPrefix(k.Key, bcryptPwdPrefix) {
        if err := bcrypt.CompareHashAndPassword([]byte(k.Key), []byte(plainKey)); err != nil {
            return ErrInvalidCredentials

@@ -205,6 +182,5 @@ func (k *APIKey) Authenticate(plainKey string) error {
        }
    }

    cachedAPIKeys.Add(k.KeyID, plainKey, k.Key)
    return nil
}
dataprovider/bolt.go (new file, 1749 lines)
[File diff suppressed because it is too large]
dataprovider/bolt_disabled.go (new file, 18 lines)

@@ -0,0 +1,18 @@
//go:build nobolt
// +build nobolt

package dataprovider

import (
    "errors"

    "github.com/drakkan/sftpgo/v2/version"
)

func init() {
    version.AddFeature("-bolt")
}

func initializeBoltProvider(basePath string) error {
    return errors.New("bolt disabled at build time")
}
dataprovider/cachedpassword.go (new file, 62 lines)

@@ -0,0 +1,62 @@
package dataprovider

import (
    "sync"
)

var cachedPasswords passwordsCache

func init() {
    cachedPasswords = passwordsCache{
        cache: make(map[string]string),
    }
}

type passwordsCache struct {
    sync.RWMutex
    cache map[string]string
}

func (c *passwordsCache) Add(username, password string) {
    if !config.PasswordCaching || username == "" || password == "" {
        return
    }

    c.Lock()
    defer c.Unlock()

    c.cache[username] = password
}

func (c *passwordsCache) Remove(username string) {
    if !config.PasswordCaching {
        return
    }

    c.Lock()
    defer c.Unlock()

    delete(c.cache, username)
}

// Check returns if the user is found and if the password match
func (c *passwordsCache) Check(username, password string) (bool, bool) {
    if username == "" || password == "" {
        return false, false
    }

    c.RLock()
    defer c.RUnlock()

    pwd, ok := c.cache[username]
    if !ok {
        return false, false
    }

    return true, pwd == password
}

// CheckCachedPassword is an utility method used only in test cases
func CheckCachedPassword(username, password string) (bool, bool) {
    return cachedPasswords.Check(username, password)
}
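passwordsCache.Check above returns two booleans: whether the username is cached at all, and whether the cached password matches. That lets callers distinguish "unknown, fall through to the expensive hash verification" from "known bad, reject immediately". A self-contained sketch of the three-way outcome (the cache type is reimplemented locally so the example runs on its own):

    package main

    import (
        "fmt"
        "sync"
    )

    // pwCache mirrors the two-result Check contract shown above.
    type pwCache struct {
        mu    sync.RWMutex
        cache map[string]string
    }

    func (c *pwCache) Add(user, pwd string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.cache[user] = pwd
    }

    func (c *pwCache) Check(user, pwd string) (found, match bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        stored, ok := c.cache[user]
        return ok, ok && stored == pwd
    }

    func main() {
        c := &pwCache{cache: make(map[string]string)}
        c.Add("alice", "s3cret")
        fmt.Println(c.Check("alice", "s3cret")) // true true: skip hashing
        fmt.Println(c.Check("alice", "wrong"))  // true false: reject immediately
        fmt.Println(c.Check("bob", "s3cret"))   // false false: verify against the hash
    }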
dataprovider/cacheduser.go

@@ -1,27 +1,13 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package dataprovider

import (
    "sync"
    "time"

    "github.com/drakkan/webdav"
    "golang.org/x/net/webdav"

    "github.com/drakkan/sftpgo/v2/internal/logger"
    "github.com/drakkan/sftpgo/v2/internal/util"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/util"
)

var (

@@ -76,41 +62,27 @@ func (cache *usersCache) updateLastLogin(username string) {

// swapWebDAVUser updates an existing cached user with the specified one
// preserving the lock fs if possible
// FIXME: this could be racy in rare cases
func (cache *usersCache) swap(userRef *User, plainPassword string) {
    user := userRef.getACopy()
    err := user.LoadAndApplyGroupSettings()

func (cache *usersCache) swap(user *User) {
    cache.Lock()
    defer cache.Unlock()

    if cachedUser, ok := cache.users[user.Username]; ok {
        if err != nil {
            providerLog(logger.LevelDebug, "unable to load group settings, for user %q, removing from cache, err :%v",
                user.Username, err)
        if cachedUser.User.Password != user.Password {
            providerLog(logger.LevelDebug, "current password different from the cached one for user %#v, removing from cache",
                user.Username)
            // the password changed, the cached user is no longer valid
            delete(cache.users, user.Username)
            return
        }
        if plainPassword != "" {
            cachedUser.Password = plainPassword
        } else {
            if cachedUser.User.Password != user.Password {
                providerLog(logger.LevelDebug, "current password different from the cached one for user %q, removing from cache",
                    user.Username)
                // the password changed, the cached user is no longer valid
                delete(cache.users, user.Username)
                return
            }
        }
        if cachedUser.User.isFsEqual(&user) {
        if cachedUser.User.isFsEqual(user) {
            // the updated user has the same fs as the cached one, we can preserve the lock filesystem
            providerLog(logger.LevelDebug, "current password and fs unchanged for for user %q, swap cached one",
            providerLog(logger.LevelDebug, "current password and fs unchanged for for user %#v, swap cached one",
                user.Username)
            cachedUser.User = user
            cachedUser.User = *user
            cache.users[user.Username] = cachedUser
        } else {
            // filesystem changed, the cached user is no longer valid
            providerLog(logger.LevelDebug, "current fs different from the cached one for user %q, removing from cache",
            providerLog(logger.LevelDebug, "current fs different from the cached one for user %#v, removing from cache",
                user.Username)
            delete(cache.users, user.Username)
        }

@@ -158,10 +130,7 @@ func (cache *usersCache) get(username string) (*CachedUser, bool) {
    defer cache.RUnlock()

    cachedUser, ok := cache.users[username]
    if !ok {
        return nil, false
    }
    return &cachedUser, true
    return &cachedUser, ok
}

// CacheWebDAVUser add a user to the WebDAV cache
dataprovider/dataprovider.go (new file, 3119 lines)
[File diff suppressed because it is too large]

dataprovider/memory.go (new file, 1523 lines)
[File diff suppressed because it is too large]
dataprovider/mysql.go (new file, 606 lines)

@@ -0,0 +1,606 @@
//go:build !nomysql
// +build !nomysql

package dataprovider

import (
    "context"
    "crypto/x509"
    "database/sql"
    "errors"
    "fmt"
    "strings"
    "time"

    // we import go-sql-driver/mysql here to be able to disable MySQL support using a build tag
    _ "github.com/go-sql-driver/mysql"

    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/version"
    "github.com/drakkan/sftpgo/v2/vfs"
)

const (
    mysqlResetSQL = "DROP TABLE IF EXISTS `{{api_keys}}` CASCADE;" +
        "DROP TABLE IF EXISTS `{{folders_mapping}}` CASCADE;" +
        "DROP TABLE IF EXISTS `{{admins}}` CASCADE;" +
        "DROP TABLE IF EXISTS `{{folders}}` CASCADE;" +
        "DROP TABLE IF EXISTS `{{shares}}` CASCADE;" +
        "DROP TABLE IF EXISTS `{{users}}` CASCADE;" +
        "DROP TABLE IF EXISTS `{{defender_events}}` CASCADE;" +
        "DROP TABLE IF EXISTS `{{defender_hosts}}` CASCADE;" +
        "DROP TABLE IF EXISTS `{{schema_version}}` CASCADE;"
    mysqlInitialSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);" +
        "CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
        "`description` varchar(512) NULL, `password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, " +
        "`permissions` longtext NOT NULL, `filters` longtext NULL, `additional_info` longtext NULL);" +
        "CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL UNIQUE, " +
        "`description` varchar(512) NULL, `path` varchar(512) NULL, `used_quota_size` bigint NOT NULL, " +
        "`used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, `filesystem` longtext NULL);" +
        "CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
        "`status` integer NOT NULL, `expiration_date` bigint NOT NULL, `description` varchar(512) NULL, `password` longtext NULL, " +
        "`public_keys` longtext NULL, `home_dir` varchar(512) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, " +
        "`max_sessions` integer NOT NULL, `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, " +
        "`permissions` longtext NOT NULL, `used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, " +
        "`last_quota_update` bigint NOT NULL, `upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, " +
        "`last_login` bigint NOT NULL, `filters` longtext NULL, `filesystem` longtext NULL, `additional_info` longtext NULL);" +
        "CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` varchar(512) NOT NULL, " +
        "`quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `folder_id` integer NOT NULL, `user_id` integer NOT NULL);" +
        "ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
        "ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
        "ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
        "INSERT INTO {{schema_version}} (version) VALUES (10);"
    mysqlV11SQL = "CREATE TABLE `{{api_keys}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL, `key_id` varchar(50) NOT NULL UNIQUE," +
        "`api_key` varchar(255) NOT NULL UNIQUE, `scope` integer NOT NULL, `created_at` bigint NOT NULL, `updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, " +
        "`expires_at` bigint NOT NULL, `description` longtext NULL, `admin_id` integer NULL, `user_id` integer NULL);" +
        "ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_admin_id_fk_admins_id` FOREIGN KEY (`admin_id`) REFERENCES `{{admins}}` (`id`) ON DELETE CASCADE;" +
        "ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
    mysqlV11DownSQL = "DROP TABLE `{{api_keys}}` CASCADE;"
    mysqlV12SQL     = "ALTER TABLE `{{admins}}` ADD COLUMN `created_at` bigint DEFAULT 0 NOT NULL;" +
        "ALTER TABLE `{{admins}}` ALTER COLUMN `created_at` DROP DEFAULT;" +
        "ALTER TABLE `{{admins}}` ADD COLUMN `updated_at` bigint DEFAULT 0 NOT NULL;" +
        "ALTER TABLE `{{admins}}` ALTER COLUMN `updated_at` DROP DEFAULT;" +
        "ALTER TABLE `{{admins}}` ADD COLUMN `last_login` bigint DEFAULT 0 NOT NULL;" +
        "ALTER TABLE `{{admins}}` ALTER COLUMN `last_login` DROP DEFAULT;" +
        "ALTER TABLE `{{users}}` ADD COLUMN `created_at` bigint DEFAULT 0 NOT NULL;" +
        "ALTER TABLE `{{users}}` ALTER COLUMN `created_at` DROP DEFAULT;" +
        "ALTER TABLE `{{users}}` ADD COLUMN `updated_at` bigint DEFAULT 0 NOT NULL;" +
        "ALTER TABLE `{{users}}` ALTER COLUMN `updated_at` DROP DEFAULT;" +
        "CREATE INDEX `{{prefix}}users_updated_at_idx` ON `{{users}}` (`updated_at`);"
    mysqlV12DownSQL = "ALTER TABLE `{{admins}}` DROP COLUMN `updated_at`;" +
        "ALTER TABLE `{{admins}}` DROP COLUMN `created_at`;" +
        "ALTER TABLE `{{admins}}` DROP COLUMN `last_login`;" +
        "ALTER TABLE `{{users}}` DROP COLUMN `created_at`;" +
        "ALTER TABLE `{{users}}` DROP COLUMN `updated_at`;"

    mysqlV13SQL     = "ALTER TABLE `{{users}}` ADD COLUMN `email` varchar(255) NULL;"
    mysqlV13DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `email`;"
    mysqlV14SQL     = "CREATE TABLE `{{shares}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
        "`share_id` varchar(60) NOT NULL UNIQUE, `name` varchar(255) NOT NULL, `description` varchar(512) NULL, " +
        "`scope` integer NOT NULL, `paths` longtext NOT NULL, `created_at` bigint NOT NULL, " +
        "`updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, `expires_at` bigint NOT NULL, " +
        "`password` longtext NULL, `max_tokens` integer NOT NULL, `used_tokens` integer NOT NULL, " +
        "`allow_from` longtext NULL, `user_id` integer NOT NULL);" +
        "ALTER TABLE `{{shares}}` ADD CONSTRAINT `{{prefix}}shares_user_id_fk_users_id` " +
        "FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
    mysqlV14DownSQL = "DROP TABLE `{{shares}}` CASCADE;"
    mysqlV15SQL     = "CREATE TABLE `{{defender_hosts}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
        "`ip` varchar(50) NOT NULL UNIQUE, `ban_time` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
        "CREATE TABLE `{{defender_events}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
        "`date_time` bigint NOT NULL, `score` integer NOT NULL, `host_id` bigint NOT NULL);" +
        "ALTER TABLE `{{defender_events}}` ADD CONSTRAINT `{{prefix}}defender_events_host_id_fk_defender_hosts_id` " +
        "FOREIGN KEY (`host_id`) REFERENCES `{{defender_hosts}}` (`id`) ON DELETE CASCADE;" +
        "CREATE INDEX `{{prefix}}defender_hosts_updated_at_idx` ON `{{defender_hosts}}` (`updated_at`);" +
        "CREATE INDEX `{{prefix}}defender_hosts_ban_time_idx` ON `{{defender_hosts}}` (`ban_time`);" +
        "CREATE INDEX `{{prefix}}defender_events_date_time_idx` ON `{{defender_events}}` (`date_time`);"
    mysqlV15DownSQL = "DROP TABLE `{{defender_events}}` CASCADE;" +
        "DROP TABLE `{{defender_hosts}}` CASCADE;"
)

// MySQLProvider auth provider for MySQL/MariaDB database
type MySQLProvider struct {
    dbHandle *sql.DB
}

func init() {
    version.AddFeature("+mysql")
}

func initializeMySQLProvider() error {
    var err error

    dbHandle, err := sql.Open("mysql", getMySQLConnectionString(false))
    if err == nil {
        providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
            getMySQLConnectionString(true), config.PoolSize)
        dbHandle.SetMaxOpenConns(config.PoolSize)
        if config.PoolSize > 0 {
            dbHandle.SetMaxIdleConns(config.PoolSize)
        } else {
            dbHandle.SetMaxIdleConns(2)
        }
        dbHandle.SetConnMaxLifetime(240 * time.Second)
        provider = &MySQLProvider{dbHandle: dbHandle}
    } else {
        providerLog(logger.LevelError, "error creating mysql database handler, connection string: %#v, error: %v",
            getMySQLConnectionString(true), err)
    }
    return err
}

func getMySQLConnectionString(redactedPwd bool) string {
    var connectionString string
    if config.ConnectionString == "" {
        password := config.Password
        if redactedPwd {
            password = "[redacted]"
        }
        connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true&tls=%v&writeTimeout=10s&readTimeout=10s",
            config.Username, password, config.Host, config.Port, config.Name, getSSLMode())
    } else {
        connectionString = config.ConnectionString
    }
    return connectionString
}
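For reference, getMySQLConnectionString above builds a DSN in the go-sql-driver/mysql format. A sketch of what it expands to for hypothetical settings (user sftpgo, host 127.0.0.1, port 3306, database sftpgo, TLS mode false; all values are made up):

    package main

    import "fmt"

    // Rebuilds the DSN shape from getMySQLConnectionString above with
    // sample values; every setting here is hypothetical.
    func main() {
        user, password, host, port, name, sslMode := "sftpgo", "secret", "127.0.0.1", 3306, "sftpgo", "false"
        dsn := fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true&tls=%v&writeTimeout=10s&readTimeout=10s",
            user, password, host, port, name, sslMode)
        fmt.Println(dsn)
        // sftpgo:secret@tcp([127.0.0.1]:3306)/sftpgo?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true&tls=false&writeTimeout=10s&readTimeout=10s
    }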
func (p *MySQLProvider) checkAvailability() error {
|
||||
return sqlCommonCheckAvailability(p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
|
||||
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
|
||||
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
|
||||
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
|
||||
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) getUsedQuota(username string) (int, int64, error) {
|
||||
return sqlCommonGetUsedQuota(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) setUpdatedAt(username string) {
|
||||
sqlCommonSetUpdatedAt(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) updateLastLogin(username string) error {
|
||||
return sqlCommonUpdateLastLogin(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) updateAdminLastLogin(username string) error {
|
||||
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) userExists(username string) (User, error) {
|
||||
return sqlCommonGetUserByUsername(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) addUser(user *User) error {
|
||||
return sqlCommonAddUser(user, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) updateUser(user *User) error {
|
||||
return sqlCommonUpdateUser(user, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) deleteUser(user *User) error {
|
||||
return sqlCommonDeleteUser(user, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) dumpUsers() ([]User, error) {
|
||||
return sqlCommonDumpUsers(p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
|
||||
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
|
||||
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
|
||||
return sqlCommonDumpFolders(p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
|
||||
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *MySQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
|
||||
defer cancel()
|
||||
return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
|
||||
}
func (p *MySQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonAddFolder(folder, p.dbHandle)
}

func (p *MySQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonUpdateFolder(folder, p.dbHandle)
}

func (p *MySQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonDeleteFolder(folder, p.dbHandle)
}

func (p *MySQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
	return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *MySQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
	return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}

func (p *MySQLProvider) adminExists(username string) (Admin, error) {
	return sqlCommonGetAdminByUsername(username, p.dbHandle)
}

func (p *MySQLProvider) addAdmin(admin *Admin) error {
	return sqlCommonAddAdmin(admin, p.dbHandle)
}

func (p *MySQLProvider) updateAdmin(admin *Admin) error {
	return sqlCommonUpdateAdmin(admin, p.dbHandle)
}

func (p *MySQLProvider) deleteAdmin(admin *Admin) error {
	return sqlCommonDeleteAdmin(admin, p.dbHandle)
}

func (p *MySQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
	return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}

func (p *MySQLProvider) dumpAdmins() ([]Admin, error) {
	return sqlCommonDumpAdmins(p.dbHandle)
}

func (p *MySQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
	return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}

func (p *MySQLProvider) apiKeyExists(keyID string) (APIKey, error) {
	return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}

func (p *MySQLProvider) addAPIKey(apiKey *APIKey) error {
	return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}

func (p *MySQLProvider) updateAPIKey(apiKey *APIKey) error {
	return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}

func (p *MySQLProvider) deleteAPIKey(apiKey *APIKey) error {
	return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}

func (p *MySQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
	return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}

func (p *MySQLProvider) dumpAPIKeys() ([]APIKey, error) {
	return sqlCommonDumpAPIKeys(p.dbHandle)
}

func (p *MySQLProvider) updateAPIKeyLastUse(keyID string) error {
	return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}

func (p *MySQLProvider) shareExists(shareID, username string) (Share, error) {
	return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}

func (p *MySQLProvider) addShare(share *Share) error {
	return sqlCommonAddShare(share, p.dbHandle)
}

func (p *MySQLProvider) updateShare(share *Share) error {
	return sqlCommonUpdateShare(share, p.dbHandle)
}

func (p *MySQLProvider) deleteShare(share *Share) error {
	return sqlCommonDeleteShare(share, p.dbHandle)
}

func (p *MySQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
	return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}

func (p *MySQLProvider) dumpShares() ([]Share, error) {
	return sqlCommonDumpShares(p.dbHandle)
}

func (p *MySQLProvider) updateShareLastUse(shareID string, numTokens int) error {
	return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}

func (p *MySQLProvider) getDefenderHosts(from int64, limit int) ([]*DefenderEntry, error) {
	return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}

func (p *MySQLProvider) getDefenderHostByIP(ip string, from int64) (*DefenderEntry, error) {
	return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}

func (p *MySQLProvider) isDefenderHostBanned(ip string) (*DefenderEntry, error) {
	return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}

func (p *MySQLProvider) updateDefenderBanTime(ip string, minutes int) error {
	return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}

func (p *MySQLProvider) deleteDefenderHost(ip string) error {
	return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}

func (p *MySQLProvider) addDefenderEvent(ip string, score int) error {
	return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}

func (p *MySQLProvider) setDefenderBanTime(ip string, banTime int64) error {
	return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}

func (p *MySQLProvider) cleanupDefender(from int64) error {
	return sqlCommonDefenderCleanup(from, p.dbHandle)
}

func (p *MySQLProvider) close() error {
	return p.dbHandle.Close()
}

func (p *MySQLProvider) reloadConfig() error {
	return nil
}

// initializeDatabase creates the initial database structure
func (p *MySQLProvider) initializeDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
	if err == nil && dbVersion.Version > 0 {
		return ErrNoInitRequired
	}
	if errors.Is(err, sql.ErrNoRows) {
		return errSchemaVersionEmpty
	}
	initialSQL := strings.ReplaceAll(mysqlInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
	initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
	initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
	initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)

	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 10)
}
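The schema scripts are stored with {{name}} placeholders so table names, and an optional common prefix, can be configured at runtime. A standalone sketch of the same expansion, with illustrative table names and prefix (the real values come from the provider configuration):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative values; in SFTPGo these come from the provider config.
	const prefix = "sftpgo_"
	script := `CREATE TABLE {{users}} ("id" serial PRIMARY KEY);
CREATE INDEX {{prefix}}users_updated_at_idx ON {{users}} ("updated_at");`

	// Same strings.ReplaceAll pattern used by initializeDatabase above.
	script = strings.ReplaceAll(script, "{{users}}", prefix+"users")
	script = strings.ReplaceAll(script, "{{prefix}}", prefix)
	fmt.Println(script)
}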
//nolint:dupl
func (p *MySQLProvider) migrateDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}

	switch version := dbVersion.Version; {
	case version == sqlDatabaseVersion:
		providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
		return ErrNoInitRequired
	case version < 10:
		err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
		providerLog(logger.LevelError, "%v", err)
		logger.ErrorToConsole("%v", err)
		return err
	case version == 10:
		return updateMySQLDatabaseFromV10(p.dbHandle)
	case version == 11:
		return updateMySQLDatabaseFromV11(p.dbHandle)
	case version == 12:
		return updateMySQLDatabaseFromV12(p.dbHandle)
	case version == 13:
		return updateMySQLDatabaseFromV13(p.dbHandle)
	case version == 14:
		return updateMySQLDatabaseFromV14(p.dbHandle)
	default:
		if version > sqlDatabaseVersion {
			providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			return nil
		}
		return fmt.Errorf("database version not handled: %v", version)
	}
}
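Each updateMySQLDatabaseFromVN helper below applies one schema step and then delegates to the next, so dispatching on the detected version walks the database forward step by step to the current version. A condensed, generic sketch of the same idea, assuming the fmt import; the step functions are hypothetical stand-ins:

// Hypothetical single-step migrations; each applies one schema change.
func migrate10To11() error { return nil }
func migrate11To12() error { return nil }
func migrate12To13() error { return nil }

// migrateFrom walks the chain from the detected version up to target,
// mirroring how updateMySQLDatabaseFromV10 delegates to ...FromV11 and so on.
func migrateFrom(version, target int) error {
	steps := map[int]func() error{
		10: migrate10To11,
		11: migrate11To12,
		12: migrate12To13,
	}
	for v := version; v < target; v++ {
		step, ok := steps[v]
		if !ok {
			return fmt.Errorf("database version not handled: %v", v)
		}
		if err := step(); err != nil {
			return err // stop at the first failing step
		}
	}
	return nil
}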
func (p *MySQLProvider) revertDatabase(targetVersion int) error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}
	if dbVersion.Version == targetVersion {
		return errors.New("current version match target version, nothing to do")
	}

	switch dbVersion.Version {
	case 15:
		return downgradeMySQLDatabaseFromV15(p.dbHandle)
	case 14:
		return downgradeMySQLDatabaseFromV14(p.dbHandle)
	case 13:
		return downgradeMySQLDatabaseFromV13(p.dbHandle)
	case 12:
		return downgradeMySQLDatabaseFromV12(p.dbHandle)
	case 11:
		return downgradeMySQLDatabaseFromV11(p.dbHandle)
	default:
		return fmt.Errorf("database version not handled: %v", dbVersion.Version)
	}
}

func (p *MySQLProvider) resetDatabase() error {
	sql := strings.ReplaceAll(mysqlResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(sql, ";"), 0)
}

func updateMySQLDatabaseFromV10(dbHandle *sql.DB) error {
	if err := updateMySQLDatabaseFrom10To11(dbHandle); err != nil {
		return err
	}
	return updateMySQLDatabaseFromV11(dbHandle)
}

func updateMySQLDatabaseFromV11(dbHandle *sql.DB) error {
	if err := updateMySQLDatabaseFrom11To12(dbHandle); err != nil {
		return err
	}
	return updateMySQLDatabaseFromV12(dbHandle)
}

func updateMySQLDatabaseFromV12(dbHandle *sql.DB) error {
	if err := updateMySQLDatabaseFrom12To13(dbHandle); err != nil {
		return err
	}
	return updateMySQLDatabaseFromV13(dbHandle)
}

func updateMySQLDatabaseFromV13(dbHandle *sql.DB) error {
	if err := updateMySQLDatabaseFrom13To14(dbHandle); err != nil {
		return err
	}
	return updateMySQLDatabaseFromV14(dbHandle)
}

func updateMySQLDatabaseFromV14(dbHandle *sql.DB) error {
	return updateMySQLDatabaseFrom14To15(dbHandle)
}

func downgradeMySQLDatabaseFromV15(dbHandle *sql.DB) error {
	if err := downgradeMySQLDatabaseFrom15To14(dbHandle); err != nil {
		return err
	}
	return downgradeMySQLDatabaseFromV14(dbHandle)
}

func downgradeMySQLDatabaseFromV14(dbHandle *sql.DB) error {
	if err := downgradeMySQLDatabaseFrom14To13(dbHandle); err != nil {
		return err
	}
	return downgradeMySQLDatabaseFromV13(dbHandle)
}

func downgradeMySQLDatabaseFromV13(dbHandle *sql.DB) error {
	if err := downgradeMySQLDatabaseFrom13To12(dbHandle); err != nil {
		return err
	}
	return downgradeMySQLDatabaseFromV12(dbHandle)
}

func downgradeMySQLDatabaseFromV12(dbHandle *sql.DB) error {
	if err := downgradeMySQLDatabaseFrom12To11(dbHandle); err != nil {
		return err
	}
	return downgradeMySQLDatabaseFromV11(dbHandle)
}

func downgradeMySQLDatabaseFromV11(dbHandle *sql.DB) error {
	return downgradeMySQLDatabaseFrom11To10(dbHandle)
}

func updateMySQLDatabaseFrom13To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 13 -> 14")
	providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
	sql := strings.ReplaceAll(mysqlV14SQL, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 14)
}

func updateMySQLDatabaseFrom14To15(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 14 -> 15")
	providerLog(logger.LevelInfo, "updating database version: 14 -> 15")
	sql := strings.ReplaceAll(mysqlV15SQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 15)
}

func downgradeMySQLDatabaseFrom15To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 15 -> 14")
	providerLog(logger.LevelInfo, "downgrading database version: 15 -> 14")
	sql := strings.ReplaceAll(mysqlV15DownSQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 14)
}

func downgradeMySQLDatabaseFrom14To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 14 -> 13")
	providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
	sql := strings.ReplaceAll(mysqlV14DownSQL, "{{shares}}", sqlTableShares)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 13)
}

func updateMySQLDatabaseFrom12To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 12 -> 13")
	providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
	sql := strings.ReplaceAll(mysqlV13SQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 13)
}

func downgradeMySQLDatabaseFrom13To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 13 -> 12")
	providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
	sql := strings.ReplaceAll(mysqlV13DownSQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 12)
}

func updateMySQLDatabaseFrom11To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 11 -> 12")
	providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
	sql := strings.ReplaceAll(mysqlV12SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 12)
}

func downgradeMySQLDatabaseFrom12To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 12 -> 11")
	providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
	sql := strings.ReplaceAll(mysqlV12DownSQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 11)
}

func updateMySQLDatabaseFrom10To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 10 -> 11")
	providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
	sql := strings.ReplaceAll(mysqlV11SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 11)
}

func downgradeMySQLDatabaseFrom11To10(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 11 -> 10")
	providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
	sql := strings.ReplaceAll(mysqlV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 10)
}
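One detail worth noting before the next file: every MySQL helper above hands the executor strings.Split(sql, ";"), one statement per slice element, while the PostgreSQL equivalents later in this diff pass the whole script as a single element ([]string{sql}). A plausible reading, which is an assumption and not stated in the diff, is that the MySQL driver does not accept multi-statement batches by default, whereas lib/pq does. A sketch of an executor that handles the split form; the real sqlCommonExecSQLAndUpdateDBVersion is not shown in this diff and may differ:

// Sketch only: run each statement in order inside one transaction.
// Note that DDL statements commit implicitly on MySQL, so the rollback
// below is best-effort there.
func execStatements(db *sql.DB, stmts []string) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	for _, stmt := range stmts {
		if strings.TrimSpace(stmt) == "" {
			continue // skip the empty tail produced by a trailing ";"
		}
		if _, err := tx.Exec(stmt); err != nil {
			tx.Rollback() //nolint:errcheck
			return err
		}
	}
	return tx.Commit()
}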
dataprovider/mysql_disabled.go (Normal file, 18 lines)
@@ -0,0 +1,18 @@
//go:build nomysql
// +build nomysql

package dataprovider

import (
	"errors"

	"github.com/drakkan/sftpgo/v2/version"
)

func init() {
	version.AddFeature("-mysql")
}

func initializeMySQLProvider() error {
	return errors.New("MySQL disabled at build time")
}
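The nomysql build tag swaps this stub in for the real provider, so a build such as go build -tags nomysql produces a binary without MySQL support that still fails cleanly at runtime. The "-mysql" string registered in init lets the binary report which features were compiled in. The real version package is not shown in this diff; a minimal sketch of what such a registry could look like:

package version

var features []string

// AddFeature records a compile-time feature flag such as "+mysql" or "-mysql".
// It is called from init functions selected by build tags, which run before
// main and on a single goroutine, so no locking is needed here.
func AddFeature(feature string) {
	features = append(features, feature)
}

// GetFeatures returns a copy of the recorded feature flags.
func GetFeatures() []string {
	return append([]string(nil), features...)
}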
dataprovider/pgsql.go (Normal file, 630 lines)
@@ -0,0 +1,630 @@
//go:build !nopgsql
// +build !nopgsql

package dataprovider

import (
	"context"
	"crypto/x509"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	// we import lib/pq here to be able to disable PostgreSQL support using a build tag
	_ "github.com/lib/pq"

	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/version"
	"github.com/drakkan/sftpgo/v2/vfs"
)

const (
	pgsqlResetSQL = `DROP TABLE IF EXISTS "{{api_keys}}" CASCADE;
DROP TABLE IF EXISTS "{{folders_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{admins}}" CASCADE;
DROP TABLE IF EXISTS "{{folders}}" CASCADE;
DROP TABLE IF EXISTS "{{shares}}" CASCADE;
DROP TABLE IF EXISTS "{{users}}" CASCADE;
DROP TABLE IF EXISTS "{{defender_events}}" CASCADE;
DROP TABLE IF EXISTS "{{defender_hosts}}" CASCADE;
DROP TABLE IF EXISTS "{{schema_version}}" CASCADE;
`
	pgsqlInitial = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE, "description" varchar(512) NULL,
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"filesystem" text NULL);
CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE, "status" integer NOT NULL,
"expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL, "public_keys" text NULL,
"home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_folder_id_fk_folders_id"
FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_user_id_fk_users_id"
FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (10);
`
	pgsqlV11SQL = `CREATE TABLE "{{api_keys}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL,"expires_at" bigint NOT NULL,
"description" text NULL, "admin_id" integer NULL, "user_id" integer NULL);
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_admin_id_fk_admins_id" FOREIGN KEY ("admin_id")
REFERENCES "{{admins}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
`
	pgsqlV11DownSQL = `DROP TABLE "{{api_keys}}" CASCADE;`
	pgsqlV12SQL     = `ALTER TABLE "{{admins}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ALTER COLUMN "created_at" DROP DEFAULT;
ALTER TABLE "{{admins}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ALTER COLUMN "updated_at" DROP DEFAULT;
ALTER TABLE "{{admins}}" ADD COLUMN "last_login" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ALTER COLUMN "last_login" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "created_at" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "updated_at" DROP DEFAULT;
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
`
	pgsqlV12DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "updated_at" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "created_at" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "created_at" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "updated_at" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "last_login" CASCADE;
`
	pgsqlV13SQL     = `ALTER TABLE "{{users}}" ADD COLUMN "email" varchar(255) NULL;`
	pgsqlV13DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "email" CASCADE;`
	pgsqlV14SQL     = `CREATE TABLE "{{shares}}" ("id" serial NOT NULL PRIMARY KEY,
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL,
"max_tokens" integer NOT NULL, "used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL);
ALTER TABLE "{{shares}}" ADD CONSTRAINT "{{prefix}}shares_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
`
	pgsqlV14DownSQL = `DROP TABLE "{{shares}}" CASCADE;`
	pgsqlV15SQL     = `CREATE TABLE "{{defender_hosts}}" ("id" bigserial NOT NULL PRIMARY KEY, "ip" varchar(50) NOT NULL UNIQUE,
"ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_events}}" ("id" bigserial NOT NULL PRIMARY KEY, "date_time" bigint NOT NULL, "score" integer NOT NULL,
"host_id" bigint NOT NULL);
ALTER TABLE "{{defender_events}}" ADD CONSTRAINT "{{prefix}}defender_events_host_id_fk_defender_hosts_id" FOREIGN KEY
("host_id") REFERENCES "{{defender_hosts}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
`
	pgsqlV15DownSQL = `DROP TABLE "{{defender_events}}" CASCADE;
DROP TABLE "{{defender_hosts}}" CASCADE;
`
)
// PGSQLProvider auth provider for PostgreSQL database
type PGSQLProvider struct {
	dbHandle *sql.DB
}

func init() {
	version.AddFeature("+pgsql")
}

func initializePGSQLProvider() error {
	var err error
	dbHandle, err := sql.Open("postgres", getPGSQLConnectionString(false))
	if err == nil {
		providerLog(logger.LevelDebug, "postgres database handle created, connection string: %#v, pool size: %v",
			getPGSQLConnectionString(true), config.PoolSize)
		dbHandle.SetMaxOpenConns(config.PoolSize)
		if config.PoolSize > 0 {
			dbHandle.SetMaxIdleConns(config.PoolSize)
		} else {
			dbHandle.SetMaxIdleConns(2)
		}
		dbHandle.SetConnMaxLifetime(240 * time.Second)
		provider = &PGSQLProvider{dbHandle: dbHandle}
	} else {
		providerLog(logger.LevelError, "error creating postgres database handler, connection string: %#v, error: %v",
			getPGSQLConnectionString(true), err)
	}
	return err
}
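The pool settings above map directly onto Go's database/sql knobs: SetMaxOpenConns caps concurrency, SetMaxIdleConns keeps connections warm (2 is the package default, used here when no pool size is configured), and SetConnMaxLifetime recycles connections before server-side idle timeouts or failovers can hand the pool a dead socket. A standalone sketch of the same tuning, assuming the database/sql, log and time imports; the DSN and pool size are illustrative:

// Minimal sketch: open a handle and apply the same pool tuning.
db, err := sql.Open("postgres", "host='127.0.0.1' port=5432 dbname='sftpgo' user='sftpgo' password='secret' sslmode=disable connect_timeout=10")
if err != nil {
	log.Fatal(err)
}
const poolSize = 10
db.SetMaxOpenConns(poolSize)             // upper bound on concurrent connections
db.SetMaxIdleConns(poolSize)             // keep the whole pool reusable
db.SetConnMaxLifetime(240 * time.Second) // proactively recycle long-lived connections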
func getPGSQLConnectionString(redactedPwd bool) string {
	var connectionString string
	if config.ConnectionString == "" {
		password := config.Password
		if redactedPwd {
			password = "[redacted]"
		}
		connectionString = fmt.Sprintf("host='%v' port=%v dbname='%v' user='%v' password='%v' sslmode=%v connect_timeout=10",
			config.Host, config.Port, config.Name, config.Username, password, getSSLMode())
	} else {
		connectionString = config.ConnectionString
	}
	return connectionString
}
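With illustrative settings (host 127.0.0.1, port 5432, database and user both sftpgo, sslmode disable), the builder above yields a keyword/value DSN like the following; passing redactedPwd=true swaps the password for "[redacted]" so the string is safe to log:

// Illustrative output of getPGSQLConnectionString(false):
//   host='127.0.0.1' port=5432 dbname='sftpgo' user='sftpgo' password='secret' sslmode=disable connect_timeout=10
// and of getPGSQLConnectionString(true):
//   host='127.0.0.1' port=5432 dbname='sftpgo' user='sftpgo' password='[redacted]' sslmode=disable connect_timeout=10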
func (p *PGSQLProvider) checkAvailability() error {
	return sqlCommonCheckAvailability(p.dbHandle)
}

func (p *PGSQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
	return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}

func (p *PGSQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
	return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}

func (p *PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
	return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}

func (p *PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
	return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
	return sqlCommonGetUsedQuota(username, p.dbHandle)
}

func (p *PGSQLProvider) setUpdatedAt(username string) {
	sqlCommonSetUpdatedAt(username, p.dbHandle)
}

func (p *PGSQLProvider) updateLastLogin(username string) error {
	return sqlCommonUpdateLastLogin(username, p.dbHandle)
}

func (p *PGSQLProvider) updateAdminLastLogin(username string) error {
	return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}

func (p *PGSQLProvider) userExists(username string) (User, error) {
	return sqlCommonGetUserByUsername(username, p.dbHandle)
}

func (p *PGSQLProvider) addUser(user *User) error {
	return sqlCommonAddUser(user, p.dbHandle)
}

func (p *PGSQLProvider) updateUser(user *User) error {
	return sqlCommonUpdateUser(user, p.dbHandle)
}

func (p *PGSQLProvider) deleteUser(user *User) error {
	return sqlCommonDeleteUser(user, p.dbHandle)
}

func (p *PGSQLProvider) dumpUsers() ([]User, error) {
	return sqlCommonDumpUsers(p.dbHandle)
}

func (p *PGSQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
	return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}

func (p *PGSQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
	return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
	return sqlCommonDumpFolders(p.dbHandle)
}

func (p *PGSQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
	return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
	ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
	defer cancel()
	return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}

func (p *PGSQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonAddFolder(folder, p.dbHandle)
}

func (p *PGSQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonUpdateFolder(folder, p.dbHandle)
}

func (p *PGSQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonDeleteFolder(folder, p.dbHandle)
}

func (p *PGSQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
	return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *PGSQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
	return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}

func (p *PGSQLProvider) adminExists(username string) (Admin, error) {
	return sqlCommonGetAdminByUsername(username, p.dbHandle)
}

func (p *PGSQLProvider) addAdmin(admin *Admin) error {
	return sqlCommonAddAdmin(admin, p.dbHandle)
}

func (p *PGSQLProvider) updateAdmin(admin *Admin) error {
	return sqlCommonUpdateAdmin(admin, p.dbHandle)
}

func (p *PGSQLProvider) deleteAdmin(admin *Admin) error {
	return sqlCommonDeleteAdmin(admin, p.dbHandle)
}

func (p *PGSQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
	return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) dumpAdmins() ([]Admin, error) {
	return sqlCommonDumpAdmins(p.dbHandle)
}

func (p *PGSQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
	return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}

func (p *PGSQLProvider) apiKeyExists(keyID string) (APIKey, error) {
	return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}

func (p *PGSQLProvider) addAPIKey(apiKey *APIKey) error {
	return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}

func (p *PGSQLProvider) updateAPIKey(apiKey *APIKey) error {
	return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}

func (p *PGSQLProvider) deleteAPIKey(apiKey *APIKey) error {
	return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}

func (p *PGSQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
	return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) dumpAPIKeys() ([]APIKey, error) {
	return sqlCommonDumpAPIKeys(p.dbHandle)
}

func (p *PGSQLProvider) updateAPIKeyLastUse(keyID string) error {
	return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}

func (p *PGSQLProvider) shareExists(shareID, username string) (Share, error) {
	return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}

func (p *PGSQLProvider) addShare(share *Share) error {
	return sqlCommonAddShare(share, p.dbHandle)
}

func (p *PGSQLProvider) updateShare(share *Share) error {
	return sqlCommonUpdateShare(share, p.dbHandle)
}

func (p *PGSQLProvider) deleteShare(share *Share) error {
	return sqlCommonDeleteShare(share, p.dbHandle)
}

func (p *PGSQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
	return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}

func (p *PGSQLProvider) dumpShares() ([]Share, error) {
	return sqlCommonDumpShares(p.dbHandle)
}

func (p *PGSQLProvider) updateShareLastUse(shareID string, numTokens int) error {
	return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}

func (p *PGSQLProvider) getDefenderHosts(from int64, limit int) ([]*DefenderEntry, error) {
	return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}

func (p *PGSQLProvider) getDefenderHostByIP(ip string, from int64) (*DefenderEntry, error) {
	return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}

func (p *PGSQLProvider) isDefenderHostBanned(ip string) (*DefenderEntry, error) {
	return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}

func (p *PGSQLProvider) updateDefenderBanTime(ip string, minutes int) error {
	return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}

func (p *PGSQLProvider) deleteDefenderHost(ip string) error {
	return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}

func (p *PGSQLProvider) addDefenderEvent(ip string, score int) error {
	return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}

func (p *PGSQLProvider) setDefenderBanTime(ip string, banTime int64) error {
	return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}

func (p *PGSQLProvider) cleanupDefender(from int64) error {
	return sqlCommonDefenderCleanup(from, p.dbHandle)
}

func (p *PGSQLProvider) close() error {
	return p.dbHandle.Close()
}

func (p *PGSQLProvider) reloadConfig() error {
	return nil
}

// initializeDatabase creates the initial database structure
func (p *PGSQLProvider) initializeDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
	if err == nil && dbVersion.Version > 0 {
		return ErrNoInitRequired
	}
	if errors.Is(err, sql.ErrNoRows) {
		return errSchemaVersionEmpty
	}
	initialSQL := strings.ReplaceAll(pgsqlInitial, "{{schema_version}}", sqlTableSchemaVersion)
	initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
	initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
	initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
	if config.Driver == CockroachDataProviderName {
		// Cockroach does not support deferrable constraint validation, we don't need them,
		// we keep these definitions for the PostgreSQL driver to avoid changes for users
		// upgrading from old SFTPGo versions
		initialSQL = strings.ReplaceAll(initialSQL, "DEFERRABLE INITIALLY DEFERRED", "")
	}

	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 10)
}
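The CockroachDB branch above rewrites the PostgreSQL schema on the fly: the DEFERRABLE INITIALLY DEFERRED clauses are stripped because CockroachDB rejects deferrable constraint validation, while keeping them for plain PostgreSQL avoids churn for users upgrading from older SFTPGo versions. A compact sketch of the same driver-conditional rewriting, with an illustrative driver name:

// Sketch: adapt one schema script to multiple PostgreSQL-compatible backends.
func adaptSchema(script, driver string) string {
	if driver == "cockroachdb" { // illustrative; the real constant is CockroachDataProviderName
		// Dropping the clause leaves an immediately-checked foreign key,
		// which is acceptable for this schema.
		script = strings.ReplaceAll(script, "DEFERRABLE INITIALLY DEFERRED", "")
	}
	return script
}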
//nolint:dupl
func (p *PGSQLProvider) migrateDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}

	switch version := dbVersion.Version; {
	case version == sqlDatabaseVersion:
		providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
		return ErrNoInitRequired
	case version < 10:
		err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
		providerLog(logger.LevelError, "%v", err)
		logger.ErrorToConsole("%v", err)
		return err
	case version == 10:
		return updatePGSQLDatabaseFromV10(p.dbHandle)
	case version == 11:
		return updatePGSQLDatabaseFromV11(p.dbHandle)
	case version == 12:
		return updatePGSQLDatabaseFromV12(p.dbHandle)
	case version == 13:
		return updatePGSQLDatabaseFromV13(p.dbHandle)
	case version == 14:
		return updatePGSQLDatabaseFromV14(p.dbHandle)
	default:
		if version > sqlDatabaseVersion {
			providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			return nil
		}
		return fmt.Errorf("database version not handled: %v", version)
	}
}

func (p *PGSQLProvider) revertDatabase(targetVersion int) error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}
	if dbVersion.Version == targetVersion {
		return errors.New("current version match target version, nothing to do")
	}

	switch dbVersion.Version {
	case 15:
		return downgradePGSQLDatabaseFromV15(p.dbHandle)
	case 14:
		return downgradePGSQLDatabaseFromV14(p.dbHandle)
	case 13:
		return downgradePGSQLDatabaseFromV13(p.dbHandle)
	case 12:
		return downgradePGSQLDatabaseFromV12(p.dbHandle)
	case 11:
		return downgradePGSQLDatabaseFromV11(p.dbHandle)
	default:
		return fmt.Errorf("database version not handled: %v", dbVersion.Version)
	}
}

func (p *PGSQLProvider) resetDatabase() error {
	sql := strings.ReplaceAll(pgsqlResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0)
}

func updatePGSQLDatabaseFromV10(dbHandle *sql.DB) error {
	if err := updatePGSQLDatabaseFrom10To11(dbHandle); err != nil {
		return err
	}
	return updatePGSQLDatabaseFromV11(dbHandle)
}

func updatePGSQLDatabaseFromV11(dbHandle *sql.DB) error {
	if err := updatePGSQLDatabaseFrom11To12(dbHandle); err != nil {
		return err
	}
	return updatePGSQLDatabaseFromV12(dbHandle)
}

func updatePGSQLDatabaseFromV12(dbHandle *sql.DB) error {
	if err := updatePGSQLDatabaseFrom12To13(dbHandle); err != nil {
		return err
	}
	return updatePGSQLDatabaseFromV13(dbHandle)
}

func updatePGSQLDatabaseFromV13(dbHandle *sql.DB) error {
	if err := updatePGSQLDatabaseFrom13To14(dbHandle); err != nil {
		return err
	}
	return updatePGSQLDatabaseFromV14(dbHandle)
}

func updatePGSQLDatabaseFromV14(dbHandle *sql.DB) error {
	return updatePGSQLDatabaseFrom14To15(dbHandle)
}

func downgradePGSQLDatabaseFromV15(dbHandle *sql.DB) error {
	if err := downgradePGSQLDatabaseFrom15To14(dbHandle); err != nil {
		return err
	}
	return downgradePGSQLDatabaseFromV14(dbHandle)
}

func downgradePGSQLDatabaseFromV14(dbHandle *sql.DB) error {
	if err := downgradePGSQLDatabaseFrom14To13(dbHandle); err != nil {
		return err
	}
	return downgradePGSQLDatabaseFromV13(dbHandle)
}

func downgradePGSQLDatabaseFromV13(dbHandle *sql.DB) error {
	if err := downgradePGSQLDatabaseFrom13To12(dbHandle); err != nil {
		return err
	}
	return downgradePGSQLDatabaseFromV12(dbHandle)
}

func downgradePGSQLDatabaseFromV12(dbHandle *sql.DB) error {
	if err := downgradePGSQLDatabaseFrom12To11(dbHandle); err != nil {
		return err
	}
	return downgradePGSQLDatabaseFromV11(dbHandle)
}

func downgradePGSQLDatabaseFromV11(dbHandle *sql.DB) error {
	return downgradePGSQLDatabaseFrom11To10(dbHandle)
}

func updatePGSQLDatabaseFrom13To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 13 -> 14")
	providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
	sql := strings.ReplaceAll(pgsqlV14SQL, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}

func updatePGSQLDatabaseFrom14To15(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 14 -> 15")
	providerLog(logger.LevelInfo, "updating database version: 14 -> 15")
	sql := strings.ReplaceAll(pgsqlV15SQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15)
}

func downgradePGSQLDatabaseFrom15To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 15 -> 14")
	providerLog(logger.LevelInfo, "downgrading database version: 15 -> 14")
	sql := strings.ReplaceAll(pgsqlV15DownSQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}

func downgradePGSQLDatabaseFrom14To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 14 -> 13")
	providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
	sql := strings.ReplaceAll(pgsqlV14DownSQL, "{{shares}}", sqlTableShares)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}

func updatePGSQLDatabaseFrom12To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 12 -> 13")
	providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
	sql := strings.ReplaceAll(pgsqlV13SQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}

func downgradePGSQLDatabaseFrom13To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 13 -> 12")
	providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
	sql := strings.ReplaceAll(pgsqlV13DownSQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}

func updatePGSQLDatabaseFrom11To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 11 -> 12")
	providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
	sql := strings.ReplaceAll(pgsqlV12SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}

func downgradePGSQLDatabaseFrom12To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 12 -> 11")
	providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
	sql := strings.ReplaceAll(pgsqlV12DownSQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}

func updatePGSQLDatabaseFrom10To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 10 -> 11")
	providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
	sql := strings.ReplaceAll(pgsqlV11SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}

func downgradePGSQLDatabaseFrom11To10(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 11 -> 10")
	providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
	sql := strings.ReplaceAll(pgsqlV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 10)
}
dataprovider/pgsql_disabled.go (Normal file, 18 lines)
@@ -0,0 +1,18 @@
//go:build nopgsql
// +build nopgsql

package dataprovider

import (
	"errors"

	"github.com/drakkan/sftpgo/v2/version"
)

func init() {
	version.AddFeature("-pgsql")
}

func initializePGSQLProvider() error {
	return errors.New("PostgreSQL disabled at build time")
}
@@ -1,24 +1,10 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package dataprovider

import (
	"sync"
	"time"

	"github.com/drakkan/sftpgo/v2/internal/logger"
	"github.com/drakkan/sftpgo/v2/logger"
)

var delayedQuotaUpdater quotaUpdater
@@ -32,25 +18,18 @@ type quotaObject struct {
	files int
}

type transferQuotaObject struct {
	ulSize int64
	dlSize int64
}

type quotaUpdater struct {
	paramsMutex sync.RWMutex
	waitTime    time.Duration
	sync.RWMutex
	pendingUserQuotaUpdates     map[string]quotaObject
	pendingFolderQuotaUpdates   map[string]quotaObject
	pendingTransferQuotaUpdates map[string]transferQuotaObject
	pendingUserQuotaUpdates   map[string]quotaObject
	pendingFolderQuotaUpdates map[string]quotaObject
}

func newQuotaUpdater() quotaUpdater {
	return quotaUpdater{
		pendingUserQuotaUpdates:     make(map[string]quotaObject),
		pendingFolderQuotaUpdates:   make(map[string]quotaObject),
		pendingTransferQuotaUpdates: make(map[string]transferQuotaObject),
		pendingUserQuotaUpdates:   make(map[string]quotaObject),
		pendingFolderQuotaUpdates: make(map[string]quotaObject),
	}
}

@@ -71,7 +50,6 @@ func (q *quotaUpdater) loop() {
		providerLog(logger.LevelDebug, "delayed quota update check start")
		q.storeUsersQuota()
		q.storeFoldersQuota()
		q.storeUsersTransferQuota()
		providerLog(logger.LevelDebug, "delayed quota update check end")
		waitTime = q.getWaitTime()
	}

@@ -152,36 +130,6 @@ func (q *quotaUpdater) getFolderPendingQuota(name string) (int, int64) {
	return obj.files, obj.size
}

func (q *quotaUpdater) resetUserTransferQuota(username string) {
	q.Lock()
	defer q.Unlock()

	delete(q.pendingTransferQuotaUpdates, username)
}

func (q *quotaUpdater) updateUserTransferQuota(username string, ulSize, dlSize int64) {
	q.Lock()
	defer q.Unlock()

	obj := q.pendingTransferQuotaUpdates[username]
	obj.ulSize += ulSize
	obj.dlSize += dlSize
	if obj.ulSize == 0 && obj.dlSize == 0 {
		delete(q.pendingTransferQuotaUpdates, username)
		return
	}
	q.pendingTransferQuotaUpdates[username] = obj
}

func (q *quotaUpdater) getUserPendingTransferQuota(username string) (int64, int64) {
	q.RLock()
	defer q.RUnlock()

	obj := q.pendingTransferQuotaUpdates[username]

	return obj.ulSize, obj.dlSize
}

func (q *quotaUpdater) getUsernames() []string {
	q.RLock()
	defer q.RUnlock()

@@ -206,25 +154,13 @@ func (q *quotaUpdater) getFoldernames() []string {
	return result
}

func (q *quotaUpdater) getTransferQuotaUsernames() []string {
	q.RLock()
	defer q.RUnlock()

	result := make([]string, 0, len(q.pendingTransferQuotaUpdates))
	for username := range q.pendingTransferQuotaUpdates {
		result = append(result, username)
	}

	return result
}

func (q *quotaUpdater) storeUsersQuota() {
	for _, username := range q.getUsernames() {
		files, size := q.getUserPendingQuota(username)
		if size != 0 || files != 0 {
			err := provider.updateQuota(username, files, size, false)
			if err != nil {
				providerLog(logger.LevelWarn, "unable to update quota delayed for user %q: %v", username, err)
				providerLog(logger.LevelWarn, "unable to update quota delayed for user %#v: %v", username, err)
				continue
			}
			q.updateUserQuota(username, -files, -size)

@@ -238,24 +174,10 @@ func (q *quotaUpdater) storeFoldersQuota() {
		if size != 0 || files != 0 {
			err := provider.updateFolderQuota(name, files, size, false)
			if err != nil {
				providerLog(logger.LevelWarn, "unable to update quota delayed for folder %q: %v", name, err)
				providerLog(logger.LevelWarn, "unable to update quota delayed for folder %#v: %v", name, err)
				continue
			}
			q.updateFolderQuota(name, -files, -size)
		}
	}
}

func (q *quotaUpdater) storeUsersTransferQuota() {
	for _, username := range q.getTransferQuotaUsernames() {
		ulSize, dlSize := q.getUserPendingTransferQuota(username)
		if ulSize != 0 || dlSize != 0 {
			err := provider.updateTransferQuota(username, ulSize, dlSize, false)
			if err != nil {
				providerLog(logger.LevelWarn, "unable to update transfer quota delayed for user %q: %v", username, err)
				continue
			}
			q.updateUserTransferQuota(username, -ulSize, -dlSize)
		}
	}
}
@@ -1,17 +1,3 @@
// Copyright (C) 2019 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

package dataprovider

import (

@@ -22,11 +8,10 @@ import (
	"time"

	"github.com/alexedwards/argon2id"
	passwordvalidator "github.com/wagslane/go-password-validator"
	"golang.org/x/crypto/bcrypt"

	"github.com/drakkan/sftpgo/v2/internal/logger"
	"github.com/drakkan/sftpgo/v2/internal/util"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

// ShareScope defines the supported share scopes

@@ -36,7 +21,6 @@ type ShareScope int
const (
	ShareScopeRead ShareScope = iota + 1
	ShareScopeWrite
	ShareScopeReadWrite
)

const (

@@ -76,6 +60,17 @@ type Share struct {
	IsRestore bool `json:"-"`
}

// GetScopeAsString returns the share's scope as string.
// Used in web pages
func (s *Share) GetScopeAsString() string {
	switch s.Scope {
	case ShareScopeRead:
		return "Read"
	default:
		return "Write"
	}
}

// IsExpired returns true if the share is expired
func (s *Share) IsExpired() bool {
	if s.ExpiresAt > 0 {

@@ -84,6 +79,31 @@ func (s *Share) IsExpired() bool {
	return false
}

// GetInfoString returns share's info as string.
func (s *Share) GetInfoString() string {
	var result strings.Builder
	if s.ExpiresAt > 0 {
		t := util.GetTimeFromMsecSinceEpoch(s.ExpiresAt)
		result.WriteString(fmt.Sprintf("Expiration: %v. ", t.Format("2006-01-02 15:04"))) // YYYY-MM-DD HH:MM
	}
	if s.LastUseAt > 0 {
		t := util.GetTimeFromMsecSinceEpoch(s.LastUseAt)
		result.WriteString(fmt.Sprintf("Last use: %v. ", t.Format("2006-01-02 15:04")))
	}
	if s.MaxTokens > 0 {
		result.WriteString(fmt.Sprintf("Usage: %v/%v. ", s.UsedTokens, s.MaxTokens))
	} else {
		result.WriteString(fmt.Sprintf("Used tokens: %v. ", s.UsedTokens))
	}
	if len(s.AllowFrom) > 0 {
		result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(s.AllowFrom)))
	}
	if s.Password != "" {
		result.WriteString("Password protected.")
	}
	return result.String()
}
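For a share capped at 10 tokens, used 3 times, restricted to two networks and protected by a password, GetInfoString would produce something like the following (illustrative values):

// Illustrative output:
//   Expiration: 2024-03-01 18:30. Last use: 2024-02-20 09:15. Usage: 3/10. Allowed IP/Mask: 2. Password protected.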
// GetAllowedFromAsString returns the allowed IP as comma separated string
func (s *Share) GetAllowedFromAsString() string {
	return strings.Join(s.AllowFrom, ",")

@@ -141,21 +161,12 @@ func (s *Share) HasRedactedPassword() bool {

func (s *Share) hashPassword() error {
	if s.Password != "" && !util.IsStringPrefixInSlice(s.Password, internalHashPwdPrefixes) {
		user, err := UserExists(s.Username, "")
		if err != nil {
			return util.NewGenericError(fmt.Sprintf("unable to validate user: %v", err))
		}
		if minEntropy := user.getMinPasswordEntropy(); minEntropy > 0 {
			if err := passwordvalidator.Validate(s.Password, minEntropy); err != nil {
				return util.NewI18nError(util.NewValidationError(err.Error()), util.I18nErrorPasswordComplexity)
			}
		}
		if config.PasswordHashing.Algo == HashingAlgoBcrypt {
			hashed, err := bcrypt.GenerateFromPassword([]byte(s.Password), config.PasswordHashing.BcryptOptions.Cost)
			if err != nil {
				return err
			}
			s.Password = util.BytesToString(hashed)
			s.Password = string(hashed)
		} else {
			hashed, err := argon2id.CreateHash(s.Password, argon2Params)
			if err != nil {

@@ -170,20 +181,21 @@

func (s *Share) validatePaths() error {
	var paths []string
	for _, p := range s.Paths {
		if strings.TrimSpace(p) != "" {
		p = strings.TrimSpace(p)
		if p != "" {
			paths = append(paths, p)
		}
	}
	s.Paths = paths
	if len(s.Paths) == 0 {
		return util.NewI18nError(util.NewValidationError("at least a shared path is required"), util.I18nErrorSharePathRequired)
		return util.NewValidationError("at least a shared path is required")
	}
	for idx := range s.Paths {
		s.Paths[idx] = util.CleanPath(s.Paths[idx])
	}
	s.Paths = util.RemoveDuplicates(s.Paths, false)
	if s.Scope >= ShareScopeWrite && len(s.Paths) != 1 {
		return util.NewI18nError(util.NewValidationError("the write share scope requires exactly one path"), util.I18nErrorShareWriteScope)
	s.Paths = util.RemoveDuplicates(s.Paths)
	if s.Scope == ShareScopeWrite && len(s.Paths) != 1 {
		return util.NewValidationError("the write share scope requires exactly one path")
	}
	// check nested paths
	if len(s.Paths) > 1 {

@@ -192,8 +204,8 @@ func (s *Share) validatePaths() error {
			if idx == innerIdx {
				continue
			}
			if s.Paths[idx] == "/" || s.Paths[innerIdx] == "/" || util.IsDirOverlapped(s.Paths[idx], s.Paths[innerIdx], true, "/") {
				return util.NewI18nError(util.NewGenericError("shared paths cannot be nested"), util.I18nErrorShareNestedPaths)
			if isVirtualDirOverlapped(s.Paths[idx], s.Paths[innerIdx], true) {
				return util.NewGenericError("shared paths cannot be nested")
			}
		}
	}
|
@ -206,26 +218,26 @@ func (s *Share) validate() error {
|
|||
return util.NewValidationError("share_id is mandatory")
|
||||
}
|
||||
if s.Name == "" {
|
||||
return util.NewI18nError(util.NewValidationError("name is mandatory"), util.I18nErrorNameRequired)
|
||||
return util.NewValidationError("name is mandatory")
|
||||
}
|
||||
if s.Scope < ShareScopeRead || s.Scope > ShareScopeReadWrite {
|
||||
return util.NewI18nError(util.NewValidationError(fmt.Sprintf("invalid scope: %v", s.Scope)), util.I18nErrorShareScope)
|
||||
if s.Scope != ShareScopeRead && s.Scope != ShareScopeWrite {
|
||||
return util.NewValidationError(fmt.Sprintf("invalid scope: %v", s.Scope))
|
||||
}
|
||||
if err := s.validatePaths(); err != nil {
|
||||
return err
|
||||
}
|
||||
if s.ExpiresAt > 0 {
|
||||
if !s.IsRestore && s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
|
||||
return util.NewI18nError(util.NewValidationError("expiration must be in the future"), util.I18nErrorShareExpirationPast)
|
||||
return util.NewValidationError("expiration must be in the future")
|
||||
}
|
||||
} else {
|
||||
s.ExpiresAt = 0
|
||||
}
|
||||
if s.MaxTokens < 0 {
|
||||
return util.NewI18nError(util.NewValidationError("invalid max tokens"), util.I18nErrorShareMaxTokens)
|
||||
return util.NewValidationError("invalid max tokens")
|
||||
}
|
||||
if s.Username == "" {
|
||||
return util.NewI18nError(util.NewValidationError("username is mandatory"), util.I18nErrorUsernameRequired)
|
||||
return util.NewValidationError("username is mandatory")
|
||||
}
|
||||
if s.HasRedactedPassword() {
|
||||
return util.NewValidationError("cannot save a share with a redacted password")
|
||||
|
@ -233,21 +245,18 @@ func (s *Share) validate() error {
|
|||
if err := s.hashPassword(); err != nil {
|
||||
return err
|
||||
}
|
||||
s.AllowFrom = util.RemoveDuplicates(s.AllowFrom, false)
|
||||
s.AllowFrom = util.RemoveDuplicates(s.AllowFrom)
|
||||
for _, IPMask := range s.AllowFrom {
|
||||
_, _, err := net.ParseCIDR(IPMask)
|
||||
if err != nil {
|
||||
return util.NewI18nError(
|
||||
util.NewValidationError(fmt.Sprintf("could not parse allow from entry %q : %v", IPMask, err)),
|
||||
util.I18nErrorInvalidIPMask,
|
||||
)
|
||||
return util.NewValidationError(fmt.Sprintf("could not parse allow from entry %#v : %v", IPMask, err))
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// CheckCredentials verifies the share credentials if a password if set
|
||||
func (s *Share) CheckCredentials(password string) (bool, error) {
|
||||
// CheckPassword verifies the share password if set
|
||||
func (s *Share) CheckPassword(password string) (bool, error) {
|
||||
if s.Password == "" {
|
||||
return true, nil
|
||||
}
|
||||
|
@ -267,22 +276,14 @@ func (s *Share) CheckCredentials(password string) (bool, error) {
|
|||
return match, err
|
||||
}
|
||||
|
||||
// GetRelativePath returns the specified absolute path as relative to the share base path
|
||||
func (s *Share) GetRelativePath(name string) string {
|
||||
if len(s.Paths) == 0 {
|
||||
return ""
|
||||
}
|
||||
return util.CleanPath(strings.TrimPrefix(name, s.Paths[0]))
|
||||
}
|
||||
|
||||
// IsUsable checks if the share is usable from the specified IP
|
||||
func (s *Share) IsUsable(ip string) (bool, error) {
|
||||
if s.MaxTokens > 0 && s.UsedTokens >= s.MaxTokens {
|
||||
return false, util.NewI18nError(util.NewRecordNotFoundError("max share usage exceeded"), util.I18nErrorShareUsage)
|
||||
return false, util.NewRecordNotFoundError("max share usage exceeded")
|
||||
}
|
||||
if s.ExpiresAt > 0 {
|
||||
if s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
|
||||
return false, util.NewI18nError(util.NewRecordNotFoundError("share expired"), util.I18nErrorShareExpired)
|
||||
return false, util.NewRecordNotFoundError("share expired")
|
||||
}
|
||||
}
|
||||
if len(s.AllowFrom) == 0 {
|
||||
|
@ -290,7 +291,7 @@ func (s *Share) IsUsable(ip string) (bool, error) {
|
|||
}
|
||||
parsedIP := net.ParseIP(ip)
|
||||
if parsedIP == nil {
|
||||
return false, util.NewI18nError(ErrLoginNotAllowedFromIP, util.I18nErrorLoginFromIPDenied)
|
||||
return false, ErrLoginNotAllowedFromIP
|
||||
}
|
||||
for _, ipMask := range s.AllowFrom {
|
||||
_, network, err := net.ParseCIDR(ipMask)
|
||||
|
@ -301,5 +302,5 @@ func (s *Share) IsUsable(ip string) (bool, error) {
|
|||
return true, nil
|
||||
}
|
||||
}
|
||||
return false, util.NewI18nError(ErrLoginNotAllowedFromIP, util.I18nErrorLoginFromIPDenied)
|
||||
return false, ErrLoginNotAllowedFromIP
|
||||
}
|
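Both sides of this diff implement the share allow-list the same way: parse the client IP, then test it against each configured CIDR mask with the standard `net` package. A minimal, self-contained sketch of that check follows; the `isAllowed` helper and the sample addresses are illustrative only, not part of the SFTPGo API:

```go
package main

import (
	"fmt"
	"net"
)

// isAllowed mirrors the allow-list logic of Share.IsUsable above:
// the client IP must fall inside at least one of the CIDR masks.
func isAllowed(ip string, allowFrom []string) bool {
	parsedIP := net.ParseIP(ip)
	if parsedIP == nil {
		return false
	}
	for _, ipMask := range allowFrom {
		_, network, err := net.ParseCIDR(ipMask)
		if err != nil {
			continue // invalid masks are rejected earlier, at validation time
		}
		if network.Contains(parsedIP) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isAllowed("192.168.1.10", []string{"192.168.1.0/24"})) // true
	fmt.Println(isAllowed("10.0.0.1", []string{"192.168.1.0/24"}))     // false
}
```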
2047 dataprovider/sqlcommon.go (Normal file)
File diff suppressed because it is too large
615 dataprovider/sqlite.go (Normal file)
@@ -0,0 +1,615 @@
//go:build !nosqlite
// +build !nosqlite

package dataprovider

import (
	"context"
	"crypto/x509"
	"database/sql"
	"errors"
	"fmt"
	"path/filepath"
	"strings"

	// we import go-sqlite3 here to be able to disable SQLite support using a build tag
	_ "github.com/mattn/go-sqlite3"

	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
	"github.com/drakkan/sftpgo/v2/version"
	"github.com/drakkan/sftpgo/v2/vfs"
)

const (
	sqliteResetSQL = `DROP TABLE IF EXISTS "{{api_keys}}";
DROP TABLE IF EXISTS "{{folders_mapping}}";
DROP TABLE IF EXISTS "{{admins}}";
DROP TABLE IF EXISTS "{{folders}}";
DROP TABLE IF EXISTS "{{shares}}";
DROP TABLE IF EXISTS "{{users}}";
DROP TABLE IF EXISTS "{{defender_events}}";
DROP TABLE IF EXISTS "{{defender_hosts}}";
DROP TABLE IF EXISTS "{{schema_version}}";
`
	sqliteInitialSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "filesystem" text NULL);
CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"status" integer NOT NULL, "expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL,
"public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL,
"filesystem" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id")
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, "user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (10);
`
	sqliteV11SQL = `CREATE TABLE "{{api_keys}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "description" text NULL,
"admin_id" integer NULL REFERENCES "{{admins}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"user_id" integer NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
`
	sqliteV11DownSQL = `DROP TABLE "{{api_keys}}";`
	sqliteV12SQL     = `ALTER TABLE "{{admins}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ADD COLUMN "last_login" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
`
	sqliteV12DownSQL = `DROP INDEX "{{prefix}}users_updated_at_idx";
ALTER TABLE "{{users}}" DROP COLUMN "updated_at";
ALTER TABLE "{{users}}" DROP COLUMN "created_at";
ALTER TABLE "{{admins}}" DROP COLUMN "created_at";
ALTER TABLE "{{admins}}" DROP COLUMN "updated_at";
ALTER TABLE "{{admins}}" DROP COLUMN "last_login";
`
	sqliteV13SQL     = `ALTER TABLE "{{users}}" ADD COLUMN "email" varchar(255) NULL;`
	sqliteV13DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "email";`
	sqliteV14SQL     = `CREATE TABLE "{{shares}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL, "max_tokens" integer NOT NULL,
"used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
`
	sqliteV14DownSQL = `DROP TABLE "{{shares}}";`
	sqliteV15SQL     = `CREATE TABLE "{{defender_hosts}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"ip" varchar(50) NOT NULL UNIQUE, "ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_events}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "date_time" bigint NOT NULL,
"score" integer NOT NULL, "host_id" integer NOT NULL REFERENCES "{{defender_hosts}}" ("id") ON DELETE CASCADE
DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
`
	sqliteV15DownSQL = `DROP TABLE "{{defender_events}}";
DROP TABLE "{{defender_hosts}}";
`
)

// SQLiteProvider auth provider for SQLite database
type SQLiteProvider struct {
	dbHandle *sql.DB
}

func init() {
	version.AddFeature("+sqlite")
}

func initializeSQLiteProvider(basePath string) error {
	var err error
	var connectionString string

	if config.ConnectionString == "" {
		dbPath := config.Name
		if !util.IsFileInputValid(dbPath) {
			return fmt.Errorf("invalid database path: %#v", dbPath)
		}
		if !filepath.IsAbs(dbPath) {
			dbPath = filepath.Join(basePath, dbPath)
		}
		connectionString = fmt.Sprintf("file:%v?cache=shared&_foreign_keys=1", dbPath)
	} else {
		connectionString = config.ConnectionString
	}
	dbHandle, err := sql.Open("sqlite3", connectionString)
	if err == nil {
		providerLog(logger.LevelDebug, "sqlite database handle created, connection string: %#v", connectionString)
		dbHandle.SetMaxOpenConns(1)
		provider = &SQLiteProvider{dbHandle: dbHandle}
	} else {
		providerLog(logger.LevelError, "error creating sqlite database handler, connection string: %#v, error: %v",
			connectionString, err)
	}
	return err
}

func (p *SQLiteProvider) checkAvailability() error {
	return sqlCommonCheckAvailability(p.dbHandle)
}

func (p *SQLiteProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
	return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}

func (p *SQLiteProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
	return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}

func (p *SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
	return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}

func (p *SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
	return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
	return sqlCommonGetUsedQuota(username, p.dbHandle)
}

func (p *SQLiteProvider) setUpdatedAt(username string) {
	sqlCommonSetUpdatedAt(username, p.dbHandle)
}

func (p *SQLiteProvider) updateLastLogin(username string) error {
	return sqlCommonUpdateLastLogin(username, p.dbHandle)
}

func (p *SQLiteProvider) updateAdminLastLogin(username string) error {
	return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}

func (p *SQLiteProvider) userExists(username string) (User, error) {
	return sqlCommonGetUserByUsername(username, p.dbHandle)
}

func (p *SQLiteProvider) addUser(user *User) error {
	return sqlCommonAddUser(user, p.dbHandle)
}

func (p *SQLiteProvider) updateUser(user *User) error {
	return sqlCommonUpdateUser(user, p.dbHandle)
}

func (p *SQLiteProvider) deleteUser(user *User) error {
	return sqlCommonDeleteUser(user, p.dbHandle)
}

func (p *SQLiteProvider) dumpUsers() ([]User, error) {
	return sqlCommonDumpUsers(p.dbHandle)
}

// SQLite provider cannot be shared, so we always return no recently updated users
func (p *SQLiteProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
	return nil, nil
}

func (p *SQLiteProvider) getUsers(limit int, offset int, order string) ([]User, error) {
	return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}

func (p *SQLiteProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
	return sqlCommonDumpFolders(p.dbHandle)
}

func (p *SQLiteProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
	return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}

func (p *SQLiteProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
	ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
	defer cancel()
	return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}

func (p *SQLiteProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonAddFolder(folder, p.dbHandle)
}

func (p *SQLiteProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonUpdateFolder(folder, p.dbHandle)
}

func (p *SQLiteProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonDeleteFolder(folder, p.dbHandle)
}

func (p *SQLiteProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
	return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *SQLiteProvider) getUsedFolderQuota(name string) (int, int64, error) {
	return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}

func (p *SQLiteProvider) adminExists(username string) (Admin, error) {
	return sqlCommonGetAdminByUsername(username, p.dbHandle)
}

func (p *SQLiteProvider) addAdmin(admin *Admin) error {
	return sqlCommonAddAdmin(admin, p.dbHandle)
}

func (p *SQLiteProvider) updateAdmin(admin *Admin) error {
	return sqlCommonUpdateAdmin(admin, p.dbHandle)
}

func (p *SQLiteProvider) deleteAdmin(admin *Admin) error {
	return sqlCommonDeleteAdmin(admin, p.dbHandle)
}

func (p *SQLiteProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
	return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}

func (p *SQLiteProvider) dumpAdmins() ([]Admin, error) {
	return sqlCommonDumpAdmins(p.dbHandle)
}

func (p *SQLiteProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
	return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}

func (p *SQLiteProvider) apiKeyExists(keyID string) (APIKey, error) {
	return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}

func (p *SQLiteProvider) addAPIKey(apiKey *APIKey) error {
	return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}

func (p *SQLiteProvider) updateAPIKey(apiKey *APIKey) error {
	return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}

func (p *SQLiteProvider) deleteAPIKey(apiKey *APIKey) error {
	return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}

func (p *SQLiteProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
	return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}

func (p *SQLiteProvider) dumpAPIKeys() ([]APIKey, error) {
	return sqlCommonDumpAPIKeys(p.dbHandle)
}

func (p *SQLiteProvider) updateAPIKeyLastUse(keyID string) error {
	return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}

func (p *SQLiteProvider) shareExists(shareID, username string) (Share, error) {
	return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}

func (p *SQLiteProvider) addShare(share *Share) error {
	return sqlCommonAddShare(share, p.dbHandle)
}

func (p *SQLiteProvider) updateShare(share *Share) error {
	return sqlCommonUpdateShare(share, p.dbHandle)
}

func (p *SQLiteProvider) deleteShare(share *Share) error {
	return sqlCommonDeleteShare(share, p.dbHandle)
}

func (p *SQLiteProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
	return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}

func (p *SQLiteProvider) dumpShares() ([]Share, error) {
	return sqlCommonDumpShares(p.dbHandle)
}

func (p *SQLiteProvider) updateShareLastUse(shareID string, numTokens int) error {
	return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}

func (p *SQLiteProvider) getDefenderHosts(from int64, limit int) ([]*DefenderEntry, error) {
	return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}

func (p *SQLiteProvider) getDefenderHostByIP(ip string, from int64) (*DefenderEntry, error) {
	return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}

func (p *SQLiteProvider) isDefenderHostBanned(ip string) (*DefenderEntry, error) {
	return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}

func (p *SQLiteProvider) updateDefenderBanTime(ip string, minutes int) error {
	return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}

func (p *SQLiteProvider) deleteDefenderHost(ip string) error {
	return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}

func (p *SQLiteProvider) addDefenderEvent(ip string, score int) error {
	return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}

func (p *SQLiteProvider) setDefenderBanTime(ip string, banTime int64) error {
	return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}

func (p *SQLiteProvider) cleanupDefender(from int64) error {
	return sqlCommonDefenderCleanup(from, p.dbHandle)
}

func (p *SQLiteProvider) close() error {
	return p.dbHandle.Close()
}

func (p *SQLiteProvider) reloadConfig() error {
	return nil
}

// initializeDatabase creates the initial database structure
func (p *SQLiteProvider) initializeDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
	if err == nil && dbVersion.Version > 0 {
		return ErrNoInitRequired
	}
	if errors.Is(err, sql.ErrNoRows) {
		return errSchemaVersionEmpty
	}
	initialSQL := strings.ReplaceAll(sqliteInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
	initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
	initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
	initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)

	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 10)
}

//nolint:dupl
func (p *SQLiteProvider) migrateDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}

	switch version := dbVersion.Version; {
	case version == sqlDatabaseVersion:
		providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
		return ErrNoInitRequired
	case version < 10:
		err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
		providerLog(logger.LevelError, "%v", err)
		logger.ErrorToConsole("%v", err)
		return err
	case version == 10:
		return updateSQLiteDatabaseFromV10(p.dbHandle)
	case version == 11:
		return updateSQLiteDatabaseFromV11(p.dbHandle)
	case version == 12:
		return updateSQLiteDatabaseFromV12(p.dbHandle)
	case version == 13:
		return updateSQLiteDatabaseFromV13(p.dbHandle)
	case version == 14:
		return updateSQLiteDatabaseFromV14(p.dbHandle)
	default:
		if version > sqlDatabaseVersion {
			providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			return nil
		}
		return fmt.Errorf("database version not handled: %v", version)
	}
}

func (p *SQLiteProvider) revertDatabase(targetVersion int) error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}
	if dbVersion.Version == targetVersion {
		return errors.New("current version match target version, nothing to do")
	}

	switch dbVersion.Version {
	case 15:
		return downgradeSQLiteDatabaseFromV15(p.dbHandle)
	case 14:
		return downgradeSQLiteDatabaseFromV14(p.dbHandle)
	case 13:
		return downgradeSQLiteDatabaseFromV13(p.dbHandle)
	case 12:
		return downgradeSQLiteDatabaseFromV12(p.dbHandle)
	case 11:
		return downgradeSQLiteDatabaseFromV11(p.dbHandle)
	default:
		return fmt.Errorf("database version not handled: %v", dbVersion.Version)
	}
}

func (p *SQLiteProvider) resetDatabase() error {
	sql := strings.ReplaceAll(sqliteResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0)
}

func updateSQLiteDatabaseFromV10(dbHandle *sql.DB) error {
	if err := updateSQLiteDatabaseFrom10To11(dbHandle); err != nil {
		return err
	}
	return updateSQLiteDatabaseFromV11(dbHandle)
}

func updateSQLiteDatabaseFromV11(dbHandle *sql.DB) error {
	if err := updateSQLiteDatabaseFrom11To12(dbHandle); err != nil {
		return err
	}
	return updateSQLiteDatabaseFromV12(dbHandle)
}

func updateSQLiteDatabaseFromV12(dbHandle *sql.DB) error {
	if err := updateSQLiteDatabaseFrom12To13(dbHandle); err != nil {
		return err
	}
	return updateSQLiteDatabaseFromV13(dbHandle)
}

func updateSQLiteDatabaseFromV13(dbHandle *sql.DB) error {
	if err := updateSQLiteDatabaseFrom13To14(dbHandle); err != nil {
		return err
	}
	return updateSQLiteDatabaseFromV14(dbHandle)
}

func updateSQLiteDatabaseFromV14(dbHandle *sql.DB) error {
	return updateSQLiteDatabaseFrom14To15(dbHandle)
}

func downgradeSQLiteDatabaseFromV15(dbHandle *sql.DB) error {
	if err := downgradeSQLiteDatabaseFrom15To14(dbHandle); err != nil {
		return err
	}
	return downgradeSQLiteDatabaseFromV14(dbHandle)
}

func downgradeSQLiteDatabaseFromV14(dbHandle *sql.DB) error {
	if err := downgradeSQLiteDatabaseFrom14To13(dbHandle); err != nil {
		return err
	}
	return downgradeSQLiteDatabaseFromV13(dbHandle)
}

func downgradeSQLiteDatabaseFromV13(dbHandle *sql.DB) error {
	if err := downgradeSQLiteDatabaseFrom13To12(dbHandle); err != nil {
		return err
	}
	return downgradeSQLiteDatabaseFromV12(dbHandle)
}

func downgradeSQLiteDatabaseFromV12(dbHandle *sql.DB) error {
	if err := downgradeSQLiteDatabaseFrom12To11(dbHandle); err != nil {
		return err
	}
	return downgradeSQLiteDatabaseFromV11(dbHandle)
}

func downgradeSQLiteDatabaseFromV11(dbHandle *sql.DB) error {
	return downgradeSQLiteDatabaseFrom11To10(dbHandle)
}

func updateSQLiteDatabaseFrom13To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 13 -> 14")
	providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
	sql := strings.ReplaceAll(sqliteV14SQL, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}

func updateSQLiteDatabaseFrom14To15(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 14 -> 15")
	providerLog(logger.LevelInfo, "updating database version: 14 -> 15")
	sql := strings.ReplaceAll(sqliteV15SQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15)
}

func downgradeSQLiteDatabaseFrom15To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 15 -> 14")
	providerLog(logger.LevelInfo, "downgrading database version: 15 -> 14")
	sql := strings.ReplaceAll(sqliteV15DownSQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}

func downgradeSQLiteDatabaseFrom14To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 14 -> 13")
	providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
	sql := strings.ReplaceAll(sqliteV14DownSQL, "{{shares}}", sqlTableShares)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}

func updateSQLiteDatabaseFrom12To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 12 -> 13")
	providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
	sql := strings.ReplaceAll(sqliteV13SQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}

func downgradeSQLiteDatabaseFrom13To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 13 -> 12")
	providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
	sql := strings.ReplaceAll(sqliteV13DownSQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}

func updateSQLiteDatabaseFrom11To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 11 -> 12")
	providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
	sql := strings.ReplaceAll(sqliteV12SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}

func downgradeSQLiteDatabaseFrom12To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 12 -> 11")
	providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
	sql := strings.ReplaceAll(sqliteV12DownSQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}

func updateSQLiteDatabaseFrom10To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 10 -> 11")
	providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
	sql := strings.ReplaceAll(sqliteV11SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}

func downgradeSQLiteDatabaseFrom11To10(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 11 -> 10")
	providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
	sql := strings.ReplaceAll(sqliteV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 10)
}

/*func setPragmaFK(dbHandle *sql.DB, value string) error {
	ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
	defer cancel()

	sql := fmt.Sprintf("PRAGMA foreign_keys=%v;", value)

	_, err := dbHandle.ExecContext(ctx, sql)
	return err
}*/
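Every migration helper above follows the same pattern: a SQL template with `{{table}}` placeholders is specialized with `strings.ReplaceAll` and then executed together with a schema-version bump. A small runnable sketch of just the substitution step (the `sftpgo_` table names here are examples, not values taken from this diff):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Same substitution order as updateSQLiteDatabaseFrom13To14 above:
	// concrete table names first, then the common prefix.
	const tmpl = `CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");`
	sql := strings.ReplaceAll(tmpl, "{{shares}}", "sftpgo_shares")
	sql = strings.ReplaceAll(sql, "{{prefix}}", "sftpgo_")
	fmt.Println(sql)
	// Output: CREATE INDEX "sftpgo_shares_user_id_idx" ON "sftpgo_shares" ("user_id");
}
```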
18 dataprovider/sqlite_disabled.go (Normal file)
@@ -0,0 +1,18 @@
//go:build nosqlite
// +build nosqlite

package dataprovider

import (
	"errors"

	"github.com/drakkan/sftpgo/v2/version"
)

func init() {
	version.AddFeature("-sqlite")
}

func initializeSQLiteProvider(basePath string) error {
	return errors.New("SQLite disabled at build time")
}
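This stub pairs with sqlite.go above through complementary build tags: `go build -tags nosqlite` compiles this file instead of the real provider, so the symbol `initializeSQLiteProvider` always exists and callers never change. A generic sketch of the same pattern follows; the `nofeature` tag, package, and names are hypothetical, chosen only to illustrate the mechanism:

```go
//go:build nofeature

// Compiled only with `go build -tags nofeature`; a sibling file carries
// `//go:build !nofeature` and the real implementation.
package featuredemo

import "errors"

// initializeFeature keeps the same name and signature as the real version,
// so the rest of the package never needs to know which variant was built.
func initializeFeature(basePath string) error {
	_ = basePath // unused in the stub
	return errors.New("feature disabled at build time")
}
```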
442 dataprovider/sqlqueries.go (Normal file)
@@ -0,0 +1,442 @@
package dataprovider

import (
	"fmt"
	"strconv"
	"strings"

	"github.com/drakkan/sftpgo/v2/vfs"
)

const (
	selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,used_quota_size," +
		"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem," +
		"additional_info,description,email,created_at,updated_at"
	selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem"
	selectAdminFields  = "id,username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login"
	selectAPIKeyFields = "key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id"
	selectShareFields  = "s.share_id,s.name,s.description,s.scope,s.paths,u.username,s.created_at,s.updated_at,s.last_use_at," +
		"s.expires_at,s.password,s.max_tokens,s.used_tokens,s.allow_from"
)

func getSQLPlaceholders() []string {
	var placeholders []string
	for i := 1; i <= 30; i++ {
		if config.Driver == PGSQLDataProviderName || config.Driver == CockroachDataProviderName {
			placeholders = append(placeholders, fmt.Sprintf("$%v", i))
		} else {
			placeholders = append(placeholders, "?")
		}
	}
	return placeholders
}

func getAddDefenderHostQuery() string {
	if config.Driver == MySQLDataProviderName {
		return fmt.Sprintf("INSERT INTO %v (`ip`,`updated_at`,`ban_time`) VALUES (%v,%v,0) ON DUPLICATE KEY UPDATE `updated_at`=VALUES(`updated_at`)",
			sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
	}
	return fmt.Sprintf(`INSERT INTO %v (ip,updated_at,ban_time) VALUES (%v,%v,0) ON CONFLICT (ip) DO UPDATE SET updated_at = EXCLUDED.updated_at RETURNING id`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getAddDefenderEventQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (date_time,score,host_id) VALUES (%v,%v,(SELECT id from %v WHERE ip = %v))`,
		sqlTableDefenderEvents, sqlPlaceholders[0], sqlPlaceholders[1], sqlTableDefenderHosts, sqlPlaceholders[2])
}

func getDefenderHostsQuery() string {
	return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE updated_at >= %v OR ban_time > 0 ORDER BY updated_at DESC LIMIT %v`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDefenderHostQuery() string {
	return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE ip = %v AND (updated_at >= %v OR ban_time > 0)`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDefenderEventsQuery(hostIDS []int64) string {
	var sb strings.Builder
	for _, hID := range hostIDS {
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(hID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	} else {
		sb.WriteString("(0)")
	}
	return fmt.Sprintf(`SELECT host_id,SUM(score) FROM %v WHERE date_time >= %v AND host_id IN %v GROUP BY host_id`,
		sqlTableDefenderEvents, sqlPlaceholders[0], sb.String())
}

func getDefenderIsHostBannedQuery() string {
	return fmt.Sprintf(`SELECT id FROM %v WHERE ip = %v AND ban_time >= %v`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDefenderIncrementBanTimeQuery() string {
	return fmt.Sprintf(`UPDATE %v SET ban_time = ban_time + %v WHERE ip = %v`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDefenderSetBanTimeQuery() string {
	return fmt.Sprintf(`UPDATE %v SET ban_time = %v WHERE ip = %v`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDeleteDefenderHostQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE ip = %v`, sqlTableDefenderHosts, sqlPlaceholders[0])
}

func getDefenderHostsCleanupQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE ban_time < %v AND NOT EXISTS (
		SELECT id FROM %v WHERE %v.host_id = %v.id AND %v.date_time > %v)`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlTableDefenderEvents, sqlTableDefenderEvents, sqlTableDefenderHosts,
		sqlTableDefenderEvents, sqlPlaceholders[1])
}

func getDefenderEventsCleanupQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE date_time < %v`, sqlTableDefenderEvents, sqlPlaceholders[0])
}

func getAdminByUsernameQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectAdminFields, sqlTableAdmins, sqlPlaceholders[0])
}

func getAdminsQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectAdminFields, sqlTableAdmins,
		order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDumpAdminsQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v`, selectAdminFields, sqlTableAdmins)
}

func getAddAdminQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login)
		VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
		sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
		sqlPlaceholders[8], sqlPlaceholders[9])
}

func getUpdateAdminQuery() string {
	return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v,description=%v,updated_at=%v
		WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
		sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8])
}

func getDeleteAdminQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0])
}

func getShareByIDQuery(filterUser bool) string {
	if filterUser {
		return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v AND u.username = %v`,
			selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
	}
	return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v`,
		selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0])
}

func getSharesQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE u.username = %v ORDER BY s.share_id %v LIMIT %v OFFSET %v`,
		selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
}

func getDumpSharesQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id`,
		selectShareFields, sqlTableShares, sqlTableUsers)
}

func getAddShareQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (share_id,name,description,scope,paths,created_at,updated_at,last_use_at,
		expires_at,password,max_tokens,used_tokens,allow_from,user_id) VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`,
		sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1],
		sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
		sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11],
		sqlPlaceholders[12], sqlPlaceholders[13])
}

func getUpdateShareRestoreQuery() string {
	return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,created_at=%v,updated_at=%v,
		last_use_at=%v,expires_at=%v,password=%v,max_tokens=%v,used_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
		sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
		sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
		sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13])
}

func getUpdateShareQuery() string {
	return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,updated_at=%v,expires_at=%v,
		password=%v,max_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
		sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
		sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
		sqlPlaceholders[10])
}

func getDeleteShareQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE share_id = %v`, sqlTableShares, sqlPlaceholders[0])
}

func getAPIKeyByIDQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE key_id = %v`, selectAPIKeyFields, sqlTableAPIKeys, sqlPlaceholders[0])
}

func getAPIKeysQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v ORDER BY key_id %v LIMIT %v OFFSET %v`, selectAPIKeyFields, sqlTableAPIKeys,
		order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDumpAPIKeysQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v`, selectAPIKeyFields, sqlTableAPIKeys)
}

func getAddAPIKeyQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id)
		VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1],
		sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
		sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10])
}

func getUpdateAPIKeyQuery() string {
	return fmt.Sprintf(`UPDATE %v SET name=%v,scope=%v,expires_at=%v,user_id=%v,admin_id=%v,description=%v,updated_at=%v
		WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
		sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7])
}

func getDeleteAPIKeyQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0])
}

func getRelatedUsersForAPIKeysQuery(apiKeys []APIKey) string {
	var sb strings.Builder
	for _, k := range apiKeys {
		if k.userID == 0 {
			continue
		}
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(k.userID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	} else {
		sb.WriteString("(0)")
	}
	return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableUsers, sb.String())
}

func getRelatedAdminsForAPIKeysQuery(apiKeys []APIKey) string {
	var sb strings.Builder
	for _, k := range apiKeys {
		if k.adminID == 0 {
			continue
		}
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(k.adminID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	} else {
		sb.WriteString("(0)")
	}
	return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableAdmins, sb.String())
}

func getUserByUsernameQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}

func getUsersQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectUserFields, sqlTableUsers,
		order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getRecentlyUpdatedUsersQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE updated_at >= %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}

func getDumpUsersQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v`, selectUserFields, sqlTableUsers)
}

func getDumpFoldersQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v`, selectFolderFields, sqlTableFolders)
}

func getUpdateQuotaQuery(reset bool) string {
	if reset {
		return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
			WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
	}
	return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
		WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}

func getSetUpdateAtQuery() string {
	return fmt.Sprintf(`UPDATE %v SET updated_at = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateLastLoginQuery() string {
	return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateAdminLastLoginQuery() string {
	return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateAPIKeyLastUseQuery() string {
	return fmt.Sprintf(`UPDATE %v SET last_use_at = %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateShareLastUseQuery() string {
	return fmt.Sprintf(`UPDATE %v SET last_use_at = %v, used_tokens = used_tokens +%v WHERE share_id = %v`,
		sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2])
}

func getQuotaQuery() string {
	return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE username = %v`, sqlTableUsers,
		sqlPlaceholders[0])
}

func getAddUserQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,
		used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,status,last_login,expiration_date,filters,
		filesystem,additional_info,description,email,created_at,updated_at)
		VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
		sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
		sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13],
		sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19],
		sqlPlaceholders[20])
}

func getUpdateUserQuery() string {
	return fmt.Sprintf(`UPDATE %v SET password=%v,public_keys=%v,home_dir=%v,uid=%v,gid=%v,max_sessions=%v,quota_size=%v,
		quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v,
		additional_info=%v,description=%v,email=%v,updated_at=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
		sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
		sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15],
		sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19])
}

func getDeleteUserQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0])
}

func getFolderByNameQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE name = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}

func getAddFolderQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
		VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
		sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}

func getUpdateFolderQuery() string {
	return fmt.Sprintf(`UPDATE %v SET path=%v,description=%v,filesystem=%v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0],
		sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}

func getDeleteFolderQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableFolders, sqlPlaceholders[0])
}

func getUpsertFolderQuery() string {
	if config.Driver == MySQLDataProviderName {
		return fmt.Sprintf("INSERT INTO %v (`path`,`used_quota_size`,`used_quota_files`,`last_quota_update`,`name`,"+
			"`description`,`filesystem`) VALUES (%v,%v,%v,%v,%v,%v,%v) ON DUPLICATE KEY UPDATE "+
			"`path`=VALUES(`path`),`description`=VALUES(`description`),`filesystem`=VALUES(`filesystem`)",
			sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
			sqlPlaceholders[5], sqlPlaceholders[6])
	}
	return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
		VALUES (%v,%v,%v,%v,%v,%v,%v) ON CONFLICT (name) DO UPDATE SET path = EXCLUDED.path,description=EXCLUDED.description,
		filesystem=EXCLUDED.filesystem`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
		sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}

func getClearFolderMappingQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE user_id = (SELECT id FROM %v WHERE username = %v)`, sqlTableFoldersMapping,
		sqlTableUsers, sqlPlaceholders[0])
}

func getAddFolderMappingQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (virtual_path,quota_size,quota_files,folder_id,user_id)
		VALUES (%v,%v,%v,(SELECT id FROM %v WHERE name = %v),(SELECT id FROM %v WHERE username = %v))`,
		sqlTableFoldersMapping, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlTableFolders,
		sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
}

func getFoldersQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v ORDER BY name %v LIMIT %v OFFSET %v`, selectFolderFields, sqlTableFolders,
		order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateFolderQuotaQuery(reset bool) string {
	if reset {
		return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
			WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
	}
	return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
		WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}

func getQuotaFolderQuery() string {
	return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE name = %v`, sqlTableFolders,
		sqlPlaceholders[0])
}

func getRelatedFoldersForUsersQuery(users []User) string {
	var sb strings.Builder
	for _, u := range users {
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(u.ID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	}
	return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,
		fm.quota_size,fm.quota_files,fm.user_id,f.filesystem,f.description FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE
		fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders, sqlTableFoldersMapping, sb.String())
}

func getRelatedUsersForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
	var sb strings.Builder
	for _, f := range folders {
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(f.ID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	}
	return fmt.Sprintf(`SELECT fm.folder_id,u.username FROM %v fm INNER JOIN %v u ON fm.user_id = u.id
		WHERE fm.folder_id IN %v ORDER BY fm.folder_id`, sqlTableFoldersMapping, sqlTableUsers, sb.String())
}

func getDatabaseVersionQuery() string {
	return fmt.Sprintf("SELECT version from %v LIMIT 1", sqlTableSchemaVersion)
}

func getUpdateDBVersionQuery() string {
	return fmt.Sprintf(`UPDATE %v SET version=%v`, sqlTableSchemaVersion, sqlPlaceholders[0])
}
1299 dataprovider/user.go Normal file
File diff suppressed because it is too large.
204 docker/README.md Normal file
@@ -0,0 +1,204 @@
# Official Docker image

SFTPGo provides an official Docker image; it is available on both [Docker Hub](https://hub.docker.com/r/drakkan/sftpgo) and on [GitHub Container Registry](https://github.com/users/drakkan/packages/container/package/sftpgo).

## Supported tags and respective Dockerfile links

- [v2.2.3, v2.2, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile)
- [v2.2.3-alpine, v2.2-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.alpine)
- [v2.2.3-slim, v2.2-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile)
- [v2.2.3-alpine-slim, v2.2-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.alpine)
- [v2.2.3-distroless-slim, v2.2-distroless-slim, v2-distroless-slim, distroless-slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.distroless)
- [edge](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
- [edge-slim](../Dockerfile)
- [edge-alpine-slim](../Dockerfile.alpine)
- [edge-distroless-slim](../Dockerfile.distroless)
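
For example, to fetch a specific version from either registry, pick any tag from the list above:

```shell
docker pull drakkan/sftpgo:v2.2.3
docker pull ghcr.io/drakkan/sftpgo:v2.2.3
```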

## How to use the SFTPGo image

### Start a `sftpgo` server instance

Starting a SFTPGo instance is simple:

```shell
docker run --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```

... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.

Now visit [http://localhost:8080/web/admin](http://localhost:8080/web/admin), replacing `localhost` with the appropriate IP address if SFTPGo is not reachable on localhost, create the first admin and a new SFTPGo user. The SFTP service is available on port 2022.
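
After creating a user you can verify the SFTP service with any client, for example OpenSSH's command line client (`alice` is a placeholder username):

```shell
sftp -P 2022 alice@127.0.0.1
```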

If you don't want to persist any files, for example for testing purposes, you can run an SFTPGo instance like this:

```shell
docker run --rm --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```

If you prefer GitHub Container Registry to Docker Hub replace `drakkan/sftpgo:tag` with `ghcr.io/drakkan/sftpgo:tag`.

### Enable FTP service

FTP is disabled by default; you can enable the FTP service by starting the SFTPGo instance in this way:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  -p 2121:2121 \
  -p 50000-50100:50000-50100 \
  -e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
  -e SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP=<your external ip here> \
  -d "drakkan/sftpgo:tag"
```

The FTP service is now available on port 2121 and SFTP on port 2022.

You can change the passive ports range (`50000-50100` by default) by setting the environment variables `SFTPGO_FTPD__PASSIVE_PORT_RANGE__START` and `SFTPGO_FTPD__PASSIVE_PORT_RANGE__END`.
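
For example, a minimal sketch that moves the passive range to `50500-50599`; the published port range must match the configured one:

```shell
docker run --name some-sftpgo \
  -p 2121:2121 \
  -p 50500-50599:50500-50599 \
  -e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
  -e SFTPGO_FTPD__PASSIVE_PORT_RANGE__START=50500 \
  -e SFTPGO_FTPD__PASSIVE_PORT_RANGE__END=50599 \
  -d "drakkan/sftpgo:tag"
```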

It is recommended that you provide a certificate and key file to expose FTP over TLS. You should prefer SFTP to FTP even if you configure TLS; please don't blindly enable the old FTP protocol.

### Enable WebDAV service

WebDAV is disabled by default; you can enable the WebDAV service by starting the SFTPGo instance in this way:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  -p 10080:10080 \
  -e SFTPGO_WEBDAVD__BINDINGS__0__PORT=10080 \
  -d "drakkan/sftpgo:tag"
```

The WebDAV service is now available on port 10080 and SFTP on port 2022.

It is recommended that you provide a certificate and key file to expose WebDAV over https.

### Container shell access and viewing SFTPGo logs

The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a shell inside your `sftpgo` container:

```shell
docker exec -it some-sftpgo sh
```

The logs are available through Docker's container log:

```shell
docker logs some-sftpgo
```

**Note:** the [distroless](../Dockerfile.distroless) image contains only a statically linked sftpgo binary and its minimal runtime dependencies. A shell is not available in this image.

### Where to Store Data

Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the SFTPGo images to familiarize themselves with the options available, including:

- Let Docker manage the storage for SFTPGo data by [writing them to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
- Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the SFTPGo files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly. The SFTPGo image runs using `1000` as UID/GID by default.

The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:

1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/sftpgodata`.
2. Create a home directory for the sftpgo container user on your host system, e.g. `/my/own/sftpgohome`.
3. Start your SFTPGo container like this:

```shell
docker run --name some-sftpgo \
  -p 8080:8090 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
  --mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
  -e SFTPGO_HTTPD__BINDINGS__0__PORT=8090 \
  -d "drakkan/sftpgo:tag"
```

As you can see, SFTPGo uses two main volumes:

- `/srv/sftpgo` to handle persistent data. The default home directory for SFTP/FTP/WebDAV users is `/srv/sftpgo/data/<username>`. Backups are stored in `/srv/sftpgo/backups`.
- `/var/lib/sftpgo` is the home directory for the sftpgo system user defined inside the container. This is the container working directory too; host keys will be created here when using the default configuration.

If you want fine-grained control, you can also mount `/srv/sftpgo/data` and `/srv/sftpgo/backups` as separate volumes instead of mounting `/srv/sftpgo`, as in the sketch below.
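
A minimal sketch of this separate-volume layout (host directory names are placeholders):

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo/data \
  --mount type=bind,source=/my/own/sftpgobackups,target=/srv/sftpgo/backups \
  --mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
  -d "drakkan/sftpgo:tag"
```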

### Configuration

The runtime configuration can be customized via environment variables that you can set by passing the `-e` option to the `docker run` command or inside the `environment` section if you are using [docker stack deploy](https://docs.docker.com/engine/reference/commandline/stack_deploy/) or [docker-compose](https://github.com/docker/compose).

Please take a look [here](../docs/full-configuration.md#environment-variables) to learn how to configure SFTPGo via environment variables.
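
For example, a minimal sketch that overrides the SFTP bind port via an environment variable; the variable name `SFTPGO_SFTPD__BINDINGS__0__PORT` here is an assumption based on the `SFTPGO_<section>__<key>` convention described in the linked documentation:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2222:2222 \
  -e SFTPGO_SFTPD__BINDINGS__0__PORT=2222 \
  -d "drakkan/sftpgo:tag"
```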

Alternatively you can mount your custom configuration file to `/var/lib/sftpgo` or `/var/lib/sftpgo/.config/sftpgo`.
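
For example, a sketch that bind mounts a custom `sftpgo.json` (the host path is a placeholder):

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgo.json,target=/var/lib/sftpgo/sftpgo.json \
  -d "drakkan/sftpgo:tag"
```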

### Loading initial data

Initial data can be loaded in the following ways:

- via the `--loaddata-from` flag or the `SFTPGO_LOADDATA_FROM` environment variable
- by providing a dump file to the memory provider

Please take a look [here](../docs/full-configuration.md) for more details.
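
For example, a sketch that loads a previously exported dump at startup; `backup.json` is a placeholder for a dump produced by the `dumpdata` API or the Web Admin:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/backup.json,target=/var/lib/sftpgo/backup.json \
  -e SFTPGO_LOADDATA_FROM=/var/lib/sftpgo/backup.json \
  -d "drakkan/sftpgo:tag"
```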

### Running as an arbitrary user

The SFTPGo image runs using `1000` as UID/GID by default. If you know the permissions of your data and/or configuration directory are already set appropriately or you have need of running SFTPGo with a specific UID/GID, it is possible to invoke this image with `--user` set to any value (other than `root/0`) in order to achieve the desired access/configuration:

```shell
$ ls -lnd data
drwxr-xr-x 2 1100 1100 6 7 nov 09.09 data
$ ls -lnd config
drwxr-xr-x 2 1100 1100 6 7 nov 09.19 config
```

With the above directory permissions, you can start a SFTPGo instance like this:

```shell
docker run --name some-sftpgo \
  --user 1100:1100 \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source="${PWD}/data",target=/srv/sftpgo \
  --mount type=bind,source="${PWD}/config",target=/var/lib/sftpgo \
  -d "drakkan/sftpgo:tag"
```

Alternatively, build your own image using the official one as a base; here is a sample Dockerfile:

```shell
FROM drakkan/sftpgo:tag
USER root
RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpgo
USER 1100:1100
```

**Note:** the above Dockerfile will not work if you use the [distroless](../Dockerfile.distroless) image as base since the `chown` command is not available there.

## Image Variants

The `sftpgo` image comes in many flavors, each designed for a specific use case. The `edge`, `edge-slim`, `edge-alpine`, `edge-alpine-slim` and `edge-distroless-slim` tags are updated after each new commit.

### `sftpgo:<version>`

This is the de facto image. It is based on [Debian](https://www.debian.org/), available in [the `debian` official image](https://hub.docker.com/_/debian). If you are unsure about what your needs are, you probably want to use this one.

### `sftpgo:<version>-alpine`

This image is based on the popular [Alpine Linux project](https://alpinelinux.org/), available in [the `alpine` official image](https://hub.docker.com/_/alpine). Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.

This variant is highly recommended when final image size being as small as possible is desired. The main caveat to note is that it does use [musl libc](https://musl.libc.org/) instead of [glibc and friends](https://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See [this Hacker News comment thread](https://news.ycombinator.com/item?id=10782897) for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.

### `sftpgo:<version>-distroless`

This image is based on the popular [Distroless project](https://github.com/GoogleContainerTools/distroless). We use the latest Debian based distroless image as base.

The distroless variant contains only a statically linked sftpgo binary and its minimal runtime dependencies, so it doesn't allow shell access (no shell is installed).
SQLite support is disabled since it requires CGO and therefore a C runtime, which is not installed.
The default data provider is `bolt`; all the supported data providers except `sqlite` work.
We only provide the slim variant, so the optional `git` dependency is not available.

### `sftpgo:<suite>-slim`

These tags provide a slimmer image that does not include the optional `git` dependency.

## Helm Chart

A Helm chart is [available](https://artifacthub.io/packages/helm/sagikazarmark/sftpgo). You can find the source code [here](https://github.com/sagikazarmark/helm-charts/tree/master/charts/sftpgo).

@@ -1,25 +0,0 @@
#!/usr/bin/env bash
set -e

ARCH=`uname -m`

case ${ARCH} in
	"x86_64")
		SUFFIX=amd64
		;;
	"aarch64")
		SUFFIX=arm64
		;;
	*)
		SUFFIX=ppc64le
		;;
esac

echo "download plugins for arch ${SUFFIX}"

for PLUGIN in geoipfilter kms pubsub eventstore eventsearch auth
do
	echo "download plugin from https://github.com/sftpgo/sftpgo-plugin-${PLUGIN}/releases/latest/download/sftpgo-plugin-${PLUGIN}-linux-${SUFFIX}"
	curl -L "https://github.com/sftpgo/sftpgo-plugin-${PLUGIN}/releases/latest/download/sftpgo-plugin-${PLUGIN}-linux-${SUFFIX}" --output "/usr/local/bin/sftpgo-plugin-${PLUGIN}"
	chmod 755 "/usr/local/bin/sftpgo-plugin-${PLUGIN}"
done
28 docker/scripts/entrypoint-alpine.sh Executable file
@@ -0,0 +1,28 @@
#!/usr/bin/env bash

SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}

if [ "$1" = 'sftpgo' ]; then
	if [ "$(id -u)" = '0' ]; then
		# started as root: fix the ownership of the standard directories,
		# then drop privileges via su-exec
		for DIR in "/etc/sftpgo" "/var/lib/sftpgo" "/srv/sftpgo"
		do
			DIR_UID=$(stat -c %u ${DIR})
			DIR_GID=$(stat -c %g ${DIR})
			if [ ${DIR_UID} != ${SFTPGO_PUID} ] || [ ${DIR_GID} != ${SFTPGO_PGID} ]; then
				echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"change owner for \"'${DIR}'\" UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
				if [ ${DIR} = "/etc/sftpgo" ]; then
					chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
				else
					chown ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
				fi
			fi
		done
		echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
		exec su-exec ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
	fi

	exec "$@"
fi

exec "$@"
32 docker/scripts/entrypoint.sh Executable file
@@ -0,0 +1,32 @@
#!/usr/bin/env bash

SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}

if [ "$1" = 'sftpgo' ]; then
	if [ "$(id -u)" = '0' ]; then
		# started as root: align the sftpgo user/group with the requested
		# UID/GID if needed, fix ownership, then drop privileges via gosu
		getent passwd ${SFTPGO_PUID} > /dev/null
		HAS_PUID=$?
		getent group ${SFTPGO_PGID} > /dev/null
		HAS_PGID=$?
		if [ ${HAS_PUID} -ne 0 ] || [ ${HAS_PGID} -ne 0 ]; then
			echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"prepare to run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
			if [ ${HAS_PGID} -ne 0 ]; then
				echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set GID to: '${SFTPGO_PGID}'"}'
				groupmod -g ${SFTPGO_PGID} sftpgo
			fi
			if [ ${HAS_PUID} -ne 0 ]; then
				echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set UID to: '${SFTPGO_PUID}'"}'
				usermod -u ${SFTPGO_PUID} sftpgo
			fi
			chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} /etc/sftpgo
			chown ${SFTPGO_PUID}:${SFTPGO_PGID} /var/lib/sftpgo /srv/sftpgo
		fi
		echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
		exec gosu ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
	fi

	exec "$@"
fi

exec "$@"
50 docker/sftpgo/alpine/Dockerfile Normal file
@@ -0,0 +1,50 @@
FROM golang:alpine as builder

RUN apk add --no-cache git gcc g++ ca-certificates \
	&& go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o /go/bin/sftpgo

FROM alpine:latest

RUN apk add --no-cache ca-certificates su-exec \
	&& mkdir -p /data /etc/sftpgo /srv/sftpgo/config /srv/sftpgo/web /srv/sftpgo/backups

# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apk add --no-cache git rsync

COPY --from=builder /go/bin/sftpgo /bin/
COPY --from=builder /go/src/github.com/drakkan/sftpgo/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /go/src/github.com/drakkan/sftpgo/templates /srv/sftpgo/web/templates
COPY --from=builder /go/src/github.com/drakkan/sftpgo/static /srv/sftpgo/web/static
COPY docker-entrypoint.sh /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh

VOLUME [ "/data", "/srv/sftpgo/config", "/srv/sftpgo/backups" ]
EXPOSE 2022 8080

# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121

# we need to expose the passive ports range too
#EXPOSE 50000-50100

# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=/srv/sftpgo/config/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=/srv/sftpgo/config/mycert.key

# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090

# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=/srv/sftpgo/config/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=/srv/sftpgo/config/mycert.key

ENTRYPOINT ["/bin/entrypoint.sh"]
CMD ["serve"]
61 docker/sftpgo/alpine/README.md Normal file
@@ -0,0 +1,61 @@
# SFTPGo with Docker and Alpine

:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.

This Dockerfile is used to build an image that can host multiple instances of SFTPGo started with different users.

## Example

> 1003 is a custom uid:gid for this instance of SFTPGo

```bash
# Prereq on docker host
sudo groupadd -g 1003 sftpgrp && \
  sudo useradd -u 1003 -g 1003 sftpuser -d /home/sftpuser/ && \
  sudo -u sftpuser mkdir /home/sftpuser/{conf,data} && \
  curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /home/sftpuser/conf/sftpgo.json

# Edit sftpgo.json as you need

# Get and build SFTPGo image.
# Add --build-arg TAG=LATEST to build the latest tag or e.g. TAG=v1.0.0 for a specific tag/commit.
# Add --build-arg FEATURES=<build features comma separated> to specify the features to build.
git clone https://github.com/drakkan/sftpgo.git && \
  cd sftpgo && \
  sudo docker build -t sftpgo docker/sftpgo/alpine/

# Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the "initprovider" command will create the required tables.
sudo docker run --name sftpgo \
  -e PUID=1003 \
  -e GUID=1003 \
  -v /home/sftpuser/conf/:/srv/sftpgo/config \
  sftpgo initprovider -c /srv/sftpgo/config

# Start the image
sudo docker rm sftpgo && sudo docker run --name sftpgo \
  -e SFTPGO_LOG_FILE_PATH= \
  -e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
  -e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
  -e SFTPGO_HTTPD__STATIC_FILES_PATH=/srv/sftpgo/web/static \
  -e SFTPGO_HTTPD__BACKUPS_PATH=/srv/sftpgo/backups \
  -p 8080:8080 \
  -p 2022:2022 \
  -e PUID=1003 \
  -e GUID=1003 \
  -v /home/sftpuser/conf/:/srv/sftpgo/config \
  -v /home/sftpuser/data:/data \
  -v /home/sftpuser/backups:/srv/sftpgo/backups \
  sftpgo
```

If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.

The `entrypoint.sh` script makes sure to correct the permissions of the directories and to start the process with the right user.

Several instances can be run with different parameters.

## Custom systemd script

An example systemd unit is available [here](sftpgo.service); it uses `Environment` parameters to set `PUID` and `GUID`.

The directory set in `WorkingDirectory` must exist and contain a file named like `sftpgo-${PUID}.env`, the environment variable file for the SFTPGo instance.
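
For example, you could install and enable the unit like this (a sketch using standard systemd commands; paths may differ on your distribution):

```bash
sudo install -m 644 sftpgo.service /etc/systemd/system/sftpgo.service
sudo systemctl daemon-reload
sudo systemctl enable --now sftpgo
```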
7 docker/sftpgo/alpine/docker-entrypoint.sh Executable file
@@ -0,0 +1,7 @@
#!/bin/sh

set -eu

chown -R "${PUID}:${GUID}" /data /etc/sftpgo /srv/sftpgo/config /srv/sftpgo/backups \
	&& exec su-exec "${PUID}:${GUID}" \
	/bin/sftpgo "$@"
35 docker/sftpgo/alpine/sftpgo.service Normal file
@@ -0,0 +1,35 @@
[Unit]
Description=SFTPGo server
After=docker.service

[Service]
User=root
Group=root
WorkingDirectory=/etc/sftpgo
Environment=PUID=1003
Environment=GUID=1003
EnvironmentFile=-/etc/sysconfig/sftpgo.env
ExecStartPre=-docker kill sftpgo
ExecStartPre=-docker rm sftpgo
ExecStart=docker run --name sftpgo \
	--env-file sftpgo-${PUID}.env \
	-e PUID=${PUID} \
	-e GUID=${GUID} \
	-e SFTPGO_LOG_FILE_PATH= \
	-e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
	-e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
	-e SFTPGO_HTTPD__STATIC_FILES_PATH=/srv/sftpgo/web/static \
	-e SFTPGO_HTTPD__BACKUPS_PATH=/srv/sftpgo/backups \
	-p 8080:8080 \
	-p 2022:2022 \
	-v /home/sftpuser/conf/:/srv/sftpgo/config \
	-v /home/sftpuser/data:/data \
	-v /home/sftpuser/backups:/srv/sftpgo/backups \
	sftpgo
ExecStop=docker stop sftpgo
SyslogIdentifier=sftpgo
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
93 docker/sftpgo/debian/Dockerfile Normal file
@@ -0,0 +1,93 @@
# we use a multi stage build to have a separate build and run env
FROM golang:latest as buildenv
LABEL maintainer="nicola.murino@gmail.com"
RUN go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo

# now define the run environment
FROM debian:latest

# ca-certificates is needed for Cloud Storage Support and for HTTPS/FTPS.
RUN apt-get update && apt-get install -y ca-certificates && apt-get clean

# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apt-get update && apt-get install -y git rsync && apt-get clean

ARG BASE_DIR=/app
ARG DATA_REL_DIR=data
ARG CONFIG_REL_DIR=config
ARG BACKUP_REL_DIR=backups
ARG USERNAME=sftpgo
ARG GROUPNAME=sftpgo
ARG UID=515
ARG GID=515
ARG WEB_REL_PATH=web

# HOME_DIR for sftpgo itself
ENV HOME_DIR=${BASE_DIR}/${USERNAME}
# DATA_DIR, this is a volume that you can use to hold users' home dirs
ENV DATA_DIR=${BASE_DIR}/${DATA_REL_DIR}
# CONFIG_DIR, this is a volume to persist the daemon private keys, configuration file etc.
ENV CONFIG_DIR=${BASE_DIR}/${CONFIG_REL_DIR}
# BACKUPS_DIR, this is a volume to store backups done using "dumpdata" REST API
ENV BACKUPS_DIR=${BASE_DIR}/${BACKUP_REL_DIR}
ENV WEB_DIR=${BASE_DIR}/${WEB_REL_PATH}

RUN mkdir -p ${DATA_DIR} ${CONFIG_DIR} ${WEB_DIR} ${BACKUPS_DIR}
RUN groupadd --system -g ${GID} ${GROUPNAME}
RUN useradd --system --create-home --no-log-init --home-dir ${HOME_DIR} --comment "SFTPGo user" --shell /usr/sbin/nologin --gid ${GID} --uid ${UID} ${USERNAME}

WORKDIR ${HOME_DIR}
RUN mkdir -p bin .config/sftpgo
ENV PATH ${HOME_DIR}/bin:$PATH
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/sftpgo bin/sftpgo
# default config file to use if no config file is found inside the CONFIG_DIR volume.
# You can override each configuration option via env vars too
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/sftpgo.json .config/sftpgo/
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/templates ${WEB_DIR}/templates
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/static ${WEB_DIR}/static
RUN chown -R ${UID}:${GID} ${DATA_DIR} ${BACKUPS_DIR}

# run as non root user
USER ${USERNAME}

EXPOSE 2022 8080

# the defined volumes must have write access for the UID and GID defined above
VOLUME [ "$DATA_DIR", "$CONFIG_DIR", "$BACKUPS_DIR" ]

# override some default configuration options using env vars
ENV SFTPGO_CONFIG_DIR=${CONFIG_DIR}
# setting SFTPGO_LOG_FILE_PATH to an empty string will log to stdout
ENV SFTPGO_LOG_FILE_PATH=""
ENV SFTPGO_HTTPD__BIND_ADDRESS=""
ENV SFTPGO_HTTPD__TEMPLATES_PATH=${WEB_DIR}/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=${WEB_DIR}/static
ENV SFTPGO_DATA_PROVIDER__USERS_BASE_DIR=${DATA_DIR}
ENV SFTPGO_HTTPD__BACKUPS_PATH=${BACKUPS_DIR}

# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121
# we need to expose the passive ports range too
#EXPOSE 50000-50100

# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key

# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090

# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key

ENTRYPOINT ["sftpgo"]
CMD ["serve"]
59 docker/sftpgo/debian/README.md Normal file
@@ -0,0 +1,59 @@
# Dockerfile based on Debian stable

:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.

Please read the comments inside the `Dockerfile` to learn how to customize things for your setup.

You can build the container image using `docker build`, for example:

```bash
docker build -t="drakkan/sftpgo" .
```

This will build master of github.com/drakkan/sftpgo.

To build the latest tag you can add `--build-arg TAG=LATEST` and to build a specific tag/commit you can use for example `TAG=v1.0.0`, like this:

```bash
docker build -t="drakkan/sftpgo" --build-arg TAG=v1.0.0 .
```

To specify the features to build you can add `--build-arg FEATURES=<build features comma separated>`. For example you can disable SQLite and S3 support like this:

```bash
docker build -t="drakkan/sftpgo" --build-arg FEATURES=nosqlite,nos3 .
```

Please take a look at the [build from source](./../../../docs/build-from-source.md) documentation for the complete list of the features that can be disabled.

Now create the required folders on the host system, for example:

```bash
sudo mkdir -p /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```

and give write access to them to the UID/GID defined inside the `Dockerfile`. You can choose to create a new user, on the host system, with a matching UID/GID pair, or simply do something like this:

```bash
sudo chown -R <UID>:<GID> /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```

Download the default configuration file and edit it as you need:

```bash
sudo curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /srv/sftpgo/config/sftpgo.json
```

Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the `initprovider` command will create the required tables:

```bash
docker run --name sftpgo --mount type=bind,source=/srv/sftpgo/config,target=/app/config drakkan/sftpgo initprovider -c /app/config
```

and finally you can run the image using something like this:

```bash
docker rm sftpgo && docker run --name sftpgo -p 8080:8080 -p 2022:2022 --mount type=bind,source=/srv/sftpgo/data,target=/app/data --mount type=bind,source=/srv/sftpgo/config,target=/app/config --mount type=bind,source=/srv/sftpgo/backups,target=/app/backups drakkan/sftpgo
```

If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.
22 docs/account.md Normal file
@@ -0,0 +1,22 @@
# Account's configuration properties

Please take a look at the [OpenAPI schema](../openapi/openapi.yaml) for the exact definitions of user, folder and admin fields.
If you need an example, you can export a dump using the Web Admin or by invoking the `dumpdata` endpoint directly; you need to obtain an access token first, for example:

```shell
$ curl "http://admin:password@127.0.0.1:8080/api/v2/token"
{"access_token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME","expires_at":"2021-02-14T20:41:01Z"}

curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME" "http://127.0.0.1:8080/api/v2/dumpdata?output-data=1"
```

The dump is a JSON document containing all SFTPGo data: users, folders and admins.

These properties are stored inside the configured data provider.

SFTPGo supports checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt too. For pbkdf2 the supported format is `$<algo>$<iterations>$<salt>$<hashed pwd base64 encoded>`, where algo is `pbkdf2-sha1` or `pbkdf2-sha256` or `pbkdf2-sha512` or `$pbkdf2-b64salt-sha256$`. For example the pbkdf2-sha256 of the word `password` using 150000 iterations and `E86a9YMX3zC7` as salt must be stored as `$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=`. In the pbkdf2 variant with b64salt the salt is base64 encoded. For bcrypt the format must be the one supported by golang's crypto/bcrypt package; for example the password `secret` with cost 14 must be stored as `$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK`. For md5crypt and sha512crypt we support the format used in `/etc/shadow` with the `$1$` and `$6$` prefix; this is useful if you are migrating from Unix system user accounts. We support Apache md5crypt (`$apr1$` prefix) too. Using the REST API you can send a password hashed as bcrypt, pbkdf2, md5crypt or sha512crypt and it will be stored as is.
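
For example, a sketch that creates a user whose password is supplied pre-hashed with bcrypt via the REST API; the access token is obtained as shown above, `alice` and the directory are placeholders, and the exact field names should be checked against the OpenAPI schema:

```shell
TOKEN=... # obtained from /api/v2/token as shown above
curl -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"username":"alice","status":1,"home_dir":"/srv/sftpgo/data/alice","permissions":{"/":["*"]},"password":"$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK"}' \
  "http://127.0.0.1:8080/api/v2/users"
```

The bcrypt hash above is the `secret` example from the previous paragraph; SFTPGo stores it as is.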

If you want to use your existing accounts, you have these options:

- you can import your users inside SFTPGo. Take a look at the [convert users](../examples/convertusers) script: it can convert and import users from Linux system users and Pure-FTPd/ProFTPD virtual users
- you can use an external authentication program
20 docs/azure-blob-storage.md Normal file
@@ -0,0 +1,20 @@
# Azure Blob Storage backend

To connect SFTPGo to Azure Blob Storage, you need to specify the access credentials. Azure Blob Storage has different options for credentials; we support:

1. Providing an account name and account key.
2. Providing a shared access signature (SAS).

If you authenticate using account and key you also need to specify a container. The endpoint can generally be left blank; the default is `blob.core.windows.net`.

If you provide a SAS URL the container is optional and, if given, it must match the one inside the shared access signature.

If you want to connect to an emulator such as [Azurite](https://github.com/Azure/Azurite) you need to provide the account name/key pair and an endpoint prefixed with the protocol, for example `http://127.0.0.1:10000`.

Specifying a different `key_prefix`, you can assign different "folders" of the same container to different users. This is similar to a chroot directory for a local filesystem. Each SFTPGo user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
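
For example, a sketch that assigns the `users/user1/` prefix of a container to a user via the REST API; the field names (`azblobconfig`, `key_prefix`, provider `3` for Azure Blob) are assumptions to be checked against the OpenAPI schema, and the credentials, names and token are placeholders:

```shell
# TOKEN obtained from /api/v2/token, see the account docs
curl -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"username":"user1","status":1,"home_dir":"/srv/sftpgo/data/user1","password":"secret","permissions":{"/":["*"]},
       "filesystem":{"provider":3,"azblobconfig":{"container":"mycontainer","account_name":"myaccount",
       "account_key":{"status":"Plain","payload":"my-account-key"},"key_prefix":"users/user1/"}}}' \
  "http://127.0.0.1:8080/api/v2/users"
```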

For multipart uploads you can customize the parts size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and the Azure Blob service, the client will have to wait for the last parts to be uploaded to Azure after it finishes uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.

The configured container must exist.

This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations. As with S3, `chtime` will fail with the default configuration; you can install the [metadata plugin](https://github.com/sftpgo/sftpgo-plugin-metadata) to make it work and thus be able to preserve/change file modification times.
40 docs/build-from-source.md Normal file
@@ -0,0 +1,40 @@
# Build SFTPGo from source

Download the sources and use `go build`.

The following build tags are available:

- `nogcs`, disable Google Cloud Storage backend, default enabled
- `nos3`, disable S3 Compatible Object Storage backends, default enabled
- `noazblob`, disable Azure Blob Storage backend, default enabled
- `nobolt`, disable Bolt data provider, default enabled
- `nomysql`, disable MySQL data provider, default enabled
- `nopgsql`, disable PostgreSQL data provider, default enabled
- `nosqlite`, disable SQLite data provider, default enabled
- `noportable`, disable portable mode, default enabled
- `nometrics`, disable Prometheus metrics, default enabled

If no build tag is specified the build will include the default features.

The optional [SQLite driver](https://github.com/mattn/go-sqlite3 "go-sqlite3") is a `CGO` package and so it requires a `C` compiler at build time.
On Linux and macOS, a compiler is easy to install or already installed. On Windows, you need to download [MinGW-w64](https://sourceforge.net/projects/mingw-w64/files/) and build SFTPGo from its command prompt.

The compiler is a build time only dependency. It is not required at runtime.
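
If you don't need SQLite you can skip the `C` toolchain entirely; a minimal sketch:

```bash
CGO_ENABLED=0 go build -tags nosqlite -o sftpgo
```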

Version info, such as git commit and build date, can be embedded by setting the following string variables at build time:

- `github.com/drakkan/sftpgo/v2/version.commit`
- `github.com/drakkan/sftpgo/v2/version.date`

For example, you can build using the following command:

```bash
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
```

You should get a version that includes the git commit, build date and available features, like this one:

```bash
$ ./sftpgo -v
SFTPGo 0.9.6-dev-b30614e-dirty-2020-06-19T11:04:56Z +metrics -gcs -s3 +bolt +mysql +pgsql -sqlite +portable
```
Some files were not shown because too many files have changed in this diff.