Compare commits

...

41 commits
main ... 2.2.x

Author SHA1 Message Date
Nicola Murino
2e9919d1a5
sftpfs: add more ciphers, KEXs and MACs
they are negotiated according to the order in which they are listed.
Restrictions are generally configured server side.
I want to avoid exposing other settings for now.

Fixes #817

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-06 14:10:55 +02:00
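
For context, algorithm preference in golang.org/x/crypto/ssh (which the sftpfs backend builds on) is expressed by list order: the first algorithm also supported by the peer is the one negotiated. A minimal sketch with illustrative lists — these are not the exact lists added by this commit:

```go
package example

import "golang.org/x/crypto/ssh"

// buildClientConfig shows how cipher/KEX/MAC preference is expressed:
// each list is offered in order and the first entry the server also
// supports wins the negotiation.
func buildClientConfig(user string, auth ssh.AuthMethod) *ssh.ClientConfig {
	return &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{auth},
		// Illustrative lists only: order expresses preference.
		Config: ssh.Config{
			Ciphers:      []string{"aes128-gcm@openssh.com", "chacha20-poly1305@openssh.com", "aes128-ctr"},
			KeyExchanges: []string{"curve25519-sha256", "ecdh-sha2-nistp256"},
			MACs:         []string{"hmac-sha2-256-etm@openssh.com", "hmac-sha2-256"},
		},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // example only, never use in production
	}
}
```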
Nicola Murino
c40a48c6f3
sql provider: enhanced folder mapping query using an upsert
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-30 13:02:32 +02:00
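
An upsert folds the usual select-then-insert-or-update round trip into a single statement. A sketch of the general pattern with database/sql — the table and column names are hypothetical, not SFTPGo's actual schema:

```go
package example

import (
	"context"
	"database/sql"
)

// upsertFolderMapping sketches the upsert pattern; the ON CONFLICT syntax
// shown is PostgreSQL/SQLite style and the schema is made up for the example.
func upsertFolderMapping(ctx context.Context, db *sql.DB, folderID, userID int64, virtualPath string) error {
	q := `INSERT INTO folders_mapping (folder_id, user_id, virtual_path)
VALUES ($1, $2, $3)
ON CONFLICT (folder_id, user_id) DO UPDATE SET virtual_path = EXCLUDED.virtual_path`
	_, err := db.ExecContext(ctx, q, folderID, userID, virtualPath)
	return err
}
```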
Nicola Murino
c7073f90cb
improve readlink handling
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-27 19:01:42 +02:00
Nicola Murino
80c8486d24
webclient: don't restore checkbox status
Fixes #807

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-26 09:17:27 +02:00
Nicola Murino
cf9d081495
update moment.js to v2.29.2
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-15 09:58:15 +02:00
Nicola Murino
05ed7b6aa4
sshd: disable sha1 based KEXs and MACs by default
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-04 19:21:42 +02:00
Nicola Murino
68a4bbd10c
be sure to close an SSH connection if all channels are idle
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-01 08:05:07 +02:00
Nicola Murino
1b21c19a78
add jq to full docker image variants
Fixes #767

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-23 11:37:27 +01:00
Nicola Murino
ee600c716b
docker: add rsync to "full" images
there are better alternatives, and rsync will only work on the local
filesystem, but it can still be useful to some people

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-22 17:42:56 +01:00
Nicola Murino
6b77b55068
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-16 19:27:31 +01:00
Nicola Murino
5a45af76f3
db defender: fix list hosts queries
ensure that banned hosts are always returned

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-16 18:27:47 +01:00
Nicola Murino
7959737442
ensure that defaults defined in code match the default config file
Fixes #754

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-14 10:40:47 +01:00
Nicola Murino
d3fee39388
sftpfs: add a dial timeout
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-11 17:12:47 +01:00
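
A dial timeout here means bounding the initial TCP/SSH handshake so a dead SFTP endpoint cannot block forever. A hedged sketch of that pattern with net.DialTimeout plus x/crypto/ssh and pkg/sftp — not the exact SFTPGo implementation:

```go
package example

import (
	"net"
	"time"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

// dialSFTP bounds the initial TCP connection with a timeout before the
// SSH handshake; without it a silently dropped endpoint can hang the caller.
func dialSFTP(addr string, config *ssh.ClientConfig, timeout time.Duration) (*sftp.Client, error) {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return nil, err
	}
	c, chans, reqs, err := ssh.NewClientConn(conn, addr, config)
	if err != nil {
		conn.Close()
		return nil, err
	}
	return sftp.NewClient(ssh.NewClient(c, chans, reqs))
}
```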
Nicola Murino
97122ef06c
backport some fixes from the main branch
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-04 19:14:39 +01:00
Nicola Murino
8a6c2265a4
deb/rpm packages: attempt to set the cap_net_bind_service capability
so the service can bind to privileged ports without running as the root user

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-28 10:06:39 +01:00
Nicola Murino
b65dae89e8
web setup: add an optional installation code
The purpose of this code is to prevent anyone who can access
the initial setup screen from creating an admin user

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-27 14:27:53 +01:00
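
Based on the httpd.setup.* keys added further down in this diff, a sketch of how the new setting is shaped; the values are placeholders:

```go
package example

import "github.com/drakkan/sftpgo/v2/httpd"

// exampleSetup shows the SetupConfig fields that back the new option.
// The same value can also come from the SFTPGO_HTTPD__SETUP__INSTALLATION_CODE
// environment variable, as the config tests later in this diff show.
func exampleSetup() httpd.SetupConfig {
	return httpd.SetupConfig{
		InstallationCode:     "my-secret-code",     // placeholder
		InstallationCodeHint: "Installation code",  // default hint
	}
}
```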
Nicola Murino
4ed6e96c7b
sftpfs: improve rename and remove
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-16 17:08:22 +01:00
Nicola Murino
6d3ff5a8ad
logger: fix UTC time func
Fixes #719

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-14 12:37:55 +01:00
Nicola Murino
a7921500f5
set version to 2.2.2
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-06 09:59:28 +01:00
Nicola Murino
c3188a2b5a
share download uncompressed: don't allow symlinks
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-06 08:49:08 +01:00
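
Refusing symlinks in uncompressed share downloads comes down to checking the file mode before streaming each entry. A minimal, hypothetical sketch of that check — not SFTPGo's actual handler:

```go
package example

import (
	"errors"
	"os"
)

var errSymlinkNotAllowed = errors.New("symlinks are not allowed in shared downloads")

// checkShareEntry rejects symlinks so a share cannot be used to read
// files outside the shared directory via a planted link.
func checkShareEntry(path string) error {
	info, err := os.Lstat(path) // Lstat does not follow the link
	if err != nil {
		return err
	}
	if info.Mode()&os.ModeSymlink != 0 {
		return errSymlinkNotAllowed
	}
	return nil
}
```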
Nicola Murino
3f38f44d42
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-03 18:40:28 +01:00
Nicola Murino
0a3122f03e
fix prefix for defender database tables
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-30 11:10:08 +01:00
Nicola Murino
8cd9e886f3
CI: enable docker
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 18:14:46 +01:00
Nicola Murino
016e285745
update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 17:57:39 +01:00
Nicola Murino
467708dc1c
Admin UI: allow creating multiple users/folders from templates
the clone button is not needed anymore: you can select a user and
click on template to generate one or more similar users, or you can
create users/folders from an empty template

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:52:20 +01:00
Nicola Murino
ef626befb1
web admin: simplify user page
The page to add/edit users should be less intimidating now.
All the advanced settings are hidden by default. Permissions are set
to any, so if you also have a users base dir set, adding a user only
requires setting a username and a password or public key, then saving.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:52:16 +01:00
Nicola Murino
f61456ce87
sshd: improve docs about supported ciphers, KEX and MACs
also added a check to ensure that the configured values are valid

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:52:09 +01:00
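
The added check amounts to rejecting any configured algorithm name that is not in the supported set. A minimal sketch of that validation — function name and supported list are illustrative, not SFTPGo's actual code:

```go
package example

import "fmt"

// checkAlgorithms returns an error if any configured value is not supported.
func checkAlgorithms(configured, supported []string) error {
	set := make(map[string]bool, len(supported))
	for _, s := range supported {
		set[s] = true
	}
	for _, c := range configured {
		if !set[c] {
			return fmt.Errorf("unsupported algorithm %q", c)
		}
	}
	return nil
}
```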
Nicola Murino
ba3548c2c3
make the sdk a separate module
The SFTPGo SDK is now at the following URL:

https://github.com/sftpgo/sdk

Fixes #657

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:52:03 +01:00
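
In practice the split means import paths change from the internal package to the standalone module, as the import hunks later on this page show; a minimal illustration:

```go
package example

import (
	// Old path: the SDK types lived inside the main module.
	// "github.com/drakkan/sftpgo/v2/sdk"

	// New path: the SDK is a separate module with its own versioning.
	"github.com/sftpgo/sdk"
)

// Reference the package so the import is used; BaseUser appears in the diffs below.
var _ = sdk.BaseUser{}
```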
Nicola Murino
0e2d673889
move kms implementation outside the sdk package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:55 +01:00
Nicola Murino
bf03eb2a88
log at info level the service configurations
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:48 +01:00
Nicola Murino
3603493146
move plugin handling outside the sdk package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:42 +01:00
Nicola Murino
6a20e7411b
sdk: add a logger interface
we are now ready to make the sdk a separate module

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:36 +01:00
Nicola Murino
0e1d8fc4d9
move kms definitions to the sdk package
This is the first step to make the sdk a separate module

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:31 +01:00
Nicola Murino
08a7f08d6e
httpd: switch back to chi Recoverer now that the required patch is merged
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:24 +01:00
Nicola Murino
2c8968b5dc
eventsearcher plugin: add support to search for provider, bucket, endpoint
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:02 +01:00
Nicola Murino
f65c973c99
notifier plugins: add provider, bucket and endpoint to notifier params
Fixes #656

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:50:57 +01:00
Nicola Murino
85c2d474d9
notifiers plugin: replace params with a struct
Fixes #658

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:50:48 +01:00
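
Replacing a long positional parameter list with a struct keeps the plugin interface stable as new fields are added. A hedged sketch of building the notifier.FsEvent used after this change, with field names taken from the actions diff below and placeholder values:

```go
package example

import (
	"time"

	"github.com/sftpgo/sdk/plugin/notifier"
)

// newDownloadEvent builds an FsEvent with placeholder values; the field
// names match those used in the common/actions diff later on this page.
func newDownloadEvent(username, path, virtualPath string, size int64) *notifier.FsEvent {
	return &notifier.FsEvent{
		Action:      "download", // placeholder action name
		Username:    username,
		Path:        path,
		VirtualPath: virtualPath,
		FileSize:    size,
		Status:      1,
		Protocol:    "SFTP",
		Timestamp:   time.Now().UnixNano(), // nanoseconds, as fixed in 92af6efc0c
	}
}
```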
Nicola Murino
6c6a6e3d16
Revert "notifier plugin: fix failed events recovery"
This reverts commit 92af6efc0c.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:50:35 +01:00
Nicola Murino
92122bd962
sqlite: fix prefix for api_key indexes
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-09 11:56:27 +01:00
Nicola Murino
112306b9a2
CI: fix development workflow
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-02 16:38:48 +01:00
Nicola Murino
92af6efc0c
notifier plugin: fix failed events recovery
the event timestamp is in nanoseconds, not milliseconds

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-02 16:35:29 +01:00
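
For reference, a Unix timestamp in nanoseconds is a factor of 10^6 larger than one in milliseconds, so mixing the two silently breaks any time-window comparison:

```go
package example

import (
	"fmt"
	"time"
)

// printTimestamps contrasts the two resolutions (Go 1.17+ for UnixMilli).
func printTimestamps() {
	now := time.Now()
	fmt.Println(now.UnixMilli()) // e.g. 1641138929000          (milliseconds)
	fmt.Println(now.UnixNano())  // e.g. 1641138929000000000    (nanoseconds)
}
```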
138 changed files with 2645 additions and 8420 deletions

@ -2,7 +2,7 @@ name: CI
on:
push:
branches: [main]
branches: [2.2.x]
pull_request:
jobs:
@ -12,20 +12,20 @@ jobs:
strategy:
matrix:
go: [1.17]
os: [ubuntu-latest, macos-latest]
os: [ubuntu-18.04, macos-10.15]
upload-coverage: [true]
include:
- go: 1.17
os: windows-latest
os: windows-2019
upload-coverage: false
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go }}
@ -34,7 +34,6 @@ jobs:
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
cd tests/eventsearcher
go mod tidy
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
@ -55,7 +54,6 @@ jobs:
go-winres simply --arch amd64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
cd tests/eventsearcher
go mod tidy
go build -trimpath -ldflags "-s -w" -o eventsearcher.exe
cd ../..
mkdir arm64
@ -77,7 +75,7 @@ jobs:
- name: Upload coverage to Codecov
if: ${{ matrix.upload-coverage }}
uses: codecov/codecov-action@v2
uses: codecov/codecov-action@v3
with:
file: ./coverage.txt
fail_ci_if_error: false
@ -170,21 +168,21 @@ jobs:
- name: Upload Windows installer x86_64 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_x86_64
path: ./sftpgo_windows_x86_64.exe
- name: Upload Windows installer arm64 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_arm64
path: ./sftpgo_windows_arm64.exe
- name: Upload Windows installer x86 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_x86
path: ./sftpgo_windows_x86.exe
@ -210,27 +208,26 @@ jobs:
- name: Upload build artifact
if: startsWith(matrix.os, 'ubuntu-') != true
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
path: output
test-goarch-386:
name: Run test cases on 32-bit arch
runs-on: ubuntu-latest
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: 1.17
- name: Build
run: |
cd tests/eventsearcher
go mod tidy
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
env:
@ -245,7 +242,7 @@ jobs:
test-postgresql-mysql-crdb:
name: Test with PgSQL/MySQL/Cockroach
runs-on: ubuntu-latest
runs-on: ubuntu-18.04
services:
postgres:
@ -277,17 +274,16 @@ jobs:
- 3307:3306
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: 1.17
- name: Build
run: |
cd tests/eventsearcher
go mod tidy
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
@ -338,23 +334,23 @@ jobs:
go-arch: amd64
- arch: aarch64
distro: ubuntu18.04
go: latest
go: go1.17.9
go-arch: arm64
- arch: ppc64le
distro: ubuntu18.04
go: latest
go: go1.17.9
go-arch: ppc64le
- arch: armv7
distro: ubuntu18.04
go: latest
go: go1.17.9
go-arch: arm7
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Go
if: ${{ matrix.arch == 'amd64' }}
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go }}
@ -374,7 +370,7 @@ jobs:
gzip output/man/man1/*
cp sftpgo output/
- uses: uraimo/run-on-arch-action@v2.1.1
- uses: uraimo/run-on-arch-action@v2
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
@ -422,7 +418,7 @@ jobs:
cp sftpgo output/
- name: Upload build artifact
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-linux-${{ matrix.arch }}-go-${{ matrix.go }}
path: output
@ -437,13 +433,13 @@ jobs:
echo "::set-output name=pkg-version::${PKG_VERSION}"
- name: Upload Debian Package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-rpm
path: pkgs/dist/rpm/*
@ -452,8 +448,12 @@ jobs:
name: golangci-lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.17
- uses: actions/checkout@v3
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2
uses: golangci/golangci-lint-action@v3
with:
version: latest

@ -5,7 +5,7 @@ on:
# - cron: '0 4 * * *' # everyday at 4:00 AM UTC
push:
branches:
- main
- 2.2.x
tags:
- v*
pull_request:
@ -30,7 +30,7 @@ jobs:
optional_deps: false
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Gather image information
id: info

@ -5,16 +5,16 @@ on:
tags: 'v*'
env:
GO_VERSION: 1.17.5
GO_VERSION: 1.17.9
jobs:
prepare-sources-with-deps:
name: Prepare sources with deps
runs-on: ubuntu-latest
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
@ -31,7 +31,7 @@ jobs:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Upload build artifact
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
@ -45,9 +45,9 @@ jobs:
os: [macos-10.15, windows-2019]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
@ -206,7 +206,7 @@ jobs:
- name: Upload macOS x86_64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
@ -214,7 +214,7 @@ jobs:
- name: Upload macOS arm64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
@ -222,7 +222,7 @@ jobs:
- name: Upload Windows installer x86_64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
path: ./sftpgo_windows_x86_64.exe
@ -230,7 +230,7 @@ jobs:
- name: Upload Windows installer arm64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.exe
path: ./sftpgo_windows_arm64.exe
@ -238,7 +238,7 @@ jobs:
- name: Upload Windows installer x86 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86.exe
path: ./sftpgo_windows_x86.exe
@ -246,7 +246,7 @@ jobs:
- name: Upload Windows portable artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable.zip
path: ./sftpgo_portable.zip
@ -283,10 +283,10 @@ jobs:
tar-arch: armv7
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Go
if: ${{ matrix.arch == 'amd64' }}
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
@ -326,7 +326,7 @@ jobs:
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- uses: uraimo/run-on-arch-action@v2.1.1
- uses: uraimo/run-on-arch-action@v2
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
@ -373,7 +373,7 @@ jobs:
cd ..
- name: Upload build artifact for ${{ matrix.arch }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
path: ./output/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
@ -391,14 +391,14 @@ jobs:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Deb Package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
retention-days: 1
- name: Upload RPM Package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
@ -417,22 +417,22 @@ jobs:
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
@ -455,7 +455,7 @@ jobs:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Linux bundle
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
path: ./bundle/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
@ -467,7 +467,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get versions
id: get_version
run: |
@ -478,102 +478,102 @@ jobs:
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Download Linux bundle artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
- name: Download Deb amd64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_amd64.deb
- name: Download Deb arm64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_arm64.deb
- name: Download Deb ppc64le artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_ppc64el.deb
- name: Download Deb armv7 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_armhf.deb
- name: Download RPM x86_64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.x86_64.rpm
- name: Download RPM aarch64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.aarch64.rpm
- name: Download RPM ppc64le artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.ppc64le.rpm
- name: Download RPM armv7 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.armv7hl.rpm
- name: Download macOS x86_64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_x86_64.tar.xz
- name: Download macOS arm64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_arm64.tar.xz
- name: Download Windows installer x86_64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86_64.exe
- name: Download Windows installer arm64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_arm64.exe
- name: Download Windows installer x86 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86.exe
- name: Download Windows portable artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_portable.zip
- name: Download source with deps artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_src_with_deps.tar.xz

@ -25,12 +25,12 @@ RUN set -xe && \
FROM debian:bullseye-slim
# Set to "true" to install the optional git dependency
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y git && rm -rf /var/lib/apt/lists/*; fi
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y jq git rsync && rm -rf /var/lib/apt/lists/*; fi
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups

@ -28,12 +28,12 @@ RUN set -xe && \
FROM alpine:3.15
# Set to "true" to install the optional git dependency
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apk add --update --no-cache ca-certificates tzdata mailcap
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache git; fi
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache jq git rsync; fi
# set up nsswitch.conf for Go's "netgo" implementation
# https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457

@ -10,12 +10,12 @@ import (
"path/filepath"
"strings"
"github.com/sftpgo/sdk"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
@ -157,18 +157,19 @@ Please take a look at the usage below to customize the serving parameters`,
Permissions: permissions,
HomeDir: portableDir,
Status: 1,
Filters: sdk.UserFilters{
},
Filters: dataprovider.UserFilters{
BaseUserFilters: sdk.BaseUserFilters{
FilePatterns: parsePatternsFilesFilters(),
},
},
FsConfig: vfs.Filesystem{
Provider: sdk.GetProviderByName(portableFsProvider),
S3Config: vfs.S3FsConfig{
S3FsConfig: sdk.S3FsConfig{
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
ACL: portableS3ACL,
@ -177,46 +178,45 @@ Please take a look at the usage below to customize the serving parameters`,
UploadConcurrency: portableS3ULConcurrency,
ForcePathStyle: portableS3ForcePathStyle,
},
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
},
GCSConfig: vfs.GCSFsConfig{
GCSFsConfig: sdk.GCSFsConfig{
BaseGCSFsConfig: sdk.BaseGCSFsConfig{
Bucket: portableGCSBucket,
Credentials: kms.NewPlainSecret(portableGCSCredentials),
AutomaticCredentials: portableGCSAutoCredentials,
StorageClass: portableGCSStorageClass,
KeyPrefix: portableGCSKeyPrefix,
},
Credentials: kms.NewPlainSecret(portableGCSCredentials),
},
AzBlobConfig: vfs.AzBlobFsConfig{
AzBlobFsConfig: sdk.AzBlobFsConfig{
BaseAzBlobFsConfig: sdk.BaseAzBlobFsConfig{
Container: portableAzContainer,
AccountName: portableAzAccountName,
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
SASURL: kms.NewPlainSecret(portableAzSASURL),
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
},
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
SASURL: kms.NewPlainSecret(portableAzSASURL),
},
CryptConfig: vfs.CryptFsConfig{
CryptFsConfig: sdk.CryptFsConfig{
Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
},
Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
},
SFTPConfig: vfs.SFTPFsConfig{
SFTPFsConfig: sdk.SFTPFsConfig{
BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
Endpoint: portableSFTPEndpoint,
Username: portableSFTPUsername,
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
Fingerprints: portableSFTPFingerprints,
Prefix: portableSFTPPrefix,
DisableCouncurrentReads: portableSFTPDisableConcurrentReads,
BufferSize: portableSFTPDBufferSize,
},
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
},
},
},

@ -58,7 +58,6 @@ Please take a look at the usage below to customize the options.`,
func init() {
addConfigFlags(revertProviderCmd)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 10, `10 means the version supported in v2.1.x`)
revertProviderCmd.MarkFlagRequired("to-version") //nolint:errcheck
rootCmd.AddCommand(revertProviderCmd)
}

@ -15,7 +15,7 @@ import (
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
)

@ -14,11 +14,13 @@ import (
"strings"
"time"
"github.com/sftpgo/sdk"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
@ -50,67 +52,63 @@ func InitializeActionHandler(handler ActionHandler) {
actionHandler = handler
}
func handleUnconfiguredPreAction(operation string) error {
// for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
// Other pre action will deny the operation on error so if we have no configuration we must return
// a nil error
if operation == operationPreDelete {
return errUnconfiguredAction
}
return nil
}
// ExecutePreAction executes a pre-* action and returns the result
func ExecutePreAction(conn *BaseConnection, operation, filePath, virtualPath string, fileSize int64, openFlags int) error {
remoteIP := conn.GetRemoteIP()
plugin.Handler.NotifyFsEvent(time.Now().UnixNano(), operation, conn.User.Username, filePath, "", "", conn.protocol,
remoteIP, virtualPath, "", conn.ID, fileSize, nil)
if !util.IsStringInSlice(operation, Config.Actions.ExecuteOn) {
// for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
// Other pre action will deny the operation on error so if we have no configuration we must return
// a nil error
if operation == operationPreDelete {
return errUnconfiguredAction
}
return nil
var event *notifier.FsEvent
hasNotifiersPlugin := plugin.Handler.HasNotifiers()
hasHook := util.IsStringInSlice(operation, Config.Actions.ExecuteOn)
if !hasHook && !hasNotifiersPlugin {
return handleUnconfiguredPreAction(operation)
}
notification := newActionNotification(&conn.User, operation, filePath, virtualPath, "", "", "",
conn.protocol, remoteIP, conn.ID, fileSize, openFlags, nil)
return actionHandler.Handle(notification)
event = newActionNotification(&conn.User, operation, filePath, virtualPath, "", "", "",
conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, openFlags, nil)
if hasNotifiersPlugin {
plugin.Handler.NotifyFsEvent(event)
}
if !hasHook {
return handleUnconfiguredPreAction(operation)
}
return actionHandler.Handle(event)
}
// ExecuteActionNotification executes the defined hook, if any, for the specified action
func ExecuteActionNotification(conn *BaseConnection, operation, filePath, virtualPath, target, virtualTarget, sshCmd string,
fileSize int64, err error,
) {
remoteIP := conn.GetRemoteIP()
plugin.Handler.NotifyFsEvent(time.Now().UnixNano(), operation, conn.User.Username, filePath, target, sshCmd, conn.protocol,
remoteIP, virtualPath, virtualTarget, conn.ID, fileSize, err)
notification := newActionNotification(&conn.User, operation, filePath, virtualPath, target, virtualTarget, sshCmd,
conn.protocol, remoteIP, conn.ID, fileSize, 0, err)
if util.IsStringInSlice(operation, Config.Actions.ExecuteSync) {
actionHandler.Handle(notification) //nolint:errcheck
hasNotifiersPlugin := plugin.Handler.HasNotifiers()
hasHook := util.IsStringInSlice(operation, Config.Actions.ExecuteOn)
if !hasHook && !hasNotifiersPlugin {
return
}
notification := newActionNotification(&conn.User, operation, filePath, virtualPath, target, virtualTarget, sshCmd,
conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, 0, err)
if hasNotifiersPlugin {
plugin.Handler.NotifyFsEvent(notification)
}
go actionHandler.Handle(notification) //nolint:errcheck
if hasHook {
if util.IsStringInSlice(operation, Config.Actions.ExecuteSync) {
actionHandler.Handle(notification) //nolint:errcheck
return
}
go actionHandler.Handle(notification) //nolint:errcheck
}
}
// ActionHandler handles a notification for a Protocol Action.
type ActionHandler interface {
Handle(notification *ActionNotification) error
}
// ActionNotification defines a notification for a Protocol Action.
type ActionNotification struct {
Action string `json:"action"`
Username string `json:"username"`
Path string `json:"path"`
TargetPath string `json:"target_path,omitempty"`
VirtualPath string `json:"virtual_path"`
VirtualTargetPath string `json:"virtual_target_path,omitempty"`
SSHCmd string `json:"ssh_cmd,omitempty"`
FileSize int64 `json:"file_size,omitempty"`
FsProvider int `json:"fs_provider"`
Bucket string `json:"bucket,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
Status int `json:"status"`
Protocol string `json:"protocol"`
IP string `json:"ip"`
SessionID string `json:"session_id"`
Timestamp int64 `json:"timestamp"`
OpenFlags int `json:"open_flags,omitempty"`
Handle(notification *notifier.FsEvent) error
}
func newActionNotification(
@ -119,7 +117,7 @@ func newActionNotification(
fileSize int64,
openFlags int,
err error,
) *ActionNotification {
) *notifier.FsEvent {
var bucket, endpoint string
status := 1
@ -146,7 +144,7 @@ func newActionNotification(
status = 2
}
return &ActionNotification{
return &notifier.FsEvent{
Action: operation,
Username: user.Username,
Path: filePath,
@ -169,28 +167,29 @@ func newActionNotification(
type defaultActionHandler struct{}
func (h *defaultActionHandler) Handle(notification *ActionNotification) error {
if !util.IsStringInSlice(notification.Action, Config.Actions.ExecuteOn) {
func (h *defaultActionHandler) Handle(event *notifier.FsEvent) error {
if !util.IsStringInSlice(event.Action, Config.Actions.ExecuteOn) {
return errUnconfiguredAction
}
if Config.Actions.Hook == "" {
logger.Warn(notification.Protocol, "", "Unable to send notification, no hook is defined")
logger.Warn(event.Protocol, "", "Unable to send notification, no hook is defined")
return errNoHook
}
if strings.HasPrefix(Config.Actions.Hook, "http") {
return h.handleHTTP(notification)
return h.handleHTTP(event)
}
return h.handleCommand(notification)
return h.handleCommand(event)
}
func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) error {
func (h *defaultActionHandler) handleHTTP(event *notifier.FsEvent) error {
u, err := url.Parse(Config.Actions.Hook)
if err != nil {
logger.Warn(notification.Protocol, "", "Invalid hook %#v for operation %#v: %v", Config.Actions.Hook, notification.Action, err)
logger.Error(event.Protocol, "", "Invalid hook %#v for operation %#v: %v",
Config.Actions.Hook, event.Action, err)
return err
}
@ -198,7 +197,7 @@ func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) erro
respCode := 0
var b bytes.Buffer
_ = json.NewEncoder(&b).Encode(notification)
_ = json.NewEncoder(&b).Encode(event)
resp, err := httpclient.RetryablePost(Config.Actions.Hook, "application/json", &b)
if err == nil {
@ -210,16 +209,16 @@ func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) erro
}
}
logger.Debug(notification.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
notification.Action, u.Redacted(), respCode, time.Since(startTime), err)
logger.Debug(event.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
event.Action, u.Redacted(), respCode, time.Since(startTime), err)
return err
}
func (h *defaultActionHandler) handleCommand(notification *ActionNotification) error {
func (h *defaultActionHandler) handleCommand(event *notifier.FsEvent) error {
if !filepath.IsAbs(Config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %#v", Config.Actions.Hook)
logger.Warn(notification.Protocol, "", "unable to execute notification command: %v", err)
logger.Warn(event.Protocol, "", "unable to execute notification command: %v", err)
return err
}
@ -228,35 +227,35 @@ func (h *defaultActionHandler) handleCommand(notification *ActionNotification) e
defer cancel()
cmd := exec.CommandContext(ctx, Config.Actions.Hook)
cmd.Env = append(os.Environ(), notificationAsEnvVars(notification)...)
cmd.Env = append(os.Environ(), notificationAsEnvVars(event)...)
startTime := time.Now()
err := cmd.Run()
logger.Debug(notification.Protocol, "", "executed command %#v, elapsed: %v, error: %v",
logger.Debug(event.Protocol, "", "executed command %#v, elapsed: %v, error: %v",
Config.Actions.Hook, time.Since(startTime), err)
return err
}
func notificationAsEnvVars(notification *ActionNotification) []string {
func notificationAsEnvVars(event *notifier.FsEvent) []string {
return []string{
fmt.Sprintf("SFTPGO_ACTION=%v", notification.Action),
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", notification.Username),
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", notification.Path),
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", notification.TargetPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_PATH=%v", notification.VirtualPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_TARGET=%v", notification.VirtualTargetPath),
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", notification.SSHCmd),
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", notification.FileSize),
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", notification.FsProvider),
fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", notification.Bucket),
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", notification.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", notification.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", notification.Protocol),
fmt.Sprintf("SFTPGO_ACTION_IP=%v", notification.IP),
fmt.Sprintf("SFTPGO_ACTION_SESSION_ID=%v", notification.SessionID),
fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", notification.OpenFlags),
fmt.Sprintf("SFTPGO_ACTION_TIMESTAMP=%v", notification.Timestamp),
fmt.Sprintf("SFTPGO_ACTION=%v", event.Action),
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", event.Username),
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", event.Path),
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", event.TargetPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_PATH=%v", event.VirtualPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_TARGET=%v", event.VirtualTargetPath),
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", event.SSHCmd),
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", event.FileSize),
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", event.FsProvider),
fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", event.Bucket),
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", event.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", event.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", event.Protocol),
fmt.Sprintf("SFTPGO_ACTION_IP=%v", event.IP),
fmt.Sprintf("SFTPGO_ACTION_SESSION_ID=%v", event.SessionID),
fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", event.OpenFlags),
fmt.Sprintf("SFTPGO_ACTION_TIMESTAMP=%v", event.Timestamp),
}
}

@ -11,10 +11,12 @@ import (
"github.com/lithammer/shortuuid/v3"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -26,24 +28,24 @@ func TestNewActionNotification(t *testing.T) {
}
user.FsConfig.Provider = sdk.LocalFilesystemProvider
user.FsConfig.S3Config = vfs.S3FsConfig{
S3FsConfig: sdk.S3FsConfig{
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: "s3bucket",
Endpoint: "endpoint",
},
}
user.FsConfig.GCSConfig = vfs.GCSFsConfig{
GCSFsConfig: sdk.GCSFsConfig{
BaseGCSFsConfig: sdk.BaseGCSFsConfig{
Bucket: "gcsbucket",
},
}
user.FsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
AzBlobFsConfig: sdk.AzBlobFsConfig{
BaseAzBlobFsConfig: sdk.BaseAzBlobFsConfig{
Container: "azcontainer",
Endpoint: "azendpoint",
},
}
user.FsConfig.SFTPConfig = vfs.SFTPFsConfig{
SFTPFsConfig: sdk.SFTPFsConfig{
BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
Endpoint: "sftpendpoint",
},
}
@ -146,6 +148,8 @@ func TestActionCMD(t *testing.T) {
c := NewBaseConnection("id", ProtocolSFTP, "", "", *user)
ExecuteActionNotification(c, OperationSSHCmd, "path", "vpath", "target", "vtarget", "sha1sum", 0, nil)
ExecuteActionNotification(c, operationDownload, "path", "vpath", "", "", "", 0, nil)
Config.Actions = actionsCopy
}
@ -235,11 +239,42 @@ func TestPreDeleteAction(t *testing.T) {
Config.Actions = actionsCopy
}
func TestUnconfiguredHook(t *testing.T) {
actionsCopy := Config.Actions
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationDownload},
Hook: "",
}
pluginsConfig := []plugin.Config{
{
Type: "notifier",
},
}
err := plugin.Initialize(pluginsConfig, true)
assert.Error(t, err)
assert.True(t, plugin.Handler.HasNotifiers())
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
err = ExecutePreAction(c, OperationPreDownload, "", "", 0, 0)
assert.NoError(t, err)
err = ExecutePreAction(c, operationPreDelete, "", "", 0, 0)
assert.ErrorIs(t, err, errUnconfiguredAction)
ExecuteActionNotification(c, operationDownload, "", "", "", "", "", 0, nil)
err = plugin.Initialize(nil, true)
assert.NoError(t, err)
assert.False(t, plugin.Handler.HasNotifiers())
Config.Actions = actionsCopy
}
type actionHandlerStub struct {
called bool
}
func (h *actionHandlerStub) Handle(notification *ActionNotification) error {
func (h *actionHandlerStub) Handle(event *notifier.FsEvent) error {
h.called = true
return nil
@ -253,7 +288,7 @@ func TestInitializeActionHandler(t *testing.T) {
InitializeActionHandler(&defaultActionHandler{})
})
err := actionHandler.Handle(&ActionNotification{})
err := actionHandler.Handle(&notifier.FsEvent{})
assert.NoError(t, err)
assert.True(t, handler.called)

@ -787,8 +787,10 @@ func (conns *ActiveConnections) checkIdles() {
toClose := true
for _, conn := range conns.connections {
if strings.Contains(conn.GetID(), idToMatch) {
toClose = false
break
if time.Since(conn.GetLastActivity()) <= Config.idleTimeoutAsDuration {
toClose = false
break
}
}
}
if toClose {
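
The hunk above tightens the check: a connection sharing the ID prefix now keeps the parent SSH connection open only if it was active within the idle timeout. A self-contained sketch of the pattern, with hypothetical types standing in for SFTPGo's connection interfaces:

```go
package example

import (
	"strings"
	"time"
)

// conn is a stand-in for SFTPGo's connection type; GetID/GetLastActivity
// in the hunk above map to these fields.
type conn struct {
	id           string
	lastActivity time.Time
}

// shouldClose reports whether the SSH connection matched by idToMatch can be
// closed: only a related channel active within idleTimeout keeps it open.
func shouldClose(conns []conn, idToMatch string, idleTimeout time.Duration) bool {
	for _, c := range conns {
		if strings.Contains(c.id, idToMatch) && time.Since(c.lastActivity) <= idleTimeout {
			return false
		}
	}
	return true
}
```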

@ -14,13 +14,13 @@ import (
"time"
"github.com/alexedwards/argon2id"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -731,9 +731,7 @@ func TestPostConnectHook(t *testing.T) {
func TestCryptoConvertFileInfo(t *testing.T) {
name := "name"
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), "", vfs.CryptFsConfig{
CryptFsConfig: sdk.CryptFsConfig{
Passphrase: kms.NewPlainSecret("secret"),
},
Passphrase: kms.NewPlainSecret("secret"),
})
require.NoError(t, err)
cryptFs := fs.(*vfs.CryptFs)
@ -772,9 +770,7 @@ func TestFolderCopy(t *testing.T) {
folder.FsConfig = vfs.Filesystem{
CryptConfig: vfs.CryptFsConfig{
CryptFsConfig: sdk.CryptFsConfig{
Passphrase: kms.NewPlainSecret("crypto secret"),
},
Passphrase: kms.NewPlainSecret("crypto secret"),
},
}
folderCopy = folder.GetACopy()
@ -835,7 +831,10 @@ func TestParseAllowedIPAndRanges(t *testing.T) {
}
func TestHideConfidentialData(t *testing.T) {
for _, provider := range sdk.ListProviders() {
for _, provider := range []sdk.FilesystemProvider{sdk.LocalFilesystemProvider,
sdk.CryptedFilesystemProvider, sdk.S3FilesystemProvider, sdk.GCSFilesystemProvider,
sdk.AzureBlobFilesystemProvider, sdk.SFTPFilesystemProvider,
} {
u := dataprovider.User{
FsConfig: vfs.Filesystem{
Provider: provider,

@ -12,10 +12,10 @@ import (
ftpserver "github.com/fclairamb/ftpserverlib"
"github.com/pkg/sftp"
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)

@ -10,11 +10,11 @@ import (
"github.com/pkg/sftp"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -419,12 +419,12 @@ func TestCheckParentDirsErrors(t *testing.T) {
FsConfig: vfs.Filesystem{
Provider: sdk.S3FilesystemProvider,
S3Config: vfs.S3FsConfig{
S3FsConfig: sdk.S3FsConfig{
Bucket: "buck",
Region: "us-east-1",
AccessKey: "key",
AccessSecret: kms.NewPlainSecret("s3secret"),
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: "buck",
Region: "us-east-1",
AccessKey: "key",
},
AccessSecret: kms.NewPlainSecret("s3secret"),
},
},
}

@ -8,11 +8,11 @@ import (
"testing"
"time"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/smtp"
)

@ -197,9 +197,10 @@ func TestBasicDbDefender(t *testing.T) {
assert.Equal(t, 1, host.Score)
assert.True(t, host.BanTime.IsZero())
assert.Empty(t, host.GetBanTime())
// cleanup db
err = dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(time.Now().Add(10 * time.Minute)))
assert.NoError(t, err)
// set a negative observation time so the from field in the queries will be in the future
// we still should get the banned hosts
defender.config.ObservationTime = -2
assert.Greater(t, defender.getStartObservationTime(), time.Now().UnixMilli())
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
@ -208,6 +209,22 @@ func TestBasicDbDefender(t *testing.T) {
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
}
_, err = defender.GetHost(testIP)
assert.NoError(t, err)
// cleanup db
err = dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(time.Now().Add(10 * time.Minute)))
assert.NoError(t, err)
// the banned host must still be there
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, testIP, hosts[0].IP)
assert.Equal(t, 0, hosts[0].Score)
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
}
_, err = defender.GetHost(testIP)
assert.NoError(t, err)
err = dataprovider.SetDefenderBanTime(testIP, util.GetTimeAsMsSinceEpoch(time.Now().Add(-1*time.Minute)))
assert.NoError(t, err)
err = dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(time.Now().Add(10 * time.Minute)))

@ -25,6 +25,7 @@ import (
"github.com/pquerna/otp/totp"
"github.com/rs/xid"
"github.com/rs/zerolog"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/ssh"
@ -37,7 +38,6 @@ import (
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -2758,7 +2758,7 @@ func TestBuiltinKeyboardInteractiveAuthentication(t *testing.T) {
configName, _, secret, _, err := mfa.GenerateTOTPSecret(mfa.GetAvailableTOTPConfigNames()[0], user.Username)
assert.NoError(t, err)
user.Password = defaultPassword
user.Filters.TOTPConfig = sdk.TOTPConfig{
user.Filters.TOTPConfig = dataprovider.UserTOTPConfig{
Enabled: true,
ConfigName: configName,
Secret: kms.NewPlainSecret(secret),
@ -2926,11 +2926,11 @@ func TestSFTPLoopError(t *testing.T) {
FsConfig: vfs.Filesystem{
Provider: sdk.SFTPFilesystemProvider,
SFTPConfig: vfs.SFTPFsConfig{
SFTPFsConfig: sdk.SFTPFsConfig{
BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
Endpoint: sftpServerAddr,
Username: user2.Username,
Password: kms.NewPlainSecret(defaultPassword),
},
Password: kms.NewPlainSecret(defaultPassword),
},
},
},
@ -2939,11 +2939,11 @@ func TestSFTPLoopError(t *testing.T) {
user2.FsConfig.Provider = sdk.SFTPFilesystemProvider
user2.FsConfig.SFTPConfig = vfs.SFTPFsConfig{
SFTPFsConfig: sdk.SFTPFsConfig{
BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
Endpoint: sftpServerAddr,
Username: user1.Username,
Password: kms.NewPlainSecret(defaultPassword),
},
Password: kms.NewPlainSecret(defaultPassword),
}
user1, resp, err := httpdtest.AddUser(user1, http.StatusCreated)
@ -2995,11 +2995,11 @@ func TestNonLocalCrossRename(t *testing.T) {
FsConfig: vfs.Filesystem{
Provider: sdk.SFTPFilesystemProvider,
SFTPConfig: vfs.SFTPFsConfig{
SFTPFsConfig: sdk.SFTPFsConfig{
BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
Endpoint: sftpServerAddr,
Username: baseUser.Username,
Password: kms.NewPlainSecret(defaultPassword),
},
Password: kms.NewPlainSecret(defaultPassword),
},
},
},
@ -3014,9 +3014,7 @@ func TestNonLocalCrossRename(t *testing.T) {
FsConfig: vfs.Filesystem{
Provider: sdk.CryptedFilesystemProvider,
CryptConfig: vfs.CryptFsConfig{
CryptFsConfig: sdk.CryptFsConfig{
Passphrase: kms.NewPlainSecret(defaultPassword),
},
Passphrase: kms.NewPlainSecret(defaultPassword),
},
},
MappedPath: mappedPathCrypt,
@ -3117,9 +3115,7 @@ func TestNonLocalCrossRenameNonLocalBaseUser(t *testing.T) {
FsConfig: vfs.Filesystem{
Provider: sdk.CryptedFilesystemProvider,
CryptConfig: vfs.CryptFsConfig{
CryptFsConfig: sdk.CryptFsConfig{
Passphrase: kms.NewPlainSecret(defaultPassword),
},
Passphrase: kms.NewPlainSecret(defaultPassword),
},
},
MappedPath: mappedPathCrypt,

@ -7,12 +7,12 @@ import (
"testing"
"time"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -264,7 +264,7 @@ func TestTransferErrors(t *testing.T) {
func TestRemovePartialCryptoFile(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs, err := vfs.NewCryptFs("id", os.TempDir(), "", vfs.CryptFsConfig{CryptFsConfig: sdk.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")}})
fs, err := vfs.NewCryptFs("id", os.TempDir(), "", vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
require.NoError(t, err)
u := dataprovider.User{
BaseUser: sdk.BaseUser{

@ -19,7 +19,7 @@ import (
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/telemetry"
@ -39,10 +39,11 @@ const (
)
var (
globalConf globalConfig
defaultSFTPDBanner = fmt.Sprintf("SFTPGo_%v", version.Get().Version)
defaultFTPDBanner = fmt.Sprintf("SFTPGo %v ready", version.Get().Version)
defaultSFTPDBinding = sftpd.Binding{
globalConf globalConfig
defaultSFTPDBanner = fmt.Sprintf("SFTPGo_%v", version.Get().Version)
defaultFTPDBanner = fmt.Sprintf("SFTPGo %v ready", version.Get().Version)
defaultInstallCodeHint = "Installation code"
defaultSFTPDBinding = sftpd.Binding{
Address: "",
Port: 2022,
ApplyProxyConfig: true,
@ -70,7 +71,7 @@ var (
ProxyAllowed: nil,
}
defaultHTTPDBinding = httpd.Binding{
Address: "127.0.0.1",
Address: "",
Port: 8080,
EnableWebAdmin: true,
EnableWebClient: true,
@ -229,7 +230,7 @@ func Init() {
ConnectionString: "",
SQLTablesPrefix: "",
SSLMode: 0,
TrackQuota: 1,
TrackQuota: 2,
PoolSize: 0,
UsersBaseDir: "",
Actions: dataprovider.ObjectsActions{
@ -294,6 +295,10 @@ func Init() {
AllowCredentials: false,
MaxAge: 0,
},
Setup: httpd.SetupConfig{
InstallationCode: "",
InstallationCodeHint: defaultInstallCodeHint,
},
},
HTTPConfig: httpclient.Config{
Timeout: 20,
@ -316,7 +321,7 @@ func Init() {
TOTP: nil,
},
TelemetryConfig: telemetry.Conf{
BindPort: 10000,
BindPort: 0,
BindAddress: "127.0.0.1",
EnableProfiler: false,
AuthUserFile: "",
@ -480,6 +485,7 @@ func getRedactedGlobalConf() globalConfig {
conf.Common.DataRetentionHook = util.GetRedactedURL(conf.Common.DataRetentionHook)
conf.SFTPD.KeyboardInteractiveHook = util.GetRedactedURL(conf.SFTPD.KeyboardInteractiveHook)
conf.HTTPDConfig.SigningPassphrase = getRedactedPassword()
conf.HTTPDConfig.Setup.InstallationCode = getRedactedPassword()
conf.ProviderConf.Password = getRedactedPassword()
conf.ProviderConf.Actions.Hook = util.GetRedactedURL(conf.ProviderConf.Actions.Hook)
conf.ProviderConf.ExternalAuthHook = util.GetRedactedURL(conf.ProviderConf.ExternalAuthHook)
@ -522,6 +528,7 @@ func LoadConfig(configDir, configFile string) error {
logger.Warn(logSender, "", "error loading configuration file: %v", err)
logger.WarnToConsole("error loading configuration file: %v", err)
}
globalConf.MFAConfig.TOTP = []mfa.TOTPConfig{defaultTOTP}
}
err = viper.Unmarshal(&globalConf)
if err != nil {
@ -536,6 +543,7 @@ func LoadConfig(configDir, configFile string) error {
return nil
}
//nolint:gocyclo
func resetInvalidConfigs() {
if strings.TrimSpace(globalConf.SFTPD.Banner) == "" {
globalConf.SFTPD.Banner = defaultSFTPDBanner
@ -543,6 +551,9 @@ func resetInvalidConfigs() {
if strings.TrimSpace(globalConf.FTPD.Banner) == "" {
globalConf.FTPD.Banner = defaultFTPDBanner
}
if strings.TrimSpace(globalConf.HTTPDConfig.Setup.InstallationCodeHint) == "" {
globalConf.HTTPDConfig.Setup.InstallationCodeHint = defaultInstallCodeHint
}
if globalConf.ProviderConf.UsersBaseDir != "" && !util.IsFileInputValid(globalConf.ProviderConf.UsersBaseDir) {
warn := fmt.Sprintf("invalid users base dir %#v will be ignored", globalConf.ProviderConf.UsersBaseDir)
globalConf.ProviderConf.UsersBaseDir = ""
@ -1324,6 +1335,8 @@ func setViperDefaults() {
viper.SetDefault("httpd.cors.exposed_headers", globalConf.HTTPDConfig.Cors.ExposedHeaders)
viper.SetDefault("httpd.cors.allow_credentials", globalConf.HTTPDConfig.Cors.AllowCredentials)
viper.SetDefault("httpd.cors.max_age", globalConf.HTTPDConfig.Cors.MaxAge)
viper.SetDefault("httpd.setup.installation_code", globalConf.HTTPDConfig.Setup.InstallationCode)
viper.SetDefault("httpd.setup.installation_code_hint", globalConf.HTTPDConfig.Setup.InstallationCodeHint)
viper.SetDefault("http.timeout", globalConf.HTTPConfig.Timeout)
viper.SetDefault("http.retry_wait_min", globalConf.HTTPConfig.RetryWaitMin)
viper.SetDefault("http.retry_wait_max", globalConf.HTTPConfig.RetryWaitMax)

@ -7,6 +7,7 @@ import (
"strings"
"testing"
sdkkms "github.com/sftpgo/sdk/kms"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -17,9 +18,8 @@ import (
"github.com/drakkan/sftpgo/v2/ftpd"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/httpd"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
@ -67,6 +67,8 @@ func TestLoadConfigFileNotFound(t *testing.T) {
viper.SetConfigName("configfile")
err := config.LoadConfig(os.TempDir(), "")
assert.NoError(t, err)
mfaConf := config.GetMFAConfig()
assert.Len(t, mfaConf.TOTP, 1)
}
func TestEmptyBanner(t *testing.T) {
@ -250,6 +252,34 @@ func TestInvalidUsersBaseDir(t *testing.T) {
assert.NoError(t, err)
}
func TestInvalidInstallationHint(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
httpdConfig := config.GetHTTPDConfig()
httpdConfig.Setup = httpd.SetupConfig{
InstallationCode: "abc",
InstallationCodeHint: " ",
}
c := make(map[string]httpd.Conf)
c["httpd"] = httpdConfig
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
httpdConfig = config.GetHTTPDConfig()
assert.Equal(t, "abc", httpdConfig.Setup.InstallationCode)
assert.Equal(t, "Installation code", httpdConfig.Setup.InstallationCodeHint)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestDefenderProviderDriver(t *testing.T) {
if config.GetProviderConf().Driver != dataprovider.SQLiteDataProviderName {
t.Skip("this test is not supported with the current database provider")
@ -467,8 +497,8 @@ func TestPluginsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_PLUGINS__0__ARGS", "arg1,arg2")
os.Setenv("SFTPGO_PLUGINS__0__SHA256SUM", "0a71ded61fccd59c4f3695b51c1b3d180da8d2d77ea09ccee20dac242675c193")
os.Setenv("SFTPGO_PLUGINS__0__AUTO_MTLS", "1")
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__SCHEME", kms.SchemeAWS)
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__ENCRYPTED_STATUS", kms.SecretStatusAWS)
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__SCHEME", sdkkms.SchemeAWS)
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__ENCRYPTED_STATUS", sdkkms.SecretStatusAWS)
os.Setenv("SFTPGO_PLUGINS__0__AUTH_OPTIONS__SCOPE", "14")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_PLUGINS__0__TYPE")
@ -510,8 +540,8 @@ func TestPluginsFromEnv(t *testing.T) {
require.Equal(t, "arg2", pluginConf.Args[1])
require.Equal(t, "0a71ded61fccd59c4f3695b51c1b3d180da8d2d77ea09ccee20dac242675c193", pluginConf.SHA256Sum)
require.True(t, pluginConf.AutoMTLS)
require.Equal(t, kms.SchemeAWS, pluginConf.KMSOptions.Scheme)
require.Equal(t, kms.SecretStatusAWS, pluginConf.KMSOptions.EncryptedStatus)
require.Equal(t, sdkkms.SchemeAWS, pluginConf.KMSOptions.Scheme)
require.Equal(t, sdkkms.SecretStatusAWS, pluginConf.KMSOptions.EncryptedStatus)
require.Equal(t, 14, pluginConf.AuthOptions.Scope)
configAsJSON, err := json.Marshal(pluginsConf)
@ -524,8 +554,8 @@ func TestPluginsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_PLUGINS__0__CMD", "plugin_start_cmd1")
os.Setenv("SFTPGO_PLUGINS__0__ARGS", "")
os.Setenv("SFTPGO_PLUGINS__0__AUTO_MTLS", "0")
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__SCHEME", kms.SchemeVaultTransit)
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__ENCRYPTED_STATUS", kms.SecretStatusVaultTransit)
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__SCHEME", sdkkms.SchemeVaultTransit)
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__ENCRYPTED_STATUS", sdkkms.SecretStatusVaultTransit)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
pluginsConf = config.GetPluginsConfig()
@ -547,8 +577,8 @@ func TestPluginsFromEnv(t *testing.T) {
require.Len(t, pluginConf.Args, 0)
require.Equal(t, "0a71ded61fccd59c4f3695b51c1b3d180da8d2d77ea09ccee20dac242675c193", pluginConf.SHA256Sum)
require.False(t, pluginConf.AutoMTLS)
require.Equal(t, kms.SchemeVaultTransit, pluginConf.KMSOptions.Scheme)
require.Equal(t, kms.SecretStatusVaultTransit, pluginConf.KMSOptions.EncryptedStatus)
require.Equal(t, sdkkms.SchemeVaultTransit, pluginConf.KMSOptions.Scheme)
require.Equal(t, sdkkms.SecretStatusVaultTransit, pluginConf.KMSOptions.EncryptedStatus)
require.Equal(t, 14, pluginConf.AuthOptions.Scope)
err = os.Remove(configFilePath)
@ -1013,6 +1043,7 @@ func TestConfigFromEnv(t *testing.T) {
os.Setenv("SFTPGO_KMS__SECRETS__URL", "local")
os.Setenv("SFTPGO_KMS__SECRETS__MASTER_KEY_PATH", "path")
os.Setenv("SFTPGO_TELEMETRY__TLS_CIPHER_SUITES", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA")
os.Setenv("SFTPGO_HTTPD__SETUP__INSTALLATION_CODE", "123")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__0__PORT")
@ -1023,6 +1054,7 @@ func TestConfigFromEnv(t *testing.T) {
os.Unsetenv("SFTPGO_KMS__SECRETS__URL")
os.Unsetenv("SFTPGO_KMS__SECRETS__MASTER_KEY_PATH")
os.Unsetenv("SFTPGO_TELEMETRY__TLS_CIPHER_SUITES")
os.Unsetenv("SFTPGO_HTTPD__SETUP__INSTALLATION_CODE")
})
err := config.LoadConfig(".", "invalid config")
assert.NoError(t, err)
@ -1042,4 +1074,5 @@ func TestConfigFromEnv(t *testing.T) {
assert.Len(t, telemetryConfig.TLSCipherSuites, 2)
assert.Equal(t, "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", telemetryConfig.TLSCipherSuites[0])
assert.Equal(t, "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", telemetryConfig.TLSCipherSuites[1])
assert.Equal(t, "123", config.GetHTTPDConfig().Setup.InstallationCode)
}

View file

@ -11,9 +11,11 @@ import (
"strings"
"time"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
@ -33,7 +35,16 @@ const (
)
func executeAction(operation, executor, ip, objectType, objectName string, object plugin.Renderer) {
plugin.Handler.NotifyProviderEvent(time.Now().UnixNano(), operation, executor, objectType, objectName, ip, object)
if plugin.Handler.HasNotifiers() {
plugin.Handler.NotifyProviderEvent(&notifier.ProviderEvent{
Action: operation,
Username: executor,
ObjectType: objectType,
ObjectName: objectName,
IP: ip,
Timestamp: time.Now().UnixNano(),
}, object)
}
if config.Actions.Hook == "" {
return
}

View file

@ -18,7 +18,6 @@ import (
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
)
@ -52,14 +51,14 @@ var (
PermAdminViewEvents}
)
// TOTPConfig defines the time-based one time password configuration
type TOTPConfig struct {
// AdminTOTPConfig defines the time-based one time password configuration
type AdminTOTPConfig struct {
Enabled bool `json:"enabled,omitempty"`
ConfigName string `json:"config_name,omitempty"`
Secret *kms.Secret `json:"secret,omitempty"`
}
func (c *TOTPConfig) validate(username string) error {
func (c *AdminTOTPConfig) validate(username string) error {
if !c.Enabled {
c.ConfigName = ""
c.Secret = kms.NewEmptySecret()
@ -93,11 +92,11 @@ type AdminFilters struct {
// API key auth allows to impersonate this administrator with an API key
AllowAPIKeyAuth bool `json:"allow_api_key_auth,omitempty"`
// Time-based one time passwords configuration
TOTPConfig TOTPConfig `json:"totp_config,omitempty"`
TOTPConfig AdminTOTPConfig `json:"totp_config,omitempty"`
// Recovery codes to use if the user loses access to their second factor auth device.
// Each code can only be used once; you should use these codes to log in and disable or
// reset 2FA for your account
RecoveryCodes []sdk.RecoveryCode `json:"recovery_codes,omitempty"`
RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
}
// Admin defines a SFTPGo admin
@ -403,12 +402,12 @@ func (a *Admin) getACopy() Admin {
filters.TOTPConfig.ConfigName = a.Filters.TOTPConfig.ConfigName
filters.TOTPConfig.Secret = a.Filters.TOTPConfig.Secret.Clone()
copy(filters.AllowList, a.Filters.AllowList)
filters.RecoveryCodes = make([]sdk.RecoveryCode, 0)
filters.RecoveryCodes = make([]RecoveryCode, 0)
for _, code := range a.Filters.RecoveryCodes {
if code.Secret == nil {
code.Secret = kms.NewEmptySecret()
}
filters.RecoveryCodes = append(filters.RecoveryCodes, sdk.RecoveryCode{
filters.RecoveryCodes = append(filters.RecoveryCodes, RecoveryCode{
Secret: code.Secret.Clone(),
Used: code.Used,
})

View file

@ -40,6 +40,7 @@ import (
"github.com/alexedwards/argon2id"
"github.com/go-chi/render"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
passwordvalidator "github.com/wagslane/go-password-validator"
"golang.org/x/crypto/bcrypt"
"golang.org/x/crypto/pbkdf2"
@ -50,8 +51,7 @@ import (
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -457,6 +457,11 @@ func GetQuotaTracking() int {
return config.TrackQuota
}
// HasUsersBaseDir returns true if users base dir is set
func HasUsersBaseDir() bool {
return config.UsersBaseDir != ""
}
// Provider defines the interface that data providers must implement.
type Provider interface {
validateUserAndPass(username, password, ip, protocol string) (User, error)
@ -647,10 +652,13 @@ func validateSQLTablesPrefix() error {
sqlTableAdmins = config.SQLTablesPrefix + sqlTableAdmins
sqlTableAPIKeys = config.SQLTablesPrefix + sqlTableAPIKeys
sqlTableShares = config.SQLTablesPrefix + sqlTableShares
sqlTableDefenderEvents = config.SQLTablesPrefix + sqlTableDefenderEvents
sqlTableDefenderHosts = config.SQLTablesPrefix + sqlTableDefenderHosts
sqlTableSchemaVersion = config.SQLTablesPrefix + sqlTableSchemaVersion
providerLog(logger.LevelDebug, "sql table for users %#v, folders %#v folders mapping %#v admins %#v "+
"api keys %#v shares %#v schema version %#v", sqlTableUsers, sqlTableFolders, sqlTableFoldersMapping,
sqlTableAdmins, sqlTableAPIKeys, sqlTableShares, sqlTableSchemaVersion)
"api keys %#v shares %#v defender hosts %#v defender events %#v schema version %#v",
sqlTableUsers, sqlTableFolders, sqlTableFoldersMapping, sqlTableAdmins, sqlTableAPIKeys,
sqlTableShares, sqlTableDefenderHosts, sqlTableDefenderEvents, sqlTableSchemaVersion)
}
return nil
}
@ -1153,7 +1161,7 @@ func HasAdmin() bool {
// AddAdmin adds a new SFTPGo admin
func AddAdmin(admin *Admin, executor, ipAddress string) error {
admin.Filters.RecoveryCodes = nil
admin.Filters.TOTPConfig = TOTPConfig{
admin.Filters.TOTPConfig = AdminTOTPConfig{
Enabled: false,
}
err := provider.addAdmin(admin)
@ -1199,7 +1207,7 @@ func UserExists(username string) (User, error) {
// AddUser adds a new SFTPGo user.
func AddUser(user *User, executor, ipAddress string) error {
user.Filters.RecoveryCodes = nil
user.Filters.TOTPConfig = sdk.TOTPConfig{
user.Filters.TOTPConfig = UserTOTPConfig{
Enabled: false,
}
err := provider.addUser(user)
@ -1559,7 +1567,7 @@ func validateUserVirtualFolders(user *User) error {
return nil
}
func validateUserTOTPConfig(c *sdk.TOTPConfig, username string) error {
func validateUserTOTPConfig(c *UserTOTPConfig, username string) error {
if !c.Enabled {
c.ConfigName = ""
c.Secret = kms.NewEmptySecret()
@ -1745,10 +1753,20 @@ func validateIPFilters(user *User) error {
return nil
}
func validateBandwidthLimit(bl sdk.BandwidthLimit) error {
for _, source := range bl.Sources {
_, _, err := net.ParseCIDR(source)
if err != nil {
return util.NewValidationError(fmt.Sprintf("could not parse bandwidth limit source %#v: %v", source, err))
}
}
return nil
}
func validateBandwidthLimitFilters(user *User) error {
for idx, bandwidthLimit := range user.Filters.BandwidthLimits {
user.Filters.BandwidthLimits[idx].Sources = util.RemoveDuplicates(bandwidthLimit.Sources)
if err := bandwidthLimit.Validate(); err != nil {
if err := validateBandwidthLimit(bandwidthLimit); err != nil {
return err
}
if bandwidthLimit.DownloadBandwidth < 0 {
@ -2355,7 +2373,7 @@ func sendKeyboardAuthHTTPReq(url string, request *plugin.KeyboardAuthRequest) (*
}
func doBuiltinKeyboardInteractiveAuth(user *User, client ssh.KeyboardInteractiveChallenge, ip, protocol string) (int, error) {
answers, err := client(user.Username, "", []string{"Password: "}, []bool{false})
answers, err := client("", "", []string{"Password: "}, []bool{false})
if err != nil {
return 0, err
}
@ -2375,7 +2393,7 @@ func doBuiltinKeyboardInteractiveAuth(user *User, client ssh.KeyboardInteractive
user.Username, protocol, err)
return 0, err
}
answers, err = client(user.Username, "", []string{"Authentication code: "}, []bool{false})
answers, err = client("", "", []string{"Authentication code: "}, []bool{false})
if err != nil {
return 0, err
}
@ -2478,7 +2496,7 @@ func getKeyboardInteractiveAnswers(client ssh.KeyboardInteractiveChallenge, resp
user *User, ip, protocol string,
) ([]string, error) {
questions := response.Questions
answers, err := client(user.Username, response.Instruction, questions, response.Echos)
answers, err := client("", response.Instruction, questions, response.Echos)
if err != nil {
providerLog(logger.LevelInfo, "error getting interactive auth client response: %v", err)
return answers, err

View file

@ -197,6 +197,19 @@ func (s *Share) validatePaths() error {
if s.Scope == ShareScopeWrite && len(s.Paths) != 1 {
return util.NewValidationError("the write share scope requires exactly one path")
}
// check nested paths
if len(s.Paths) > 1 {
for idx := range s.Paths {
for innerIdx := range s.Paths {
if idx == innerIdx {
continue
}
if isVirtualDirOverlapped(s.Paths[idx], s.Paths[innerIdx], true) {
return util.NewGenericError("shared paths cannot be nested")
}
}
}
}
return nil
}

View file

@ -13,7 +13,6 @@ import (
"github.com/cockroachdb/cockroach-go/v2/crdb"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -1386,7 +1385,7 @@ func getUserFromDbRow(row sqlScanner) (User, error) {
user.Permissions = perms
}
if filters.Valid {
var userFilters sdk.UserFilters
var userFilters UserFilters
err = json.Unmarshal([]byte(filters.String), &userFilters)
if err == nil {
user.Filters = userFilters
@ -1412,19 +1411,6 @@ func getUserFromDbRow(row sqlScanner) (User, error) {
return user, nil
}
func sqlCommonCheckFolderExists(ctx context.Context, name string, dbHandle sqlQuerier) error {
var folderName string
q := checkFolderNameQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelError, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
row := stmt.QueryRowContext(ctx, name)
return row.Scan(&folderName)
}
func sqlCommonGetFolder(ctx context.Context, name string, dbHandle sqlQuerier) (vfs.BaseVirtualFolder, error) {
var folder vfs.BaseVirtualFolder
q := getFolderByNameQuery()
@ -1476,29 +1462,23 @@ func sqlCommonGetFolderByName(ctx context.Context, name string, dbHandle sqlQuer
}
func sqlCommonAddOrUpdateFolder(ctx context.Context, baseFolder *vfs.BaseVirtualFolder, usedQuotaSize int64,
usedQuotaFiles int, lastQuotaUpdate int64, dbHandle sqlQuerier) (vfs.BaseVirtualFolder, error) {
var folder vfs.BaseVirtualFolder
// FIXME: we could use an UPSERT here, this SELECT could be racy
err := sqlCommonCheckFolderExists(ctx, baseFolder.Name, dbHandle)
switch err {
case nil:
err = sqlCommonUpdateFolder(baseFolder, dbHandle)
if err != nil {
return folder, err
}
case sql.ErrNoRows:
baseFolder.UsedQuotaFiles = usedQuotaFiles
baseFolder.UsedQuotaSize = usedQuotaSize
baseFolder.LastQuotaUpdate = lastQuotaUpdate
err = sqlCommonAddFolder(baseFolder, dbHandle)
if err != nil {
return folder, err
}
default:
return folder, err
usedQuotaFiles int, lastQuotaUpdate int64, dbHandle sqlQuerier,
) error {
fsConfig, err := json.Marshal(baseFolder.FsConfig)
if err != nil {
return err
}
q := getUpsertFolderQuery()
stmt, err := dbHandle.PrepareContext(ctx, q)
if err != nil {
providerLog(logger.LevelError, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
return sqlCommonGetFolder(ctx, baseFolder.Name, dbHandle)
_, err = stmt.ExecContext(ctx, baseFolder.MappedPath, usedQuotaSize, usedQuotaFiles,
lastQuotaUpdate, baseFolder.Name, baseFolder.Description, string(fsConfig))
return err
}
func sqlCommonAddFolder(folder *vfs.BaseVirtualFolder, dbHandle sqlQuerier) error {
@ -1675,7 +1655,7 @@ func sqlCommonAddFolderMapping(ctx context.Context, user *User, folder *vfs.Virt
return err
}
defer stmt.Close()
_, err = stmt.ExecContext(ctx, folder.VirtualPath, folder.QuotaSize, folder.QuotaFiles, folder.ID, user.Username)
_, err = stmt.ExecContext(ctx, folder.VirtualPath, folder.QuotaSize, folder.QuotaFiles, folder.Name, user.Username)
return err
}
@ -1686,11 +1666,10 @@ func generateVirtualFoldersMapping(ctx context.Context, user *User, dbHandle sql
}
for idx := range user.VirtualFolders {
vfolder := &user.VirtualFolders[idx]
f, err := sqlCommonAddOrUpdateFolder(ctx, &vfolder.BaseVirtualFolder, 0, 0, 0, dbHandle)
err = sqlCommonAddOrUpdateFolder(ctx, &vfolder.BaseVirtualFolder, 0, 0, 0, dbHandle)
if err != nil {
return err
}
vfolder.BaseVirtualFolder = f
err = sqlCommonAddFolderMapping(ctx, user, vfolder, dbHandle)
if err != nil {
return err

View file

@ -59,8 +59,8 @@ INSERT INTO {{schema_version}} (version) VALUES (10);
"updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "description" text NULL,
"admin_id" integer NULL REFERENCES "{{admins}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"user_id" integer NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "api_keys" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "api_keys" ("user_id");
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
`
sqliteV11DownSQL = `DROP TABLE "{{api_keys}}";`
sqliteV12SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;

View file

@ -46,12 +46,12 @@ func getAddDefenderEventQuery() string {
}
func getDefenderHostsQuery() string {
return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE updated_at >= %v ORDER BY updated_at DESC LIMIT %v`,
return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE updated_at >= %v OR ban_time > 0 ORDER BY updated_at DESC LIMIT %v`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDefenderHostQuery() string {
return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE ip = %v AND updated_at >= %v`,
return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE ip = %v AND (updated_at >= %v OR ban_time > 0)`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
@ -338,10 +338,6 @@ func getFolderByNameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE name = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}
func checkFolderNameQuery() string {
return fmt.Sprintf(`SELECT name FROM %v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0])
}
func getAddFolderQuery() string {
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
@ -357,6 +353,20 @@ func getDeleteFolderQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableFolders, sqlPlaceholders[0])
}
func getUpsertFolderQuery() string {
if config.Driver == MySQLDataProviderName {
return fmt.Sprintf("INSERT INTO %v (`path`,`used_quota_size`,`used_quota_files`,`last_quota_update`,`name`,"+
"`description`,`filesystem`) VALUES (%v,%v,%v,%v,%v,%v,%v) ON DUPLICATE KEY UPDATE "+
"`path`=VALUES(`path`),`description`=VALUES(`description`),`filesystem`=VALUES(`filesystem`)",
sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6])
}
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v) ON CONFLICT (name) DO UPDATE SET path = EXCLUDED.path,description=EXCLUDED.description,
filesystem=EXCLUDED.filesystem`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}
func getClearFolderMappingQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE user_id = (SELECT id FROM %v WHERE username = %v)`, sqlTableFoldersMapping,
sqlTableUsers, sqlPlaceholders[0])
@ -364,8 +374,9 @@ func getClearFolderMappingQuery() string {
func getAddFolderMappingQuery() string {
return fmt.Sprintf(`INSERT INTO %v (virtual_path,quota_size,quota_files,folder_id,user_id)
VALUES (%v,%v,%v,%v,(SELECT id FROM %v WHERE username = %v))`, sqlTableFoldersMapping, sqlPlaceholders[0],
sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
VALUES (%v,%v,%v,(SELECT id FROM %v WHERE name = %v),(SELECT id FROM %v WHERE username = %v))`,
sqlTableFoldersMapping, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlTableFolders,
sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
}
func getFoldersQuery(order string) string {

View file

@ -15,10 +15,11 @@ import (
"strings"
"time"
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -79,9 +80,44 @@ var (
permsCreateAny = []string{PermUpload, PermCreateDirs}
)
// RecoveryCode defines a 2FA recovery code
type RecoveryCode struct {
Secret *kms.Secret `json:"secret"`
Used bool `json:"used,omitempty"`
}
// UserTOTPConfig defines the time-based one time password configuration
type UserTOTPConfig struct {
Enabled bool `json:"enabled,omitempty"`
ConfigName string `json:"config_name,omitempty"`
Secret *kms.Secret `json:"secret,omitempty"`
// TOTP will be required for the specified protocols.
// SSH protocol (SFTP/SCP/SSH commands) will ask for the TOTP passcode if the client uses keyboard interactive
// authentication.
// FTP has no standard way to support two-factor authentication; if you
// enable support for this protocol, you have to add the TOTP passcode after the password.
// For example, if your password is "password" and your one-time passcode is
// "123456", you have to use "password123456" as the password.
Protocols []string `json:"protocols,omitempty"`
}
// UserFilters defines additional restrictions for a user
// TODO: rename to UserOptions in v3
type UserFilters struct {
sdk.BaseUserFilters
// Time-based one time passwords configuration
TOTPConfig UserTOTPConfig `json:"totp_config,omitempty"`
// Recovery codes to use if the user loses access to their second factor auth device.
// Each code can only be used once; you should use these codes to log in and disable or
// reset 2FA for your account
RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
}
// User defines a SFTPGo user
type User struct {
sdk.BaseUser
// Additional restrictions
Filters UserFilters `json:"filters"`
// Mapping between virtual paths and virtual folders
VirtualFolders []vfs.VirtualFolder `json:"virtual_folders,omitempty"`
// Filesystem configuration details
@ -308,16 +344,10 @@ func (u *User) IsTLSUsernameVerificationEnabled() bool {
// SetEmptySecrets sets to empty any user secret
func (u *User) SetEmptySecrets() {
u.FsConfig.S3Config.AccessSecret = kms.NewEmptySecret()
u.FsConfig.GCSConfig.Credentials = kms.NewEmptySecret()
u.FsConfig.AzBlobConfig.AccountKey = kms.NewEmptySecret()
u.FsConfig.AzBlobConfig.SASURL = kms.NewEmptySecret()
u.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
u.FsConfig.SFTPConfig.Password = kms.NewEmptySecret()
u.FsConfig.SFTPConfig.PrivateKey = kms.NewEmptySecret()
u.FsConfig.SetEmptySecrets()
for idx := range u.VirtualFolders {
folder := &u.VirtualFolders[idx]
folder.FsConfig.SetEmptySecretsIfNil()
folder.FsConfig.SetEmptySecrets()
}
u.Filters.TOTPConfig.Secret = kms.NewEmptySecret()
}
@ -525,19 +555,28 @@ func (u *User) AddVirtualDirs(list []os.FileInfo, virtualPath string) []os.FileI
return list
}
vdirs := make(map[string]bool)
for dir := range u.GetVirtualFoldersInPath(virtualPath) {
fi := vfs.NewFileInfo(dir, true, 0, time.Now(), false)
found := false
for index := range list {
if list[index].Name() == fi.Name() {
list[index] = fi
found = true
break
vdirs[path.Base(dir)] = true
}
if len(vdirs) == 0 {
return list
}
for index := range list {
for dir := range vdirs {
if list[index].Name() == dir {
if !list[index].IsDir() {
list[index] = vfs.NewFileInfo(dir, true, 0, time.Now(), false)
}
delete(vdirs, dir)
}
}
if !found {
list = append(list, fi)
}
}
for dir := range vdirs {
fi := vfs.NewFileInfo(dir, true, 0, time.Now(), false)
list = append(list, fi)
}
return list
}
@ -1168,7 +1207,7 @@ func (u *User) getACopy() User {
copy(perms, v)
permissions[k] = perms
}
filters := sdk.UserFilters{}
filters := UserFilters{}
filters.MaxUploadFileSize = u.Filters.MaxUploadFileSize
filters.TLSUsername = u.Filters.TLSUsername
filters.UserType = u.Filters.UserType
@ -1194,12 +1233,12 @@ func (u *User) getACopy() User {
filters.AllowAPIKeyAuth = u.Filters.AllowAPIKeyAuth
filters.WebClient = make([]string, len(u.Filters.WebClient))
copy(filters.WebClient, u.Filters.WebClient)
filters.RecoveryCodes = make([]sdk.RecoveryCode, 0, len(u.Filters.RecoveryCodes))
filters.RecoveryCodes = make([]RecoveryCode, 0, len(u.Filters.RecoveryCodes))
for _, code := range u.Filters.RecoveryCodes {
if code.Secret == nil {
code.Secret = kms.NewEmptySecret()
}
filters.RecoveryCodes = append(filters.RecoveryCodes, sdk.RecoveryCode{
filters.RecoveryCodes = append(filters.RecoveryCodes, RecoveryCode{
Secret: code.Secret.Clone(),
Used: code.Used,
})
@ -1238,12 +1277,12 @@ func (u *User) getACopy() User {
Status: u.Status,
ExpirationDate: u.ExpirationDate,
LastLogin: u.LastLogin,
Filters: filters,
AdditionalInfo: u.AdditionalInfo,
Description: u.Description,
CreatedAt: u.CreatedAt,
UpdatedAt: u.UpdatedAt,
},
Filters: filters,
VirtualFolders: virtualFolders,
FsConfig: u.FsConfig.GetACopy(),
}

View file

@ -4,11 +4,11 @@ SFTPGo provides an official Docker image, it is available on both [Docker Hub](h
## Supported tags and respective Dockerfile links
- [v2.2.1, v2.2, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.2.1/Dockerfile)
- [v2.2.1-alpine, v2.2-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.2.1/Dockerfile.alpine)
- [v2.2.1-slim, v2.2-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.2.1/Dockerfile)
- [v2.2.1-alpine-slim, v2.2-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.2.1/Dockerfile.alpine)
- [v2.2.1-distroless-slim, v2.2-distroless-slim, v2-distroless-slim, distroless-slim](https://github.com/drakkan/sftpgo/blob/v2.2.1/Dockerfile.distroless)
- [v2.2.3, v2.2, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile)
- [v2.2.3-alpine, v2.2-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.alpine)
- [v2.2.3-slim, v2.2-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile)
- [v2.2.3-alpine-slim, v2.2-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.alpine)
- [v2.2.3-distroless-slim, v2.2-distroless-slim, v2-distroless-slim, distroless-slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.distroless)
- [edge](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
- [edge-slim](../Dockerfile)

View file

@ -41,7 +41,7 @@ If the `hook` defines a path to an external program, then this program can read
- `SFTPGO_ACTION_FILE_SIZE`, non-zero for `pre-upload`, `upload`, `download` and `delete` actions if the file size is greater than `0`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3, SFTP and Azure backend if configured. For Azure this is the endpoint, if configured
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3, SFTP and Azure backend if configured
- `SFTPGO_ACTION_STATUS`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `DataRetention`
- `SFTPGO_ACTION_IP`, the action was executed from this IP address
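For illustration only, here is a minimal sketch of an external hook program that reads some of the variables listed above. Real hooks receive additional variables (action name, target paths and so on) that are not part of this hunk, and the exit-code handling below is just an example.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Minimal hook sketch: it only reads the environment variables documented in
// the hunk above. If a variable is unset, Atoi returns 0 and the error is
// ignored, which is acceptable for a sketch.
func main() {
	status, _ := strconv.Atoi(os.Getenv("SFTPGO_ACTION_STATUS")) // 1 = no error, 2 = generic error, 3 = quota exceeded
	protocol := os.Getenv("SFTPGO_ACTION_PROTOCOL")              // e.g. SFTP, SCP, FTP, DAV, HTTP
	ip := os.Getenv("SFTPGO_ACTION_IP")                          // client IP address
	size := os.Getenv("SFTPGO_ACTION_FILE_SIZE")                 // may be empty/zero for some actions

	if status > 1 {
		fmt.Fprintf(os.Stderr, "action failed: status=%d protocol=%s ip=%s size=%s\n", status, protocol, ip, size)
		os.Exit(1)
	}
	fmt.Printf("action ok: protocol=%s ip=%s size=%s\n", protocol, ip, size)
}
```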

View file

@ -55,8 +55,8 @@ The configuration file contains the following sections:
- `idle_timeout`, integer. Time in minutes after which an idle client will be disconnected. 0 means disabled. Default: 15
- `upload_mode` integer. 0 means standard: the files are uploaded directly to the requested path. 1 means atomic: files are uploaded to a temporary path and renamed to the requested path when the client ends the upload. Atomic mode avoids problems such as a web server that serves partial files when the files are being uploaded. In atomic mode, if there is an upload error, the temporary file is deleted and so the requested upload path will not contain a partial file. 2 means atomic with resume support: same as atomic but if there is an upload error, the temporary file is renamed to the requested path and not deleted. This way, a client can reconnect and resume the upload. Default: 0
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `pre-download`, `download`, `pre-upload`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`. Leave empty to disable actions.
- `execute_sync`, list of strings. Actions to be performed synchronously. The `pre-delete` action is always executed synchronously while the other ones are asynchronous. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook have completed its execution. Leave empty to execute only the `pre-delete` hook synchronously
- `execute_on`, list of strings. Valid values are `pre-download`, `download`, `pre-upload`, `upload`, `pre-delete`, `delete`, `rename`, `mkdir`, `rmdir`, `ssh_cmd`. Leave empty to disable actions.
- `execute_sync`, list of strings. Actions, defined in the `execute_on` list above, to be performed synchronously. The `pre-*` actions are always executed synchronously while the others are asynchronous. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook has completed its execution. Leave empty to execute only the defined `pre-*` hooks synchronously
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `setstat_mode`, integer. 0 means "normal mode": requests for changing permissions, owner/group and access/modification times are executed. 1 means "ignore mode": requests for changing permissions, owner/group and access/modification times are silently ignored. 2 means "ignore mode if not supported": requests for changing permissions and owner/group are silently ignored for cloud filesystems and executed for local/SFTP filesystem. Requests for changing modification times are always executed for local/SFTP filesystems and are executed for cloud based filesystems if the target is a file and there is a metadata plugin available. A metadata plugin can be found [here](https://github.com/sftpgo/sftpgo-plugin-metadata).
- `temp_path`, string. Defines the path for temporary files such as those used for atomic uploads or file pipes. If you set this option, make sure that the defined path exists, is writable by the user running SFTPGo, and is on the same filesystem as the users' home directories; otherwise the renaming for atomic uploads becomes a copy and may take a long time. The temporary files are not namespaced. The default is generally fine. Leave empty for the default.
@ -105,9 +105,9 @@ The configuration file contains the following sections:
- `max_auth_tries` integer. Maximum number of authentication attempts permitted per connection. If set to a negative number, the number of attempts is unlimited. If set to zero, the number of attempts is limited to 6.
- `banner`, string. Identification string used by the server. Leave empty to use the default banner. Default `SFTPGo_<version>`, for example `SSH-2.0-SFTPGo_0.9.5`
- `host_keys`, list of strings. It contains the daemon's private host keys. Each host key can be defined as a path relative to the configuration directory or an absolute one. If empty, the daemon will search or try to generate `id_rsa`, `id_ecdsa` and `id_ed25519` keys inside the configuration directory. If you configure absolute paths to files named `id_rsa`, `id_ecdsa` and/or `id_ed25519` then SFTPGo will try to generate these keys using the default settings.
- `kex_algorithms`, list of strings. Available KEX (Key Exchange) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/common.go#L46 "Supported kex algos")
- `ciphers`, list of strings. Allowed ciphers. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L28 "Supported ciphers")
- `macs`, list of strings. Available MAC (message authentication code) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L84 "Supported MACs")
- `kex_algorithms`, list of strings. Available KEX (Key Exchange) algorithms in preference order. Leave empty to use default values. The supported values are: `curve25519-sha256`, `curve25519-sha256@libssh.org`, `ecdh-sha2-nistp256`, `ecdh-sha2-nistp384`, `ecdh-sha2-nistp521`, `diffie-hellman-group14-sha256`, `diffie-hellman-group16-sha512`, `diffie-hellman-group18-sha512`, `diffie-hellman-group14-sha1`, `diffie-hellman-group1-sha1`. Default values: `curve25519-sha256`, `curve25519-sha256@libssh.org`, `ecdh-sha2-nistp256`, `ecdh-sha2-nistp384`, `ecdh-sha2-nistp521`, `diffie-hellman-group14-sha256`. SHA512 based KEXs are disabled by default because they are slow.
- `ciphers`, list of strings. Allowed ciphers in preference order. Leave empty to use default values. The supported values are: `aes128-gcm@openssh.com`, `aes256-gcm@openssh.com`, `chacha20-poly1305@openssh.com`, `aes128-ctr`, `aes192-ctr`, `aes256-ctr`, `aes128-cbc`, `aes192-cbc`, `aes256-cbc`, `3des-cbc`, `arcfour256`, `arcfour128`, `arcfour`. Default values: `aes128-gcm@openssh.com`, `aes256-gcm@openssh.com`, `chacha20-poly1305@openssh.com`, `aes128-ctr`, `aes192-ctr`, `aes256-ctr`. Please note that the ciphers disabled by default are insecure; if you enable them, you should expect that an active attacker can recover plaintext. A sketch showing how to restrict these algorithm lists follows this hunk.
- `macs`, list of strings. Available MAC (message authentication code) algorithms in preference order. Leave empty to use default values. The supported values are: `hmac-sha2-256-etm@openssh.com`, `hmac-sha2-256`, `hmac-sha2-512-etm@openssh.com`, `hmac-sha2-512`, `hmac-sha1`, `hmac-sha1-96`. Default values: `hmac-sha2-256-etm@openssh.com`, `hmac-sha2-256`.
- `trusted_user_ca_keys`, list of public keys paths of certificate authorities that are trusted to sign user certificates for authentication. The paths can be absolute or relative to the configuration directory.
- `login_banner_file`, path to the login banner file. The contents of the specified file, if any, are sent to the remote user before authentication is allowed. It can be a path relative to the config dir or an absolute one. Leave empty to disable login banner.
- `enabled_ssh_commands`, list of enabled SSH commands. `*` enables all supported commands. More information can be found [here](./ssh-commands.md).
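As a rough sketch of how the algorithm lists above could be restricted to a modern subset, the snippet below sets them through environment variables before loading the configuration. The `SFTPGO_SFTPD__*` variable names and the comma-separated list format are assumptions based on the `SFTPGO_<section>__<key>` pattern used elsewhere in this changeset; they are not taken verbatim from these docs.

```go
package main

import (
	"fmt"
	"os"

	"github.com/drakkan/sftpgo/v2/config"
)

// Sketch: keep only a modern subset of the supported KEXs, ciphers and MACs
// listed above. The SFTPGO_SFTPD__* names follow the documented
// SFTPGO_<section>__<key> convention and are assumptions, not verbatim docs.
func main() {
	os.Setenv("SFTPGO_SFTPD__KEX_ALGORITHMS", "curve25519-sha256,curve25519-sha256@libssh.org")
	os.Setenv("SFTPGO_SFTPD__CIPHERS", "aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com")
	os.Setenv("SFTPGO_SFTPD__MACS", "hmac-sha2-256-etm@openssh.com,hmac-sha2-256")

	if err := config.LoadConfig(".", ""); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("SSH algorithm lists restricted via environment variables")
}
```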
@ -132,7 +132,7 @@ The configuration file contains the following sections:
- `debug`, boolean. If enabled any FTP command will be logged. This will generate a lot of logs. Enable only if you are investigating a client compatibility issue or something similar. You shouldn't leave this setting enabled for production servers. Default `false`.
- `banner`, string. Greeting banner displayed when a connection first comes in. Leave empty to use the default banner. Default `SFTPGo <version> ready`, for example `SFTPGo 1.0.0-dev ready`.
- `banner_file`, path to the banner file. The contents of the specified file, if any, are displayed when someone connects to the server. It can be a path relative to the config dir or an absolute one. If set, it overrides the banner string provided by the `banner` option. Leave empty to disable.
- `active_transfers_port_non_20`, boolean. Do not impose the port 20 for active data transfers. Enabling this option allows to run SFTPGo with less privilege. Default: false.
- `active_transfers_port_non_20`, boolean. Do not impose port 20 for active data transfers. Enabling this option allows running SFTPGo with fewer privileges. Default: `true`.
- `passive_port_range`, struct containing the key `start` and `end`. Port Range for data connections. Random if not specified. Default range is 50000-50100.
- `disable_active_mode`, boolean. Set to `true` to disable active FTP, default `false`.
- `enable_site`, boolean. Set to true to enable the FTP SITE command. We support `chmod` and `symlink` if SITE support is enabled. Default `false`
@ -249,6 +249,9 @@ The configuration file contains the following sections:
- `exposed_headers`, list of strings.
- `allow_credentials` boolean.
- `max_age`, integer.
- `setup` struct containing configurations for the initial setup screen
- `installation_code`, string. If set, this installation code will be required when creating the first admin account. Please note that, even if set using an environment variable, this field is read at SFTPGo startup and not at runtime. This is not a license key or similar; its purpose is to prevent anyone who can access the initial setup screen from creating an admin user (see the sketch after this hunk). Default: blank.
- `installation_code_hint`, string. Description for the installation code input field. Default: `Installation code`.
- **"telemetry"**, the configuration for the telemetry server, more details [below](#telemetry-server)
- `bind_port`, integer. The port used for serving HTTP requests. Set to 0 to disable HTTP server. Default: 0
- `bind_address`, string. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1"
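A minimal sketch of the setup options in action, mirroring the config tests earlier in this diff (the environment variable name and the accessors are taken from those tests):

```go
package main

import (
	"fmt"
	"os"

	"github.com/drakkan/sftpgo/v2/config"
)

// Sketch: require an installation code for the initial setup screen.
// SFTPGO_HTTPD__SETUP__INSTALLATION_CODE and config.GetHTTPDConfig() are the
// same names used by the config tests in this changeset.
func main() {
	os.Setenv("SFTPGO_HTTPD__SETUP__INSTALLATION_CODE", "123")

	if err := config.LoadConfig(".", ""); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	setup := config.GetHTTPDConfig().Setup
	fmt.Printf("installation code required: %v, hint: %q\n", setup.InstallationCode != "", setup.InstallationCodeHint)
}
```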
@ -370,6 +373,8 @@ $ getcap /usr/bin/sftpgo
Now you can use privileged ports such as 21, 22, 443, etc. without running the SFTPGo service as the root user. You have to set the `cap_net_bind_service` capability each time you update the `sftpgo` binary.
The "official" deb/rpm packages attempt to set the `cap_net_bind_service` capability in their `postinstall` scripts.
An alternative method is to use `iptables`. For example, you can run the SFTP service on port `2022` and redirect traffic from port `22` to port `2022`:
```shell

View file

@ -33,6 +33,6 @@ In theory, because the plugin interface is HTTP, you could even develop a plugin
Developing a plugin is simple. All you need to write a plugin are basic command-line skills and a basic knowledge of the [Go programming language](http://golang.org/).
Your plugin implementation needs to satisfy the interface for the plugin type you want to build. You can find these definitions in the [docs](https://pkg.go.dev/github.com/drakkan/sftpgo/v2/sdk/plugin#section-directories).
Your plugin implementation needs to satisfy the interface for the plugin type you want to build. You can find these definitions in the [docs](https://pkg.go.dev/github.com/sftpgo/sdk/plugin#section-directories).
The SFTPGo plugin system uses the HashiCorp [go-plugin](https://github.com/hashicorp/go-plugin) library. Please refer to its documentation for more in-depth information on writing plugins.
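To give a flavour of the data a notifier plugin receives, here is a minimal sketch that only builds the provider event payload used elsewhere in this changeset; the go-plugin serving boilerplate and the plugin interface implementation are omitted, and the field values are made up for illustration.

```go
package main

import (
	"fmt"
	"time"

	"github.com/sftpgo/sdk/plugin/notifier"
)

// Sketch: the provider event payload passed to notifier plugins, limited to
// the fields that appear in this changeset. The values are illustrative only.
func main() {
	event := &notifier.ProviderEvent{
		Action:     "add",
		Username:   "admin",
		ObjectType: "user",
		ObjectName: "user1",
		IP:         "127.0.0.1",
		Timestamp:  time.Now().UnixNano(),
	}
	fmt.Printf("provider event: %+v\n", *event)
}
```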

View file

@ -28,7 +28,7 @@ The password and the private key are stored as ciphertext according to your [KMS
SHA256 fingerprints for remote server host keys are optional but highly recommended: if you provide one or more fingerprints, the server host key will be verified against them and the connection will be denied if none of them match.
Specifying a prefix you can restrict all operations to a given path within the remote SFTP server.
By specifying a prefix, you can restrict all operations to a given path within the remote SFTP server. If you set a prefix, make sure it is not a symlink itself and is not inside a symlinked directory.
Buffering can be enabled by setting a buffer size (in MB) greater than 0. With buffering enabled, reads and writes from/to the remote SFTP server are split into multiple concurrent requests, which allows data to be transferred at a faster rate over high-latency networks by overlapping round-trip times. With buffering enabled, resuming uploads and truncate are not supported, and a file cannot be opened for both reading and writing at the same time. 0 means disabled.

View file

@ -13,13 +13,13 @@ import (
"time"
"github.com/minio/sio"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpdtest"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
)
func TestBasicFTPHandlingCryptFs(t *testing.T) {

View file

@ -257,7 +257,7 @@ func (c *Configuration) ShouldBind() bool {
// Initialize configures and starts the FTP server
func (c *Configuration) Initialize(configDir string) error {
logger.Debug(logSender, "", "initializing FTP server with config %+v", *c)
logger.Info(logSender, "", "initializing FTP server with config %+v", *c)
if !c.ShouldBind() {
return common.ErrNoBinding
}

View file

@ -24,6 +24,8 @@ import (
"github.com/pquerna/otp"
"github.com/pquerna/otp/totp"
"github.com/rs/zerolog"
"github.com/sftpgo/sdk"
sdkkms "github.com/sftpgo/sdk/kms"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -35,7 +37,6 @@ import (
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -604,7 +605,7 @@ func TestMultiFactorAuth(t *testing.T) {
configName, _, secret, _, err := mfa.GenerateTOTPSecret(mfa.GetAvailableTOTPConfigNames()[0], user.Username)
assert.NoError(t, err)
user.Password = defaultPassword
user.Filters.TOTPConfig = sdk.TOTPConfig{
user.Filters.TOTPConfig = dataprovider.UserTOTPConfig{
Enabled: true,
ConfigName: configName,
Secret: kms.NewPlainSecret(secret),
@ -1686,7 +1687,7 @@ func TestLoginWithDatabaseCredentials(t *testing.T) {
user, _, err := httpdtest.AddUser(u, http.StatusCreated)
assert.NoError(t, err)
assert.Equal(t, kms.SecretStatusSecretBox, user.FsConfig.GCSConfig.Credentials.GetStatus())
assert.Equal(t, sdkkms.SecretStatusSecretBox, user.FsConfig.GCSConfig.Credentials.GetStatus())
assert.NotEmpty(t, user.FsConfig.GCSConfig.Credentials.GetPayload())
assert.Empty(t, user.FsConfig.GCSConfig.Credentials.GetAdditionalData())
assert.Empty(t, user.FsConfig.GCSConfig.Credentials.GetKey())
@ -2808,9 +2809,7 @@ func TestNestedVirtualFolders(t *testing.T) {
FsConfig: vfs.Filesystem{
Provider: sdk.CryptedFilesystemProvider,
CryptConfig: vfs.CryptFsConfig{
CryptFsConfig: sdk.CryptFsConfig{
Passphrase: kms.NewPlainSecret(defaultPassword),
},
Passphrase: kms.NewPlainSecret(defaultPassword),
},
},
MappedPath: mappedPathCrypt,

View file

@ -14,12 +14,12 @@ import (
"github.com/eikenb/pipeat"
ftpserver "github.com/fclairamb/ftpserverlib"
"github.com/pires/go-proxyproto"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -262,6 +262,8 @@ func (cc mockFTPClientContext) Path() string {
return ""
}
func (cc mockFTPClientContext) SetPath(value string) {}
func (cc mockFTPClientContext) SetDebug(debug bool) {}
func (cc mockFTPClientContext) Debug() bool {

go.mod (142 changed lines)
View file

@ -3,16 +3,16 @@ module github.com/drakkan/sftpgo/v2
go 1.17
require (
cloud.google.com/go/storage v1.18.2
github.com/Azure/azure-storage-blob-go v0.14.0
cloud.google.com/go/storage v1.22.0
github.com/Azure/azure-storage-blob-go v0.15.0
github.com/GehirnInc/crypt v0.0.0-20200316065508-bb7000b8a962
github.com/alexedwards/argon2id v0.0.0-20211130144151-3585854a6387
github.com/aws/aws-sdk-go v1.42.25
github.com/cockroachdb/cockroach-go/v2 v2.2.5
github.com/eikenb/pipeat v0.0.0-20210603033007-44fc3ffce52b
github.com/fclairamb/ftpserverlib v0.17.0
github.com/fclairamb/go-log v0.2.0
github.com/go-chi/chi/v5 v5.0.7
github.com/aws/aws-sdk-go v1.44.8
github.com/cockroachdb/cockroach-go/v2 v2.2.8
github.com/eikenb/pipeat v0.0.0-20210730190139-06b3e6902001
github.com/fclairamb/ftpserverlib v0.18.0
github.com/fclairamb/go-log v0.3.0
github.com/go-chi/chi/v5 v5.0.8-0.20220103230436-7dbe9a0bd10f
github.com/go-chi/jwtauth/v5 v5.0.2
github.com/go-chi/render v1.0.1
github.com/go-sql-driver/mysql v1.6.0
@ -20,126 +20,126 @@ require (
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
github.com/google/uuid v1.3.0
github.com/grandcat/zeroconf v1.0.0
github.com/hashicorp/go-hclog v1.0.0
github.com/hashicorp/go-plugin v1.4.3
github.com/hashicorp/go-retryablehttp v0.7.0
github.com/hashicorp/go-hclog v1.2.0
github.com/hashicorp/go-plugin v1.4.4
github.com/hashicorp/go-retryablehttp v0.7.1
github.com/jlaffaye/ftp v0.0.0-20201112195030-9aae4d151126
github.com/klauspost/compress v1.13.6
github.com/lestrrat-go/jwx v1.2.14
github.com/lib/pq v1.10.4
github.com/klauspost/compress v1.15.3
github.com/lestrrat-go/jwx v1.2.24
github.com/lib/pq v1.10.5
github.com/lithammer/shortuuid/v3 v3.0.7
github.com/mattn/go-sqlite3 v1.14.10
github.com/mattn/go-sqlite3 v1.14.12
github.com/mhale/smtpd v0.8.0
github.com/minio/sio v0.3.0
github.com/otiai10/copy v1.7.0
github.com/pires/go-proxyproto v0.6.1
github.com/pkg/sftp v1.13.5-0.20211217081921-1849af66afae
github.com/pires/go-proxyproto v0.6.2
github.com/pkg/sftp v1.13.5-0.20220303113417-dcfc1d5e4162
github.com/pquerna/otp v1.3.0
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/client_golang v1.12.1
github.com/rs/cors v1.8.2
github.com/rs/xid v1.3.0
github.com/rs/zerolog v1.26.2-0.20211219225053-665519c4da50
github.com/shirou/gopsutil/v3 v3.21.11
github.com/spf13/afero v1.7.1
github.com/spf13/cobra v1.3.0
github.com/spf13/viper v1.10.1
github.com/stretchr/testify v1.7.0
github.com/rs/xid v1.4.0
github.com/rs/zerolog v1.26.2-0.20220312163309-e9344a8c507b
github.com/sftpgo/sdk v0.1.0
github.com/shirou/gopsutil/v3 v3.22.4
github.com/spf13/afero v1.8.2
github.com/spf13/cobra v1.4.0
github.com/spf13/viper v1.11.0
github.com/stretchr/testify v1.7.1
github.com/studio-b12/gowebdav v0.0.0-20211106090535-29e74efa701f
github.com/wagslane/go-password-validator v0.3.0
github.com/xhit/go-simple-mail/v2 v2.10.0
github.com/xhit/go-simple-mail/v2 v2.11.0
github.com/yl2chen/cidranger v1.0.3-0.20210928021809-d1cb2c52f37a
go.etcd.io/bbolt v1.3.6
go.uber.org/automaxprocs v1.4.0
gocloud.dev v0.24.0
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3
golang.org/x/net v0.0.0-20211209124913-491a49abca63
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e
golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11
google.golang.org/api v0.63.0
google.golang.org/grpc v1.43.0
google.golang.org/protobuf v1.27.1
go.uber.org/automaxprocs v1.5.1
gocloud.dev v0.25.0
golang.org/x/crypto v0.0.0-20220427172511-eb4f295cb31f
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6
golang.org/x/time v0.0.0-20220411224347-583f2d630306
google.golang.org/api v0.78.0
gopkg.in/natefinch/lumberjack.v2 v2.0.0
)
require (
cloud.google.com/go v0.99.0 // indirect
cloud.google.com/go v0.101.1 // indirect
cloud.google.com/go/compute v1.6.1 // indirect
cloud.google.com/go/iam v0.3.0 // indirect
github.com/Azure/azure-pipeline-go v0.2.3 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/boombuler/barcode v1.0.1 // indirect
github.com/cenkalti/backoff v2.2.1+incompatible // indirect
github.com/census-instrumentation/opencensus-proto v0.3.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4 // indirect
github.com/cncf/xds/go v0.0.0-20211216145620-d92e9ce0af51 // indirect
github.com/coreos/go-systemd/v22 v22.3.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.1 // indirect
github.com/coreos/go-systemd/v22 v22.3.3-0.20220203105225-a9a7ef127534 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 // indirect
github.com/envoyproxy/go-control-plane v0.10.1 // indirect
github.com/envoyproxy/protoc-gen-validate v0.6.2 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/fsnotify/fsnotify v1.5.4 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/goccy/go-json v0.8.1 // indirect
github.com/go-test/deep v1.0.8 // indirect
github.com/goccy/go-json v0.9.7 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-cmp v0.5.6 // indirect
github.com/googleapis/gax-go/v2 v2.1.1 // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/googleapis/gax-go/v2 v2.3.0 // indirect
github.com/googleapis/go-type-adapters v1.0.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/hashicorp/yamux v0.0.0-20211028200310-0bc27b27de87 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/klauspost/cpuid/v2 v2.0.9 // indirect
github.com/klauspost/cpuid/v2 v2.0.12 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/lestrrat-go/backoff/v2 v2.0.8 // indirect
github.com/lestrrat-go/blackmagic v1.0.0 // indirect
github.com/lestrrat-go/httpcc v1.0.0 // indirect
github.com/lestrrat-go/iter v1.0.1 // indirect
github.com/lestrrat-go/blackmagic v1.0.1 // indirect
github.com/lestrrat-go/httpcc v1.0.1 // indirect
github.com/lestrrat-go/iter v1.0.2 // indirect
github.com/lestrrat-go/option v1.0.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/magiconair/properties v1.8.5 // indirect
github.com/lufia/plan9stats v0.0.0-20220326011226-f1430873d8db // indirect
github.com/magiconair/properties v1.8.6 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-ieproxy v0.0.1 // indirect
github.com/mattn/go-ieproxy v0.0.6 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/miekg/dns v1.1.45 // indirect
github.com/miekg/dns v1.1.48 // indirect
github.com/minio/sha256-simd v1.0.0 // indirect
github.com/mitchellh/go-testing-interface v1.14.1 // indirect
github.com/mitchellh/mapstructure v1.4.3 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/oklog/run v1.1.0 // indirect
github.com/pelletier/go-toml v1.9.4 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/pelletier/go-toml/v2 v2.0.0 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/power-devops/perfstat v0.0.0-20220216144756-c35f1ee13d7c // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/common v0.34.0 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/spf13/cast v1.4.1 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/tklauser/go-sysconf v0.3.9 // indirect
github.com/tklauser/numcpus v0.3.0 // indirect
github.com/tklauser/go-sysconf v0.3.10 // indirect
github.com/tklauser/numcpus v0.4.0 // indirect
github.com/toorop/go-dkim v0.0.0-20201103131630-e1cd1a0a5208 // indirect
github.com/yusufpapurcu/wmi v1.2.2 // indirect
go.opencensus.io v0.23.0 // indirect
golang.org/x/mod v0.5.1 // indirect
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 // indirect
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/tools v0.1.8 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
golang.org/x/tools v0.1.10 // indirect
golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20211223182754-3ac035c7e7cb // indirect
gopkg.in/ini.v1 v1.66.2 // indirect
google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3 // indirect
google.golang.org/grpc v1.46.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/ini.v1 v1.66.4 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
)
replace (
github.com/eikenb/pipeat => github.com/drakkan/pipeat v0.0.0-20210805162858-70e57fa8a639
github.com/jlaffaye/ftp => github.com/drakkan/ftp v0.0.0-20201114075148-9b9adce499a9
golang.org/x/crypto => github.com/drakkan/crypto v0.0.0-20211216170250-0a05a5747f0f
golang.org/x/net => github.com/drakkan/net v0.0.0-20211210172952-3f0f9446f73f
golang.org/x/crypto => github.com/drakkan/crypto v0.0.0-20220430103812-3102e77e562a
golang.org/x/net => github.com/drakkan/net v0.0.0-20220430103631-b41bb3940f13
)

go.sum (591 changed lines)

File diff suppressed because it is too large

View file

@ -83,7 +83,7 @@ func disableAdmin2FA(w http.ResponseWriter, r *http.Request) {
return
}
admin.Filters.RecoveryCodes = nil
admin.Filters.TOTPConfig = dataprovider.TOTPConfig{
admin.Filters.TOTPConfig = dataprovider.AdminTOTPConfig{
Enabled: false,
}
if err := dataprovider.UpdateAdmin(&admin, claims.Username, util.GetIPFromRemoteAddress(r.RemoteAddr)); err != nil {
@ -105,7 +105,7 @@ func updateAdmin(w http.ResponseWriter, r *http.Request) {
adminID := admin.ID
totpConfig := admin.Filters.TOTPConfig
recoveryCodes := admin.Filters.RecoveryCodes
admin.Filters.TOTPConfig = dataprovider.TOTPConfig{}
admin.Filters.TOTPConfig = dataprovider.AdminTOTPConfig{}
admin.Filters.RecoveryCodes = nil
err = render.DecodeJSON(r.Body, &admin)
if err != nil {

View file

@ -5,40 +5,31 @@ import (
"net/http"
"strconv"
"github.com/sftpgo/sdk/plugin/eventsearcher"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
type commonEventSearchParams struct {
StartTimestamp int64
EndTimestamp int64
Actions []string
Username string
IP string
InstanceIDs []string
ExcludeIDs []string
Limit int
Order int
}
func (c *commonEventSearchParams) fromRequest(r *http.Request) error {
func getCommonSearchParamsFromRequest(r *http.Request) (eventsearcher.CommonSearchParams, error) {
c := eventsearcher.CommonSearchParams{}
c.Limit = 100
if _, ok := r.URL.Query()["limit"]; ok {
limit, err := strconv.Atoi(r.URL.Query().Get("limit"))
if err != nil {
return util.NewValidationError(fmt.Sprintf("invalid limit: %v", err))
return c, util.NewValidationError(fmt.Sprintf("invalid limit: %v", err))
}
if limit < 1 || limit > 1000 {
return util.NewValidationError(fmt.Sprintf("limit is out of the 1-1000 range: %v", limit))
return c, util.NewValidationError(fmt.Sprintf("limit is out of the 1-1000 range: %v", limit))
}
c.Limit = limit
}
if _, ok := r.URL.Query()["order"]; ok {
order := r.URL.Query().Get("order")
if order != dataprovider.OrderASC && order != dataprovider.OrderDESC {
return util.NewValidationError(fmt.Sprintf("invalid order %#v", order))
return c, util.NewValidationError(fmt.Sprintf("invalid order %#v", order))
}
if order == dataprovider.OrderASC {
c.Order = 1
@ -47,14 +38,14 @@ func (c *commonEventSearchParams) fromRequest(r *http.Request) error {
if _, ok := r.URL.Query()["start_timestamp"]; ok {
ts, err := strconv.ParseInt(r.URL.Query().Get("start_timestamp"), 10, 64)
if err != nil {
return util.NewValidationError(fmt.Sprintf("invalid start_timestamp: %v", err))
return c, util.NewValidationError(fmt.Sprintf("invalid start_timestamp: %v", err))
}
c.StartTimestamp = ts
}
if _, ok := r.URL.Query()["end_timestamp"]; ok {
ts, err := strconv.ParseInt(r.URL.Query().Get("end_timestamp"), 10, 64)
if err != nil {
return util.NewValidationError(fmt.Sprintf("invalid end_timestamp: %v", err))
return c, util.NewValidationError(fmt.Sprintf("invalid end_timestamp: %v", err))
}
c.EndTimestamp = ts
}
@ -64,64 +55,64 @@ func (c *commonEventSearchParams) fromRequest(r *http.Request) error {
c.InstanceIDs = getCommaSeparatedQueryParam(r, "instance_ids")
c.ExcludeIDs = getCommaSeparatedQueryParam(r, "exclude_ids")
return nil
return c, nil
}
type fsEventSearchParams struct {
commonEventSearchParams
SSHCmd string
Protocols []string
Statuses []int32
}
func (s *fsEventSearchParams) fromRequest(r *http.Request) error {
if err := s.commonEventSearchParams.fromRequest(r); err != nil {
return err
func getFsSearchParamsFromRequest(r *http.Request) (eventsearcher.FsEventSearch, error) {
var err error
s := eventsearcher.FsEventSearch{}
s.CommonSearchParams, err = getCommonSearchParamsFromRequest(r)
if err != nil {
return s, err
}
s.FsProvider = -1
if _, ok := r.URL.Query()["fs_provider"]; ok {
provider := r.URL.Query().Get("fs_provider")
val, err := strconv.Atoi(provider)
if err != nil {
return s, util.NewValidationError(fmt.Sprintf("invalid fs_provider: %v", provider))
}
s.FsProvider = val
}
s.IP = r.URL.Query().Get("ip")
s.SSHCmd = r.URL.Query().Get("ssh_cmd")
s.Bucket = r.URL.Query().Get("bucket")
s.Endpoint = r.URL.Query().Get("endpoint")
s.Protocols = getCommaSeparatedQueryParam(r, "protocols")
statuses := getCommaSeparatedQueryParam(r, "statuses")
for _, status := range statuses {
val, err := strconv.Atoi(status)
if err != nil {
return util.NewValidationError(fmt.Sprintf("invalid status: %v", status))
return s, util.NewValidationError(fmt.Sprintf("invalid status: %v", status))
}
s.Statuses = append(s.Statuses, int32(val))
}
return nil
return s, nil
}
type providerEventSearchParams struct {
commonEventSearchParams
ObjectName string
ObjectTypes []string
}
func (s *providerEventSearchParams) fromRequest(r *http.Request) error {
if err := s.commonEventSearchParams.fromRequest(r); err != nil {
return err
func getProviderSearchParamsFromRequest(r *http.Request) (eventsearcher.ProviderEventSearch, error) {
var err error
s := eventsearcher.ProviderEventSearch{}
s.CommonSearchParams, err = getCommonSearchParamsFromRequest(r)
if err != nil {
return s, err
}
s.ObjectName = r.URL.Query().Get("object_name")
s.ObjectTypes = getCommaSeparatedQueryParam(r, "object_types")
return nil
return s, nil
}
func searchFsEvents(w http.ResponseWriter, r *http.Request) {
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
params := fsEventSearchParams{}
err := params.fromRequest(r)
filters, err := getFsSearchParamsFromRequest(r)
if err != nil {
sendAPIResponse(w, r, err, "", getRespStatus(err))
return
}
data, _, _, err := plugin.Handler.SearchFsEvents(params.StartTimestamp, params.EndTimestamp, params.Username,
params.IP, params.SSHCmd, params.Actions, params.Protocols, params.InstanceIDs, params.ExcludeIDs,
params.Statuses, params.Limit, params.Order)
data, _, _, err := plugin.Handler.SearchFsEvents(&filters)
if err != nil {
sendAPIResponse(w, r, err, "", getRespStatus(err))
return
@ -134,16 +125,13 @@ func searchFsEvents(w http.ResponseWriter, r *http.Request) {
func searchProviderEvents(w http.ResponseWriter, r *http.Request) {
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
params := providerEventSearchParams{}
err := params.fromRequest(r)
filters, err := getProviderSearchParamsFromRequest(r)
if err != nil {
sendAPIResponse(w, r, err, "", getRespStatus(err))
return
}
data, _, _, err := plugin.Handler.SearchProviderEvents(params.StartTimestamp, params.EndTimestamp, params.Username,
params.IP, params.ObjectName, params.Limit, params.Order, params.Actions, params.ObjectTypes, params.InstanceIDs,
params.ExcludeIDs)
data, _, _, err := plugin.Handler.SearchProviderEvents(&filters)
if err != nil {
sendAPIResponse(w, r, err, "", getRespStatus(err))
return


@ -10,7 +10,6 @@ import (
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
)
@ -81,10 +80,10 @@ func saveTOTPConfig(w http.ResponseWriter, r *http.Request) {
sendAPIResponse(w, r, err, "Invalid token claims", http.StatusBadRequest)
return
}
recoveryCodes := make([]sdk.RecoveryCode, 0, 12)
recoveryCodes := make([]dataprovider.RecoveryCode, 0, 12)
for i := 0; i < 12; i++ {
code := getNewRecoveryCode()
recoveryCodes = append(recoveryCodes, sdk.RecoveryCode{Secret: kms.NewPlainSecret(code)})
recoveryCodes = append(recoveryCodes, dataprovider.RecoveryCode{Secret: kms.NewPlainSecret(code)})
}
if claims.hasUserAudience() {
if err := saveUserTOTPConfig(claims.Username, r, recoveryCodes); err != nil {
@ -125,7 +124,7 @@ func getRecoveryCodes(w http.ResponseWriter, r *http.Request) {
return
}
recoveryCodes := make([]recoveryCode, 0, 12)
var accountRecoveryCodes []sdk.RecoveryCode
var accountRecoveryCodes []dataprovider.RecoveryCode
if claims.hasUserAudience() {
user, err := dataprovider.UserExists(claims.Username)
if err != nil {
@ -163,11 +162,11 @@ func generateRecoveryCodes(w http.ResponseWriter, r *http.Request) {
return
}
recoveryCodes := make([]string, 0, 12)
accountRecoveryCodes := make([]sdk.RecoveryCode, 0, 12)
accountRecoveryCodes := make([]dataprovider.RecoveryCode, 0, 12)
for i := 0; i < 12; i++ {
code := getNewRecoveryCode()
recoveryCodes = append(recoveryCodes, code)
accountRecoveryCodes = append(accountRecoveryCodes, sdk.RecoveryCode{Secret: kms.NewPlainSecret(code)})
accountRecoveryCodes = append(accountRecoveryCodes, dataprovider.RecoveryCode{Secret: kms.NewPlainSecret(code)})
}
if claims.hasUserAudience() {
user, err := dataprovider.UserExists(claims.Username)
@ -200,7 +199,7 @@ func getNewRecoveryCode() string {
return fmt.Sprintf("RC-%v", strings.ToUpper(util.GenerateUniqueID()))
}
func saveUserTOTPConfig(username string, r *http.Request, recoveryCodes []sdk.RecoveryCode) error {
func saveUserTOTPConfig(username string, r *http.Request, recoveryCodes []dataprovider.RecoveryCode) error {
user, err := dataprovider.UserExists(username)
if err != nil {
return err
@ -220,7 +219,7 @@ func saveUserTOTPConfig(username string, r *http.Request, recoveryCodes []sdk.Re
return dataprovider.UpdateUser(&user, dataprovider.ActionExecutorSelf, util.GetIPFromRemoteAddress(r.RemoteAddr))
}
func saveAdminTOTPConfig(username string, r *http.Request, recoveryCodes []sdk.RecoveryCode) error {
func saveAdminTOTPConfig(username string, r *http.Request, recoveryCodes []dataprovider.RecoveryCode) error {
admin, err := dataprovider.AdminExists(username)
if err != nil {
return err


@ -146,13 +146,13 @@ func downloadFromShare(w http.ResponseWriter, r *http.Request) {
compress := true
var info os.FileInfo
if len(share.Paths) > 0 && r.URL.Query().Get("compress") == "false" {
info, err = connection.Stat(share.Paths[0], 0)
if len(share.Paths) == 1 && r.URL.Query().Get("compress") == "false" {
info, err = connection.Stat(share.Paths[0], 1)
if err != nil {
sendAPIResponse(w, r, err, "", getRespStatus(err))
return
}
if !info.IsDir() {
if info.Mode().IsRegular() {
compress = false
}
}


@ -7,11 +7,11 @@ import (
"strconv"
"github.com/go-chi/render"
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
@ -89,7 +89,7 @@ func disableUser2FA(w http.ResponseWriter, r *http.Request) {
return
}
user.Filters.RecoveryCodes = nil
user.Filters.TOTPConfig = sdk.TOTPConfig{
user.Filters.TOTPConfig = dataprovider.UserTOTPConfig{
Enabled: false,
}
if err := dataprovider.UpdateUser(&user, claims.Username, util.GetIPFromRemoteAddress(r.RemoteAddr)); err != nil {
@ -140,7 +140,7 @@ func updateUser(w http.ResponseWriter, r *http.Request) {
user.FsConfig.GCSConfig = vfs.GCSFsConfig{}
user.FsConfig.CryptConfig = vfs.CryptFsConfig{}
user.FsConfig.SFTPConfig = vfs.SFTPFsConfig{}
user.Filters.TOTPConfig = sdk.TOTPConfig{}
user.Filters.TOTPConfig = dataprovider.UserTOTPConfig{}
user.Filters.RecoveryCodes = nil
user.VirtualFolders = nil
err = render.DecodeJSON(r.Body, &user)


@ -24,7 +24,7 @@ import (
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)


@ -222,7 +222,9 @@ var (
webStaticFilesPath string
webOpenAPIPath string
// max upload size for http clients, 1GB by default
maxUploadFileSize = int64(1048576000)
maxUploadFileSize = int64(1048576000)
installationCode string
installationCodeHint string
)
func init() {
@ -353,6 +355,18 @@ type ServicesStatus struct {
MFA mfa.ServiceStatus `json:"mfa"`
}
// SetupConfig defines the configuration parameters for the initial web admin setup
type SetupConfig struct {
// Installation code to require when creating the first admin account.
// As with the other configuration options, this value is read at SFTPGo startup and not at runtime,
// even if set using an environment variable.
// This is not a license key or similar: its purpose is to prevent anyone who can access
// the initial setup screen from creating an admin user
InstallationCode string `json:"installation_code" mapstructure:"installation_code"`
// Description for the installation code input field
InstallationCodeHint string `json:"installation_code_hint" mapstructure:"installation_code_hint"`
}
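A minimal sketch of how the new setup section might be populated when building the HTTP server configuration programmatically; the httpd import path and the placeholder code value are assumptions, while Conf, SetupConfig and the field names come from the definitions above.

package main

import (
	"fmt"

	"github.com/drakkan/sftpgo/v2/httpd"
)

func main() {
	// The installation code is read once at SFTPGo startup together with the
	// rest of the httpd configuration; "your-secret-code" is a placeholder.
	cfg := httpd.Conf{
		Setup: httpd.SetupConfig{
			InstallationCode:     "your-secret-code",
			InstallationCodeHint: "Installation code",
		},
	}
	fmt.Printf("setup config: %+v\n", cfg.Setup)
}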
// CorsConfig defines the CORS configuration
type CorsConfig struct {
AllowedOrigins []string `json:"allowed_origins" mapstructure:"allowed_origins"`
@ -401,6 +415,8 @@ type Conf struct {
MaxUploadFileSize int64 `json:"max_upload_file_size" mapstructure:"max_upload_file_size"`
// CORS configuration
Cors CorsConfig `json:"cors" mapstructure:"cors"`
// Initial setup configuration
Setup SetupConfig `json:"setup" mapstructure:"setup"`
}
type apiResponse struct {
@ -447,13 +463,15 @@ func (c *Conf) checkRequiredDirs(staticFilesPath, templatesPath string) error {
func (c *Conf) getRedacted() Conf {
conf := *c
conf.SigningPassphrase = "[redacted]"
redactedSecret := "[redacted]"
conf.SigningPassphrase = redactedSecret
conf.Setup.InstallationCode = redactedSecret
return conf
}
// Initialize configures and starts the HTTP server
func (c *Conf) Initialize(configDir string) error {
logger.Debug(logSender, "", "initializing HTTP server with config %+v", c.getRedacted())
logger.Info(logSender, "", "initializing HTTP server with config %+v", c.getRedacted())
backupsPath = getConfigPath(c.BackupsPath, configDir)
staticFilesPath := getConfigPath(c.StaticFilesPath, configDir)
templatesPath := getConfigPath(c.TemplatesPath, configDir)
@ -515,6 +533,8 @@ func (c *Conf) Initialize(configDir string) error {
}
maxUploadFileSize = c.MaxUploadFileSize
installationCode = c.Setup.InstallationCode
installationCodeHint = c.Setup.InstallationCodeHint
startCleanupTicker(tokenDuration / 2)
return <-exitChannel
}

File diff suppressed because it is too large


@ -22,19 +22,20 @@ import (
"time"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/go-chi/jwtauth/v5"
"github.com/klauspost/compress/zip"
"github.com/lestrrat-go/jwx/jwa"
"github.com/lestrrat-go/jwx/jwt"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
@ -299,6 +300,21 @@ func TestShouldBind(t *testing.T) {
}
}
func TestRedactedConf(t *testing.T) {
c := Conf{
SigningPassphrase: "passphrase",
Setup: SetupConfig{
InstallationCode: "123",
},
}
redactedField := "[redacted]"
redactedConf := c.getRedacted()
assert.Equal(t, redactedField, redactedConf.SigningPassphrase)
assert.Equal(t, redactedField, redactedConf.Setup.InstallationCode)
assert.NotEqual(t, c.SigningPassphrase, redactedConf.SigningPassphrase)
assert.NotEqual(t, c.Setup.InstallationCode, redactedConf.Setup.InstallationCode)
}
func TestGetRespStatus(t *testing.T) {
var err error
err = util.NewMethodDisabledError("")
@ -507,6 +523,16 @@ func TestInvalidToken(t *testing.T) {
assert.Equal(t, http.StatusBadRequest, rr.Code)
assert.Contains(t, rr.Body.String(), "invalid token claims")
rr = httptest.NewRecorder()
handleWebTemplateFolderPost(rr, req)
assert.Equal(t, http.StatusBadRequest, rr.Code)
assert.Contains(t, rr.Body.String(), "invalid token claims")
rr = httptest.NewRecorder()
handleWebTemplateUserPost(rr, req)
assert.Equal(t, http.StatusBadRequest, rr.Code)
assert.Contains(t, rr.Body.String(), "invalid token claims")
rr = httptest.NewRecorder()
updateFolder(rr, req)
assert.Equal(t, http.StatusBadRequest, rr.Code)
@ -1529,7 +1555,7 @@ func TestRecoverer(t *testing.T) {
assert.Equal(t, http.StatusInternalServerError, rr.Code, rr.Body.String())
server.router = chi.NewRouter()
server.router.Use(recoverer)
server.router.Use(middleware.Recoverer)
server.router.Get(recoveryPath, func(w http.ResponseWriter, r *http.Request) {
panic("panic")
})
@ -1772,10 +1798,10 @@ func TestConnection(t *testing.T) {
FsConfig: vfs.Filesystem{
Provider: sdk.GCSFilesystemProvider,
GCSConfig: vfs.GCSFsConfig{
GCSFsConfig: sdk.GCSFsConfig{
Bucket: "test_bucket_name",
Credentials: kms.NewPlainSecret("invalid JSON payload"),
BaseGCSFsConfig: sdk.BaseGCSFsConfig{
Bucket: "test_bucket_name",
},
Credentials: kms.NewPlainSecret("invalid JSON payload"),
},
},
}
@ -1814,12 +1840,12 @@ func TestGetFileWriterErrors(t *testing.T) {
user.FsConfig.Provider = sdk.S3FilesystemProvider
user.FsConfig.S3Config = vfs.S3FsConfig{
S3FsConfig: sdk.S3FsConfig{
Bucket: "b",
Region: "us-west-1",
AccessKey: "key",
AccessSecret: kms.NewPlainSecret("secret"),
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: "b",
Region: "us-west-1",
AccessKey: "key",
},
AccessSecret: kms.NewPlainSecret("secret"),
}
connection = &Connection{
BaseConnection: common.NewBaseConnection(xid.New().String(), common.ProtocolHTTP, "", "", user),
@ -2173,3 +2199,78 @@ func TestMetadataAPI(t *testing.T) {
err = doMetadataCheck(user)
assert.Error(t, err)
}
func TestWebAdminSetupWithInstallCode(t *testing.T) {
installationCode = "1234"
// delete all the admins
admins, err := dataprovider.GetAdmins(100, 0, dataprovider.OrderASC)
assert.NoError(t, err)
for _, admin := range admins {
err = dataprovider.DeleteAdmin(admin.Username, "", "")
assert.NoError(t, err)
}
// close the provider and initialize it without creating the default admin
providerConf := dataprovider.GetProviderConfig()
providerConf.CreateDefaultAdmin = false
err = dataprovider.Close()
assert.NoError(t, err)
err = dataprovider.Initialize(providerConf, "..", true)
assert.NoError(t, err)
server := httpdServer{
enableWebAdmin: true,
enableWebClient: true,
}
server.initializeRouter()
rr := httptest.NewRecorder()
r, err := http.NewRequest(http.MethodGet, webAdminSetupPath, nil)
assert.NoError(t, err)
server.router.ServeHTTP(rr, r)
assert.Equal(t, http.StatusOK, rr.Code)
for _, webURL := range []string{"/", webBasePath, webBaseAdminPath, webLoginPath, webClientLoginPath} {
rr = httptest.NewRecorder()
r, err = http.NewRequest(http.MethodGet, webURL, nil)
assert.NoError(t, err)
server.router.ServeHTTP(rr, r)
assert.Equal(t, http.StatusFound, rr.Code)
assert.Equal(t, webAdminSetupPath, rr.Header().Get("Location"))
}
defaultAdminUsername := "admin"
form := make(url.Values)
csrfToken := createCSRFToken()
form.Set("_form_token", csrfToken)
form.Set("install_code", "12345")
form.Set("username", defaultAdminUsername)
form.Set("password", "password")
form.Set("confirm_password", "password")
rr = httptest.NewRecorder()
r, err = http.NewRequest(http.MethodPost, webAdminSetupPath, bytes.NewBuffer([]byte(form.Encode())))
assert.NoError(t, err)
r.Header.Set("Content-Type", "application/x-www-form-urlencoded")
server.router.ServeHTTP(rr, r)
assert.Equal(t, http.StatusOK, rr.Code)
assert.Contains(t, rr.Body.String(), "Installation code mismatch")
_, err = dataprovider.AdminExists(defaultAdminUsername)
assert.Error(t, err)
form.Set("install_code", "1234")
rr = httptest.NewRecorder()
r, err = http.NewRequest(http.MethodPost, webAdminSetupPath, bytes.NewBuffer([]byte(form.Encode())))
assert.NoError(t, err)
r.Header.Set("Content-Type", "application/x-www-form-urlencoded")
server.router.ServeHTTP(rr, r)
assert.Equal(t, http.StatusFound, rr.Code)
_, err = dataprovider.AdminExists(defaultAdminUsername)
assert.NoError(t, err)
err = dataprovider.Close()
assert.NoError(t, err)
providerConf.CreateDefaultAdmin = true
err = dataprovider.Initialize(providerConf, "..", true)
assert.NoError(t, err)
installationCode = ""
}


@ -4,19 +4,17 @@ import (
"errors"
"fmt"
"net/http"
"runtime/debug"
"strings"
"time"
"github.com/go-chi/chi/v5/middleware"
"github.com/go-chi/jwtauth/v5"
"github.com/lestrrat-go/jwx/jwt"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
)
@ -332,31 +330,6 @@ func forbidAPIKeyAuthentication(next http.Handler) http.Handler {
})
}
func recoverer(next http.Handler) http.Handler {
fn := func(w http.ResponseWriter, r *http.Request) {
defer func() {
if rvr := recover(); rvr != nil {
if rvr == http.ErrAbortHandler {
panic(rvr)
}
logEntry := middleware.GetLogEntry(r)
if logEntry != nil {
logEntry.Panic(rvr, debug.Stack())
} else {
middleware.PrintPrettyStack(rvr)
}
w.WriteHeader(http.StatusInternalServerError)
}
}()
next.ServeHTTP(w, r)
}
return http.HandlerFunc(fn)
}
func authenticateAdminWithAPIKey(username, keyID string, tokenAuth *jwtauth.JWTAuth, r *http.Request) error {
if username == "" {
return errors.New("the provided key is not associated with any admin and no username was provided")


@ -19,12 +19,12 @@ import (
"github.com/lestrrat-go/jwx/jwa"
"github.com/rs/cors"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
@ -536,6 +536,11 @@ func (s *httpdServer) handleWebAdminSetupPost(w http.ResponseWriter, r *http.Req
username := r.Form.Get("username")
password := r.Form.Get("password")
confirmPassword := r.Form.Get("confirm_password")
installCode := r.Form.Get("install_code")
if installationCode != "" && installCode != installationCode {
renderAdminSetupPage(w, r, username, fmt.Sprintf("%v mismatch", installationCodeHint))
return
}
if username == "" {
renderAdminSetupPage(w, r, username, "Please set a username")
return
@ -969,7 +974,7 @@ func (s *httpdServer) initializeRouter() {
s.router.Use(middleware.RequestID)
s.router.Use(s.checkConnection)
s.router.Use(logger.NewStructuredLogger(logger.GetLogger()))
s.router.Use(recoverer)
s.router.Use(middleware.Recoverer)
if s.cors.Enabled {
c := cors.New(cors.Options{
AllowedOrigins: s.cors.AllowedOrigins,


@ -7,18 +7,20 @@ import (
"io"
"net/http"
"net/url"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/go-chi/render"
"github.com/sftpgo/sdk"
sdkkms "github.com/sftpgo/sdk/kms"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
@ -141,6 +143,13 @@ type statusPage struct {
Status ServicesStatus
}
type fsWrapper struct {
vfs.Filesystem
IsUserPage bool
HasUsersBaseDir bool
DirPath string
}
type userPage struct {
basePage
User *dataprovider.User
@ -154,6 +163,8 @@ type userPage struct {
RedactedSecret string
Mode userPageMode
VirtualFolders []vfs.BaseVirtualFolder
CanImpersonate bool
FsWrapper fsWrapper
}
type adminPage struct {
@ -179,7 +190,7 @@ type changePasswordPage struct {
type mfaPage struct {
basePage
TOTPConfigs []string
TOTPConfig dataprovider.TOTPConfig
TOTPConfig dataprovider.AdminTOTPConfig
GenerateTOTPURL string
ValidateTOTPURL string
SaveTOTPURL string
@ -200,15 +211,18 @@ type defenderHostsPage struct {
type setupPage struct {
basePage
Username string
Error string
Username string
HasInstallationCode bool
InstallationCodeHint string
Error string
}
type folderPage struct {
basePage
Folder vfs.BaseVirtualFolder
Error string
Mode folderPageMode
Folder vfs.BaseVirtualFolder
Error string
Mode folderPageMode
FsWrapper fsWrapper
}
type messagePage struct {
@ -306,7 +320,12 @@ func loadAdminTemplates(templatesPath string) {
}
fsBaseTpl := template.New("fsBaseTemplate").Funcs(template.FuncMap{
"ListFSProviders": sdk.ListProviders,
"ListFSProviders": func() []sdk.FilesystemProvider {
return []sdk.FilesystemProvider{sdk.LocalFilesystemProvider, sdk.CryptedFilesystemProvider,
sdk.S3FilesystemProvider, sdk.GCSFilesystemProvider,
sdk.AzureBlobFilesystemProvider, sdk.SFTPFilesystemProvider,
}
},
})
usersTmpl := util.LoadTemplate(nil, usersPaths...)
userTmpl := util.LoadTemplate(fsBaseTpl, userPaths...)
@ -534,9 +553,11 @@ func renderMaintenancePage(w http.ResponseWriter, r *http.Request, error string)
func renderAdminSetupPage(w http.ResponseWriter, r *http.Request, username, error string) {
data := setupPage{
basePage: getBasePageData(pageSetupTitle, webAdminSetupPath, r),
Username: username,
Error: error,
basePage: getBasePageData(pageSetupTitle, webAdminSetupPath, r),
Username: username,
HasInstallationCode: installationCode != "",
InstallationCodeHint: installationCodeHint,
Error: error,
}
renderAdminTemplate(w, templateSetup, data)
@ -593,6 +614,13 @@ func renderUserPage(w http.ResponseWriter, r *http.Request, user *dataprovider.U
WebClientOptions: sdk.WebClientOptions,
RootDirPerms: user.GetPermissionsForPath("/"),
VirtualFolders: folders,
CanImpersonate: os.Getuid() == 0,
FsWrapper: fsWrapper{
Filesystem: user.FsConfig,
IsUserPage: true,
HasUsersBaseDir: dataprovider.HasUsersBaseDir(),
DirPath: user.HomeDir,
},
}
renderAdminTemplate(w, templateUser, data)
}
@ -618,6 +646,12 @@ func renderFolderPage(w http.ResponseWriter, r *http.Request, folder vfs.BaseVir
Error: error,
Folder: folder,
Mode: mode,
FsWrapper: fsWrapper{
Filesystem: folder.FsConfig,
IsUserPage: false,
HasUsersBaseDir: false,
DirPath: folder.MappedPath,
},
}
renderAdminTemplate(w, templateFolder, data)
}
@ -821,8 +855,8 @@ func getFilePatternsFromPostField(r *http.Request) []sdk.PatternsFilter {
return result
}
func getFiltersFromUserPostFields(r *http.Request) (sdk.UserFilters, error) {
var filters sdk.UserFilters
func getFiltersFromUserPostFields(r *http.Request) (sdk.BaseUserFilters, error) {
var filters sdk.BaseUserFilters
bwLimits, err := getBandwidthLimitsFromPostFields(r)
if err != nil {
return filters, err
@ -853,7 +887,7 @@ func getFiltersFromUserPostFields(r *http.Request) (sdk.UserFilters, error) {
func getSecretFromFormField(r *http.Request, field string) *kms.Secret {
secret := kms.NewPlainSecret(r.Form.Get(field))
if strings.TrimSpace(secret.GetPayload()) == redactedSecret {
secret.SetStatus(kms.SecretStatusRedacted)
secret.SetStatus(sdkkms.SecretStatusRedacted)
}
if strings.TrimSpace(secret.GetPayload()) == "" {
secret.SetStatus("")
@ -1204,10 +1238,12 @@ func getUserFromPostFields(r *http.Request) (dataprovider.User, error) {
DownloadBandwidth: bandwidthDL,
Status: status,
ExpirationDate: expirationDateMillis,
Filters: filters,
AdditionalInfo: r.Form.Get("additional_info"),
Description: r.Form.Get("description"),
},
Filters: dataprovider.UserFilters{
BaseUserFilters: filters,
},
VirtualFolders: getVirtualFoldersFromPostFields(r),
FsConfig: fsConfig,
}
@ -1473,7 +1509,7 @@ func handleWebAddAdminPost(w http.ResponseWriter, r *http.Request) {
renderForbiddenPage(w, r, err.Error())
return
}
err = dataprovider.AddAdmin(&admin, claims.Username, util.GetIPFromRemoteAddress(r.Method))
err = dataprovider.AddAdmin(&admin, claims.Username, util.GetIPFromRemoteAddress(r.RemoteAddr))
if err != nil {
renderAddUpdateAdminPage(w, r, &admin, err.Error(), true)
return
@ -1512,7 +1548,7 @@ func handleWebUpdateAdminPost(w http.ResponseWriter, r *http.Request) {
updatedAdmin.Filters.RecoveryCodes = admin.Filters.RecoveryCodes
claims, err := getTokenClaims(r)
if err != nil || claims.Username == "" {
renderAddUpdateAdminPage(w, r, &updatedAdmin, fmt.Sprintf("Invalid token claims: %v", err), false)
renderAddUpdateAdminPage(w, r, &updatedAdmin, "Invalid token claims", false)
return
}
if username == claims.Username {
@ -1578,6 +1614,7 @@ func handleWebTemplateFolderGet(w http.ResponseWriter, r *http.Request) {
name := r.URL.Query().Get("from")
folder, err := dataprovider.GetFolderByName(name)
if err == nil {
folder.FsConfig.SetEmptySecrets()
renderFolderPage(w, r, folder, folderPageModeTemplate, "")
} else if _, ok := err.(*util.RecordNotFoundError); ok {
renderNotFoundPage(w, r, err)
@ -1592,8 +1629,13 @@ func handleWebTemplateFolderGet(w http.ResponseWriter, r *http.Request) {
func handleWebTemplateFolderPost(w http.ResponseWriter, r *http.Request) {
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
claims, err := getTokenClaims(r)
if err != nil || claims.Username == "" {
renderBadRequestPage(w, r, errors.New("invalid token claims"))
return
}
templateFolder := vfs.BaseVirtualFolder{}
err := r.ParseMultipartForm(maxRequestSize)
err = r.ParseMultipartForm(maxRequestSize)
if err != nil {
renderMessagePage(w, r, "Error parsing folders fields", "", http.StatusBadRequest, err, "")
return
@ -1621,19 +1663,30 @@ func handleWebTemplateFolderPost(w http.ResponseWriter, r *http.Request) {
for _, tmpl := range foldersFields {
f := getFolderFromTemplate(templateFolder, tmpl)
if err := dataprovider.ValidateFolder(&f); err != nil {
renderMessagePage(w, r, fmt.Sprintf("Error validating folder %#v", f.Name), "", http.StatusBadRequest, err, "")
renderMessagePage(w, r, "Folder validation error", fmt.Sprintf("Error validating folder %#v", f.Name),
http.StatusBadRequest, err, "")
return
}
dump.Folders = append(dump.Folders, f)
}
if len(dump.Folders) == 0 {
renderMessagePage(w, r, "No folders to export", "No valid folders found, export is not possible", http.StatusBadRequest, nil, "")
renderMessagePage(w, r, "No folders defined", "No valid folders defined, unable to complete the requested action",
http.StatusBadRequest, nil, "")
return
}
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"sftpgo-%v-folders-from-template.json\"", len(dump.Folders)))
render.JSON(w, r, dump)
if r.Form.Get("form_action") == "export_from_template" {
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"sftpgo-%v-folders-from-template.json\"",
len(dump.Folders)))
render.JSON(w, r, dump)
return
}
if err = RestoreFolders(dump.Folders, "", 1, 0, claims.Username, util.GetIPFromRemoteAddress(r.RemoteAddr)); err != nil {
renderMessagePage(w, r, "Unable to save folders", "Cannot save the defined folders:",
getRespStatus(err), err, "")
return
}
http.Redirect(w, r, webFoldersPath, http.StatusSeeOther)
}
func handleWebTemplateUserGet(w http.ResponseWriter, r *http.Request) {
@ -1643,6 +1696,7 @@ func handleWebTemplateUserGet(w http.ResponseWriter, r *http.Request) {
user, err := dataprovider.UserExists(username)
if err == nil {
user.SetEmptySecrets()
user.PublicKeys = nil
user.Email = ""
user.Description = ""
renderUserPage(w, r, &user, userPageModeTemplate, "")
@ -1652,13 +1706,23 @@ func handleWebTemplateUserGet(w http.ResponseWriter, r *http.Request) {
renderInternalServerErrorPage(w, r, err)
}
} else {
user := dataprovider.User{BaseUser: sdk.BaseUser{Status: 1}}
user := dataprovider.User{BaseUser: sdk.BaseUser{
Status: 1,
Permissions: map[string][]string{
"/": {dataprovider.PermAny},
},
}}
renderUserPage(w, r, &user, userPageModeTemplate, "")
}
}
func handleWebTemplateUserPost(w http.ResponseWriter, r *http.Request) {
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
claims, err := getTokenClaims(r)
if err != nil || claims.Username == "" {
renderBadRequestPage(w, r, errors.New("invalid token claims"))
return
}
templateUser, err := getUserFromPostFields(r)
if err != nil {
renderMessagePage(w, r, "Error parsing user fields", "", http.StatusBadRequest, err, "")
@ -1676,7 +1740,8 @@ func handleWebTemplateUserPost(w http.ResponseWriter, r *http.Request) {
for _, tmpl := range userTmplFields {
u := getUserFromTemplate(templateUser, tmpl)
if err := dataprovider.ValidateUser(&u); err != nil {
renderMessagePage(w, r, fmt.Sprintf("Error validating user %#v", u.Username), "", http.StatusBadRequest, err, "")
renderMessagePage(w, r, "User validation error", fmt.Sprintf("Error validating user %#v", u.Username),
http.StatusBadRequest, err, "")
return
}
dump.Users = append(dump.Users, u)
@ -1688,34 +1753,33 @@ func handleWebTemplateUserPost(w http.ResponseWriter, r *http.Request) {
}
if len(dump.Users) == 0 {
renderMessagePage(w, r, "No users to export", "No valid users found, export is not possible", http.StatusBadRequest, nil, "")
renderMessagePage(w, r, "No users defined", "No valid users defined, unable to complete the requested action",
http.StatusBadRequest, nil, "")
return
}
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"sftpgo-%v-users-from-template.json\"", len(dump.Users)))
render.JSON(w, r, dump)
if r.Form.Get("form_action") == "export_from_template" {
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"sftpgo-%v-users-from-template.json\"",
len(dump.Users)))
render.JSON(w, r, dump)
return
}
if err = RestoreUsers(dump.Users, "", 1, 0, claims.Username, util.GetIPFromRemoteAddress(r.RemoteAddr)); err != nil {
renderMessagePage(w, r, "Unable to save users", "Cannot save the defined users:",
getRespStatus(err), err, "")
return
}
http.Redirect(w, r, webUsersPath, http.StatusSeeOther)
}
func handleWebAddUserGet(w http.ResponseWriter, r *http.Request) {
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
if r.URL.Query().Get("clone-from") != "" {
username := r.URL.Query().Get("clone-from")
user, err := dataprovider.UserExists(username)
if err == nil {
user.ID = 0
user.Username = ""
user.Password = ""
user.SetEmptySecrets()
renderUserPage(w, r, &user, userPageModeAdd, "")
} else if _, ok := err.(*util.RecordNotFoundError); ok {
renderNotFoundPage(w, r, err)
} else {
renderInternalServerErrorPage(w, r, err)
}
} else {
user := dataprovider.User{BaseUser: sdk.BaseUser{Status: 1}}
renderUserPage(w, r, &user, userPageModeAdd, "")
}
user := dataprovider.User{BaseUser: sdk.BaseUser{
Status: 1,
Permissions: map[string][]string{
"/": {dataprovider.PermAny},
},
}}
renderUserPage(w, r, &user, userPageModeAdd, "")
}
func handleWebUpdateUserGet(w http.ResponseWriter, r *http.Request) {


@ -18,11 +18,11 @@ import (
"github.com/go-chi/render"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
@ -158,7 +158,7 @@ type changeClientPasswordPage struct {
type clientMFAPage struct {
baseClientPage
TOTPConfigs []string
TOTPConfig sdk.TOTPConfig
TOTPConfig dataprovider.UserTOTPConfig
GenerateTOTPURL string
ValidateTOTPURL string
SaveTOTPURL string


@ -1,17 +1,21 @@
package kms
import (
sdkkms "github.com/sftpgo/sdk/kms"
)
// BaseSecret defines the base struct shared among all the secret providers
type BaseSecret struct {
Status SecretStatus `json:"status,omitempty"`
Payload string `json:"payload,omitempty"`
Key string `json:"key,omitempty"`
AdditionalData string `json:"additional_data,omitempty"`
Status sdkkms.SecretStatus `json:"status,omitempty"`
Payload string `json:"payload,omitempty"`
Key string `json:"key,omitempty"`
AdditionalData string `json:"additional_data,omitempty"`
// 1 means encrypted using a master key
Mode int `json:"mode,omitempty"`
}
// GetStatus returns the secret's status
func (s *BaseSecret) GetStatus() SecretStatus {
func (s *BaseSecret) GetStatus() sdkkms.SecretStatus {
return s.Status
}
@ -46,7 +50,7 @@ func (s *BaseSecret) SetAdditionalData(value string) {
}
// SetStatus sets the secret's status
func (s *BaseSecret) SetStatus(value SecretStatus) {
func (s *BaseSecret) SetStatus(value sdkkms.SecretStatus) {
s.Status = value
}


@ -6,7 +6,14 @@ import (
"crypto/rand"
"crypto/sha256"
"encoding/hex"
"errors"
"io"
sdkkms "github.com/sftpgo/sdk/kms"
)
var (
errMalformedCiphertext = errors.New("malformed ciphertext")
)
type builtinSecret struct {
@ -14,7 +21,7 @@ type builtinSecret struct {
}
func init() {
RegisterSecretProvider(SchemeBuiltin, SecretStatusAES256GCM, newBuiltinSecret)
RegisterSecretProvider(sdkkms.SchemeBuiltin, sdkkms.SecretStatusAES256GCM, newBuiltinSecret)
}
func newBuiltinSecret(base BaseSecret, url, masterKey string) SecretProvider {
@ -28,7 +35,7 @@ func (s *builtinSecret) Name() string {
}
func (s *builtinSecret) IsEncrypted() bool {
return s.Status == SecretStatusAES256GCM
return s.Status == sdkkms.SecretStatusAES256GCM
}
func (s *builtinSecret) deriveKey(key []byte) []byte {
@ -47,7 +54,7 @@ func (s *builtinSecret) Encrypt() error {
return ErrInvalidSecret
}
switch s.Status {
case SecretStatusPlain:
case sdkkms.SecretStatusPlain:
key := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, key); err != nil {
return err
@ -71,7 +78,7 @@ func (s *builtinSecret) Encrypt() error {
ciphertext := gcm.Seal(nonce, nonce, []byte(s.Payload), aad)
s.Key = hex.EncodeToString(key)
s.Payload = hex.EncodeToString(ciphertext)
s.Status = SecretStatusAES256GCM
s.Status = sdkkms.SecretStatusAES256GCM
return nil
default:
return ErrWrongSecretStatus
@ -80,7 +87,7 @@ func (s *builtinSecret) Encrypt() error {
func (s *builtinSecret) Decrypt() error {
switch s.Status {
case SecretStatusAES256GCM:
case sdkkms.SecretStatusAES256GCM:
encrypted, err := hex.DecodeString(s.Payload)
if err != nil {
return err
@ -110,7 +117,7 @@ func (s *builtinSecret) Decrypt() error {
if err != nil {
return err
}
s.Status = SecretStatusPlain
s.Status = sdkkms.SecretStatusPlain
s.Payload = string(plaintext)
s.Key = ""
s.AdditionalData = ""


@ -8,8 +8,9 @@ import (
"strings"
"sync"
sdkkms "github.com/sftpgo/sdk/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// SecretProvider defines the interface for a KMS secrets provider
@ -18,14 +19,14 @@ type SecretProvider interface {
Encrypt() error
Decrypt() error
IsEncrypted() bool
GetStatus() SecretStatus
GetStatus() sdkkms.SecretStatus
GetPayload() string
GetKey() string
GetAdditionalData() string
GetMode() int
SetKey(string)
SetAdditionalData(string)
SetStatus(SecretStatus)
SetStatus(sdkkms.SecretStatus)
Clone() SecretProvider
}
@ -33,44 +34,6 @@ const (
logSender = "kms"
)
// SecretStatus defines the statuses of a Secret object
type SecretStatus = string
const (
// SecretStatusPlain means the secret is in plain text and must be encrypted
SecretStatusPlain SecretStatus = "Plain"
// SecretStatusAES256GCM means the secret is encrypted using AES-256-GCM
SecretStatusAES256GCM SecretStatus = "AES-256-GCM"
// SecretStatusSecretBox means the secret is encrypted using a locally provided symmetric key
SecretStatusSecretBox SecretStatus = "Secretbox"
// SecretStatusGCP means we use keys from Google Cloud Platforms Key Management Service
// (GCP KMS) to keep information secret
SecretStatusGCP SecretStatus = "GCP"
// SecretStatusAWS means we use customer master keys from Amazon Web Services
// Key Management Service (AWS KMS) to keep information secret
SecretStatusAWS SecretStatus = "AWS"
// SecretStatusVaultTransit means we use the transit secrets engine in Vault
// to keep information secret
SecretStatusVaultTransit SecretStatus = "VaultTransit"
// SecretStatusAzureKeyVault means we use Azure KeyVault to keep information secret
SecretStatusAzureKeyVault SecretStatus = "AzureKeyVault"
// SecretStatusRedacted means the secret is redacted
SecretStatusRedacted SecretStatus = "Redacted"
)
// Scheme defines the supported URL scheme
type Scheme = string
// supported URL schemes
const (
SchemeLocal Scheme = "local"
SchemeBuiltin Scheme = "builtin"
SchemeAWS Scheme = "awskms"
SchemeGCP Scheme = "gcpkms"
SchemeVaultTransit Scheme = "hashivault"
SchemeAzureKeyVault Scheme = "azurekeyvault"
)
// Configuration defines the KMS configuration
type Configuration struct {
Secrets Secrets `json:"secrets" mapstructure:"secrets"`
@ -85,7 +48,7 @@ type Secrets struct {
}
type registeredSecretProvider struct {
encryptedStatus SecretStatus
encryptedStatus sdkkms.SecretStatus
newFn func(base BaseSecret, url, masterKey string) SecretProvider
}
@ -94,16 +57,17 @@ var (
// for the request operation
ErrWrongSecretStatus = errors.New("wrong secret status")
// ErrInvalidSecret defines the error to return if a secret is not valid
ErrInvalidSecret = errors.New("invalid secret")
errMalformedCiphertext = errors.New("malformed ciphertext")
validSecretStatuses = []string{SecretStatusPlain, SecretStatusAES256GCM, SecretStatusSecretBox,
SecretStatusVaultTransit, SecretStatusAWS, SecretStatusGCP, SecretStatusRedacted}
ErrInvalidSecret = errors.New("invalid secret")
validSecretStatuses = []string{sdkkms.SecretStatusPlain, sdkkms.SecretStatusAES256GCM, sdkkms.SecretStatusSecretBox,
sdkkms.SecretStatusVaultTransit, sdkkms.SecretStatusAWS, sdkkms.SecretStatusGCP, sdkkms.SecretStatusRedacted}
config Configuration
secretProviders = make(map[string]registeredSecretProvider)
)
// RegisterSecretProvider registers a new secret provider
func RegisterSecretProvider(scheme string, encryptedStatus SecretStatus, fn func(base BaseSecret, url, masterKey string) SecretProvider) {
func RegisterSecretProvider(scheme string, encryptedStatus sdkkms.SecretStatus,
fn func(base BaseSecret, url, masterKey string) SecretProvider,
) {
secretProviders[scheme] = registeredSecretProvider{
encryptedStatus: encryptedStatus,
newFn: fn,
@ -111,7 +75,7 @@ func RegisterSecretProvider(scheme string, encryptedStatus SecretStatus, fn func
}
// NewSecret builds a new Secret using the provided arguments
func NewSecret(status SecretStatus, payload, key, data string) *Secret {
func NewSecret(status sdkkms.SecretStatus, payload, key, data string) *Secret {
return config.newSecret(status, payload, key, data)
}
@ -122,7 +86,7 @@ func NewEmptySecret() *Secret {
// NewPlainSecret stores the given payload in a plain text secret
func NewPlainSecret(payload string) *Secret {
return NewSecret(SecretStatusPlain, payload, "", "")
return NewSecret(sdkkms.SecretStatusPlain, payload, "", "")
}
// Initialize configures the KMS support
@ -139,16 +103,16 @@ func (c *Configuration) Initialize() error {
}
config = *c
if config.Secrets.URL == "" {
config.Secrets.URL = SchemeLocal + "://"
config.Secrets.URL = sdkkms.SchemeLocal + "://"
}
for k, v := range secretProviders {
logger.Debug(logSender, "", "secret provider registered for scheme: %#v, encrypted status: %#v",
logger.Info(logSender, "", "secret provider registered for scheme: %#v, encrypted status: %#v",
k, v.encryptedStatus)
}
return nil
}
func (c *Configuration) newSecret(status SecretStatus, payload, key, data string) *Secret {
func (c *Configuration) newSecret(status sdkkms.SecretStatus, payload, key, data string) *Secret {
base := BaseSecret{
Status: status,
Key: key,
@ -206,7 +170,7 @@ func (s *Secret) UnmarshalJSON(data []byte) error {
return nil
}
if baseSecret.Status == SecretStatusPlain || baseSecret.Status == SecretStatusRedacted {
if baseSecret.Status == sdkkms.SecretStatusPlain || baseSecret.Status == sdkkms.SecretStatusRedacted {
s.provider = config.getSecretProvider(baseSecret)
return nil
}
@ -217,8 +181,7 @@ func (s *Secret) UnmarshalJSON(data []byte) error {
return nil
}
}
logger.Debug(logSender, "", "no provider registered for status %#v", baseSecret.Status)
logger.Error(logSender, "", "no provider registered for status %#v", baseSecret.Status)
return ErrInvalidSecret
}
@ -267,7 +230,7 @@ func (s *Secret) IsPlain() bool {
s.RLock()
defer s.RUnlock()
return s.provider.GetStatus() == SecretStatusPlain
return s.provider.GetStatus() == sdkkms.SecretStatusPlain
}
// IsNotPlainAndNotEmpty returns true if the secret is not plain and not empty.
@ -285,7 +248,7 @@ func (s *Secret) IsRedacted() bool {
s.RLock()
defer s.RUnlock()
return s.provider.GetStatus() == SecretStatusRedacted
return s.provider.GetStatus() == sdkkms.SecretStatusRedacted
}
// GetPayload returns the secret payload
@ -305,7 +268,7 @@ func (s *Secret) GetAdditionalData() string {
}
// GetStatus returns the secret status
func (s *Secret) GetStatus() SecretStatus {
func (s *Secret) GetStatus() sdkkms.SecretStatus {
s.RLock()
defer s.RUnlock()
@ -337,7 +300,7 @@ func (s *Secret) SetAdditionalData(value string) {
}
// SetStatus sets the status for this secret
func (s *Secret) SetStatus(value SecretStatus) {
func (s *Secret) SetStatus(value sdkkms.SecretStatus) {
s.Lock()
defer s.Unlock()
@ -381,11 +344,11 @@ func (s *Secret) IsValid() bool {
return false
}
switch s.provider.GetStatus() {
case SecretStatusAES256GCM, SecretStatusSecretBox:
case sdkkms.SecretStatusAES256GCM, sdkkms.SecretStatusSecretBox:
if len(s.provider.GetKey()) != 64 {
return false
}
case SecretStatusAWS, SecretStatusGCP, SecretStatusVaultTransit:
case sdkkms.SecretStatusAWS, sdkkms.SecretStatusGCP, sdkkms.SecretStatusVaultTransit:
key := s.provider.GetKey()
if key != "" && len(key) != 64 {
return false
@ -399,7 +362,7 @@ func (s *Secret) IsValidInput() bool {
s.RLock()
defer s.RUnlock()
if !util.IsStringInSlice(s.provider.GetStatus(), validSecretStatuses) {
if !isSecretStatusValid(s.provider.GetStatus()) {
return false
}
if s.provider.GetPayload() == "" {
@ -444,3 +407,12 @@ func (s *Secret) TryDecrypt() error {
}
return nil
}
func isSecretStatusValid(status string) bool {
for idx := range validSecretStatuses {
if validSecretStatuses[idx] == status {
return true
}
}
return false
}
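With the local status and scheme constants removed above, callers now import them from the shared SDK package. A minimal sketch, assuming the SDK keeps the same string values (for example "Plain" and "builtin") so already-serialized secrets remain compatible:

package main

import (
	"fmt"

	sdkkms "github.com/sftpgo/sdk/kms"
)

func main() {
	// Constants previously defined in the internal kms package are now
	// referenced from the SDK; the printed values are assumed unchanged.
	fmt.Println(sdkkms.SecretStatusPlain)
	fmt.Println(sdkkms.SchemeBuiltin)
}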


@ -7,12 +7,13 @@ import (
"encoding/hex"
"io"
sdkkms "github.com/sftpgo/sdk/kms"
"gocloud.dev/secrets/localsecrets"
"golang.org/x/crypto/hkdf"
)
func init() {
RegisterSecretProvider(SchemeLocal, SecretStatusSecretBox, NewLocalSecret)
RegisterSecretProvider(sdkkms.SchemeLocal, sdkkms.SecretStatusSecretBox, NewLocalSecret)
}
type localSecret struct {
@ -33,11 +34,11 @@ func (s *localSecret) Name() string {
}
func (s *localSecret) IsEncrypted() bool {
return s.Status == SecretStatusSecretBox
return s.Status == sdkkms.SecretStatusSecretBox
}
func (s *localSecret) Encrypt() error {
if s.Status != SecretStatusPlain {
if s.Status != sdkkms.SecretStatusPlain {
return ErrWrongSecretStatus
}
if s.Payload == "" {
@ -60,7 +61,7 @@ func (s *localSecret) Encrypt() error {
}
s.Key = hex.EncodeToString(secretKey[:])
s.Payload = base64.StdEncoding.EncodeToString(ciphertext)
s.Status = SecretStatusSecretBox
s.Status = sdkkms.SecretStatusSecretBox
s.Mode = s.getEncryptionMode()
return nil
}
@ -88,7 +89,7 @@ func (s *localSecret) Decrypt() error {
if err != nil {
return err
}
s.Status = SecretStatusPlain
s.Status = sdkkms.SecretStatusPlain
s.Payload = string(plaintext)
s.Key = ""
s.AdditionalData = ""


@ -41,6 +41,10 @@ var (
rollingLogger *lumberjack.Logger
)
func init() {
zerolog.TimeFieldFormat = dateFormat
}
// StdLoggerWrapper is a wrapper for standard logger compatibility
type StdLoggerWrapper struct {
Sender string
@ -123,6 +127,11 @@ func (l *LeveledLogger) Warn(msg string, keysAndValues ...interface{}) {
ev.Msg(msg)
}
// Panic logs the panic at error level for the specified sender
func (l *LeveledLogger) Panic(msg string, keysAndValues ...interface{}) {
l.Error(msg, keysAndValues...)
}
// With returns a LeveledLogger with additional context specific keyvals
func (l *LeveledLogger) With(keysAndValues ...interface{}) ftpserverlog.Logger {
return &LeveledLogger{
@ -138,9 +147,10 @@ func GetLogger() *zerolog.Logger {
// SetLogTime sets logging time related setting
func SetLogTime(utc bool) {
zerolog.TimeFieldFormat = dateFormat
if utc {
zerolog.TimestampFunc = time.Now().UTC
zerolog.TimestampFunc = func() time.Time {
return time.Now().UTC()
}
} else {
zerolog.TimestampFunc = time.Now
}
@ -231,22 +241,22 @@ func Log(level LogLevel, sender string, connectionID string, format string, v ..
}
// Debug logs at debug level for the specified sender
func Debug(sender string, connectionID string, format string, v ...interface{}) {
func Debug(sender, connectionID, format string, v ...interface{}) {
Log(LevelDebug, sender, connectionID, format, v...)
}
// Info logs at info level for the specified sender
func Info(sender string, connectionID string, format string, v ...interface{}) {
func Info(sender, connectionID, format string, v ...interface{}) {
Log(LevelInfo, sender, connectionID, format, v...)
}
// Warn logs at warn level for the specified sender
func Warn(sender string, connectionID string, format string, v ...interface{}) {
func Warn(sender, connectionID, format string, v ...interface{}) {
Log(LevelWarn, sender, connectionID, format, v...)
}
// Error logs at error level for the specified sender
func Error(sender string, connectionID string, format string, v ...interface{}) {
func Error(sender, connectionID, format string, v ...interface{}) {
Log(LevelError, sender, connectionID, format, v...)
}


@ -23,7 +23,7 @@ info:
SFTPGo also supports virtual folders; a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one.
Virtual folders can be private or shared among multiple users; for shared virtual folders you can define different quota limits for each user.
SFTPGo allows you to create HTTP/S links to externally share files and folders securely, by setting limits to the number of downloads/uploads, protecting the share with a password, limiting access by source IP address, and setting an automatic expiration date.
version: 2.2.1-dev
version: 2.2.3
contact:
name: API support
url: 'https://github.com/drakkan/sftpgo'
@ -1758,6 +1758,24 @@ paths:
type: string
description: 'the event SSH command must be the same as the one specified. Empty or missing means omit this filter'
required: false
- in: query
name: fs_provider
schema:
type: integer
description: 'the event filesystem provider must be the same as the one specified. Empty or missing means omit this filter'
required: false
- in: query
name: bucket
schema:
type: string
description: 'the bucket must be the same as the one specified. Empty or missing means omit this filter'
required: false
- in: query
name: endpoint
schema:
type: string
description: 'the endpoint must be the same as the one specified. Empty or missing means omit this filter'
required: false
- in: query
name: protocols
schema:
@ -4130,6 +4148,23 @@ components:
* `retention_checks` - view and start retention checks is allowed
* `metadata_checks` - view and start metadata checks is allowed
* `view_events` - view and search filesystem and provider events is allowed
FsProviders:
type: integer
enum:
- 0
- 1
- 2
- 3
- 4
- 5
description: |
Filesystem providers:
* `0` - Local filesystem
* `1` - S3 Compatible Object Storage
* `2` - Google Cloud Storage
* `3` - Azure Blob Storage
* `4` - Local filesystem encrypted
* `5` - SFTP
LoginMethods:
type: string
enum:
@ -4268,6 +4303,7 @@ components:
type: string
enum:
- download
- pre-upload
- upload
- delete
- rename
@ -4621,22 +4657,7 @@ components:
type: object
properties:
provider:
type: integer
enum:
- 0
- 1
- 2
- 3
- 4
- 5
description: |
Providers:
* `0` - Local filesystem
* `1` - S3 Compatible Object Storage
* `2` - Google Cloud Storage
* `3` - Azure Blob Storage
* `4` - Local filesystem encrypted
* `5` - SFTP
$ref: '#/components/schemas/FsProviders'
s3config:
$ref: '#/components/schemas/S3Config'
gcsconfig:
@ -5413,6 +5434,14 @@ components:
type: string
session_id:
type: string
fs_provider:
$ref: '#/components/schemas/FsProviders'
bucket:
type: string
endpoint:
type: string
open_flags:
type: string
instance_id:
type: string
ProviderEvent:
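A short sketch of how a client could combine the new filesystem event filters; the events search path and the bearer token handling are assumptions, since the fragment above only documents the query parameters, and the address is a placeholder:

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Filter filesystem events by provider (1 = S3 Compatible Object Storage)
	// and bucket, using the query parameters documented above.
	params := url.Values{}
	params.Set("fs_provider", "1")
	params.Set("bucket", "mybucket")
	params.Set("limit", "50")

	// Assumed endpoint path; adjust it to match your deployment.
	endpoint := "http://127.0.0.1:8080/api/v2/events/fs?" + params.Encode()
	req, err := http.NewRequest(http.MethodGet, endpoint, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer <your JWT token>")
	fmt.Println(req.Method, req.URL.String())
}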


@ -1,6 +1,6 @@
#!/bin/bash
NFPM_VERSION=2.11.3
NFPM_VERSION=2.15.1
NFPM_ARCH=${NFPM_ARCH:-amd64}
if [ -z ${SFTPGO_VERSION} ]
then


@ -35,6 +35,8 @@ if [ "$1" = "configure" ]; then
chmod 750 /srv/sftpgo
fi
# set the cap_net_bind_service capability so the service can bind to privileged ports
setcap cap_net_bind_service=+ep /usr/bin/sftpgo || true
fi
#DEBHELPER#


@ -35,6 +35,8 @@ if [ "$1" = "configure" ]; then
chmod 750 /srv/sftpgo
fi
# set the cap_net_bind_service capability so the service can bind to privileged ports
setcap cap_net_bind_service=+ep /usr/bin/sftpgo || true
fi
if [ "$1" = "configure" ] || [ "$1" = "abort-upgrade" ] || [ "$1" = "abort-deconfigure" ] || [ "$1" = "abort-remove" ] ; then


@ -32,5 +32,8 @@ if [ -d /var/lib/sftpgo ]; then
/usr/bin/chmod 750 /var/lib/sftpgo
fi
# set the cap_net_bind_service capability so the service can bind to privileged ports
setcap cap_net_bind_service=+ep /usr/bin/sftpgo || :
# reload to pick up any changes to systemd files
/bin/systemctl daemon-reload >/dev/null 2>&1 || :


@ -8,9 +8,9 @@ import (
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin"
"github.com/sftpgo/sdk/plugin/auth"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin/auth"
)
// Supported auth scopes


@ -8,17 +8,18 @@ import (
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin"
sdkkms "github.com/sftpgo/sdk/kms"
kmsplugin "github.com/sftpgo/sdk/plugin/kms"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
kmsplugin "github.com/drakkan/sftpgo/v2/sdk/plugin/kms"
"github.com/drakkan/sftpgo/v2/util"
)
var (
validKMSSchemes = []string{kms.SchemeAWS, kms.SchemeGCP, kms.SchemeVaultTransit, kms.SchemeAzureKeyVault}
validKMSEncryptedStatuses = []string{kms.SecretStatusVaultTransit, kms.SecretStatusAWS, kms.SecretStatusGCP,
kms.SecretStatusAzureKeyVault}
validKMSSchemes = []string{sdkkms.SchemeAWS, sdkkms.SchemeGCP, sdkkms.SchemeVaultTransit, sdkkms.SchemeAzureKeyVault}
validKMSEncryptedStatuses = []string{sdkkms.SecretStatusVaultTransit, sdkkms.SecretStatusAWS, sdkkms.SecretStatusGCP,
sdkkms.SecretStatusAzureKeyVault}
)
// KMSConfig defines configuration parameters for kms plugins
@ -133,7 +134,7 @@ func (s *kmsPluginSecretProvider) IsEncrypted() bool {
}
func (s *kmsPluginSecretProvider) Encrypt() error {
if s.Status != kms.SecretStatusPlain {
if s.Status != sdkkms.SecretStatusPlain {
return kms.ErrWrongSecretStatus
}
if s.Payload == "" {
@ -160,7 +161,7 @@ func (s *kmsPluginSecretProvider) Decrypt() error {
if err != nil {
return err
}
s.Status = kms.SecretStatusPlain
s.Status = sdkkms.SecretStatusPlain
s.Payload = payload
s.Key = ""
s.AdditionalData = ""


@ -7,9 +7,9 @@ import (
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin"
"github.com/sftpgo/sdk/plugin/metadata"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin/metadata"
)
type metadataPlugin struct {


@ -9,10 +9,9 @@ import (
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/sdk/plugin/notifier/proto"
"github.com/drakkan/sftpgo/v2/util"
)
@ -37,48 +36,25 @@ func (c *NotifierConfig) hasActions() bool {
type eventsQueue struct {
sync.RWMutex
fsEvents []*proto.FsEvent
providerEvents []*proto.ProviderEvent
fsEvents []*notifier.FsEvent
providerEvents []*notifier.ProviderEvent
}
func (q *eventsQueue) addFsEvent(timestamp int64, action, username, fsPath, fsTargetPath, sshCmd, protocol, ip string,
fileSize int64, status int,
) {
func (q *eventsQueue) addFsEvent(event *notifier.FsEvent) {
q.Lock()
defer q.Unlock()
q.fsEvents = append(q.fsEvents, &proto.FsEvent{
Timestamp: timestamp,
Action: action,
Username: username,
FsPath: fsPath,
FsTargetPath: fsTargetPath,
SshCmd: sshCmd,
FileSize: fileSize,
Protocol: protocol,
Ip: ip,
Status: int32(status),
})
q.fsEvents = append(q.fsEvents, event)
}
func (q *eventsQueue) addProviderEvent(timestamp int64, action, username, objectType, objectName, ip string,
objectAsJSON []byte,
) {
func (q *eventsQueue) addProviderEvent(event *notifier.ProviderEvent) {
q.Lock()
defer q.Unlock()
q.providerEvents = append(q.providerEvents, &proto.ProviderEvent{
Timestamp: timestamp,
Action: action,
ObjectType: objectType,
Username: username,
Ip: ip,
ObjectName: objectName,
ObjectData: objectAsJSON,
})
q.providerEvents = append(q.providerEvents, event)
}
func (q *eventsQueue) popFsEvent() *proto.FsEvent {
func (q *eventsQueue) popFsEvent() *notifier.FsEvent {
q.Lock()
defer q.Unlock()
@ -93,7 +69,7 @@ func (q *eventsQueue) popFsEvent() *proto.FsEvent {
return ev
}
func (q *eventsQueue) popProviderEvent() *proto.ProviderEvent {
func (q *eventsQueue) popProviderEvent() *notifier.ProviderEvent {
q.Lock()
defer q.Unlock()
@ -193,7 +169,9 @@ func (p *notifierPlugin) canQueueEvent(timestamp int64) bool {
if p.config.NotifierOptions.RetryMaxTime == 0 {
return false
}
if time.Now().After(util.GetTimeFromMsecSinceEpoch(timestamp).Add(time.Duration(p.config.NotifierOptions.RetryMaxTime) * time.Second)) {
if time.Now().After(time.Unix(0, timestamp).Add(time.Duration(p.config.NotifierOptions.RetryMaxTime) * time.Second)) {
logger.Warn(logSender, "", "dropping too late event for plugin %v, event timestamp: %v",
p.config.Cmd, time.Unix(0, timestamp))
return false
}
if p.config.NotifierOptions.RetryQueueMaxSize > 0 {
@ -202,58 +180,47 @@ func (p *notifierPlugin) canQueueEvent(timestamp int64) bool {
return true
}
func (p *notifierPlugin) notifyFsAction(timestamp int64, action, username, fsPath, fsTargetPath, sshCmd,
protocol, ip, virtualPath, virtualTargetPath, sessionID string, fileSize int64, errAction error) {
if !util.IsStringInSlice(action, p.config.NotifierOptions.FsEvents) {
func (p *notifierPlugin) notifyFsAction(event *notifier.FsEvent) {
if !util.IsStringInSlice(event.Action, p.config.NotifierOptions.FsEvents) {
return
}
go func() {
status := 1
if errAction != nil {
status = 0
}
p.sendFsEvent(timestamp, action, username, fsPath, fsTargetPath, sshCmd, protocol, ip, virtualPath, virtualTargetPath,
sessionID, fileSize, status)
p.sendFsEvent(event)
}()
}
func (p *notifierPlugin) notifyProviderAction(timestamp int64, action, username, objectType, objectName, ip string,
object Renderer,
) {
if !util.IsStringInSlice(action, p.config.NotifierOptions.ProviderEvents) ||
!util.IsStringInSlice(objectType, p.config.NotifierOptions.ProviderObjects) {
func (p *notifierPlugin) notifyProviderAction(event *notifier.ProviderEvent, object Renderer) {
if !util.IsStringInSlice(event.Action, p.config.NotifierOptions.ProviderEvents) ||
!util.IsStringInSlice(event.ObjectType, p.config.NotifierOptions.ProviderObjects) {
return
}
go func() {
objectAsJSON, err := object.RenderAsJSON(action != "delete")
objectAsJSON, err := object.RenderAsJSON(event.Action != "delete")
if err != nil {
logger.Warn(logSender, "", "unable to render user as json for action %v: %v", action, err)
logger.Warn(logSender, "", "unable to render user as json for action %v: %v", event.Action, err)
return
}
p.sendProviderEvent(timestamp, action, username, objectType, objectName, ip, objectAsJSON)
event.ObjectData = objectAsJSON
p.sendProviderEvent(event)
}()
}
func (p *notifierPlugin) sendFsEvent(timestamp int64, action, username, fsPath, fsTargetPath, sshCmd,
protocol, ip, virtualPath, virtualTargetPath, sessionID string, fileSize int64, status int) {
if err := p.notifier.NotifyFsEvent(timestamp, action, username, fsPath, fsTargetPath, sshCmd, protocol, ip,
virtualPath, virtualTargetPath, sessionID, fileSize, status); err != nil {
func (p *notifierPlugin) sendFsEvent(event *notifier.FsEvent) {
if err := p.notifier.NotifyFsEvent(event); err != nil {
logger.Warn(logSender, "", "unable to send fs action notification to plugin %v: %v", p.config.Cmd, err)
if p.canQueueEvent(timestamp) {
p.queue.addFsEvent(timestamp, action, username, fsPath, fsTargetPath, sshCmd, protocol, ip, fileSize, status)
if p.canQueueEvent(event.Timestamp) {
p.queue.addFsEvent(event)
}
}
}
func (p *notifierPlugin) sendProviderEvent(timestamp int64, action, username, objectType, objectName, ip string,
objectAsJSON []byte,
) {
if err := p.notifier.NotifyProviderEvent(timestamp, action, username, objectType, objectName, ip, objectAsJSON); err != nil {
func (p *notifierPlugin) sendProviderEvent(event *notifier.ProviderEvent) {
if err := p.notifier.NotifyProviderEvent(event); err != nil {
logger.Warn(logSender, "", "unable to send user action notification to plugin %v: %v", p.config.Cmd, err)
if p.canQueueEvent(timestamp) {
p.queue.addProviderEvent(timestamp, action, username, objectType, objectName, ip, objectAsJSON)
if p.canQueueEvent(event.Timestamp) {
p.queue.addProviderEvent(event)
}
}
}
@ -266,16 +233,17 @@ func (p *notifierPlugin) sendQueuedEvents() {
logger.Debug(logSender, "", "check queued events for notifier %#v, events size: %v", p.config.Cmd, queueSize)
fsEv := p.queue.popFsEvent()
for fsEv != nil {
go p.sendFsEvent(fsEv.Timestamp, fsEv.Action, fsEv.Username, fsEv.FsPath, fsEv.FsTargetPath,
fsEv.SshCmd, fsEv.Protocol, fsEv.Ip, fsEv.VirtualPath, fsEv.VirtualTargetPath, fsEv.SessionId,
fsEv.FileSize, int(fsEv.Status))
go func(ev *notifier.FsEvent) {
p.sendFsEvent(ev)
}(fsEv)
fsEv = p.queue.popFsEvent()
}
providerEv := p.queue.popProviderEvent()
for providerEv != nil {
go p.sendProviderEvent(providerEv.Timestamp, providerEv.Action, providerEv.Username, providerEv.ObjectType,
providerEv.ObjectName, providerEv.Ip, providerEv.ObjectData)
go func(ev *notifier.ProviderEvent) {
p.sendProviderEvent(ev)
}(providerEv)
providerEv = p.queue.popProviderEvent()
}
logger.Debug(logSender, "", "queued events sent for notifier %#v, new events size: %v", p.config.Cmd, p.queue.getSize())


@ -10,14 +10,14 @@ import (
"time"
"github.com/hashicorp/go-hclog"
"github.com/sftpgo/sdk/plugin/auth"
"github.com/sftpgo/sdk/plugin/eventsearcher"
kmsplugin "github.com/sftpgo/sdk/plugin/kms"
"github.com/sftpgo/sdk/plugin/metadata"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin/auth"
"github.com/drakkan/sftpgo/v2/sdk/plugin/eventsearcher"
kmsplugin "github.com/drakkan/sftpgo/v2/sdk/plugin/kms"
"github.com/drakkan/sftpgo/v2/sdk/plugin/metadata"
"github.com/drakkan/sftpgo/v2/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/util"
)
@ -95,6 +95,7 @@ type Manager struct {
authScopes int
hasSearcher bool
hasMetadater bool
hasNotifiers bool
}
// Initialize initializes the configured plugins
@ -134,7 +135,7 @@ func Initialize(configs []Config, logVerbose bool) error {
kmsID++
kms.RegisterSecretProvider(config.KMSOptions.Scheme, config.KMSOptions.EncryptedStatus,
Handler.Configs[idx].newKMSPluginSecretProvider)
logger.Debug(logSender, "", "registered secret provider for scheme: %v, encrypted status: %v",
logger.Info(logSender, "", "registered secret provider for scheme: %v, encrypted status: %v",
config.KMSOptions.Scheme, config.KMSOptions.EncryptedStatus)
case auth.PluginName:
plugin, err := newAuthPlugin(config)
@ -172,6 +173,7 @@ func (m *Manager) validateConfigs() error {
kmsEncryptions := make(map[string]bool)
m.hasSearcher = false
m.hasMetadater = false
m.hasNotifiers = false
for _, config := range m.Configs {
if config.Type == kmsplugin.PluginName {
@ -196,40 +198,40 @@ func (m *Manager) validateConfigs() error {
}
m.hasMetadater = true
}
if config.Type == notifier.PluginName {
m.hasNotifiers = true
}
}
return nil
}
// HasNotifiers returns true if there is at least a notifier plugin
func (m *Manager) HasNotifiers() bool {
return m.hasNotifiers
}
// NotifyFsEvent sends the fs event notifications using any defined notifier plugins
func (m *Manager) NotifyFsEvent(timestamp int64, action, username, fsPath, fsTargetPath, sshCmd, protocol, ip,
virtualPath, virtualTargetPath, sessionID string, fileSize int64, err error,
) {
func (m *Manager) NotifyFsEvent(event *notifier.FsEvent) {
m.notifLock.RLock()
defer m.notifLock.RUnlock()
for _, n := range m.notifiers {
n.notifyFsAction(timestamp, action, username, fsPath, fsTargetPath, sshCmd, protocol, ip, virtualPath,
virtualTargetPath, sessionID, fileSize, err)
n.notifyFsAction(event)
}
}
// NotifyProviderEvent sends the provider event notifications using any defined notifier plugins
func (m *Manager) NotifyProviderEvent(timestamp int64, action, username, objectType, objectName, ip string,
object Renderer,
) {
func (m *Manager) NotifyProviderEvent(event *notifier.ProviderEvent, object Renderer) {
m.notifLock.RLock()
defer m.notifLock.RUnlock()
for _, n := range m.notifiers {
n.notifyProviderAction(timestamp, action, username, objectType, objectName, ip, object)
n.notifyProviderAction(event, object)
}
}
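For illustration, after this refactoring callers pass a single event struct instead of a long positional parameter list. A minimal caller sketch follows; the notifier.FsEvent field names are assumptions inferred from the former parameters and the queued-event fields shown above, not a verified copy of the external sftpgo/sdk definition, and the import path for this Manager package is likewise assumed.
// Hypothetical caller sketch for the event-struct based notification API.
package main
import (
	"time"
	"github.com/sftpgo/sdk/plugin/notifier"
	"github.com/drakkan/sftpgo/v2/plugin" // assumed import path for the Manager package above
)
func notifyUpload(username, fsPath, virtualPath string, size int64) {
	plugin.Handler.NotifyFsEvent(&notifier.FsEvent{
		Timestamp:   time.Now().UnixNano(), // assumed: nanosecond timestamps, as in the queued events
		Action:      "upload",
		Username:    username,
		Path:        fsPath,      // assumed field name
		VirtualPath: virtualPath, // assumed field name
		Protocol:    "SFTP",
		FileSize:    size,
		Status:      1, // assumed: 1 means success
	})
}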
// SearchFsEvents returns the filesystem events matching the specified filter and a continuation token
// to use for cursor based pagination
func (m *Manager) SearchFsEvents(startTimestamp, endTimestamp int64, username, ip, sshCmd string, actions,
protocols, instanceIDs, excludeIDs []string, statuses []int32, limit, order int,
) ([]byte, []string, []string, error) {
// SearchFsEvents returns the filesystem events matching the specified filters
func (m *Manager) SearchFsEvents(searchFilters *eventsearcher.FsEventSearch) ([]byte, []string, []string, error) {
if !m.hasSearcher {
return nil, nil, nil, ErrNoSearcher
}
@ -237,15 +239,11 @@ func (m *Manager) SearchFsEvents(startTimestamp, endTimestamp int64, username, i
plugin := m.searcher
m.searcherLock.RUnlock()
return plugin.searchear.SearchFsEvents(startTimestamp, endTimestamp, username, ip, sshCmd, actions, protocols,
instanceIDs, excludeIDs, statuses, limit, order)
return plugin.searchear.SearchFsEvents(searchFilters)
}
// SearchProviderEvents returns the provider events matching the specified filter and a continuation token
// to use for cursor based pagination
func (m *Manager) SearchProviderEvents(startTimestamp, endTimestamp int64, username, ip, objectName string,
limit, order int, actions, objectTypes, instanceIDs, excludeIDs []string,
) ([]byte, []string, []string, error) {
// SearchProviderEvents returns the provider events matching the specified filters
func (m *Manager) SearchProviderEvents(searchFilters *eventsearcher.ProviderEventSearch) ([]byte, []string, []string, error) {
if !m.hasSearcher {
return nil, nil, nil, ErrNoSearcher
}
@ -253,8 +251,7 @@ func (m *Manager) SearchProviderEvents(startTimestamp, endTimestamp int64, usern
plugin := m.searcher
m.searcherLock.RUnlock()
return plugin.searchear.SearchProviderEvents(startTimestamp, endTimestamp, username, ip, objectName, limit,
order, actions, objectTypes, instanceIDs, excludeIDs)
return plugin.searchear.SearchProviderEvents(searchFilters)
}
// HasMetadater returns true if a metadata plugin is defined
@ -360,7 +357,7 @@ func (m *Manager) Authenticate(username, password, ip, protocol string, pkey str
case AuthScopeTLSCertificate:
cert, err := util.EncodeTLSCertToPem(tlsCert)
if err != nil {
logger.Warn(logSender, "", "unable to encode tls certificate to pem: %v", err)
logger.Error(logSender, "", "unable to encode tls certificate to pem: %v", err)
return nil, fmt.Errorf("unable to encode tls cert to pem: %w", err)
}
return m.checkUserAndTLSCert(username, cert, ip, protocol, userAsJSON)
@ -526,7 +523,7 @@ func (m *Manager) restartNotifierPlugin(config Config, idx int) {
logger.Info(logSender, "", "try to restart crashed notifier plugin %#v, idx: %v", config.Cmd, idx)
plugin, err := newNotifierPlugin(config)
if err != nil {
logger.Warn(logSender, "", "unable to restart notifier plugin %#v, err: %v", config.Cmd, err)
logger.Error(logSender, "", "unable to restart notifier plugin %#v, err: %v", config.Cmd, err)
return
}
@ -544,7 +541,7 @@ func (m *Manager) restartKMSPlugin(config Config, idx int) {
logger.Info(logSender, "", "try to restart crashed kms plugin %#v, idx: %v", config.Cmd, idx)
plugin, err := newKMSPlugin(config)
if err != nil {
logger.Warn(logSender, "", "unable to restart kms plugin %#v, err: %v", config.Cmd, err)
logger.Error(logSender, "", "unable to restart kms plugin %#v, err: %v", config.Cmd, err)
return
}
@ -560,7 +557,7 @@ func (m *Manager) restartAuthPlugin(config Config, idx int) {
logger.Info(logSender, "", "try to restart crashed auth plugin %#v, idx: %v", config.Cmd, idx)
plugin, err := newAuthPlugin(config)
if err != nil {
logger.Warn(logSender, "", "unable to restart auth plugin %#v, err: %v", config.Cmd, err)
logger.Error(logSender, "", "unable to restart auth plugin %#v, err: %v", config.Cmd, err)
return
}
@ -576,7 +573,7 @@ func (m *Manager) restartSearcherPlugin(config Config) {
logger.Info(logSender, "", "try to restart crashed searcher plugin %#v", config.Cmd)
plugin, err := newSearcherPlugin(config)
if err != nil {
logger.Warn(logSender, "", "unable to restart searcher plugin %#v, err: %v", config.Cmd, err)
logger.Error(logSender, "", "unable to restart searcher plugin %#v, err: %v", config.Cmd, err)
return
}
@ -592,7 +589,7 @@ func (m *Manager) restartMetadaterPlugin(config Config) {
logger.Info(logSender, "", "try to restart crashed metadater plugin %#v", config.Cmd)
plugin, err := newMetadaterPlugin(config)
if err != nil {
logger.Warn(logSender, "", "unable to restart metadater plugin %#v, err: %v", config.Cmd, err)
logger.Error(logSender, "", "unable to restart metadater plugin %#v, err: %v", config.Cmd, err)
return
}


@ -7,9 +7,9 @@ import (
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin"
"github.com/sftpgo/sdk/plugin/eventsearcher"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin/eventsearcher"
)
type searcherPlugin struct {


@ -16,10 +16,10 @@ func killProcess(processPath string) {
if err == nil {
if cmdLine == processPath {
err = p.Kill()
logger.Debug(logSender, "", "killed process %v, pid %v, err %v", cmdLine, p.Pid, err)
logger.Debug(logSender, "killed process %v, pid %v, err %v", cmdLine, p.Pid, err)
return
}
}
}
logger.Debug(logSender, "", "no match for plugin process %v", processPath)
logger.Debug(logSender, "no match for plugin process %v", processPath)
}


@ -1,224 +0,0 @@
package sdk
import "github.com/drakkan/sftpgo/v2/kms"
// FilesystemProvider defines the supported storage filesystems
type FilesystemProvider int
// supported values for FilesystemProvider
const (
LocalFilesystemProvider FilesystemProvider = iota // Local
S3FilesystemProvider // AWS S3 compatible
GCSFilesystemProvider // Google Cloud Storage
AzureBlobFilesystemProvider // Azure Blob Storage
CryptedFilesystemProvider // Local encrypted
SFTPFilesystemProvider // SFTP
)
// GetProviderByName returns the FilesystemProvider matching a given name
// to provide backwards compatibility, numeric strings are accepted as well
func GetProviderByName(name string) FilesystemProvider {
switch name {
case "0", "osfs":
return LocalFilesystemProvider
case "1", "s3fs":
return S3FilesystemProvider
case "2", "gcsfs":
return GCSFilesystemProvider
case "3", "azblobfs":
return AzureBlobFilesystemProvider
case "4", "cryptfs":
return CryptedFilesystemProvider
case "5", "sftpfs":
return SFTPFilesystemProvider
}
// TODO think about returning an error value instead of silently defaulting to LocalFilesystemProvider
return LocalFilesystemProvider
}
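A short usage sketch of the name lookup and of the silent fallback noted in the TODO above; the sdk import path refers to the internal package shown here, which this branch replaces with the external github.com/sftpgo/sdk module.
// Round trip between provider names and the FilesystemProvider enum.
package main
import (
	"fmt"
	"github.com/drakkan/sftpgo/v2/sdk"
)
func main() {
	p := sdk.GetProviderByName("sftpfs")
	fmt.Println(p == sdk.GetProviderByName("5")) // true: numeric strings are accepted too
	fmt.Println(p.Name(), p.ShortInfo())         // "sftpfs SFTP"
	// Unknown names silently map to the local provider, as the TODO above points out.
	fmt.Println(sdk.GetProviderByName("unknown") == sdk.LocalFilesystemProvider) // true
}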
// Name returns the Provider's unique name
func (p FilesystemProvider) Name() string {
switch p {
case LocalFilesystemProvider:
return "osfs"
case S3FilesystemProvider:
return "s3fs"
case GCSFilesystemProvider:
return "gcsfs"
case AzureBlobFilesystemProvider:
return "azblobfs"
case CryptedFilesystemProvider:
return "cryptfs"
case SFTPFilesystemProvider:
return "sftpfs"
}
return "" // let's not claim to be
}
// ShortInfo returns a human readable, short description for the given FilesystemProvider
func (p FilesystemProvider) ShortInfo() string {
switch p {
case LocalFilesystemProvider:
return "Local"
case S3FilesystemProvider:
return "AWS S3 (Compatible)"
case GCSFilesystemProvider:
return "Google Cloud Storage"
case AzureBlobFilesystemProvider:
return "Azure Blob Storage"
case CryptedFilesystemProvider:
return "Local encrypted"
case SFTPFilesystemProvider:
return "SFTP"
}
return ""
}
// ListProviders returns a list of available FilesystemProviders.
func ListProviders() []FilesystemProvider {
return []FilesystemProvider{
LocalFilesystemProvider, S3FilesystemProvider,
GCSFilesystemProvider, AzureBlobFilesystemProvider,
CryptedFilesystemProvider, SFTPFilesystemProvider,
}
}
// S3FsConfig defines the configuration for S3 based filesystem
type S3FsConfig struct {
Bucket string `json:"bucket,omitempty"`
// KeyPrefix is similar to a chroot directory for local filesystem.
// If specified then the SFTP user will only see objects that start
// with this prefix and so you can restrict access to a specific
// folder. The prefix, if not empty, must not start with "/" and must
// end with "/".
// If empty the whole bucket contents will be available
KeyPrefix string `json:"key_prefix,omitempty"`
Region string `json:"region,omitempty"`
AccessKey string `json:"access_key,omitempty"`
AccessSecret *kms.Secret `json:"access_secret,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
StorageClass string `json:"storage_class,omitempty"`
// The canned ACL to apply to uploaded objects. Leave empty to use the default ACL.
// For more information and available ACLs, see here:
// https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl
ACL string `json:"acl,omitempty"`
// The buffer size (in MB) to use for multipart uploads. The minimum allowed part size is 5MB,
// and if this value is set to zero, the default value (5MB) for the AWS SDK will be used.
// The minimum allowed value is 5.
// Please note that if the upload bandwidth between the SFTP client and SFTPGo is greater than
// the upload bandwidth between SFTPGo and S3 then the SFTP client has to wait for the upload
// of the last parts to S3 after it ends the file upload to SFTPGo, and it may time out.
// Keep this in mind if you customize these parameters.
UploadPartSize int64 `json:"upload_part_size,omitempty"`
// How many parts are uploaded in parallel
UploadConcurrency int `json:"upload_concurrency,omitempty"`
// The buffer size (in MB) to use for multipart downloads. The minimum allowed part size is 5MB,
// and if this value is set to zero, the default value (5MB) for the AWS SDK will be used.
// The minimum allowed value is 5. Ignored for partial downloads.
DownloadPartSize int64 `json:"download_part_size,omitempty"`
// How many parts are downloaded in parallel. Ignored for partial downloads.
DownloadConcurrency int `json:"download_concurrency,omitempty"`
// DownloadPartMaxTime defines the maximum time allowed, in seconds, to download a single chunk (5MB).
// 0 means no timeout. Ignored for partial downloads.
DownloadPartMaxTime int `json:"download_part_max_time,omitempty"`
// Set this to `true` to force the request to use path-style addressing,
// i.e., `http://s3.amazonaws.com/BUCKET/KEY`. By default, the S3 client
// will use virtual hosted bucket addressing when possible
// (`http://BUCKET.s3.amazonaws.com/KEY`)
ForcePathStyle bool `json:"force_path_style,omitempty"`
}
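To make the field comments above concrete, a minimal S3 configuration could look like the sketch below; all values are placeholders, and kms.NewPlainSecret is assumed to be the plain-text constructor for *kms.Secret.
// Illustrative S3FsConfig; values are placeholders, not recommendations.
package main
import (
	"github.com/drakkan/sftpgo/v2/kms"
	"github.com/drakkan/sftpgo/v2/sdk"
)
func exampleS3Config() sdk.S3FsConfig {
	return sdk.S3FsConfig{
		Bucket:       "my-bucket",
		KeyPrefix:    "users/user1/", // must not start with "/" and must end with "/"
		Region:       "eu-west-1",
		AccessKey:    "placeholder-access-key",
		AccessSecret: kms.NewPlainSecret("placeholder-secret"), // assumed kms helper
		// 10 MB parts uploaded 4 at a time; 0 would fall back to the 5 MB SDK default.
		UploadPartSize:    10,
		UploadConcurrency: 4,
	}
}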
// GCSFsConfig defines the configuration for Google Cloud Storage based filesystem
type GCSFsConfig struct {
Bucket string `json:"bucket,omitempty"`
// KeyPrefix is similar to a chroot directory for local filesystem.
// If specified then the SFTP user will only see objects that start
// with this prefix and so you can restrict access to a specific
// folder. The prefix, if not empty, must not start with "/" and must
// end with "/".
// If empty the whole bucket contents will be available
KeyPrefix string `json:"key_prefix,omitempty"`
CredentialFile string `json:"-"`
Credentials *kms.Secret `json:"credentials,omitempty"`
// 0 explicit, 1 automatic
AutomaticCredentials int `json:"automatic_credentials,omitempty"`
StorageClass string `json:"storage_class,omitempty"`
// The ACL to apply to uploaded objects. Leave empty to use the default ACL.
// For more information and available ACLs, refer to the JSON API here:
// https://cloud.google.com/storage/docs/access-control/lists#predefined-acl
ACL string `json:"acl,omitempty"`
}
// AzBlobFsConfig defines the configuration for Azure Blob Storage based filesystem
type AzBlobFsConfig struct {
Container string `json:"container,omitempty"`
// Storage Account Name, leave blank to use SAS URL
AccountName string `json:"account_name,omitempty"`
// Storage Account Key, leave blank to use SAS URL.
// The access key is stored encrypted based on the kms configuration
AccountKey *kms.Secret `json:"account_key,omitempty"`
// Optional endpoint. Default is "blob.core.windows.net".
// If you use the emulator the endpoint must include the protocol,
// for example "http://127.0.0.1:10000"
Endpoint string `json:"endpoint,omitempty"`
// Shared access signature URL, leave blank if using account/key
SASURL *kms.Secret `json:"sas_url,omitempty"`
// KeyPrefix is similar to a chroot directory for local filesystem.
// If specified then the SFTPGo user will only see objects that start
// with this prefix and so you can restrict access to a specific
// folder. The prefix, if not empty, must not start with "/" and must
// end with "/".
// If empty the whole bucket contents will be available
KeyPrefix string `json:"key_prefix,omitempty"`
// The buffer size (in MB) to use for multipart uploads.
// If this value is set to zero, the default value (1MB) for the Azure SDK will be used.
// Please note that if the upload bandwidth between the SFTPGo client and SFTPGo server is
// greater than the upload bandwidth between SFTPGo and Azure then the SFTP client has
// to wait for the upload of the last parts to Azure after it ends the file upload to SFTPGo,
// and it may time out.
// Keep this in mind if you customize these parameters.
UploadPartSize int64 `json:"upload_part_size,omitempty"`
// How many parts are uploaded in parallel
UploadConcurrency int `json:"upload_concurrency,omitempty"`
// Set to true if you use an Azure emulator such as Azurite
UseEmulator bool `json:"use_emulator,omitempty"`
// Blob Access Tier
AccessTier string `json:"access_tier,omitempty"`
}
// CryptFsConfig defines the configuration to store local files as encrypted
type CryptFsConfig struct {
Passphrase *kms.Secret `json:"passphrase,omitempty"`
}
// SFTPFsConfig defines the configuration for SFTP based filesystem
type SFTPFsConfig struct {
Endpoint string `json:"endpoint,omitempty"`
Username string `json:"username,omitempty"`
Password *kms.Secret `json:"password,omitempty"`
PrivateKey *kms.Secret `json:"private_key,omitempty"`
Fingerprints []string `json:"fingerprints,omitempty"`
// Prefix is the path prefix to strip from SFTP resource paths.
Prefix string `json:"prefix,omitempty"`
// Concurrent reads are safe to use and disabling them will degrade performance.
// Some servers automatically delete files once they are downloaded.
// Using concurrent reads is problematic with such servers.
DisableCouncurrentReads bool `json:"disable_concurrent_reads,omitempty"`
// The buffer size (in MB) to use for transfers.
// Buffering could improve performance for high latency networks.
// With buffering enabled upload resume is not supported and a file
// cannot be opened for both reading and writing at the same time
// 0 means disabled.
BufferSize int64 `json:"buffer_size,omitempty"`
}
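The buffering and concurrent-read notes above translate into a configuration like the following sketch for a high latency link; values are placeholders, and kms.NewPlainSecret is again an assumption about the kms package API.
// Illustrative SFTPFsConfig for a high latency remote endpoint.
package main
import (
	"github.com/drakkan/sftpgo/v2/kms"
	"github.com/drakkan/sftpgo/v2/sdk"
)
func exampleSFTPFsConfig() sdk.SFTPFsConfig {
	return sdk.SFTPFsConfig{
		Endpoint:     "sftp.example.com:22",
		Username:     "remote-user",
		Password:     kms.NewPlainSecret("remote-password"), // assumed kms helper
		Fingerprints: []string{"SHA256:placeholder-host-key-fingerprint"},
		Prefix:       "/data",
		// 4 MB of buffering can help on high latency networks, at the cost of
		// upload resume and of opening a file for reading and writing at once.
		BufferSize: 4,
	}
}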
// Filesystem defines filesystem details
type Filesystem struct {
Provider FilesystemProvider `json:"provider"`
S3Config S3FsConfig `json:"s3config,omitempty"`
GCSConfig GCSFsConfig `json:"gcsconfig,omitempty"`
AzBlobConfig AzBlobFsConfig `json:"azblobconfig,omitempty"`
CryptConfig CryptFsConfig `json:"cryptconfig,omitempty"`
SFTPConfig SFTPFsConfig `json:"sftpconfig,omitempty"`
}


@ -1,35 +0,0 @@
package sdk
// BaseVirtualFolder defines the path for the virtual folder and the used quota limits.
// The same folder can be shared among multiple users and each user can have different
// quota limits or a different virtual path.
type BaseVirtualFolder struct {
ID int64 `json:"id"`
Name string `json:"name"`
MappedPath string `json:"mapped_path,omitempty"`
Description string `json:"description,omitempty"`
UsedQuotaSize int64 `json:"used_quota_size"`
// Used quota as number of files
UsedQuotaFiles int `json:"used_quota_files"`
// Last quota update as unix timestamp in milliseconds
LastQuotaUpdate int64 `json:"last_quota_update"`
// list of usernames associated with this virtual folder
Users []string `json:"users,omitempty"`
// Filesystem configuration details
FsConfig Filesystem `json:"filesystem"`
}
// VirtualFolder defines a mapping between an SFTPGo exposed virtual path and a
// filesystem path outside the user home directory.
// The specified paths must be absolute and the virtual path cannot be "/",
// it must be a sub directory. The parent directory for the specified virtual
// path must exist. SFTPGo will, by default, try to automatically create any missing
// parent directory for the configured virtual folders at user login.
type VirtualFolder struct {
BaseVirtualFolder
VirtualPath string `json:"virtual_path"`
// Maximum size allowed as bytes. 0 means unlimited, -1 included in user quota
QuotaSize int64 `json:"quota_size"`
// Maximum number of files allowed. 0 means unlimited, -1 included in user quota
QuotaFiles int `json:"quota_files"`
}
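As a concrete reading of the mapping rules above, a virtual folder that exposes an absolute filesystem path under /shared and counts against the user quota might be declared like this (illustrative values only):
// Illustrative virtual folder mapping using only the fields defined above.
package main
import "github.com/drakkan/sftpgo/v2/sdk"
func exampleVirtualFolder() sdk.VirtualFolder {
	return sdk.VirtualFolder{
		BaseVirtualFolder: sdk.BaseVirtualFolder{
			Name:       "shared",
			MappedPath: "/srv/sftpgo/shared", // absolute path outside the user home
		},
		VirtualPath: "/shared", // cannot be "/", must be a sub directory
		QuotaSize:   -1,        // -1: include this folder in the user quota
		QuotaFiles:  -1,
	}
}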


@ -1,59 +0,0 @@
// Package auth defines the implementation for authentication plugins.
// Authentication plugins allow authenticating external users
package auth
import (
"context"
"github.com/hashicorp/go-plugin"
"google.golang.org/grpc"
"github.com/drakkan/sftpgo/v2/sdk/plugin/auth/proto"
)
const (
// PluginName defines the name for an auth plugin
PluginName = "auth"
)
// Handshake is a common handshake that is shared by plugin and host.
var Handshake = plugin.HandshakeConfig{
ProtocolVersion: 1,
MagicCookieKey: "SFTPGO_PLUGIN_AUTH",
MagicCookieValue: "d1ed507d-d2be-4a38-a460-6fe0b2cc7efc",
}
// PluginMap is the map of plugins we can dispense.
var PluginMap = map[string]plugin.Plugin{
PluginName: &Plugin{},
}
// Authenticator defines the interface for authentication plugins
type Authenticator interface {
CheckUserAndPass(username, password, ip, protocol string, userAsJSON []byte) ([]byte, error)
CheckUserAndTLSCert(username, tlsCert, ip, protocol string, userAsJSON []byte) ([]byte, error)
CheckUserAndPublicKey(username, pubKey, ip, protocol string, userAsJSON []byte) ([]byte, error)
CheckUserAndKeyboardInteractive(username, ip, protocol string, userAsJSON []byte) ([]byte, error)
SendKeyboardAuthRequest(requestID, username, password, ip string, answers, questions []string, step int32) (string, []string, []bool, int, int, error)
}
// Plugin defines the implementation to serve/connect to an auth plugin
type Plugin struct {
plugin.Plugin
Impl Authenticator
}
// GRPCServer defines the GRPC server implementation for this plugin
func (p *Plugin) GRPCServer(broker *plugin.GRPCBroker, s *grpc.Server) error {
proto.RegisterAuthServer(s, &GRPCServer{
Impl: p.Impl,
})
return nil
}
// GRPCClient defines the GRPC client implementation for this plugin
func (p *Plugin) GRPCClient(ctx context.Context, broker *plugin.GRPCBroker, c *grpc.ClientConn) (interface{}, error) {
return &GRPCClient{
client: proto.NewAuthClient(c),
}, nil
}
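The Handshake, PluginMap and GRPC glue above follow the standard hashicorp/go-plugin pattern, so an external auth plugin binary would typically be served roughly as sketched below. myAuthenticator is hypothetical, and on this branch the equivalent package lives in the external github.com/sftpgo/sdk module rather than at the internal path shown here.
// Sketch of an external auth plugin binary built on the package above.
package main
import (
	"errors"
	"github.com/hashicorp/go-plugin"
	"github.com/drakkan/sftpgo/v2/sdk/plugin/auth"
)
// myAuthenticator is a hypothetical Authenticator that only supports passwords.
type myAuthenticator struct{}
func (a *myAuthenticator) CheckUserAndPass(username, password, ip, protocol string, userAsJSON []byte) ([]byte, error) {
	if password == "" {
		return nil, errors.New("empty password")
	}
	return userAsJSON, nil // return the SFTPGo user, possibly modified, as JSON
}
func (a *myAuthenticator) CheckUserAndTLSCert(username, tlsCert, ip, protocol string, userAsJSON []byte) ([]byte, error) {
	return nil, errors.New("not supported")
}
func (a *myAuthenticator) CheckUserAndPublicKey(username, pubKey, ip, protocol string, userAsJSON []byte) ([]byte, error) {
	return nil, errors.New("not supported")
}
func (a *myAuthenticator) CheckUserAndKeyboardInteractive(username, ip, protocol string, userAsJSON []byte) ([]byte, error) {
	return nil, errors.New("not supported")
}
func (a *myAuthenticator) SendKeyboardAuthRequest(requestID, username, password, ip string, answers, questions []string, step int32) (string, []string, []bool, int, int, error) {
	return "", nil, nil, 0, 0, errors.New("not supported")
}
func main() {
	plugin.Serve(&plugin.ServeConfig{
		HandshakeConfig: auth.Handshake,
		Plugins: map[string]plugin.Plugin{
			auth.PluginName: &auth.Plugin{Impl: &myAuthenticator{}},
		},
		GRPCServer: plugin.DefaultGRPCServer, // gRPC transport, matching the glue above
	})
}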


@ -1,165 +0,0 @@
package auth
import (
"context"
"time"
"github.com/drakkan/sftpgo/v2/sdk/plugin/auth/proto"
)
const (
rpcTimeout = 20 * time.Second
)
// GRPCClient is an implementation of the Authenticator interface that talks over RPC.
type GRPCClient struct {
client proto.AuthClient
}
// CheckUserAndPass implements the Authenticator interface
func (c *GRPCClient) CheckUserAndPass(username, password, ip, protocol string, userAsJSON []byte) ([]byte, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.CheckUserAndPass(ctx, &proto.CheckUserAndPassRequest{
Username: username,
Password: password,
Ip: ip,
Protocol: protocol,
User: userAsJSON,
})
if err != nil {
return nil, err
}
return resp.User, nil
}
// CheckUserAndTLSCert implements the Authenticator interface
func (c *GRPCClient) CheckUserAndTLSCert(username, tlsCert, ip, protocol string, userAsJSON []byte) ([]byte, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.CheckUserAndTLSCert(ctx, &proto.CheckUserAndTLSCertRequest{
Username: username,
TlsCert: tlsCert,
Ip: ip,
Protocol: protocol,
User: userAsJSON,
})
if err != nil {
return nil, err
}
return resp.User, nil
}
// CheckUserAndPublicKey implements the Authenticator interface
func (c *GRPCClient) CheckUserAndPublicKey(username, pubKey, ip, protocol string, userAsJSON []byte) ([]byte, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.CheckUserAndPublicKey(ctx, &proto.CheckUserAndPublicKeyRequest{
Username: username,
PubKey: pubKey,
Ip: ip,
Protocol: protocol,
User: userAsJSON,
})
if err != nil {
return nil, err
}
return resp.User, nil
}
// CheckUserAndKeyboardInteractive implements the Authenticator interface
func (c *GRPCClient) CheckUserAndKeyboardInteractive(username, ip, protocol string, userAsJSON []byte) ([]byte, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.CheckUserAndKeyboardInteractive(ctx, &proto.CheckUserAndKeyboardInteractiveRequest{
Username: username,
Ip: ip,
Protocol: protocol,
User: userAsJSON,
})
if err != nil {
return nil, err
}
return resp.User, nil
}
// SendKeyboardAuthRequest implements the Authenticator interface
func (c *GRPCClient) SendKeyboardAuthRequest(requestID, username, password, ip string, answers, questions []string, step int32) (string, []string, []bool, int, int, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.SendKeyboardAuthRequest(ctx, &proto.KeyboardAuthRequest{
RequestID: requestID,
Username: username,
Password: password,
Ip: ip,
Answers: answers,
Questions: questions,
Step: step,
})
if err != nil {
return "", nil, nil, 0, 0, err
}
return resp.Instructions, resp.Questions, resp.Echos, int(resp.AuthResult), int(resp.CheckPassword), err
}
// GRPCServer defines the gRPC server that GRPCClient talks to.
type GRPCServer struct {
Impl Authenticator
}
// CheckUserAndPass implements the server side check user and password method
func (s *GRPCServer) CheckUserAndPass(ctx context.Context, req *proto.CheckUserAndPassRequest) (*proto.AuthResponse, error) {
user, err := s.Impl.CheckUserAndPass(req.Username, req.Password, req.Ip, req.Protocol, req.User)
if err != nil {
return nil, err
}
return &proto.AuthResponse{User: user}, nil
}
// CheckUserAndTLSCert implements the server side check user and tls certificate method
func (s *GRPCServer) CheckUserAndTLSCert(ctx context.Context, req *proto.CheckUserAndTLSCertRequest) (*proto.AuthResponse, error) {
user, err := s.Impl.CheckUserAndTLSCert(req.Username, req.TlsCert, req.Ip, req.Protocol, req.User)
if err != nil {
return nil, err
}
return &proto.AuthResponse{User: user}, nil
}
// CheckUserAndPublicKey implements the server side check user and public key method
func (s *GRPCServer) CheckUserAndPublicKey(ctx context.Context, req *proto.CheckUserAndPublicKeyRequest) (*proto.AuthResponse, error) {
user, err := s.Impl.CheckUserAndPublicKey(req.Username, req.PubKey, req.Ip, req.Protocol, req.User)
if err != nil {
return nil, err
}
return &proto.AuthResponse{User: user}, nil
}
// CheckUserAndKeyboardInteractive implements the server side check user and keyboard interactive method
func (s *GRPCServer) CheckUserAndKeyboardInteractive(ctx context.Context, req *proto.CheckUserAndKeyboardInteractiveRequest) (*proto.AuthResponse, error) {
user, err := s.Impl.CheckUserAndKeyboardInteractive(req.Username, req.Ip, req.Protocol, req.User)
if err != nil {
return nil, err
}
return &proto.AuthResponse{User: user}, nil
}
// SendKeyboardAuthRequest implements the server side method to send a keyboard interactive authentication request
func (s *GRPCServer) SendKeyboardAuthRequest(ctx context.Context, req *proto.KeyboardAuthRequest) (*proto.KeyboardAuthResponse, error) {
instructions, questions, echos, authResult, checkPwd, err := s.Impl.SendKeyboardAuthRequest(req.RequestID, req.Username,
req.Password, req.Ip, req.Answers, req.Questions, req.Step)
if err != nil {
return nil, err
}
return &proto.KeyboardAuthResponse{
Instructions: instructions,
Questions: questions,
Echos: echos,
AuthResult: int32(authResult),
CheckPassword: int32(checkPwd),
}, nil
}

File diff suppressed because it is too large


@ -1,65 +0,0 @@
syntax = "proto3";
package proto;
option go_package = "sdk/plugin/auth/proto";
message CheckUserAndPassRequest {
string username = 1;
string password = 2;
string ip = 3;
string protocol = 4;
bytes user = 5; // SFTPGo user JSON serialized
}
message CheckUserAndTLSCertRequest {
string username = 1;
string tlsCert = 2; // tls certificate pem encoded
string ip = 3;
string protocol = 4;
bytes user = 5; // SFTPGo user JSON serialized
}
message CheckUserAndPublicKeyRequest {
string username = 1;
string pubKey = 2;
string ip = 3;
string protocol = 4;
bytes user = 5; // SFTPGo user JSON serialized
}
message CheckUserAndKeyboardInteractiveRequest {
string username = 1;
string ip = 2;
string protocol = 3;
bytes user = 4; // SFTPGo user JSON serialized
}
message KeyboardAuthRequest {
string requestID = 1;
string username = 2;
string password = 3;
string ip = 4;
repeated string answers = 5;
repeated string questions = 6;
int32 step = 7;
}
message KeyboardAuthResponse {
string instructions = 1;
repeated string questions = 2;
repeated bool echos = 3;
int32 auth_result = 4;
int32 check_password = 5;
}
message AuthResponse {
bytes user = 1; // SFTPGo user JSON serialized
}
service Auth {
rpc CheckUserAndPass(CheckUserAndPassRequest) returns (AuthResponse);
rpc CheckUserAndTLSCert(CheckUserAndTLSCertRequest) returns (AuthResponse);
rpc CheckUserAndPublicKey(CheckUserAndPublicKeyRequest) returns (AuthResponse);
rpc CheckUserAndKeyboardInteractive(CheckUserAndKeyboardInteractiveRequest) returns (AuthResponse);
rpc SendKeyboardAuthRequest(KeyboardAuthRequest) returns (KeyboardAuthResponse);
}


@ -1,58 +0,0 @@
// Package eventsearcher defines the implementation for events search plugins.
// Events search plugins allow searching for filesystem and provider events.
package eventsearcher
import (
"context"
"github.com/hashicorp/go-plugin"
"google.golang.org/grpc"
"github.com/drakkan/sftpgo/v2/sdk/plugin/eventsearcher/proto"
)
const (
// PluginName defines the name for an events search plugin
PluginName = "eventsearcher"
)
// Handshake is a common handshake that is shared by plugin and host.
var Handshake = plugin.HandshakeConfig{
ProtocolVersion: 1,
MagicCookieKey: "SFTPGO_PLUGIN_EVENTSEARCHER",
MagicCookieValue: "2b523805-0279-471c-895e-6c0d39002ca4",
}
// PluginMap is the map of plugins we can dispense.
var PluginMap = map[string]plugin.Plugin{
PluginName: &Plugin{},
}
// Searcher defines the interface for events search plugins
type Searcher interface {
SearchFsEvents(startTimestamp, endTimestamp int64, username, ip, sshCmd string, actions, protocols,
instanceIDs, excludeIDs []string, statuses []int32, limit, order int) ([]byte, []string, []string, error)
SearchProviderEvents(startTimestamp, endTimestamp int64, username, ip, objectName string,
limit, order int, actions, objectTypes, instanceIDs, excludeIDs []string) ([]byte, []string, []string, error)
}
// Plugin defines the implementation to serve/connect to an event search plugin
type Plugin struct {
plugin.Plugin
Impl Searcher
}
// GRPCServer defines the GRPC server implementation for this plugin
func (p *Plugin) GRPCServer(broker *plugin.GRPCBroker, s *grpc.Server) error {
proto.RegisterSearcherServer(s, &GRPCServer{
Impl: p.Impl,
})
return nil
}
// GRPCClient defines the GRPC client implementation for this plugin
func (p *Plugin) GRPCClient(ctx context.Context, broker *plugin.GRPCBroker, c *grpc.ClientConn) (interface{}, error) {
return &GRPCClient{
client: proto.NewSearcherClient(c),
}, nil
}
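On the host side, the same Handshake and PluginMap are used to launch the plugin binary and dispense the searcher over gRPC. The sketch below uses the standard hashicorp/go-plugin client API; the plugin binary path is a placeholder and the internal import path is the pre-refactor one shown in this file.
// Sketch of dispensing an events search plugin from the host process.
package main
import (
	"os/exec"
	"github.com/hashicorp/go-plugin"
	"github.com/drakkan/sftpgo/v2/sdk/plugin/eventsearcher"
)
func main() {
	client := plugin.NewClient(&plugin.ClientConfig{
		HandshakeConfig:  eventsearcher.Handshake,
		Plugins:          eventsearcher.PluginMap,
		Cmd:              exec.Command("/usr/local/bin/sftpgo-events-plugin"), // placeholder path
		AllowedProtocols: []plugin.Protocol{plugin.ProtocolGRPC},
	})
	defer client.Kill()
	rpcClient, err := client.Client()
	if err != nil {
		panic(err)
	}
	raw, err := rpcClient.Dispense(eventsearcher.PluginName)
	if err != nil {
		panic(err)
	}
	searcher := raw.(eventsearcher.Searcher)
	_ = searcher // SearchFsEvents/SearchProviderEvents can now be invoked over gRPC
}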


@ -1,103 +0,0 @@
package eventsearcher
import (
"context"
"time"
"github.com/drakkan/sftpgo/v2/sdk/plugin/eventsearcher/proto"
)
const (
rpcTimeout = 20 * time.Second
)
// GRPCClient is an implementation of the Searcher interface that talks over RPC.
type GRPCClient struct {
client proto.SearcherClient
}
// SearchFsEvents implements the Searcher interface
func (c *GRPCClient) SearchFsEvents(startTimestamp, endTimestamp int64, username, ip, sshCmd string, actions,
protocols, instanceIDs, excludeIDs []string, statuses []int32, limit, order int,
) ([]byte, []string, []string, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.SearchFsEvents(ctx, &proto.FsEventsFilter{
StartTimestamp: startTimestamp,
EndTimestamp: endTimestamp,
Actions: actions,
Username: username,
Ip: ip,
SshCmd: sshCmd,
Protocols: protocols,
InstanceIds: instanceIDs,
Statuses: statuses,
Limit: int32(limit),
Order: proto.FsEventsFilter_Order(order),
ExcludeIds: excludeIDs,
})
if err != nil {
return nil, nil, nil, err
}
return resp.Data, resp.SameTsAtStart, resp.SameTsAtEnd, nil
}
// SearchProviderEvents implements the Searcher interface
func (c *GRPCClient) SearchProviderEvents(startTimestamp, endTimestamp int64, username, ip, objectName string,
limit, order int, actions, objectTypes, instanceIDs, excludeIDs []string,
) ([]byte, []string, []string, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.SearchProviderEvents(ctx, &proto.ProviderEventsFilter{
StartTimestamp: startTimestamp,
EndTimestamp: endTimestamp,
Actions: actions,
Username: username,
Ip: ip,
ObjectTypes: objectTypes,
ObjectName: objectName,
InstanceIds: instanceIDs,
Limit: int32(limit),
Order: proto.ProviderEventsFilter_Order(order),
ExcludeIds: excludeIDs,
})
if err != nil {
return nil, nil, nil, err
}
return resp.Data, resp.SameTsAtStart, resp.SameTsAtEnd, nil
}
// GRPCServer defines the gRPC server that GRPCClient talks to.
type GRPCServer struct {
Impl Searcher
}
// SearchFsEvents implements the server side fs events search method
func (s *GRPCServer) SearchFsEvents(ctx context.Context, req *proto.FsEventsFilter) (*proto.SearchResponse, error) {
responseData, sameTsAtStart, sameTsAtEnd, err := s.Impl.SearchFsEvents(req.StartTimestamp,
req.EndTimestamp, req.Username, req.Ip, req.SshCmd, req.Actions, req.Protocols, req.InstanceIds,
req.ExcludeIds, req.Statuses, int(req.Limit), int(req.Order))
return &proto.SearchResponse{
Data: responseData,
SameTsAtStart: sameTsAtStart,
SameTsAtEnd: sameTsAtEnd,
}, err
}
// SearchProviderEvents implements the server side provider events search method
func (s *GRPCServer) SearchProviderEvents(ctx context.Context, req *proto.ProviderEventsFilter) (*proto.SearchResponse, error) {
responseData, sameTsAtStart, sameTsAtEnd, err := s.Impl.SearchProviderEvents(req.StartTimestamp,
req.EndTimestamp, req.Username, req.Ip, req.ObjectName, int(req.Limit),
int(req.Order), req.Actions, req.ObjectTypes, req.InstanceIds, req.ExcludeIds)
return &proto.SearchResponse{
Data: responseData,
SameTsAtStart: sameTsAtStart,
SameTsAtEnd: sameTsAtEnd,
}, err
}


@ -1,737 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.17.3
// source: eventsearcher/proto/search.proto
package proto
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type FsEventsFilter_Order int32
const (
FsEventsFilter_DESC FsEventsFilter_Order = 0
FsEventsFilter_ASC FsEventsFilter_Order = 1
)
// Enum value maps for FsEventsFilter_Order.
var (
FsEventsFilter_Order_name = map[int32]string{
0: "DESC",
1: "ASC",
}
FsEventsFilter_Order_value = map[string]int32{
"DESC": 0,
"ASC": 1,
}
)
func (x FsEventsFilter_Order) Enum() *FsEventsFilter_Order {
p := new(FsEventsFilter_Order)
*p = x
return p
}
func (x FsEventsFilter_Order) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (FsEventsFilter_Order) Descriptor() protoreflect.EnumDescriptor {
return file_eventsearcher_proto_search_proto_enumTypes[0].Descriptor()
}
func (FsEventsFilter_Order) Type() protoreflect.EnumType {
return &file_eventsearcher_proto_search_proto_enumTypes[0]
}
func (x FsEventsFilter_Order) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use FsEventsFilter_Order.Descriptor instead.
func (FsEventsFilter_Order) EnumDescriptor() ([]byte, []int) {
return file_eventsearcher_proto_search_proto_rawDescGZIP(), []int{0, 0}
}
type ProviderEventsFilter_Order int32
const (
ProviderEventsFilter_DESC ProviderEventsFilter_Order = 0
ProviderEventsFilter_ASC ProviderEventsFilter_Order = 1
)
// Enum value maps for ProviderEventsFilter_Order.
var (
ProviderEventsFilter_Order_name = map[int32]string{
0: "DESC",
1: "ASC",
}
ProviderEventsFilter_Order_value = map[string]int32{
"DESC": 0,
"ASC": 1,
}
)
func (x ProviderEventsFilter_Order) Enum() *ProviderEventsFilter_Order {
p := new(ProviderEventsFilter_Order)
*p = x
return p
}
func (x ProviderEventsFilter_Order) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (ProviderEventsFilter_Order) Descriptor() protoreflect.EnumDescriptor {
return file_eventsearcher_proto_search_proto_enumTypes[1].Descriptor()
}
func (ProviderEventsFilter_Order) Type() protoreflect.EnumType {
return &file_eventsearcher_proto_search_proto_enumTypes[1]
}
func (x ProviderEventsFilter_Order) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use ProviderEventsFilter_Order.Descriptor instead.
func (ProviderEventsFilter_Order) EnumDescriptor() ([]byte, []int) {
return file_eventsearcher_proto_search_proto_rawDescGZIP(), []int{1, 0}
}
type FsEventsFilter struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
StartTimestamp int64 `protobuf:"varint,1,opt,name=start_timestamp,json=startTimestamp,proto3" json:"start_timestamp,omitempty"`
EndTimestamp int64 `protobuf:"varint,2,opt,name=end_timestamp,json=endTimestamp,proto3" json:"end_timestamp,omitempty"`
Actions []string `protobuf:"bytes,3,rep,name=actions,proto3" json:"actions,omitempty"`
Username string `protobuf:"bytes,4,opt,name=username,proto3" json:"username,omitempty"`
Ip string `protobuf:"bytes,5,opt,name=ip,proto3" json:"ip,omitempty"`
SshCmd string `protobuf:"bytes,6,opt,name=ssh_cmd,json=sshCmd,proto3" json:"ssh_cmd,omitempty"`
Protocols []string `protobuf:"bytes,7,rep,name=protocols,proto3" json:"protocols,omitempty"`
Statuses []int32 `protobuf:"varint,8,rep,packed,name=statuses,proto3" json:"statuses,omitempty"`
InstanceIds []string `protobuf:"bytes,9,rep,name=instance_ids,json=instanceIds,proto3" json:"instance_ids,omitempty"`
Limit int32 `protobuf:"varint,10,opt,name=limit,proto3" json:"limit,omitempty"`
ExcludeIds []string `protobuf:"bytes,11,rep,name=exclude_ids,json=excludeIds,proto3" json:"exclude_ids,omitempty"`
Order FsEventsFilter_Order `protobuf:"varint,12,opt,name=order,proto3,enum=proto.FsEventsFilter_Order" json:"order,omitempty"`
}
func (x *FsEventsFilter) Reset() {
*x = FsEventsFilter{}
if protoimpl.UnsafeEnabled {
mi := &file_eventsearcher_proto_search_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *FsEventsFilter) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*FsEventsFilter) ProtoMessage() {}
func (x *FsEventsFilter) ProtoReflect() protoreflect.Message {
mi := &file_eventsearcher_proto_search_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use FsEventsFilter.ProtoReflect.Descriptor instead.
func (*FsEventsFilter) Descriptor() ([]byte, []int) {
return file_eventsearcher_proto_search_proto_rawDescGZIP(), []int{0}
}
func (x *FsEventsFilter) GetStartTimestamp() int64 {
if x != nil {
return x.StartTimestamp
}
return 0
}
func (x *FsEventsFilter) GetEndTimestamp() int64 {
if x != nil {
return x.EndTimestamp
}
return 0
}
func (x *FsEventsFilter) GetActions() []string {
if x != nil {
return x.Actions
}
return nil
}
func (x *FsEventsFilter) GetUsername() string {
if x != nil {
return x.Username
}
return ""
}
func (x *FsEventsFilter) GetIp() string {
if x != nil {
return x.Ip
}
return ""
}
func (x *FsEventsFilter) GetSshCmd() string {
if x != nil {
return x.SshCmd
}
return ""
}
func (x *FsEventsFilter) GetProtocols() []string {
if x != nil {
return x.Protocols
}
return nil
}
func (x *FsEventsFilter) GetStatuses() []int32 {
if x != nil {
return x.Statuses
}
return nil
}
func (x *FsEventsFilter) GetInstanceIds() []string {
if x != nil {
return x.InstanceIds
}
return nil
}
func (x *FsEventsFilter) GetLimit() int32 {
if x != nil {
return x.Limit
}
return 0
}
func (x *FsEventsFilter) GetExcludeIds() []string {
if x != nil {
return x.ExcludeIds
}
return nil
}
func (x *FsEventsFilter) GetOrder() FsEventsFilter_Order {
if x != nil {
return x.Order
}
return FsEventsFilter_DESC
}
type ProviderEventsFilter struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
StartTimestamp int64 `protobuf:"varint,1,opt,name=start_timestamp,json=startTimestamp,proto3" json:"start_timestamp,omitempty"`
EndTimestamp int64 `protobuf:"varint,2,opt,name=end_timestamp,json=endTimestamp,proto3" json:"end_timestamp,omitempty"`
Actions []string `protobuf:"bytes,3,rep,name=actions,proto3" json:"actions,omitempty"`
Username string `protobuf:"bytes,4,opt,name=username,proto3" json:"username,omitempty"`
Ip string `protobuf:"bytes,5,opt,name=ip,proto3" json:"ip,omitempty"`
ObjectTypes []string `protobuf:"bytes,6,rep,name=object_types,json=objectTypes,proto3" json:"object_types,omitempty"`
ObjectName string `protobuf:"bytes,7,opt,name=object_name,json=objectName,proto3" json:"object_name,omitempty"`
InstanceIds []string `protobuf:"bytes,8,rep,name=instance_ids,json=instanceIds,proto3" json:"instance_ids,omitempty"`
Limit int32 `protobuf:"varint,9,opt,name=limit,proto3" json:"limit,omitempty"`
Order ProviderEventsFilter_Order `protobuf:"varint,10,opt,name=order,proto3,enum=proto.ProviderEventsFilter_Order" json:"order,omitempty"`
ExcludeIds []string `protobuf:"bytes,11,rep,name=exclude_ids,json=excludeIds,proto3" json:"exclude_ids,omitempty"`
}
func (x *ProviderEventsFilter) Reset() {
*x = ProviderEventsFilter{}
if protoimpl.UnsafeEnabled {
mi := &file_eventsearcher_proto_search_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ProviderEventsFilter) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ProviderEventsFilter) ProtoMessage() {}
func (x *ProviderEventsFilter) ProtoReflect() protoreflect.Message {
mi := &file_eventsearcher_proto_search_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ProviderEventsFilter.ProtoReflect.Descriptor instead.
func (*ProviderEventsFilter) Descriptor() ([]byte, []int) {
return file_eventsearcher_proto_search_proto_rawDescGZIP(), []int{1}
}
func (x *ProviderEventsFilter) GetStartTimestamp() int64 {
if x != nil {
return x.StartTimestamp
}
return 0
}
func (x *ProviderEventsFilter) GetEndTimestamp() int64 {
if x != nil {
return x.EndTimestamp
}
return 0
}
func (x *ProviderEventsFilter) GetActions() []string {
if x != nil {
return x.Actions
}
return nil
}
func (x *ProviderEventsFilter) GetUsername() string {
if x != nil {
return x.Username
}
return ""
}
func (x *ProviderEventsFilter) GetIp() string {
if x != nil {
return x.Ip
}
return ""
}
func (x *ProviderEventsFilter) GetObjectTypes() []string {
if x != nil {
return x.ObjectTypes
}
return nil
}
func (x *ProviderEventsFilter) GetObjectName() string {
if x != nil {
return x.ObjectName
}
return ""
}
func (x *ProviderEventsFilter) GetInstanceIds() []string {
if x != nil {
return x.InstanceIds
}
return nil
}
func (x *ProviderEventsFilter) GetLimit() int32 {
if x != nil {
return x.Limit
}
return 0
}
func (x *ProviderEventsFilter) GetOrder() ProviderEventsFilter_Order {
if x != nil {
return x.Order
}
return ProviderEventsFilter_DESC
}
func (x *ProviderEventsFilter) GetExcludeIds() []string {
if x != nil {
return x.ExcludeIds
}
return nil
}
type SearchResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Data []byte `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"` // JSON serialized response to return
SameTsAtStart []string `protobuf:"bytes,2,rep,name=same_ts_at_start,json=sameTsAtStart,proto3" json:"same_ts_at_start,omitempty"`
SameTsAtEnd []string `protobuf:"bytes,3,rep,name=same_ts_at_end,json=sameTsAtEnd,proto3" json:"same_ts_at_end,omitempty"`
}
func (x *SearchResponse) Reset() {
*x = SearchResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_eventsearcher_proto_search_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SearchResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SearchResponse) ProtoMessage() {}
func (x *SearchResponse) ProtoReflect() protoreflect.Message {
mi := &file_eventsearcher_proto_search_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SearchResponse.ProtoReflect.Descriptor instead.
func (*SearchResponse) Descriptor() ([]byte, []int) {
return file_eventsearcher_proto_search_proto_rawDescGZIP(), []int{2}
}
func (x *SearchResponse) GetData() []byte {
if x != nil {
return x.Data
}
return nil
}
func (x *SearchResponse) GetSameTsAtStart() []string {
if x != nil {
return x.SameTsAtStart
}
return nil
}
func (x *SearchResponse) GetSameTsAtEnd() []string {
if x != nil {
return x.SameTsAtEnd
}
return nil
}
var File_eventsearcher_proto_search_proto protoreflect.FileDescriptor
var file_eventsearcher_proto_search_proto_rawDesc = []byte{
0x0a, 0x20, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x65, 0x61, 0x72, 0x63, 0x68, 0x65, 0x72, 0x2f,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x73, 0x65, 0x61, 0x72, 0x63, 0x68, 0x2e, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x12, 0x05, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xa0, 0x03, 0x0a, 0x0e, 0x46, 0x73,
0x45, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x12, 0x27, 0x0a, 0x0f,
0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18,
0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0e, 0x73, 0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65,
0x73, 0x74, 0x61, 0x6d, 0x70, 0x12, 0x23, 0x0a, 0x0d, 0x65, 0x6e, 0x64, 0x5f, 0x74, 0x69, 0x6d,
0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18, 0x02, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0c, 0x65, 0x6e,
0x64, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x12, 0x18, 0x0a, 0x07, 0x61, 0x63,
0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x09, 0x52, 0x07, 0x61, 0x63, 0x74,
0x69, 0x6f, 0x6e, 0x73, 0x12, 0x1a, 0x0a, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d, 0x65,
0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d, 0x65,
0x12, 0x0e, 0x0a, 0x02, 0x69, 0x70, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x70,
0x12, 0x17, 0x0a, 0x07, 0x73, 0x73, 0x68, 0x5f, 0x63, 0x6d, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28,
0x09, 0x52, 0x06, 0x73, 0x73, 0x68, 0x43, 0x6d, 0x64, 0x12, 0x1c, 0x0a, 0x09, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x63, 0x6f, 0x6c, 0x73, 0x18, 0x07, 0x20, 0x03, 0x28, 0x09, 0x52, 0x09, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x63, 0x6f, 0x6c, 0x73, 0x12, 0x1a, 0x0a, 0x08, 0x73, 0x74, 0x61, 0x74, 0x75,
0x73, 0x65, 0x73, 0x18, 0x08, 0x20, 0x03, 0x28, 0x05, 0x52, 0x08, 0x73, 0x74, 0x61, 0x74, 0x75,
0x73, 0x65, 0x73, 0x12, 0x21, 0x0a, 0x0c, 0x69, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x5f,
0x69, 0x64, 0x73, 0x18, 0x09, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0b, 0x69, 0x6e, 0x73, 0x74, 0x61,
0x6e, 0x63, 0x65, 0x49, 0x64, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x18,
0x0a, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x12, 0x1f, 0x0a, 0x0b,
0x65, 0x78, 0x63, 0x6c, 0x75, 0x64, 0x65, 0x5f, 0x69, 0x64, 0x73, 0x18, 0x0b, 0x20, 0x03, 0x28,
0x09, 0x52, 0x0a, 0x65, 0x78, 0x63, 0x6c, 0x75, 0x64, 0x65, 0x49, 0x64, 0x73, 0x12, 0x31, 0x0a,
0x05, 0x6f, 0x72, 0x64, 0x65, 0x72, 0x18, 0x0c, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x1b, 0x2e, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x46, 0x73, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x46, 0x69, 0x6c,
0x74, 0x65, 0x72, 0x2e, 0x4f, 0x72, 0x64, 0x65, 0x72, 0x52, 0x05, 0x6f, 0x72, 0x64, 0x65, 0x72,
0x22, 0x1a, 0x0a, 0x05, 0x4f, 0x72, 0x64, 0x65, 0x72, 0x12, 0x08, 0x0a, 0x04, 0x44, 0x45, 0x53,
0x43, 0x10, 0x00, 0x12, 0x07, 0x0a, 0x03, 0x41, 0x53, 0x43, 0x10, 0x01, 0x22, 0x9d, 0x03, 0x0a,
0x14, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x46,
0x69, 0x6c, 0x74, 0x65, 0x72, 0x12, 0x27, 0x0a, 0x0f, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74,
0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0e,
0x73, 0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x12, 0x23,
0x0a, 0x0d, 0x65, 0x6e, 0x64, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18,
0x02, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0c, 0x65, 0x6e, 0x64, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74,
0x61, 0x6d, 0x70, 0x12, 0x18, 0x0a, 0x07, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03,
0x20, 0x03, 0x28, 0x09, 0x52, 0x07, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x1a, 0x0a,
0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52,
0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x70, 0x18,
0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x70, 0x12, 0x21, 0x0a, 0x0c, 0x6f, 0x62, 0x6a,
0x65, 0x63, 0x74, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x18, 0x06, 0x20, 0x03, 0x28, 0x09, 0x52,
0x0b, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x54, 0x79, 0x70, 0x65, 0x73, 0x12, 0x1f, 0x0a, 0x0b,
0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28,
0x09, 0x52, 0x0a, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x21, 0x0a,
0x0c, 0x69, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x5f, 0x69, 0x64, 0x73, 0x18, 0x08, 0x20,
0x03, 0x28, 0x09, 0x52, 0x0b, 0x69, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x49, 0x64, 0x73,
0x12, 0x14, 0x0a, 0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x18, 0x09, 0x20, 0x01, 0x28, 0x05, 0x52,
0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x12, 0x37, 0x0a, 0x05, 0x6f, 0x72, 0x64, 0x65, 0x72, 0x18,
0x0a, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x21, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x50, 0x72,
0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x46, 0x69, 0x6c, 0x74,
0x65, 0x72, 0x2e, 0x4f, 0x72, 0x64, 0x65, 0x72, 0x52, 0x05, 0x6f, 0x72, 0x64, 0x65, 0x72, 0x12,
0x1f, 0x0a, 0x0b, 0x65, 0x78, 0x63, 0x6c, 0x75, 0x64, 0x65, 0x5f, 0x69, 0x64, 0x73, 0x18, 0x0b,
0x20, 0x03, 0x28, 0x09, 0x52, 0x0a, 0x65, 0x78, 0x63, 0x6c, 0x75, 0x64, 0x65, 0x49, 0x64, 0x73,
0x22, 0x1a, 0x0a, 0x05, 0x4f, 0x72, 0x64, 0x65, 0x72, 0x12, 0x08, 0x0a, 0x04, 0x44, 0x45, 0x53,
0x43, 0x10, 0x00, 0x12, 0x07, 0x0a, 0x03, 0x41, 0x53, 0x43, 0x10, 0x01, 0x22, 0x72, 0x0a, 0x0e,
0x53, 0x65, 0x61, 0x72, 0x63, 0x68, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x12,
0x0a, 0x04, 0x64, 0x61, 0x74, 0x61, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x04, 0x64, 0x61,
0x74, 0x61, 0x12, 0x27, 0x0a, 0x10, 0x73, 0x61, 0x6d, 0x65, 0x5f, 0x74, 0x73, 0x5f, 0x61, 0x74,
0x5f, 0x73, 0x74, 0x61, 0x72, 0x74, 0x18, 0x02, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0d, 0x73, 0x61,
0x6d, 0x65, 0x54, 0x73, 0x41, 0x74, 0x53, 0x74, 0x61, 0x72, 0x74, 0x12, 0x23, 0x0a, 0x0e, 0x73,
0x61, 0x6d, 0x65, 0x5f, 0x74, 0x73, 0x5f, 0x61, 0x74, 0x5f, 0x65, 0x6e, 0x64, 0x18, 0x03, 0x20,
0x03, 0x28, 0x09, 0x52, 0x0b, 0x73, 0x61, 0x6d, 0x65, 0x54, 0x73, 0x41, 0x74, 0x45, 0x6e, 0x64,
0x32, 0x96, 0x01, 0x0a, 0x08, 0x53, 0x65, 0x61, 0x72, 0x63, 0x68, 0x65, 0x72, 0x12, 0x3e, 0x0a,
0x0e, 0x53, 0x65, 0x61, 0x72, 0x63, 0x68, 0x46, 0x73, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x12,
0x15, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x46, 0x73, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x73,
0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x1a, 0x15, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x53,
0x65, 0x61, 0x72, 0x63, 0x68, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x4a, 0x0a,
0x14, 0x53, 0x65, 0x61, 0x72, 0x63, 0x68, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x45,
0x76, 0x65, 0x6e, 0x74, 0x73, 0x12, 0x1b, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x50, 0x72,
0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x46, 0x69, 0x6c, 0x74,
0x65, 0x72, 0x1a, 0x15, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x53, 0x65, 0x61, 0x72, 0x63,
0x68, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x42, 0x20, 0x5a, 0x1e, 0x73, 0x64, 0x6b,
0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x65, 0x61,
0x72, 0x63, 0x68, 0x65, 0x72, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x33,
}
var (
file_eventsearcher_proto_search_proto_rawDescOnce sync.Once
file_eventsearcher_proto_search_proto_rawDescData = file_eventsearcher_proto_search_proto_rawDesc
)
func file_eventsearcher_proto_search_proto_rawDescGZIP() []byte {
file_eventsearcher_proto_search_proto_rawDescOnce.Do(func() {
file_eventsearcher_proto_search_proto_rawDescData = protoimpl.X.CompressGZIP(file_eventsearcher_proto_search_proto_rawDescData)
})
return file_eventsearcher_proto_search_proto_rawDescData
}
var file_eventsearcher_proto_search_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
var file_eventsearcher_proto_search_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
var file_eventsearcher_proto_search_proto_goTypes = []interface{}{
(FsEventsFilter_Order)(0), // 0: proto.FsEventsFilter.Order
(ProviderEventsFilter_Order)(0), // 1: proto.ProviderEventsFilter.Order
(*FsEventsFilter)(nil), // 2: proto.FsEventsFilter
(*ProviderEventsFilter)(nil), // 3: proto.ProviderEventsFilter
(*SearchResponse)(nil), // 4: proto.SearchResponse
}
var file_eventsearcher_proto_search_proto_depIdxs = []int32{
0, // 0: proto.FsEventsFilter.order:type_name -> proto.FsEventsFilter.Order
1, // 1: proto.ProviderEventsFilter.order:type_name -> proto.ProviderEventsFilter.Order
2, // 2: proto.Searcher.SearchFsEvents:input_type -> proto.FsEventsFilter
3, // 3: proto.Searcher.SearchProviderEvents:input_type -> proto.ProviderEventsFilter
4, // 4: proto.Searcher.SearchFsEvents:output_type -> proto.SearchResponse
4, // 5: proto.Searcher.SearchProviderEvents:output_type -> proto.SearchResponse
4, // [4:6] is the sub-list for method output_type
2, // [2:4] is the sub-list for method input_type
2, // [2:2] is the sub-list for extension type_name
2, // [2:2] is the sub-list for extension extendee
0, // [0:2] is the sub-list for field type_name
}
func init() { file_eventsearcher_proto_search_proto_init() }
func file_eventsearcher_proto_search_proto_init() {
if File_eventsearcher_proto_search_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_eventsearcher_proto_search_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*FsEventsFilter); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_eventsearcher_proto_search_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ProviderEventsFilter); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_eventsearcher_proto_search_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SearchResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_eventsearcher_proto_search_proto_rawDesc,
NumEnums: 2,
NumMessages: 3,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_eventsearcher_proto_search_proto_goTypes,
DependencyIndexes: file_eventsearcher_proto_search_proto_depIdxs,
EnumInfos: file_eventsearcher_proto_search_proto_enumTypes,
MessageInfos: file_eventsearcher_proto_search_proto_msgTypes,
}.Build()
File_eventsearcher_proto_search_proto = out.File
file_eventsearcher_proto_search_proto_rawDesc = nil
file_eventsearcher_proto_search_proto_goTypes = nil
file_eventsearcher_proto_search_proto_depIdxs = nil
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConnInterface
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion6
// SearcherClient is the client API for Searcher service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type SearcherClient interface {
SearchFsEvents(ctx context.Context, in *FsEventsFilter, opts ...grpc.CallOption) (*SearchResponse, error)
SearchProviderEvents(ctx context.Context, in *ProviderEventsFilter, opts ...grpc.CallOption) (*SearchResponse, error)
}
type searcherClient struct {
cc grpc.ClientConnInterface
}
func NewSearcherClient(cc grpc.ClientConnInterface) SearcherClient {
return &searcherClient{cc}
}
func (c *searcherClient) SearchFsEvents(ctx context.Context, in *FsEventsFilter, opts ...grpc.CallOption) (*SearchResponse, error) {
out := new(SearchResponse)
err := c.cc.Invoke(ctx, "/proto.Searcher/SearchFsEvents", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *searcherClient) SearchProviderEvents(ctx context.Context, in *ProviderEventsFilter, opts ...grpc.CallOption) (*SearchResponse, error) {
out := new(SearchResponse)
err := c.cc.Invoke(ctx, "/proto.Searcher/SearchProviderEvents", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// SearcherServer is the server API for Searcher service.
type SearcherServer interface {
SearchFsEvents(context.Context, *FsEventsFilter) (*SearchResponse, error)
SearchProviderEvents(context.Context, *ProviderEventsFilter) (*SearchResponse, error)
}
// UnimplementedSearcherServer can be embedded to have forward compatible implementations.
type UnimplementedSearcherServer struct {
}
func (*UnimplementedSearcherServer) SearchFsEvents(context.Context, *FsEventsFilter) (*SearchResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method SearchFsEvents not implemented")
}
func (*UnimplementedSearcherServer) SearchProviderEvents(context.Context, *ProviderEventsFilter) (*SearchResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method SearchProviderEvents not implemented")
}
func RegisterSearcherServer(s *grpc.Server, srv SearcherServer) {
s.RegisterService(&_Searcher_serviceDesc, srv)
}
func _Searcher_SearchFsEvents_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(FsEventsFilter)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SearcherServer).SearchFsEvents(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.Searcher/SearchFsEvents",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SearcherServer).SearchFsEvents(ctx, req.(*FsEventsFilter))
}
return interceptor(ctx, in, info, handler)
}
func _Searcher_SearchProviderEvents_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ProviderEventsFilter)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SearcherServer).SearchProviderEvents(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.Searcher/SearchProviderEvents",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SearcherServer).SearchProviderEvents(ctx, req.(*ProviderEventsFilter))
}
return interceptor(ctx, in, info, handler)
}
var _Searcher_serviceDesc = grpc.ServiceDesc{
ServiceName: "proto.Searcher",
HandlerType: (*SearcherServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "SearchFsEvents",
Handler: _Searcher_SearchFsEvents_Handler,
},
{
MethodName: "SearchProviderEvents",
Handler: _Searcher_SearchProviderEvents_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "eventsearcher/proto/search.proto",
}
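
The SearcherServer interface above is what an events search plugin ultimately implements. The sketch below is a minimal, illustrative server: it assumes the generated package is importable as github.com/drakkan/sftpgo/v2/sdk/plugin/eventsearcher/proto and listens on a hard-coded address; a real plugin is wired through hashicorp go-plugin rather than a bare gRPC listener.

package main

import (
	"context"
	"net"

	"google.golang.org/grpc"

	"github.com/drakkan/sftpgo/v2/sdk/plugin/eventsearcher/proto"
)

// searcher is a toy SearcherServer implementation that never finds events:
// it returns an empty JSON array for every query.
type searcher struct {
	proto.UnimplementedSearcherServer
}

func (s *searcher) SearchFsEvents(ctx context.Context, in *proto.FsEventsFilter) (*proto.SearchResponse, error) {
	return &proto.SearchResponse{Data: []byte("[]")}, nil
}

func (s *searcher) SearchProviderEvents(ctx context.Context, in *proto.ProviderEventsFilter) (*proto.SearchResponse, error) {
	return &proto.SearchResponse{Data: []byte("[]")}, nil
}

func main() {
	lis, err := net.Listen("tcp", "127.0.0.1:50051")
	if err != nil {
		panic(err)
	}
	srv := grpc.NewServer()
	proto.RegisterSearcherServer(srv, &searcher{})
	_ = srv.Serve(lis)
}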


@ -1,52 +0,0 @@
syntax = "proto3";
package proto;
option go_package = "sdk/plugin/eventsearcher/proto";
message FsEventsFilter {
int64 start_timestamp = 1;
int64 end_timestamp = 2;
repeated string actions = 3;
string username = 4;
string ip = 5;
string ssh_cmd = 6;
repeated string protocols = 7;
repeated int32 statuses = 8;
repeated string instance_ids = 9;
int32 limit = 10;
repeated string exclude_ids = 11;
enum Order {
DESC = 0;
ASC = 1;
}
Order order = 12;
}
message ProviderEventsFilter {
int64 start_timestamp = 1;
int64 end_timestamp = 2;
repeated string actions = 3;
string username = 4;
string ip = 5;
repeated string object_types = 6;
string object_name = 7;
repeated string instance_ids = 8;
int32 limit = 9;
enum Order {
DESC = 0;
ASC = 1;
}
Order order = 10;
repeated string exclude_ids = 11;
}
message SearchResponse {
bytes data = 1; // JSON serialized response to return
repeated string same_ts_at_start = 2;
repeated string same_ts_at_end = 3;
}
service Searcher {
rpc SearchFsEvents(FsEventsFilter) returns (SearchResponse);
rpc SearchProviderEvents(ProviderEventsFilter) returns (SearchResponse);
}
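
For the host side, the generated SearcherClient wraps the two RPCs above. A usage sketch, dialing a local endpoint in plaintext purely for illustration (in SFTPGo the connection is established by hashicorp go-plugin, and the timestamp unit used here is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"

	"github.com/drakkan/sftpgo/v2/sdk/plugin/eventsearcher/proto"
)

func main() {
	conn, err := grpc.Dial("127.0.0.1:50051", grpc.WithInsecure()) // plaintext, sketch only
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := proto.NewSearcherClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
	defer cancel()

	now := time.Now()
	resp, err := client.SearchFsEvents(ctx, &proto.FsEventsFilter{
		StartTimestamp: now.Add(-24 * time.Hour).UnixNano(), // assumed nanosecond timestamps
		EndTimestamp:   now.UnixNano(),
		Username:       "user1",
		Limit:          100,
		Order:          proto.FsEventsFilter_DESC,
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", resp.Data) // JSON serialized events, see SearchResponse.data
}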


@ -1,84 +0,0 @@
package kms
import (
"context"
"time"
"github.com/drakkan/sftpgo/v2/sdk/plugin/kms/proto"
)
const (
rpcTimeout = 20 * time.Second
)
// GRPCClient is an implementation of the KMS interface that talks over RPC.
type GRPCClient struct {
client proto.KMSClient
}
// Encrypt implements the KMSService interface
func (c *GRPCClient) Encrypt(payload, additionalData, URL, masterKey string) (string, string, int32, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.Encrypt(ctx, &proto.EncryptRequest{
Payload: payload,
AdditionalData: additionalData,
Url: URL,
MasterKey: masterKey,
})
if err != nil {
return "", "", 0, err
}
return resp.Payload, resp.Key, resp.Mode, nil
}
// Decrypt implements the KMSService interface
func (c *GRPCClient) Decrypt(payload, key, additionalData string, mode int, URL, masterKey string) (string, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.Decrypt(ctx, &proto.DecryptRequest{
Payload: payload,
Key: key,
AdditionalData: additionalData,
Mode: int32(mode),
Url: URL,
MasterKey: masterKey,
})
if err != nil {
return "", err
}
return resp.Payload, nil
}
// GRPCServer defines the gRPC server that GRPCClient talks to.
type GRPCServer struct {
Impl Service
}
// Encrypt implements the server side encrypt method
func (s *GRPCServer) Encrypt(ctx context.Context, req *proto.EncryptRequest) (*proto.EncryptResponse, error) {
payload, key, mode, err := s.Impl.Encrypt(req.Payload, req.AdditionalData, req.Url, req.MasterKey)
if err != nil {
return nil, err
}
return &proto.EncryptResponse{
Payload: payload,
Key: key,
Mode: mode,
}, nil
}
// Decrypt implements the server side decrypt method
func (s *GRPCServer) Decrypt(ctx context.Context, req *proto.DecryptRequest) (*proto.DecryptResponse, error) {
payload, err := s.Impl.Decrypt(req.Payload, req.Key, req.AdditionalData, int(req.Mode), req.Url, req.MasterKey)
if err != nil {
return nil, err
}
return &proto.DecryptResponse{
Payload: payload,
}, nil
}
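
GRPCServer.Impl above expects a concrete Service implementation (the interface is defined in the next file). A deliberately toy example that only base64-encodes the payload, purely to illustrate the method signatures; it performs no real key management or encryption. The main function wiring it into go-plugin follows after the next file.

package main

import "encoding/base64"

// base64KMS is a toy kms.Service implementation: Encrypt returns the payload
// base64-encoded with an empty data key and mode 0, Decrypt reverses it.
type base64KMS struct{}

func (k *base64KMS) Encrypt(payload, additionalData, URL, masterKey string) (string, string, int32, error) {
	// returned values: encrypted payload, (encrypted) data key, encryption mode
	return base64.StdEncoding.EncodeToString([]byte(payload)), "", 0, nil
}

func (k *base64KMS) Decrypt(payload, key, additionalData string, mode int, URL, masterKey string) (string, error) {
	decoded, err := base64.StdEncoding.DecodeString(payload)
	if err != nil {
		return "", err
	}
	return string(decoded), nil
}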


@ -1,56 +0,0 @@
// Package kms defines the implementation for kms plugins.
// KMS plugins allow you to encrypt/decrypt sensitive data.
package kms
import (
"context"
"github.com/hashicorp/go-plugin"
"google.golang.org/grpc"
"github.com/drakkan/sftpgo/v2/sdk/plugin/kms/proto"
)
const (
// PluginName defines the name for a kms plugin
PluginName = "kms"
)
// Handshake is a common handshake that is shared by plugin and host.
var Handshake = plugin.HandshakeConfig{
ProtocolVersion: 1,
MagicCookieKey: "SFTPGO_PLUGIN_KMS",
MagicCookieValue: "223e3571-7ed2-4b96-b4b3-c7eb87d7ca1d",
}
// PluginMap is the map of plugins we can dispense.
var PluginMap = map[string]plugin.Plugin{
PluginName: &Plugin{},
}
// Service defines the interface for kms plugins
type Service interface {
Encrypt(payload, additionalData, URL, masterKey string) (string, string, int32, error)
Decrypt(payload, key, additionalData string, mode int, URL, masterKey string) (string, error)
}
// Plugin defines the implementation to serve/connect to a kms plugin
type Plugin struct {
plugin.Plugin
Impl Service
}
// GRPCServer defines the GRPC server implementation for this plugin
func (p *Plugin) GRPCServer(broker *plugin.GRPCBroker, s *grpc.Server) error {
proto.RegisterKMSServer(s, &GRPCServer{
Impl: p.Impl,
})
return nil
}
// GRPCClient defines the GRPC client implementation for this plugin
func (p *Plugin) GRPCClient(ctx context.Context, broker *plugin.GRPCBroker, c *grpc.ClientConn) (interface{}, error) {
return &GRPCClient{
client: proto.NewKMSClient(c),
}, nil
}
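
A plugin binary built on this package hands its implementation to go-plugin's Serve. The sketch below is a second file completing the toy example started above (same hypothetical package main), assuming the import path github.com/drakkan/sftpgo/v2/sdk/plugin/kms:

package main

import (
	"github.com/hashicorp/go-plugin"

	"github.com/drakkan/sftpgo/v2/sdk/plugin/kms"
)

func main() {
	// Serve base64KMS over go-plugin's gRPC transport. The handshake config
	// must match the one the host uses, so kms.Handshake is reused here.
	plugin.Serve(&plugin.ServeConfig{
		HandshakeConfig: kms.Handshake,
		Plugins: map[string]plugin.Plugin{
			kms.PluginName: &kms.Plugin{Impl: &base64KMS{}},
		},
		GRPCServer: plugin.DefaultGRPCServer,
	})
}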


@ -1,559 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.17.3
// source: kms/proto/kms.proto
package proto
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type EncryptRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Payload string `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"`
AdditionalData string `protobuf:"bytes,2,opt,name=additional_data,json=additionalData,proto3" json:"additional_data,omitempty"`
Url string `protobuf:"bytes,3,opt,name=url,proto3" json:"url,omitempty"`
MasterKey string `protobuf:"bytes,4,opt,name=master_key,json=masterKey,proto3" json:"master_key,omitempty"`
}
func (x *EncryptRequest) Reset() {
*x = EncryptRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_kms_proto_kms_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *EncryptRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*EncryptRequest) ProtoMessage() {}
func (x *EncryptRequest) ProtoReflect() protoreflect.Message {
mi := &file_kms_proto_kms_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use EncryptRequest.ProtoReflect.Descriptor instead.
func (*EncryptRequest) Descriptor() ([]byte, []int) {
return file_kms_proto_kms_proto_rawDescGZIP(), []int{0}
}
func (x *EncryptRequest) GetPayload() string {
if x != nil {
return x.Payload
}
return ""
}
func (x *EncryptRequest) GetAdditionalData() string {
if x != nil {
return x.AdditionalData
}
return ""
}
func (x *EncryptRequest) GetUrl() string {
if x != nil {
return x.Url
}
return ""
}
func (x *EncryptRequest) GetMasterKey() string {
if x != nil {
return x.MasterKey
}
return ""
}
type EncryptResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Payload string `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"`
Key string `protobuf:"bytes,2,opt,name=key,proto3" json:"key,omitempty"`
Mode int32 `protobuf:"varint,3,opt,name=mode,proto3" json:"mode,omitempty"`
}
func (x *EncryptResponse) Reset() {
*x = EncryptResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_kms_proto_kms_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *EncryptResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*EncryptResponse) ProtoMessage() {}
func (x *EncryptResponse) ProtoReflect() protoreflect.Message {
mi := &file_kms_proto_kms_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use EncryptResponse.ProtoReflect.Descriptor instead.
func (*EncryptResponse) Descriptor() ([]byte, []int) {
return file_kms_proto_kms_proto_rawDescGZIP(), []int{1}
}
func (x *EncryptResponse) GetPayload() string {
if x != nil {
return x.Payload
}
return ""
}
func (x *EncryptResponse) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
func (x *EncryptResponse) GetMode() int32 {
if x != nil {
return x.Mode
}
return 0
}
type DecryptRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Payload string `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"`
Key string `protobuf:"bytes,2,opt,name=key,proto3" json:"key,omitempty"`
AdditionalData string `protobuf:"bytes,3,opt,name=additional_data,json=additionalData,proto3" json:"additional_data,omitempty"`
Mode int32 `protobuf:"varint,4,opt,name=mode,proto3" json:"mode,omitempty"`
Url string `protobuf:"bytes,5,opt,name=url,proto3" json:"url,omitempty"`
MasterKey string `protobuf:"bytes,6,opt,name=master_key,json=masterKey,proto3" json:"master_key,omitempty"`
}
func (x *DecryptRequest) Reset() {
*x = DecryptRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_kms_proto_kms_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DecryptRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DecryptRequest) ProtoMessage() {}
func (x *DecryptRequest) ProtoReflect() protoreflect.Message {
mi := &file_kms_proto_kms_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DecryptRequest.ProtoReflect.Descriptor instead.
func (*DecryptRequest) Descriptor() ([]byte, []int) {
return file_kms_proto_kms_proto_rawDescGZIP(), []int{2}
}
func (x *DecryptRequest) GetPayload() string {
if x != nil {
return x.Payload
}
return ""
}
func (x *DecryptRequest) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
func (x *DecryptRequest) GetAdditionalData() string {
if x != nil {
return x.AdditionalData
}
return ""
}
func (x *DecryptRequest) GetMode() int32 {
if x != nil {
return x.Mode
}
return 0
}
func (x *DecryptRequest) GetUrl() string {
if x != nil {
return x.Url
}
return ""
}
func (x *DecryptRequest) GetMasterKey() string {
if x != nil {
return x.MasterKey
}
return ""
}
type DecryptResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Payload string `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"`
}
func (x *DecryptResponse) Reset() {
*x = DecryptResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_kms_proto_kms_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DecryptResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DecryptResponse) ProtoMessage() {}
func (x *DecryptResponse) ProtoReflect() protoreflect.Message {
mi := &file_kms_proto_kms_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DecryptResponse.ProtoReflect.Descriptor instead.
func (*DecryptResponse) Descriptor() ([]byte, []int) {
return file_kms_proto_kms_proto_rawDescGZIP(), []int{3}
}
func (x *DecryptResponse) GetPayload() string {
if x != nil {
return x.Payload
}
return ""
}
var File_kms_proto_kms_proto protoreflect.FileDescriptor
var file_kms_proto_kms_proto_rawDesc = []byte{
0x0a, 0x13, 0x6b, 0x6d, 0x73, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x6b, 0x6d, 0x73, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x05, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x84, 0x01, 0x0a,
0x0e, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12,
0x18, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x27, 0x0a, 0x0f, 0x61, 0x64, 0x64,
0x69, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x5f, 0x64, 0x61, 0x74, 0x61, 0x18, 0x02, 0x20, 0x01,
0x28, 0x09, 0x52, 0x0e, 0x61, 0x64, 0x64, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x44, 0x61,
0x74, 0x61, 0x12, 0x10, 0x0a, 0x03, 0x75, 0x72, 0x6c, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52,
0x03, 0x75, 0x72, 0x6c, 0x12, 0x1d, 0x0a, 0x0a, 0x6d, 0x61, 0x73, 0x74, 0x65, 0x72, 0x5f, 0x6b,
0x65, 0x79, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x6d, 0x61, 0x73, 0x74, 0x65, 0x72,
0x4b, 0x65, 0x79, 0x22, 0x51, 0x0a, 0x0f, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61,
0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64,
0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b,
0x65, 0x79, 0x12, 0x12, 0x0a, 0x04, 0x6d, 0x6f, 0x64, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x05,
0x52, 0x04, 0x6d, 0x6f, 0x64, 0x65, 0x22, 0xaa, 0x01, 0x0a, 0x0e, 0x44, 0x65, 0x63, 0x72, 0x79,
0x70, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79,
0x6c, 0x6f, 0x61, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c,
0x6f, 0x61, 0x64, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09,
0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x27, 0x0a, 0x0f, 0x61, 0x64, 0x64, 0x69, 0x74, 0x69, 0x6f,
0x6e, 0x61, 0x6c, 0x5f, 0x64, 0x61, 0x74, 0x61, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e,
0x61, 0x64, 0x64, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x44, 0x61, 0x74, 0x61, 0x12, 0x12,
0x0a, 0x04, 0x6d, 0x6f, 0x64, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x04, 0x6d, 0x6f,
0x64, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x75, 0x72, 0x6c, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52,
0x03, 0x75, 0x72, 0x6c, 0x12, 0x1d, 0x0a, 0x0a, 0x6d, 0x61, 0x73, 0x74, 0x65, 0x72, 0x5f, 0x6b,
0x65, 0x79, 0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x6d, 0x61, 0x73, 0x74, 0x65, 0x72,
0x4b, 0x65, 0x79, 0x22, 0x2b, 0x0a, 0x0f, 0x44, 0x65, 0x63, 0x72, 0x79, 0x70, 0x74, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61,
0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64,
0x32, 0x79, 0x0a, 0x03, 0x4b, 0x4d, 0x53, 0x12, 0x38, 0x0a, 0x07, 0x45, 0x6e, 0x63, 0x72, 0x79,
0x70, 0x74, 0x12, 0x15, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x45, 0x6e, 0x63, 0x72, 0x79,
0x70, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x16, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x2e, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
0x65, 0x12, 0x38, 0x0a, 0x07, 0x44, 0x65, 0x63, 0x72, 0x79, 0x70, 0x74, 0x12, 0x15, 0x2e, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x44, 0x65, 0x63, 0x72, 0x79, 0x70, 0x74, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x1a, 0x16, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x44, 0x65, 0x63, 0x72,
0x79, 0x70, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x42, 0x16, 0x5a, 0x14, 0x73,
0x64, 0x6b, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x6b, 0x6d, 0x73, 0x2f, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_kms_proto_kms_proto_rawDescOnce sync.Once
file_kms_proto_kms_proto_rawDescData = file_kms_proto_kms_proto_rawDesc
)
func file_kms_proto_kms_proto_rawDescGZIP() []byte {
file_kms_proto_kms_proto_rawDescOnce.Do(func() {
file_kms_proto_kms_proto_rawDescData = protoimpl.X.CompressGZIP(file_kms_proto_kms_proto_rawDescData)
})
return file_kms_proto_kms_proto_rawDescData
}
var file_kms_proto_kms_proto_msgTypes = make([]protoimpl.MessageInfo, 4)
var file_kms_proto_kms_proto_goTypes = []interface{}{
(*EncryptRequest)(nil), // 0: proto.EncryptRequest
(*EncryptResponse)(nil), // 1: proto.EncryptResponse
(*DecryptRequest)(nil), // 2: proto.DecryptRequest
(*DecryptResponse)(nil), // 3: proto.DecryptResponse
}
var file_kms_proto_kms_proto_depIdxs = []int32{
0, // 0: proto.KMS.Encrypt:input_type -> proto.EncryptRequest
2, // 1: proto.KMS.Decrypt:input_type -> proto.DecryptRequest
1, // 2: proto.KMS.Encrypt:output_type -> proto.EncryptResponse
3, // 3: proto.KMS.Decrypt:output_type -> proto.DecryptResponse
2, // [2:4] is the sub-list for method output_type
0, // [0:2] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_kms_proto_kms_proto_init() }
func file_kms_proto_kms_proto_init() {
if File_kms_proto_kms_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_kms_proto_kms_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*EncryptRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_kms_proto_kms_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*EncryptResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_kms_proto_kms_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DecryptRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_kms_proto_kms_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DecryptResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_kms_proto_kms_proto_rawDesc,
NumEnums: 0,
NumMessages: 4,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_kms_proto_kms_proto_goTypes,
DependencyIndexes: file_kms_proto_kms_proto_depIdxs,
MessageInfos: file_kms_proto_kms_proto_msgTypes,
}.Build()
File_kms_proto_kms_proto = out.File
file_kms_proto_kms_proto_rawDesc = nil
file_kms_proto_kms_proto_goTypes = nil
file_kms_proto_kms_proto_depIdxs = nil
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConnInterface
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion6
// KMSClient is the client API for KMS service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type KMSClient interface {
Encrypt(ctx context.Context, in *EncryptRequest, opts ...grpc.CallOption) (*EncryptResponse, error)
Decrypt(ctx context.Context, in *DecryptRequest, opts ...grpc.CallOption) (*DecryptResponse, error)
}
type kMSClient struct {
cc grpc.ClientConnInterface
}
func NewKMSClient(cc grpc.ClientConnInterface) KMSClient {
return &kMSClient{cc}
}
func (c *kMSClient) Encrypt(ctx context.Context, in *EncryptRequest, opts ...grpc.CallOption) (*EncryptResponse, error) {
out := new(EncryptResponse)
err := c.cc.Invoke(ctx, "/proto.KMS/Encrypt", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kMSClient) Decrypt(ctx context.Context, in *DecryptRequest, opts ...grpc.CallOption) (*DecryptResponse, error) {
out := new(DecryptResponse)
err := c.cc.Invoke(ctx, "/proto.KMS/Decrypt", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// KMSServer is the server API for KMS service.
type KMSServer interface {
Encrypt(context.Context, *EncryptRequest) (*EncryptResponse, error)
Decrypt(context.Context, *DecryptRequest) (*DecryptResponse, error)
}
// UnimplementedKMSServer can be embedded to have forward compatible implementations.
type UnimplementedKMSServer struct {
}
func (*UnimplementedKMSServer) Encrypt(context.Context, *EncryptRequest) (*EncryptResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Encrypt not implemented")
}
func (*UnimplementedKMSServer) Decrypt(context.Context, *DecryptRequest) (*DecryptResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Decrypt not implemented")
}
func RegisterKMSServer(s *grpc.Server, srv KMSServer) {
s.RegisterService(&_KMS_serviceDesc, srv)
}
func _KMS_Encrypt_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(EncryptRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KMSServer).Encrypt(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.KMS/Encrypt",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KMSServer).Encrypt(ctx, req.(*EncryptRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KMS_Decrypt_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DecryptRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KMSServer).Decrypt(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.KMS/Decrypt",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KMSServer).Decrypt(ctx, req.(*DecryptRequest))
}
return interceptor(ctx, in, info, handler)
}
var _KMS_serviceDesc = grpc.ServiceDesc{
ServiceName: "proto.KMS",
HandlerType: (*KMSServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "Encrypt",
Handler: _KMS_Encrypt_Handler,
},
{
MethodName: "Decrypt",
Handler: _KMS_Decrypt_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "kms/proto/kms.proto",
}
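
On the host side, go-plugin launches the plugin binary and Dispense returns the GRPCClient shown earlier, typed as kms.Service. Roughly like the sketch below; the binary path is hypothetical, and SFTPGo's plugin manager adds its own configuration and lifecycle handling around this.

package main

import (
	"fmt"
	"os/exec"

	"github.com/hashicorp/go-plugin"

	"github.com/drakkan/sftpgo/v2/sdk/plugin/kms"
)

func main() {
	client := plugin.NewClient(&plugin.ClientConfig{
		HandshakeConfig:  kms.Handshake,
		Plugins:          kms.PluginMap,
		Cmd:              exec.Command("./sftpgo-kms-plugin"), // hypothetical plugin binary
		AllowedProtocols: []plugin.Protocol{plugin.ProtocolGRPC},
	})
	defer client.Kill()

	rpcClient, err := client.Client()
	if err != nil {
		panic(err)
	}
	raw, err := rpcClient.Dispense(kms.PluginName)
	if err != nil {
		panic(err)
	}
	service := raw.(kms.Service) // backed by the generated KMSClient via GRPCClient

	payload, key, mode, err := service.Encrypt("secret payload", "aad", "", "")
	if err != nil {
		panic(err)
	}
	fmt.Println(payload, key, mode)
}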


@ -1,35 +0,0 @@
syntax = "proto3";
package proto;
option go_package = "sdk/plugin/kms/proto";
message EncryptRequest {
string payload = 1;
string additional_data = 2;
string url = 3;
string master_key = 4;
}
message EncryptResponse {
string payload = 1;
string key = 2;
int32 mode = 3;
}
message DecryptRequest {
string payload = 1;
string key = 2;
string additional_data = 3;
int32 mode = 4;
string url = 5;
string master_key = 6;
}
message DecryptResponse {
string payload = 1;
}
service KMS {
rpc Encrypt(EncryptRequest) returns (EncryptResponse);
rpc Decrypt(DecryptRequest) returns (DecryptResponse);
}


@ -1,160 +0,0 @@
package metadata
import (
"context"
"time"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/emptypb"
"github.com/drakkan/sftpgo/v2/sdk/plugin/metadata/proto"
)
const (
rpcTimeout = 20 * time.Second
)
// GRPCClient is an implementation of the Metadater interface that talks over RPC.
type GRPCClient struct {
client proto.MetadataClient
}
// SetModificationTime implements the Metadater interface
func (c *GRPCClient) SetModificationTime(storageID, objectPath string, mTime int64) error {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
_, err := c.client.SetModificationTime(ctx, &proto.SetModificationTimeRequest{
StorageId: storageID,
ObjectPath: objectPath,
ModificationTime: mTime,
})
return c.checkError(err)
}
// GetModificationTime implements the Metadater interface
func (c *GRPCClient) GetModificationTime(storageID, objectPath string) (int64, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.GetModificationTime(ctx, &proto.GetModificationTimeRequest{
StorageId: storageID,
ObjectPath: objectPath,
})
if err != nil {
return 0, c.checkError(err)
}
return resp.ModificationTime, nil
}
// GetModificationTimes implements the Metadater interface
func (c *GRPCClient) GetModificationTimes(storageID, objectPath string) (map[string]int64, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout*4)
defer cancel()
resp, err := c.client.GetModificationTimes(ctx, &proto.GetModificationTimesRequest{
StorageId: storageID,
FolderPath: objectPath,
})
if err != nil {
return nil, c.checkError(err)
}
return resp.Pairs, nil
}
// RemoveMetadata implements the Metadater interface
func (c *GRPCClient) RemoveMetadata(storageID, objectPath string) error {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
_, err := c.client.RemoveMetadata(ctx, &proto.RemoveMetadataRequest{
StorageId: storageID,
ObjectPath: objectPath,
})
return c.checkError(err)
}
// GetFolders implements the Metadater interface
func (c *GRPCClient) GetFolders(storageID string, limit int, from string) ([]string, error) {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
resp, err := c.client.GetFolders(ctx, &proto.GetFoldersRequest{
StorageId: storageID,
Limit: int32(limit),
From: from,
})
if err != nil {
return nil, c.checkError(err)
}
return resp.Folders, nil
}
func (c *GRPCClient) checkError(err error) error {
if err == nil {
return nil
}
if s, ok := status.FromError(err); ok {
if s.Code() == codes.NotFound {
return ErrNoSuchObject
}
}
return err
}
// GRPCServer defines the gRPC server that GRPCClient talks to.
type GRPCServer struct {
Impl Metadater
}
// SetModificationTime implements the server side set modification time method
func (s *GRPCServer) SetModificationTime(ctx context.Context, req *proto.SetModificationTimeRequest) (*emptypb.Empty, error) {
err := s.Impl.SetModificationTime(req.StorageId, req.ObjectPath, req.ModificationTime)
return &emptypb.Empty{}, err
}
// GetModificationTime implements the server side get modification time method
func (s *GRPCServer) GetModificationTime(ctx context.Context, req *proto.GetModificationTimeRequest) (
*proto.GetModificationTimeResponse, error,
) {
mTime, err := s.Impl.GetModificationTime(req.StorageId, req.ObjectPath)
return &proto.GetModificationTimeResponse{
ModificationTime: mTime,
}, err
}
// GetModificationTimes implements the server side get modification times method
func (s *GRPCServer) GetModificationTimes(ctx context.Context, req *proto.GetModificationTimesRequest) (
*proto.GetModificationTimesResponse, error,
) {
res, err := s.Impl.GetModificationTimes(req.StorageId, req.FolderPath)
return &proto.GetModificationTimesResponse{
Pairs: res,
}, err
}
// RemoveMetadata implements the server side remove metadata method
func (s *GRPCServer) RemoveMetadata(ctx context.Context, req *proto.RemoveMetadataRequest) (*emptypb.Empty, error) {
err := s.Impl.RemoveMetadata(req.StorageId, req.ObjectPath)
return &emptypb.Empty{}, err
}
// GetFolders implements the server side get folders method
func (s *GRPCServer) GetFolders(ctx context.Context, req *proto.GetFoldersRequest) (*proto.GetFoldersResponse, error) {
res, err := s.Impl.GetFolders(req.StorageId, int(req.Limit), req.From)
return &proto.GetFoldersResponse{
Folders: res,
}, err
}
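
Note that checkError above runs on the host side: any gRPC error carrying codes.NotFound is surfaced to SFTPGo as ErrNoSuchObject, which is declared in the next file. A minimal sketch of producing such a status, assuming the plugin side reports missing objects this way so the error code survives the gRPC hop:

package metadataplugin

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// errNoSuchObject is one way for a plugin-side Metadater implementation to
// report a missing object: the NotFound code is preserved across gRPC and
// checkError converts it back into metadata.ErrNoSuchObject on the host.
var errNoSuchObject = status.Error(codes.NotFound, "no such object")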


@ -1,61 +0,0 @@
package metadata
import (
"context"
"errors"
"github.com/hashicorp/go-plugin"
"google.golang.org/grpc"
"github.com/drakkan/sftpgo/v2/sdk/plugin/metadata/proto"
)
const (
// PluginName defines the name for a metadata plugin
PluginName = "metadata"
)
var (
// Handshake is a common handshake that is shared by plugin and host.
Handshake = plugin.HandshakeConfig{
ProtocolVersion: 1,
MagicCookieKey: "SFTPGO_PLUGIN_METADATA",
MagicCookieValue: "85dddeea-56d8-4d5b-b488-8b125edb3a0f",
}
// ErrNoSuchObject is the error that plugins must return if the requested object does not exist
ErrNoSuchObject = errors.New("no such object")
// PluginMap is the map of plugins we can dispense.
PluginMap = map[string]plugin.Plugin{
PluginName: &Plugin{},
}
)
// Metadater defines the interface for metadata plugins
type Metadater interface {
SetModificationTime(storageID, objectPath string, mTime int64) error
GetModificationTime(storageID, objectPath string) (int64, error)
GetModificationTimes(storageID, objectPath string) (map[string]int64, error)
RemoveMetadata(storageID, objectPath string) error
GetFolders(storageID string, limit int, from string) ([]string, error)
}
// Plugin defines the implementation to serve/connect to a metadata plugin
type Plugin struct {
plugin.Plugin
Impl Metadater
}
// GRPCServer defines the GRPC server implementation for this plugin
func (p *Plugin) GRPCServer(broker *plugin.GRPCBroker, s *grpc.Server) error {
proto.RegisterMetadataServer(s, &GRPCServer{
Impl: p.Impl,
})
return nil
}
// GRPCClient defines the GRPC client implementation for this plugin
func (p *Plugin) GRPCClient(ctx context.Context, broker *plugin.GRPCBroker, c *grpc.ClientConn) (interface{}, error) {
return &GRPCClient{
client: proto.NewMetadataClient(c),
}, nil
}
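
Putting the pieces together, a toy in-memory Metadater and its go-plugin wiring could look like the sketch below (hypothetical, no persistence; folder filtering and pagination are omitted, and missing objects are reported with a NotFound status as discussed above):

package main

import (
	"sync"

	"github.com/hashicorp/go-plugin"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	"github.com/drakkan/sftpgo/v2/sdk/plugin/metadata"
)

// memMetadater keeps modification times in memory, keyed by storage ID and
// object path. It exists only to show the Metadater interface in use.
type memMetadater struct {
	mu    sync.RWMutex
	times map[string]map[string]int64 // storageID -> objectPath -> mTime
}

func (m *memMetadater) SetModificationTime(storageID, objectPath string, mTime int64) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.times[storageID] == nil {
		m.times[storageID] = make(map[string]int64)
	}
	m.times[storageID][objectPath] = mTime
	return nil
}

func (m *memMetadater) GetModificationTime(storageID, objectPath string) (int64, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	if t, ok := m.times[storageID][objectPath]; ok {
		return t, nil
	}
	return 0, status.Error(codes.NotFound, "no such object")
}

func (m *memMetadater) GetModificationTimes(storageID, folderPath string) (map[string]int64, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	// folder filtering and file-name keying are simplified: return a copy of
	// everything stored for this storage ID
	res := make(map[string]int64, len(m.times[storageID]))
	for k, v := range m.times[storageID] {
		res[k] = v
	}
	return res, nil
}

func (m *memMetadater) RemoveMetadata(storageID, objectPath string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.times[storageID], objectPath)
	return nil
}

func (m *memMetadater) GetFolders(storageID string, limit int, from string) ([]string, error) {
	return nil, nil // pagination via limit/from omitted in this sketch
}

func main() {
	plugin.Serve(&plugin.ServeConfig{
		HandshakeConfig: metadata.Handshake,
		Plugins: map[string]plugin.Plugin{
			metadata.PluginName: &metadata.Plugin{Impl: &memMetadater{times: make(map[string]map[string]int64)}},
		},
		GRPCServer: plugin.DefaultGRPCServer,
	})
}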


@ -1,938 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.17.3
// source: metadata/proto/metadata.proto
package proto
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
emptypb "google.golang.org/protobuf/types/known/emptypb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type SetModificationTimeRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
StorageId string `protobuf:"bytes,1,opt,name=storage_id,json=storageId,proto3" json:"storage_id,omitempty"`
ObjectPath string `protobuf:"bytes,2,opt,name=object_path,json=objectPath,proto3" json:"object_path,omitempty"`
ModificationTime int64 `protobuf:"varint,3,opt,name=modification_time,json=modificationTime,proto3" json:"modification_time,omitempty"`
}
func (x *SetModificationTimeRequest) Reset() {
*x = SetModificationTimeRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_metadata_proto_metadata_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SetModificationTimeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SetModificationTimeRequest) ProtoMessage() {}
func (x *SetModificationTimeRequest) ProtoReflect() protoreflect.Message {
mi := &file_metadata_proto_metadata_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SetModificationTimeRequest.ProtoReflect.Descriptor instead.
func (*SetModificationTimeRequest) Descriptor() ([]byte, []int) {
return file_metadata_proto_metadata_proto_rawDescGZIP(), []int{0}
}
func (x *SetModificationTimeRequest) GetStorageId() string {
if x != nil {
return x.StorageId
}
return ""
}
func (x *SetModificationTimeRequest) GetObjectPath() string {
if x != nil {
return x.ObjectPath
}
return ""
}
func (x *SetModificationTimeRequest) GetModificationTime() int64 {
if x != nil {
return x.ModificationTime
}
return 0
}
type GetModificationTimeRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
StorageId string `protobuf:"bytes,1,opt,name=storage_id,json=storageId,proto3" json:"storage_id,omitempty"`
ObjectPath string `protobuf:"bytes,2,opt,name=object_path,json=objectPath,proto3" json:"object_path,omitempty"`
}
func (x *GetModificationTimeRequest) Reset() {
*x = GetModificationTimeRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_metadata_proto_metadata_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GetModificationTimeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetModificationTimeRequest) ProtoMessage() {}
func (x *GetModificationTimeRequest) ProtoReflect() protoreflect.Message {
mi := &file_metadata_proto_metadata_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetModificationTimeRequest.ProtoReflect.Descriptor instead.
func (*GetModificationTimeRequest) Descriptor() ([]byte, []int) {
return file_metadata_proto_metadata_proto_rawDescGZIP(), []int{1}
}
func (x *GetModificationTimeRequest) GetStorageId() string {
if x != nil {
return x.StorageId
}
return ""
}
func (x *GetModificationTimeRequest) GetObjectPath() string {
if x != nil {
return x.ObjectPath
}
return ""
}
type GetModificationTimeResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
ModificationTime int64 `protobuf:"varint,1,opt,name=modification_time,json=modificationTime,proto3" json:"modification_time,omitempty"`
}
func (x *GetModificationTimeResponse) Reset() {
*x = GetModificationTimeResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_metadata_proto_metadata_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GetModificationTimeResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetModificationTimeResponse) ProtoMessage() {}
func (x *GetModificationTimeResponse) ProtoReflect() protoreflect.Message {
mi := &file_metadata_proto_metadata_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetModificationTimeResponse.ProtoReflect.Descriptor instead.
func (*GetModificationTimeResponse) Descriptor() ([]byte, []int) {
return file_metadata_proto_metadata_proto_rawDescGZIP(), []int{2}
}
func (x *GetModificationTimeResponse) GetModificationTime() int64 {
if x != nil {
return x.ModificationTime
}
return 0
}
type GetModificationTimesRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
StorageId string `protobuf:"bytes,1,opt,name=storage_id,json=storageId,proto3" json:"storage_id,omitempty"`
FolderPath string `protobuf:"bytes,2,opt,name=folder_path,json=folderPath,proto3" json:"folder_path,omitempty"`
}
func (x *GetModificationTimesRequest) Reset() {
*x = GetModificationTimesRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_metadata_proto_metadata_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GetModificationTimesRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetModificationTimesRequest) ProtoMessage() {}
func (x *GetModificationTimesRequest) ProtoReflect() protoreflect.Message {
mi := &file_metadata_proto_metadata_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetModificationTimesRequest.ProtoReflect.Descriptor instead.
func (*GetModificationTimesRequest) Descriptor() ([]byte, []int) {
return file_metadata_proto_metadata_proto_rawDescGZIP(), []int{3}
}
func (x *GetModificationTimesRequest) GetStorageId() string {
if x != nil {
return x.StorageId
}
return ""
}
func (x *GetModificationTimesRequest) GetFolderPath() string {
if x != nil {
return x.FolderPath
}
return ""
}
type GetModificationTimesResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// the file name (not the full path) is the map key and the modification time is the map value
Pairs map[string]int64 `protobuf:"bytes,1,rep,name=pairs,proto3" json:"pairs,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"varint,2,opt,name=value,proto3"`
}
func (x *GetModificationTimesResponse) Reset() {
*x = GetModificationTimesResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_metadata_proto_metadata_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GetModificationTimesResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetModificationTimesResponse) ProtoMessage() {}
func (x *GetModificationTimesResponse) ProtoReflect() protoreflect.Message {
mi := &file_metadata_proto_metadata_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetModificationTimesResponse.ProtoReflect.Descriptor instead.
func (*GetModificationTimesResponse) Descriptor() ([]byte, []int) {
return file_metadata_proto_metadata_proto_rawDescGZIP(), []int{4}
}
func (x *GetModificationTimesResponse) GetPairs() map[string]int64 {
if x != nil {
return x.Pairs
}
return nil
}
type RemoveMetadataRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
StorageId string `protobuf:"bytes,1,opt,name=storage_id,json=storageId,proto3" json:"storage_id,omitempty"`
ObjectPath string `protobuf:"bytes,2,opt,name=object_path,json=objectPath,proto3" json:"object_path,omitempty"`
}
func (x *RemoveMetadataRequest) Reset() {
*x = RemoveMetadataRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_metadata_proto_metadata_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RemoveMetadataRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RemoveMetadataRequest) ProtoMessage() {}
func (x *RemoveMetadataRequest) ProtoReflect() protoreflect.Message {
mi := &file_metadata_proto_metadata_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RemoveMetadataRequest.ProtoReflect.Descriptor instead.
func (*RemoveMetadataRequest) Descriptor() ([]byte, []int) {
return file_metadata_proto_metadata_proto_rawDescGZIP(), []int{5}
}
func (x *RemoveMetadataRequest) GetStorageId() string {
if x != nil {
return x.StorageId
}
return ""
}
func (x *RemoveMetadataRequest) GetObjectPath() string {
if x != nil {
return x.ObjectPath
}
return ""
}
type GetFoldersRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
StorageId string `protobuf:"bytes,1,opt,name=storage_id,json=storageId,proto3" json:"storage_id,omitempty"`
Limit int32 `protobuf:"varint,2,opt,name=limit,proto3" json:"limit,omitempty"`
From string `protobuf:"bytes,3,opt,name=from,proto3" json:"from,omitempty"`
}
func (x *GetFoldersRequest) Reset() {
*x = GetFoldersRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_metadata_proto_metadata_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GetFoldersRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetFoldersRequest) ProtoMessage() {}
func (x *GetFoldersRequest) ProtoReflect() protoreflect.Message {
mi := &file_metadata_proto_metadata_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetFoldersRequest.ProtoReflect.Descriptor instead.
func (*GetFoldersRequest) Descriptor() ([]byte, []int) {
return file_metadata_proto_metadata_proto_rawDescGZIP(), []int{6}
}
func (x *GetFoldersRequest) GetStorageId() string {
if x != nil {
return x.StorageId
}
return ""
}
func (x *GetFoldersRequest) GetLimit() int32 {
if x != nil {
return x.Limit
}
return 0
}
func (x *GetFoldersRequest) GetFrom() string {
if x != nil {
return x.From
}
return ""
}
type GetFoldersResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Folders []string `protobuf:"bytes,1,rep,name=folders,proto3" json:"folders,omitempty"`
}
func (x *GetFoldersResponse) Reset() {
*x = GetFoldersResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_metadata_proto_metadata_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GetFoldersResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetFoldersResponse) ProtoMessage() {}
func (x *GetFoldersResponse) ProtoReflect() protoreflect.Message {
mi := &file_metadata_proto_metadata_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetFoldersResponse.ProtoReflect.Descriptor instead.
func (*GetFoldersResponse) Descriptor() ([]byte, []int) {
return file_metadata_proto_metadata_proto_rawDescGZIP(), []int{7}
}
func (x *GetFoldersResponse) GetFolders() []string {
if x != nil {
return x.Folders
}
return nil
}
var File_metadata_proto_metadata_proto protoreflect.FileDescriptor
var file_metadata_proto_metadata_proto_rawDesc = []byte{
0x0a, 0x1d, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x2f, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12,
0x05, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1b, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70, 0x74, 0x79, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x22, 0x89, 0x01, 0x0a, 0x1a, 0x53, 0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66,
0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x69, 0x64,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x49,
0x64, 0x12, 0x1f, 0x0a, 0x0b, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x5f, 0x70, 0x61, 0x74, 0x68,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x50, 0x61,
0x74, 0x68, 0x12, 0x2b, 0x0a, 0x11, 0x6d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x52, 0x10, 0x6d,
0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65, 0x22,
0x5c, 0x0a, 0x1a, 0x47, 0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a,
0x0a, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
0x09, 0x52, 0x09, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x49, 0x64, 0x12, 0x1f, 0x0a, 0x0b,
0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x18, 0x02, 0x20, 0x01, 0x28,
0x09, 0x52, 0x0a, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x50, 0x61, 0x74, 0x68, 0x22, 0x4a, 0x0a,
0x1b, 0x47, 0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x54, 0x69, 0x6d, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x2b, 0x0a, 0x11,
0x6d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x74, 0x69, 0x6d,
0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x10, 0x6d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63,
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65, 0x22, 0x5d, 0x0a, 0x1b, 0x47, 0x65, 0x74,
0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65,
0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x74, 0x6f, 0x72,
0x61, 0x67, 0x65, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x74,
0x6f, 0x72, 0x61, 0x67, 0x65, 0x49, 0x64, 0x12, 0x1f, 0x0a, 0x0b, 0x66, 0x6f, 0x6c, 0x64, 0x65,
0x72, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x66, 0x6f,
0x6c, 0x64, 0x65, 0x72, 0x50, 0x61, 0x74, 0x68, 0x22, 0x9e, 0x01, 0x0a, 0x1c, 0x47, 0x65, 0x74,
0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65,
0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x44, 0x0a, 0x05, 0x70, 0x61, 0x69,
0x72, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x2e, 0x47, 0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x54, 0x69, 0x6d, 0x65, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x2e, 0x50, 0x61,
0x69, 0x72, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x05, 0x70, 0x61, 0x69, 0x72, 0x73, 0x1a,
0x38, 0x0a, 0x0a, 0x50, 0x61, 0x69, 0x72, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a,
0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12,
0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x03, 0x52, 0x05,
0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x57, 0x0a, 0x15, 0x52, 0x65, 0x6d,
0x6f, 0x76, 0x65, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x69, 0x64,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x49,
0x64, 0x12, 0x1f, 0x0a, 0x0b, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x5f, 0x70, 0x61, 0x74, 0x68,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x50, 0x61,
0x74, 0x68, 0x22, 0x5c, 0x0a, 0x11, 0x47, 0x65, 0x74, 0x46, 0x6f, 0x6c, 0x64, 0x65, 0x72, 0x73,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x74, 0x6f, 0x72, 0x61,
0x67, 0x65, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x74, 0x6f,
0x72, 0x61, 0x67, 0x65, 0x49, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x18,
0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x12, 0x12, 0x0a, 0x04,
0x66, 0x72, 0x6f, 0x6d, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x66, 0x72, 0x6f, 0x6d,
0x22, 0x2e, 0x0a, 0x12, 0x47, 0x65, 0x74, 0x46, 0x6f, 0x6c, 0x64, 0x65, 0x72, 0x73, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x66, 0x6f, 0x6c, 0x64, 0x65, 0x72,
0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x09, 0x52, 0x07, 0x66, 0x6f, 0x6c, 0x64, 0x65, 0x72, 0x73,
0x32, 0xa6, 0x03, 0x0a, 0x08, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x12, 0x50, 0x0a,
0x13, 0x53, 0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x54, 0x69, 0x6d, 0x65, 0x12, 0x21, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x53, 0x65, 0x74,
0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x12,
0x5c, 0x0a, 0x13, 0x47, 0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x21, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x47,
0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69,
0x6d, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x2e, 0x47, 0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f,
0x6e, 0x54, 0x69, 0x6d, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x5f, 0x0a,
0x14, 0x47, 0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x54, 0x69, 0x6d, 0x65, 0x73, 0x12, 0x22, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x47, 0x65,
0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x23, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x2e, 0x47, 0x65, 0x74, 0x4d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f,
0x6e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x46,
0x0a, 0x0e, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61,
0x12, 0x1c, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x4d,
0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x16,
0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66,
0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x12, 0x41, 0x0a, 0x0a, 0x47, 0x65, 0x74, 0x46, 0x6f, 0x6c,
0x64, 0x65, 0x72, 0x73, 0x12, 0x18, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x47, 0x65, 0x74,
0x46, 0x6f, 0x6c, 0x64, 0x65, 0x72, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x19,
0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x47, 0x65, 0x74, 0x46, 0x6f, 0x6c, 0x64, 0x65, 0x72,
0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x42, 0x1b, 0x5a, 0x19, 0x73, 0x64, 0x6b,
0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61,
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_metadata_proto_metadata_proto_rawDescOnce sync.Once
file_metadata_proto_metadata_proto_rawDescData = file_metadata_proto_metadata_proto_rawDesc
)
func file_metadata_proto_metadata_proto_rawDescGZIP() []byte {
file_metadata_proto_metadata_proto_rawDescOnce.Do(func() {
file_metadata_proto_metadata_proto_rawDescData = protoimpl.X.CompressGZIP(file_metadata_proto_metadata_proto_rawDescData)
})
return file_metadata_proto_metadata_proto_rawDescData
}
var file_metadata_proto_metadata_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_metadata_proto_metadata_proto_goTypes = []interface{}{
(*SetModificationTimeRequest)(nil), // 0: proto.SetModificationTimeRequest
(*GetModificationTimeRequest)(nil), // 1: proto.GetModificationTimeRequest
(*GetModificationTimeResponse)(nil), // 2: proto.GetModificationTimeResponse
(*GetModificationTimesRequest)(nil), // 3: proto.GetModificationTimesRequest
(*GetModificationTimesResponse)(nil), // 4: proto.GetModificationTimesResponse
(*RemoveMetadataRequest)(nil), // 5: proto.RemoveMetadataRequest
(*GetFoldersRequest)(nil), // 6: proto.GetFoldersRequest
(*GetFoldersResponse)(nil), // 7: proto.GetFoldersResponse
nil, // 8: proto.GetModificationTimesResponse.PairsEntry
(*emptypb.Empty)(nil), // 9: google.protobuf.Empty
}
var file_metadata_proto_metadata_proto_depIdxs = []int32{
8, // 0: proto.GetModificationTimesResponse.pairs:type_name -> proto.GetModificationTimesResponse.PairsEntry
0, // 1: proto.Metadata.SetModificationTime:input_type -> proto.SetModificationTimeRequest
1, // 2: proto.Metadata.GetModificationTime:input_type -> proto.GetModificationTimeRequest
3, // 3: proto.Metadata.GetModificationTimes:input_type -> proto.GetModificationTimesRequest
5, // 4: proto.Metadata.RemoveMetadata:input_type -> proto.RemoveMetadataRequest
6, // 5: proto.Metadata.GetFolders:input_type -> proto.GetFoldersRequest
9, // 6: proto.Metadata.SetModificationTime:output_type -> google.protobuf.Empty
2, // 7: proto.Metadata.GetModificationTime:output_type -> proto.GetModificationTimeResponse
4, // 8: proto.Metadata.GetModificationTimes:output_type -> proto.GetModificationTimesResponse
9, // 9: proto.Metadata.RemoveMetadata:output_type -> google.protobuf.Empty
7, // 10: proto.Metadata.GetFolders:output_type -> proto.GetFoldersResponse
6, // [6:11] is the sub-list for method output_type
1, // [1:6] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_metadata_proto_metadata_proto_init() }
func file_metadata_proto_metadata_proto_init() {
if File_metadata_proto_metadata_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_metadata_proto_metadata_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SetModificationTimeRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_metadata_proto_metadata_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GetModificationTimeRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_metadata_proto_metadata_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GetModificationTimeResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_metadata_proto_metadata_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GetModificationTimesRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_metadata_proto_metadata_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GetModificationTimesResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_metadata_proto_metadata_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RemoveMetadataRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_metadata_proto_metadata_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GetFoldersRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_metadata_proto_metadata_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GetFoldersResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_metadata_proto_metadata_proto_rawDesc,
NumEnums: 0,
NumMessages: 9,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_metadata_proto_metadata_proto_goTypes,
DependencyIndexes: file_metadata_proto_metadata_proto_depIdxs,
MessageInfos: file_metadata_proto_metadata_proto_msgTypes,
}.Build()
File_metadata_proto_metadata_proto = out.File
file_metadata_proto_metadata_proto_rawDesc = nil
file_metadata_proto_metadata_proto_goTypes = nil
file_metadata_proto_metadata_proto_depIdxs = nil
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConnInterface
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion6
// MetadataClient is the client API for Metadata service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type MetadataClient interface {
SetModificationTime(ctx context.Context, in *SetModificationTimeRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
GetModificationTime(ctx context.Context, in *GetModificationTimeRequest, opts ...grpc.CallOption) (*GetModificationTimeResponse, error)
GetModificationTimes(ctx context.Context, in *GetModificationTimesRequest, opts ...grpc.CallOption) (*GetModificationTimesResponse, error)
RemoveMetadata(ctx context.Context, in *RemoveMetadataRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
GetFolders(ctx context.Context, in *GetFoldersRequest, opts ...grpc.CallOption) (*GetFoldersResponse, error)
}
type metadataClient struct {
cc grpc.ClientConnInterface
}
func NewMetadataClient(cc grpc.ClientConnInterface) MetadataClient {
return &metadataClient{cc}
}
func (c *metadataClient) SetModificationTime(ctx context.Context, in *SetModificationTimeRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/proto.Metadata/SetModificationTime", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *metadataClient) GetModificationTime(ctx context.Context, in *GetModificationTimeRequest, opts ...grpc.CallOption) (*GetModificationTimeResponse, error) {
out := new(GetModificationTimeResponse)
err := c.cc.Invoke(ctx, "/proto.Metadata/GetModificationTime", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *metadataClient) GetModificationTimes(ctx context.Context, in *GetModificationTimesRequest, opts ...grpc.CallOption) (*GetModificationTimesResponse, error) {
out := new(GetModificationTimesResponse)
err := c.cc.Invoke(ctx, "/proto.Metadata/GetModificationTimes", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *metadataClient) RemoveMetadata(ctx context.Context, in *RemoveMetadataRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/proto.Metadata/RemoveMetadata", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *metadataClient) GetFolders(ctx context.Context, in *GetFoldersRequest, opts ...grpc.CallOption) (*GetFoldersResponse, error) {
out := new(GetFoldersResponse)
err := c.cc.Invoke(ctx, "/proto.Metadata/GetFolders", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// MetadataServer is the server API for Metadata service.
type MetadataServer interface {
SetModificationTime(context.Context, *SetModificationTimeRequest) (*emptypb.Empty, error)
GetModificationTime(context.Context, *GetModificationTimeRequest) (*GetModificationTimeResponse, error)
GetModificationTimes(context.Context, *GetModificationTimesRequest) (*GetModificationTimesResponse, error)
RemoveMetadata(context.Context, *RemoveMetadataRequest) (*emptypb.Empty, error)
GetFolders(context.Context, *GetFoldersRequest) (*GetFoldersResponse, error)
}
// UnimplementedMetadataServer can be embedded to have forward compatible implementations.
type UnimplementedMetadataServer struct {
}
func (*UnimplementedMetadataServer) SetModificationTime(context.Context, *SetModificationTimeRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method SetModificationTime not implemented")
}
func (*UnimplementedMetadataServer) GetModificationTime(context.Context, *GetModificationTimeRequest) (*GetModificationTimeResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetModificationTime not implemented")
}
func (*UnimplementedMetadataServer) GetModificationTimes(context.Context, *GetModificationTimesRequest) (*GetModificationTimesResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetModificationTimes not implemented")
}
func (*UnimplementedMetadataServer) RemoveMetadata(context.Context, *RemoveMetadataRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method RemoveMetadata not implemented")
}
func (*UnimplementedMetadataServer) GetFolders(context.Context, *GetFoldersRequest) (*GetFoldersResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetFolders not implemented")
}
func RegisterMetadataServer(s *grpc.Server, srv MetadataServer) {
s.RegisterService(&_Metadata_serviceDesc, srv)
}
func _Metadata_SetModificationTime_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(SetModificationTimeRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(MetadataServer).SetModificationTime(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.Metadata/SetModificationTime",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(MetadataServer).SetModificationTime(ctx, req.(*SetModificationTimeRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Metadata_GetModificationTime_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetModificationTimeRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(MetadataServer).GetModificationTime(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.Metadata/GetModificationTime",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(MetadataServer).GetModificationTime(ctx, req.(*GetModificationTimeRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Metadata_GetModificationTimes_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetModificationTimesRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(MetadataServer).GetModificationTimes(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.Metadata/GetModificationTimes",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(MetadataServer).GetModificationTimes(ctx, req.(*GetModificationTimesRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Metadata_RemoveMetadata_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(RemoveMetadataRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(MetadataServer).RemoveMetadata(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.Metadata/RemoveMetadata",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(MetadataServer).RemoveMetadata(ctx, req.(*RemoveMetadataRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Metadata_GetFolders_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetFoldersRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(MetadataServer).GetFolders(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.Metadata/GetFolders",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(MetadataServer).GetFolders(ctx, req.(*GetFoldersRequest))
}
return interceptor(ctx, in, info, handler)
}
var _Metadata_serviceDesc = grpc.ServiceDesc{
ServiceName: "proto.Metadata",
HandlerType: (*MetadataServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "SetModificationTime",
Handler: _Metadata_SetModificationTime_Handler,
},
{
MethodName: "GetModificationTime",
Handler: _Metadata_GetModificationTime_Handler,
},
{
MethodName: "GetModificationTimes",
Handler: _Metadata_GetModificationTimes_Handler,
},
{
MethodName: "RemoveMetadata",
Handler: _Metadata_RemoveMetadata_Handler,
},
{
MethodName: "GetFolders",
Handler: _Metadata_GetFolders_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "metadata/proto/metadata.proto",
}
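A minimal server-side sketch of the generated API above, not part of this change set: it assumes the import path github.com/drakkan/sftpgo/v2/sdk/plugin/metadata/proto, embeds the generated UnimplementedMetadataServer, and starts a plain grpc.Server outside the go-plugin handshake; the in-memory store and the listen address are illustrative only.

package main

import (
    "context"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/protobuf/types/known/emptypb"

    "github.com/drakkan/sftpgo/v2/sdk/plugin/metadata/proto"
)

// inMemoryMetadata embeds the generated UnimplementedMetadataServer and overrides
// only the RPCs it supports; modification times are kept in a map keyed on
// storage_id + object_path. Not safe for concurrent use, a real plugin would add locking.
type inMemoryMetadata struct {
    proto.UnimplementedMetadataServer
    times map[string]int64
}

func (s *inMemoryMetadata) SetModificationTime(_ context.Context, req *proto.SetModificationTimeRequest) (*emptypb.Empty, error) {
    s.times[req.GetStorageId()+req.GetObjectPath()] = req.GetModificationTime()
    return &emptypb.Empty{}, nil
}

func (s *inMemoryMetadata) GetModificationTime(_ context.Context, req *proto.GetModificationTimeRequest) (*proto.GetModificationTimeResponse, error) {
    return &proto.GetModificationTimeResponse{
        ModificationTime: s.times[req.GetStorageId()+req.GetObjectPath()],
    }, nil
}

func main() {
    lis, err := net.Listen("tcp", "127.0.0.1:50051") // illustrative address
    if err != nil {
        panic(err)
    }
    srv := grpc.NewServer()
    proto.RegisterMetadataServer(srv, &inMemoryMetadata{times: make(map[string]int64)})
    _ = srv.Serve(lis)
}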


@@ -1,54 +0,0 @@
syntax = "proto3";
package proto;
import "google/protobuf/empty.proto";
option go_package = "sdk/plugin/metadata/proto";
message SetModificationTimeRequest {
string storage_id = 1;
string object_path = 2;
int64 modification_time = 3;
}
message GetModificationTimeRequest {
string storage_id = 1;
string object_path = 2;
}
message GetModificationTimeResponse {
int64 modification_time = 1;
}
message GetModificationTimesRequest {
string storage_id = 1;
string folder_path = 2;
}
message GetModificationTimesResponse {
// the file name (not the full path) is the map key and the modification time is the map value
map<string,int64> pairs = 1;
}
message RemoveMetadataRequest {
string storage_id = 1;
string object_path = 2;
}
message GetFoldersRequest {
string storage_id = 1;
int32 limit = 2;
string from = 3;
}
message GetFoldersResponse {
repeated string folders = 1;
}
service Metadata {
rpc SetModificationTime(SetModificationTimeRequest) returns (google.protobuf.Empty);
rpc GetModificationTime(GetModificationTimeRequest) returns (GetModificationTimeResponse);
rpc GetModificationTimes(GetModificationTimesRequest) returns (GetModificationTimesResponse);
rpc RemoveMetadata(RemoveMetadataRequest) returns (google.protobuf.Empty);
rpc GetFolders(GetFoldersRequest) returns (GetFoldersResponse);
}
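As a hedged illustration of how GetModificationTimesResponse is meant to be consumed (the file name as map key, the modification time as value), a caller holding an already established *grpc.ClientConn to a metadata plugin could do something like the following; the connection, storage ID and folder path are assumptions.

package example

import (
    "context"
    "fmt"

    "google.golang.org/grpc"

    "github.com/drakkan/sftpgo/v2/sdk/plugin/metadata/proto"
)

// printFolderTimes lists the stored modification times for a folder.
// conn, storageID and folder are assumed to be supplied by the caller.
func printFolderTimes(ctx context.Context, conn *grpc.ClientConn, storageID, folder string) error {
    client := proto.NewMetadataClient(conn)
    resp, err := client.GetModificationTimes(ctx, &proto.GetModificationTimesRequest{
        StorageId:  storageID,
        FolderPath: folder,
    })
    if err != nil {
        return err
    }
    // the map key is the file name (not the full path), the value is the modification time
    for name, mtime := range resp.GetPairs() {
        fmt.Printf("%s: %d\n", name, mtime)
    }
    return nil
}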


@@ -1,7 +0,0 @@
#!/bin/bash
protoc notifier/proto/notifier.proto --go_out=plugins=grpc:../.. --go_out=../../..
protoc kms/proto/kms.proto --go_out=plugins=grpc:../.. --go_out=../../..
protoc auth/proto/auth.proto --go_out=plugins=grpc:../.. --go_out=../../..
protoc eventsearcher/proto/search.proto --go_out=plugins=grpc:../.. --go_out=../../..
protoc metadata/proto/metadata.proto --go_out=plugins=grpc:../.. --go_out=../../..


@@ -1,82 +0,0 @@
package notifier
import (
"context"
"time"
"google.golang.org/protobuf/types/known/emptypb"
"github.com/drakkan/sftpgo/v2/sdk/plugin/notifier/proto"
)
const (
rpcTimeout = 20 * time.Second
)
// GRPCClient is an implementation of the Notifier interface that talks over RPC.
type GRPCClient struct {
client proto.NotifierClient
}
// NotifyFsEvent implements the Notifier interface
func (c *GRPCClient) NotifyFsEvent(timestamp int64, action, username, fsPath, fsTargetPath, sshCmd, protocol, ip,
virtualPath, virtualTargetPath, sessionID string, fileSize int64, status int,
) error {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
_, err := c.client.SendFsEvent(ctx, &proto.FsEvent{
Timestamp: timestamp,
Action: action,
Username: username,
FsPath: fsPath,
FsTargetPath: fsTargetPath,
SshCmd: sshCmd,
FileSize: fileSize,
Protocol: protocol,
Ip: ip,
Status: int32(status),
VirtualPath: virtualPath,
VirtualTargetPath: virtualTargetPath,
SessionId: sessionID,
})
return err
}
// NotifyProviderEvent implements the Notifier interface
func (c *GRPCClient) NotifyProviderEvent(timestamp int64, action, username, objectType, objectName, ip string, object []byte) error {
ctx, cancel := context.WithTimeout(context.Background(), rpcTimeout)
defer cancel()
_, err := c.client.SendProviderEvent(ctx, &proto.ProviderEvent{
Timestamp: timestamp,
Action: action,
ObjectType: objectType,
Username: username,
Ip: ip,
ObjectName: objectName,
ObjectData: object,
})
return err
}
// GRPCServer defines the gRPC server that GRPCClient talks to.
type GRPCServer struct {
Impl Notifier
}
// SendFsEvent implements the server side fs notify method
func (s *GRPCServer) SendFsEvent(ctx context.Context, req *proto.FsEvent) (*emptypb.Empty, error) {
err := s.Impl.NotifyFsEvent(req.Timestamp, req.Action, req.Username, req.FsPath, req.FsTargetPath, req.SshCmd,
req.Protocol, req.Ip, req.VirtualPath, req.VirtualTargetPath, req.SessionId, req.FileSize, int(req.Status))
return &emptypb.Empty{}, err
}
// SendProviderEvent implements the server side provider event notify method
func (s *GRPCServer) SendProviderEvent(ctx context.Context, req *proto.ProviderEvent) (*emptypb.Empty, error) {
err := s.Impl.NotifyProviderEvent(req.Timestamp, req.Action, req.Username, req.ObjectType, req.ObjectName,
req.Ip, req.ObjectData)
return &emptypb.Empty{}, err
}


@@ -1,59 +0,0 @@
// Package notifier defines the implementation for event notifier plugins.
// Notifier plugins allow receiving notifications for supported filesystem
// events, such as file uploads and downloads, and for provider events, such
// as object add, update and delete.
package notifier
import (
"context"
"github.com/hashicorp/go-plugin"
"google.golang.org/grpc"
"github.com/drakkan/sftpgo/v2/sdk/plugin/notifier/proto"
)
const (
// PluginName defines the name for a notifier plugin
PluginName = "notifier"
)
// Handshake is a common handshake that is shared by plugin and host.
var Handshake = plugin.HandshakeConfig{
ProtocolVersion: 1,
MagicCookieKey: "SFTPGO_PLUGIN_NOTIFIER",
MagicCookieValue: "c499b98b-cd59-4df2-92b3-6268817f4d80",
}
// PluginMap is the map of plugins we can dispense.
var PluginMap = map[string]plugin.Plugin{
PluginName: &Plugin{},
}
// Notifier defines the interface for notifier plugins
type Notifier interface {
NotifyFsEvent(timestamp int64, action, username, fsPath, fsTargetPath, sshCmd, protocol, ip,
virtualPath, virtualTargetPath, sessionID string, fileSize int64, status int) error
NotifyProviderEvent(timestamp int64, action, username, objectType, objectName, ip string, object []byte) error
}
// Plugin defines the implementation to serve/connect to a notifier plugin
type Plugin struct {
plugin.Plugin
Impl Notifier
}
// GRPCServer defines the GRPC server implementation for this plugin
func (p *Plugin) GRPCServer(broker *plugin.GRPCBroker, s *grpc.Server) error {
proto.RegisterNotifierServer(s, &GRPCServer{
Impl: p.Impl,
})
return nil
}
// GRPCClient defines the GRPC client implementation for this plugin
func (p *Plugin) GRPCClient(ctx context.Context, broker *plugin.GRPCBroker, c *grpc.ClientConn) (interface{}, error) {
return &GRPCClient{
client: proto.NewNotifierClient(c),
}, nil
}


@@ -1,519 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.17.3
// source: notifier/proto/notifier.proto
package proto
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
emptypb "google.golang.org/protobuf/types/known/emptypb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type FsEvent struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Timestamp int64 `protobuf:"varint,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
Action string `protobuf:"bytes,2,opt,name=action,proto3" json:"action,omitempty"`
Username string `protobuf:"bytes,3,opt,name=username,proto3" json:"username,omitempty"`
FsPath string `protobuf:"bytes,4,opt,name=fs_path,json=fsPath,proto3" json:"fs_path,omitempty"`
FsTargetPath string `protobuf:"bytes,5,opt,name=fs_target_path,json=fsTargetPath,proto3" json:"fs_target_path,omitempty"`
SshCmd string `protobuf:"bytes,6,opt,name=ssh_cmd,json=sshCmd,proto3" json:"ssh_cmd,omitempty"`
FileSize int64 `protobuf:"varint,7,opt,name=file_size,json=fileSize,proto3" json:"file_size,omitempty"`
Protocol string `protobuf:"bytes,8,opt,name=protocol,proto3" json:"protocol,omitempty"`
Status int32 `protobuf:"varint,9,opt,name=status,proto3" json:"status,omitempty"`
Ip string `protobuf:"bytes,10,opt,name=ip,proto3" json:"ip,omitempty"`
VirtualPath string `protobuf:"bytes,11,opt,name=virtual_path,json=virtualPath,proto3" json:"virtual_path,omitempty"`
VirtualTargetPath string `protobuf:"bytes,12,opt,name=virtual_target_path,json=virtualTargetPath,proto3" json:"virtual_target_path,omitempty"`
SessionId string `protobuf:"bytes,13,opt,name=session_id,json=sessionId,proto3" json:"session_id,omitempty"`
}
func (x *FsEvent) Reset() {
*x = FsEvent{}
if protoimpl.UnsafeEnabled {
mi := &file_notifier_proto_notifier_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *FsEvent) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*FsEvent) ProtoMessage() {}
func (x *FsEvent) ProtoReflect() protoreflect.Message {
mi := &file_notifier_proto_notifier_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use FsEvent.ProtoReflect.Descriptor instead.
func (*FsEvent) Descriptor() ([]byte, []int) {
return file_notifier_proto_notifier_proto_rawDescGZIP(), []int{0}
}
func (x *FsEvent) GetTimestamp() int64 {
if x != nil {
return x.Timestamp
}
return 0
}
func (x *FsEvent) GetAction() string {
if x != nil {
return x.Action
}
return ""
}
func (x *FsEvent) GetUsername() string {
if x != nil {
return x.Username
}
return ""
}
func (x *FsEvent) GetFsPath() string {
if x != nil {
return x.FsPath
}
return ""
}
func (x *FsEvent) GetFsTargetPath() string {
if x != nil {
return x.FsTargetPath
}
return ""
}
func (x *FsEvent) GetSshCmd() string {
if x != nil {
return x.SshCmd
}
return ""
}
func (x *FsEvent) GetFileSize() int64 {
if x != nil {
return x.FileSize
}
return 0
}
func (x *FsEvent) GetProtocol() string {
if x != nil {
return x.Protocol
}
return ""
}
func (x *FsEvent) GetStatus() int32 {
if x != nil {
return x.Status
}
return 0
}
func (x *FsEvent) GetIp() string {
if x != nil {
return x.Ip
}
return ""
}
func (x *FsEvent) GetVirtualPath() string {
if x != nil {
return x.VirtualPath
}
return ""
}
func (x *FsEvent) GetVirtualTargetPath() string {
if x != nil {
return x.VirtualTargetPath
}
return ""
}
func (x *FsEvent) GetSessionId() string {
if x != nil {
return x.SessionId
}
return ""
}
type ProviderEvent struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Timestamp int64 `protobuf:"varint,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
Action string `protobuf:"bytes,2,opt,name=action,proto3" json:"action,omitempty"`
ObjectType string `protobuf:"bytes,3,opt,name=object_type,json=objectType,proto3" json:"object_type,omitempty"`
Username string `protobuf:"bytes,4,opt,name=username,proto3" json:"username,omitempty"`
Ip string `protobuf:"bytes,5,opt,name=ip,proto3" json:"ip,omitempty"`
ObjectName string `protobuf:"bytes,6,opt,name=object_name,json=objectName,proto3" json:"object_name,omitempty"`
ObjectData []byte `protobuf:"bytes,7,opt,name=object_data,json=objectData,proto3" json:"object_data,omitempty"` // object JSON serialized
}
func (x *ProviderEvent) Reset() {
*x = ProviderEvent{}
if protoimpl.UnsafeEnabled {
mi := &file_notifier_proto_notifier_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ProviderEvent) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ProviderEvent) ProtoMessage() {}
func (x *ProviderEvent) ProtoReflect() protoreflect.Message {
mi := &file_notifier_proto_notifier_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ProviderEvent.ProtoReflect.Descriptor instead.
func (*ProviderEvent) Descriptor() ([]byte, []int) {
return file_notifier_proto_notifier_proto_rawDescGZIP(), []int{1}
}
func (x *ProviderEvent) GetTimestamp() int64 {
if x != nil {
return x.Timestamp
}
return 0
}
func (x *ProviderEvent) GetAction() string {
if x != nil {
return x.Action
}
return ""
}
func (x *ProviderEvent) GetObjectType() string {
if x != nil {
return x.ObjectType
}
return ""
}
func (x *ProviderEvent) GetUsername() string {
if x != nil {
return x.Username
}
return ""
}
func (x *ProviderEvent) GetIp() string {
if x != nil {
return x.Ip
}
return ""
}
func (x *ProviderEvent) GetObjectName() string {
if x != nil {
return x.ObjectName
}
return ""
}
func (x *ProviderEvent) GetObjectData() []byte {
if x != nil {
return x.ObjectData
}
return nil
}
var File_notifier_proto_notifier_proto protoreflect.FileDescriptor
var file_notifier_proto_notifier_proto_rawDesc = []byte{
0x0a, 0x1d, 0x6e, 0x6f, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x2f, 0x6e, 0x6f, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12,
0x05, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1b, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70, 0x74, 0x79, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x22, 0x86, 0x03, 0x0a, 0x07, 0x46, 0x73, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x12,
0x1c, 0x0a, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18, 0x01, 0x20, 0x01,
0x28, 0x03, 0x52, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x12, 0x16, 0x0a,
0x06, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x61,
0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x1a, 0x0a, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d,
0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d,
0x65, 0x12, 0x17, 0x0a, 0x07, 0x66, 0x73, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x18, 0x04, 0x20, 0x01,
0x28, 0x09, 0x52, 0x06, 0x66, 0x73, 0x50, 0x61, 0x74, 0x68, 0x12, 0x24, 0x0a, 0x0e, 0x66, 0x73,
0x5f, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x18, 0x05, 0x20, 0x01,
0x28, 0x09, 0x52, 0x0c, 0x66, 0x73, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x50, 0x61, 0x74, 0x68,
0x12, 0x17, 0x0a, 0x07, 0x73, 0x73, 0x68, 0x5f, 0x63, 0x6d, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28,
0x09, 0x52, 0x06, 0x73, 0x73, 0x68, 0x43, 0x6d, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x66, 0x69, 0x6c,
0x65, 0x5f, 0x73, 0x69, 0x7a, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x66, 0x69,
0x6c, 0x65, 0x53, 0x69, 0x7a, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63,
0x6f, 0x6c, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63,
0x6f, 0x6c, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x09, 0x20, 0x01,
0x28, 0x05, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x70,
0x18, 0x0a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x70, 0x12, 0x21, 0x0a, 0x0c, 0x76, 0x69,
0x72, 0x74, 0x75, 0x61, 0x6c, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x09,
0x52, 0x0b, 0x76, 0x69, 0x72, 0x74, 0x75, 0x61, 0x6c, 0x50, 0x61, 0x74, 0x68, 0x12, 0x2e, 0x0a,
0x13, 0x76, 0x69, 0x72, 0x74, 0x75, 0x61, 0x6c, 0x5f, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x5f,
0x70, 0x61, 0x74, 0x68, 0x18, 0x0c, 0x20, 0x01, 0x28, 0x09, 0x52, 0x11, 0x76, 0x69, 0x72, 0x74,
0x75, 0x61, 0x6c, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x50, 0x61, 0x74, 0x68, 0x12, 0x1d, 0x0a,
0x0a, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x69, 0x64, 0x18, 0x0d, 0x20, 0x01, 0x28,
0x09, 0x52, 0x09, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x64, 0x22, 0xd4, 0x01, 0x0a,
0x0d, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x12, 0x1c,
0x0a, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18, 0x01, 0x20, 0x01, 0x28,
0x03, 0x52, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x12, 0x16, 0x0a, 0x06,
0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x61, 0x63,
0x74, 0x69, 0x6f, 0x6e, 0x12, 0x1f, 0x0a, 0x0b, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x5f, 0x74,
0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x6f, 0x62, 0x6a, 0x65, 0x63,
0x74, 0x54, 0x79, 0x70, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d,
0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x75, 0x73, 0x65, 0x72, 0x6e, 0x61, 0x6d,
0x65, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x70, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69,
0x70, 0x12, 0x1f, 0x0a, 0x0b, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x5f, 0x6e, 0x61, 0x6d, 0x65,
0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x4e, 0x61,
0x6d, 0x65, 0x12, 0x1f, 0x0a, 0x0b, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x5f, 0x64, 0x61, 0x74,
0x61, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0a, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x44,
0x61, 0x74, 0x61, 0x32, 0x84, 0x01, 0x0a, 0x08, 0x4e, 0x6f, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72,
0x12, 0x35, 0x0a, 0x0b, 0x53, 0x65, 0x6e, 0x64, 0x46, 0x73, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x12,
0x0e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x46, 0x73, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x1a,
0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x12, 0x41, 0x0a, 0x11, 0x53, 0x65, 0x6e, 0x64, 0x50,
0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x12, 0x14, 0x2e, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x45, 0x76, 0x65,
0x6e, 0x74, 0x1a, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x42, 0x1b, 0x5a, 0x19, 0x73, 0x64,
0x6b, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x6e, 0x6f, 0x74, 0x69, 0x66, 0x69, 0x65,
0x72, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_notifier_proto_notifier_proto_rawDescOnce sync.Once
file_notifier_proto_notifier_proto_rawDescData = file_notifier_proto_notifier_proto_rawDesc
)
func file_notifier_proto_notifier_proto_rawDescGZIP() []byte {
file_notifier_proto_notifier_proto_rawDescOnce.Do(func() {
file_notifier_proto_notifier_proto_rawDescData = protoimpl.X.CompressGZIP(file_notifier_proto_notifier_proto_rawDescData)
})
return file_notifier_proto_notifier_proto_rawDescData
}
var file_notifier_proto_notifier_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_notifier_proto_notifier_proto_goTypes = []interface{}{
(*FsEvent)(nil), // 0: proto.FsEvent
(*ProviderEvent)(nil), // 1: proto.ProviderEvent
(*emptypb.Empty)(nil), // 2: google.protobuf.Empty
}
var file_notifier_proto_notifier_proto_depIdxs = []int32{
0, // 0: proto.Notifier.SendFsEvent:input_type -> proto.FsEvent
1, // 1: proto.Notifier.SendProviderEvent:input_type -> proto.ProviderEvent
2, // 2: proto.Notifier.SendFsEvent:output_type -> google.protobuf.Empty
2, // 3: proto.Notifier.SendProviderEvent:output_type -> google.protobuf.Empty
2, // [2:4] is the sub-list for method output_type
0, // [0:2] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_notifier_proto_notifier_proto_init() }
func file_notifier_proto_notifier_proto_init() {
if File_notifier_proto_notifier_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_notifier_proto_notifier_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*FsEvent); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_notifier_proto_notifier_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ProviderEvent); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_notifier_proto_notifier_proto_rawDesc,
NumEnums: 0,
NumMessages: 2,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_notifier_proto_notifier_proto_goTypes,
DependencyIndexes: file_notifier_proto_notifier_proto_depIdxs,
MessageInfos: file_notifier_proto_notifier_proto_msgTypes,
}.Build()
File_notifier_proto_notifier_proto = out.File
file_notifier_proto_notifier_proto_rawDesc = nil
file_notifier_proto_notifier_proto_goTypes = nil
file_notifier_proto_notifier_proto_depIdxs = nil
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConnInterface
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion6
// NotifierClient is the client API for Notifier service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type NotifierClient interface {
SendFsEvent(ctx context.Context, in *FsEvent, opts ...grpc.CallOption) (*emptypb.Empty, error)
SendProviderEvent(ctx context.Context, in *ProviderEvent, opts ...grpc.CallOption) (*emptypb.Empty, error)
}
type notifierClient struct {
cc grpc.ClientConnInterface
}
func NewNotifierClient(cc grpc.ClientConnInterface) NotifierClient {
return &notifierClient{cc}
}
func (c *notifierClient) SendFsEvent(ctx context.Context, in *FsEvent, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/proto.Notifier/SendFsEvent", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *notifierClient) SendProviderEvent(ctx context.Context, in *ProviderEvent, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/proto.Notifier/SendProviderEvent", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// NotifierServer is the server API for Notifier service.
type NotifierServer interface {
SendFsEvent(context.Context, *FsEvent) (*emptypb.Empty, error)
SendProviderEvent(context.Context, *ProviderEvent) (*emptypb.Empty, error)
}
// UnimplementedNotifierServer can be embedded to have forward compatible implementations.
type UnimplementedNotifierServer struct {
}
func (*UnimplementedNotifierServer) SendFsEvent(context.Context, *FsEvent) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method SendFsEvent not implemented")
}
func (*UnimplementedNotifierServer) SendProviderEvent(context.Context, *ProviderEvent) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method SendProviderEvent not implemented")
}
func RegisterNotifierServer(s *grpc.Server, srv NotifierServer) {
s.RegisterService(&_Notifier_serviceDesc, srv)
}
func _Notifier_SendFsEvent_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(FsEvent)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(NotifierServer).SendFsEvent(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.Notifier/SendFsEvent",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(NotifierServer).SendFsEvent(ctx, req.(*FsEvent))
}
return interceptor(ctx, in, info, handler)
}
func _Notifier_SendProviderEvent_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ProviderEvent)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(NotifierServer).SendProviderEvent(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/proto.Notifier/SendProviderEvent",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(NotifierServer).SendProviderEvent(ctx, req.(*ProviderEvent))
}
return interceptor(ctx, in, info, handler)
}
var _Notifier_serviceDesc = grpc.ServiceDesc{
ServiceName: "proto.Notifier",
HandlerType: (*NotifierServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "SendFsEvent",
Handler: _Notifier_SendFsEvent_Handler,
},
{
MethodName: "SendProviderEvent",
Handler: _Notifier_SendProviderEvent_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "notifier/proto/notifier.proto",
}
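On the host side, the GRPCClient wrapper shown earlier is obtained by dispensing the plugin through hashicorp/go-plugin rather than by dialing the generated NotifierClient directly. The sketch below is not SFTPGo's actual wiring; the plugin binary path and the event values are made up.

package main

import (
    "os/exec"
    "time"

    "github.com/hashicorp/go-plugin"

    "github.com/drakkan/sftpgo/v2/sdk/plugin/notifier"
)

func main() {
    client := plugin.NewClient(&plugin.ClientConfig{
        HandshakeConfig:  notifier.Handshake,
        Plugins:          notifier.PluginMap,
        Cmd:              exec.Command("./notifier-plugin"), // hypothetical plugin binary
        AllowedProtocols: []plugin.Protocol{plugin.ProtocolGRPC},
    })
    defer client.Kill()

    rpcClient, err := client.Client()
    if err != nil {
        panic(err)
    }
    raw, err := rpcClient.Dispense(notifier.PluginName)
    if err != nil {
        panic(err)
    }
    n := raw.(notifier.Notifier)
    // send a fake upload notification; every value here is illustrative
    _ = n.NotifyFsEvent(time.Now().UnixMilli(), "upload", "user1", "/srv/data/file.txt", "", "",
        "SFTP", "127.0.0.1", "/file.txt", "", "session-1", 1024, 0)
}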


@@ -1,37 +0,0 @@
syntax = "proto3";
package proto;
import "google/protobuf/empty.proto";
option go_package = "sdk/plugin/notifier/proto";
message FsEvent {
int64 timestamp = 1;
string action = 2;
string username = 3;
string fs_path = 4;
string fs_target_path = 5;
string ssh_cmd = 6;
int64 file_size = 7;
string protocol = 8;
int32 status = 9;
string ip = 10;
string virtual_path = 11;
string virtual_target_path = 12;
string session_id = 13;
}
message ProviderEvent {
int64 timestamp = 1;
string action = 2;
string object_type = 3;
string username = 4;
string ip = 5;
string object_name = 6;
bytes object_data = 7; // object JSON serialized
}
service Notifier {
rpc SendFsEvent(FsEvent) returns (google.protobuf.Empty);
rpc SendProviderEvent(ProviderEvent) returns (google.protobuf.Empty);
}


@@ -1,2 +0,0 @@
// Package sdk provides SFTPGo data structures primarily intended for use within plugins
package sdk


@@ -1,276 +0,0 @@
package sdk
import (
"fmt"
"net"
"strings"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/util"
)
// Web Client/user REST API restrictions
const (
WebClientPubKeyChangeDisabled = "publickey-change-disabled"
WebClientWriteDisabled = "write-disabled"
WebClientMFADisabled = "mfa-disabled"
WebClientPasswordChangeDisabled = "password-change-disabled"
WebClientAPIKeyAuthChangeDisabled = "api-key-auth-change-disabled"
WebClientInfoChangeDisabled = "info-change-disabled"
WebClientSharesDisabled = "shares-disabled"
WebClientPasswordResetDisabled = "password-reset-disabled"
)
var (
// WebClientOptions defines the available options for the web client interface/user REST API
WebClientOptions = []string{WebClientWriteDisabled, WebClientPasswordChangeDisabled, WebClientPasswordResetDisabled,
WebClientPubKeyChangeDisabled, WebClientMFADisabled, WebClientAPIKeyAuthChangeDisabled, WebClientInfoChangeDisabled,
WebClientSharesDisabled}
// UserTypes defines the supported user type hints for auth plugins
UserTypes = []string{string(UserTypeLDAP), string(UserTypeOS)}
)
// TLSUsername defines the TLS certificate attribute to use as username
type TLSUsername string
// Supported certificate attributes to use as username
const (
TLSUsernameNone TLSUsername = "None"
TLSUsernameCN TLSUsername = "CommonName"
)
// UserType defines the supported user types.
// This is a hint for external auth plugins; it is not used by SFTPGo directly
type UserType string
// User types, auth plugins could use this info to choose the correct authentication backend
const (
UserTypeLDAP UserType = "LDAPUser"
UserTypeOS UserType = "OSUser"
)
// DirectoryPermissions defines permissions for a directory virtual path
type DirectoryPermissions struct {
Path string
Permissions []string
}
// HasPerm returns true if the directory has the specified permission
func (d *DirectoryPermissions) HasPerm(perm string) bool {
return util.IsStringInSlice(perm, d.Permissions)
}
// PatternsFilter defines filters based on shell-like patterns.
// These restrictions do not apply to file listings for performance reasons, so
// a denied file cannot be downloaded/overwritten/renamed but will still appear
// in the list of files.
// System commands such as Git and rsync interact with the filesystem directly
// and are not aware of these restrictions, so they are not allowed
// inside paths with pattern filters
type PatternsFilter struct {
// Virtual path, if no other specific filter is defined, the filter applies for
// sub directories too.
// For example if filters are defined for the paths "/" and "/sub" then the
// filters for "/" are applied for any file outside the "/sub" directory
Path string `json:"path"`
// Files with these case-insensitive patterns are allowed.
// Denied file patterns are evaluated before the allowed ones
AllowedPatterns []string `json:"allowed_patterns,omitempty"`
// Files with these case-insensitive patterns are not allowed.
// Denied file patterns are evaluated before the allowed ones
DeniedPatterns []string `json:"denied_patterns,omitempty"`
}
// GetCommaSeparatedPatterns returns the first non-empty patterns list as a comma separated string
func (p *PatternsFilter) GetCommaSeparatedPatterns() string {
if len(p.DeniedPatterns) > 0 {
return strings.Join(p.DeniedPatterns, ",")
}
return strings.Join(p.AllowedPatterns, ",")
}
// IsDenied returns true if the filter has one or more denied patterns
func (p *PatternsFilter) IsDenied() bool {
return len(p.DeniedPatterns) > 0
}
// IsAllowed returns true if the filter has one or more allowed patterns
func (p *PatternsFilter) IsAllowed() bool {
return len(p.AllowedPatterns) > 0
}
// HooksFilter defines user specific overrides for global hooks
type HooksFilter struct {
ExternalAuthDisabled bool `json:"external_auth_disabled"`
PreLoginDisabled bool `json:"pre_login_disabled"`
CheckPasswordDisabled bool `json:"check_password_disabled"`
}
// RecoveryCode defines a 2FA recovery code
type RecoveryCode struct {
Secret *kms.Secret `json:"secret"`
Used bool `json:"used,omitempty"`
}
// TOTPConfig defines the time-based one time password configuration
type TOTPConfig struct {
Enabled bool `json:"enabled,omitempty"`
ConfigName string `json:"config_name,omitempty"`
Secret *kms.Secret `json:"secret,omitempty"`
// TOTP will be required for the specified protocols.
// SSH protocol (SFTP/SCP/SSH commands) will ask for the TOTP passcode if the client uses keyboard interactive
// authentication.
// FTP has no standard way to support two-factor authentication; if you
// enable support for this protocol you have to append the TOTP passcode to the password.
// For example, if your password is "password" and your one time passcode is
// "123456", you have to use "password123456" as the password.
Protocols []string `json:"protocols,omitempty"`
}
// BandwidthLimit defines a per-source bandwidth limit
type BandwidthLimit struct {
// Source networks in CIDR notation as defined in RFC 4632 and RFC 4291
// for example "192.0.2.0/24" or "2001:db8::/32". The limit applies if the
// defined networks contain the client IP
Sources []string `json:"sources"`
// Maximum upload bandwidth as KB/s
UploadBandwidth int64 `json:"upload_bandwidth,omitempty"`
// Maximum download bandwidth as KB/s
DownloadBandwidth int64 `json:"download_bandwidth,omitempty"`
}
// Validate returns an error if the bandwidth limit is not valid
func (l *BandwidthLimit) Validate() error {
for _, source := range l.Sources {
_, _, err := net.ParseCIDR(source)
if err != nil {
return util.NewValidationError(fmt.Sprintf("could not parse bandwidth limit source %#v: %v", source, err))
}
}
return nil
}
// GetSourcesAsString returns the sources as comma separated string
func (l *BandwidthLimit) GetSourcesAsString() string {
return strings.Join(l.Sources, ",")
}
// UserFilters defines additional restrictions for a user
// TODO: rename to UserOptions in v3
type UserFilters struct {
// only clients connecting from these IP/Mask are allowed.
// IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
// for example "192.0.2.0/24" or "2001:db8::/32"
AllowedIP []string `json:"allowed_ip,omitempty"`
// clients connecting from these IP/Mask are not allowed.
// Denied rules will be evaluated before allowed ones
DeniedIP []string `json:"denied_ip,omitempty"`
// these login methods are not allowed.
// If null or empty any available login method is allowed
DeniedLoginMethods []string `json:"denied_login_methods,omitempty"`
// these protocols are not allowed.
// If null or empty any available protocol is allowed
DeniedProtocols []string `json:"denied_protocols,omitempty"`
// filter based on shell patterns.
// Please note that these restrictions can be easily bypassed.
FilePatterns []PatternsFilter `json:"file_patterns,omitempty"`
// max size allowed for a single upload, 0 means unlimited
MaxUploadFileSize int64 `json:"max_upload_file_size,omitempty"`
// TLS certificate attribute to use as username.
// For FTP clients it must match the name provided using the
// "USER" command
TLSUsername TLSUsername `json:"tls_username,omitempty"`
// user specific hook overrides
Hooks HooksFilter `json:"hooks,omitempty"`
// Disable checks for existence and automatic creation of home directory
// and virtual folders.
// SFTPGo requires that the user's home directory, virtual folder root,
// and intermediate paths to virtual folders exist to work properly.
// If you already know that the required directories exist, disabling
// these checks will speed up login.
// You could, for example, disable these checks after the first login
DisableFsChecks bool `json:"disable_fs_checks,omitempty"`
// WebClient related configuration options
WebClient []string `json:"web_client,omitempty"`
// API key auth allows impersonating this user with an API key
AllowAPIKeyAuth bool `json:"allow_api_key_auth,omitempty"`
// Time-based one time passwords configuration
TOTPConfig TOTPConfig `json:"totp_config,omitempty"`
// Recovery codes to use if the user loses access to their second factor auth device.
// Each code can only be used once; you should use these codes to log in and disable or
// reset 2FA for your account
RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
// UserType is a hint for authentication plugins.
// It is ignored when using SFTPGo internal authentication
UserType string `json:"user_type,omitempty"`
// Per-source bandwidth limits
BandwidthLimits []BandwidthLimit `json:"bandwidth_limits,omitempty"`
}
// BaseUser defines the shared user fields
type BaseUser struct {
// Data provider unique identifier
ID int64 `json:"id"`
// 1 enabled, 0 disabled (login is not allowed)
Status int `json:"status"`
// Username
Username string `json:"username"`
// Email
Email string `json:"email,omitempty"`
// Account expiration date as unix timestamp in milliseconds. An expired account cannot log in.
// 0 means no expiration
ExpirationDate int64 `json:"expiration_date"`
// Password used for password authentication.
// For users created using the SFTPGo REST API the password is stored using the bcrypt or argon2id hashing algorithm.
// Checking passwords stored with pbkdf2, md5crypt and sha512crypt is supported too.
Password string `json:"password,omitempty"`
// PublicKeys used for public key authentication. At least one of password or public key is mandatory
PublicKeys []string `json:"public_keys,omitempty"`
// The user cannot upload or download files outside this directory. Must be an absolute path
HomeDir string `json:"home_dir"`
// If SFTPGo runs as the root system user then the created files and directories will be assigned to this system UID
UID int `json:"uid"`
// If SFTPGo runs as the root system user then the created files and directories will be assigned to this system GID
GID int `json:"gid"`
// Maximum concurrent sessions. 0 means unlimited
MaxSessions int `json:"max_sessions"`
// Maximum size allowed as bytes. 0 means unlimited
QuotaSize int64 `json:"quota_size"`
// Maximum number of files allowed. 0 means unlimited
QuotaFiles int `json:"quota_files"`
// List of the granted permissions
Permissions map[string][]string `json:"permissions"`
// Used quota as bytes
UsedQuotaSize int64 `json:"used_quota_size,omitempty"`
// Used quota as number of files
UsedQuotaFiles int `json:"used_quota_files,omitempty"`
// Last quota update as unix timestamp in milliseconds
LastQuotaUpdate int64 `json:"last_quota_update,omitempty"`
// Maximum upload bandwidth as KB/s, 0 means unlimited.
// This is the default if no per-source limit matches
UploadBandwidth int64 `json:"upload_bandwidth,omitempty"`
// Maximum download bandwidth as KB/s, 0 means unlimited.
// This is the default if no per-source limit matches
DownloadBandwidth int64 `json:"download_bandwidth,omitempty"`
// Last login as unix timestamp in milliseconds
LastLogin int64 `json:"last_login,omitempty"`
// Creation time as unix timestamp in milliseconds. It will be 0 for users created before v2.2.0
CreatedAt int64 `json:"created_at"`
// last update time as unix timestamp in milliseconds
UpdatedAt int64 `json:"updated_at"`
// Additional restrictions
Filters UserFilters `json:"filters"`
// optional description, for example full name
Description string `json:"description,omitempty"`
// free form text field for external systems
AdditionalInfo string `json:"additional_info,omitempty"`
}
// User defines a SFTPGo user
type User struct {
BaseUser
// Mapping between virtual paths and virtual folders
VirtualFolders []VirtualFolder `json:"virtual_folders,omitempty"`
// Filesystem configuration details
FsConfig Filesystem `json:"filesystem"`
}
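A small illustrative sketch of the filter types above, assuming the github.com/drakkan/sftpgo/v2/sdk import path used elsewhere in this diff; the paths, patterns and limits are made up and mirror the "/" versus "/sub" behaviour described in the PatternsFilter comment.

package main

import (
    "fmt"

    "github.com/drakkan/sftpgo/v2/sdk"
)

func main() {
    // Filters defined for "/" apply to any file outside "/sub"; "/sub" gets its own filter.
    filters := []sdk.PatternsFilter{
        {Path: "/", DeniedPatterns: []string{"*.exe"}},
        {Path: "/sub", AllowedPatterns: []string{"*.jpg", "*.png"}},
    }
    for _, f := range filters {
        fmt.Printf("path %q, has denied patterns: %v, patterns: %s\n",
            f.Path, f.IsDenied(), f.GetCommaSeparatedPatterns())
    }

    // A per-source bandwidth limit; Validate rejects sources that are not valid CIDR networks.
    limit := sdk.BandwidthLimit{
        Sources:         []string{"192.0.2.0/24", "2001:db8::/32"},
        UploadBandwidth: 512, // KB/s
    }
    if err := limit.Validate(); err != nil {
        fmt.Println("invalid bandwidth limit:", err)
    }
}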


@@ -14,7 +14,7 @@ import (
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpd"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
)
@@ -168,7 +168,7 @@ func (s *Service) startServices() {
go func() {
redactedConf := sftpdConf
redactedConf.KeyboardInteractiveHook = util.GetRedactedURL(sftpdConf.KeyboardInteractiveHook)
logger.Debug(logSender, "", "initializing SFTP server with config %+v", redactedConf)
logger.Info(logSender, "", "initializing SFTP server with config %+v", redactedConf)
if err := sftpdConf.Initialize(s.ConfigDir); err != nil {
logger.Error(logSender, "", "could not start SFTP server: %v", err)
logger.ErrorToConsole("could not start SFTP server: %v", err)
@@ -177,7 +177,7 @@ func (s *Service) startServices() {
s.Shutdown <- true
}()
} else {
logger.Debug(logSender, "", "SFTP server not started, disabled in config file")
logger.Info(logSender, "", "SFTP server not started, disabled in config file")
}
if httpdConf.ShouldBind() {
@@ -190,9 +190,9 @@ func (s *Service) startServices() {
s.Shutdown <- true
}()
} else {
logger.Debug(logSender, "", "HTTP server not started, disabled in config file")
logger.Info(logSender, "", "HTTP server not started, disabled in config file")
if s.PortableMode != 1 {
logger.DebugToConsole("HTTP server not started, disabled in config file")
logger.InfoToConsole("HTTP server not started, disabled in config file")
}
}
if ftpdConf.ShouldBind() {
@@ -205,7 +205,7 @@ func (s *Service) startServices() {
s.Shutdown <- true
}()
} else {
logger.Debug(logSender, "", "FTP server not started, disabled in config file")
logger.Info(logSender, "", "FTP server not started, disabled in config file")
}
if webDavDConf.ShouldBind() {
go func() {
@@ -217,7 +217,7 @@ func (s *Service) startServices() {
s.Shutdown <- true
}()
} else {
logger.Debug(logSender, "", "WebDAV server not started, disabled in config file")
logger.Info(logSender, "", "WebDAV server not started, disabled in config file")
}
if telemetryConf.ShouldBind() {
go func() {
@@ -229,9 +229,9 @@ func (s *Service) startServices() {
s.Shutdown <- true
}()
} else {
logger.Debug(logSender, "", "telemetry server not started, disabled in config file")
logger.Info(logSender, "", "telemetry server not started, disabled in config file")
if s.PortableMode != 1 {
logger.DebugToConsole("telemetry server not started, disabled in config file")
logger.InfoToConsole("telemetry server not started, disabled in config file")
}
}
}


@@ -13,13 +13,13 @@ import (
"time"
"github.com/grandcat/zeroconf"
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/ftpd"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"


@@ -16,7 +16,7 @@ import (
"github.com/drakkan/sftpgo/v2/ftpd"
"github.com/drakkan/sftpgo/v2/httpd"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/telemetry"
"github.com/drakkan/sftpgo/v2/webdavd"
)


@@ -13,7 +13,7 @@ import (
"github.com/drakkan/sftpgo/v2/ftpd"
"github.com/drakkan/sftpgo/v2/httpd"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/telemetry"
"github.com/drakkan/sftpgo/v2/webdavd"
)


@@ -5,7 +5,7 @@ import (
"os/signal"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/plugin"
)
func registerSignals() {


@@ -11,12 +11,12 @@ import (
"time"
"github.com/minio/sio"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpdtest"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/vfs"
)

Some files were not shown because too many files have changed in this diff.