add support for serving Google Cloud Storage over SFTP/SCP

Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder.

commit 3491717c26 (parent 45a13f5f4e)
33 changed files with 1632 additions and 165 deletions
README.md (67 changed lines)
@@ -22,7 +22,7 @@ Full featured and highly configurable SFTP server
 - Atomic uploads are configurable.
 - Support for Git repositories over SSH.
 - SCP and rsync are supported.
-- Support for serving S3 Compatible Object Storage over SFTP.
+- Support for serving local filesystem, S3 Compatible Object Storage and Google Cloud Storage over SFTP/SCP.
 - Prometheus metrics are exposed.
 - REST API for users management, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
 - Web based interface to easily manage users and connections.
@@ -178,6 +178,7 @@ The `sftpgo` configuration file contains the following sections:
 - `http_notification_url`, a valid URL. Leave empty to disable.
 - `external_auth_program`, string. Absolute path to an external program to use for users authentication. See the "External Authentication" paragraph for more details.
 - `external_auth_scope`, integer. 0 means all supported authentication scopes (passwords, public keys and keyboard interactive). 1 means passwords only. 2 means public keys only. 4 means keyboard interactive only. The flags can be combined, for example 6 means public keys and keyboard interactive.
+- `credentials_path`, string. It defines the directory for storing user provided credential files such as Google Cloud Storage credentials. This can be an absolute path or a path relative to the config dir.
 - **"httpd"**, the configuration for the HTTP server used to serve REST API
 - `bind_port`, integer. The port used for serving HTTP requests. Set to 0 to disable HTTP server. Default: 8080
 - `bind_address`, string. Leave blank to listen on all available network interfaces. Default: "127.0.0.1"
@@ -232,7 +233,8 @@ Here is a full example showing the default config in JSON format:
     "http_notification_url": ""
   },
   "external_auth_program": "",
-  "external_auth_scope": 0
+  "external_auth_scope": 0,
+  "credentials_path": "credentials"
 },
 "httpd": {
   "bind_port": 8080,
@@ -270,10 +272,12 @@ Please note that to override configuration options with environment variables a
 ### Data provider initialization
 
-Before starting `sftpgo serve` a data provider must be configured.
+Before starting `sftpgo serve` please ensure that the configured data provider is properly initialized.
 
-SQL scripts to create the required database structure can be found inside the source tree [sql](./sql "sql") directory. The SQL scripts filename is, by convention, the date as `YYYYMMDD` and the suffix `.sql`. You need to apply all the SQL scripts for your database ordered by name, for example `20190828.sql` must be applied before `20191112.sql` and so on.
-Example for `sqlite`: `find sql/sqlite/ -type f -iname '*.sql' -print |sort -n|xargs cat |sqlite3 sftpgo.db`
+SQL based data providers (SQLite, MySQL, PostgreSQL) require the creation of a database containing the required tables. Memory and bolt data providers do not require an initialization.
+
+SQL scripts to create the required database structure can be found inside the source tree [sql](./sql "sql") directory. The SQL scripts filename is, by convention, the date as `YYYYMMDD` and the suffix `.sql`. You need to apply all the SQL scripts for your database ordered by name, for example `20190828.sql` must be applied before `20191112.sql` and so on.
+Example for `SQLite`: `find sql/sqlite/ -type f -iname '*.sql' -print | sort -n | xargs cat | sqlite3 sftpgo.db`
 
 ### Starting SFTPGo in server mode
@@ -482,13 +486,13 @@ The HTTP request has a 15 seconds timeout.
 
 ## S3 Compatible Object Storage backends
 
-Each user can be mapped with an S3-Compatible bucket or a bucket virtual folder, this way the mapped bucket/virtual folder is exposed over SFTP/SCP.
+Each user can be mapped to a whole bucket or to a bucket virtual folder, this way the mapped bucket/virtual folder is exposed over SFTP/SCP.
 
-Specifying a different `key_prefix` you can assign different virtual folders of the same bucket to different users. This is similar to a chroot directory for local filesystem. The virtual folder identified by `key_prefix` does not need to be pre-created.
+Specifying a different `key_prefix` you can assign different virtual folders of the same bucket to different users. This is similar to a chroot directory for local filesystem. Each SFTP/SCP user can only access the assigned virtual folder and its contents. The virtual folder identified by `key_prefix` does not need to be pre-created.
 
 SFTPGo uses multipart uploads and parallel downloads for storing and retrieving files from S3.
 
-SFTPGo tries to automatically create the mapped bucket if it does not exists but it's a better idea to pre-create the bucket and to assign to it the wanted options such as automatic encryption and authorizations.
+The configured bucket must exist.
 
 Some SFTP commands don't work over S3:
 
@@ -504,6 +508,21 @@ Other notes:
 - For server side encryption you have to configure the mapped bucket to automatically encrypt objects.
 - A local home directory is still required to store temporary files.
 
+## Google Cloud Storage backend
+
+Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder, this way the mapped bucket/virtual folder is exposed over SFTP/SCP. This backend is very similar to the S3 backend and it has the same limitations.
+
+## Other Storage backends
+
+Adding new storage backends is quite easy:
+
+- implement the [Fs interface](./vfs/vfs.go#L18 "interface for filesystem backends").
+- update the user method `GetFilesystem` to return the new backend
+- update the web interface and the REST API CLI
+- add the flags for the new storage backend to the `portable` mode
+
+Anyway some backends require a pay per use account (or they offer a free account for a limited time period only), so to be able to add support for such backends or to review pull requests please provide a test account. The test account must be available over time to be able to maintain the backend and do basic tests before each new release.
+
 ## Portable mode
 
 SFTPGo allows to share a single directory on demand using the `portable` subcommand:
@@ -520,25 +539,29 @@ Usage:
   sftpgo portable [flags]
 
 Flags:
--C, --advertise-credentials      If the SFTP service is advertised via multicast DNS this flag allows to put username/password inside the advertised TXT record
--S, --advertise-service          Advertise SFTP service using multicast DNS (default true)
--d, --directory string           Path to the directory to serve. This can be an absolute path or a path relative to the current directory (default ".")
--f, --fs-provider int            0 means local filesystem, 1 S3 compatible
--h, --help                       help for portable
--l, --log-file-path string       Leave empty to disable logging
--p, --password string            Leave empty to use an auto generated value
--g, --permissions strings        User's permissions. "*" means any permission (default [list,download])
+-C, --advertise-credentials          If the SFTP service is advertised via multicast DNS this flag allows to put username/password inside the advertised TXT record
+-S, --advertise-service              Advertise SFTP service using multicast DNS (default true)
+-d, --directory string               Path to the directory to serve. This can be an absolute path or a path relative to the current directory (default ".")
+-f, --fs-provider int                0 means local filesystem, 1 Amazon S3 compatible, 2 Google Cloud Storage
+    --gcs-bucket string
+    --gcs-credentials-file string    Google Cloud Storage JSON credentials file
+    --gcs-key-prefix string          Allows to restrict access to the virtual folder identified by this prefix and its contents
+    --gcs-storage-class string
+-h, --help                           help for portable
+-l, --log-file-path string           Leave empty to disable logging
+-p, --password string                Leave empty to use an auto generated value
+-g, --permissions strings            User's permissions. "*" means any permission (default [list,download])
 -k, --public-key strings
     --s3-access-key string
     --s3-access-secret string
     --s3-bucket string
     --s3-endpoint string
-    --s3-key-prefix string       Allows to restrict access to the virtual folder identified by this prefix and its contents
+    --s3-key-prefix string           Allows to restrict access to the virtual folder identified by this prefix and its contents
     --s3-region string
     --s3-storage-class string
--s, --sftpd-port int             0 means a random non privileged port
--c, --ssh-commands strings       SSH commands to enable. "*" means any supported SSH command including scp (default [md5sum,sha1sum,cd,pwd])
--u, --username string            Leave empty to use an auto generated value
+-s, --sftpd-port int                 0 means a random non privileged port
+-c, --ssh-commands strings           SSH commands to enable. "*" means any supported SSH command including scp (default [md5sum,sha1sum,cd,pwd])
+-u, --username string                Leave empty to use an auto generated value
 ```
 
 In portable mode SFTPGo can advertise the SFTP service and, optionally, the credentials via multicast DNS, so there is a standard way to discover the service and to automatically connect to it.
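As a quick, illustrative example (the bucket name and paths below are placeholders, not part of this commit), a command along the lines of `sftpgo portable -d /tmp/gcs_share -f 2 --gcs-bucket mybucket --gcs-credentials-file /path/to/credentials.json` should expose the mapped bucket over SFTP on a random non privileged port, and adding `--gcs-key-prefix folder/` would restrict the share to that virtual folder.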
@@ -592,6 +615,10 @@ For each account the following properties can be configured:
 - `s3_endpoint`, specifies s3 endpoint (server) different from AWS
 - `s3_storage_class`
 - `s3_key_prefix`, allows to restrict access to the virtual folder identified by this prefix and its contents
+- `gcs_bucket`, required for GCS filesystem
+- `gcs_credentials`, Google Cloud Storage JSON credentials base64 encoded
+- `gcs_storage_class`
+- `gcs_key_prefix`, allows to restrict access to the virtual folder identified by this prefix and its contents
 
 These properties are stored inside the data provider.
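Before the Go changes that follow, here is a minimal, hypothetical sketch of how a GCS backed user could be assembled with the types added in this commit (`dataprovider.Filesystem`, `vfs.GCSFsConfig`). The username, home directory, bucket and credentials path are placeholders; `Username` and `HomeDir` are assumed fields of the existing `User` struct (not shown in this diff), and persisting the user via the data provider or REST API is omitted.

```go
package main

import (
    "encoding/base64"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"

    "github.com/drakkan/sftpgo/dataprovider"
    "github.com/drakkan/sftpgo/vfs"
)

func main() {
    // read the service account JSON and base64 encode it, as the portable
    // subcommand in this commit does before handing it to the data provider
    raw, err := ioutil.ReadFile("/path/to/credentials.json") // placeholder path
    if err != nil {
        log.Fatal(err)
    }
    user := dataprovider.User{
        Username: "gcsuser",             // placeholder
        HomeDir:  "/srv/sftpgo/gcsuser", // a local home dir is still required for temporary files
        FsConfig: dataprovider.Filesystem{
            Provider: 2, // 0 local filesystem, 1 S3 compatible, 2 Google Cloud Storage
            GCSConfig: vfs.GCSFsConfig{
                Bucket:      "mybucket", // placeholder bucket
                Credentials: base64.StdEncoding.EncodeToString(raw),
                KeyPrefix:   "folder/subfolder/", // optional: restrict to a virtual folder
            },
        },
    }
    out, _ := json.MarshalIndent(user, "", "  ")
    fmt.Println(string(out))
}
```

This only shows how the new `GCSConfig` slots into the existing `Filesystem` config; in a real deployment the resulting user would be created through the REST API or loaded through the data provider.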
@@ -1,6 +1,10 @@
 package cmd
 
 import (
+    "encoding/base64"
     "fmt"
+    "io/ioutil"
+    "os"
+    "path/filepath"
 
     "github.com/drakkan/sftpgo/dataprovider"
@@ -29,6 +33,10 @@ var (
     portableS3Endpoint         string
     portableS3StorageClass     string
     portableS3KeyPrefix        string
+    portableGCSBucket          string
+    portableGCSCredentialsFile string
+    portableGCSStorageClass    string
+    portableGCSKeyPrefix       string
     portableCmd                = &cobra.Command{
         Use:   "portable",
         Short: "Serve a single directory",
@@ -44,6 +52,24 @@ Please take a look at the usage below to customize the serving parameters`,
             }
             permissions := make(map[string][]string)
             permissions["/"] = portablePermissions
+            portableGCSCredentials := ""
+            if portableFsProvider == 2 {
+                fi, err := os.Stat(portableGCSCredentialsFile)
+                if err != nil {
+                    fmt.Printf("Invalid GCS credentials file: %v\n", err)
+                    return
+                }
+                if fi.Size() > 1048576 {
+                    fmt.Printf("Invalid GCS credentials file: %#v is too big %v/1048576 bytes\n", portableGCSCredentialsFile,
+                        fi.Size())
+                    return
+                }
+                creds, err := ioutil.ReadFile(portableGCSCredentialsFile)
+                if err != nil {
+                    fmt.Printf("Unable to read credentials file: %v\n", err)
+                }
+                portableGCSCredentials = base64.StdEncoding.EncodeToString(creds)
+            }
             service := service.Service{
                 ConfigDir:  defaultConfigDir,
                 ConfigFile: defaultConfigName,
@@ -73,6 +99,12 @@ Please take a look at the usage below to customize the serving parameters`,
                         StorageClass: portableS3StorageClass,
                         KeyPrefix:    portableS3KeyPrefix,
                     },
+                    GCSConfig: vfs.GCSFsConfig{
+                        Bucket:       portableGCSBucket,
+                        Credentials:  portableGCSCredentials,
+                        StorageClass: portableGCSStorageClass,
+                        KeyPrefix:    portableGCSKeyPrefix,
+                    },
                 },
             },
         }
@@ -100,7 +132,8 @@ func init() {
         "Advertise SFTP service using multicast DNS")
     portableCmd.Flags().BoolVarP(&portableAdvertiseCredentials, "advertise-credentials", "C", false,
         "If the SFTP service is advertised via multicast DNS this flag allows to put username/password inside the advertised TXT record")
-    portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", 0, "0 means local filesystem, 1 S3 compatible")
+    portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", 0, "0 means local filesystem, 1 Amazon S3 compatible, "+
+        "2 Google Cloud Storage")
     portableCmd.Flags().StringVar(&portableS3Bucket, "s3-bucket", "", "")
     portableCmd.Flags().StringVar(&portableS3Region, "s3-region", "", "")
     portableCmd.Flags().StringVar(&portableS3AccessKey, "s3-access-key", "", "")
@@ -109,5 +142,10 @@ func init() {
     portableCmd.Flags().StringVar(&portableS3StorageClass, "s3-storage-class", "", "")
     portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", "Allows to restrict access to the virtual folder "+
        "identified by this prefix and its contents")
+    portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
+    portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
+    portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", "Allows to restrict access to the virtual folder "+
+       "identified by this prefix and its contents")
+    portableCmd.Flags().StringVar(&portableGCSCredentialsFile, "gcs-credentials-file", "", "Google Cloud Storage JSON credentials file")
     rootCmd.AddCommand(portableCmd)
 }
@@ -84,6 +84,7 @@ func init() {
             },
             ExternalAuthProgram: "",
             ExternalAuthScope:   0,
+            CredentialsPath:     "credentials",
         },
         HTTPDConfig: httpd.Conf{
             BindPort: 8080,
@@ -178,6 +179,12 @@ func LoadConfig(configDir, configName string) error {
         logger.Warn(logSender, "", "Configuration error: %v", err)
         logger.WarnToConsole("Configuration error: %v", err)
     }
+    if len(globalConf.ProviderConf.CredentialsPath) == 0 {
+        err = fmt.Errorf("invalid credentials path, reset to \"credentials\"")
+        globalConf.ProviderConf.CredentialsPath = "credentials"
+        logger.Warn(logSender, "", "Configuration error: %v", err)
+        logger.WarnToConsole("Configuration error: %v", err)
+    }
     logger.Debug(logSender, "", "config file used: '%v', config loaded: %+v", viper.ConfigFileUsed(), getRedactedGlobalConf())
     return err
 }
@@ -119,6 +119,27 @@ func TestInvalidExternalAuthScope(t *testing.T) {
     os.Remove(configFilePath)
 }
 
+func TestInvalidCredentialsPath(t *testing.T) {
+    configDir := ".."
+    confName := tempConfigName + ".json"
+    configFilePath := filepath.Join(configDir, confName)
+    config.LoadConfig(configDir, "")
+    providerConf := config.GetProviderConf()
+    providerConf.CredentialsPath = ""
+    c := make(map[string]dataprovider.Config)
+    c["data_provider"] = providerConf
+    jsonConf, _ := json.Marshal(c)
+    err := ioutil.WriteFile(configFilePath, jsonConf, 0666)
+    if err != nil {
+        t.Errorf("error saving temporary configuration")
+    }
+    err = config.LoadConfig(configDir, tempConfigName)
+    if err == nil {
+        t.Errorf("Loading configuration with an invalid credentials path must fail")
+    }
+    os.Remove(configFilePath)
+}
+
 func TestSetGetConfig(t *testing.T) {
     sftpdConf := config.GetSFTPDConfig()
     sftpdConf.IdleTimeout = 3
@@ -319,6 +319,10 @@ func (p BoltProvider) dumpUsers() ([]User, error) {
             if err != nil {
                 return err
             }
+            err = addCredentialsToUser(&user)
+            if err != nil {
+                return err
+            }
             users = append(users, user)
         }
         return err
@@ -16,6 +16,7 @@ import (
     "errors"
     "fmt"
     "hash"
+    "io/ioutil"
     "net"
     "net/http"
     "net/url"
@@ -85,6 +86,7 @@ var (
     availabilityTicker     *time.Ticker
     availabilityTickerDone chan bool
     errWrongPassword       = errors.New("password does not match")
+    credentialsDirPath     string
 )
 
 // Actions to execute on user create, update, delete.
@@ -179,6 +181,10 @@ type Config struct {
     // you can combine the scopes, for example 3 means password and public key, 5 password and keyboard
     // interactive and so on
     ExternalAuthScope int `json:"external_auth_scope" mapstructure:"external_auth_scope"`
+    // CredentialsPath defines the directory for storing user provided credential files such as
+    // Google Cloud Storage credentials. It can be a path relative to the config dir or an
+    // absolute path
+    CredentialsPath string `json:"credentials_path" mapstructure:"credentials_path"`
 }
 
 type keyboardAuthProgramResponse struct {
@@ -268,6 +274,9 @@ func Initialize(cnf Config, basePath string) error {
             return err
         }
     }
+    if err := validateCredentialsDir(basePath); err != nil {
+        return err
+    }
 
     if config.Driver == SQLiteDataProviderName {
         err = initializeSQLiteProvider(basePath)
@@ -509,6 +518,25 @@ func validateFilters(user *User) error {
     return nil
 }
 
+func saveGCSCredentials(user *User) error {
+    if user.FsConfig.Provider != 2 {
+        return nil
+    }
+    if len(user.FsConfig.GCSConfig.Credentials) == 0 {
+        return nil
+    }
+    decoded, err := base64.StdEncoding.DecodeString(user.FsConfig.GCSConfig.Credentials)
+    if err != nil {
+        return &ValidationError{err: fmt.Sprintf("could not validate GCS credentials: %v", err)}
+    }
+    err = ioutil.WriteFile(user.getGCSCredentialsFilePath(), decoded, 0600)
+    if err != nil {
+        return &ValidationError{err: fmt.Sprintf("could not save GCS credentials: %v", err)}
+    }
+    user.FsConfig.GCSConfig.Credentials = ""
+    return nil
+}
+
 func validateFilesystemConfig(user *User) error {
     if user.FsConfig.Provider == 1 {
         err := vfs.ValidateS3FsConfig(&user.FsConfig.S3Config)
@@ -524,9 +552,16 @@ func validateFilesystemConfig(user *User) error {
             user.FsConfig.S3Config.AccessSecret = accessSecret
         }
         return nil
-    }
+    } else if user.FsConfig.Provider == 2 {
+        err := vfs.ValidateGCSFsConfig(&user.FsConfig.GCSConfig, user.getGCSCredentialsFilePath())
+        if err != nil {
+            return &ValidationError{err: fmt.Sprintf("could not validate GCS config: %v", err)}
+        }
+        return nil
+    }
     user.FsConfig.Provider = 0
     user.FsConfig.S3Config = vfs.S3FsConfig{}
+    user.FsConfig.GCSConfig = vfs.GCSFsConfig{}
     return nil
 }
@@ -563,6 +598,9 @@ func validateUser(user *User) error {
     if err := validateFilters(user); err != nil {
         return err
     }
+    if err := saveGCSCredentials(user); err != nil {
+        return err
+    }
     return nil
 }
@@ -704,10 +742,24 @@ func HideUserSensitiveData(user *User) User {
     user.Password = ""
     if user.FsConfig.Provider == 1 {
         user.FsConfig.S3Config.AccessSecret = utils.RemoveDecryptionKey(user.FsConfig.S3Config.AccessSecret)
+    } else if user.FsConfig.Provider == 2 {
+        user.FsConfig.GCSConfig.Credentials = ""
     }
     return *user
 }
 
+func addCredentialsToUser(user *User) error {
+    if user.FsConfig.Provider != 2 {
+        return nil
+    }
+    cred, err := ioutil.ReadFile(user.getGCSCredentialsFilePath())
+    if err != nil {
+        return err
+    }
+    user.FsConfig.GCSConfig.Credentials = base64.StdEncoding.EncodeToString(cred)
+    return nil
+}
+
 func getSSLMode() string {
     if config.Driver == PGSQLDataProviderName {
         if config.SSLMode == 0 {
@@ -748,6 +800,25 @@ func startAvailabilityTimer() {
     }()
 }
 
+func validateCredentialsDir(basePath string) error {
+    if filepath.IsAbs(config.CredentialsPath) {
+        credentialsDirPath = config.CredentialsPath
+    } else {
+        credentialsDirPath = filepath.Join(basePath, config.CredentialsPath)
+    }
+    fi, err := os.Stat(credentialsDirPath)
+    if err == nil {
+        if !fi.IsDir() {
+            return errors.New("Credential path is not a valid directory")
+        }
+        return nil
+    }
+    if !os.IsNotExist(err) {
+        return err
+    }
+    return os.MkdirAll(credentialsDirPath, 0700)
+}
+
 func checkDataprovider() {
     err := provider.checkAvailability()
     if err != nil {
@@ -224,6 +224,10 @@ func (p MemoryProvider) dumpUsers() ([]User, error) {
     }
     for _, username := range p.dbHandle.usernames {
         user := p.dbHandle.users[username]
+        err = addCredentialsToUser(&user)
+        if err != nil {
+            return users, err
+        }
         users = append(users, user)
     }
     return users, err
@@ -232,11 +232,14 @@ func sqlCommonDumpUsers(dbHandle *sql.DB) ([]User, error) {
     defer rows.Close()
     for rows.Next() {
         u, err := getUserFromDbRow(nil, rows)
-        if err == nil {
-            users = append(users, u)
-        } else {
-            break
-        }
+        if err != nil {
+            return users, err
+        }
+        err = addCredentialsToUser(&u)
+        if err != nil {
+            return users, err
+        }
+        users = append(users, u)
     }
 
@@ -55,9 +55,10 @@ type UserFilters struct {
 
 // Filesystem defines cloud storage filesystem details
 type Filesystem struct {
-    // 0 local filesystem, 1 Amazon S3 compatible
-    Provider int            `json:"provider"`
-    S3Config vfs.S3FsConfig `json:"s3config,omitempty"`
+    // 0 local filesystem, 1 Amazon S3 compatible, 2 Google Cloud Storage
+    Provider  int             `json:"provider"`
+    S3Config  vfs.S3FsConfig  `json:"s3config,omitempty"`
+    GCSConfig vfs.GCSFsConfig `json:"gcsconfig,omitempty"`
 }
 
 // User defines an SFTP user
@@ -73,7 +74,7 @@ type User struct {
     ExpirationDate int64 `json:"expiration_date"`
     // Password used for password authentication.
     // For users created using SFTPGo REST API the password is stored using argon2id hashing algo.
-    // Checking passwords stored with bcrypt, pbkdf2 and sha512crypt is supported too.
+    // Checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt is supported too.
     Password string `json:"password,omitempty"`
     // PublicKeys used for public key authentication. At least one between password and a public key is mandatory
     PublicKeys []string `json:"public_keys,omitempty"`
@@ -113,6 +114,10 @@ type User struct {
 func (u *User) GetFilesystem(connectionID string) (vfs.Fs, error) {
     if u.FsConfig.Provider == 1 {
         return vfs.NewS3Fs(connectionID, u.GetHomeDir(), u.FsConfig.S3Config)
+    } else if u.FsConfig.Provider == 2 {
+        config := u.FsConfig.GCSConfig
+        config.CredentialFile = u.getGCSCredentialsFilePath()
+        return vfs.NewGCSFs(connectionID, u.GetHomeDir(), config)
     }
     return vfs.NewOsFs(connectionID, u.GetHomeDir()), nil
 }
@@ -321,7 +326,8 @@ func (u *User) GetBandwidthAsString() string {
 }
 
 // GetInfoString returns user's info as string.
-// Number of public keys, max sessions, uid and gid are returned
+// Storage provider, number of public keys, max sessions, uid,
+// gid, denied and allowed IP/Mask are returned
 func (u *User) GetInfoString() string {
     var result string
     if u.LastLogin > 0 {
@@ -329,7 +335,9 @@ func (u *User) GetInfoString() string {
         result += fmt.Sprintf("Last login: %v ", t.Format("2006-01-02 15:04:05")) // YYYY-MM-DD HH:MM:SS
     }
     if u.FsConfig.Provider == 1 {
-        result += fmt.Sprintf("Storage: S3")
+        result += fmt.Sprintf("Storage: S3 ")
+    } else if u.FsConfig.Provider == 2 {
+        result += fmt.Sprintf("Storage: GCS ")
     }
     if len(u.PublicKeys) > 0 {
         result += fmt.Sprintf("Public keys: %v ", len(u.PublicKeys))
@@ -410,6 +418,12 @@ func (u *User) getACopy() User {
             StorageClass: u.FsConfig.S3Config.StorageClass,
             KeyPrefix:    u.FsConfig.S3Config.KeyPrefix,
         },
+        GCSConfig: vfs.GCSFsConfig{
+            Bucket:         u.FsConfig.GCSConfig.Bucket,
+            CredentialFile: u.FsConfig.GCSConfig.CredentialFile,
+            StorageClass:   u.FsConfig.GCSConfig.StorageClass,
+            KeyPrefix:      u.FsConfig.GCSConfig.KeyPrefix,
+        },
     }
 
     return User{
@@ -458,3 +472,7 @@ func (u *User) getNotificationFieldsAsEnvVars(action string) []string {
         fmt.Sprintf("SFTPGO_USER_UID=%v", u.UID),
         fmt.Sprintf("SFTPGO_USER_GID=%v", u.GID)}
 }
+
+func (u *User) getGCSCredentialsFilePath() string {
+    return filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json", u.Username))
+}
@@ -12,6 +12,7 @@ sudo groupadd -g 1003 sftpgrp && \
   curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20190828.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
   curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20191112.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
   curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20191230.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
+  curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20200116.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
   curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /home/sftpuser/conf/sftpgo.json
 
 # Get and build SFTPGo image
go.mod (9 changed lines)
@@ -3,6 +3,8 @@ module github.com/drakkan/sftpgo
 go 1.13
 
 require (
+    cloud.google.com/go v0.52.0 // indirect
+    cloud.google.com/go/storage v1.5.0
     github.com/alexedwards/argon2id v0.0.0-20190612080829-01a59b2b8802
     github.com/aws/aws-sdk-go v1.28.3
     github.com/cenkalti/backoff v2.2.1+incompatible // indirect
@@ -10,6 +12,7 @@ require (
     github.com/go-chi/chi v4.0.2+incompatible
     github.com/go-chi/render v1.0.1
     github.com/go-sql-driver/mysql v1.5.0
+    github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect
     github.com/grandcat/zeroconf v0.0.0-20190424104450-85eadb44205c
     github.com/lib/pq v1.3.0
     github.com/mattn/go-sqlite3 v2.0.2+incompatible
@@ -23,7 +26,11 @@ require (
     github.com/spf13/viper v1.6.1
     go.etcd.io/bbolt v1.3.3
     golang.org/x/crypto v0.0.0-20200109152110-61a87790db17
-    golang.org/x/sys v0.0.0-20191220142924-d4481acd189f
+    golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a // indirect
+    golang.org/x/sys v0.0.0-20200122134326-e047566fdf82
+    golang.org/x/tools v0.0.0-20200124200720-1b668f209185 // indirect
+    google.golang.org/api v0.15.0
+    google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150 // indirect
     gopkg.in/natefinch/lumberjack.v2 v2.0.0
 )
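The new `cloud.google.com/go/storage` and `google.golang.org/api` requirements above are the Google client libraries the GCS backend builds on. Purely as orientation (this is not how the `vfs` package in this commit wires things up, and the credentials path and bucket name are placeholders), listing a bucket with that client looks roughly like this:

```go
package main

import (
    "context"
    "fmt"
    "log"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
    "google.golang.org/api/option"
)

func main() {
    ctx := context.Background()
    // create a client from a JSON credentials file (placeholder path)
    client, err := storage.NewClient(ctx, option.WithCredentialsFile("/path/to/credentials.json"))
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // iterate the objects in the bucket, roughly what a directory listing maps to
    it := client.Bucket("mybucket").Objects(ctx, nil)
    for {
        attrs, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(attrs.Name)
    }
}
```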
go.sum (183 changed lines)

@@ -1,5 +1,26 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
|
||||
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
|
||||
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
|
||||
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
|
||||
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
|
||||
cloud.google.com/go v0.50.0 h1:0E3eE8MX426vUOs7aHfI7aN1BrIzzzf4ccKCSfSjGmc=
|
||||
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
|
||||
cloud.google.com/go v0.52.0 h1:GGslhk/BU052LPlnI1vpp3fcbUs+hQ3E+Doti/3/vF8=
|
||||
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
|
||||
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
|
||||
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
|
||||
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
|
||||
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
|
||||
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
|
||||
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
|
||||
cloud.google.com/go/storage v1.5.0 h1:RPUcBvDeYgQFMfQu1eBMq6piD1SXmLH+vK3qjewZPus=
|
||||
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
|
||||
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
|
||||
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
|
||||
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
|
||||
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
|
@ -16,10 +37,14 @@ github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
|||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
github.com/cenkalti/backoff v2.2.1+incompatible h1:tNowT99t7UNflLxfYYSlKYsBpXdEet03Pg2g16Swow4=
|
||||
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
|
||||
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
|
||||
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
|
||||
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
|
||||
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
|
||||
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
|
||||
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
|
||||
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
|
||||
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
|
||||
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
||||
|
@ -34,6 +59,8 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZm
|
|||
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
|
||||
github.com/drakkan/pipeat v0.0.0-20200123131427-11c048cfc0ec h1:DXfzg1NXoesnFzdCyyi2uU3o1o0XiWTN2ZcpWDE7MCk=
|
||||
github.com/drakkan/pipeat v0.0.0-20200123131427-11c048cfc0ec/go.mod h1:wNYvIpR5rIhoezOYcpxcXz4HbIEOu7A45EqlQCA+h+w=
|
||||
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
|
||||
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
|
||||
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
|
||||
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
||||
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
|
||||
|
@ -41,6 +68,8 @@ github.com/go-chi/chi v4.0.2+incompatible h1:maB6vn6FqCxrpz4FqWdh4+lwpyZIQS7YEAU
|
|||
github.com/go-chi/chi v4.0.2+incompatible/go.mod h1:eB3wogJHnLi3x/kFX2A+IbTBlXxmMeXJVKy9tTv1XzQ=
|
||||
github.com/go-chi/render v1.0.1 h1:4/5tis2cKaNdnv9zFLfXzcquC9HbeZgCnxGnKrltBS8=
|
||||
github.com/go-chi/render v1.0.1/go.mod h1:pq4Rr7HbnsdaeHagklXub+p6Wd16Af5l9koip1OvJns=
|
||||
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
|
||||
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
|
||||
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||
|
@ -52,15 +81,35 @@ github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7a
|
|||
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7 h1:5ZkaAPbicIKTF2I64qf5Fh8Aa83Q/dnOafMYV0OMwjA=
|
||||
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
|
||||
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
|
||||
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
|
||||
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
|
||||
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
|
||||
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
|
||||
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
|
||||
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
|
||||
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
|
||||
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
|
||||
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
|
||||
github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
|
||||
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
|
||||
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
|
||||
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
|
||||
github.com/grandcat/zeroconf v0.0.0-20190424104450-85eadb44205c h1:svzQzfVE9t7Y1CGULS5PsMWs4/H4Au/ZTJzU/0CKgqc=
|
||||
|
@ -68,14 +117,20 @@ github.com/grandcat/zeroconf v0.0.0-20190424104450-85eadb44205c/go.mod h1:YjKB0W
|
|||
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
|
||||
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
|
||||
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
|
||||
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
|
||||
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
|
||||
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
|
||||
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
|
||||
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
|
||||
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM=
|
||||
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
|
||||
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
|
||||
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
|
||||
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
||||
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
|
||||
github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o=
|
||||
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
|
||||
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
|
||||
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
|
||||
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
|
||||
|
@ -124,6 +179,7 @@ github.com/prometheus/client_golang v1.3.0 h1:miYCvYqFXtl/J9FIy8eNpBfYthAEFg+Ys0
|
|||
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
|
||||
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
|
||||
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
github.com/prometheus/client_model v0.1.0 h1:ElTg5tNp4DqfV7UQjDqv2+RJlNzsDtvNAWccbItceIE=
|
||||
github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
|
||||
|
@ -138,6 +194,7 @@ github.com/prometheus/procfs v0.0.8 h1:+fpWZdT24pJBiqJdAwYBjPSk+5YmQzYNPYzQsdzLk
|
|||
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
|
||||
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
|
||||
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
|
||||
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
|
||||
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
|
||||
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
|
||||
github.com/rs/zerolog v1.17.2 h1:RMRHFw2+wF7LO0QqtELQwo8hqSmqISyCJeFeAAuWcRo=
|
||||
|
@ -178,34 +235,80 @@ github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxt
|
|||
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
|
||||
go.etcd.io/bbolt v1.3.3 h1:MUGmc65QhB3pIlaQ5bB4LwqSj6GIonVJXpZiaKNyaKk=
|
||||
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
|
||||
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
|
||||
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
|
||||
go.opencensus.io v0.22.2 h1:75k/FF0Q2YM8QYo07VPddOLBslDt1MZOdEslOHvmzAs=
|
||||
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
|
||||
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
|
||||
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
|
||||
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
|
||||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20200109152110-61a87790db17 h1:nVJ3guKA9qdkEQ3TUdXI9QSINo2CUPM/cySEvw2w8I0=
|
||||
golang.org/x/crypto v0.0.0-20200109152110-61a87790db17/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
|
||||
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
|
||||
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
|
||||
golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
|
||||
golang.org/x/exp v0.0.0-20191227195350-da58074b4299 h1:zQpM52jfKHG6II1ISZY1ZcpygvuSFZpLwfluuF89XOg=
|
||||
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
|
||||
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a h1:7Wlg8L54In96HTWOaI4sreLJ6qfyGuvSau5el3fK41Y=
|
||||
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
|
||||
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
|
||||
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
|
||||
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
|
||||
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f h1:J5lckAjkw6qYlOZNj90mLYNTEKDvWeuc1yieZ8qUzUE=
|
||||
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
|
||||
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
|
||||
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
|
||||
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
|
||||
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
|
||||
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
|
||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
|
||||
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
|
||||
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
|
||||
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553 h1:efeOvDhwQ29Dj3SdAV/MJf8oukgn+8D8WgaCaRMchF8=
|
||||
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa h1:F+8P+gmewFQYRk6JoLQLwjBCTu3mcIURZfNkVweuRKA=
|
||||
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6 h1:pE8b58s1HRDMi8RDc79m0HISf9D4TzseP40cEA6IGfs=
|
||||
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw=
|
||||
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
|
@ -214,29 +317,103 @@ golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5h
|
|||
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f h1:68K/z8GLUxV76xGSqwTWw2gyk/jwn79LUL43rES2g8o=
|
||||
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8 h1:JA8d3MPx/IToSyXZG/RhwYEtfrKO1Fxrqe8KrkiLXKM=
|
||||
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82 h1:ywK/j/KkyTHcdyYSZNXGjMwgmDSfjglYZ3vStQ/gSCU=
|
||||
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20190828213141-aed303cbaa74/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191216052735-49a3e744a425/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4 h1:Toz2IK7k8rbltAXwNAxKcn9OzqyNfMUhUNjz3sL0NMk=
|
||||
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20200124200720-1b668f209185 h1:UhyNb/h6VU6sPOVb6x118tYR91HRLBUtWS01bOSFAz4=
|
||||
golang.org/x/tools v0.0.0-20200124200720-1b668f209185/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
|
||||
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
|
||||
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
|
||||
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
|
||||
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
|
||||
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
|
||||
google.golang.org/api v0.15.0 h1:yzlyyDW/J0w8yNFJIhiAJy4kq74S+1DOLdawELNxFMA=
|
||||
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
|
||||
google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
|
||||
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
||||
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
||||
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
|
||||
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb h1:ADPHZzpzM4tk4V4S5cnCrr5SwzvlrPRmqqCuJDB8UTs=
|
||||
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150 h1:VPpdpQkGvFicX9yo4G5oxZPi9ALBnEOZblPSa/Wa2m4=
|
||||
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
|
||||
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
||||
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
||||
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
||||
google.golang.org/grpc v1.26.0 h1:2dTRdpdFEEhJYQD8EMLB61nnrzSCTbG38PhqdhvOltg=
|
||||
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
||||
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
|
||||
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8=
|
||||
|
@ -248,3 +425,9 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
|||
gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I=
|
||||
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
|
||||
@ -14,6 +14,7 @@ func getQuotaScans(w http.ResponseWriter, r *http.Request) {
|
|||
}
|
||||
|
||||
func startQuotaScan(w http.ResponseWriter, r *http.Request) {
|
||||
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
|
||||
var u dataprovider.User
|
||||
err := render.DecodeJSON(r.Body, &u)
|
||||
if err != nil {
|
||||
@ -73,6 +73,7 @@ func getUserByID(w http.ResponseWriter, r *http.Request) {
|
|||
}
|
||||
|
||||
func addUser(w http.ResponseWriter, r *http.Request) {
|
||||
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
|
||||
var user dataprovider.User
|
||||
err := render.DecodeJSON(r.Body, &user)
|
||||
if err != nil {
|
||||
|
@ -93,6 +94,7 @@ func addUser(w http.ResponseWriter, r *http.Request) {
|
|||
}
|
||||
|
||||
func updateUser(w http.ResponseWriter, r *http.Request) {
|
||||
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
|
||||
userID, err := strconv.ParseInt(chi.URLParam(r, "userID"), 10, 64)
|
||||
if err != nil {
|
||||
err = errors.New("Invalid userID")
|
||||
|
@ -100,10 +102,10 @@ func updateUser(w http.ResponseWriter, r *http.Request) {
|
|||
return
|
||||
}
|
||||
user, err := dataprovider.GetUserByID(dataProvider, userID)
|
||||
oldPermissions := user.Permissions
|
||||
oldS3AccessSecret := ""
|
||||
currentPermissions := user.Permissions
|
||||
currentS3AccessSecret := ""
|
||||
if user.FsConfig.Provider == 1 {
|
||||
oldS3AccessSecret = user.FsConfig.S3Config.AccessSecret
|
||||
currentS3AccessSecret = user.FsConfig.S3Config.AccessSecret
|
||||
}
|
||||
user.Permissions = make(map[string][]string)
|
||||
if _, ok := err.(*dataprovider.RecordNotFoundError); ok {
|
||||
|
@ -120,13 +122,13 @@ func updateUser(w http.ResponseWriter, r *http.Request) {
|
|||
}
|
||||
// we use the new Permissions if passed, otherwise the old ones
|
||||
if len(user.Permissions) == 0 {
|
||||
user.Permissions = oldPermissions
|
||||
user.Permissions = currentPermissions
|
||||
}
|
||||
// we use the new access secret if different from the old one and not empty
|
||||
if user.FsConfig.Provider == 1 {
|
||||
if utils.RemoveDecryptionKey(oldS3AccessSecret) == user.FsConfig.S3Config.AccessSecret ||
|
||||
if utils.RemoveDecryptionKey(currentS3AccessSecret) == user.FsConfig.S3Config.AccessSecret ||
|
||||
len(user.FsConfig.S3Config.AccessSecret) == 0 {
|
||||
user.FsConfig.S3Config.AccessSecret = oldS3AccessSecret
|
||||
user.FsConfig.S3Config.AccessSecret = currentS3AccessSecret
|
||||
}
|
||||
}
|
||||
if user.ID != userID {
|
||||
@ -439,6 +439,16 @@ func compareUserFsConfig(expected *dataprovider.User, actual *dataprovider.User)
|
|||
expected.FsConfig.S3Config.KeyPrefix+"/" != actual.FsConfig.S3Config.KeyPrefix {
|
||||
return errors.New("S3 key prefix mismatch")
|
||||
}
|
||||
if expected.FsConfig.GCSConfig.Bucket != actual.FsConfig.GCSConfig.Bucket {
|
||||
return errors.New("GCS bucket mismatch")
|
||||
}
|
||||
if expected.FsConfig.GCSConfig.StorageClass != actual.FsConfig.GCSConfig.StorageClass {
|
||||
return errors.New("GCS storage class mismatch")
|
||||
}
|
||||
if expected.FsConfig.GCSConfig.KeyPrefix != actual.FsConfig.GCSConfig.KeyPrefix &&
|
||||
expected.FsConfig.GCSConfig.KeyPrefix+"/" != actual.FsConfig.GCSConfig.KeyPrefix {
|
||||
return errors.New("GCS key prefix mismatch")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@ -33,6 +33,7 @@ const (
|
|||
webConnectionsPath = "/web/connections"
|
||||
webStaticFilesPath = "/static"
|
||||
maxRestoreSize = 10485760 // 10 MB
|
||||
maxRequestSize = 1048576 // 1MB
|
||||
)
|
||||
|
||||
var (
|
||||
@ -3,9 +3,12 @@ package httpd_test
|
|||
import (
|
||||
"bytes"
|
||||
"crypto/rand"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"mime/multipart"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
|
@ -56,6 +59,7 @@ var (
|
|||
defaultPerms = []string{dataprovider.PermAny}
|
||||
homeBasePath string
|
||||
backupsPath string
|
||||
credentialsPath string
|
||||
testServer *httptest.Server
|
||||
providerDriverName string
|
||||
)
|
||||
|
@ -66,7 +70,10 @@ func TestMain(m *testing.M) {
|
|||
logger.InitLogger(logfilePath, 5, 1, 28, false, zerolog.DebugLevel)
|
||||
config.LoadConfig(configDir, "")
|
||||
providerConf := config.GetProviderConf()
|
||||
credentialsPath = filepath.Join(os.TempDir(), "test_credentials")
|
||||
providerConf.CredentialsPath = credentialsPath
|
||||
providerDriverName = providerConf.Driver
|
||||
os.RemoveAll(credentialsPath)
|
||||
|
||||
err := dataprovider.Initialize(providerConf, configDir)
|
||||
if err != nil {
|
||||
|
@ -102,6 +109,7 @@ func TestMain(m *testing.M) {
|
|||
exitCode := m.Run()
|
||||
os.Remove(logfilePath)
|
||||
os.RemoveAll(backupsPath)
|
||||
os.RemoveAll(credentialsPath)
|
||||
os.Exit(exitCode)
|
||||
}
|
||||
|
||||
|
@ -250,6 +258,8 @@ func TestAddUserInvalidFsConfig(t *testing.T) {
|
|||
if err != nil {
|
||||
t.Errorf("unexpected error adding user with invalid fs config: %v", err)
|
||||
}
|
||||
os.RemoveAll(credentialsPath)
|
||||
os.MkdirAll(credentialsPath, 0700)
|
||||
u.FsConfig.S3Config.Bucket = "test"
|
||||
u.FsConfig.S3Config.Region = "eu-west-1"
|
||||
u.FsConfig.S3Config.AccessKey = "access-key"
|
||||
|
@ -261,6 +271,32 @@ func TestAddUserInvalidFsConfig(t *testing.T) {
|
|||
if err != nil {
|
||||
t.Errorf("unexpected error adding user with invalid fs config: %v", err)
|
||||
}
|
||||
u = getTestUser()
|
||||
u.FsConfig.Provider = 2
|
||||
u.FsConfig.GCSConfig.Bucket = ""
|
||||
_, _, err = httpd.AddUser(u, http.StatusBadRequest)
|
||||
if err != nil {
|
||||
t.Errorf("unexpected error adding user with invalid fs config: %v", err)
|
||||
}
|
||||
u.FsConfig.GCSConfig.Bucket = "test"
|
||||
u.FsConfig.GCSConfig.StorageClass = "Standard"
|
||||
u.FsConfig.GCSConfig.KeyPrefix = "/somedir/subdir/"
|
||||
u.FsConfig.GCSConfig.Credentials = base64.StdEncoding.EncodeToString([]byte("test"))
|
||||
_, _, err = httpd.AddUser(u, http.StatusBadRequest)
|
||||
if err != nil {
|
||||
t.Errorf("unexpected error adding user with invalid fs config: %v", err)
|
||||
}
|
||||
u.FsConfig.GCSConfig.KeyPrefix = "somedir/subdir/"
|
||||
u.FsConfig.GCSConfig.Credentials = ""
|
||||
_, _, err = httpd.AddUser(u, http.StatusBadRequest)
|
||||
if err != nil {
|
||||
t.Errorf("unexpected error adding user with invalid fs config: %v", err)
|
||||
}
|
||||
u.FsConfig.GCSConfig.Credentials = "no base64 encoded"
|
||||
_, _, err = httpd.AddUser(u, http.StatusBadRequest)
|
||||
if err != nil {
|
||||
t.Errorf("unexpected error adding user with invalid fs config: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestUserPublicKey(t *testing.T) {
|
||||
|
@ -363,6 +399,56 @@ func TestUserS3Config(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestUserGCSConfig(t *testing.T) {
|
||||
user, _, err := httpd.AddUser(getTestUser(), http.StatusOK)
|
||||
if err != nil {
|
||||
t.Errorf("unable to add user: %v", err)
|
||||
}
|
||||
os.RemoveAll(credentialsPath)
|
||||
os.MkdirAll(credentialsPath, 0700)
|
||||
user.FsConfig.Provider = 2
|
||||
user.FsConfig.GCSConfig.Bucket = "test"
|
||||
user.FsConfig.GCSConfig.Credentials = base64.StdEncoding.EncodeToString([]byte("fake credentials"))
|
||||
user, _, err = httpd.UpdateUser(user, http.StatusOK)
|
||||
if err != nil {
|
||||
t.Errorf("unable to update user: %v", err)
|
||||
}
|
||||
_, err = httpd.RemoveUser(user, http.StatusOK)
|
||||
if err != nil {
|
||||
t.Errorf("unable to remove: %v", err)
|
||||
}
|
||||
user.Password = defaultPassword
|
||||
user.ID = 0
|
||||
// the user will be added since the credentials file is found
|
||||
user, _, err = httpd.AddUser(user, http.StatusOK)
|
||||
if err != nil {
|
||||
t.Errorf("unable to add user: %v", err)
|
||||
}
|
||||
user.FsConfig.Provider = 1
|
||||
user.FsConfig.S3Config.Bucket = "test1"
|
||||
user.FsConfig.S3Config.Region = "us-east-1"
|
||||
user.FsConfig.S3Config.AccessKey = "Server-Access-Key1"
|
||||
user.FsConfig.S3Config.AccessSecret = "secret"
|
||||
user.FsConfig.S3Config.Endpoint = "http://localhost:9000"
|
||||
user.FsConfig.S3Config.KeyPrefix = "somedir/subdir"
|
||||
user, _, err = httpd.UpdateUser(user, http.StatusOK)
|
||||
if err != nil {
|
||||
t.Errorf("unable to update user: %v", err)
|
||||
}
|
||||
user.FsConfig.Provider = 2
|
||||
user.FsConfig.GCSConfig.Bucket = "test1"
|
||||
user.FsConfig.GCSConfig.Credentials = base64.StdEncoding.EncodeToString([]byte("fake credentials"))
|
||||
user, _, err = httpd.UpdateUser(user, http.StatusOK)
|
||||
if err != nil {
|
||||
t.Errorf("unable to update user: %v", err)
|
||||
}
|
||||
|
||||
_, err = httpd.RemoveUser(user, http.StatusOK)
|
||||
if err != nil {
|
||||
t.Errorf("unable to remove: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestUpdateUserNoCredentials(t *testing.T) {
|
||||
user, _, err := httpd.AddUser(getTestUser(), http.StatusOK)
|
||||
if err != nil {
|
||||
|
@ -594,6 +680,8 @@ func TestUserBaseDir(t *testing.T) {
|
|||
dataprovider.Close(dataProvider)
|
||||
config.LoadConfig(configDir, "")
|
||||
providerConf = config.GetProviderConf()
|
||||
providerConf.CredentialsPath = credentialsPath
|
||||
os.RemoveAll(credentialsPath)
|
||||
err = dataprovider.Initialize(providerConf, configDir)
|
||||
if err != nil {
|
||||
t.Errorf("error initializing data provider")
|
||||
|
@ -646,6 +734,8 @@ func TestProviderErrors(t *testing.T) {
|
|||
os.Remove(backupFilePath)
|
||||
config.LoadConfig(configDir, "")
|
||||
providerConf := config.GetProviderConf()
|
||||
providerConf.CredentialsPath = credentialsPath
|
||||
os.RemoveAll(credentialsPath)
|
||||
err = dataprovider.Initialize(providerConf, configDir)
|
||||
if err != nil {
|
||||
t.Errorf("error initializing data provider")
|
||||
|
@ -655,7 +745,17 @@ func TestProviderErrors(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestDumpdata(t *testing.T) {
|
||||
_, _, err := httpd.Dumpdata("", http.StatusBadRequest)
|
||||
dataProvider := dataprovider.GetProvider()
|
||||
dataprovider.Close(dataProvider)
|
||||
config.LoadConfig(configDir, "")
|
||||
providerConf := config.GetProviderConf()
|
||||
err := dataprovider.Initialize(providerConf, configDir)
|
||||
if err != nil {
|
||||
t.Errorf("error initializing data provider")
|
||||
}
|
||||
httpd.SetDataProvider(dataprovider.GetProvider())
|
||||
sftpd.SetDataProvider(dataprovider.GetProvider())
|
||||
_, _, err = httpd.Dumpdata("", http.StatusBadRequest)
|
||||
if err != nil {
|
||||
t.Errorf("unexpected error: %v", err)
|
||||
}
|
||||
|
@ -680,6 +780,15 @@ func TestDumpdata(t *testing.T) {
|
|||
}
|
||||
os.Chmod(backupsPath, 0755)
|
||||
}
|
||||
providerConf = config.GetProviderConf()
|
||||
providerConf.CredentialsPath = credentialsPath
|
||||
os.RemoveAll(credentialsPath)
|
||||
err = dataprovider.Initialize(providerConf, configDir)
|
||||
if err != nil {
|
||||
t.Errorf("error initializing data provider")
|
||||
}
|
||||
httpd.SetDataProvider(dataprovider.GetProvider())
|
||||
sftpd.SetDataProvider(dataprovider.GetProvider())
|
||||
}
|
||||
|
||||
func TestLoaddata(t *testing.T) {
|
||||
|
@ -1228,16 +1337,23 @@ func TestBasicWebUsersMock(t *testing.T) {
|
|||
checkResponseCode(t, http.StatusBadRequest, rr.Code)
|
||||
form := make(url.Values)
|
||||
form.Set("username", user.Username)
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
b, contentType, _ := getMultipartFormData(form, "", "")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/"+strconv.FormatInt(user.ID, 10), strings.NewReader(form.Encode()))
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/"+strconv.FormatInt(user.ID, 10), &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/0", strings.NewReader(form.Encode()))
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/0", &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusNotFound, rr.Code)
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/a", strings.NewReader(form.Encode()))
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/a", &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusBadRequest, rr.Code)
|
||||
req, _ = http.NewRequest(http.MethodDelete, userPath+"/"+strconv.FormatInt(user.ID, 10), nil)
|
||||
|
@ -1261,90 +1377,103 @@ func TestWebUserAddMock(t *testing.T) {
|
|||
form.Set("expiration_date", "")
|
||||
form.Set("permissions", "*")
|
||||
form.Set("sub_dirs_permissions", " /subdir:list ,download ")
|
||||
b, contentType, _ := getMultipartFormData(form, "", "")
|
||||
// test invalid url escape
|
||||
req, _ := http.NewRequest(http.MethodPost, webUserPath+"?a=%2", strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ := http.NewRequest(http.MethodPost, webUserPath+"?a=%2", &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr := executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("public_keys", testPubKey)
|
||||
form.Set("uid", strconv.FormatInt(int64(user.UID), 10))
|
||||
form.Set("gid", "a")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid gid
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("gid", "0")
|
||||
form.Set("max_sessions", "a")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid max sessions
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("max_sessions", "0")
|
||||
form.Set("quota_size", "a")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid quota size
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("quota_size", "0")
|
||||
form.Set("quota_files", "a")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid quota files
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("quota_files", "0")
|
||||
form.Set("upload_bandwidth", "a")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid upload bandwidth
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("upload_bandwidth", strconv.FormatInt(user.UploadBandwidth, 10))
|
||||
form.Set("download_bandwidth", "a")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid download bandwidth
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("download_bandwidth", strconv.FormatInt(user.DownloadBandwidth, 10))
|
||||
form.Set("status", "a")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid status
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("status", strconv.Itoa(user.Status))
|
||||
form.Set("expiration_date", "123")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid expiration date
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("expiration_date", "")
|
||||
form.Set("allowed_ip", "invalid,ip")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid allowed_ip
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("allowed_ip", "")
|
||||
form.Set("denied_ip", "192.168.1.2") // it should be 192.168.1.2/32
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
// test invalid denied_ip
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
form.Set("denied_ip", "")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusSeeOther, rr.Code)
|
||||
// the user already exists; it was created by the request above
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
b, contentType, _ = getMultipartFormData(form, "", "")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath, &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
req, _ = http.NewRequest(http.MethodGet, userPath+"?limit=1&offset=0&order=ASC&username="+user.Username, nil)
|
||||
|
@ -1356,7 +1485,7 @@ func TestWebUserAddMock(t *testing.T) {
|
|||
t.Errorf("Error decoding users: %v", err)
|
||||
}
|
||||
if len(users) != 1 {
|
||||
t.Errorf("1 user is expected")
|
||||
t.Errorf("1 user is expected, actual: %v", len(users))
|
||||
}
|
||||
newUser := users[0]
|
||||
if newUser.UID != user.UID {
|
||||
|
@ -1413,8 +1542,9 @@ func TestWebUserUpdateMock(t *testing.T) {
|
|||
form.Set("expiration_date", "2020-01-01 00:00:00")
|
||||
form.Set("allowed_ip", " 192.168.1.3/32, 192.168.2.0/24 ")
|
||||
form.Set("denied_ip", " 10.0.0.2/32 ")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/"+strconv.FormatInt(user.ID, 10), strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
b, contentType, _ := getMultipartFormData(form, "", "")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/"+strconv.FormatInt(user.ID, 10), &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusSeeOther, rr.Code)
|
||||
req, _ = http.NewRequest(http.MethodGet, userPath+"?limit=1&offset=0&order=ASC&username="+user.Username, nil)
|
||||
|
@ -1504,8 +1634,9 @@ func TestWebUserS3Mock(t *testing.T) {
|
|||
form.Set("s3_storage_class", user.FsConfig.S3Config.StorageClass)
|
||||
form.Set("s3_endpoint", user.FsConfig.S3Config.Endpoint)
|
||||
form.Set("s3_key_prefix", user.FsConfig.S3Config.KeyPrefix)
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/"+strconv.FormatInt(user.ID, 10), strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
b, contentType, _ := getMultipartFormData(form, "", "")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/"+strconv.FormatInt(user.ID, 10), &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusSeeOther, rr.Code)
|
||||
req, _ = http.NewRequest(http.MethodGet, userPath+"?limit=1&offset=0&order=ASC&username="+user.Username, nil)
|
||||
|
@ -1552,6 +1683,97 @@ func TestWebUserS3Mock(t *testing.T) {
|
|||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
}
|
||||
|
||||
func TestWebUserGCSMock(t *testing.T) {
|
||||
user := getTestUser()
|
||||
userAsJSON := getUserAsJSON(t, user)
|
||||
req, _ := http.NewRequest(http.MethodPost, userPath, bytes.NewBuffer(userAsJSON))
|
||||
rr := executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
err := render.DecodeJSON(rr.Body, &user)
|
||||
if err != nil {
|
||||
t.Errorf("Error get user: %v", err)
|
||||
}
|
||||
credentialsFilePath := filepath.Join(os.TempDir(), "gcs.json")
|
||||
err = createTestFile(credentialsFilePath, 0)
|
||||
if err != nil {
|
||||
t.Errorf("unable to create credential test file: %v", err)
|
||||
}
|
||||
user.FsConfig.Provider = 2
|
||||
user.FsConfig.GCSConfig.Bucket = "test"
|
||||
user.FsConfig.GCSConfig.KeyPrefix = "somedir/subdir/"
|
||||
user.FsConfig.GCSConfig.StorageClass = "standard"
|
||||
form := make(url.Values)
|
||||
form.Set("username", user.Username)
|
||||
form.Set("home_dir", user.HomeDir)
|
||||
form.Set("uid", "0")
|
||||
form.Set("gid", strconv.FormatInt(int64(user.GID), 10))
|
||||
form.Set("max_sessions", strconv.FormatInt(int64(user.MaxSessions), 10))
|
||||
form.Set("quota_size", strconv.FormatInt(user.QuotaSize, 10))
|
||||
form.Set("quota_files", strconv.FormatInt(int64(user.QuotaFiles), 10))
|
||||
form.Set("upload_bandwidth", "0")
|
||||
form.Set("download_bandwidth", "0")
|
||||
form.Set("permissions", "*")
|
||||
form.Set("sub_dirs_permissions", "")
|
||||
form.Set("status", strconv.Itoa(user.Status))
|
||||
form.Set("expiration_date", "2020-01-01 00:00:00")
|
||||
form.Set("allowed_ip", "")
|
||||
form.Set("denied_ip", "")
|
||||
form.Set("fs_provider", "2")
|
||||
form.Set("gcs_bucket", user.FsConfig.GCSConfig.Bucket)
|
||||
form.Set("gcs_storage_class", user.FsConfig.GCSConfig.StorageClass)
|
||||
form.Set("gcs_key_prefix", user.FsConfig.GCSConfig.KeyPrefix)
|
||||
b, contentType, _ := getMultipartFormData(form, "", "")
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/"+strconv.FormatInt(user.ID, 10), &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
b, contentType, _ = getMultipartFormData(form, "gcs_credential_file", credentialsFilePath)
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/"+strconv.FormatInt(user.ID, 10), &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
err = createTestFile(credentialsFilePath, 4096)
|
||||
if err != nil {
|
||||
t.Errorf("unable to create credential test file: %v", err)
|
||||
}
|
||||
b, contentType, _ = getMultipartFormData(form, "gcs_credential_file", credentialsFilePath)
|
||||
req, _ = http.NewRequest(http.MethodPost, webUserPath+"/"+strconv.FormatInt(user.ID, 10), &b)
|
||||
req.Header.Set("Content-Type", contentType)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusSeeOther, rr.Code)
|
||||
req, _ = http.NewRequest(http.MethodGet, userPath+"?limit=1&offset=0&order=ASC&username="+user.Username, nil)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
var users []dataprovider.User
|
||||
err = render.DecodeJSON(rr.Body, &users)
|
||||
if err != nil {
|
||||
t.Errorf("Error decoding users: %v", err)
|
||||
}
|
||||
if len(users) != 1 {
|
||||
t.Errorf("1 user is expected")
|
||||
}
|
||||
updateUser := users[0]
|
||||
if updateUser.ExpirationDate != 1577836800000 {
|
||||
t.Errorf("invalid expiration date: %v", updateUser.ExpirationDate)
|
||||
}
|
||||
if updateUser.FsConfig.Provider != user.FsConfig.Provider {
|
||||
t.Error("fs provider mismatch")
|
||||
}
|
||||
if updateUser.FsConfig.GCSConfig.Bucket != user.FsConfig.GCSConfig.Bucket {
|
||||
t.Error("GCS bucket mismatch")
|
||||
}
|
||||
if updateUser.FsConfig.GCSConfig.StorageClass != user.FsConfig.GCSConfig.StorageClass {
|
||||
t.Error("GCS storage class mismatch")
|
||||
}
|
||||
if updateUser.FsConfig.GCSConfig.KeyPrefix != user.FsConfig.GCSConfig.KeyPrefix {
|
||||
t.Error("GCS key prefix mismatch")
|
||||
}
|
||||
req, _ = http.NewRequest(http.MethodDelete, userPath+"/"+strconv.FormatInt(user.ID, 10), nil)
|
||||
rr = executeRequest(req)
|
||||
checkResponseCode(t, http.StatusOK, rr.Code)
|
||||
os.Remove(credentialsFilePath)
|
||||
}
|
||||
|
||||
func TestProviderClosedMock(t *testing.T) {
|
||||
if providerDriverName == dataprovider.BoltDataProviderName {
|
||||
t.Skip("skipping test provider errors for bolt provider")
|
||||
|
@ -1571,6 +1793,8 @@ func TestProviderClosedMock(t *testing.T) {
|
|||
checkResponseCode(t, http.StatusInternalServerError, rr.Code)
|
||||
config.LoadConfig(configDir, "")
|
||||
providerConf := config.GetProviderConf()
|
||||
providerConf.CredentialsPath = credentialsPath
|
||||
os.RemoveAll(credentialsPath)
|
||||
err := dataprovider.Initialize(providerConf, configDir)
|
||||
if err != nil {
|
||||
t.Errorf("error initializing data provider")
|
||||
|
@ -1644,9 +1868,38 @@ func createTestFile(path string, size int64) error {
|
|||
os.MkdirAll(baseDir, 0777)
|
||||
}
|
||||
content := make([]byte, size)
|
||||
_, err := rand.Read(content)
|
||||
if err != nil {
|
||||
return err
|
||||
if size > 0 {
|
||||
_, err := rand.Read(content)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return ioutil.WriteFile(path, content, 0666)
|
||||
}
|
||||
|
||||
func getMultipartFormData(values url.Values, fileFieldName, filePath string) (bytes.Buffer, string, error) {
|
||||
var b bytes.Buffer
|
||||
w := multipart.NewWriter(&b)
|
||||
for k, v := range values {
|
||||
for _, s := range v {
|
||||
if err := w.WriteField(k, s); err != nil {
|
||||
return b, "", err
|
||||
}
|
||||
}
|
||||
}
|
||||
if len(fileFieldName) > 0 && len(filePath) > 0 {
|
||||
fw, err := w.CreateFormFile(fileFieldName, filepath.Base(filePath))
|
||||
if err != nil {
|
||||
return b, "", err
|
||||
}
|
||||
f, err := os.Open(filePath)
|
||||
if err != nil {
|
||||
return b, "", err
|
||||
}
|
||||
if _, err = io.Copy(fw, f); err != nil {
|
||||
return b, "", err
|
||||
}
|
||||
}
|
||||
err := w.Close()
|
||||
return b, w.FormDataContentType(), err
|
||||
}
|
||||
@ -6,7 +6,9 @@ import (
|
|||
"html/template"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"net/url"
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/drakkan/sftpgo/dataprovider"
|
||||
|
@ -280,6 +282,38 @@ func TestCompareUserFsConfig(t *testing.T) {
|
|||
if err == nil {
|
||||
t.Errorf("S3 key prefix does not match")
|
||||
}
|
||||
expected.FsConfig.S3Config.KeyPrefix = ""
|
||||
expected.FsConfig.GCSConfig.KeyPrefix = "somedir/subdir"
|
||||
err = compareUserFsConfig(expected, actual)
|
||||
if err == nil {
|
||||
t.Errorf("GCS key prefix does not match")
|
||||
}
|
||||
expected.FsConfig.GCSConfig.KeyPrefix = ""
|
||||
expected.FsConfig.GCSConfig.Bucket = "bucket"
|
||||
err = compareUserFsConfig(expected, actual)
|
||||
if err == nil {
|
||||
t.Errorf("GCS bucket does not match")
|
||||
}
|
||||
expected.FsConfig.GCSConfig.Bucket = ""
|
||||
expected.FsConfig.GCSConfig.StorageClass = "Standard"
|
||||
err = compareUserFsConfig(expected, actual)
|
||||
if err == nil {
|
||||
t.Errorf("GCS storage class does not match")
|
||||
}
|
||||
expected.FsConfig.GCSConfig.StorageClass = ""
|
||||
}
|
||||
|
||||
func TestGCSWebInvalidFormFile(t *testing.T) {
|
||||
form := make(url.Values)
|
||||
form.Set("username", "test_username")
|
||||
form.Set("fs_provider", "2")
|
||||
req, _ := http.NewRequest(http.MethodPost, webUserPath, strings.NewReader(form.Encode()))
|
||||
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
|
||||
req.ParseForm()
|
||||
_, err := getFsConfigFromUserPostFields(req)
|
||||
if err != http.ErrNotMultipart {
|
||||
t.Errorf("unexpected error: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestApiCallsWithBadURL(t *testing.T) {
|
||||
|
|
|||
info:
|
||||
title: SFTPGo
|
||||
description: 'SFTPGo REST API'
|
||||
version: 1.5.0
|
||||
version: 1.6.0
|
||||
|
||||
servers:
|
||||
- url: /api/v1
|
||||
|
@ -743,6 +743,26 @@ components:
|
|||
- access_secret
|
||||
nullable: true
|
||||
description: S3 Compatible Object Storage configuration details
|
||||
GCSConfig:
|
||||
type: object
|
||||
properties:
|
||||
bucket:
|
||||
type: string
|
||||
minLength: 1
|
||||
credentials:
|
||||
type: string
|
||||
format: byte
|
||||
description: Google Cloud Storage JSON credentials, base64 encoded. This field must be populated only when adding or updating a user; it is always omitted when you search/get users, since it contains sensitive data. The credentials will be stored inside the configured "credentials_path"
|
||||
storage_class:
|
||||
type: string
|
||||
key_prefix:
|
||||
type: string
|
||||
description: key_prefix is similar to a chroot directory for a local filesystem. If specified, the SFTP user will only see objects that start with this prefix, so you can restrict access to a specific virtual folder. The prefix, if not empty, must not start with "/" and must end with "/". If empty, the whole bucket contents will be available
|
||||
example: folder/subfolder/
|
||||
required:
|
||||
- bucket
|
||||
nullable: true
|
||||
description: Google Cloud Storage configuration details
|
||||
FilesystemConfig:
|
||||
type: object
|
||||
properties:
|
||||
|
@ -751,12 +771,16 @@ components:
|
|||
enum:
|
||||
- 0
|
||||
- 1
|
||||
- 2
|
||||
description: >
|
||||
Providers:
|
||||
* `0` - local filesystem
|
||||
* `1` - S3 Compatible Object Storage
|
||||
* `2` - Google Cloud Storage
|
||||
s3config:
|
||||
$ref: '#/components/schemas/S3Config'
|
||||
gcsconfig:
|
||||
$ref: '#/components/schemas/GCSConfig'
|
||||
description: Storage filesystem details
|
||||
User:
|
||||
type: object
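The schema above describes how a GCS-backed filesystem is expressed over the REST API: `provider` is `2` and `gcsconfig.credentials` carries the service account JSON, base64 encoded. The following Go sketch, which is not part of this commit, shows how a client could build such a user payload; the field names follow the spec above, while the file path, credentials, admin address and `/api/v1/user` endpoint are illustrative assumptions.

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// read the Google Cloud service account key; the path is just an example
	creds, err := ioutil.ReadFile("/tmp/sa-key.json")
	if err != nil {
		panic(err)
	}
	user := map[string]interface{}{
		"username":    "gcsuser",
		"password":    "secret",
		"home_dir":    "/tmp/gcsuser",
		"permissions": map[string][]string{"/": {"*"}},
		"filesystem": map[string]interface{}{
			"provider": 2, // 2 = Google Cloud Storage per the FilesystemConfig enum above
			"gcsconfig": map[string]interface{}{
				"bucket":     "my-bucket",
				"key_prefix": "folder/subfolder/", // optional virtual root, must end with "/"
				// credentials must be the base64 encoded JSON key file
				"credentials": base64.StdEncoding.EncodeToString(creds),
			},
		},
	}
	body, _ := json.Marshal(user)
	// assumes the REST API is reachable at this address
	resp, err := http.Post("http://127.0.0.1:8080/api/v1/user", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```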
httpd/web.go
@ -1,8 +1,11 @@
|
|||
package httpd
|
||||
|
||||
import (
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"fmt"
|
||||
"html/template"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
|
@ -224,7 +227,7 @@ func getFiltersFromUserPostFields(r *http.Request) dataprovider.UserFilters {
|
|||
return filters
|
||||
}
|
||||
|
||||
func getFsConfigFromUserPostFields(r *http.Request) dataprovider.Filesystem {
|
||||
func getFsConfigFromUserPostFields(r *http.Request) (dataprovider.Filesystem, error) {
|
||||
var fs dataprovider.Filesystem
|
||||
provider, err := strconv.Atoi(r.Form.Get("fs_provider"))
|
||||
if err != nil {
|
||||
|
@ -239,13 +242,33 @@ func getFsConfigFromUserPostFields(r *http.Request) dataprovider.Filesystem {
|
|||
fs.S3Config.Endpoint = r.Form.Get("s3_endpoint")
|
||||
fs.S3Config.StorageClass = r.Form.Get("s3_storage_class")
|
||||
fs.S3Config.KeyPrefix = r.Form.Get("s3_key_prefix")
|
||||
} else if fs.Provider == 2 {
|
||||
fs.GCSConfig.Bucket = r.Form.Get("gcs_bucket")
|
||||
fs.GCSConfig.StorageClass = r.Form.Get("gcs_storage_class")
|
||||
fs.GCSConfig.KeyPrefix = r.Form.Get("gcs_key_prefix")
|
||||
credentials, _, err := r.FormFile("gcs_credential_file")
|
||||
if err == http.ErrMissingFile {
|
||||
return fs, nil
|
||||
}
|
||||
if err != nil {
|
||||
return fs, err
|
||||
}
|
||||
defer credentials.Close()
|
||||
fileBytes, err := ioutil.ReadAll(credentials)
|
||||
if err != nil || len(fileBytes) == 0 {
|
||||
if len(fileBytes) == 0 {
|
||||
err = errors.New("credentials file size must be greater than 0")
|
||||
}
|
||||
return fs, err
|
||||
}
|
||||
fs.GCSConfig.Credentials = base64.StdEncoding.EncodeToString(fileBytes)
|
||||
}
|
||||
return fs
|
||||
return fs, nil
|
||||
}
|
||||
|
||||
func getUserFromPostFields(r *http.Request) (dataprovider.User, error) {
|
||||
var user dataprovider.User
|
||||
err := r.ParseForm()
|
||||
err := r.ParseMultipartForm(maxRequestSize)
|
||||
if err != nil {
|
||||
return user, err
|
||||
}
|
||||
|
@ -292,6 +315,10 @@ func getUserFromPostFields(r *http.Request) (dataprovider.User, error) {
|
|||
}
|
||||
expirationDateMillis = utils.GetTimeAsMsSinceEpoch(expirationDate)
|
||||
}
|
||||
fsConfig, err := getFsConfigFromUserPostFields(r)
|
||||
if err != nil {
|
||||
return user, err
|
||||
}
|
||||
user = dataprovider.User{
|
||||
Username: r.Form.Get("username"),
|
||||
Password: r.Form.Get("password"),
|
||||
|
@ -308,7 +335,7 @@ func getUserFromPostFields(r *http.Request) (dataprovider.User, error) {
|
|||
Status: status,
|
||||
ExpirationDate: expirationDateMillis,
|
||||
Filters: getFiltersFromUserPostFields(r),
|
||||
FsConfig: getFsConfigFromUserPostFields(r),
|
||||
FsConfig: fsConfig,
|
||||
}
|
||||
return user, err
|
||||
}
|
||||
|
@ -365,6 +392,7 @@ func handleWebUpdateUserGet(userID string, w http.ResponseWriter, r *http.Reques
|
|||
}
|
||||
|
||||
func handleWebAddUserPost(w http.ResponseWriter, r *http.Request) {
|
||||
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
|
||||
user, err := getUserFromPostFields(r)
|
||||
if err != nil {
|
||||
renderAddUserPage(w, user, err.Error())
|
||||
|
@ -379,6 +407,7 @@ func handleWebAddUserPost(w http.ResponseWriter, r *http.Request) {
|
|||
}
|
||||
|
||||
func handleWebUpdateUserPost(userID string, w http.ResponseWriter, r *http.Request) {
|
||||
r.Body = http.MaxBytesReader(w, r.Body, maxRequestSize)
|
||||
id, err := strconv.ParseInt(userID, 10, 64)
|
||||
if err != nil {
|
||||
renderBadRequestPage(w, err)
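With the web.go changes above, the admin web form is parsed as multipart/form-data so the GCS credentials can be uploaded as a file under the `gcs_credential_file` field. A minimal sketch of submitting such a form from Go follows; the `/web/user` path and admin address are assumptions for illustration, and most of the other fields the real page posts are omitted, so this only demonstrates how the credentials file part is sent.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	// plain form fields read by getFsConfigFromUserPostFields
	w.WriteField("username", "gcsuser")
	w.WriteField("fs_provider", "2")
	w.WriteField("gcs_bucket", "my-bucket")
	w.WriteField("gcs_key_prefix", "folder/subfolder/")
	// the service account JSON is sent as a file part
	fw, err := w.CreateFormFile("gcs_credential_file", "sa-key.json")
	if err != nil {
		panic(err)
	}
	f, err := os.Open("/tmp/sa-key.json") // example path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if _, err := io.Copy(fw, f); err != nil {
		panic(err)
	}
	w.Close()
	// assumes the web admin interface listens here; a real request includes many more fields
	resp, err := http.Post("http://127.0.0.1:8080/web/user", w.FormDataContentType(), &body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```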
@ -246,22 +246,94 @@ var (
|
|||
Help: "The total number of successful S3 head bucket requests",
|
||||
})
|
||||
|
||||
// totalS3CreateBucket is the metric that reports the total successful S3 create bucket requests
|
||||
totalS3CreateBucket = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_s3_create_bucket",
|
||||
Help: "The total number of successful S3 create bucket requests",
|
||||
})
|
||||
|
||||
// totalS3HeadBucketErrors is the metric that reports the total S3 head bucket errors
|
||||
totalS3HeadBucketErrors = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_s3_head_bucket_errors",
|
||||
Help: "The total number of S3 head bucket errors",
|
||||
})
|
||||
|
||||
// totalS3CreateBucketErrors is the metric that reports the total S3 create bucket errors
|
||||
totalS3CreateBucketErrors = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_s3_create_bucket_errors",
|
||||
Help: "The total number of S3 create bucket errors",
|
||||
// totalGCSUploads is the metric that reports the total number of successful GCS uploads
|
||||
totalGCSUploads = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_uploads_total",
|
||||
Help: "The total number of successful GCS uploads",
|
||||
})
|
||||
|
||||
// totalGCSDownloads is the metric that reports the total number of successful GCS downloads
|
||||
totalGCSDownloads = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_downloads_total",
|
||||
Help: "The total number of successful GCS downloads",
|
||||
})
|
||||
|
||||
// totalGCSUploadErrors is the metric that reports the total number of GCS upload errors
|
||||
totalGCSUploadErrors = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_upload_errors_total",
|
||||
Help: "The total number of GCS upload errors",
|
||||
})
|
||||
|
||||
// totalGCSDownloadErrors is the metric that reports the total number of GCS download errors
|
||||
totalGCSDownloadErrors = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_download_errors_total",
|
||||
Help: "The total number of GCS download errors",
|
||||
})
|
||||
|
||||
// totalGCSUploadSize is the metric that reports the total GCS uploads size as bytes
|
||||
totalGCSUploadSize = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_upload_size",
|
||||
Help: "The total GCS upload size as bytes, partial uploads are included",
|
||||
})
|
||||
|
||||
// totalGCSDownloadSize is the metric that reports the total GCS downloads size as bytes
|
||||
totalGCSDownloadSize = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_download_size",
|
||||
Help: "The total GCS download size as bytes, partial downloads are included",
|
||||
})
|
||||
|
||||
// totalGCSListObjects is the metric that reports the total successful GCS list objects requests
|
||||
totalGCSListObjects = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_list_objects",
|
||||
Help: "The total number of successful GCS list objects requests",
|
||||
})
|
||||
|
||||
// totalGCSCopyObject is the metric that reports the total successful GCS copy object requests
|
||||
totalGCSCopyObject = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_copy_object",
|
||||
Help: "The total number of successful GCS copy object requests",
|
||||
})
|
||||
|
||||
// totalGCSDeleteObject is the metric that reports the total successful GCS delete object requests
|
||||
totalGCSDeleteObject = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_delete_object",
|
||||
Help: "The total number of successful GCS delete object requests",
|
||||
})
|
||||
|
||||
// totalGCSListObjectsErrors is the metric that reports the total GCS list objects errors
|
||||
totalGCSListObjectsErrors = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_list_objects_errors",
|
||||
Help: "The total number of GCS list objects errors",
|
||||
})
|
||||
|
||||
// totalGCSCopyObjectErrors is the metric that reports the total GCS copy object errors
|
||||
totalGCSCopyObjectErrors = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_copy_object_errors",
|
||||
Help: "The total number of GCS copy object errors",
|
||||
})
|
||||
|
||||
// totalGCSDeleteObjectErrors is the metric that reports the total GCS delete object errors
|
||||
totalGCSDeleteObjectErrors = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_delete_object_errors",
|
||||
Help: "The total number of GCS delete object errors",
|
||||
})
|
||||
|
||||
// totalGCSHeadBucket is the metric that reports the total successful GCS head bucket requests
|
||||
totalGCSHeadBucket = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_head_bucket",
|
||||
Help: "The total number of successful GCS head bucket requests",
|
||||
})
|
||||
|
||||
// totalGCSHeadBucketErrors is the metric that reports the total GCS head bucket errors
|
||||
totalGCSHeadBucketErrors = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "sftpgo_gcs_head_bucket_errors",
|
||||
Help: "The total number of GCS head bucket errors",
|
||||
})
|
||||
)
|
||||
|
||||
|
@ -343,12 +415,60 @@ func S3HeadBucketCompleted(err error) {
|
|||
}
|
||||
}
|
||||
|
||||
// S3CreateBucketCompleted updates metrics after an S3 create bucket request terminates
|
||||
func S3CreateBucketCompleted(err error) {
|
||||
if err == nil {
|
||||
totalS3CreateBucket.Inc()
|
||||
// GCSTransferCompleted updates metrics after a GCS upload or a download
|
||||
func GCSTransferCompleted(bytes int64, transferKind int, err error) {
|
||||
if transferKind == 0 {
|
||||
// upload
|
||||
if err == nil {
|
||||
totalGCSUploads.Inc()
|
||||
} else {
|
||||
totalGCSUploadErrors.Inc()
|
||||
}
|
||||
totalGCSUploadSize.Add(float64(bytes))
|
||||
} else {
|
||||
totalS3CreateBucketErrors.Inc()
|
||||
// download
|
||||
if err == nil {
|
||||
totalGCSDownloads.Inc()
|
||||
} else {
|
||||
totalGCSDownloadErrors.Inc()
|
||||
}
|
||||
totalGCSDownloadSize.Add(float64(bytes))
|
||||
}
|
||||
}
|
||||
|
||||
// GCSListObjectsCompleted updates metrics after a GCS list objects request terminates
|
||||
func GCSListObjectsCompleted(err error) {
|
||||
if err == nil {
|
||||
totalGCSListObjects.Inc()
|
||||
} else {
|
||||
totalGCSListObjectsErrors.Inc()
|
||||
}
|
||||
}
|
||||
|
||||
// GCSCopyObjectCompleted updates metrics after a GCS copy object request terminates
|
||||
func GCSCopyObjectCompleted(err error) {
|
||||
if err == nil {
|
||||
totalGCSCopyObject.Inc()
|
||||
} else {
|
||||
totalGCSCopyObjectErrors.Inc()
|
||||
}
|
||||
}
|
||||
|
||||
// GCSDeleteObjectCompleted updates metrics after a GCS delete object request terminates
|
||||
func GCSDeleteObjectCompleted(err error) {
|
||||
if err == nil {
|
||||
totalGCSDeleteObject.Inc()
|
||||
} else {
|
||||
totalGCSDeleteObjectErrors.Inc()
|
||||
}
|
||||
}
|
||||
|
||||
// GCSHeadBucketCompleted updates metrics after a GCS head bucket request terminates
|
||||
func GCSHeadBucketCompleted(err error) {
|
||||
if err == nil {
|
||||
totalGCSHeadBucket.Inc()
|
||||
} else {
|
||||
totalGCSHeadBucketErrors.Inc()
|
||||
}
|
||||
}
|
@ -54,6 +54,7 @@ Output:
|
|||
"download_bandwidth": 60,
|
||||
"expiration_date": 1546297200000,
|
||||
"filesystem": {
|
||||
"gcsconfig": {},
|
||||
"provider": 1,
|
||||
"s3config": {
|
||||
"access_key": "accesskey",
|
||||
|
@ -139,6 +140,7 @@ Output:
|
|||
"download_bandwidth": 80,
|
||||
"expiration_date": 0,
|
||||
"filesystem": {
|
||||
"gcsconfig": {},
|
||||
"provider": 0,
|
||||
"s3config": {}
|
||||
},
|
||||
|
@ -191,6 +193,7 @@ Output:
|
|||
"download_bandwidth": 80,
|
||||
"expiration_date": 0,
|
||||
"filesystem": {
|
||||
"gcsconfig": {},
|
||||
"provider": 0,
|
||||
"s3config": {}
|
||||
},
|
||||
@ -1,5 +1,6 @@
|
|||
#!/usr/bin/env python
|
||||
import argparse
|
||||
import base64
|
||||
from datetime import datetime
|
||||
import json
|
||||
import platform
|
||||
|
@ -74,7 +75,7 @@ class SFTPGoApiRequests:
|
|||
max_sessions=0, quota_size=0, quota_files=0, permissions={}, upload_bandwidth=0, download_bandwidth=0,
|
||||
status=1, expiration_date=0, allowed_ip=[], denied_ip=[], fs_provider='local', s3_bucket='',
|
||||
s3_region='', s3_access_key='', s3_access_secret='', s3_endpoint='', s3_storage_class='',
|
||||
s3_key_prefix=''):
|
||||
s3_key_prefix='', gcs_bucket='', gcs_key_prefix='', gcs_storage_class='', gcs_credentials_file=''):
|
||||
user = {"id":user_id, "username":username, "uid":uid, "gid":gid,
|
||||
"max_sessions":max_sessions, "quota_size":quota_size, "quota_files":quota_files,
|
||||
"upload_bandwidth":upload_bandwidth, "download_bandwidth":download_bandwidth,
|
||||
|
@ -92,9 +93,9 @@ class SFTPGoApiRequests:
|
|||
user.update({"permissions":permissions})
|
||||
if allowed_ip or denied_ip:
|
||||
user.update({"filters":self.buildFilters(allowed_ip, denied_ip)})
|
||||
user.update({"filesystem":self.buildFsConfig(fs_provider, s3_bucket, s3_region, s3_access_key,
|
||||
s3_access_secret, s3_endpoint, s3_storage_class,
|
||||
s3_key_prefix)})
|
||||
user.update({"filesystem":self.buildFsConfig(fs_provider, s3_bucket, s3_region, s3_access_key, s3_access_secret,
|
||||
s3_endpoint, s3_storage_class, s3_key_prefix, gcs_bucket,
|
||||
gcs_key_prefix, gcs_storage_class, gcs_credentials_file)})
|
||||
return user
|
||||
|
||||
def buildPermissions(self, root_perms, subdirs_perms):
|
||||
|
@ -129,13 +130,19 @@ class SFTPGoApiRequests:
|
|||
return filters
|
||||
|
||||
def buildFsConfig(self, fs_provider, s3_bucket, s3_region, s3_access_key, s3_access_secret, s3_endpoint,
|
||||
s3_storage_class, s3_key_prefix):
|
||||
s3_storage_class, s3_key_prefix, gcs_bucket, gcs_key_prefix, gcs_storage_class, gcs_credentials_file):
|
||||
fs_config = {'provider':0}
|
||||
if fs_provider == 'S3':
|
||||
s3config = {'bucket':s3_bucket, 'region':s3_region, 'access_key':s3_access_key, 'access_secret':
|
||||
s3_access_secret, 'endpoint':s3_endpoint, 'storage_class':s3_storage_class, 'key_prefix':
|
||||
s3_key_prefix}
|
||||
fs_config.update({'provider':1, 's3config':s3config})
|
||||
elif fs_provider == 'GCS':
|
||||
gcsconfig = {'bucket':gcs_bucket, 'key_prefix':gcs_key_prefix, 'storage_class':gcs_storage_class}
|
||||
if gcs_credentials_file:
|
||||
with open(gcs_credentials_file) as creds:
|
||||
gcsconfig.update({'credentials':base64.b64encode(creds.read().encode('UTF-8')).decode('UTF-8')})
|
||||
fs_config.update({'provider':2, 'gcsconfig':gcsconfig})
|
||||
return fs_config
|
||||
|
||||
def getUsers(self, limit=100, offset=0, order="ASC", username=""):
|
||||
|
@ -150,11 +157,13 @@ class SFTPGoApiRequests:
|
|||
def addUser(self, username="", password="", public_keys="", home_dir="", uid=0, gid=0, max_sessions=0, quota_size=0,
|
||||
quota_files=0, perms=[], upload_bandwidth=0, download_bandwidth=0, status=1, expiration_date=0,
|
||||
subdirs_permissions=[], allowed_ip=[], denied_ip=[], fs_provider='local', s3_bucket='', s3_region='',
s3_access_key='', s3_access_secret='', s3_endpoint='', s3_storage_class='', s3_key_prefix=''):
s3_access_key='', s3_access_secret='', s3_endpoint='', s3_storage_class='', s3_key_prefix='', gcs_bucket='',
gcs_key_prefix='', gcs_storage_class='', gcs_credentials_file=''):
u = self.buildUserObject(0, username, password, public_keys, home_dir, uid, gid, max_sessions,
quota_size, quota_files, self.buildPermissions(perms, subdirs_permissions), upload_bandwidth, download_bandwidth,
status, expiration_date, allowed_ip, denied_ip, fs_provider, s3_bucket, s3_region,
s3_access_key, s3_access_secret, s3_endpoint, s3_storage_class, s3_key_prefix)
status, expiration_date, allowed_ip, denied_ip, fs_provider, s3_bucket, s3_region, s3_access_key,
s3_access_secret, s3_endpoint, s3_storage_class, s3_key_prefix, gcs_bucket, gcs_key_prefix, gcs_storage_class,
gcs_credentials_file)
r = requests.post(self.userPath, json=u, auth=self.auth, verify=self.verify)
self.printResponse(r)

@@ -162,11 +171,12 @@ class SFTPGoApiRequests:
quota_size=0, quota_files=0, perms=[], upload_bandwidth=0, download_bandwidth=0, status=1,
expiration_date=0, subdirs_permissions=[], allowed_ip=[], denied_ip=[], fs_provider='local',
s3_bucket='', s3_region='', s3_access_key='', s3_access_secret='', s3_endpoint='', s3_storage_class='',
s3_key_prefix=''):
s3_key_prefix='', gcs_bucket='', gcs_key_prefix='', gcs_storage_class='', gcs_credentials_file=''):
u = self.buildUserObject(user_id, username, password, public_keys, home_dir, uid, gid, max_sessions,
quota_size, quota_files, self.buildPermissions(perms, subdirs_permissions), upload_bandwidth, download_bandwidth,
status, expiration_date, allowed_ip, denied_ip, fs_provider, s3_bucket, s3_region, s3_access_key,
s3_access_secret, s3_endpoint, s3_storage_class, s3_key_prefix)
s3_access_secret, s3_endpoint, s3_storage_class, s3_key_prefix, gcs_bucket, gcs_key_prefix, gcs_storage_class,
gcs_credentials_file)
r = requests.put(urlparse.urljoin(self.userPath, "user/" + str(user_id)), json=u, auth=self.auth, verify=self.verify)
self.printResponse(r)

@@ -420,7 +430,7 @@ def addCommonUserArguments(parser):
help='Allowed IP/Mask in CIDR notation. For example "192.168.2.0/24" or "2001:db8::/32". Default: %(default)s')
parser.add_argument('-N', '--denied-ip', type=str, nargs='+', default=[],
help='Denied IP/Mask in CIDR notation. For example "192.168.2.0/24" or "2001:db8::/32". Default: %(default)s')
parser.add_argument('--fs', type=str, default='local', choices=['local', 'S3'],
parser.add_argument('--fs', type=str, default='local', choices=['local', 'S3', 'GCS'],
help='Filesystem provider. Default: %(default)s')
parser.add_argument('--s3-bucket', type=str, default='', help='Default: %(default)s')
parser.add_argument('--s3-key-prefix', type=str, default='', help='Virtual root directory. If non empty only this ' +

@@ -431,6 +441,12 @@ def addCommonUserArguments(parser):
parser.add_argument('--s3-access-secret', type=str, default='', help='Default: %(default)s')
parser.add_argument('--s3-endpoint', type=str, default='', help='Default: %(default)s')
parser.add_argument('--s3-storage-class', type=str, default='', help='Default: %(default)s')
parser.add_argument('--gcs-bucket', type=str, default='', help='Default: %(default)s')
parser.add_argument('--gcs-key-prefix', type=str, default='', help='Virtual root directory. If non empty only this ' +
'directory and its contents will be available. Cannot start with "/". For example "folder/subfolder/".' +
' Default: %(default)s')
parser.add_argument('--gcs-storage-class', type=str, default='', help='Default: %(default)s')
parser.add_argument('--gcs-credentials-file', type=str, default='', help='Default: %(default)s')

if __name__ == '__main__':

@@ -534,14 +550,16 @@ if __name__ == '__main__':
args.quota_size, args.quota_files, args.permissions, args.upload_bandwidth, args.download_bandwidth,
args.status, getDatetimeAsMillisSinceEpoch(args.expiration_date), args.subdirs_permissions, args.allowed_ip,
args.denied_ip, args.fs, args.s3_bucket, args.s3_region, args.s3_access_key, args.s3_access_secret,
args.s3_endpoint, args.s3_storage_class, args.s3_key_prefix)
args.s3_endpoint, args.s3_storage_class, args.s3_key_prefix, args.gcs_bucket, args.gcs_key_prefix,
args.gcs_storage_class, args.gcs_credentials_file)
elif args.command == 'update-user':
api.updateUser(args.id, args.username, args.password, args.public_keys, args.home_dir, args.uid, args.gid,
args.max_sessions, args.quota_size, args.quota_files, args.permissions, args.upload_bandwidth,
args.download_bandwidth, args.status, getDatetimeAsMillisSinceEpoch(args.expiration_date),
args.subdirs_permissions, args.allowed_ip, args.denied_ip, args.fs, args.s3_bucket, args.s3_region,
args.s3_access_key, args.s3_access_secret, args.s3_endpoint, args.s3_storage_class,
args.s3_key_prefix)
args.s3_key_prefix, args.gcs_bucket, args.gcs_key_prefix, args.gcs_storage_class,
args.gcs_credentials_file)
elif args.command == 'delete-user':
api.deleteUser(args.id)
elif args.command == 'get-users':
@@ -6,6 +6,7 @@ import (
"math/rand"
"os"
"os/signal"
"path/filepath"
"strings"
"syscall"
"time"

@@ -146,6 +147,7 @@ func (s *Service) StartPortableMode(sftpdPort int, enabledSSHCommands []string,
}
dataProviderConf := config.GetProviderConf()
dataProviderConf.Driver = dataprovider.MemoryDataProviderName
dataProviderConf.CredentialsPath = filepath.Join(os.TempDir(), "credentials")
config.SetProviderConf(dataProviderConf)
httpdConf := config.GetHTTPDConfig()
httpdConf.BindPort = 0
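For portable mode the relevant change is the temporary credentials directory. A hedged sketch of the same pattern outside the service (it assumes the defaults loaded by the `config` package at init time; not part of the commit):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/drakkan/sftpgo/config"
	"github.com/drakkan/sftpgo/dataprovider"
)

func main() {
	// Same pattern used by StartPortableMode: force the in-memory provider
	// and point CredentialsPath at a throwaway directory under the OS temp
	// dir, so per-user GCS credential files have somewhere to be written.
	providerConf := config.GetProviderConf()
	providerConf.Driver = dataprovider.MemoryDataProviderName
	providerConf.CredentialsPath = filepath.Join(os.TempDir(), "credentials")
	config.SetProviderConf(providerConf)
	fmt.Println("credentials dir:", providerConf.CredentialsPath)
}
```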
@@ -1053,7 +1053,7 @@ func TestLoginInvalidFs(t *testing.T) {
t.Errorf("unable to add user: %v", err)
}
// we update the database using sqlite3 CLI since we cannot add an user with an invalid config
time.Sleep(150 * time.Millisecond)
time.Sleep(200 * time.Millisecond)
updateUserQuery := fmt.Sprintf("UPDATE users SET filesystem='{\"provider\":1}' WHERE id=%v", user.ID)
cmd := exec.Command("sqlite3", dbPath, updateUserQuery)
out, err := cmd.CombinedOutput()

@@ -3039,7 +3039,11 @@ func TestRelativePaths(t *testing.T) {
KeyPrefix: strings.TrimPrefix(user.GetHomeDir(), "/") + "/",
}
s3fs, _ := vfs.NewS3Fs("", user.GetHomeDir(), s3config)
filesystems = append(filesystems, s3fs)
gcsConfig := vfs.GCSFsConfig{
KeyPrefix: strings.TrimPrefix(user.GetHomeDir(), "/") + "/",
}
gcsfs, _ := vfs.NewGCSFs("", user.GetHomeDir(), gcsConfig)
filesystems = append(filesystems, s3fs, gcsfs)
for _, fs := range filesystems {
path = filepath.Join(user.HomeDir, "/")
rel = fs.GetRelativePath(path)

@@ -3104,7 +3108,11 @@ func TestResolvePaths(t *testing.T) {
}
os.MkdirAll(user.GetHomeDir(), 0777)
s3fs, _ := vfs.NewS3Fs("", user.GetHomeDir(), s3config)
filesystems = append(filesystems, s3fs)
gcsConfig := vfs.GCSFsConfig{
KeyPrefix: strings.TrimPrefix(user.GetHomeDir(), "/") + "/",
}
gcsfs, _ := vfs.NewGCSFs("", user.GetHomeDir(), gcsConfig)
filesystems = append(filesystems, s3fs, gcsfs)
for _, fs := range filesystems {
path = "/"
resolved, _ = fs.ResolvePath(filepath.ToSlash(path))

@@ -3509,7 +3517,8 @@ func TestSCPBasicHandling(t *testing.T) {
t.Errorf("stat for the downloaded file must succeed")
} else {
if fi.Size() != testFileSize {
t.Errorf("size of the file downloaded via SCP does not match the expected one")
t.Errorf("size of the file downloaded via SCP does not match the expected one: %v/%v",
fi.Size(), testFileSize)
}
}
os.Remove(localPath)

@@ -3582,7 +3591,8 @@ func TestSCPUploadFileOverwrite(t *testing.T) {
t.Errorf("stat for the downloaded file must succeed")
} else {
if fi.Size() != testFileSize {
t.Errorf("size of the file downloaded via SCP does not match the expected one")
t.Errorf("size of the file downloaded via SCP does not match the expected one: %v/%v",
fi.Size(), testFileSize)
}
}
os.Remove(localPath)
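The two tests above only exercise the pure path-mapping helpers, so they can build the filesystems without valid credentials. A minimal, self-contained sketch of the same idea (the `appuser/` prefix and the sample paths are made up for illustration):

```go
package main

import (
	"fmt"
	"os"

	"github.com/drakkan/sftpgo/vfs"
)

func main() {
	// KeyPrefix acts as a virtual root: only objects under "appuser/" are
	// visible. The bucket is left empty on purpose, so NewGCSFs returns a
	// validation error, but the returned object can still be used for the
	// pure path-mapping helpers, exactly as the tests above do.
	gcsfs, _ := vfs.NewGCSFs("", os.TempDir(), vfs.GCSFsConfig{KeyPrefix: "appuser/"})

	fmt.Println(gcsfs.GetRelativePath("appuser/docs/report.txt")) // /docs/report.txt
	fmt.Println(gcsfs.GetRelativePath("other/file.txt"))          // /
}
```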
@@ -42,7 +42,8 @@
"http_notification_url": ""
},
"external_auth_program": "",
"external_auth_scope": 0
"external_auth_scope": 0,
"credentials_path": "credentials"
},
"httpd": {
"bind_port": 8080,
@@ -15,7 +15,7 @@
<div class="card-body text-form-error">{{.Error}}</div>
</div>
{{end}}
<form id="user_form" action="{{.CurrentURL}}" method="POST" autocomplete="off">
<form id="user_form" enctype="multipart/form-data" action="{{.CurrentURL}}" method="POST" autocomplete="off">
<div class="form-group row">
<label for="idUsername" class="col-sm-2 col-form-label">Username</label>
<div class="col-sm-10">

@@ -194,14 +194,15 @@
<div class="form-group row">
<label for="idFilesystem" class="col-sm-2 col-form-label">Storage</label>
<div class="col-sm-10">
<select class="form-control" id="idFilesystem" name="fs_provider">
<select class="form-control" id="idFilesystem" name="fs_provider" onchange="onFilesystemChanged(this.value)">
<option value="0" {{if eq .User.FsConfig.Provider 0 }}selected{{end}}>local</option>
<option value="1" {{if eq .User.FsConfig.Provider 1 }}selected{{end}}>S3</option>
<option value="1" {{if eq .User.FsConfig.Provider 1 }}selected{{end}}>Amazon S3 (Compatible)</option>
<option value="2" {{if eq .User.FsConfig.Provider 2 }}selected{{end}}>Google Cloud Storage</option>
</select>
</div>
</div>

<div class="form-group row">
<div class="form-group row s3">
<label for="idS3Bucket" class="col-sm-2 col-form-label">S3 Bucket</label>
<div class="col-sm-3">
<input type="text" class="form-control" id="idS3Bucket" name="s3_bucket" placeholder=""

@@ -215,7 +216,7 @@
</div>
</div>

<div class="form-group row">
<div class="form-group row s3">
<label for="idS3AccessKey" class="col-sm-2 col-form-label">S3 Access Key</label>
<div class="col-sm-3">
<input type="text" class="form-control" id="idS3AccessKey" name="s3_access_key" placeholder=""

@@ -229,7 +230,7 @@
</div>
</div>

<div class="form-group row">
<div class="form-group row s3">
<label for="idS3StorageClass" class="col-sm-2 col-form-label">S3 Storage Class</label>
<div class="col-sm-3">
<input type="text" class="form-control" id="idS3StorageClass" name="s3_storage_class" placeholder=""

@@ -243,7 +244,7 @@
</div>
</div>

<div class="form-group row">
<div class="form-group row s3">
<label for="idS3KeyPrefix" class="col-sm-2 col-form-label">S3 Key Prefix</label>
<div class="col-sm-10">
<input type="text" class="form-control" id="idS3KeyPrefix" name="s3_key_prefix" placeholder=""

@@ -254,6 +255,43 @@
</div>
</div>

<div class="form-group row gcs">
<label for="idGCSBucket" class="col-sm-2 col-form-label">GCS Bucket</label>
<div class="col-sm-10">
<input type="text" class="form-control" id="idGCSBucket" name="gcs_bucket" placeholder=""
value="{{.User.FsConfig.GCSConfig.Bucket}}" maxlength="255">
</div>
</div>

<div class="form-group row gcs">
<label for="idGCSCredentialFile" class="col-sm-2 col-form-label">GCS Credential file</label>
<div class="col-sm-4">
<input type="file" class="form-control-file" id="idGCSCredentialFile" name="gcs_credential_file"
aria-describedby="GCSCredentialsHelpBlock">
<small id="GCSCredentialsHelpBlock" class="form-text text-muted">
Add or update credentials from a JSON file
</small>
</div>
<div class="col-sm-1"></div>
<label for="idGCSStorageClass" class="col-sm-2 col-form-label">GCS Storage Class</label>
<div class="col-sm-3">
<input type="text" class="form-control" id="idGCSStorageClass" name="gcs_storage_class" placeholder=""
value="{{.User.FsConfig.GCSConfig.StorageClass}}" maxlength="255">
</div>
</div>

<div class="form-group row gcs">
<label for="idGCSKeyPrefix" class="col-sm-2 col-form-label">GCS Key Prefix</label>
<div class="col-sm-10">
<input type="text" class="form-control" id="idGCSKeyPrefix" name="gcs_key_prefix" placeholder=""
value="{{.User.FsConfig.GCSConfig.KeyPrefix}}" maxlength="255" aria-describedby="GCSKeyPrefixHelpBlock">
<small id="GCSKeyPrefixHelpBlock" class="form-text text-muted">
Similar to a chroot for local filesystem. Cannot start with "/". Example: "somedir/subdir/".
</small>
</div>
</div>

<input type="hidden" name="expiration_date" id="hidden_start_datetime" value="">
<button type="submit" class="btn btn-primary float-right mt-3 mb-5 px-5 px-3">Submit</button>
</form>

@@ -295,6 +333,22 @@
}
return true;
});

onFilesystemChanged('{{.User.FsConfig.Provider}}');

});

function onFilesystemChanged(val){
if (val == '1'){
$('.form-group.row.gcs').hide();
$('.form-group.row.s3').show();
} else if (val == '2'){
$('.form-group.row.gcs').show();
$('.form-group.row.s3').hide();
} else {
$('.form-group.row.gcs').hide();
$('.form-group.row.s3').hide();
}
}
</script>
{{end}}
@@ -5,8 +5,8 @@ import (
"time"
)

// S3FileInfo implements os.FileInfo for a file in S3.
type S3FileInfo struct {
// FileInfo implements os.FileInfo for a file in S3.
type FileInfo struct {
name string
sizeInBytes int64
modTime time.Time

@@ -14,14 +14,14 @@ type S3FileInfo struct {
sys interface{}
}

// NewS3FileInfo creates file info.
func NewS3FileInfo(name string, isDirectory bool, sizeInBytes int64, modTime time.Time) S3FileInfo {
// NewFileInfo creates file info.
func NewFileInfo(name string, isDirectory bool, sizeInBytes int64, modTime time.Time) FileInfo {
mode := os.FileMode(0644)
if isDirectory {
mode = os.FileMode(0755) | os.ModeDir
}

return S3FileInfo{
return FileInfo{
name: name,
sizeInBytes: sizeInBytes,
modTime: modTime,

@@ -30,31 +30,31 @@ func NewS3FileInfo(name string, isDirectory bool, sizeInBytes int64, modTime tim
}

// Name provides the base name of the file.
func (fi S3FileInfo) Name() string {
func (fi FileInfo) Name() string {
return fi.name
}

// Size provides the length in bytes for a file.
func (fi S3FileInfo) Size() int64 {
func (fi FileInfo) Size() int64 {
return fi.sizeInBytes
}

// Mode provides the file mode bits
func (fi S3FileInfo) Mode() os.FileMode {
func (fi FileInfo) Mode() os.FileMode {
return fi.mode
}

// ModTime provides the last modification time.
func (fi S3FileInfo) ModTime() time.Time {
func (fi FileInfo) ModTime() time.Time {
return fi.modTime
}

// IsDir provides the abbreviation for Mode().IsDir()
func (fi S3FileInfo) IsDir() bool {
func (fi FileInfo) IsDir() bool {
return fi.mode&os.ModeDir != 0
}

// Sys provides the underlying data source (can return nil)
func (fi S3FileInfo) Sys() interface{} {
func (fi FileInfo) Sys() interface{} {
return fi.getFileInfoSys()
}

@@ -21,7 +21,7 @@ func init() {
}
}

func (fi S3FileInfo) getFileInfoSys() interface{} {
func (fi FileInfo) getFileInfoSys() interface{} {
return &syscall.Stat_t{
Uid: uint32(defaultUID),
Gid: uint32(defaultGID)}

@@ -2,6 +2,6 @@ package vfs

import "syscall"

func (fi S3FileInfo) getFileInfoSys() interface{} {
func (fi FileInfo) getFileInfoSys() interface{} {
return syscall.Win32FileAttributeData{}
}
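The rename from `S3FileInfo` to `FileInfo` reflects that the same `os.FileInfo` implementation is now shared by the S3 and GCS backends. A small illustrative sketch (the entry name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"time"

	"github.com/drakkan/sftpgo/vfs"
)

func main() {
	// NewFileInfo returns a value that satisfies os.FileInfo: directories get
	// mode 0755|ModeDir, regular files get 0644.
	var fi os.FileInfo = vfs.NewFileInfo("reports", true, 0, time.Now())
	fmt.Println(fi.Name(), fi.IsDir(), fi.Mode())
}
```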
511
vfs/gcsfs.go
Normal file
@@ -0,0 +1,511 @@
package vfs

import (
"context"
"errors"
"fmt"
"io"
"net/http"
"os"
"path"
"strings"
"time"

"cloud.google.com/go/storage"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/eikenb/pipeat"
"google.golang.org/api/googleapi"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
)

var (
// we cannot use attrs selection until this bug is fixed:
//
// https://github.com/googleapis/google-cloud-go/issues/1763
//
gcsDefaultFieldsSelection = []string{"Name", "Size", "Deleted", "Updated"}
)

// GCSFsConfig defines the configuration for Google Cloud Storage based filesystem
type GCSFsConfig struct {
Bucket string `json:"bucket,omitempty"`
// KeyPrefix is similar to a chroot directory for local filesystem.
// If specified the SFTP user will only see objects that starts with
// this prefix and so you can restrict access to a specific virtual
// folder. The prefix, if not empty, must not start with "/" and must
// end with "/".
// If empty the whole bucket contents will be available
KeyPrefix string `json:"key_prefix,omitempty"`
CredentialFile string `json:"-"`
Credentials string `json:"credentials,omitempty"`
StorageClass string `json:"storage_class,omitempty"`
}
// GCSFs is a Fs implementation for Google Cloud Storage.
type GCSFs struct {
connectionID string
localTempDir string
config GCSFsConfig
svc *storage.Client
ctxTimeout time.Duration
ctxLongTimeout time.Duration
}

// NewGCSFs returns an GCSFs object that allows to interact with Google Cloud Storage
func NewGCSFs(connectionID, localTempDir string, config GCSFsConfig) (Fs, error) {
var err error
fs := GCSFs{
connectionID: connectionID,
localTempDir: localTempDir,
config: config,
ctxTimeout: 30 * time.Second,
ctxLongTimeout: 300 * time.Second,
}
if err = ValidateGCSFsConfig(&fs.config, fs.config.CredentialFile); err != nil {
return fs, err
}
ctx := context.Background()
fs.svc, err = storage.NewClient(ctx, option.WithCredentialsFile(fs.config.CredentialFile))
return fs, err
}

// Name returns the name for the Fs implementation
func (fs GCSFs) Name() string {
return fmt.Sprintf("GCSFs bucket: %#v", fs.config.Bucket)
}

// ConnectionID returns the SSH connection ID associated to this Fs implementation
func (fs GCSFs) ConnectionID() string {
return fs.connectionID
}

// Stat returns a FileInfo describing the named file
func (fs GCSFs) Stat(name string) (os.FileInfo, error) {
var result FileInfo
var err error
if len(name) == 0 || name == "." {
err := fs.checkIfBucketExists()
if err != nil {
return result, err
}
return NewFileInfo(name, true, 0, time.Time{}), nil
}
if fs.config.KeyPrefix == name+"/" {
return NewFileInfo(name, true, 0, time.Time{}), nil
}
prefix := fs.getPrefixForStat(name)
query := &storage.Query{Prefix: prefix, Delimiter: "/"}
/*err = query.SetAttrSelection(gcsDefaultFieldsSelection)
if err != nil {
return result, err
}*/
ctx, cancelFn := context.WithDeadline(context.Background(), time.Now().Add(fs.ctxTimeout))
defer cancelFn()
bkt := fs.svc.Bucket(fs.config.Bucket)
it := bkt.Objects(ctx, query)
for {
attrs, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
metrics.GCSListObjectsCompleted(err)
return result, err
}
if len(attrs.Prefix) > 0 {
if fs.isEqual(attrs.Prefix, name) {
result = NewFileInfo(name, true, 0, time.Time{})
}
} else {
if !attrs.Deleted.IsZero() {
continue
}
if fs.isEqual(attrs.Name, name) {
isDir := strings.HasSuffix(attrs.Name, "/")
result = NewFileInfo(name, isDir, attrs.Size, attrs.Updated)
}
}
}
metrics.GCSListObjectsCompleted(nil)
if len(result.Name()) == 0 {
err = errors.New("404 no such file or directory")
}
return result, err
}

// Lstat returns a FileInfo describing the named file
func (fs GCSFs) Lstat(name string) (os.FileInfo, error) {
return fs.Stat(name)
}
// Open opens the named file for reading
func (fs GCSFs) Open(name string) (*os.File, *pipeat.PipeReaderAt, func(), error) {
r, w, err := pipeat.AsyncWriterPipeInDir(fs.localTempDir)
if err != nil {
return nil, nil, nil, err
}
bkt := fs.svc.Bucket(fs.config.Bucket)
obj := bkt.Object(name)
ctx, cancelFn := context.WithCancel(context.Background())
objectReader, err := obj.NewReader(ctx)
if err != nil {
r.Close()
w.Close()
cancelFn()
return nil, nil, nil, err
}
go func() {
defer cancelFn()
defer objectReader.Close()
n, err := io.Copy(w, objectReader)
w.CloseWithError(err)
fsLog(fs, logger.LevelDebug, "download completed, path: %#v size: %v, err: %v", name, n, err)
metrics.GCSTransferCompleted(n, 1, err)
}()
return nil, r, cancelFn, nil
}

// Create creates or opens the named file for writing
func (fs GCSFs) Create(name string, flag int) (*os.File, *pipeat.PipeWriterAt, func(), error) {
r, w, err := pipeat.PipeInDir(fs.localTempDir)
if err != nil {
return nil, nil, nil, err
}
bkt := fs.svc.Bucket(fs.config.Bucket)
obj := bkt.Object(name)
ctx, cancelFn := context.WithCancel(context.Background())
objectWriter := obj.NewWriter(ctx)
if len(fs.config.StorageClass) > 0 {
objectWriter.ObjectAttrs.StorageClass = fs.config.StorageClass
}
go func() {
defer cancelFn()
defer objectWriter.Close()
n, err := io.Copy(objectWriter, r)
r.CloseWithError(err)
fsLog(fs, logger.LevelDebug, "upload completed, path: %#v, read bytes: %v, err: %v", name, n, err)
metrics.GCSTransferCompleted(n, 0, err)
}()
return nil, w, cancelFn, nil
}

// Rename renames (moves) source to target.
// We don't support renaming non empty directories since we should
// rename all the contents too and this could take long time: think
// about directories with thousands of files, for each file we should
// execute a CopyObject call.
func (fs GCSFs) Rename(source, target string) error {
if source == target {
return nil
}
fi, err := fs.Stat(source)
if err != nil {
return err
}
if fi.IsDir() {
contents, err := fs.ReadDir(source)
if err != nil {
return err
}
if len(contents) > 0 {
return fmt.Errorf("Cannot rename non empty directory: %#v", source)
}
if !strings.HasSuffix(source, "/") {
source += "/"
}
if !strings.HasSuffix(target, "/") {
target += "/"
}
}
src := fs.svc.Bucket(fs.config.Bucket).Object(source)
dst := fs.svc.Bucket(fs.config.Bucket).Object(target)
ctx, cancelFn := context.WithDeadline(context.Background(), time.Now().Add(fs.ctxTimeout))
defer cancelFn()
copier := dst.CopierFrom(src)
if len(fs.config.StorageClass) > 0 {
copier.StorageClass = fs.config.StorageClass
}
_, err = copier.Run(ctx)
metrics.GCSCopyObjectCompleted(err)
if err != nil {
return err
}
return fs.Remove(source, fi.IsDir())
}

// Remove removes the named file or (empty) directory.
func (fs GCSFs) Remove(name string, isDir bool) error {
if isDir {
contents, err := fs.ReadDir(name)
if err != nil {
return err
}
if len(contents) > 0 {
return fmt.Errorf("Cannot remove non empty directory: %#v", name)
}
if !strings.HasSuffix(name, "/") {
name += "/"
}
}
ctx, cancelFn := context.WithDeadline(context.Background(), time.Now().Add(fs.ctxTimeout))
defer cancelFn()
err := fs.svc.Bucket(fs.config.Bucket).Object(name).Delete(ctx)
metrics.GCSDeleteObjectCompleted(err)
return err
}
// Mkdir creates a new directory with the specified name and default permissions
func (fs GCSFs) Mkdir(name string) error {
_, err := fs.Stat(name)
if !fs.IsNotExist(err) {
return err
}
if !strings.HasSuffix(name, "/") {
name += "/"
}
_, w, _, err := fs.Create(name, 0)
if err != nil {
return err
}
return w.Close()
}

// Symlink creates source as a symbolic link to target.
func (GCSFs) Symlink(source, target string) error {
return errors.New("403 symlinks are not supported")
}

// Chown changes the numeric uid and gid of the named file.
// Silently ignored.
func (GCSFs) Chown(name string, uid int, gid int) error {
return nil
}

// Chmod changes the mode of the named file to mode.
// Silently ignored.
func (GCSFs) Chmod(name string, mode os.FileMode) error {
return nil
}

// Chtimes changes the access and modification times of the named file.
// Silently ignored.
func (GCSFs) Chtimes(name string, atime, mtime time.Time) error {
return errors.New("403 chtimes is not supported")
}

// ReadDir reads the directory named by dirname and returns
// a list of directory entries.
func (fs GCSFs) ReadDir(dirname string) ([]os.FileInfo, error) {
var result []os.FileInfo
// dirname must already be cleaned
prefix := ""
if len(dirname) > 0 && dirname != "." {
prefix = strings.TrimPrefix(dirname, "/")
if !strings.HasSuffix(prefix, "/") {
prefix += "/"
}
}
query := &storage.Query{Prefix: prefix, Delimiter: "/"}
/*err := query.SetAttrSelection(gcsDefaultFieldsSelection)
if err != nil {
return result, err
}*/
ctx, cancelFn := context.WithDeadline(context.Background(), time.Now().Add(fs.ctxTimeout))
defer cancelFn()
bkt := fs.svc.Bucket(fs.config.Bucket)
it := bkt.Objects(ctx, query)
for {
attrs, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
metrics.GCSListObjectsCompleted(err)
return result, err
}
if len(attrs.Prefix) > 0 {
name, _ := fs.resolve(attrs.Prefix, prefix)
result = append(result, NewFileInfo(name, true, 0, time.Time{}))
} else {
name, isDir := fs.resolve(attrs.Name, prefix)
if len(name) == 0 {
continue
}
if !attrs.Deleted.IsZero() {
continue
}
result = append(result, NewFileInfo(name, isDir, attrs.Size, attrs.Updated))
}
}
metrics.GCSListObjectsCompleted(nil)
return result, nil
}
// IsUploadResumeSupported returns true if upload resume is supported.
// SFTP resume is not supported on Google Cloud Storage
func (GCSFs) IsUploadResumeSupported() bool {
return false
}

// IsAtomicUploadSupported returns true if atomic upload is supported.
// Google Cloud Storage uploads are already atomic, we don't need to upload
// to a temporary file
func (GCSFs) IsAtomicUploadSupported() bool {
return false
}

// IsNotExist returns a boolean indicating whether the error is known to
// report that a file or directory does not exist
func (GCSFs) IsNotExist(err error) bool {
if err == nil {
return false
}
if err == storage.ErrObjectNotExist || err == storage.ErrBucketNotExist {
return true
}
if e, ok := err.(*googleapi.Error); ok {
if e.Code == http.StatusNotFound {
return true
}
}
return strings.Contains(err.Error(), "404")
}

// IsPermission returns a boolean indicating whether the error is known to
// report that permission is denied.
func (GCSFs) IsPermission(err error) bool {
if err == nil {
return false
}
if e, ok := err.(*googleapi.Error); ok {
if e.Code == http.StatusForbidden || e.Code == http.StatusUnauthorized {
return true
}
}
return strings.Contains(err.Error(), "403")
}

// CheckRootPath creates the specified root directory if it does not exists
func (fs GCSFs) CheckRootPath(username string, uid int, gid int) bool {
// we need a local directory for temporary files
osFs := NewOsFs(fs.ConnectionID(), fs.localTempDir)
osFs.CheckRootPath(username, uid, gid)
return fs.checkIfBucketExists() != nil
}

// ScanRootDirContents returns the number of files contained in the bucket,
// and their size
func (fs GCSFs) ScanRootDirContents() (int, int64, error) {
numFiles := 0
size := int64(0)
query := &storage.Query{Prefix: fs.config.KeyPrefix}
err := query.SetAttrSelection(gcsDefaultFieldsSelection)
if err != nil {
return numFiles, size, err
}
ctx, cancelFn := context.WithDeadline(context.Background(), time.Now().Add(fs.ctxLongTimeout))
defer cancelFn()
bkt := fs.svc.Bucket(fs.config.Bucket)
it := bkt.Objects(ctx, query)
for {
attrs, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
metrics.GCSListObjectsCompleted(err)
return numFiles, size, err
}
if !attrs.Deleted.IsZero() {
continue
}
numFiles++
size += attrs.Size
}
metrics.GCSListObjectsCompleted(nil)
return numFiles, size, err
}

// GetAtomicUploadPath returns the path to use for an atomic upload.
// Google Cloud Storage uploads are already atomic, we never call this method for GCS
func (GCSFs) GetAtomicUploadPath(name string) string {
return ""
}

// GetRelativePath returns the path for a file relative to the user's home dir.
// This is the path as seen by SFTP users
func (fs GCSFs) GetRelativePath(name string) string {
rel := path.Clean(name)
if rel == "." {
rel = ""
}
if !path.IsAbs(rel) {
rel = "/" + rel
}
if len(fs.config.KeyPrefix) > 0 {
if !strings.HasPrefix(rel, "/"+fs.config.KeyPrefix) {
rel = "/"
}
rel = path.Clean("/" + strings.TrimPrefix(rel, "/"+fs.config.KeyPrefix))
}
return rel
}

// Join joins any number of path elements into a single path
func (GCSFs) Join(elem ...string) string {
return strings.TrimPrefix(path.Join(elem...), "/")
}

// ResolvePath returns the matching filesystem path for the specified sftp path
func (fs GCSFs) ResolvePath(sftpPath string) (string, error) {
if !path.IsAbs(sftpPath) {
sftpPath = path.Clean("/" + sftpPath)
}
return fs.Join(fs.config.KeyPrefix, strings.TrimPrefix(sftpPath, "/")), nil
}

func (fs *GCSFs) resolve(name string, prefix string) (string, bool) {
result := strings.TrimPrefix(name, prefix)
isDir := strings.HasSuffix(result, "/")
if isDir {
result = strings.TrimSuffix(result, "/")
}
return result, isDir
}

func (fs *GCSFs) isEqual(key string, sftpName string) bool {
if key == sftpName {
return true
}
if key == sftpName+"/" {
return true
}
if key+"/" == sftpName {
return true
}
return false
}

func (fs *GCSFs) checkIfBucketExists() error {
ctx, cancelFn := context.WithDeadline(context.Background(), time.Now().Add(fs.ctxTimeout))
defer cancelFn()
bkt := fs.svc.Bucket(fs.config.Bucket)
_, err := bkt.Attrs(ctx)
metrics.GCSHeadBucketCompleted(err)
return err
}

func (fs *GCSFs) getPrefixForStat(name string) string {
prefix := path.Dir(name)
if prefix == "/" || prefix == "." || len(prefix) == 0 {
prefix = ""
} else {
prefix = strings.TrimPrefix(prefix, "/")
if !strings.HasSuffix(prefix, "/") {
prefix += "/"
}
}
return prefix
}
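Putting the new backend together, a hedged usage sketch: it assumes a real bucket and a service account JSON file (both names below are placeholders), creates the Fs with `NewGCSFs`, resolves the SFTP root through the configured key prefix and lists it:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/drakkan/sftpgo/vfs"
)

func main() {
	// Placeholder bucket, prefix and credential file: replace with real ones.
	cfg := vfs.GCSFsConfig{
		Bucket:         "my-sftpgo-bucket",
		KeyPrefix:      "users/alice/",
		CredentialFile: "/path/to/service-account.json",
	}
	// localTempDir (here os.TempDir()) is where Open/Create buffer data.
	fs, err := vfs.NewGCSFs("example-conn", os.TempDir(), cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Resolve the SFTP root to an object key prefix, then list it.
	root, err := fs.ResolvePath("/")
	if err != nil {
		log.Fatal(err)
	}
	entries, err := fs.ReadDir(root)
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range entries {
		fmt.Println(entry.Name(), entry.IsDir(), entry.Size())
	}
}
```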
39
vfs/s3fs.go

@@ -21,15 +21,15 @@ import (
"github.com/eikenb/pipeat"
)

// S3FsConfig defines the configuration for S3fs
// S3FsConfig defines the configuration for S3 based filesystem
type S3FsConfig struct {
Bucket string `json:"bucket,omitempty"`
// KeyPrefix is similar to a chroot directory for local filesystem.
// If specified the SFTP user will only see contents that starts with
// If specified the SFTP user will only see objects that starts with
// this prefix and so you can restrict access to a specific virtual
// folder. The prefix, if not empty, must not start with "/" and must
// end with "/".
//If empty the whole bucket contents will be available
// If empty the whole bucket contents will be available
KeyPrefix string `json:"key_prefix,omitempty"`
Region string `json:"region,omitempty"`
AccessKey string `json:"access_key,omitempty"`

@@ -70,7 +70,6 @@ func NewS3Fs(connectionID, localTempDir string, config S3FsConfig) (Fs, error) {
Region: aws.String(fs.config.Region),
Credentials: credentials.NewStaticCredentials(fs.config.AccessKey, fs.config.AccessSecret, ""),
}
//config.WithLogLevel(aws.LogDebugWithHTTPBody)
if len(fs.config.Endpoint) > 0 {
awsConfig.Endpoint = aws.String(fs.config.Endpoint)
awsConfig.S3ForcePathStyle = aws.Bool(true)

@@ -95,16 +94,16 @@ func (fs S3Fs) ConnectionID() string {

// Stat returns a FileInfo describing the named file
func (fs S3Fs) Stat(name string) (os.FileInfo, error) {
var result S3FileInfo
var result FileInfo
if name == "/" || name == "." {
err := fs.checkIfBucketExists()
if err != nil {
return result, err
}
return NewS3FileInfo(name, true, 0, time.Time{}), nil
return NewFileInfo(name, true, 0, time.Time{}), nil
}
if "/"+fs.config.KeyPrefix == name+"/" {
return NewS3FileInfo(name, true, 0, time.Time{}), nil
return NewFileInfo(name, true, 0, time.Time{}), nil
}
prefix := path.Dir(name)
if prefix == "/" || prefix == "." {

@@ -124,7 +123,7 @@ func (fs S3Fs) Stat(name string) (os.FileInfo, error) {
}, func(page *s3.ListObjectsV2Output, lastPage bool) bool {
for _, p := range page.CommonPrefixes {
if fs.isEqual(p.Prefix, name) {
result = NewS3FileInfo(name, true, 0, time.Time{})
result = NewFileInfo(name, true, 0, time.Time{})
return false
}
}

@@ -133,7 +132,7 @@ func (fs S3Fs) Stat(name string) (os.FileInfo, error) {
objectSize := *fileObject.Size
objectModTime := *fileObject.LastModified
isDir := strings.HasSuffix(*fileObject.Key, "/")
result = NewS3FileInfo(name, isDir, objectSize, objectModTime)
result = NewFileInfo(name, isDir, objectSize, objectModTime)
return false
}
}

@@ -325,7 +324,7 @@ func (fs S3Fs) ReadDir(dirname string) ([]os.FileInfo, error) {
}, func(page *s3.ListObjectsV2Output, lastPage bool) bool {
for _, p := range page.CommonPrefixes {
name, isDir := fs.resolve(p.Prefix, prefix)
result = append(result, NewS3FileInfo(name, isDir, 0, time.Time{}))
result = append(result, NewFileInfo(name, isDir, 0, time.Time{}))
}
for _, fileObject := range page.Contents {
objectSize := *fileObject.Size

@@ -334,7 +333,7 @@ func (fs S3Fs) ReadDir(dirname string) ([]os.FileInfo, error) {
if len(name) == 0 {
continue
}
result = append(result, NewS3FileInfo(name, isDir, objectSize, objectModTime))
result = append(result, NewFileInfo(name, isDir, objectSize, objectModTime))
}
return true
})

@@ -394,23 +393,7 @@ func (fs S3Fs) CheckRootPath(username string, uid int, gid int) bool {
// we need a local directory for temporary files
osFs := NewOsFs(fs.ConnectionID(), fs.localTempDir)
osFs.CheckRootPath(username, uid, gid)
err := fs.checkIfBucketExists()
if err == nil {
return true
}
if !fs.IsNotExist(err) {
return false
}
ctx, cancelFn := context.WithDeadline(context.Background(), time.Now().Add(fs.ctxTimeout))
defer cancelFn()
input := &s3.CreateBucketInput{
Bucket: aws.String(fs.config.Bucket),
}
_, err = fs.svc.CreateBucketWithContext(ctx, input)
fsLog(fs, logger.LevelDebug, "bucket %#v for user %#v does not exists, try to create, error: %v",
fs.config.Bucket, username, err)
metrics.S3CreateBucketCompleted(err)
return err == nil
return fs.checkIfBucketExists() != nil
}

// ScanRootDirContents returns the number of files contained in the bucket,
29
vfs/vfs.go

@@ -3,6 +3,7 @@ package vfs

import (
"errors"
"fmt"
"os"
"path"
"runtime"

@@ -14,7 +15,7 @@ import (
"github.com/pkg/sftp"
)

// Fs defines the interface for filesystems backends
// Fs defines the interface for filesystem backends
type Fs interface {
Name() string
ConnectionID() string

@@ -94,6 +95,32 @@ func ValidateS3FsConfig(config *S3FsConfig) error {
return nil
}

// ValidateGCSFsConfig returns nil if the specified GCS config is valid, otherwise an error
func ValidateGCSFsConfig(config *GCSFsConfig, credentialsFilePath string) error {
if len(config.Bucket) == 0 {
return errors.New("bucket cannot be empty")
}
if len(config.KeyPrefix) > 0 {
if strings.HasPrefix(config.KeyPrefix, "/") {
return errors.New("key_prefix cannot start with /")
}
config.KeyPrefix = path.Clean(config.KeyPrefix)
if !strings.HasSuffix(config.KeyPrefix, "/") {
config.KeyPrefix += "/"
}
}
if len(config.Credentials) == 0 {
fi, err := os.Stat(credentialsFilePath)
if err != nil {
return fmt.Errorf("invalid credentials %v", err)
}
if fi.Size() == 0 {
return errors.New("credentials cannot be empty")
}
}
return nil
}

// SetPathPermissions calls fs.Chown.
// It does nothing for local filesystem on windows
func SetPathPermissions(fs Fs, path string, uid int, gid int) {
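A short sketch of how `ValidateGCSFsConfig` behaves (bucket name and inline credentials below are placeholders): inline credentials skip the credential-file check and the key prefix is normalized to end with a trailing slash:

```go
package main

import (
	"fmt"

	"github.com/drakkan/sftpgo/vfs"
)

func main() {
	// Placeholder values: inline credentials skip the credential-file check,
	// and the validator cleans the key prefix and appends a trailing "/".
	cfg := vfs.GCSFsConfig{
		Bucket:      "my-bucket",
		KeyPrefix:   "folder/subfolder",
		Credentials: `{"type": "service_account"}`,
	}
	if err := vfs.ValidateGCSFsConfig(&cfg, ""); err != nil {
		fmt.Println("invalid config:", err)
		return
	}
	fmt.Println("normalized key prefix:", cfg.KeyPrefix) // folder/subfolder/
}
```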