add support for integrated database schema migrations

added the "initprovider" command to initialize the database structure.
If we change the database schema the required changes will be checked
at startup and automatically applyed.
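
A minimal sketch of the new workflow, assuming the configuration file is in the current directory:

```
sftpgo initprovider
sftpgo serve
```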
This commit is contained in:
Nicola Murino 2020-02-08 14:44:25 +01:00
parent 553cceab42
commit d6fa853a37
16 changed files with 370 additions and 73 deletions

View file

@ -11,7 +11,7 @@ env:
- GO111MODULE=on
before_script:
- sqlite3 sftpgo.db 'CREATE TABLE "users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE, "password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" TEXT NULL, "filesystem" text NULL);'
- sftpgo initprovider
install:
- go get -v -t ./...

View file

@ -102,9 +102,10 @@ Usage:
sftpgo [command]
Available Commands:
help Help about any command
portable Serve a single directory
serve Start the SFTP Server
help Help about any command
initprovider Initializes the configured data provider
portable Serve a single directory
serve Start the SFTP Server
Flags:
-h, --help help for sftpgo
@ -113,7 +114,7 @@ Flags:
Use "sftpgo [command] --help" for more information about a command
```
The `serve` subcommand supports the following flags:
The `serve` command supports the following flags:
- `--config-dir` string. Location of the config dir. This directory should contain the `sftpgo` configuration file and is used as the base for files with a relative path (eg. the private keys for the SFTP server, the SQLite or bblot database if you use SQLite or bbolt as data provider). The default value is "." or the value of `SFTPGO_CONFIG_DIR` environment variable.
- `--config-file` string. Name of the configuration file. It must be the name of a file stored in config-dir, not the absolute path to the configuration file. The specified file name must have no extension, since we automatically load JSON, YAML, TOML, HCL and Java properties. The default value is "sftpgo" (and therefore `sftpgo.json`, `sftpgo.yaml` and so on are searched) or the value of `SFTPGO_CONFIG_FILE` environment variable.
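For illustration, the two flags (or the corresponding environment variables) can be combined like this; the `/etc/sftpgo` path is only an example:
```
sftpgo serve --config-dir /etc/sftpgo --config-file sftpgo
# equivalent using environment variables
SFTPGO_CONFIG_DIR=/etc/sftpgo SFTPGO_CONFIG_FILE=sftpgo sftpgo serve
```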
@ -282,8 +283,27 @@ Before starting `sftpgo serve` please ensure that the configured dataprovider is
SQL based data providers (SQLite, MySQL, PostgreSQL) require the creation of a database containing the required tables. The memory and bolt data providers do not require an initialization.
SQL scripts to create the required database structure can be found inside the [sql](./sql "sql") directory of the source tree. The SQL script file names are, by convention, the date in `YYYYMMDD` format with the `.sql` suffix. You need to apply all the SQL scripts for your database ordered by name; for example `20190828.sql` must be applied before `20191112.sql`, and so on.
After configuring the data provider in the configuration file, you can create the required database structure using the `initprovider` command.
For the SQLite provider, the database file will be auto-created if missing.
For the PostgreSQL and MySQL providers you need to create the configured database; the `initprovider` command will create the required tables.
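The empty database itself can be created, for instance, like this (database name, user and options are placeholders, adapt them to your setup):
```
# PostgreSQL
psql -U postgres -c 'CREATE DATABASE "sftpgo"'
# MySQL / MariaDB
mysql -u root -p -e 'CREATE DATABASE sftpgo CHARACTER SET utf8mb4'
```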
For example you can simply execute the following command from the configuration directory:
```
sftpgo initprovider
```
Take a look at the CLI usage to learn how to specify a different configuration file:
```
sftpgo initprovider --help
```
The `initprovider` command is enough for new installations. From now on, the database structure will be automatically checked and updated, if required, at startup.
If you are upgrading from version 0.9.5 or earlier, you have to manually execute the SQL scripts to create the required database structure. These scripts can be found inside the [sql](./sql "sql") directory of the source tree. The SQL script file names are, by convention, the date in `YYYYMMDD` format with the `.sql` suffix. You need to apply all the SQL scripts for your database ordered by name; for example `20190828.sql` must be applied before `20191112.sql`, and so on.
Example for SQLite: `find sql/sqlite/ -type f -iname '*.sql' -print | sort -n | xargs cat | sqlite3 sftpgo.db`.
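The equivalent, hedged sketches for the other SQL providers (connection parameters are placeholders):
```
# PostgreSQL
find sql/pgsql/ -type f -iname '*.sql' -print | sort -n | xargs cat | psql -U sftpgo -d sftpgo
# MySQL / MariaDB
find sql/mysql/ -type f -iname '*.sql' -print | sort -n | xargs cat | mysql -u sftpgo -p sftpgo
```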
After applying these scripts your database structure is the same as the one obtained using `initprovider` for new installations, so from now on you don't have to manually upgrade your database anymore.
The `memory` provider can load users from a dump obtained using the `dumpdata` REST API. The path to this dump file can be configured using the dataprovider `name` configuration key. It will be loaded at startup and can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The `memory` provider will not modify the provided file so quota usage and last login will not be persisted.
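A possible workflow, assuming the HTTP API listens on the default `127.0.0.1:8080` and that `dumpdata` accepts an `output_file` parameter as in the REST API documentation:
```
# dump the current users to a file
curl "http://127.0.0.1:8080/api/v1/dumpdata?output_file=users_dump.json"
# set the dataprovider "name" key to the dump file path, then reload it on demand
kill -HUP $(pidof sftpgo)
```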
@ -318,7 +338,7 @@ Flags:
Use "sftpgo service [command] --help" for more information about a command.
```
`install` subcommand accepts the same flags valid for `serve`.
`install` command accepts the same flags valid for `serve`.
After installing as a Windows Service, please remember to allow network access to the SFTPGo executable using something like this:
@ -510,6 +530,8 @@ SFTPGo uses multipart uploads and parallel downloads for storing and retrieving
The configured bucket must exist.
To connect SFTPGo to AWS, a `region` is required; here is the list of available [AWS regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions). For example, if your bucket is in Frankfurt you have to set the region to `eu-central-1`. You can specify an AWS [storage class](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) too; leave it blank to use the default AWS storage class. An endpoint is required if you are connecting to an S3 compatible storage such as [MinIO](https://min.io/).
Some SFTP commands don't work over S3:
- `symlink` and `chtimes` will fail
@ -528,6 +550,10 @@ Other notes:
Each user can be mapped to a Google Cloud Storage bucket or to a bucket virtual folder; this way the mapped bucket/virtual folder is exposed over SFTP/SCP. This backend is very similar to the S3 backend and has the same limitations.
To connect SFTPGo to Google Cloud Storage you need a credentials file that you can obtain from the Google Cloud Console, take a look at the "Setting up authentication" section [here](https://cloud.google.com/storage/docs/reference/libraries) for details.
You can optionally specify a [storage class](https://cloud.google.com/storage/docs/storage-classes) too, leave blank to use the default storage class.
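As a hedged sketch (service account and project names are placeholders), the credentials file can be created with the `gcloud` CLI and base64 encoded for the `gcs_credentials` user property described below:
```
# create a service account key file
gcloud iam service-accounts keys create sftpgo-gcs.json \
  --iam-account=sftpgo@my-project.iam.gserviceaccount.com
# base64 encode it, e.g. for use via the REST API
base64 -w0 sftpgo-gcs.json
```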
## Other Storage backends
Adding new storage backends is quite easy:
@ -625,11 +651,11 @@ For each account the following properties can be configured:
- `denied_ip`, List of IP/Mask not allowed to login. If an IP address is both allowed and denied then login will be denied
- `fs_provider`, filesystem to serve via SFTP. Local filesystem and S3 Compatible Object Storage are supported
- `s3_bucket`, required for S3 filesystem
- `s3_region`, required for S3 filesystem
- `s3_region`, required for S3 filesystem. It must match the region of your bucket; here is the list of available [AWS regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions). For example, if your bucket is in Frankfurt you have to set the region to `eu-central-1`. An illustrative fragment with these S3 properties follows this list
- `s3_access_key`, required for S3 filesystem
- `s3_access_secret`, required for S3 filesystem. It is stored encrypted (AES-256-GCM)
- `s3_endpoint`, specifies s3 endpoint (server) different from AWS
- `s3_storage_class`
- `s3_endpoint`, specifies an S3 endpoint (server) different from AWS. It is not required if you are connecting to AWS
- `s3_storage_class`, leave blank to use the default or specify a valid AWS [storage class](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
- `s3_key_prefix`, restricts access to the virtual folder identified by this prefix and its contents
- `gcs_bucket`, required for GCS filesystem
- `gcs_credentials`, Google Cloud Storage JSON credentials base64 encoded
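An illustrative fragment with the S3 related properties above (values are placeholders and `1` is assumed to select the S3 backend; the REST API may nest these fields differently, check the REST API documentation for the authoritative layout):
```
{
  "fs_provider": 1,
  "s3_bucket": "my-bucket",
  "s3_region": "eu-central-1",
  "s3_access_key": "my-access-key",
  "s3_access_secret": "my-plain-text-secret",
  "s3_endpoint": "",
  "s3_storage_class": "",
  "s3_key_prefix": "folder/subfolder/"
}
```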

48
cmd/initprovider.go Normal file
View file

@ -0,0 +1,48 @@
package cmd
import (
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var (
initProviderCmd = &cobra.Command{
Use: "initprovider",
Short: "Initializes the configured data provider",
Long: `This command reads the data provider connection details from the specified configuration file and creates the initial structure.
Some data providers such as bolt and memory do not require an initialization.
For SQLite provider the database file will be auto created if missing.
For PostgreSQL and MySQL providers you need to create the configured database, this command will create the required tables.
To initialize the data provider from the configuration directory simply use:
sftpgo initprovider
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
config.LoadConfig(configDir, configFile)
providerConf := config.GetProviderConf()
logger.DebugToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err := dataprovider.InitializeDatabase(providerConf, configDir)
if err == nil {
logger.DebugToConsole("Data provider successfully initialized")
} else {
logger.WarnToConsole("Unable to initialize data provider: %v", err)
}
},
}
)
func init() {
rootCmd.AddCommand(initProviderCmd)
addConfigFlags(initProviderCmd)
}

View file

@ -73,7 +73,7 @@ func Execute() {
}
}
func addServeFlags(cmd *cobra.Command) {
func addConfigFlags(cmd *cobra.Command) {
viper.SetDefault(configDirKey, defaultConfigDir)
viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR")
cmd.Flags().StringVarP(&configDir, configDirFlag, "c", viper.GetString(configDirKey),
@ -90,6 +90,10 @@ func addServeFlags(cmd *cobra.Command) {
"Java properties. Therefore if you set \"sftpgo\" then \"sftpgo.json\", \"sftpgo.yaml\" and so on are searched. "+
"This flag can be set using SFTPGO_CONFIG_FILE env var too.")
viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag))
}
func addServeFlags(cmd *cobra.Command) {
addConfigFlags(cmd)
viper.SetDefault(logFilePathKey, defaultLogFile)
viper.BindEnv(logFilePathKey, "SFTPGO_LOG_FILE_PATH")

View file

@ -14,7 +14,7 @@ import (
)
const (
databaseVersion = 3
boltDatabaseVersion = 3
)
var (
@ -29,10 +29,6 @@ type BoltProvider struct {
dbHandle *bolt.DB
}
type boltDatabaseVersion struct {
Version int
}
type compatUserV2 struct {
ID int64 `json:"id"`
Username string `json:"username"`
@ -93,7 +89,6 @@ func initializeBoltProvider(basePath string) error {
return err
}
provider = BoltProvider{dbHandle: dbHandle}
err = checkBoltDatabaseVersion(dbHandle)
} else {
providerLog(logger.LevelWarn, "error creating bolt key/value store handler: %v", err)
}
@ -396,6 +391,33 @@ func (p BoltProvider) reloadConfig() error {
return nil
}
// initializeDatabase does nothing, no initialization is needed for the bolt provider
func (p BoltProvider) initializeDatabase() error {
return errNoInitRequired
}
func (p BoltProvider) migrateDatabase() error {
dbVersion, err := getBoltDatabaseVersion(p.dbHandle)
if err != nil {
return err
}
if dbVersion.Version == boltDatabaseVersion {
providerLog(logger.LevelDebug, "bolt database is updated, current version: %v", dbVersion.Version)
return nil
}
if dbVersion.Version == 1 {
err = updateDatabaseFrom1To2(p.dbHandle)
if err != nil {
return err
}
return updateDatabaseFrom2To3(p.dbHandle)
} else if dbVersion.Version == 2 {
return updateDatabaseFrom2To3(p.dbHandle)
}
return nil
}
// itob returns an 8-byte big endian representation of v.
func itob(v int64) []byte {
b := make([]byte, 8)
@ -413,28 +435,6 @@ func getBuckets(tx *bolt.Tx) (*bolt.Bucket, *bolt.Bucket, error) {
return bucket, idxBucket, err
}
func checkBoltDatabaseVersion(dbHandle *bolt.DB) error {
dbVersion, err := getBoltDatabaseVersion(dbHandle)
if err != nil {
return err
}
if dbVersion.Version == databaseVersion {
providerLog(logger.LevelDebug, "bolt database updated, version: %v", dbVersion.Version)
return nil
}
if dbVersion.Version == 1 {
err = updateDatabaseFrom1To2(dbHandle)
if err != nil {
return err
}
return updateDatabaseFrom2To3(dbHandle)
} else if dbVersion.Version == 2 {
return updateDatabaseFrom2To3(dbHandle)
}
return nil
}
func updateDatabaseFrom1To2(dbHandle *bolt.DB) error {
providerLog(logger.LevelInfo, "updating bolt database version: 1 -> 2")
usernames, err := getBoltAvailableUsernames(dbHandle)
@ -527,8 +527,8 @@ func getBoltAvailableUsernames(dbHandle *bolt.DB) ([]string, error) {
return usernames, err
}
func getBoltDatabaseVersion(dbHandle *bolt.DB) (boltDatabaseVersion, error) {
var dbVersion boltDatabaseVersion
func getBoltDatabaseVersion(dbHandle *bolt.DB) (schemaVersion, error) {
var dbVersion schemaVersion
err := dbHandle.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(dbVersionBucket)
if bucket == nil {
@ -536,7 +536,7 @@ func getBoltDatabaseVersion(dbHandle *bolt.DB) (boltDatabaseVersion, error) {
}
v := bucket.Get(dbVersionKey)
if v == nil {
dbVersion = boltDatabaseVersion{
dbVersion = schemaVersion{
Version: 1,
}
return nil
@ -552,7 +552,7 @@ func updateBoltDatabaseVersion(dbHandle *bolt.DB, version int) error {
if bucket == nil {
return fmt.Errorf("unable to find database version bucket")
}
newDbVersion := boltDatabaseVersion{
newDbVersion := schemaVersion{
Version: version,
}
buf, err := json.Marshal(newDbVersion)

View file

@ -87,9 +87,14 @@ var (
availabilityTicker *time.Ticker
availabilityTickerDone chan bool
errWrongPassword = errors.New("password does not match")
errNoInitRequired = errors.New("initialization is not required for this data provider")
credentialsDirPath string
)
type schemaVersion struct {
Version int
}
// Actions to execute on user create, update, delete.
// An external command can be executed and/or an HTTP notification can be fired
type Actions struct {
@ -258,6 +263,8 @@ type Provider interface {
checkAvailability() error
close() error
reloadConfig() error
initializeDatabase() error
migrateDatabase() error
}
func init() {
@ -277,31 +284,39 @@ func Initialize(cnf Config, basePath string) error {
}
_, err := os.Stat(config.ExternalAuthProgram)
if err != nil {
providerLog(logger.LevelWarn, "invalid external auth program:: %v", err)
providerLog(logger.LevelWarn, "invalid external auth program: %v", err)
return err
}
}
if err := validateCredentialsDir(basePath); err != nil {
if err = validateCredentialsDir(basePath); err != nil {
return err
}
err = createProvider(basePath)
if err != nil {
return err
}
err = provider.migrateDatabase()
if err != nil {
providerLog(logger.LevelWarn, "database migration error: %v", err)
return err
}
startAvailabilityTimer()
return nil
}
if config.Driver == SQLiteDataProviderName {
err = initializeSQLiteProvider(basePath)
} else if config.Driver == PGSQLDataProviderName {
err = initializePGSQLProvider()
} else if config.Driver == MySQLDataProviderName {
err = initializeMySQLProvider()
} else if config.Driver == BoltDataProviderName {
err = initializeBoltProvider(basePath)
} else if config.Driver == MemoryDataProviderName {
err = initializeMemoryProvider(basePath)
} else {
err = fmt.Errorf("unsupported data provider: %v", config.Driver)
// InitializeDatabase creates the initial database structure
func InitializeDatabase(cnf Config, basePath string) error {
config = cnf
sqlPlaceholders = getSQLPlaceholders()
if config.Driver == BoltDataProviderName || config.Driver == MemoryDataProviderName {
return errNoInitRequired
}
if err == nil {
startAvailabilityTimer()
err := createProvider(basePath)
if err != nil {
return err
}
return err
return provider.initializeDatabase()
}
// CheckUserAndPass retrieves the SFTP user with the given username and password if a match is found or an error
@ -455,6 +470,24 @@ func Close(p Provider) error {
return p.close()
}
func createProvider(basePath string) error {
var err error
if config.Driver == SQLiteDataProviderName {
err = initializeSQLiteProvider(basePath)
} else if config.Driver == PGSQLDataProviderName {
err = initializePGSQLProvider()
} else if config.Driver == MySQLDataProviderName {
err = initializeMySQLProvider()
} else if config.Driver == BoltDataProviderName {
err = initializeBoltProvider(basePath)
} else if config.Driver == MemoryDataProviderName {
err = initializeMemoryProvider(basePath)
} else {
err = fmt.Errorf("unsupported data provider: %v", config.Driver)
}
return err
}
func buildUserHomeDir(user *User) {
if len(user.HomeDir) == 0 {
if len(config.UsersBaseDir) > 0 {

View file

@ -390,3 +390,12 @@ func (p MemoryProvider) reloadConfig() error {
providerLog(logger.LevelDebug, "users loaded from file: %#v", p.dbHandle.configFile)
return nil
}
// initializeDatabase does nothing, no initialization is needed for the memory provider
func (p MemoryProvider) initializeDatabase() error {
return errNoInitRequired
}
func (p MemoryProvider) migrateDatabase() error {
return nil
}

View file

@ -3,11 +3,24 @@ package dataprovider
import (
"database/sql"
"fmt"
"strings"
"time"
"github.com/drakkan/sftpgo/logger"
)
const (
mysqlUsersTableSQL = "CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`username` varchar(255) NOT NULL UNIQUE, `password` varchar(255) NULL, `public_keys` longtext NULL, " +
"`home_dir` varchar(255) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, `max_sessions` integer NOT NULL, " +
" `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, " +
"`upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, `expiration_date` bigint(20) NOT NULL, " +
"`last_login` bigint(20) NOT NULL, `status` int(11) NOT NULL, `filters` longtext DEFAULT NULL, " +
"`filesystem` longtext DEFAULT NULL);"
mysqlSchemaTableSQL = "CREATE TABLE `schema_version` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);"
)
// MySQLProvider auth provider for MySQL/MariaDB database
type MySQLProvider struct {
dbHandle *sql.DB
@ -103,3 +116,32 @@ func (p MySQLProvider) close() error {
func (p MySQLProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p MySQLProvider) initializeDatabase() error {
sqlUsers := strings.Replace(mysqlUsersTableSQL, "{{users}}", config.UsersTable, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
tx.Rollback()
return err
}
_, err = tx.Exec(mysqlSchemaTableSQL)
if err != nil {
tx.Rollback()
return err
}
_, err = tx.Exec(initialDBVersionSQL)
if err != nil {
tx.Rollback()
return err
}
return tx.Commit()
}
func (p MySQLProvider) migrateDatabase() error {
return sqlCommonMigrateDatabase(p.dbHandle)
}

View file

@ -3,10 +3,22 @@ package dataprovider
import (
"database/sql"
"fmt"
"strings"
"github.com/drakkan/sftpgo/logger"
)
const (
pgsqlUsersTableSQL = `CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
pgsqlSchemaTableSQL = `CREATE TABLE "schema_version" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);`
)
// PGSQLProvider auth provider for PostgreSQL database
type PGSQLProvider struct {
dbHandle *sql.DB
@ -102,3 +114,32 @@ func (p PGSQLProvider) close() error {
func (p PGSQLProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p PGSQLProvider) initializeDatabase() error {
sqlUsers := strings.Replace(pgsqlUsersTableSQL, "{{users}}", config.UsersTable, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
tx.Rollback()
return err
}
_, err = tx.Exec(pgsqlSchemaTableSQL)
if err != nil {
tx.Rollback()
return err
}
_, err = tx.Exec(initialDBVersionSQL)
if err != nil {
tx.Rollback()
return err
}
return tx.Commit()
}
func (p PGSQLProvider) migrateDatabase() error {
return sqlCommonMigrateDatabase(p.dbHandle)
}

View file

@ -11,6 +11,11 @@ import (
"github.com/drakkan/sftpgo/utils"
)
const (
sqlDatabaseVersion = 1
initialDBVersionSQL = "INSERT INTO schema_version (version) VALUES (1);"
)
func getUserByUsername(username string, dbHandle *sql.DB) (User, error) {
var user User
q := getUserByUsernameQuery()
@ -350,3 +355,42 @@ func getUserFromDbRow(row *sql.Row, rows *sql.Rows) (User, error) {
}
return user, err
}
func sqlCommonMigrateDatabase(dbHandle *sql.DB) error {
dbVersion, err := sqlCommonGetDatabaseVersion(dbHandle)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
return nil
}
return nil
}
func sqlCommonGetDatabaseVersion(dbHandle *sql.DB) (schemaVersion, error) {
var result schemaVersion
q := getDatabaseVersionQuery()
stmt, err := dbHandle.Prepare(q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return result, err
}
defer stmt.Close()
row := stmt.QueryRow()
err = row.Scan(&result.Version)
return result, err
}
func sqlCommonUpdateDatabaseVersion(dbHandle *sql.DB) error {
q := getUpdateDBVersionQuery()
stmt, err := dbHandle.Prepare(q)
if err != nil {
providerLog(logger.LevelWarn, "error preparing database query %#v: %v", q, err)
return err
}
defer stmt.Close()
_, err = stmt.Exec(sqlDatabaseVersion)
return err
}

View file

@ -2,14 +2,24 @@ package dataprovider
import (
"database/sql"
"errors"
"fmt"
"os"
"path/filepath"
"strings"
"github.com/drakkan/sftpgo/logger"
)
const (
sqliteUsersTableSQL = `CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255)
NOT NULL UNIQUE, "password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
sqliteSchemaTableSQL = `CREATE TABLE "schema_version" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);`
)
// SQLiteProvider auth provider for SQLite database
type SQLiteProvider struct {
dbHandle *sql.DB
@ -24,16 +34,6 @@ func initializeSQLiteProvider(basePath string) error {
if !filepath.IsAbs(dbPath) {
dbPath = filepath.Join(basePath, dbPath)
}
fi, err := os.Stat(dbPath)
if err != nil {
providerLog(logger.LevelWarn, "sqlite database file does not exists, please be sure to create and initialize"+
" a database before starting sftpgo")
return err
}
if fi.Size() == 0 {
return errors.New("sqlite database file is invalid, please be sure to create and initialize" +
" a database before starting sftpgo")
}
connectionString = fmt.Sprintf("file:%v?cache=shared", dbPath)
} else {
connectionString = config.ConnectionString
@ -109,3 +109,15 @@ func (p SQLiteProvider) close() error {
func (p SQLiteProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p SQLiteProvider) initializeDatabase() error {
sqlUsers := strings.Replace(sqliteUsersTableSQL, "{{users}}", config.UsersTable, 1)
sql := sqlUsers + " " + sqliteSchemaTableSQL + " " + initialDBVersionSQL
_, err := p.dbHandle.Exec(sql)
return err
}
func (p SQLiteProvider) migrateDatabase() error {
return sqlCommonMigrateDatabase(p.dbHandle)
}

View file

@ -79,3 +79,11 @@ func getUpdateUserQuery() string {
func getDeleteUserQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, config.UsersTable, sqlPlaceholders[0])
}
func getDatabaseVersionQuery() string {
return "SELECT version from schema_version LIMIT 1"
}
func getUpdateDBVersionQuery() string {
return fmt.Sprintf(`UPDATE schema_version SET version=%v`, sqlPlaceholders[0])
}

10
sql/mysql/20200208.sql Normal file
View file

@ -0,0 +1,10 @@
BEGIN;
--
-- Create model SchemaVersion
--
CREATE TABLE `schema_version` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);
---
--- Add initial version
---
INSERT INTO schema_version (version) VALUES (1);
COMMIT;

10
sql/pgsql/20200208.sql Normal file
View file

@ -0,0 +1,10 @@
BEGIN;
--
-- Create model SchemaVersion
--
CREATE TABLE "schema_version" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);
---
--- Add initial version
---
INSERT INTO schema_version (version) VALUES (1);
COMMIT;

10
sql/sqlite/20200208.sql Normal file
View file

@ -0,0 +1,10 @@
BEGIN;
--
-- Create model SchemaVersion
--
CREATE TABLE "schema_version" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);
---
--- Add initial version
---
INSERT INTO schema_version (version) VALUES (1);
COMMIT;

View file

@ -264,7 +264,7 @@
</div>
<div class="form-group row gcs">
<label for="idGCSCredentialFile" class="col-sm-2 col-form-label">GCS Credential file</label>
<label for="idGCSCredentialFile" class="col-sm-2 col-form-label">GCS Credentials file</label>
<div class="col-sm-4">
<input type="file" class="form-control-file" id="idGCSCredentialFile" name="gcs_credential_file"
aria-describedby="GCSCredentialsHelpBlock">