S3 Compatible Object Storage backends
To connect SFTPGo to AWS, you need to specify credentials, a bucket and a region. The list of available AWS regions can be found in the AWS documentation. For example, if your bucket is located in Frankfurt, you have to set the region to `eu-central-1`. You can specify an AWS storage class too; leave it blank to use the default AWS storage class. An endpoint is required if you are connecting to an S3 compatible storage such as MinIO.
The AWS SDK has different options for credentials (see the AWS SDK documentation for more detail). We support:

1. Providing Access Keys.
2. Using IAM roles for Amazon EC2.
3. Using IAM roles for tasks, if your application uses an ECS task definition.

So, you need to provide access keys to activate option 1, or leave them blank to use the other ways to specify credentials.
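For illustration, here is a minimal sketch of the S3 settings inside a user definition. The field names (`provider`, `s3config`, `access_key` and so on) follow SFTPGo's JSON user format but are shown here as assumptions and may differ between versions:

```json
{
  "filesystem": {
    "provider": 1,
    "s3config": {
      "bucket": "my-bucket",
      "region": "eu-central-1",
      "access_key": "AKIAIOSFODNN7EXAMPLE",
      "access_secret": "my-secret",
      "endpoint": "",
      "storage_class": ""
    }
  }
}
```

Leaving `access_key` and `access_secret` empty selects one of the IAM role options above; for an S3 compatible backend such as MinIO, set `endpoint` to the server URL, for example `http://127.0.0.1:9000`.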
Specifying a different `key_prefix`, you can assign different virtual folders of the same bucket to different users. This is similar to a chroot directory for a local filesystem. Each SFTP/SCP user can only access the assigned virtual folder and its contents. The virtual folder identified by `key_prefix` does not need to be pre-created.
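As a hypothetical example, two users could share the same bucket while staying isolated from each other by assigning each one a different `key_prefix` (field names are illustrative, as above):

```json
{
  "username": "user1",
  "filesystem": {
    "provider": 1,
    "s3config": {
      "bucket": "shared-bucket",
      "region": "eu-central-1",
      "key_prefix": "user1/"
    }
  }
}
```

A second user defined with `"key_prefix": "user2/"` would only see objects stored under `user2/`, even though both users point to the same bucket.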
SFTPGo uses multipart uploads and parallel downloads for storing and retrieving files from S3.
The configured bucket must exist.
Some SFTP commands don't work over S3:
- `symlink` and `chtimes` will fail
- `chown` and `chmod` are silently ignored
- upload resume is not supported
- upload mode `atomic` is ignored, since S3 uploads are already atomic
Other notes:
- `rename` is a two step operation: a server-side copy followed by a deletion, so it is not atomic as it is for a local filesystem.
- Renaming non-empty directories is not supported, since every object in the directory would have to be renamed too and this could take a long time: for a directory with thousands of files, each file would require its own AWS API call.
- For server side encryption, you have to configure the mapped bucket to automatically encrypt objects.
- A local home directory is still required to store temporary files.