CLI for Linux/macOS supports Amazon S3 | Google Cloud Storage | Linode (Akamai) Object Storage | Oracle Cloud Object Storage
Special thanks to JetBrains! s3p was developed with help from JetBrains' Open Source program.
s3p is an S3 / Object Storage upload and backup tool. It uses YAML-based configs that tell it what to upload, where to upload it, and how to name and tag the objects. s3p makes backup redundancy easier by using separate profiles for buckets and providers. Currently, it supports AWS, Google Cloud, Linode (Akamai), and OCI (Oracle Cloud).
See the releases page...
s3p supports AWS S3, Google Cloud Storage, Oracle Cloud Object Storage (OCI), and Linode (Akamai) Object Storage.
- AWS: using_aws.md
- Google: using_gcloud.md (experimental)
- Linode: using_linode.md
- OCI: using_oci.md
See the example profiles:
- example_aws.yaml (AWS)
- example_gcloud.yaml (Google Cloud)
- example_linode.yaml (Linode)
- example_oci.yaml (OCI)
To start a session with an existing profile, just type in the following command:
~$ s3p use -f "my-custom-profile.yml"
s3p can create a base profile to help get you started. To create one, use the profile sample subcommand:
~$ s3p profile sample --filename "new-profile.yml"
s3p profiles are written in YAML. To set one up, you just need to fill out a few fields before you can get started.
Tell s3p which service you're using
PROVIDER | Acceptable Values | Required | Description |
---|---|---|---|
Use | aws, google, linode, oci | Y | name of service provider you will be using |
Provider:
  Use: aws
Each provider needs its own provider-specific fields to configure.
SEE: docs/general_config.md
Tell s3p where the bucket is and whether to create it
BUCKET | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
Create | boolean | false | F | Whether s3p should create the bucket if it is not found |
Name | any string | | Y | The name of the bucket |
Region | any string | | Y | The region that the bucket is or will be in, e.g. eu-north-1 |
Bucket:
  Create: true
  Name: "deep-freeze"
  Region: "eu-north-1"
s3p's configurable options
OPTIONS | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
MaxUploads | any integer | 1 | N | The number of simultaneous uploads, at least 1. |
FollowSymlinks | boolean | false | N | Whether to follow symlinks under dirs provided |
WalkDirs | boolean | true | N | Whether s3p will walk subdirectories of dirs provided |
OverwriteObjects | always, never | never | N | Whether to overwrite objects that already exist in the bucket |
Options:
  MaxUploads: 1
  FollowSymlinks: false
  WalkDirs: true
  OverwriteObjects: "never"
The suggested setting for MaxUploads is 1. Be careful when raising MaxUploads, as some services struggle with anything more than 1. The notable exception is AWS, which seems fine with 50-100 -- though whether it's faster to use a high MaxUploads value depends on your upload job.
Large files can be broken up into many parts which are then simultaneously uploaded. s3p uses default SDK values for part count, part size, and the large file threshold values, unless otherwise called out.
An example of this would be: if you specify a MaxUploads value of 5, and s3p tries to upload 5 large files that are each split into 20 parts, then there would be 100 simultaneous uploads happening. If you specify a MaxUploads value of 50 and there are 50 large files each split into 20 parts, then you could potentially have as many as 1,000 simultaneous uploads.
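The arithmetic above can be sketched in a few lines. This is only an illustrative upper bound; the real part count per file depends on the file size and the provider SDK's default part size and threshold.

```python
# Illustrative only: upper bound on simultaneous part uploads.
# Real part counts come from the provider SDK, not from s3p itself.
def max_concurrent_parts(max_uploads: int, parts_per_file: int) -> int:
    """Worst case: every in-flight file is large and fully split."""
    return max_uploads * parts_per_file

print(max_concurrent_parts(5, 20))   # 5 large files x 20 parts = 100
print(max_concurrent_parts(50, 20))  # 50 large files x 20 parts = 1000
```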
s3p's configurable options for object name and renaming
OBJECTS | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
NamingType | absolute, relative | relative | Y | The method s3p uses to name objects that it uploads |
NamePrefix | any string | (blank) | N | A string that will be prefixed to the object's "file" name |
PathPrefix | any string | (blank) | N | A string path that will be prefixed to the object's "path" name |
OmitRootDir | boolean | true | N | Whether to omit the relative root of a provided directory from the object's path name |
Objects:
  NamingType: absolute
  NamePrefix: backup-
  PathPrefix: /backups/april/2023
  OmitRootDir: true
NamingType
The default is relative.
- relative: The key will be prepended with the relative path of the file on the local filesystem (individual files specified in the profile will always end up at the root of the bucket, plus the PathPrefix and then the NamePrefix).
- absolute: The key will be prepended with the absolute path of the file on the local filesystem.
NamePrefix
This is blank by default. Any value you put here will be added before the filename when it's uploaded to S3. Using something like weekly- will add that string to any file you're uploading, like weekly-log.log or weekly-2021-01-01.log.
PathPrefix
This is blank by default. Any value put here will be added before the file path when it's uploaded to S3. If you use something like /backups/monthly, the file will be uploaded to /backups/monthly/your-file.txt.
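The naming rules above can be sketched in Python. Note that build_key is not part of s3p -- it is a hypothetical helper that assumes keys are assembled with POSIX-style joins, purely to illustrate how NamingType, NamePrefix, and PathPrefix combine.

```python
import posixpath

# Hypothetical sketch of how the documented Objects settings might combine
# into an object key. build_key is NOT an s3p function; it only illustrates
# the rules described above.
def build_key(file_path: str, naming_type: str = "relative",
              name_prefix: str = "", path_prefix: str = "",
              root: str = "/") -> str:
    directory, filename = posixpath.split(file_path)
    if naming_type == "absolute":
        base = directory  # keep the absolute local path
    else:
        base = posixpath.relpath(directory, root)  # path relative to root
    key = posixpath.join(path_prefix, base, name_prefix + filename)
    return key.lstrip("/")

# PathPrefix, then the relative dir, then NamePrefix + filename:
print(build_key("/backups/src/logs/app.log", "relative",
                "backup-", "/backups/april/2023", root="/backups/src"))
# -> backups/april/2023/logs/backup-app.log
```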
Tells s3p what you want to upload. You can specify directories or individual files; you must specify at least one of the two. When you specify a directory, s3p will traverse its subdirectories by default (see the WalkDirs option).
FILES | Required | Description |
---|---|---|
path | Y | the absolute path to the file that will be uploaded |
DIRS | Required | Description |
---|---|---|
path | Y | the absolute path to the directory that will be uploaded |
Files:
  - "/Users/forrest/docs/job-application-lawn-mower.pdf"
  - "/Users/forrest/docs/dr-pepper-recipe.txt"
  - "/Users/jenny/letters/from-forrest.docx"
Dirs:
  - "/Users/forrest/docs/stocks/apple"
  - "/Users/jenny/docs/song_lyrics"
Add tags to each uploaded object (if supported by the provider)
TAGS | Acceptable Values | Required | Description |
---|---|---|---|
Key | any value | N | A key:value tag pair; values will be converted to strings |
Tags:
  Author: "Forrest Gump"
  Year: 1994
Options related to object tagging (dependent on whether the provider supports object tagging)
TAGOPTIONS | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
OriginPath | boolean | false | N | Whether s3p will tag the object with the original absolute path of the file |
ChecksumSHA256 | boolean | false | N | Whether s3p will tag the object with the SHA-256 checksum of the file as uploaded |
Tagging:
  OriginPath: true
  ChecksumSHA256: false
Note on Checksum Tagging
Some providers have checksum validation on objects to verify that uploads are completed correctly. This checksum is
calculated separately from that process and is only for your reference.
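For reference, a SHA-256 checksum like the one the ChecksumSHA256 option stores is just a hex digest of the file contents. The snippet below is illustrative only and is not taken from s3p's source; it hashes a byte string, where s3p would read the file as uploaded.

```python
import hashlib

# Illustrative: compute a SHA-256 hex digest of some content, the kind of
# value a ChecksumSHA256 tag would hold. (Not s3p's actual implementation.)
def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

print(sha256_of(b"hello"))
# -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```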
Options for logging output
LOGGING | Acceptable Values | Default | Required | Description |
---|---|---|---|---|
Level | 1-5 | 2 | N | The severity level a log message must be to output to the console or file |
Console | boolean | true | N | Whether log messages will be output to stdout. |
File | boolean | false | N | Whether log output will be written to a file. Output is structured in JSON format. |
Logfile | path | "/var/log/s3p.log" | N | The path of the file that log output will be appended to. |
Logging:
  Level: 3
  Console: true
  File: true
  Logfile: "/var/log/backup.log"
Notes on Level
Level is set to 2 (WARN) by default. The setting is by severity, with 1 being least severe (INFO) and 5 being most severe (PANIC).
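Putting the sections above together, a complete profile might look like this (all keys are from the tables above; the values are illustrative):

```yaml
Provider:
  Use: aws
Bucket:
  Create: true
  Name: "deep-freeze"
  Region: "eu-north-1"
Options:
  MaxUploads: 1
  FollowSymlinks: false
  WalkDirs: true
  OverwriteObjects: "never"
Objects:
  NamingType: relative
  NamePrefix: "backup-"
  PathPrefix: "/backups/april/2023"
  OmitRootDir: true
Files:
  - "/Users/forrest/docs/dr-pepper-recipe.txt"
Dirs:
  - "/Users/jenny/docs/song_lyrics"
Tags:
  Author: "Forrest Gump"
Tagging:
  OriginPath: true
  ChecksumSHA256: false
Logging:
  Level: 3
  Console: true
  File: true
  Logfile: "/var/log/backup.log"
```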
Individual Files
If you're uploading individual files, just remember that the prefix will be added to the start of each filename and the files will be uploaded right to the root of the bucket (unless you specify a custom PathPrefix). Note that if you have multiple files with the same name (for example, five log.log files from different directories), they can overwrite each other as you upload.
Directories
When you upload directories with WalkDirs set to true, all subdirectories and their files will be uploaded to the bucket as well. Processing directories with a large number of files can take some time, as checksums are calculated and each directory entry is read.
If you run into problems, errors, or have feature suggestions, it would be great if you took the time to open a new issue on GitHub.