S3 bucket integration

This option lets you connect an Amazon S3 bucket to one or more of your Uploadcare projects. You can connect a custom bucket in the custom storage settings.

You may want to choose this option over Uploadcare Storage when you:

  • Implement custom file workflows that are deeply integrated into your system.
  • Follow company storage policies that point at specific buckets.

Storage workflow

  1. A file is uploaded to Uploadcare storage.
  2. Your app requests that Uploadcare save the file to your Custom Storage (manually via the REST API or automatically).
  3. Files can then be served directly from your storage or from a third-party CDN.

When handling images, you can apply image processing operations and request that Uploadcare save the processed versions to your Custom Storage.

Setting up integration

First, you need an Amazon Web Services account. Second, create a new bucket or select an existing one in S3 bucket management.

Then, open the project settings in your Dashboard, go to "Custom storage", and click "Connect S3 bucket" to connect your S3 storage to Uploadcare.

Technically, a storage is a named set of settings. It includes a name that identifies your custom storage in our APIs, an S3 bucket name, and a prefix. You can use prefixes to organize files in your bucket: a prefix is added to each file saved to the specified storage. Hence, with different prefixes, you can have multiple storage names associated with a single S3 bucket.
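For example (the storage and bucket names below are hypothetical), two storages can share one bucket and differ only by prefix:

my-storage-prod: bucket my-company-bucket, prefix prod
my-storage-dev:  bucket my-company-bucket, prefix dev

Files saved to my-storage-prod land under /prod/<file_uuid>/<filename>, while files saved to my-storage-dev land under /dev/<file_uuid>/<filename>.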

Setting up S3 bucket

  1. Enter your Amazon S3 console and go to Buckets.
  2. Open an existing bucket or create a new one. Use DNS-compliant lowercase names such as johnsbucket1.
  3. Go to the Permissions tab on your bucket’s properties pane and add the following settings to your Bucket Policy (replace [bucket-name] with the name of your bucket):
{
    "Version": "2008-10-17",
    "Statement": [{
        "Sid": "AllowUploadcareAccess",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::899746369614:user/bucket-consumer"
        },
        "Action": [
            "s3:ListBucket",
            "s3:GetBucketAcl",
            "s3:PutObject",
            "s3:GetBucketLocation",
            "s3:DeleteObject",
            "s3:GetObjectVersion",
            "s3:DeleteObjectVersion"
        ],
        "Resource": [
            "arn:aws:s3:::[bucket-name]",
            "arn:aws:s3:::[bucket-name]/*"
        ]
    }]
}
  4. Change Block public access settings (all checkboxes are checked by default). Uncheck the "Block public access to buckets and objects granted through new access control lists (ACLs)" checkbox.
  5. Enable ACL (disabled by default).
  6. Save your Bucket settings and click Connect.
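If you prefer to script the bucket setup instead of clicking through the console, the same steps can be applied from code. Below is a minimal sketch using Python and boto3; the bucket name is hypothetical, and the settings simply mirror the walkthrough above, so review them against your own security requirements:

import json

import boto3

s3 = boto3.client("s3")
bucket = "johnsbucket1"  # hypothetical bucket name

# Step 3: attach the bucket policy shown above.
policy = {
    "Version": "2008-10-17",
    "Statement": [{
        "Sid": "AllowUploadcareAccess",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::899746369614:user/bucket-consumer"},
        "Action": [
            "s3:ListBucket", "s3:GetBucketAcl", "s3:PutObject",
            "s3:GetBucketLocation", "s3:DeleteObject",
            "s3:GetObjectVersion", "s3:DeleteObjectVersion",
        ],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Step 4: uncheck only the "new ACLs" block; the other three stay checked.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,  # the checkbox to uncheck
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Step 5: enable ACLs by setting object ownership to ObjectWriter.
s3.put_bucket_ownership_controls(
    Bucket=bucket,
    OwnershipControls={"Rules": [{"ObjectOwnership": "ObjectWriter"}]},
)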

Uploadcare will run a set of tests to make sure it can connect and upload files to the bucket.

Once the bucket has been connected, you can remove these actions from your S3 Bucket Policy:

  • s3:DeleteObject
  • s3:GetObjectVersion
  • s3:DeleteObjectVersion

Copying files to S3 bucket

With a temporary link (the CDN URL or UUID of a file) provided by Uploadcare, you can copy a file to your storage by making a POST request to the REST API. In your request, source should hold the temporary link, and target should be the name of the storage your files are saved to.

Note that the maximum file size when copying files via the REST API is 5 GB.
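For illustration, here is a minimal sketch of such a request in Python with the requests library. It assumes the REST API v0.7 copy endpoint with simple authentication; the keys, file UUID, and storage name are placeholders to replace with your own values (check the REST API reference for the exact endpoint and auth scheme your account uses):

import requests

# Placeholders: substitute your project keys, file UUID, and storage name.
PUBLIC_KEY = "your_public_key"
SECRET_KEY = "your_secret_key"

response = requests.post(
    "https://api.uploadcare.com/files/remote_copy/",
    headers={
        "Accept": "application/vnd.uploadcare-v0.7+json",
        "Authorization": f"Uploadcare.Simple {PUBLIC_KEY}:{SECRET_KEY}",
    },
    data={
        "source": "1bac376c-aa7e-4356-861b-dd2584256004",  # CDN URL or UUID
        "target": "my-custom-storage",  # the name of your storage
    },
)
print(response.json())  # e.g. {"type": "url", "result": "s3://..."}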

Files are copied to the following path in your bucket:

/<prefix>/<file_uuid>/<filename>

Where <prefix> is taken from your storage settings, <file_uuid> holds a temporary UUID, and <filename> contains the original filename, i.e., everything after the last slash in the original URL; more on filenames.

When handling image files, you can provide a temporary Uploadcare URL with processing operations included. In this case, the maximum file size should not exceed 100 MB.

The name of a processed file is constructed from its original filename and the set of applied operations. For instance, the following source:

http://www.ucarecdn.com/<file_uuid>/-/preview/200x200/roof.jpg

will result in the filename:

/<prefix>/<file_uuid>/roofpreview_200x200.jpg

Hence, all versions of your processed file are stored in the same virtual folder.

Our copy method returns a link to your file with the ad-hoc scheme s3 in the result member of the response object. For example:

s3://my-bucket/<prefix>/<file_uuid>/roofpreview_200x200.jpg

In this scheme, the host is the S3 bucket name, and the path is the path to your file within the bucket. If you want to serve files directly from S3, you can get a proper HTTP link to your file by replacing s3:// with http://s3.amazonaws.com/.
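Getting the HTTP link is a single string replacement; a quick Python illustration with a hypothetical bucket and file:

# Rewrite the ad-hoc s3 scheme into a direct S3 HTTP link.
s3_url = "s3://my-bucket/media/1bac376c/roofpreview_200x200.jpg"
http_url = s3_url.replace("s3://", "http://s3.amazonaws.com/", 1)
print(http_url)  # http://s3.amazonaws.com/my-bucket/media/1bac376c/roofpreview_200x200.jpg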

The example above deals with the default naming pattern. You can also customize how filenames are constructed. Learn more about naming patterns.

Automatic file copying

You can enable automatic file copying by checking the corresponding checkbox in the custom storage connection settings in your Dashboard.

This will ensure every file uploaded to that project also goes to your custom storage.

Auto-copy always uses the parameters from your custom storage settings. When you store files manually, you can override them: pattern, make_public, etc.
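For instance, a manual copy request could pass the overrides as extra form fields. This builds on the sketch shown earlier; the pattern value here is an assumed example of the naming-pattern syntax, so check the naming patterns docs for the variables your account supports:

data = {
    "source": "1bac376c-aa7e-4356-861b-dd2584256004",
    "target": "my-custom-storage",
    "make_public": "true",  # override the storage default ACL
    "pattern": "${uuid}/${filename}${ext}",  # assumed pattern variables
}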

Note that the same 5 GB maximum file size applies to automatic copying as well.

Uploadcare storage backup

You can have all your stored files copied to a custom S3 bucket automatically. Connect the storage once, and the system will run backups on a regular basis.

Get your bucket's name from the Amazon S3 console -> Buckets. Open an existing bucket or create a new one (use DNS-compliant lowercase names). Put this bucket's name into the Backup connect bucket form on the Uploading configure page.

Note that the S3 bucket must be configured as described in Setting up S3 bucket.
