Copy files to S3 bucket

Uploadcare allows you to connect an AWS S3 bucket to one or more of your Uploadcare projects to copy uploaded files directly to your own storage.

If you need to upload files directly to an S3 bucket, use AWS S3 storage.

How it works

  1. A file is uploaded to the Uploadcare storage.
  2. Your app requests Uploadcare to copy files to your AWS S3 bucket programmatically via the REST API, or Uploadcare copies them automatically.
  3. Files are stored in your AWS storage.
  4. Files can be served directly from your storage or a third-party CDN if needed.

The same bucket can be connected to several different projects.

Note: File storage on the Uploadcare side can be temporary. If you don't store files, our system deletes them after 24 hours. Check out file storing behavior for more details.

When handling images with image processing operations, you can request Uploadcare to copy processed versions to your AWS S3 bucket.

Setting up S3 bucket

You need an Amazon Web Services account where you create a new bucket or select an existing one in S3 bucket management.

Open the storage settings in your Dashboard, go to "Copy uploads to S3 bucket", and click "Connect bucket" to connect your AWS S3 bucket to an Uploadcare project.

  1. Enter the storage name, which will identify your custom storage in the REST API.
  2. Get your bucket's name. Enter your AWS S3 console and go to Buckets.
  3. Open an existing bucket or create a new one. Use DNS-compliant lowercase bucket names such as johnsbucket1.
  4. Enter your bucket's name.
  5. Set up access control according to your AWS bucket ACL settings. You can allow anyone to read objects from your S3 bucket by enabling the "Make copies public" rule.
  6. Go to the Permissions tab on your bucket's properties pane and add the following settings to your Bucket Policy, replacing {YOUR-BUCKET-NAME} with the name of your AWS S3 bucket (if you prefer to manage bucket configuration in code, see the boto3 sketch after this list):
    {
      "Version": "2008-10-17",
      "Statement": [
        {
          "Sid": "AllowUploadcareAccess",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::899746369614:user/bucket-consumer"
          },
          "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket",
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion",
            "s3:GetBucketAcl"
          ],
          "Resource": [
            "arn:aws:s3:::{YOUR-BUCKET-NAME}",
            "arn:aws:s3:::{YOUR-BUCKET-NAME}/*"
          ]
        }
      ]
    }
  7. Change Block public access settings if you need to give public access to copied files (all checkboxes are checked by default).
  8. Save your Bucket settings and Connect.
    Uploadcare will run tests to ensure it can connect and upload the files to the bucket. Once the bucket has been connected, you can remove these actions from your S3 Bucket Policy:
    s3:GetObject
    s3:GetObjectVersion
    s3:DeleteObject
    s3:DeleteObjectVersion
  9. By default, all new uploads from the Uploadcare storage will be transferred to your S3 bucket automatically. If this option is disabled, you can copy files manually using POST API requests.
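
If you manage bucket configuration in code rather than through the AWS console, the Bucket Policy from step 6 can also be applied with boto3. The sketch below is optional and illustrative: it assumes boto3 is installed and AWS credentials are configured, and the bucket name is a placeholder.

    import json

    import boto3

    BUCKET = "your-bucket-name"  # placeholder: your S3 bucket name

    # The same policy as in step 6, expressed as a Python dict.
    policy = {
        "Version": "2008-10-17",
        "Statement": [
            {
                "Sid": "AllowUploadcareAccess",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::899746369614:user/bucket-consumer"},
                "Action": [
                    "s3:GetBucketLocation",
                    "s3:ListBucket",
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:GetObjectVersion",
                    "s3:DeleteObject",
                    "s3:DeleteObjectVersion",
                    "s3:GetBucketAcl",
                ],
                "Resource": [
                    f"arn:aws:s3:::{BUCKET}",
                    f"arn:aws:s3:::{BUCKET}/*",
                ],
            }
        ],
    }

    # Attach the policy to the bucket.
    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))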

Key pattern (optional)

The Key pattern is used to build an object key name for the copied file.

Key pattern         Value
${filename}         File base name
${uuid}             File UUID
${ext}              File extension with the leading dot
${auto_filename}    ${filename}${ext}
${default}          ${uuid}/${auto_filename}

By default, it's ${default}, which is the same as ${uuid}/${filename}${ext}.
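
To make this concrete, here is an illustrative Python sketch of how the placeholders from the table combine into an object key. It is not Uploadcare code, and the filename and UUID below are made-up values; the expansion itself happens on Uploadcare's side when a file is copied.

    # Hypothetical example values for one copied file.
    filename, ext, uuid = "roof", ".jpg", "a1b2c3d4-0000-0000-0000-000000000000"

    # Placeholder values as described in the table above.
    placeholders = {
        "${filename}": filename,
        "${uuid}": uuid,
        "${ext}": ext,
    }
    placeholders["${auto_filename}"] = filename + ext
    placeholders["${default}"] = uuid + "/" + placeholders["${auto_filename}"]

    def expand(pattern: str) -> str:
        """Replace every known ${...} placeholder in a key pattern."""
        for key, value in placeholders.items():
            pattern = pattern.replace(key, value)
        return pattern

    print(expand("${auto_filename}"))
    # roof.jpg
    print(expand("${default}"))
    # a1b2c3d4-0000-0000-0000-000000000000/roof.jpg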

File name specifics

When a file is copied to an external S3 bucket, a sanitized version of the original filename is used for the filename part of the object key name in S3.

When sanitizing a filename, our system removes all characters except [a-zA-Z0-9_]. For example:

  • image-sample.jpeg → imagesample.jpeg
  • image(1).jpeg → image1.jpeg
  • ().png → noroot.png (if no valid characters remain in the output name, the whole name is changed to noroot).
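
The sanitization rule can be illustrated with a short Python sketch. This is an approximation of the documented behavior, not the exact implementation Uploadcare uses:

    import re

    def sanitize_basename(name: str) -> str:
        """Drop every character outside [a-zA-Z0-9_]; fall back to 'noroot'."""
        cleaned = re.sub(r"[^a-zA-Z0-9_]", "", name)
        return cleaned or "noroot"

    print(sanitize_basename("image-sample"))  # imagesample
    print(sanitize_basename("image(1)"))      # image1
    print(sanitize_basename("()"))            # noroot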

Auto-copy

All new uploads are automatically copied from the Uploadcare storage to your AWS S3 bucket. This feature is enabled by default.

Copy via REST API

Copying via the REST API is used to copy an existing file from the Uploadcare storage to your AWS S3 bucket.

In your request, source must contain the CDN URL or the UUID of the file you want to copy, and target must include the name of the custom storage where your files should be saved.

The response JSON will contain "type": "url" and the S3 object URL.

When using the remote_copy method, you can specify the key pattern for each copy individually via the REST API.

If the make_public parameter is set to true, the copied files will be available via public links.

The maximum file size limit that applies when copying files via REST API is 5 GB.
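
As an illustration, a copy request could look like the following Python sketch. It assumes the /files/remote_copy/ REST API endpoint and simple authentication; the keys, UUID, and storage name are placeholders, and the exact parameter encoding should be checked against the REST API reference for your API version.

    import requests

    UPLOADCARE_PUBLIC_KEY = "your_public_key"  # placeholder
    UPLOADCARE_SECRET_KEY = "your_secret_key"  # placeholder

    response = requests.post(
        "https://api.uploadcare.com/files/remote_copy/",
        headers={
            "Accept": "application/vnd.uploadcare-v0.7+json",
            "Authorization": (
                f"Uploadcare.Simple {UPLOADCARE_PUBLIC_KEY}:{UPLOADCARE_SECRET_KEY}"
            ),
        },
        json={
            "source": "file-uuid-or-cdn-url",  # placeholder: UUID or CDN URL of the file
            "target": "my-s3-storage",         # placeholder: storage name from the Dashboard
            "make_public": True,               # expose copies via public links
            "pattern": "${default}",           # optional key pattern (see above)
        },
    )

    print(response.status_code)
    print(response.json())  # e.g. {"type": "url", "result": "s3://..."}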

Copy with image processing operations

When copying image files via the REST API, you can provide an Uploadcare URL with processing operations included.

The name of a processed file is constructed from its original filename and the set of applied operations. For instance, the following source:

http://www.ucarecdn.com/:UUID/-/preview/200x200/roof.jpg

results in the following filename:

/:UUID/roofpreview_200x200.jpg

Our copy method returns a link to your file with the ad-hoc s3 scheme in the result member of the response object. For example:

s3://my-bucket/:UUID/roofpreview_200x200.jpg

In this scheme, the host is the S3 bucket name, and the path is the path to your file in the bucket. If you want to serve files directly from S3, you can get a proper HTTP link to your file by replacing s3:// with http://s3.amazonaws.com/.
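
As a small illustration, that replacement can be done like this (the bucket name and object key are placeholders):

    def s3_to_http(s3_url: str) -> str:
        """Turn the ad-hoc s3:// link from the copy response into an HTTP link."""
        return s3_url.replace("s3://", "http://s3.amazonaws.com/", 1)

    print(s3_to_http("s3://my-bucket/some-uuid/roofpreview_200x200.jpg"))
    # http://s3.amazonaws.com/my-bucket/some-uuid/roofpreview_200x200.jpg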