File Storage

You can use Uploadcare storage, which supports all our features, like on-the-fly CDN processing, or copy files to a custom Amazon S3 bucket, which has some limitations but may better suit your specific needs.

Uploadcare storage

Uploadcare storage requires no configuration and works out of the box. The traffic and storage costs are included in your plan, and you can monitor your usage in the dashboard.

Uploadcare storage workflow

  1. A file gets to our storage in one of your projects identified by its Public API key.
  2. The file is either stored or becomes subject to deletion after 24 hours. Read more about storing behavior down below.
  3. The file becomes available on our CDN after the first request.

If the file is an image, it can be further processed on the fly via our image processing. You can also get file info that way.
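
For example (assuming the default ucarecdn.com CDN domain and a placeholder UUID), appending operations to a file's CDN URL returns a processed version on the fly:

    https://ucarecdn.com/<file_uuid>/                      original file
    https://ucarecdn.com/<file_uuid>/-/preview/200x200/    200x200 preview, generated on request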

Video processing and document conversion work via our REST API.

File storing behavior

By default, we don't keep files in our storage forever. There is a 24-hour window in which you should decide whether to store the uploaded files or not.

You can make this decision when uploading a file via the Upload API. UPLOADCARE_STORE can be set to 0, 1, or auto:

  • 0 — file will be deleted after 24 hours.
  • 1 — file will be stored permanently until further notice.
  • auto — delegates the choice of storing behavior to the project's auto-store setting (which is ON by default).

If nothing is sent, 0 is used.

To avoid confusion, we recommend setting the parameter to auto.
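
As a rough sketch, a direct Upload API call with the storing behavior set explicitly could look like this in Python; the endpoint, field names, and keys below are assumptions to illustrate the idea, so check the Upload API reference for the exact contract:

    # Sketch: upload a file with the storing behavior set to "auto".
    # Assumes the multipart endpoint https://upload.uploadcare.com/base/
    # and the UPLOADCARE_PUB_KEY field name.
    import requests

    with open("photo.jpg", "rb") as f:  # placeholder local file
        response = requests.post(
            "https://upload.uploadcare.com/base/",
            data={
                "UPLOADCARE_PUB_KEY": "your_public_key",  # placeholder Public API key
                "UPLOADCARE_STORE": "auto",               # 0, 1, or auto
            },
            files={"file": f},
        )

    file_uuid = response.json()["file"]  # UUID of the uploaded file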

Once a file has been uploaded, you can store or delete it via the REST API.
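
For illustration, a minimal sketch of storing or deleting an uploaded file through the REST API; it assumes the /files/<uuid>/storage/ and /files/<uuid>/ endpoints and the Uploadcare.Simple authentication scheme, so verify both against the REST API reference:

    # Sketch: store or delete a file after upload via the REST API.
    import requests

    AUTH = {"Authorization": "Uploadcare.Simple your_public_key:your_secret_key"}  # placeholder keys
    BASE = "https://api.uploadcare.com"
    file_uuid = "your-file-uuid"  # placeholder

    # Store the file permanently (until further notice).
    requests.put(f"{BASE}/files/{file_uuid}/storage/", headers=AUTH)

    # ...or delete it within the 24-hour window (or any time later).
    requests.delete(f"{BASE}/files/{file_uuid}/", headers=AUTH)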

Custom Storage

The custom storage option is about connecting an Amazon S3 bucket to one or more of your Uploadcare projects. You can connect a custom bucket in the custom storage settings.

You may want to choose this option over Uploadcare Storage when you:

  • Implement custom file workflows that are deeply integrated with your system.
  • Follow company storage policies pointing at specific buckets.

Custom Storage Workflow

  1. A file is uploaded to our Uploadcare storage, as described above.
  2. Your app requests Uploadcare to save files to your Custom Storage (manually via REST API or automatically).
  3. Files can then be served directly from your storage or from a third-party CDN.

When handling images, your app can request Uploadcare to save their processed versions to your Custom Storage.

Setting Up Custom Storage

First, you need an Amazon Web Services account. Second, you need to create a new bucket or select an existing one in S3 bucket management.

Then, open your project's settings in the dashboard, go to "Custom storage" and click "Connect S3 bucket" to connect your S3 storage to Uploadcare.

Technically, a storage is a named set of settings. It includes a name that identifies your custom storage in our APIs, an S3 bucket name, and a prefix. You can use prefixes to organize files in your bucket. A prefix is added to each file saved to the specified storage. Hence, with various prefixes, you can have multiple storage names associated with a single S3 bucket.

Setting Up S3 Bucket

  1. Enter your Amazon S3 console and go to Buckets.
  2. Open an existing bucket or create a new one. Use DNS-compliant lowercase names such as johnsbucket1.
  3. Go to the Permissions tab on your bucket’s properties pane and add the following settings to your Bucket Policy (replace [bucket-name] with the name of your bucket):
    {
        "Version": "2008-10-17",
        "Statement": [{
            "Sid": "AllowUploadcareAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::899746369614:user/bucket-consumer"
            },
            "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": ["arn:aws:s3:::[bucket-name]/*", "arn:aws:s3:::[bucket-name]"]
        }]
    }
  4. Save your bucket settings and click Connect.

Uploadcare will run a set of tests to make sure it can connect and upload the files to the bucket.

Once the bucket has been connected, you can remove from your S3 Bucket Policy the actions that were only needed for the connection check.

Copying Files to Custom Storage

With a temporary link (CDN URL or UUID of a file) provided by Uploadcare, you can copy a file to your storage by making a POST request to the REST API. In your request, source should hold the temporary link, and target should hold the name of the storage your files are saved to.

Note that the maximum file size when copying files via the REST API is 5 GB.
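
A minimal sketch of such a copy request, assuming a remote-copy REST API endpoint that accepts source and target form fields; the endpoint path and auth scheme below are assumptions, so check the REST API reference:

    # Sketch: copy an uploaded file to a connected custom storage.
    import requests

    response = requests.post(
        "https://api.uploadcare.com/files/remote_copy/",  # assumed endpoint path
        headers={"Authorization": "Uploadcare.Simple your_public_key:your_secret_key"},
        data={
            "source": "your-file-uuid",      # temporary link: CDN URL or UUID
            "target": "your_storage_name",   # storage name from your custom storage settings
        },
    )

    print(response.json()["result"])  # e.g. an s3:// link to the copied file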

Files are copied to the following path in your bucket:

    <prefix>/<file_uuid>/<filename>

Where <prefix> is taken from your storage settings, <file_uuid> holds a temporary UUID, and <filename> contains the original filename, i.e. everything after the last slash in the original URL; more on filenames can be found here.

When handling image files, you can provide a temporary Uploadcare URL with processing operations included. In that case, the maximum file size should not exceed 100 MB.

The name of a processed file is constructed from its original filename and the set of applied operations. For instance, the following source:

    <file_uuid>/-/preview/200x200/roof.jpg

will result in a filename that combines roof.jpg with the applied preview operation.

Hence, all versions of your processed file are stored in the same virtual folder.
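
For example, reusing the hypothetical copy request sketched above, a processed version is saved by passing a source that includes the operations:

    # Sketch: copy a 200x200 preview of the image to custom storage; the processed
    # version lands in the same <file_uuid> virtual folder as the original.
    data = {
        "source": "your-file-uuid/-/preview/200x200/roof.jpg",  # temporary URL with operations
        "target": "your_storage_name",
    }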

Our copy method returns a link to your file with the ad-hoc s3 scheme in the result member of the response object. For example:

    s3://<bucket_name>/<prefix>/<file_uuid>/<filename>

In this scheme, the host is the S3 bucket name, and the path is the path to your file in the bucket. If you want to serve files directly from S3, you can get a proper HTTP link to your file by replacing s3:// with https://s3.amazonaws.com/ (or your bucket's regional endpoint).
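
As a small illustration, the scheme swap is a plain string replacement (assuming the path-style s3.amazonaws.com endpoint is available for your bucket's region):

    # Sketch: turn the s3:// link returned by the copy method into a plain HTTPS URL.
    s3_link = "s3://your-bucket-name/prefix/your-file-uuid/roof.jpg"  # placeholder result
    https_link = s3_link.replace("s3://", "https://s3.amazonaws.com/", 1)
    # -> https://s3.amazonaws.com/your-bucket-name/prefix/your-file-uuid/roof.jpg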

The example above deals with the default naming pattern. You can also customize how filenames are constructed. Learn more about naming patterns here.

Automatic file copying

You can enable automatic file copying by checking the corresponding checkbox in the custom storage connection settings in your dashboard.

This will ensure every file uploaded to that project also goes to your custom storage.

Auto-copy always uses the parameters from your custom storage settings (when you copy files manually, you can override them: pattern, make_public, etc.).