Custom storage

You may want to choose this option over the Uploadcare storage when you:

  • Implement custom file workflows.
  • Follow company storage policies pointing at specific buckets.
  • Don't plan to use our Media Processing or CDN delivery on an ongoing basis.
  • Want to manage traffic and storage on your end without having to use our dashboard.

Here is the custom storage workflow:

  • Users upload their files via Uploadcare Widget or one of our libraries.
  • We provide your app with a temporary link that is valid for 24 hours (see the upload sketch after this list).
  • Your app requests Uploadcare to save files to your custom storage.
  • When handling media files, your app can request Uploadcare to save their processed versions to your custom storage.
  • Files can then be served directly from your storage or from a third-party CDN you prefer.
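For example, the first two steps of this workflow might look like the following server-side sketch in Python. It assumes the public Upload API base endpoint, a form field named file, and the requests library; the exact endpoint and parameters may differ, so check the Upload API reference.

import requests

# Minimal sketch: upload a local file and build its temporary CDN link.
# The endpoint, field names, and response shape are assumptions based on
# the public Upload API; verify them against the Upload API reference.
UPLOAD_URL = "https://upload.uploadcare.com/base/"
with open("roof.jpg", "rb") as source_file:
    response = requests.post(
        UPLOAD_URL,
        data={"UPLOADCARE_PUB_KEY": "your_public_key"},
        files={"file": source_file},
    )
file_uuid = response.json()["file"]  # UUID assigned to the uploaded file
temporary_link = f"http://www.ucarecdn.com/{file_uuid}/roof.jpg"
print(temporary_link)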

Another option is to enable automatic file copying by checking the respective box in the custom storage options. Every file uploaded to Uploadcare will then also be copied to your custom storage.

To implement this workflow, you first need to connect your custom storage in the dashboard.

Note that in this case storage and traffic costs are not included in your Uploadcare plan and depend on the third-party services you use.

Setting up custom storage

First, you need an Amazon Web Services account. Then, create a new bucket or select an existing one in the Amazon S3 console.

Then, select a project in your dashboard, go to Custom storage and click Connect S3 bucket to connect your new storage to Uploadcare.

Technically, a storage is a named set of settings. It includes a name that identifies your custom storage in our APIs, an S3 bucket name, and a prefix. You can use prefixes to organize files in your bucket: a prefix is added to the path of each file saved to the specified storage. You can associate multiple storage names with a single S3 bucket using different prefixes.
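For illustration, here is how two storage names could map onto the same bucket with different prefixes. The names and prefixes below are hypothetical; the actual settings live in your dashboard, not in code.

from dataclasses import dataclass

# Hypothetical view of custom storage settings, for illustration only;
# real settings are configured in the Uploadcare dashboard.
@dataclass
class CustomStorage:
    name: str    # identifies the storage in Uploadcare APIs (used as the copy target)
    bucket: str  # the S3 bucket files are saved to
    prefix: str  # prepended to the path of every file saved to this storage

avatars = CustomStorage(name="avatars_storage", bucket="my-bucket", prefix="avatars")
reports = CustomStorage(name="reports_storage", bucket="my-bucket", prefix="reports")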

Moving files to custom storage

With a temporary link provided by Uploadcare, you can copy a file to your storage by making a POST request to our REST API (see the sketch below). In your request, source should hold the temporary link, while target should be the name of the custom storage to save the file to.

Note that the maximum file size when copying files via the REST API is 5 GB.
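Here is a minimal sketch of such a copy request in Python using the requests library. The endpoint path, API version header, and authentication scheme are assumptions that depend on the REST API version you use, so double-check them against the REST API reference; the source and target parameters follow the description above.

import requests

# Sketch of the copy request. The endpoint, Accept header, and auth scheme
# below are assumptions; source and target are as described above.
API_URL = "https://api.uploadcare.com/files/"
response = requests.post(
    API_URL,
    headers={
        "Authorization": "Uploadcare.Simple your_public_key:your_secret_key",
        "Accept": "application/vnd.uploadcare-v0.5+json",
    },
    data={
        "source": "http://www.ucarecdn.com/<file_uuid>/roof.jpg",  # temporary link
        "target": "my_custom_storage",  # name of your custom storage
    },
)
print(response.json())  # the "result" member holds an s3:// link to the copy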

Files are copied to the following path in your bucket:

/<prefix>/<file_uuid>/<filename>

Where <prefix> comes from your storage settings, <file_uuid> holds the file's temporary UUID, and <filename> contains the original filename, i.e. everything after the last slash in the original link; more on filenames here.
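As a rough illustration, this is how the destination key is composed; the composition itself happens on the Uploadcare side, and the prefix value here is hypothetical.

# Illustrative only: Uploadcare composes this path on its side.
def destination_key(prefix: str, file_uuid: str, source_url: str) -> str:
    filename = source_url.rsplit("/", 1)[-1]  # everything after the last slash
    return f"/{prefix}/{file_uuid}/{filename}"

# -> "/uploads/<file_uuid>/roof.jpg"
print(destination_key("uploads", "<file_uuid>", "http://www.ucarecdn.com/<file_uuid>/roof.jpg"))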

When handling media files, you can provide a temporary Uploadcare URL with processing operations included. In this case, the maximum media file size is 100 MB. The name of a processed file is constructed from its original filename and the set of applied operations. For instance, the following source:

http://www.ucarecdn.com/<file_uuid>/-/preview/200x200/roof.jpg

Will result in the filename:

/<prefix>/<file_uuid>/roof.preview_200x200.jpg

Hence, all processed versions of your file are stored in the same virtual folder.
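If you need to predict these names on your side, here is a rough sketch that derives the processed filename from a source URL with a single processing operation; the actual naming is done by Uploadcare, so treat this as an approximation.

# Illustrative only: derive the processed filename from a CDN URL that
# contains a single processing operation, following the pattern above.
def processed_filename(source_url: str) -> str:
    base, separator, tail = source_url.partition("/-/")
    if not separator:
        return source_url.rsplit("/", 1)[-1]  # no operations applied
    segments = tail.split("/")
    filename, operations = segments[-1], segments[:-1]  # "roof.jpg", ["preview", "200x200"]
    name, _, extension = filename.rpartition(".")
    return f"{name}.{'_'.join(operations)}.{extension}"

# -> "roof.preview_200x200.jpg"
print(processed_filename("http://www.ucarecdn.com/<file_uuid>/-/preview/200x200/roof.jpg"))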

Our copy method returns a link to your file with the ad-hoc s3 scheme in the result member of the response object. For example:

s3://my-bucket/<prefix>/<file_uuid>/roof.preview_200x200.jpg

In this scheme, the host is represented by an S3 bucket name, and the path points to your file in the bucket. If you want to serve files directly from S3, you can get a proper http link to your file by replacing s3:// with http://s3.amazonaws.com/.
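For example, converting the link from the previous example:

# Turn the ad-hoc s3:// link returned by the copy method into a direct
# S3 link by swapping the scheme, as described above.
def s3_to_http(s3_url: str) -> str:
    return s3_url.replace("s3://", "http://s3.amazonaws.com/", 1)

# -> "http://s3.amazonaws.com/my-bucket/<prefix>/<file_uuid>/roof.preview_200x200.jpg"
print(s3_to_http("s3://my-bucket/<prefix>/<file_uuid>/roof.preview_200x200.jpg"))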

We’re always happy to help with code, integration, and other stuff. Search our site for more info or post your question in our Community Area.