There are two storage workflows available with Uploadcare. You can use Uploadcare storage, or connect custom storage.
By default, you start your Uploadcare experience with Uploadcare Storage. The traffic and storage costs are then included in your plan, and you can monitor your limits in the dashboard. Uploadcare Storage requires no configuration and works out of the box.
Every file uploaded to your account goes to our storage, and all the Uploadcare media processing and delivery capabilities are based on it.
The workflow with Uploadcare Storage is as follows:
- A file gets into one of your Uploadcare projects, identified by its API key, by means of our file uploader, Upload API, or API clients.
- The file goes to our storage and gets cached by CDN layers.
- Then, depending on the “Automatic file storing” setting of your project, the file is either kept in our storage forever or deleted after a 24-hour period.
- If you use our Upload API to get files to your Uploadcare project, you can set the store flag to store a file manually.
- During the 24-hour expiry period, you can make an API call to manually store the file.
- In any case, your files are then delivered to your users via our CDN.
- If the file is an image, it can be further processed on the fly via our Image Processing.
- Video Processing and Document Conversion will work via our REST API.
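The upload step above can be sketched in Python. This is a minimal illustration of the Upload API's direct-upload form parameters; the public key value is a placeholder for your project's key:

```python
# Sketch of the form fields for a direct upload via the Upload API,
# with an explicit store flag. "demopublickey" is a placeholder.

UPLOAD_BASE = "https://upload.uploadcare.com/base/"

def upload_form_fields(public_key: str, store: str = "auto") -> dict:
    """Build the form fields for a direct upload.

    store: "1" keeps the file permanently, "0" lets it expire after
    the 24-hour period, "auto" defers to the project's
    "Automatic file storing" setting.
    """
    if store not in ("0", "1", "auto"):
        raise ValueError("store must be '0', '1' or 'auto'")
    return {
        "UPLOADCARE_PUB_KEY": public_key,
        "UPLOADCARE_STORE": store,
    }

fields = upload_form_fields("demopublickey", store="1")
# POST these fields together with the file part to UPLOAD_BASE.
```

A multipart POST of these fields plus the file content to the upload endpoint performs the actual upload.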
There are two basic workflows when uploading files to Uploadcare:
- Uploaded files will sit in your project permanently; this is called “Auto file storing.”
- Uploaded files will be kept in your account for 24 hours and then get deleted.
On Uploadcare, every permanent file is called “stored,” hence making a file permanent is called “storing.” The second workflow allows storing individual files by making an API Request.
By default, every project on your dashboard comes with the “Auto file storing” option enabled. This way you can seamlessly use our File Uploader without having to make store requests: every file uploaded via the file uploader gets stored in your Uploadcare project defined by the set public key.
When you disable “Auto file storing” for a project, every file that goes there gets deleted after a 24-hour period. This can be useful when your app does not need to keep every uploaded file taking up your account storage.
To store a file, you will need to make a separate server-side API call. Such requests should be made after a file uploading form gets submitted.
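The server-side store call can be sketched as follows, assuming the REST API's PUT /files/&lt;uuid&gt;/storage/ endpoint and the simple key-pair auth scheme; the keys and UUID below are placeholders:

```python
# Sketch of a server-side request that marks an uploaded file as
# stored, so it survives past the 24-hour expiry period.
import urllib.request

API_BASE = "https://api.uploadcare.com"

def build_store_request(uuid: str, public_key: str,
                        secret_key: str) -> urllib.request.Request:
    """Build a PUT request for the file-storing endpoint."""
    return urllib.request.Request(
        url=f"{API_BASE}/files/{uuid}/storage/",
        method="PUT",
        headers={
            # Simple auth: public and secret key pair.
            "Authorization": f"Uploadcare.Simple {public_key}:{secret_key}",
        },
    )

req = build_store_request("1bac376c-aa7e-4356-861b-dd2657b5bfd2",
                          "your_public_key", "your_secret_key")
# urllib.request.urlopen(req) would perform the call.
```

Because the request carries your secret key, it must be made from your server, never from client-side code.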
You can also have the master “Auto file storing” setting enabled in a project and still not store files sent by a specific file uploader instance. This behavior is achieved by setting the data-do-not-store file uploader option.
You may want to choose custom storage over Uploadcare Storage when you:
- Implement custom file workflows.
- Follow company storage policies pointing at specific buckets.
- Want to manage traffic and storage on your end without having to use our dashboard.
Here is the custom storage workflow:
- A file gets into one of your Uploadcare projects, identified by its API key, by means of our File Uploader, Upload API, or API clients.
- We provide your app with a temporary link that expires in 24 hours.
- Your app requests Uploadcare to save files to your Custom Storage.
- When handling images, your app can request Uploadcare to save their processed versions to your Custom Storage.
- Files can then be served directly from your storage or from a third-party content delivery network you prefer.
Another option would be enabling automatic file copying by checking the respective checkbox in the Custom storage options section of your project settings. This will ensure every file uploaded to that project also goes to your custom storage.
Note, in this case, storage and traffic costs are not included in your Uploadcare plan and depend on third-party services.
Then, select a project in your dashboard and go to Connect S3 bucket to connect your new storage to Uploadcare.
Technically, storage is a named set of settings. It includes a name that identifies your custom storage in our APIs, S3 bucket name, and prefix. You can use prefixes to organize files in your bucket. A prefix is added to each file saved to the specified storage. Hence, with various prefixes, you can have multiple storage names associated with a single S3 bucket.
With a temporary link provided by Uploadcare, you can copy a file to your storage by making a POST API request. In your request, source should hold the temporary link, and target should contain the name of the storage your files are saved to.
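The request body for this copy call can be sketched as a pair of form fields; the URL and storage name below are illustrative placeholders:

```python
# Sketch of the form data for a POST copy request: "source" holds the
# temporary Uploadcare link, "target" names the custom storage to copy
# into (as defined in your project settings).

def copy_payload(temporary_url: str, storage_name: str) -> dict:
    """Build the form data for a copy-to-custom-storage request."""
    return {
        "source": temporary_url,   # temporary link, expires in 24 hours
        "target": storage_name,    # custom storage name
    }

payload = copy_payload(
    "https://ucarecdn.com/1bac376c-aa7e-4356-861b-dd2657b5bfd2/",
    "my-s3-storage",
)
```

POSTing this payload to the REST API copy method, with the same auth as other REST calls, triggers the copy.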
Note that the maximum file size when copying files via the REST API is 5 GB.
Files are copied to the following path in your bucket: /&lt;prefix&gt;/&lt;file_uuid&gt;/&lt;filename&gt;. Here, &lt;prefix&gt; is taken from your storage settings, &lt;file_uuid&gt; holds a temporary UUID, and &lt;filename&gt; contains the original filename, represented by all symbols after the last slash in the original URL; more on filenames can be found here.
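The path construction above can be sketched as a small helper; the prefix, UUID, and URL are illustrative:

```python
# Sketch of how the destination key is assembled in your bucket:
# <prefix>/<file_uuid>/<filename>, where the filename is everything
# after the last slash of the original URL.

def bucket_key(prefix: str, file_uuid: str, original_url: str) -> str:
    """Build the S3 object key a copied file would get."""
    filename = original_url.rsplit("/", 1)[-1]  # symbols after last slash
    return f"{prefix}/{file_uuid}/{filename}"

key = bucket_key(
    "uploads",
    "1bac376c-aa7e-4356-861b-dd2657b5bfd2",
    "https://example.com/images/roadster.jpg",
)
# → "uploads/1bac376c-aa7e-4356-861b-dd2657b5bfd2/roadster.jpg"
```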
When handling image files, you can provide a temporary Uploadcare URL with processing operations included. In this case, the maximum file size should not exceed 100 MB.
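A temporary URL with processing operations can be sketched like this; the UUID and the resize operation are illustrative examples of the CDN's /-/operation/ syntax:

```python
# Sketch of a temporary CDN URL with image-processing operations
# appended, using the "/-/<operation>/" separator.

def processed_url(cdn_base: str, uuid: str, *operations: str) -> str:
    """Append image-processing operations to a CDN file URL."""
    url = f"{cdn_base}/{uuid}/"
    for op in operations:
        url += f"-/{op}/"
    return url

url = processed_url("https://ucarecdn.com",
                    "1bac376c-aa7e-4356-861b-dd2657b5bfd2",
                    "resize/200x200")
# → "https://ucarecdn.com/1bac376c-aa7e-4356-861b-dd2657b5bfd2/-/resize/200x200/"
```

Passing such a URL as the source of a copy request saves the processed version to your custom storage.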
The name of a processed file is constructed from its original filename and the set of applied operations. For instance, the following will result in the filename:
Hence, all versions of your processed file are stored in the same virtual folder.
Our copy method returns a link to your file with the ad-hoc scheme s3 in the result member of the response object. For example:
In this scheme, the host is represented by an S3 bucket name, and the path stands for the path to your file in the bucket. If you want to serve files directly from S3, you can get a proper HTTP link to your file by replacing the s3 scheme with your bucket's HTTPS endpoint.
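This scheme replacement can be sketched as follows, assuming the common virtual-hosted S3 URL form; your bucket may use a different endpoint or region, and the bucket name and key are placeholders:

```python
# Sketch of rewriting the ad-hoc s3://<bucket>/<path> link returned by
# the copy method into a plain HTTPS link (virtual-hosted-style URL).
from urllib.parse import urlparse

def s3_to_https(s3_url: str) -> str:
    """Rewrite s3://bucket/key into a virtual-hosted-style HTTPS URL."""
    parsed = urlparse(s3_url)
    if parsed.scheme != "s3":
        raise ValueError("expected an s3:// URL")
    # Host = bucket name; path = object key inside the bucket.
    return f"https://{parsed.netloc}.s3.amazonaws.com{parsed.path}"

link = s3_to_https("s3://my-bucket/uploads/uuid/roadster.jpg")
# → "https://my-bucket.s3.amazonaws.com/uploads/uuid/roadster.jpg"
```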
The example above deals with the default naming pattern. You can also customize how filenames are constructed. Learn more about naming patterns here.