Copy files to S3 bucket
Uploadcare allows you to connect an AWS S3 bucket to one or more of your Uploadcare projects to copy uploaded files directly to your own storage.
If you need to upload files directly to an S3 bucket, use AWS S3 storage.
How it works
- A file is uploaded to the Uploadcare storage.
- Your app requests Uploadcare to copy files to your AWS S3 bucket programmatically via the REST API, or the copying happens automatically.
- Files are stored in your AWS storage.
- Files can be served directly from your storage or a third-party CDN if needed.
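As a rough sketch of the first step, assuming the upload API's /base/ endpoint and the Python requests library (the public key and file name below are placeholders):

```python
import requests

UPLOADCARE_PUB_KEY = "your_public_key"  # placeholder: your project's public key

# Step 1: upload a file to the Uploadcare storage.
with open("photo.jpeg", "rb") as f:
    resp = requests.post(
        "https://upload.uploadcare.com/base/",
        data={"UPLOADCARE_PUB_KEY": UPLOADCARE_PUB_KEY, "UPLOADCARE_STORE": "1"},
        files={"file": f},
    )
resp.raise_for_status()
file_uuid = resp.json()["file"]  # UUID you can later pass to a manual copy request
print(file_uuid)
```

From here, the copy to your bucket either happens automatically or is triggered with the remote_copy request shown in the REST API section below.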
The same bucket can be connected to different projects.
Note: Files stored on the Uploadcare side can be temporary. If you don’t store files, our system deletes them within 24 hours. Check out file storing behavior for more details.
When handling images with image processing operations, you can request Uploadcare to copy processed versions to your AWS S3 bucket.
Setting up S3 bucket
You need an Amazon Web Services account and a new or existing bucket in S3 bucket management.
Open the storage settings in your Dashboard, go to “Copy uploads to S3 bucket”, and click “Connect bucket” to connect your AWS S3 bucket to the Uploadcare project.
- Enter the storage name, which will identify your custom storage in the REST API.
- Get your bucket’s name: open your AWS S3 console and go to Buckets. Open an existing bucket or create a new one. Use DNS-compliant lowercase bucket names such as johnsbucket1.
- Enter your bucket’s name.
- Set up access control according to your AWS bucket ACL settings. You can allow anyone to read objects from your S3 bucket by enabling the “Make copies public” rule.
- Go to the Permissions tab on your bucket’s properties pane and add the required settings to your Bucket Policy, replacing <bucket> with the name of your AWS S3 bucket (see the sketch after these steps).
- Change the Block public access settings if you need to give public access to copied files (all checkboxes are checked by default).
- Save your Bucket settings and Connect.
Uploadcare will run tests to ensure it can connect and upload files to the bucket. Once the bucket has been connected, you can remove these test-related actions from your S3 Bucket Policy.
By default, all new uploads from the Uploadcare storage will be transferred to your S3 bucket automatically. If this option is disabled, you can copy files manually using POST API requests.
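The exact policy statement to add is provided when you connect the bucket; the snippet below is only an illustrative sketch of granting write access on a bucket with boto3. The principal ARN and action list are placeholders, not Uploadcare's actual values.

```python
import json
import boto3

BUCKET = "johnsbucket1"  # replace with your bucket name

# Illustrative policy only: the real principal and actions come from Uploadcare.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadcareCopies",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # placeholder
            "Action": ["s3:PutObject", "s3:PutObjectAcl"],           # placeholder
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```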
Key pattern (optional)
The Key pattern is used to build an object key name for the copied file.
By default, it’s ${default}, which is the same as ${uuid}/${filename}${ext}.
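As a rough illustration (the UUID below is hypothetical and the expansion is a sketch, not Uploadcare's implementation, which may treat ${ext} differently), the default pattern maps a file to an object key like this:

```python
# Hypothetical expansion of the default key pattern for one copied file.
pattern = "${uuid}/${filename}${ext}"  # what ${default} resolves to
values = {
    "uuid": "5f8d7a2c-1b3e-4f6a-9c0d-2e4b6a8c0e1f",  # placeholder UUID
    "filename": "photo",
    "ext": ".jpeg",  # assuming ${ext} includes the leading dot
}
key = pattern
for name, value in values.items():
    key = key.replace("${" + name + "}", value)
print(key)  # 5f8d7a2c-1b3e-4f6a-9c0d-2e4b6a8c0e1f/photo.jpeg
```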
File name specifics
When a file is copied to an external S3 bucket, a sanitized version of the
original filename is used for the filename
part of the object key name in S3.
When sanitizing a filename, our system removes all characters except [a-zA-Z0-9_].
For example:
- image-sample.jpeg → imagesample.jpeg
- image(1).jpeg → image1.jpeg
- ().png → noroot.png (if no valid characters remain in the output name, the whole name is changed to noroot).
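A small sketch of that rule (an approximation for illustration, not Uploadcare's actual code), assuming the extension part of the key is kept separate from the name root:

```python
import re

def sanitize_key_filename(original_name: str) -> str:
    """Strip everything outside [a-zA-Z0-9_] from the name root, fall back to
    "noroot" when nothing is left, and keep the extension."""
    root, _, ext = original_name.rpartition(".")
    if not root:  # no dot in the name at all
        root, ext = original_name, ""
    cleaned = re.sub(r"[^a-zA-Z0-9_]", "", root)
    return (cleaned or "noroot") + (f".{ext}" if ext else "")

assert sanitize_key_filename("image-sample.jpeg") == "imagesample.jpeg"
assert sanitize_key_filename("image(1).jpeg") == "image1.jpeg"
assert sanitize_key_filename("().png") == "noroot.png"
```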
Auto-copy
All new uploads are automatically copied from the Uploadcare storage to your AWS S3 bucket. This feature is enabled by default.
Copy via REST API
Copying via the REST API is used to copy an existing file from the Uploadcare storage to your AWS S3 bucket.
In your request, source must contain a CDN URL or just the UUID of the file to copy, and target must include the name of the storage where your files should be saved.
The response JSON will contain "type": "url" and the S3 object URL.
When using the remote_copy method, you can specify the key pattern for each copy individually.
If the make_public parameter is true, the copied files will be available via public links.
A maximum file size of 5 GB applies when copying files via the REST API.
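A minimal sketch of such a request, assuming REST API v0.7's remote_copy method and the Python requests library (credentials, UUID, and storage name below are placeholders; check the REST API reference for the exact endpoint and auth scheme of your API version):

```python
import requests

PUBLIC_KEY = "your_public_key"  # placeholder project credentials
SECRET_KEY = "your_secret_key"

resp = requests.post(
    "https://api.uploadcare.com/files/remote_copy/",
    json={
        "source": "1bac376c-aa7e-4356-861b-dd2657b5bfd2",  # file UUID or CDN URL
        "target": "my-s3-storage",   # storage name set when connecting the bucket
        "make_public": True,         # serve copies via public links
        "pattern": "${default}",     # optional per-copy key pattern
    },
    headers={
        "Accept": "application/vnd.uploadcare-v0.7+json",
        "Authorization": f"Uploadcare.Simple {PUBLIC_KEY}:{SECRET_KEY}",
    },
)
resp.raise_for_status()
print(resp.json())  # e.g. {"type": "url", "result": "s3://<bucket>/<key>"}
```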
Copy with image processing operations
When copying image files via the REST API, you can provide an Uploadcare CDN URL with processing operations included.
The name of a processed file is constructed from its original filename and the set of applied operations, so a source URL that includes processing operations produces a filename reflecting those operations.
Our copy method returns a link to your file with the ad-hoc scheme s3 in the result member of the response object, for example, s3://your-bucket/<uuid>/image.jpeg.
In this scheme, the host is the S3 bucket name, and the path is the path to your file in the bucket. If you want to serve files directly from S3, you can get a proper HTTP link to your file by replacing s3:// with http://s3.amazonaws.com/.
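A one-line sketch of that substitution (the bucket name and object key below are placeholders):

```python
# Placeholder values; the real link comes from the copy response's "result".
s3_url = "s3://your-bucket/1bac376c-aa7e-4356-861b-dd2657b5bfd2/photo.jpeg"
http_url = s3_url.replace("s3://", "http://s3.amazonaws.com/", 1)
print(http_url)
```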