Unsafe content detection
Uploadcare can detect and identify unwanted, NSFW, or offensive user-generated content in images to help prevent inappropriate images from being published. AWS Rekognition machine learning technology automatically identifies and flags such content. The feature is available for images only and is accessible via the Add-Ons API in REST API v0.7 or higher.
How it works
Unsafe content detection works asynchronously through the REST API.
- Start a processing job via the REST API. Send an input file UUID with the necessary processing operations.
- Wait until the processing job status becomes done.
- Detected moderation labels will be stored in the JSON response in the appdata section of the processed file.
Example
Execute unsafe content detection on a target UUID
Check out the REST API reference to see how to execute an add-on on a target UUID.
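Here is a minimal sketch of that request in Python with the requests library. The add-on name (aws_rekognition_detect_moderation_labels), the Uploadcare.Simple auth scheme, and the versioned Accept header follow the REST API reference; YOUR_PUBLIC_KEY, YOUR_SECRET_KEY, and FILE_UUID are placeholders to substitute.

```python
import requests

# Placeholders: substitute your project keys and the UUID of an uploaded image.
AUTH_HEADERS = {
    "Authorization": "Uploadcare.Simple YOUR_PUBLIC_KEY:YOUR_SECRET_KEY",
    "Accept": "application/vnd.uploadcare-v0.7+json",
}

# Start an unsafe content detection job for a single file UUID.
response = requests.post(
    "https://api.uploadcare.com/addons/aws_rekognition_detect_moderation_labels/execute/",
    json={"target": "FILE_UUID"},
    headers=AUTH_HEADERS,
)
response.raise_for_status()

# The response carries a request ID used later to poll the job status.
request_id = response.json()["request_id"]
print(request_id)
```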
Webhook event
To get the job result, you need to enable file.info_updated in the Webhook section of the dashboard.
After the processing job completes, a webhook will be sent to the endpoint you specified in the webhook settings.
The webhook request body contains the updated file information, including its appdata section.
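As an illustration only, here is a minimal receiver sketch in Python using Flask (not part of Uploadcare's tooling). The nesting of the file info under the data key and the aws_rekognition_detect_moderation_labels appdata key are assumptions to verify against the webhook payload documentation.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/uploadcare-webhook", methods=["POST"])
def handle_file_info_updated():
    payload = request.get_json(force=True)

    # Assumption: the file.info_updated payload carries the file info,
    # including its appdata section, under the "data" key.
    file_info = payload.get("data", {})
    appdata = file_info.get("appdata", {})

    # Assumption: the add-on stores its result under this appdata key.
    moderation = appdata.get("aws_rekognition_detect_moderation_labels", {})
    labels = moderation.get("data", {}).get("ModerationLabels", [])

    for label in labels:
        print(label.get("Name"), label.get("Confidence"))

    return "", 204
```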
Check the execution status
If your application does not have a backend or it is a mobile app, you can check the execution status yourself.
Use the request ID returned by the add-on execution request described above.
When the status is done, the response will contain the UUID of the file.
Check out the REST API reference to see how to check the execution status.
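A polling sketch in Python under the same assumptions as the execution example above; request_id is the value returned when the add-on was started, and the status values (in_progress, done, error, unknown) should be verified against the REST API reference.

```python
import time

import requests

AUTH_HEADERS = {
    "Authorization": "Uploadcare.Simple YOUR_PUBLIC_KEY:YOUR_SECRET_KEY",
    "Accept": "application/vnd.uploadcare-v0.7+json",
}

STATUS_URL = (
    "https://api.uploadcare.com/addons/"
    "aws_rekognition_detect_moderation_labels/execute/status/"
)

def wait_until_done(request_id: str, poll_interval: float = 2.0) -> dict:
    """Poll the add-on status endpoint until the job leaves the in-progress state."""
    while True:
        response = requests.get(
            STATUS_URL, params={"request_id": request_id}, headers=AUTH_HEADERS
        )
        response.raise_for_status()
        status = response.json()
        if status.get("status") != "in_progress":
            return status  # e.g. "done", "error", or "unknown"
        time.sleep(poll_interval)
```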
Get the result
Once the status changes to done, appdata will contain the result of the execution.
To get it, run a file info request specifying the include parameter.
There are two methods for getting info on detected labels via GET requests: one for the single-file case and one for the multi-file case, as sketched below.
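Both requests are sketched here in Python; the /files/ endpoints and the include=appdata query parameter follow the REST API reference, and the keys and FILE_UUID are placeholders.

```python
import requests

AUTH_HEADERS = {
    "Authorization": "Uploadcare.Simple YOUR_PUBLIC_KEY:YOUR_SECRET_KEY",
    "Accept": "application/vnd.uploadcare-v0.7+json",
}

# Single-file case: file info for one UUID, with its appdata included.
single = requests.get(
    "https://api.uploadcare.com/files/FILE_UUID/",
    params={"include": "appdata"},
    headers=AUTH_HEADERS,
)

# Multi-file case: list files, each with its appdata included.
multiple = requests.get(
    "https://api.uploadcare.com/files/",
    params={"include": "appdata"},
    headers=AUTH_HEADERS,
)

print(single.json())
print(multiple.json())
```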
You'll get JSON with an appdata section in the response.
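An abridged, illustrative shape of that section, assuming the result is stored under the aws_rekognition_detect_moderation_labels key and mirrors AWS Rekognition's ModerationLabels output; the label names and confidence value below are made-up placeholders, so check the REST API reference for the exact schema.

```json
{
  "appdata": {
    "aws_rekognition_detect_moderation_labels": {
      "version": "...",
      "datetime_created": "...",
      "datetime_updated": "...",
      "data": {
        "ModerationModelVersion": "...",
        "ModerationLabels": [
          {
            "Confidence": 93.5,
            "Name": "Weapons",
            "ParentName": "Violence"
          }
        ]
      }
    }
  }
}
```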
Billing
- This feature is available on paid plans.
- Learn how we charge for this operation.