You can classify a video either from a file or a URL.
This section explains how to make a request using a file. For a URL, the process is similar; the main difference is the endpoint called when submitting the video.

Submission guidelines

Video analysis involves processing the video to extract specific frames. For this, the guidelines to follow are similar to those for images:
  • Send the video that is “closest” to the source: for example, do not send a video of a photo, or a video filmed from another video.
  • For videos taken from social media, the most accurate results are obtained from videos without superimposed text or the modifications typically applied by social shares.
  • If the video is available from other sources, avoid submitting social media links, since the platforms apply their own post-processing.
During submission, you can exclude the morphing models. We advise against enabling the morphing model when no faces are present, to avoid the risk of false positives.

Frame extraction strategy selection

For videos only, you must select a frame extraction strategy: the extracted frames are then analyzed by the detection models. The available strategies are described below:

Time-spaced

With this mode, N frames (depending on the “Maximum number of frames” selection) are extracted, equally spaced along the duration of the video. To enable this mode, add the following parameters:
frames=(5|10|30)
key_frames=false
frames controls the minimum number of frames to extract. You can choose between 5, 10 and 30. A higher number of analyzed frames corresponds to a higher number of credits used for the analysis.
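To illustrate the time-spaced strategy, here is a minimal sketch of how N equally spaced extraction timestamps could be computed for a video. The centred-interval formula is our assumption for illustration; the service's internal sampling may differ.

```python
def time_spaced_timestamps(duration_s: float, frames: int) -> list:
    # Place each of the `frames` sample points at the centre of an
    # equal-width interval along the video's duration.
    step = duration_s / frames
    return [round(step * (i + 0.5), 3) for i in range(frames)]

# A 60-second video sampled with frames=5 yields one frame every 12 s:
print(time_spaced_timestamps(60.0, 5))  # [6.0, 18.0, 30.0, 42.0, 54.0]
```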

Key frames

  • Key frames - Default: with this mode, up to N I-frames are extracted. These frames capture sudden scene changes in the video and are therefore potentially of interest for analyzing different scenes.
  • Key frames - Color: with this mode, up to N frames are extracted that represent the “average” frame of a scene, calculated based on the colors within the image.
  • Key frames - Flow: with this mode, up to N frames are extracted that represent the “stillest” frame of a scene, calculated relative to the previous frame of the scene.
frames=(5|10|30)
key_frames=true
key_frames_method=(iframe|color|flow)
frames controls the minimum number of frames to extract. You can choose between 5, 10 and 30. A higher number of analyzed frames corresponds to a higher number of credits used for the analysis.
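As a sketch, a small helper can assemble the key-frame parameters and catch invalid values before the request is made. The parameter names and allowed values come from this section; the helper itself is ours.

```python
VALID_FRAMES = (5, 10, 30)
VALID_METHODS = ("iframe", "color", "flow")

def key_frame_params(frames: int, method: str = "iframe") -> dict:
    """Build the multipart form fields for a key-frame extraction request."""
    if frames not in VALID_FRAMES:
        raise ValueError(f"frames must be one of {VALID_FRAMES}")
    if method not in VALID_METHODS:
        raise ValueError(f"key_frames_method must be one of {VALID_METHODS}")
    return {
        "frames": str(frames),
        "key_frames": "true",
        "key_frames_method": method,
    }
```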

Submit a video using the API

To classify a video, you need to make a POST request to /api/classification_video with the required parameters (for details on the parameters, refer to the API Reference section).
curl --request POST \
  --url https://backend.identifai.net/api/classification_video \
  --header 'Content-Type: multipart/form-data' \
  --header 'X-Api-Key: <api-key>' \
  --form "video=@/path/to/sample.mp4" \
  --form frames=5 \
  --form with_morphing=false \
  --form with_tampering=false
In the response you will receive the identifier of the classified video.
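The same submission can be sketched in Python. The form-field helper below mirrors the curl call above; the actual network call (with the third-party `requests` library) is shown only as a comment, so the sketch stays self-contained.

```python
BASE_URL = "https://backend.identifai.net"

def classification_form(frames=5, with_morphing=False, with_tampering=False):
    """Build the form fields used by the curl example above."""
    return {
        "frames": str(frames),
        "with_morphing": str(with_morphing).lower(),
        "with_tampering": str(with_tampering).lower(),
    }

# With the `requests` library the submission would look like:
# resp = requests.post(
#     f"{BASE_URL}/api/classification_video",
#     headers={"X-Api-Key": api_key},
#     data=classification_form(),
#     files={"video": open("/path/to/sample.mp4", "rb")},
# )
# The JSON response contains the identifier of the classified video.
```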

Retrieve the results

Use the provided identifier to retrieve the classification results by making a GET request to /api/classification_video/{identifier} (for details on how to structure this request, see the API Reference section).
curl --request GET \
  --url https://backend.identifai.net/api/classification_video/{identifier} \
  --header 'X-Api-Key: <api-key>'
The response will provide the classification results for the video in JSON format. For video classification, the video is divided into frames. In the response, you will find a results array that includes the classification outcomes for each model applied to each analyzed frame, and a verdicts array containing the results for each heuristic used.
Note that the classification may not be finished yet: if the result is not yet available, continue sending the GET request until it is.
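The polling loop described above can be sketched as follows. Here `fetch_result` stands in for the GET request to /api/classification_video/{identifier}; since the exact field signalling completeness depends on the API, the sketch takes a caller-supplied `is_complete` check.

```python
import time

def poll_classification(fetch_result, is_complete,
                        interval_s=2.0, max_attempts=30):
    """Repeat the GET request until the result is available.

    fetch_result: callable performing the GET and returning the parsed JSON.
    is_complete:  callable deciding whether that JSON is a finished result.
    """
    for _ in range(max_attempts):
        result = fetch_result()
        if is_complete(result):
            return result
        time.sleep(interval_s)
    raise TimeoutError("classification did not finish in time")
```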

Guidelines on interpreting the results

The video results contain both the global verdicts and the individual results for each analyzed frame. We suggest referring to the guidelines on interpreting image results.
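When working with the per-frame entries, a common first step is collapsing them into a per-model summary. The entry shape used below (a model name and a score per frame) is only an assumption for illustration; the real layout of the results array is described in the API Reference.

```python
def max_score_per_model(results):
    """Collapse per-frame results into the highest score seen per model.

    `results` is assumed, for this sketch only, to be a list of dicts
    with "model" and "score" keys, one entry per model per frame.
    """
    best = {}
    for entry in results:
        model, score = entry["model"], entry["score"]
        if score > best.get(model, float("-inf")):
            best[model] = score
    return best
```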

Heatmaps on video

For heatmaps on videos, the considerations are the same as for images. The only addition is that, to inspect a heatmap, you need to click on Frames and examine the heatmap of each individual frame.

See also