Clip Evaluation

Evaluation is where the AI scores each search result on how well it matches your B-Roll needs. Instead of manually scrubbing through dozens of videos, you get a ranked list with precise timestamp suggestions.

How Evaluation Works

When you click Evaluate on a moment, B-Roll Me sends the following context to the AI:

  • The script excerpt for that moment
  • The editorial note explaining what kind of B-Roll is needed
  • Each search result's title, description, and transcript matches

The AI returns a score, suggested timestamps, and a description for each clip.
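The request and response described above can be sketched as plain data structures. This is an illustrative shape only; the field names (`script_excerpt`, `editorial_note`, `suggested_start`, etc.) are assumptions, not B-Roll Me's actual API:

```python
# Hypothetical evaluation request: the context sent for one moment.
evaluation_request = {
    "script_excerpt": "The glaciers are retreating faster each year.",
    "editorial_note": "Need aerial footage of melting glaciers.",
    "search_results": [
        {
            "title": "Greenland Ice Sheet Flyover",
            "description": "Drone footage of glacial melt.",
            "transcript_matches": ["glacier", "melting ice"],
        },
    ],
}

# Hypothetical evaluation response: one entry per search result.
evaluation_response = [
    {
        "clip_title": "Greenland Ice Sheet Flyover",
        "score": 92,                    # 0-100 relevance score
        "suggested_start": "00:01:14",  # suggested in/out timestamps
        "suggested_end": "00:01:22",
        "description": "Aerial shot of a calving glacier face.",
        "usable": True,                 # usability flag (see below)
    },
]
```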

Score Ranges

Clips are scored on a 0–100 scale. Here's what each range means:

  • 90–100: Excellent match. Highly relevant footage with a clear visual connection to the script.
  • 70–89: Good match. Relevant footage that works well for the intended purpose.
  • 50–69: Partial match. Some relevant content, but it may require more context or trimming.
  • 30–49: Weak match. Tangentially related but not ideal for the specific B-Roll need.
  • 0–29: Not relevant. The clip doesn't match the B-Roll requirement.
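The range boundaries above translate directly into a lookup, sketched here as a small helper (the function name is illustrative):

```python
def score_label(score: int) -> str:
    """Map a 0-100 evaluation score to its range label."""
    if score >= 90:
        return "Excellent match"
    if score >= 70:
        return "Good match"
    if score >= 50:
        return "Partial match"
    if score >= 30:
        return "Weak match"
    return "Not relevant"
```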

Screenshot: Evaluated clips with score badges

Sorting by Score

After evaluation, you can toggle Sort by AI Score to reorder results from highest to lowest score. This puts the best clips at the top so you can quickly find the most relevant footage.
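The reordering is a straightforward descending sort on the score field. A minimal sketch, assuming evaluated clips are dictionaries with a `score` key:

```python
clips = [
    {"title": "City traffic timelapse", "score": 61},
    {"title": "Glacier flyover", "score": 92},
    {"title": "Stock chart animation", "score": 34},
]

# Sort by AI Score: highest-scoring clips first.
ranked = sorted(clips, key=lambda c: c["score"], reverse=True)
```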

Usability Flags

The AI also flags each clip as "usable" or "unusable." A clip may be marked unusable because it is audio-only, has poor video quality, or doesn't actually match the need despite keyword overlap. Unusable clips are visually indicated so you can skip them.
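If you are working with evaluation results programmatically, skipping flagged clips is a simple filter. A sketch, assuming each clip carries a boolean `usable` field as in the hypothetical response shape above:

```python
clips = [
    {"title": "Glacier flyover", "score": 92, "usable": True},
    {"title": "Podcast audio track", "score": 55, "usable": False},
]

# Keep only clips the AI flagged as usable.
usable_clips = [c for c in clips if c["usable"]]
```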

Evaluate All

Like Search All, there's an Evaluate All option that processes every moment's search results. Moments are evaluated one at a time to manage API costs.
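The one-at-a-time behavior amounts to a sequential loop rather than parallel requests. A minimal sketch of that pattern (the function and its parameters are hypothetical, not B-Roll Me's internals):

```python
import time

def evaluate_all(moments, evaluate_fn, delay_seconds=0.0):
    """Evaluate each moment's search results sequentially,
    optionally pausing between calls to pace API usage."""
    results = {}
    for moment in moments:
        results[moment["id"]] = evaluate_fn(moment)
        if delay_seconds:
            time.sleep(delay_seconds)
    return results
```

Processing moments serially trades speed for predictable, bounded API spend, since only one evaluation request is in flight at a time.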