Relevance rubrics that creative teams actually use
Creative teams rarely refuse structure; they refuse opaque structure. A relevance rubric works when each row sounds like a conversation they already have in critiques. We co-write rows as concrete questions, such as "Does this offer match the last recorded intent signal?", rather than asking reviewers for abstract quality scores.
We also limit rows to what reviewers can judge in under two minutes. If a row requires a data pull, it belongs in a pre-flight checklist, not the rubric. That separation keeps the studio session moving and respects creative time.
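To make the split concrete, here is a minimal sketch of how the rubric and the pre-flight checklist might be modeled as two separate structures. All names here (RubricRow, PreflightCheck, the example rows, and the data source) are hypothetical illustrations, not a real team's schema:

```python
# Hedged sketch: rubric rows are judged live from the work itself;
# anything needing a data pull lives in a separate pre-flight list.
from dataclasses import dataclass

@dataclass
class RubricRow:
    """One question a reviewer can answer by looking at the work."""
    question: str                  # phrased like a critique conversation
    max_judge_minutes: float = 2.0 # rows must be judgeable in under two minutes

@dataclass
class PreflightCheck:
    """A check that requires a data pull, done before the studio session."""
    question: str
    data_source: str  # where the pull comes from

rubric = [
    RubricRow("Does this offer match the last recorded intent signal?"),
    RubricRow("Would this headline survive a read-aloud in critique?"),
]

preflight = [
    # Needs a data pull, so it stays out of the live rubric.
    PreflightCheck("Is the audience segment still active?",
                   data_source="CRM export"),
]
```

Keeping the two lists as distinct types makes the boundary enforceable: a row that needs a data source simply cannot be expressed as a RubricRow.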
Rubrics fail when they become weapons. We rotate reviewers and publish example passes and fails from past launches (anonymized). Seeing the rubric applied to real work builds trust faster than another governance deck.
After rollout, we measure adoption by counting how many briefs link to the rubric without being asked. That is a humble metric, but it tells us the tool earned its place.
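A hedged sketch of that count, assuming briefs live as Markdown files in a shared directory and the rubric has a stable URL; both assumptions, and the URL and directory below are hypothetical:

```python
# Adoption metric sketch: the fraction of briefs that link to the
# rubric unprompted. Directory layout and RUBRIC_URL are assumptions.
from pathlib import Path

RUBRIC_URL = "https://wiki.example.com/relevance-rubric"  # hypothetical

def adoption_rate(brief_dir: str) -> float:
    """Return the fraction of briefs containing a link to the rubric."""
    briefs = list(Path(brief_dir).glob("*.md"))
    if not briefs:
        return 0.0
    linked = sum(
        1 for brief in briefs
        if RUBRIC_URL in brief.read_text(encoding="utf-8")
    )
    return linked / len(briefs)

if __name__ == "__main__":
    print(f"{adoption_rate('briefs/'):.0%} of briefs link to the rubric")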