Which of the following is considered a best practice when creating a campaign experiment?

Pick two or three metrics to evaluate campaign performance and determine the winner of a test.

When experimenting with creatives, new ads are not subject to ad approvals, so experiments can be expedited.

After an experiment ends, evaluate performance over a timeline that includes the ramp-up period.

Focus tests on one variable at a time and use separate tests to examine the effects of more than one change.

Explanation

When creating a campaign experiment, adhering to scientific testing principles is crucial for obtaining valid and actionable results.

Analysis of Correct Answer(s)

  • Focus tests on one variable at a time and use separate tests to examine the effects of more than one change.
    • This is a cornerstone of A/B testing and controlled experimentation. By changing only one variable (e.g., bid strategy, creative, targeting segment) between your control and experiment groups, you can confidently attribute any observed performance differences directly to that specific change.
    • If multiple variables are altered simultaneously, it becomes impossible to determine which change, or combination of changes, was responsible for the results, leading to ambiguous conclusions. This practice ensures clear causality and reliable insights.
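To make the causality point concrete, here is a minimal sketch (Python, with hypothetical numbers and an assumed 50/50 traffic split) of how a single-variable test might be read out: only one element (e.g., the creative) differs between the control and experiment arms, and the comparison is made on a single primary metric, conversion rate, using a standard two-proportion z-test.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of a control (A) and a one-variable variant (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                   # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return p_a, p_b, z, p_value

# Hypothetical numbers: 50/50 split, only the creative differs between arms.
p_a, p_b, z, p = two_proportion_z_test(conv_a=420, n_a=10_000,
                                        conv_b=470, n_b=10_000)
print(f"control CVR={p_a:.2%}  variant CVR={p_b:.2%}  z={z:.2f}  p={p:.3f}")
```

Because only one variable changed, any statistically meaningful difference in the primary metric can be attributed to that change rather than to an untracked interaction between several edits.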

Analysis of Incorrect Options

  • Pick two or three metrics to evaluate campaign performance and determine the winner of a test.
    • While monitoring multiple metrics (e.g., conversions, CPA, ROAS) gives a holistic view of campaign health, declaring a clear "winner" in an experiment requires a single primary metric (typically a key performance indicator, or KPI) that defines success for that specific test. Splitting the decision across several metrics invites conflicting results, where one variant wins on one metric but loses on another, making it hard to declare a definitive winner or scale the better-performing approach. Best practice is to define one primary success metric before the experiment starts and monitor the others as secondary metrics; the first sketch after this list illustrates how conflicting metric outcomes can arise.
  • When experimenting with creatives, new ads are not subject to ad approvals, so experiments can be expedited.
    • This statement is incorrect. All new ad creatives, whether or not they are part of an experiment, go through the advertising platform's ad approval process, which ensures that ads comply with platform policies, legal requirements, and community standards. Ad approvals cannot be bypassed; submitting non-compliant ads leads to rejections and can put the account at risk of suspension, so experiment timelines must allow for review.
  • After an experiment ends, evaluate performance over a timeline that includes the ramp-up period.
    • This is not a best practice. The ramp-up period (also called the learning phase or initial delivery period) often shows volatile performance while the system optimizes delivery. Including it in the final evaluation can skew results and misrepresent the experiment's stable performance. Instead, evaluate the period after the experiment has exited the learning phase and reached stable delivery, when results are representative of long-term potential; the second sketch after this list shows one way to exclude ramp-up days from the evaluation window.
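To illustrate the metric-conflict point above, here is a minimal Python sketch with hypothetical numbers in which the experiment arm wins on the assumed primary metric (conversion rate) but loses on a secondary metric (CPA); the winner is declared on the primary metric only, while the secondary metric is simply monitored.

```python
# Hypothetical results where the two arms "win" on different metrics,
# illustrating why a single primary metric must be chosen up front.
results = {
    "control":    {"conversions": 400, "clicks": 10_000, "cost": 8_000.0},
    "experiment": {"conversions": 430, "clicks": 10_000, "cost": 9_200.0},
}

for arm, r in results.items():
    cvr = r["conversions"] / r["clicks"]   # primary metric (assumed): conversion rate
    cpa = r["cost"] / r["conversions"]     # secondary metric: cost per acquisition
    print(f"{arm:>10}: CVR={cvr:.2%}  CPA={cpa:.2f}")

# Declare the winner on the primary metric only; CPA is monitored, not decisive.
winner = max(results, key=lambda a: results[a]["conversions"] / results[a]["clicks"])
print("winner on primary metric:", winner)
```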
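And for the ramp-up point, a minimal sketch (hypothetical daily data, assumed two-day learning phase; field names are illustrative) of excluding the ramp-up period before computing the evaluation-window totals:

```python
from datetime import date, timedelta

# Hypothetical daily results keyed by date.
daily_results = {
    date(2024, 6, 1) + timedelta(days=i): {"conversions": c, "cost": s}
    for i, (c, s) in enumerate([(12, 300), (15, 310), (34, 600), (36, 590),
                                (33, 610), (35, 605), (37, 598)])
}

RAMP_UP_DAYS = 2  # assumed learning-phase length; adjust to the platform's guidance
experiment_start = min(daily_results)
evaluation_start = experiment_start + timedelta(days=RAMP_UP_DAYS)

# Evaluate only the stable period after the ramp-up/learning phase.
stable = {d: r for d, r in daily_results.items() if d >= evaluation_start}
conversions = sum(r["conversions"] for r in stable.values())
cost = sum(r["cost"] for r in stable.values())
print(f"stable-period conversions={conversions}, CPA={cost / conversions:.2f}")
```

Keeping the volatile early days out of the totals prevents the learning phase from dragging down (or inflating) the metrics used to judge the experiment.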