The Iterative Testing Process

by Brian Toomey, JB Analytics CEO

Graphic showing the process: 1) understand your data, 2) test an alternative, and 3) implement the winner

I. Preparation

What do we need to get ready to test?

  1. Install split testing software. We usually recommend Visual Website Optimizer (VWO) and can run tests through our account, so you don’t need to worry about that.
  2. Make sure analytics is working and goals are tracking properly, so test results can be meaningfully quantified.
  3. Install any additional heat mapping, session recording, or other analytics data collection.
  4. Consider informal and qualitative user research as well as hard data.
  5. Run a power analysis on sample size to estimate the time needed to run the test: how many tests and variants can we run, and on what timeline? (A rough calculation is sketched after this list.)
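
For example, a back-of-the-envelope power calculation might look like the sketch below, written in Python with statsmodels. The baseline rate, detectable lift, and traffic numbers are illustrative assumptions, not real client figures.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.04               # current conversion rate (assumed)
target_rate = 0.05                 # smallest lift worth detecting (assumed)
daily_visitors_per_variant = 400   # expected traffic per variant (assumed)

# Cohen's h effect size for the two proportions.
effect_size = proportion_effectsize(target_rate, baseline_rate)

# Visitors needed per variant for 80% power at a 5% significance level.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)

print(f"Visitors needed per variant: {n_per_variant:,.0f}")
print(f"Estimated duration: {n_per_variant / daily_visitors_per_variant:.0f} days")
```

A long estimated duration is usually a signal to test fewer variants at once or to target a higher-traffic page.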

In an enterprise setting, you may need to take additional steps, such as verifying the quality of your randomization or running tests server side. With very technical audiences (e.g. computer science training), a large percentage of users block client-side testing software, so server-side testing may be required.
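
When server-side testing is called for, variant assignment can be as simple as deterministic hashing of a user ID, and randomization quality can be spot-checked with a chi-square test. Below is a minimal sketch assuming a Python backend; the experiment salt and variant names are hypothetical.

```python
import hashlib
from collections import Counter
from scipy.stats import chisquare

VARIANTS = ["control", "treatment"]
SALT = "homepage-hero-test"   # hypothetical experiment identifier

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{SALT}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Spot-check randomization quality on simulated user IDs:
# the split should be close to 50/50 and the chi-square p-value large.
counts = Counter(assign_variant(f"user-{i}") for i in range(100_000))
stat, p_value = chisquare(list(counts.values()))
print(counts, f"chi-square p = {p_value:.3f}")
```

Hashing a salted user ID keeps assignment sticky across sessions without storing any state.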

“JB Analytics’ data-driven approach to search, web design, and testing drove a valuable increase in lead volume and quality. We recommend them to anyone in a technical B2B space.”

II. Ideation

What should we test? What are our testing constraints?

  1. Gather design requirements: brand colors, fonts, and overall visual identity.
  2. Understand brand requirements and voice.
  3. Carefully audit all analytics channels.
  4. Query organization stakeholders about their hunches. What would make users happier? What is stopping them from converting?
  5. Have a business-driven conversation defining KPIs: what will really move the bottom line for your business?
  6. Review several test ideas and rationales.

III. Design & Approval

What exactly will this look like?

We usually design in Figma for rapid prototyping and client approval; its commenting features support iteration throughout the collaborative process.

Some tests run on desktop only, some on mobile only, and some on both.

IV. Implementation & Testing

How do we mock the test up, do appropriate QA and run it?

  1. Create the page either as a static stand-alone build or in VWO, depending on test complexity.
  2. Perform QA across mobile, desktop and tablet for major browsers, iOS and Android.
  3. Set up appropriate data collection, usually making use of custom variables in Google Analytics as well as VWO’s native analytics. This allows for a holistic view of all metrics and not just KPIs.
  4. Allow the test to run until statistical significance is reached, monitoring results carefully (a quick check is sketched after this list).
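
As a monitoring check while the test runs, a two-proportion z-test tells us whether the observed difference has reached significance yet. A rough sketch in Python, with made-up visitor and conversion counts:

```python
from statsmodels.stats.proportion import proportions_ztest

conversions = [152, 188]   # control, variant (made-up counts)
visitors = [3400, 3395]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Difference is significant at the 5% level.")
else:
    print("Keep running until the planned sample size is reached.")
```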

V. Analysis & Interpretation

Did our test win? What have we learned?

  1. Extract the data, usually via the Google Analytics API into a shareable Google Sheet.
  2. Calculate uncertainty intervals, expected conversion rates, significance, and p-values (a worked sketch follows below).
  3. Present results holistically in terms of all relevant GA variables, not just the outcome variables. If the sample is large enough, consider cohort sub-analysis.
  4. Calculate the costs and benefits of implementation.

Graph showing conversion rates of two variants
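
Steps 2 and 4 can be reproduced in a few lines of Python once the data is exported. The sketch below computes Wilson confidence intervals for each variant and a simple payback estimate; every number in it is a placeholder, not a client result.

```python
from statsmodels.stats.proportion import proportion_confint

conversions = {"control": 152, "variant": 175}
visitors = {"control": 3400, "variant": 3395}

# 95% Wilson confidence interval around each variant's conversion rate.
for name in conversions:
    rate = conversions[name] / visitors[name]
    low, high = proportion_confint(conversions[name], visitors[name],
                                   alpha=0.05, method="wilson")
    print(f"{name}: {rate:.2%} (95% CI {low:.2%} to {high:.2%})")

# Simple cost/benefit: value of extra conversions vs. cost of building the winner.
lift = (conversions["variant"] / visitors["variant"]
        - conversions["control"] / visitors["control"])
monthly_visitors = 25_000      # placeholder traffic
value_per_conversion = 40      # placeholder value of one conversion
implementation_cost = 5_000    # placeholder build cost

monthly_gain = lift * monthly_visitors * value_per_conversion
print(f"Expected monthly gain: ${monthly_gain:,.0f}")
print(f"Payback period: {implementation_cost / monthly_gain:.1f} months")
```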

VI. Iteration

What is next? How do we keep improving rapidly?

  1. Implement winning variations, learn from null and losing tests.
  2. Return to the start of the testing process. Repeat!

Amazon has been testing for over 20 years and is still seeing gains.


Happier Users & More Traffic

We’re passionate about delivering value through design, data and development. Let’s talk!