We have an extensive QA process honed over thousands of tests. However, because we’re essentially “front-end growth hacking,” the goal of our QA process is to catch most issues for most devices, not to catch 100% of bugs for every possible user or scenario. This is a simple cost-benefit analysis: for a test that will be live for only a few weeks, we’ve found it isn’t worth the dozens of hours it would take to chase down every possible issue, because the chances of those issues actually occurring during that window are incredibly slim.

Additionally, over the course of our tests we’ve found that minor visual issues have essentially no impact on a test’s outcome: we can still prove or disprove our hypothesis, and any remaining issues can be “fixed in post.” Once a test is a proven winner and is ready to be hard-coded onto the live site, we do recommend that clients run it through their normal QA process, as they would for any other change, to catch outlying issues.

What types of QA do we do?

Smoke Testing

Also known as “Build Verification Testing,” this is a non-exhaustive set of tests that aims to ensure the application’s most important functions work.
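For illustration, here’s what a minimal smoke test could look like, sketched with Playwright in TypeScript. This isn’t our actual suite; the URL and selector are hypothetical placeholders.

```ts
import { test, expect } from '@playwright/test';

// Minimal smoke-test sketch. The URL and selector below are hypothetical
// placeholders; a real suite would point at the page under test.
test('page loads and the primary call-to-action is reachable', async ({ page }) => {
  await page.goto('https://example.com/');                    // hypothetical target page
  await expect(page).toHaveTitle(/\S/);                       // the page rendered a title at all
  await expect(page.locator('a.cta-primary')).toBeVisible();  // a key function is present
});
```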

Functional Testing

The system is tested against the functional requirements/specifications, ensuring that all new functionality is working as expected.
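As a sketch of what this looks like in practice, the Playwright test below exercises a hypothetical signup form end to end. The URL, selectors, and confirmation copy are all illustrative assumptions, not a real spec.

```ts
import { test, expect } from '@playwright/test';

// Functional-test sketch against a hypothetical signup form.
// All selectors and the confirmation text are illustrative assumptions.
test('signup form submits and shows a confirmation', async ({ page }) => {
  await page.goto('https://example.com/signup');
  await page.fill('#email', 'qa@example.com');
  await page.click('button[type="submit"]');
  await expect(page.locator('.confirmation')).toContainText('Thanks');
});
```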

Visual Testing

Ensuring that your app appears to the user as intended. This covers animations, hover states, background and text colors, and the positioning of elements on the page.
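One common way to automate visual checks is screenshot comparison. The sketch below uses Playwright’s built-in toHaveScreenshot assertion; the URL and selector are hypothetical, and the first run records the baseline image.

```ts
import { test, expect } from '@playwright/test';

// Visual-regression sketch: compares a hero section against a stored baseline.
// The URL and selector are hypothetical; the first run creates the baseline file.
test('hero section matches the visual baseline', async ({ page }) => {
  await page.goto('https://example.com/');
  await expect(page.locator('header.hero')).toHaveScreenshot('hero.png', {
    maxDiffPixelRatio: 0.01, // tolerate tiny anti-aliasing differences between runs
  });
});
```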

Cross-Browser Testing

Checking the compatibility of your application across multiple web browsers to ensure it works correctly in each. This entails going through a checklist on a number of different browsers and devices: Google Chrome and Safari on desktop and mobile, Microsoft Edge, and Mozilla Firefox.
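If you wanted to automate that matrix, a Playwright config along these lines runs the same test suite across those browsers and device classes. The project names and device profiles are illustrative, and Safari is approximated by Playwright’s WebKit engine.

```ts
import { defineConfig, devices } from '@playwright/test';

// Sketch of a cross-browser matrix: one project per browser/device combination.
// Device profiles are illustrative; Playwright's WebKit stands in for Safari.
export default defineConfig({
  projects: [
    { name: 'Desktop Chrome',  use: { ...devices['Desktop Chrome'] } },
    { name: 'Desktop Safari',  use: { ...devices['Desktop Safari'] } },
    { name: 'Desktop Firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'Desktop Edge',    use: { ...devices['Desktop Edge'], channel: 'msedge' } },
    { name: 'Mobile Chrome',   use: { ...devices['Pixel 5'] } },
    { name: 'Mobile Safari',   use: { ...devices['iPhone 13'] } },
  ],
});
```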

Example Checklists

Below is a sample of what we’d check on a typical experiment:

  • Check URL targeting in the Experimentation tool

  • Check that Page Targeting and Activation are working properly

  • Check that all variations are paused or unpaused as intended and that traffic allocation is set up properly

  • Check that all Goals in the Experimentation tool match the spec and that the Primary Goal is correct

  • Check that all Relevant Goals are triggered properly (especially new metrics)

  • Check audience(s) in the Experimentation tool

  • Check the Original for functionality and design errors

  • Make sure all design elements line up properly, spacing is correct, fonts are right, etc.

  • Click all relevant links in preview mode to ensure they work properly (see the link-check sketch after this list)

  • Functionality test any forms or user-triggered page elements

  • Functionality test any non-user-triggered page elements

  • Check that Google Analytics, Hotjar, or other integrations are properly configured

  • Verify there are no conflicting tests before launch

  • Test experiment with a Test Audience (as required)
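Parts of this checklist lend themselves to automation. As one example, the sketch below covers the “click all relevant links” item by collecting same-origin links on a hypothetical variation preview page and verifying that each returns a non-error status. The preview URL and origin filter are placeholders.

```ts
import { test, expect } from '@playwright/test';

// Sketch for the "click all relevant links" checklist item. The preview URL
// and origin filter below are hypothetical placeholders.
test('all same-origin links on the variation resolve', async ({ page, request }) => {
  await page.goto('https://example.com/variation-preview');
  const hrefs = await page
    .locator('a[href^="https://example.com"]')
    .evaluateAll((anchors) => anchors.map((a) => (a as HTMLAnchorElement).href));
  for (const href of hrefs) {
    const response = await request.get(href);
    expect(response.status(), `broken link: ${href}`).toBeLessThan(400);
  }
});
```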

For cross-browser testing, we also verify the following on Google Chrome and Safari (desktop and mobile), Microsoft Edge, and Mozilla Firefox:

  • Spell-check any variation changes

  • View all design elements (lined up properly, spacing is correct, font, etc.)

  • Click all relevant links in preview mode to ensure they work properly

  • Functionality test any forms or user-triggered page elements

  • Functionality test any non-user-triggered page elements