MOPs QA Tips & Tricks

Overview

QA, or lack thereof, got you down? If you’re not sure where to start with QA or if you’re questioning your current QA process, you’re in the right place. We have specific strategies and a few tips & tricks to help you develop a scalable QA process for both Campaign Operations and Platform Operations, one that will help you catch minor mistakes before they become major issues.

COPs Testing & QA Strategies

Standardize your campaign creation process. If you don’t have a standardized, documented campaign creation process, that’s where you’ll want to start. This will make it easier for your team to create consistent campaigns and will help guide you when creating your QA process.

Document How and When to QA. Document the whole process so that your team knows when they should QA and how it should be done. Writing out your QA process will provide clarity and ensure that it’s repeatable. You want to keep it simple and quick, but also robust enough to catch any mistakes.

Implement a checklist. Make it easy for your team to go through the list and confirm that they’ve checked each item. You can add it to your PM tool, create a Google form, or even simply put it in a spreadsheet. Whatever it is, document it and make it easy to use.

Need some inspiration to get you started? Check out our How to Implement a Marketing Ops QA Process blog. It provides a bunch of ideas about what to add to your documentation, questions for a QA brief, and what to add to your checklist!

COPs QA Tips & Tricks

Test the checklist flow. There’s nothing worse than randomly clicking around a campaign because the checklist questions are out of order. Okay, maybe there are worse things, but it’s still pretty frustrating. Structuring the questions in an intuitive way will make it efficient for your team to work through the checklist and give them fewer headaches.

Make it a team sport. If you have multiple people on your MOPs team, use the buddy system for QAing campaigns. When you stare at something for too long you’re more likely to overlook a mistake, so have someone with fresh eyes perform the QA.

Work backwards. Look at your email and decide what would count as an error. Then do the same for the campaign or workflow. What you find becomes the checklist for your team to QA.

Send those samples. Previewing an email in your MAP doesn’t always show you the email accurately. Always send yourself a sample of the email you’re QAing so you can see what it looks like in the inbox.

Do what works best for your team. Do they prefer one master checklist covering every campaign type? Or is it better to have a separate checklist for each campaign type? Involve your team when building your QA process and they’ll be more likely to understand and use it.

POPs Testing & QA Strategies

Have a Plan. Your plan is your official guide to whatever it is you’re proving – big or small. Define your testing scenarios, timeline, record format, and parties involved.

Communicate! Whatever process you’re working on usually impacts other teams, so inform necessary parties of the updates and your testing plan. You don’t want to disrupt anyone’s daily operations!

Break it, but responsibly. Try to break what you’ve built. Leave no entry, relational, or dependent ‘stone’ unturned so you can ensure that it’s working as expected and doesn’t have any unintended consequences. You want to make sure you’re being responsible though, so use a Testing Brake! A Testing Brake is a filter configured across your starting/routing points that will allow standard records to continue as usual and isolate test records to flow through what you just created.

Review. Re-configure. Re-Test. Review the journey and note your findings, then re-configure your build. Every noted failure or unexpected outcome should guide how you re-configure your solution. This may mean going back to the drawing board – and that’s ok! Sometimes it’s easier to start over than to fix something that’s broken. Re-test any failed scenarios after re-configuring, and continue testing until all scenarios have passed.

POPs QA Tips & Tricks

Test it on the side. Use a Sandbox (if possible) or create your own “sandbox” in production by cloning the existing campaign or program you’re testing. You wouldn’t want to accidentally change something in the live version, and a cloned version gives you the freedom to make changes and play with it. After go-live, push a couple of test records through the program in production to confirm you get the same outcomes you saw in your “sandbox.”

Go private when testing. Use Incognito windows to ensure your data isn’t cached or cookied, which can skew your testing results, attribute activity to the wrong lead, and generally make your testing more difficult.

Use unique email addresses. Use a testing naming convention to help you easily keep track of which email address you used for which scenario. Our favorite naming convention for testing is: username+[process][yymmdd][ID#]@domain.com.

Dedicate the time for testing. We recommend budgeting at least 1-2x the time spent building. That includes planning, the initial testing, troubleshooting, reconfiguring, and retesting. Testing is one of the most important aspects of building a new framework, so don’t skimp on it!

Try to break what you built. Push the limits. Find out what you don’t know. It’s better to break it before you go live, so give it all you’ve got!

Use a global Smart List for test leads. If you’re working in Marketo, create a global Smart List as a Testing Brake that you can use on your new campaigns. It enables you to use Advanced Logic and gives you an accurate “Used By” list of all locations where the Testing Brake is used to ensure you’ve removed all blockers prior to go-live.

Find areas where records could get stuck. Build in an alert or error list to capture when conditions are not met. This will make it easier for you to find people who didn’t process the way you expected, fix the issue, and reprocess the leads.

Follow your test leads. Follow the test record activity from creation until processing is complete. Return to your test record(s) after 24 hours and at 7 days to see what else occurred.
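
If you’re in Marketo, those 24-hour and 7-day check-ins can be scripted against the REST API’s Get Lead Activities endpoint rather than clicked through by hand. This sketch only builds the request URL; the instance URL, token, lead IDs, and activity type IDs are placeholders you’d swap for your own:

```python
from urllib.parse import urlencode

def activities_url(base_url: str, access_token: str,
                   lead_ids: list[int], activity_type_ids: list[int],
                   next_page_token: str) -> str:
    """Build a request URL for Marketo's Get Lead Activities endpoint,
    filtered to your test leads and the activity types you care about."""
    params = {
        "access_token": access_token,
        "leadIds": ",".join(str(i) for i in lead_ids),
        "activityTypeIds": ",".join(str(i) for i in activity_type_ids),
        "nextPageToken": next_page_token,  # from the Get Paging Token endpoint
    }
    return f"{base_url}/rest/v1/activities.json?{urlencode(params)}"

# Placeholder instance, token, lead IDs, and activity type IDs.
url = activities_url("https://example.mktorest.com", "TOKEN",
                     [123, 456], [1, 13], "PAGETOKEN")
```

You’d fetch that URL (e.g. with `requests`) on each check-in and diff the returned activities against the journey you expected the test record to take.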

Conclusion

We’re all human, so mistakes will happen – and we all have a MOOPs story or two. The goal of QA is to catch those mistakes before launch and to minimize risk. Hopefully these strategies, tips, and tricks will help you build your process and give you more confidence when pressing that launch button.

Get in Touch with Us

At Etumos, we love what we do and we love to share what we know. Call us, email us, or set up a meeting and let's chat!
