RESEARCH ALTERNATIVE



While it is extremely important to have a frame of comparison for post-event findings, the pre/post approach may not always be the ideal tool for demonstrating impact.


Furthermore, if you've run out of time before an event and cannot conduct a pre-wave, all is not lost. Implementing a test vs. control design affords a similar basis of comparison and, in some cases, may be a sounder research design.


But before we discuss the merits of the test/control approach, let's take a look at some of the inherent pitfalls that often arise with a pre/post design.


Timing (I missed my window!): Proper setup of the pre-wave calls for all interviewing to be completed before any external communications that could bias research results. This often means conducting the pre-wave months ahead of the event, which in turn requires pre-planning and a commitment to measurement in the early planning stages.


Timing (I don't have time!): Brands are often so busy preparing for the event itself that they have little time to discuss and approve any pre-wave research.


Timing (Too much lag time!): To control for the influence of other market factors on results, the pre and post waves should be fielded within a relatively short window. Too much lag between the two waves makes it harder to isolate the impact of the event on survey participants.


Population (How do I find these people?): It is not always feasible to identify who the target attendee will be, especially at non-ticketed events. Ticket-purchaser lists may not be available, and when they are, they are rarely representative of the entire attendee base. Conducting pre-interviews at the event itself is usually impractical given event logistics or study objectives (e.g., assessing awareness), and it limits the length of the questionnaire.


The points above do not suggest that pre/post interviewing is flawed; it is an approach we all use. However, there are a number of challenges and roadblocks one typically faces. As an alternative, test/control is another approach worth considering.


The fundamental philosophy behind this approach is comparing a test population exposed to a stimulus (e.g., a sponsorship, event activities, etc.) with a control population not exposed. The analysis focuses on assessing significant shifts in key measures between those experiencing the activity (test) and those not experiencing it (control), much as a pre/post analysis compares pre results to post results.


The chart above graphically illustrates this type of analysis, looking specifically at significant differences between those experiencing and those not experiencing the event on key measures such as brand awareness, overall brand opinion, and future brand consideration.
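For readers who want to run this kind of comparison themselves, here is a minimal sketch in Python of the test behind the differences the chart summarizes: a two-proportion z-test on a single binary measure (aided brand awareness). The counts and sample sizes are illustrative assumptions, not data from any actual study.

```python
# Hypothetical sketch: comparing a "test" (exposed) vs. "control" (unexposed)
# sample on one yes/no key measure, here aided brand awareness.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Number of respondents answering "aware" and total interviews in each cell
aware = np.array([312, 248])   # [test, control] -- illustrative counts
n     = np.array([500, 500])   # [test, control] -- illustrative sample sizes

# Two-sided z-test for a difference in proportions between the two groups
z_stat, p_value = proportions_ztest(count=aware, nobs=n)

print(f"Test awareness:    {aware[0] / n[0]:.1%}")
print(f"Control awareness: {aware[1] / n[1]:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is significant at the 95% confidence level.")
```

The same test would simply be repeated for each key measure (overall brand opinion, future consideration, and so on) to build out the full test-vs.-control comparison.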


To eliminate any demographic bias, the demographic profile of the control sample is set up to mirror that of the attendees. Additionally, there is no bias from lag time, because the control interviews are conducted within the same timeframe as the post-event attendee research.
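The post describes setting the control sample up to mirror the attendee profile at the fielding stage; when the fielded control does not line up exactly, one common remedy is to weight the control interviews to the attendee demographics. The sketch below shows simple cell weighting on a single age-group variable, with the column names, target shares, and toy data all invented for illustration.

```python
# Hypothetical sketch: weighting a control sample so its age-group mix
# mirrors the attendee (test) profile before comparing key measures.
import pandas as pd

# Attendee (test) demographic profile the control sample should match
target_share = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}

# Control-sample respondents with an age group and a 1/0 awareness flag
control = pd.DataFrame({
    "age_group": ["18-34", "35-54", "55+", "18-34", "35-54", "55+"],
    "aware":     [1, 0, 0, 1, 1, 0],
})

# Current share of each age group in the control sample
current_share = control["age_group"].value_counts(normalize=True)

# Cell weight = target share / current share, applied to each respondent
control["weight"] = control["age_group"].map(
    lambda g: target_share[g] / current_share[g]
)

# Weighted awareness now reflects the attendee demographic mix
weighted_awareness = (control["aware"] * control["weight"]).sum() / control["weight"].sum()
print(f"Weighted control awareness: {weighted_awareness:.1%}")
```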


The next time you're asked about conducting research, rather than defaulting to the standard pre/post design, consider suggesting this alternative approach. It may be the most appropriate way to demonstrate the impact of your event.

Jeff Eccleston is director of research at Wilton, CT-based Sponsorship Research International (SRi). Reach him at eccleston@teamsri.com.
