When we developed our first physician Internet panel in November 2000, what astonished us most was the speed with which physicians completed the study. We programmed our first study, e-mailed invitations to our recently recruited panelists and waited in anxious anticipation to see what would happen. Every hour we would check the counter, and every hour we would see physicians completing the study at a faster rate than we had ever seen before. We realized that we had a technique that would allow us to complete studies in half the time of a traditional phone survey. But, as Ralph Waldo Emerson once said, for everything you gain, you lose something else.
One of the major downsides of a quick response was that a mistake in the survey produced a great deal of garbage data. When we do physician studies, not only do we have to pay for the sample, we also have to pay the physicians for their time. While honoraria and recruitment costs were much lower than in the past, for a 45-minute study you could be paying as much as $300 for surgeons, oncologists and other hard-to-reach groups. When you recruited the sample by telephone and obtained only five to 10 completed interviews in the first few days, the cost of a mistake was high but bearable. However, when you recruited 100-200 physicians on the Internet in the first day or two, the cost of a mistake could affect someone's job security.
Since none of us wanted to lose our jobs, we developed techniques for checking Internet surveys that would reduce the possibility of a mistake. We needed methods different from traditional checking methods; the Internet required it. An Internet survey is a self-administered instrument. In a telephone interview, if the program has an error in it, the telephone interviewer may notice something. For example, if a skip fires and no brands appear on the screen to choose from, the interviewer can quickly report that something is wrong. No one plays this role for an Internet survey.
Therefore, we developed a method that we called BOWLSR to check an online questionnaire. It was a simple acronym that allowed us to review each question on the following six parameters:
- B = blank screen
- O = order
- W = words
- L = logic
- S = skip
- R = range
The purpose of this article is to show how to use this technique to reduce the number of discrepancies from the written questionnaire to the final programmed survey instrument.
This check is one of the easiest to perform and probably the easiest to forget. When a programmer sets up a question for a survey, one of the parameters they must set is the requirement that a respondent provide a response. I have gone over numerous questionnaires where the programmer has done this for every question except one or two. This is especially true for open-ended questions. Many times the respondent will not enter a response to the open-ended question and will simply click to the next question. When you look at the data, you wonder: did they really mean to leave it blank or did they just click [Next]? The check itself is easy. Every time you arrive at a new screen that requires a response, the first thing you do is click [Next]. If the program does not display an error, you know a correction is likely needed. It is important to do this before doing anything else on the screen, because for certain question types you can change an answer but not clear it entirely. If you were to select a response before testing the blank parameter, the only way to remove your selection would be to select another response, so you could not test a blank screen without starting over.
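The blank check above can be sketched in a few lines of code. This is a minimal illustration, not any survey platform's actual API; the function and field names (`required`, `blank_screen_audit`) are hypothetical.

```python
# Illustrative model of the blank-screen check: every question that
# requires a response should reject an empty submission.
# All names here are hypothetical, not from a real survey platform.

def validate_response(question, answer):
    """Return an error message if a required question is left blank."""
    if question.get("required", False) and answer in (None, "", [], {}):
        return "A response is required before continuing."
    return None  # no error; the respondent may proceed

def blank_screen_audit(questions):
    """Flag questions where clicking [Next] with no answer would succeed."""
    return [q["id"] for q in questions
            if validate_response(q, None) is None]

questions = [
    {"id": "Q1", "required": True},
    {"id": "Q2_open_end", "required": False},  # programmer forgot the flag
    {"id": "Q3", "required": True},
]
print(blank_screen_audit(questions))  # -> ['Q2_open_end']
```

Clicking [Next] on a blank screen plays the role of passing `None` here: any question that comes back without an error is one the programmer needs to revisit.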
The order parameter concerns the order in which rows or columns appear on the screen. The survey developer may want to randomize the presentation of rows or columns. In reviewing the questionnaire, you need to look closely, because the instruction to randomize may be buried in the question's text. The good news is that when the questionnaire designer reviews the programmed survey, they will typically pick up this error right away. However, there is something more subtle that is often missed. Many times the order of presentation becomes important during analysis. Therefore, you may need a field in the data that records the order in which items were presented. This is not difficult for the programmer and can be a lifesaver when the analyst starts to question why certain answers are not consistent.
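The idea of storing the presentation order alongside the answers can be sketched as follows. This is an illustrative sketch, assuming a simple list of row labels; the function name `present_rows` is made up for this example.

```python
import random

def present_rows(rows, seed=None):
    """Randomize row order for display and return the order for the data file.

    The returned `order` list records which original row appeared in each
    screen position, so the analyst can reconstruct what the respondent saw.
    """
    rng = random.Random(seed)
    order = list(range(len(rows)))
    rng.shuffle(order)
    displayed = [rows[i] for i in order]
    return displayed, order

rows = ["Brand A", "Brand B", "Brand C", "Brand D"]
displayed, order = present_rows(rows, seed=7)
print(displayed)  # the randomized screen
print(order)      # the field saved into the data record
```

Saving `order` costs the programmer one extra field per randomized question, which is exactly the lifesaver described above when position effects show up in the analysis.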
This parameter seems obvious. Basically, you need to proof the questionnaire to make sure that all the words in the online document are exactly as shown in the written questionnaire. But if that is all you do, you have missed the point. While proofing is important, it is even more important to make sure the words make sense. I would say that at least one out of every three questionnaires contains wording that does not follow the true intent of the questionnaire. For example:
- Questions reference the wrong information from a previous question. This is so easy for a questionnaire designer to do, since they may have revised the questionnaire numbering multiple times before you see a final version.
- The scale will say one thing and the questionnaire description will say something else.
- Questions will ask the respondent to compare different brands even though they said in an earlier question that they were aware of only one of them.
Reading the words is not nearly as important as making sure they make sense, with one final caveat: never change even the smallest thing without telling the survey developer. When a designer sees that you made a change without telling them, they will immediately want to know whether you changed anything else, and they will lose faith in your programming, even if you were correct.
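The third example above, where a later question pipes in brands the respondent never claimed awareness of, is the kind of sense-check that can be automated. Here is a minimal sketch under the assumption that both the awareness answers and the piped brand list are available as plain lists; `piping_audit` is a hypothetical name.

```python
def piping_audit(aware_brands, piped_brands):
    """Flag brands piped into a later question that the respondent
    never said they were aware of in the earlier awareness question."""
    aware = set(aware_brands)
    return [b for b in piped_brands if b not in aware]

# Respondent was only aware of one brand, yet a comparison question
# pipes in two -- the kind of mismatch the words check should catch.
print(piping_audit(["Brand A"], ["Brand A", "Brand B"]))  # -> ['Brand B']
```

A check like this belongs in the dummy-data stage: any non-empty result means the piping logic contradicts an earlier answer.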
The survey developer will provide the logic for many questions in a questionnaire. They will indicate, for example, that:
- In a best/worst scenario, only one option is allowed to be designated as best.
- Two choices cannot have the same rank.
- Many conjoint or max-diff questions will have a particular design logic.
The good thing about logic instructions is that, because they are difficult, the survey developer usually makes an extra effort to explain them in the questionnaire. The bad thing is that, because they are difficult, they are also hard to check. The best approach is to write up different scenarios against each logic instruction and test them. Typically, this will show you whether things are working correctly.
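Writing up scenarios and testing them can look like the following sketch, which encodes the two rules listed above (one "best" per item, no tied ranks) and runs hand-written pass and fail cases through each. The rule functions are illustrative, not a real checking tool.

```python
def check_best_worst(answers):
    """A best/worst item should have exactly one 'best' and one 'worst',
    and they cannot be the same option. `answers` maps option -> tag."""
    best = [opt for opt, tag in answers.items() if tag == "best"]
    worst = [opt for opt, tag in answers.items() if tag == "worst"]
    return len(best) == 1 and len(worst) == 1 and best != worst

def check_unique_ranks(ranks):
    """No two choices may share the same rank."""
    values = list(ranks.values())
    return len(values) == len(set(values))

# Scenario table: each hand-written case is run through the rule.
assert check_best_worst({"A": "best", "B": "worst", "C": None})
assert not check_best_worst({"A": "best", "B": "best", "C": "worst"})
assert check_unique_ranks({"A": 1, "B": 2, "C": 3})
assert not check_unique_ranks({"A": 1, "B": 1, "C": 2})
print("all logic scenarios passed")
```

The point is not the particular rules but the habit: for every logic instruction, write at least one scenario that should pass and one that should fail, and confirm the program agrees with both.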
Everyone knows that questionnaires have instructions to skip questions, so you usually look for them and make sure each skip works properly. Some skip patterns are very difficult, but if you take your time and follow them all the way through, you will get them right. The one tip I can give here is to identify the questions with skips before you start checking the questionnaire. Usually this will help you understand the questionnaire better. In addition, there is nothing more frustrating than getting to the end of the questionnaire and realizing that a question at the end required a particular choice at the beginning, forcing you to start over to test that particular skip.
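One way to map out the skips before clicking through the survey is to write the pattern down as a table and trace each branch on paper or in code. The question IDs and routing below are hypothetical, just to show the shape of the exercise.

```python
# Hypothetical skip table: (question, answer) -> next question.
# Walking every branch up front catches a broken skip before fieldwork
# and shows which early answers the late questions depend on.
SKIPS = {
    ("S1", "No"): "TERMINATE",   # non-qualifiers screen out
    ("S1", "Yes"): "Q1",
    ("Q1", "Brand X"): "Q2",     # brand users get the detail battery
    ("Q1", "Other"): "Q5",       # everyone else skips ahead
}

def next_question(question, answer):
    return SKIPS[(question, answer)]

# Trace one full path per scenario instead of re-clicking by hand.
path = ["S1"]
for q, a in [("S1", "Yes"), ("Q1", "Other")]:
    path.append(next_question(q, a))
print(path)  # -> ['S1', 'Q1', 'Q5']
```

Listing the skips this way also makes it obvious, before you start, which answer at the beginning you must choose to reach a skip at the end.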
A survey developer may specify that certain responses cannot be greater than or less than a particular number. Or they may say that a response must not exceed the response to an earlier question. For example, if a respondent says that overall they treat 500 patients a month, they cannot say in a follow-up question that they treat 550 asthma patients a month.
Checking a range is easy. You test that a value one above the high end and one below the low end each produce an error. Then you enter the high and low values themselves to confirm you can advance to the next screen. The only real issue is remembering to check every range. If the analyst gets the data and finds inconsistencies because range controls were not built into the questionnaire, cleaning the data can become very tedious.
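The boundary test described above, plus the patients-per-month cross-question check from the example, can be sketched like this. The function names are illustrative, not from any tool.

```python
def in_range(value, low, high):
    """Would the survey accept this value for a question bounded [low, high]?"""
    return low <= value <= high

def boundary_test(low, high):
    """The classic range check: one below and one above must each fail,
    while the endpoints themselves must pass."""
    return (not in_range(low - 1, low, high)
            and not in_range(high + 1, low, high)
            and in_range(low, low, high)
            and in_range(high, low, high))

def cross_question_ok(total_patients, asthma_patients):
    """A subgroup count can never exceed the overall count."""
    return asthma_patients <= total_patients

assert boundary_test(1, 500)
assert cross_question_ok(500, 450)
assert not cross_question_ok(500, 550)  # 550 asthma of 500 total: error
print("range checks passed")
```

Running `boundary_test` once per bounded question is a cheap way to make sure no range was forgotten, which is exactly the failure mode that creates tedious data cleaning later.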
This article provides a method for checking an online questionnaire so that you can maintain quality and eliminate expensive mistakes. There are many other things you can do to enhance quality: perform pretests; hold a formal questionnaire review before programming starts; ask a programmer to examine the logic within the program using the program's checking tools; run dummy data; and review live data before going out to many respondents.
While the above are all necessary and important, the BOWLSR technique will allow you to gain a much deeper insight into the questionnaire. It provides such a good foundation that in many cases you will feel that you understand the questionnaire better than the survey developer. And if the survey developer is also your primary client contact, there is nothing that will cement your relationship with your client more than for them to realize how concerned you are that the research is being done properly.