
Appendix C


Methodology Considerations
Methods for Surveys and Focus Groups


Use of Surveys for Program Evaluation


The aim of survey research is to get an accurate picture of what a specific population (e.g., older adults) feels, believes, or does about a specific topic (e.g., fall prevention). The target population may be very large, making it impractical to contact all members. Moreover, even if you could survey all members of a specific population, it is likely that only a small percentage would respond. This is a problem because responders may differ from non-responders, so inferences drawn from your results about the population as a whole may be incorrect.

Fortunately, sampling allows for surveying a few members of a population and if this is done correctly, those few will accurately reflect the population from which they were selected. However, in order to get a valid response from a sample, several important criteria must be met.

First, you must have a complete list of all of the members of the target population. This is important because in an unbiased sample all members must have a chance to be represented. In addition to the names of all population members, you need a means of contacting them, such as an e-mail address, mailing address, or phone number.

Second, you must have a sample size that is large enough to be representative of your population.

Third, you must have a random method for drawing your sample from the target population. In other words, each member of the population must have an unbiased chance of being selected.

And finally, you must have a response rate that is large enough so that those who respond will be representative of all of the members of your sample.

List of target population: Lists of the names of members of target populations can be hard to obtain. If you want to survey members of an organization, relatively complete and up-to-date lists may be obtained from the organization itself (e.g., a state registry of physicians). But when the target population is more diffuse (e.g., adult children of older adults), compiling a comprehensive list may be difficult. Again, depending on your target population, we recommend seeking advice from experts.

Sample size: Published tables show the number of people required for an unbiased sample, given the desired margin of error (e.g., ± 3%) and an estimate of the distribution of the item of interest (e.g., having taken action to prevent falls in the last year) in the target population. Fortunately, if done correctly, the power of sampling is such that you do not need very large samples to obtain representativeness; the gain in representativeness levels off quickly, so a relatively small sample can be representative of a very large target population.
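The figures in such tables come from a standard formula. The sketch below uses the common normal-approximation sample-size formula with a finite-population correction; it is an illustration, not a substitute for the published tables or expert advice, and the function and parameter names are our own.

```python
import math

def sample_size(population, margin=0.03, proportion=0.5, z=1.96):
    """Approximate sample size for a given margin of error.

    n0 = z^2 * p * (1 - p) / e^2 is the infinite-population size;
    the finite-population correction then shrinks it for smaller
    populations.
    """
    n0 = (z ** 2) * proportion * (1 - proportion) / margin ** 2
    return round(n0 / (1 + (n0 - 1) / population))

# For a population of 10,000, a ±3% margin, and an estimated 50% split:
print(sample_size(10_000))     # 964, matching common published tables
print(sample_size(1_000_000))  # 1066 -- a 100x larger population needs barely more
```

The second call illustrates the leveling-off described above: multiplying the population by 100 barely changes the required sample.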

Random sampling: Random sampling can be accomplished by various methods, ranging from putting names or other identifiers in a hat and picking out the number required in the sample to assigning each member a number and using a computer generated list of numbers equal to the sample size required. But, whatever method is used, it is essential that each person in the target population have an equal chance of being selected for the sample.
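The number-assignment approach described above can be sketched in a few lines. This assumes members have already been numbered 1 through N; the figures follow the physician example later in this appendix.

```python
import random

random.seed(42)  # fix a seed only if you need a reproducible draw

# Assume each member of the 10,000-person target population
# has been assigned a number from 1 to 10,000.
population_ids = range(1, 10_001)

# random.sample draws without replacement, so every member has an
# equal chance of selection and no one can be chosen twice.
sample = random.sample(population_ids, k=964)

print(len(sample), len(set(sample)))  # 964 964 -- no duplicates
```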

Response rate: Response rate is the proportion of members in your sample who respond to your survey. Perhaps the most challenging aspect of surveying populations is obtaining a response rate that is sufficient to ensure that respondents are indeed representative of the sample and therefore of the target population. Your confidence that your data accurately reflects your sample is a direct function of your response rate. If, for example, your response rate is 10%, that means that the information on 90% of your sample is undocumented. It is possible that the 10% respondents are different from the 90% non-respondents, especially if the propensity to respond is associated with the salience of the survey topic to the sample members. A 50% response rate is better, and a 70-80% response rate allows for a high degree of confidence that your data is truly representative of your sample and the population to which you intended to draw an inference. Adequate response rates are usually dependent upon repeated follow-up and/or incentives.

Putting it all together: Let’s assume that you want to survey primary care providers in a state about the extent to which they refer their at-risk patients to community-based falls prevention programs. You might obtain a list of primary care physicians from a state medical society or a state registry. Assume that there are 10,000 members of this population in the community, that you aim for a ± 3% margin of error, and that you estimate that about 50% of your target population actually refers patients to falls prevention programs. According to sampling tables, you would need a sample size of 964. (This number will be larger if you want to conduct subgroup analyses, such as comparing male to female physicians.) You would then assign each of the 10,000 members a number, say 1-10,000, and use a random number generator to select 964 physicians to be surveyed. If you assume a 70% response rate, you would need an initial sample of 1,377 physicians to wind up with 964 responses. It is likely that on first contact you will receive only 20-30% of this number, but through successive follow-ups you can achieve an increasingly better response rate. Incentives, even small ones, can help, particularly if they are given unconditionally with the initial request. They acknowledge that you are asking for a person’s time and effort in completing the survey and that you appreciate their help. Incentives can be amounts of cash, gift certificates, or the opportunity to participate in a drawing for a larger reward.
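The inflation step in this walkthrough is simple arithmetic; here is a quick check of the figures above, rounding to the nearest whole person as the example does.

```python
target_responses = 964        # sample size needed (±3% margin, p = 0.5)
expected_response_rate = 0.70

# Inflate the initial contact list so that, after non-response,
# roughly 964 completed surveys come back: 964 / 0.70 ≈ 1,377.
initial_sample = round(target_responses / expected_response_rate)
print(initial_sample)  # 1377
```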

Questionnaire Design:

Equally important to your sampling methodology is the design of your questionnaire. It is essential that your questions are unambiguous so that all respondents share an understanding of what you are asking. Again, this can be harder to achieve than is often assumed. So the Committee recommends keeping the questions intact and obtaining the help of experts in planning your evaluation. There are several things to keep in mind.

First, ask only one question per item. For example, a question that reads, “What percent of your older adult patients do you assess for falls risk and refer to fall prevention programs?” is actually asking two questions, and the answers to each might differ. This could make it impossible for the respondent to provide a single answer.

Second, make sure that your response categories provide an adequate range of possibilities. If, for example, you asked, “What percent of your older adult patients do you refer to community-based falls prevention programs?” and your response categories were “about 25%”, “about 50%”, “about 75%”, or “about 100%”, you have not allowed for responses below 25%. It is possible that a physician never refers patients to community-based falls prevention programs, and that information would be lost if the response scale is truncated.

Third, make sure that your questions are written at a reading level that is accessible to all of your target population. Questions that use jargon or scientific terms may not be understood and lack of understanding could be another potential source of bias.

Fourth, keep questionnaires as short as possible.

Always pre-test questions with members of your target population. No matter how simple and straightforward your questions may seem at first, they are often subject to unforeseen ambiguities and interpretations. Suggested survey questions for each stakeholder group and level of progression are included in Appendix B.

The Survey Evaluation Matrix (or “Matrix”) reflects the Evaluation Committee’s recommendations pertaining to a) stakeholder groups, and b) topics to target through survey evaluation efforts.

Survey Delivery

There are multiple methods for delivering surveys, including e-mail, regular mail, and telephone. Each of these approaches has advantages and drawbacks, and the selection of methods is often dependent upon what contact information you can obtain. Again, we recommend that you consult an expert to assess the possibilities, strengths, and weaknesses of these various delivery strategies.


Use of Focus Groups for Program Evaluation

In addition to quantitative data, state coalitions can consider collecting qualitative data as part of their evaluation. Qualitative data are information in non-numeric form. They usually appear in textual or narrative format. (CDC 2009) This kind of information may offer insights into the perceptions or experiences of your key stakeholders (e.g., older adults, primary care physicians). Qualitative data may also help describe more fully information obtained from questionnaires or surveys. Because individual interviews are time-intensive and costly, focus groups are a good option for coalitions that want to collect qualitative data. Focus groups provide in-depth data, similar to that of interviews, but allow you to hear from more people in less time. In addition, the communication among group members as they discuss the topic can enhance the information you gain.

For program development and evaluation, focus groups can be used to:
• Identify needs for programs or information
• Gain input on design, topics, formats (how to reach particular groups)
• Pilot test curriculum, materials, handouts, or messages
• Get feedback on programs, policies, or outcomes
• Explore participant satisfaction with coalition activities

Focus groups also have some disadvantages. The group may try to answer questions the way they think the leaders want them to answer. It also can take more time to evaluate the data from qualitative interviews than from survey data. In addition, since a small number of people are interviewed, the information gained is not representative of other groups or populations.

As with quantitative or survey data, expertise is needed. To obtain credible data, accepted procedures for sampling, collecting, and analyzing qualitative data must be followed (see resource list at end). The information below describes key considerations when planning and conducting focus groups. We encourage state coalitions to work with people who know how to use these methods so that the final data provides the information you are looking for and helps you answer your questions.

Developing an Interview Guide

The interview guide is a set of questions based on the purpose of the interviews. The guide is used to organize the discussion and make sure all of the desired information is gathered.  A first question is designed to break the ice and get people talking to each other. A general first question for a focus group might be:

  • Introduce yourself and tell us how long you have lived in this community.

As the group gets comfortable, more specific questions can be asked. The exact questions used will depend on the goals of the coalition, for example, identifying the needs in a particular community or finding out how to improve activities for a Fall Prevention Awareness Day. The following are some examples of questions that might be used to explore falls with a focus group, but your coalition’s interview guide should be developed along with your experts.

  • Where do you get information about preventing falls?
  • Have you heard anything about preventing falls in your community, on TV, or on the radio? Where did you hear it, and was it helpful?
  • What did you learn from the program you attended?
  • What things have helped you or gotten in the way of making changes to help prevent a fall?

Samples for Focus Groups

Depending on your question and purpose, you will want to consider the make-up of your focus groups. You could recruit from a particular geographic area, or from a senior organization or group.  You may want the people who participate to be similar, so that their experiences or background are the same. Sometimes it is best to plan a mix of specific characteristics so that participants share different experiences and you can get their reactions to each other. 

Some key characteristics for fall coalitions to consider in terms of focus group make-up include: gender, age (young-old vs. old-old), ethnicity, fall or injury exposure (fallers vs. non-fallers), functional level (vigorous vs. frail), living situation (home vs. apartment/condo), or experiences (people who participate in your activities vs. people who did not). You may identify other important characteristics based on your local conditions and the questions you want answered.

Most focus groups have 8-15 participants. If older, frail adults are participants, groups should have no more than 12 people. You want everyone to be able to hear and see each other, and sensory changes with aging can make it harder with a larger group.

Number of Group Sessions

Carefully consider the number of focus groups that need to be conducted. Some experts recommend 3-4 groups be conducted on a topic. With qualitative research, a rule of thumb is to reach saturation of information.  This means that you are not gaining any new information, but getting the same kinds of comments and experiences described over and over.
If you want to explore issues in one small town, one or two groups may be sufficient.  If you want to understand issues across your state you may need different geographic areas or groups represented. Consider the need for both rural and urban members or other stakeholders, and costs.

Setting Up the Focus Group

Your focus groups should be held at locations that are easily accessible to older participants who may have wheelchairs or walking aids, with nearby parking.  Use a quiet room and arrange chairs in a circle or around a table so participants can see the leader and each other. Offering incentives such as snacks or a meal can help increase attendance at sessions. Payments or gift certificates to a grocery store may help if recruitment is difficult.  A typical focus group lasts from 60-90 minutes.  A stretching and bathroom break should be planned.

Moderator and Conduct of the Focus Group

The leader or moderator should ideally have experience leading groups of older adults. To avoid bias, it is important that the moderator be impartial, so someone not directly involved with the coalition work should be selected as moderator. The moderator role is to ask the key questions, make sure everyone gets to speak and no one speaks too much, and probe for more details if people give brief answers. The moderator needs to help the group feel comfortable and encourage participants to express different points of view.

To record the discussion, you need at least one person to take notes. The moderator or recorder may want to use a flip chart to write down key points. It is ideal to audio- or video-record the group discussions. This allows those analyzing the data to listen to the complete focus group discussion later. Let participants know in advance that the session will be recorded and obtain verbal or written permission before recording a focus group.  Someone from the fall coalition may want to observe the focus group discussion from the back of the room (and then can have the moderator ask the group for clarification if needed during the discussion).

At the beginning of the focus group, the moderator will discuss the purpose of the focus group, have everyone introduce themselves, and provide some rules for participants (what names to use, one person speaks at a time). Participants should be asked to keep the information discussed within the group. The moderator will use a discussion guide to organize the session and make sure all of the coalition questions are asked.

Analysis of Focus Group Data

Focus group data are analyzed using qualitative techniques. The recordings, transcripts, and any notes are reviewed to identify common ideas or themes related to the questions asked. For example, people may identify common sources of information about fall prevention (or a lack of information). A report of all of the discussion is developed. Reports should be written without identifying specific participants. If information is collected for research purposes or to publish beyond the fall coalition, review by an academic Institutional Review Board may be required. Coalitions are encouraged to consult with people experienced in conducting focus groups to assist with the design and evaluation of the resulting data.
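Theme identification itself is a human judgment call, often supported by qualitative-analysis software, but the tallying step that follows can be illustrated with a toy sketch. The theme labels and coded excerpts below are invented for illustration only.

```python
from collections import Counter

# Hypothetical theme codes assigned to transcript excerpts by reviewers.
coded_excerpts = [
    "information_sources", "balance_problems", "home_hazards",
    "information_sources", "fear_of_falling", "home_hazards",
    "information_sources",
]

# Tally how often each theme came up across the discussion.
theme_counts = Counter(coded_excerpts)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
# information_sources: 3
# home_hazards: 2
# balance_problems: 1
# fear_of_falling: 1
```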

Sample Findings

Focus groups were conducted with 6 groups of caregivers of older adults with Alzheimer’s disease to discuss home safety issues (Lach, 2007).  Falls were one of the most frequently reported concerns.  Those caring for people with more advanced disease had the highest concerns. These caregivers noticed many problems with balance and walking. One participant described a serious incident: “We had come back from shopping and she [my wife] had to use the bathroom… she opened the wrong door and fell down the basement stairs… so I put a latch on that door (Lach, 2007 p. 1001).”  These focus groups helped identify a group who needed and wanted more information about fall prevention and home safety – Alzheimer’s caregivers.  The participants identified barriers and facilitators to making changes to improve home safety. The findings from this focus group study were used to develop educational programs for caregivers.

Are there other recommended Evaluation Measures? The recommendations offered here are not intended to be all-inclusive of the evaluation process a coalition may adopt. Coalitions may be required to provide process or outcome data related to specific investments, programs, or interventions, and must answer to a variety of other stakeholders. Depending on the fall prevention program and its goals, additional participant evaluation measures applicable to older adults could include:

  • reduction in fall risk factors
  • fall frequency
  • injurious falls
  • balance and physical performance measures
  • measures of psycho-social status associated with fall risk (e.g., falls self-efficacy, fear of falling)
  • frequency of involvement in community-based fall prevention programs

Source: Jonathan Howland, PhD, MPH, Boston University School of Public Health in collaboration with Jane Mahoney, MD, University of Wisconsin, Madison and the State Territorial Injury Prevention Directors’ Association, working meeting April 2006, Washington, DC 

A new strategy is emerging to build partner/coalition member accountability into coalition activities to facilitate better outcomes. Greater awareness and adoption of evidence-based strategies could be facilitated in communities if each member of the coalition was challenged to be accountable for advancing fall prevention within their parent organization and professional areas of influence. State coalitions could measure or track the progress of member organizations in embracing evidence-based strategies and their ability to engage colleagues and peers in this work.

Useful Data Sources

State fall coalitions use a variety of state and local evaluation resources to measure progress. For example, the Fall Prevention Center of Excellence (FPCE) has partnered with an injury prevention epidemiologist from the Los Angeles County Department of Public Health to obtain local data, including fall “hotspots” – zip codes with the highest fall hospitalization and fall death rates among adults age 65+ – compiled from the Death Statistical Master File (State Department of Health Services, Center for Health Statistics) and Hospital Discharge Data from the Office of Statewide Health Planning and Development (OSHPD). A more recent project funded by The Archstone Foundation supports work between FPCE and local fire departments that will enhance their data collection on falls. A list of common, accessible data sets is included as Appendix D.

