
Top 5 Tips for the Best ICMA Conference Experience

- Damema Details -

For NRC, it’s really not conference season until the Annual International City/County Management Association (ICMA) Conference. We’ve been going to it, as an organization, for well over a decade. Having gone to this big event myself for so many years, I’ve picked up some tips I’d like to share so that you can make the most out of your conference experience.

These are the top five can’t-miss things to do while you are at the ICMA Conference.

Damema Mann is a survey research expert who has managed hundreds of projects at National Research Center, Inc. (NRC) for well over ten years. She loves working with local governments and helping them improve their communities through data.

1.  Study up on the local area.

See what the biggest attractions are. You might see something fun, like the bronze statue of the Fonz in Milwaukee. Take a look at the weather and make sure you are dressed for it. And I can’t say this enough: bring comfortable shoes!

2.  Use the conference app.

This app has become a truly comprehensive tool. You’ll be able to see who your fellow attendees are, who the speakers are, where to find exhibitors, and when and where all the sessions are. You can even create your own schedule. Make sure to pack chargers if you plan to use your phone or tablet at the conference. You can also engage digitally on Twitter using the conference hashtag.

3.  Go to the sessions.

You will get a wealth of knowledge from leading experts in the field.  You’ll hear from other managers and local government employees who are facing similar challenges and have found unique solutions.  You never know where your next great idea will come from, so make sure to attend as many sessions as you can.  It’s also great to participate as a panelist in a session.  NRC holds multiple sessions every year, so let us know if you would like to speak with us next year.

4.  Network at the social events.

There are so many different kinds of events at the ICMA conference.  Pick the ones that are right for you. There is a 5K, yoga, golf, tours of the area, and of course the evening receptions and gatherings.  These events are great opportunities to see former colleagues and also to meet new folks.  You are all in the same business together of helping local governments move forward.  So you have a common interest, and I see friendships forged and reinforced at this conference every year.

5.  Visit the exhibit hall.

You can come see NRC in the ICMA Pavilion.  The other exhibitors are great too.  We’re all giving away free swag at our booths.  You can get gifts for your co-workers, friends and family.  And there’s free food in there every day.  So stop by and say hello.

I look forward to seeing you at the next ICMA Conference!

 



Time to Rethink Performance Measurement

- By Thomas I. Miller -

Despite the contemporary erosion of facts, it’s impossible to run large organizations – private or public – without credible observations about what’s happening and, separately, what’s working. Performance measurement helps with both and can be as deliberate as Baldrige Key Performance Indicators or as impromptu as the “How’m I doing?” made famous by former New York City Mayor Ed Koch’s ad hoc surveys of random New Yorkers. Metrics of success, like compass readings, keep the ships of state on course and, because the enterprise is public, make the captain and crew accountable.

Over the years, thought leaders like Ammons, Hatry and Holzer have made the case and offered conceptual frameworks for measuring performance in the public sector, especially with an eye to comparing results among jurisdictions. Across the U.S. and Canada there are scores of jurisdictions that measure and share their performance data. Regional performance measurement consortiums (Florida, Tennessee, North Carolina, Arizona, Michigan, Ontario) remain active and ICMA continues “to advocate for the leading practice of creating benchmarking consortia.” All performance measuring consortiums are in roughly the same business – to allow “…municipalities to compare themselves with other participating units and with their own internal operations over time.” Other jurisdictions track their own performance and publish results without the benefit of knowing, or letting others know, how they compare.

For all of these places, measuring performance in the public eye is gutsier than it is complicated.

So local governments actively involved in public performance measuring should be lauded for participating in a show and tell that doesn’t always reveal any one place to be best in class or prove improvement over time. Despite the value of measuring performance, especially when done collaboratively, the number of jurisdictions actively measuring and publicly reporting performance is a small fraction of the 5,500 U.S. cities or counties with more than 10,000 inhabitants – those with enough revenue (probably between $8 million and $10 million) and staff to handle the effort. Across the five consortiums listed above, there are only about 120 participating jurisdictions.

So why don’t more jurisdictions participate in collaborative benchmarking?

The risk of looking bad is no small deterrent, but neither are the stringent standards imposed to equate each indicator across jurisdictions. Although measuring performance is neither brain nor rocket science, it does take meaningful staff time to define and hew to extensive collection criteria so that indicators are similar enough to be compared to other places or in the same place across years. For example, police response time sounds like a simple metric, but should the clock start when a call comes in, when the police receive the notice from dispatch, when the patrol car begins to drive to the location, or when a non-emergency call is logged?

When a large number of indicators is identified for tracking, the staff time to collect them, following rigorous collection protocols, explodes. For example, in the Tennessee Municipal Benchmark Project, there are 22 measures captured for code enforcement alone by each of the 16 members, as reported in the 426-page annual report for 2015. And the report covers 10 other categories of municipal service in addition to building code enforcement.

We need to lower the barrier to entry and expand the value of participation.

The “measure everything” approach (with thousands of indicators) has been found to be intractable, and the detailed work required to equate measures remains a tough hurdle. If we choose a small set of indicators that offers a stronger dose of culture (outcome measures of community quality) than accounting (process measures about service efficiencies and costs), we will reduce workload and, as a bonus, more likely attract the interest of local government purse holders – elected officials.

Imagine, across hundreds of places, a few key indicators that report on quality of community life, public trust and governance, and a few that measure city operations. Then visualize relaxing the requirements for near-microscopic equivalence of indicators so that, for example, any measure of response time could be included as long as the method for inclusion is described. Statistical corrections then could be made to render different measures comparable. This is what National Research Center does for benchmarking to equate survey responses gleaned from questions asked differently.
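The article does not spell out how such a correction would work, so here is only a generic sketch of the idea: estimate the average offset attributable to each measurement definition and remove it, so that values collected under different definitions can sit on one comparable scale. The jurisdictions, clock-start methods and minutes below are invented for illustration and are not NRC’s actual procedure.

```python
# Toy illustration (not NRC's actual method): remove an estimated "method
# effect" so response times defined differently become roughly comparable.
from statistics import mean

# Hypothetical police response times (minutes), labeled by how each
# jurisdiction starts the clock.
reports = [
    {"place": "A", "method": "call_received", "minutes": 7.9},
    {"place": "B", "method": "call_received", "minutes": 8.4},
    {"place": "C", "method": "dispatch_sent", "minutes": 6.1},
    {"place": "D", "method": "dispatch_sent", "minutes": 6.6},
]

grand_mean = mean(r["minutes"] for r in reports)

# Average response time under each measurement definition.
by_method = {}
for r in reports:
    by_method.setdefault(r["method"], []).append(r["minutes"])
method_means = {m: mean(v) for m, v in by_method.items()}

# Express each value as its deviation from its own method's average,
# re-centered on the overall mean.
for r in reports:
    adjusted = r["minutes"] - method_means[r["method"]] + grand_mean
    print(f"{r['place']}: raw={r['minutes']:.1f}  adjusted={adjusted:.1f}")
```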

Besides the time and cost barriers to entry, there have been too few examples of the value of the performance management ethos. We know it’s the right thing to do, but we also know that with relatively few jurisdictions collecting common metrics, researchers are hampered from exploring the linkages between government processes and community outcomes. Too often comparisons among jurisdictions become a game of “not it,” whereby staff explain away the indicators on which their jurisdiction scores poorly. When we expand the number of places willing to participate, we will have a better chance of offering a return on investment in performance measurement. With many more participating governments, we can build statistical models that suggest promising practices by linking processes to outcomes.

We can broaden participation in comparative performance monitoring when common metrics are few, targeted to outcomes, easy to collect and proven to matter.

It’s a good time to make these changes.

This is an updated article originally published on the ASPA National Weblog in April 2017.




Old School or New Tech: What Is the Difference with Surveys?

- By Tom Miller -

Old school surveys invite a random sample; new tech surveys allow anyone to opt in on the Web

There are surveys and there are surveys. These days, scientific surveys – ones with unbiased questions, asked of a randomly selected sample of residents, with proper weighting of results to make the sample’s demographic profile and aggregate answers similar to the community’s – compete with cheaper surveys that are offered to anyone on the Internet with a link to survey questions. The inexpensive surveys are called “Opt-In” because respondents are not selected; they choose to come to the survey with no special invitation.

As time crunches and budgets shrivel, the cheap, fast Web surveys become hard to resist, especially if the results they deliver are pretty much the same as those that come from more expensive scientific surveys. The problem, for now at least, is that the results are not the same.

NRC and other researchers are examining the differences

National Research Center, Inc. (NRC) offers, alongside its scientific survey, an opt-in survey of the same content, simply posted on a local government’s website after the trusted survey is done.  Not only does the opt-in survey give every resident an opportunity to answer the same questions asked of the randomly selected sample, it gives NRC an opportunity to explore the differences in ratings and raters between the two respondent groups.

Over the last two years, NRC’s research lab has studied how scientific surveys (mostly conducted using U.S. mail) differ from Web opt-in surveys in response and respondents across close to 100 administrations of The National Citizen Survey™ (The NCS™). NRC is working to identify the kinds of questions and the best analytical weights to modify opt-in results so they become adequate proxies for the more expensive scientific surveys. We are not alone. The American Association for Public Opinion Research (AAPOR) studies this as well, and if you are in survey research but not studying this, you are already behind the curve.

Respondents to scientific and opt-in surveys are different

On average, those who opt to take the self-selected version of The NCS on the Web have different demographic profiles than those who are randomly selected and choose to participate. The opt-in respondents have a higher average income than those who respond to the scientific survey. The opt-ins are more often single-family home owners, pay more for housing than the randomly selected residents, are under 45 years old, have children and primarily use a mobile phone.

But as noticeable as those differences are across scores of comparative pairs of surveys, the biggest “physical” differences between the two groups come in the activities they engage in. The opt-in cohort is far more active in the community than the group responding to the scientific surveys. For example, those who respond to the opt-in survey are much more likely to:

  • Contact the local government for help or information
  • Attend or view a government meeting or event
  • Volunteer
  • Advocate for a cause
  • Participate in a club
  • Visit a park

Responses also differ between the opt-in and the scientific survey takers

Even if the people who respond to surveys are from different backgrounds or circumstances, as is clear from the comparisons we made between opt-in and scientific respondents, their opinions may be about the same. Curiously, if we only look at the average difference between ratings given to community characteristics or services, the opt-in and scientific responses look a lot alike. The average difference in ratings across 150-plus questions and close to 100 pairs of surveys amounted to only about 1 point, with the opt-in respondents giving the very slightly lower average rating.

But behind the average similarity lurk important differences. In a number of jurisdictions, there are large differences between ratings coming from opt-in respondents and the scientific respondents. This may be easy to overlook when the average difference across jurisdictions is small.

For example, take the positive rating for “neighborhood as a place to live.” The average rating across 94 jurisdictions for both the opt-in survey and the scientific survey is 84 percent excellent or good. That’s right, for BOTH kinds of surveys. (Not every jurisdiction’s pair of surveys yields the exact same rating, but the average across many jurisdiction pairs reveals this result.)


When we examine each pair of the 94 jurisdictions’ ratings of “neighborhood as a place to live,” 20 of the results are 6 or more points apart. In these 20 jurisdictions, ratings of neighborhoods are sometimes much higher from the opt-in respondents and sometimes much higher from the “scientific” respondents.
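To make that paired check concrete, the sketch below works through the same logic with invented numbers: compute the opt-in minus scientific difference for each jurisdiction, note that the average difference is tiny, and flag the pairs that sit 6 or more points apart.

```python
# Hypothetical paired comparison: scientific vs. opt-in percent-positive
# ratings for "neighborhood as a place to live." All values are invented.
pairs = {  # jurisdiction -> (scientific %, opt-in %)
    "Alpha": (86, 84),
    "Beta":  (78, 88),
    "Gamma": (90, 82),
    "Delta": (83, 84),
}

diffs = {j: opt - sci for j, (sci, opt) in pairs.items()}
flagged = {j: d for j, d in diffs.items() if abs(d) >= 6}

print("average difference:", sum(diffs.values()) / len(diffs))  # close to zero
print("pairs 6+ points apart:", flagged)  # large gaps hide behind the average
```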

Imagine that a local government decides to break from its trend of scientific surveys and conduct its next survey using only the opt-in version, and a steep decline in the rating for neighborhoods is found. Given our research on differences in response between opt-in and scientific surveying, we would not be inclined to conclude that the rating difference came from a real shift in perspectives about neighborhoods when it could have come from the change in survey method alone.

Data analysts are testing different weighting schemes

If we can determine the right weight to apply to opt-in responses, we are hopeful that the differences we see in our lab will diminish. That way we will be able to encourage clients to move to the faster, cheaper opt-in method without undermining the trend of scientific data they have built. Until then, the scientific survey appears to be the best method for assuring that your sample of respondents is a good representation of all community adults.
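NRC has not published the exact weights it is testing, but the sketch below shows one common family of approaches, simple post-stratification: weight each opt-in respondent by the ratio of a group’s population share to its share of the sample, then compute a weighted percent positive. The group definitions and numbers are invented for illustration.

```python
# Minimal post-stratification sketch (one possible weighting scheme, not
# necessarily NRC's). Opt-in samples skew young, so younger respondents are
# down-weighted and older respondents are up-weighted.
population_share = {"under_45": 0.40, "45_plus": 0.60}  # e.g., from Census data
sample_share     = {"under_45": 0.65, "45_plus": 0.35}  # hypothetical opt-in sample

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical respondents: (age group, rated the service excellent/good?)
respondents = [
    ("under_45", True), ("under_45", False), ("under_45", True),
    ("45_plus", True), ("45_plus", True),
]

weighted_total = sum(weights[g] for g, _ in respondents)
weighted_positive = sum(weights[g] for g, positive in respondents if positive)
print(f"weighted percent positive: {100 * weighted_positive / weighted_total:.0f}%")
```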

 

A version of this article was originally published on PATimes.org.

 



Your Citizen Survey is NOT a Political Poll

-By Tom Miller-

In 2016, election polls left many around the world expecting a Clinton victory that never came. The wrong calls grew from results that were largely within the margins of uncertainty both nationally and in the swing states, but pollsters were remorseful over the miss, even as much of the public remained shocked. That surprise has had some effect on the reputation of political polls, and some worry that perhaps this could morph into distrust of surveys. However, before local government stakeholders start to worry about their own citizen surveys, it’s useful to take a moment to understand how fundamentally different political polls are from local government surveys.

While surveys and polls are in the same class – like mammals – they are by no means the same species – e.g., dolphins vs. foxes. Citizen surveys collect evaluations of local government services and community quality of life; political polls predict voter turnout for incumbents or challengers.

Seven Ways Citizen Surveys are More Trustworthy than Political Polls

1. Methodology

The most substantive difference between political polls and citizen surveys resides in their different purposes, which result in fundamentally different methods. Citizen surveys deliver policy guidance, performance tracking and planning insights based on current resident sentiment. Polls use surveys to prophesy a future outcome. While the base information is the same, polls apply statistical models to “guess” which demographic groups will vote and in what numbers. To emphasize the difference between survey results and poll conclusions, The New York Times gave the same survey results to four different pollsters and got four different predictions of the presidential victor.

 

2. Social Desirability Bias 

Political questions typically are burdened by strong emotional sway that influences respondents to give interviewers what they believe to be the socially acceptable response. This is why Trump did worse in telephone interview polls, but better when responses could be given with no interviewer involvement (e.g., in “robo calls” or on the Web). In citizen surveys, the stakes are lower, with no pressure to name an “acceptable” candidate. And if conducted using a self-administered questionnaire (mail or Web), citizen surveys avoid altogether the pressure for participants to inflate evaluations of community quality.

 

3. Gamesmanship

Political polls influence votes and must account for voter gamesmanship, but there are no such forces at play for citizen surveys seeking evaluations of city services and community quality of life. As elections draw near, those favoring third-party candidates may change positions depending on the published poll results. For example, a supporter of the Libertarian candidate may decide at the last minute to vote for a main party candidate because polls show the two-party race has tightened. Candidate changes may occur after the last survey is conducted, even for voters of the major parties. Citizen survey results often come just once every year or two, so there are no prior results that could shift a respondent’s choices and no winners to choose.

 

4. Participation

In political polls, some types of voters just won’t respond. Some analysts believe that, in the recent election, the enmity toward the establishment – government and media, including the polls – kept many Trump voters from participating in election surveys. When the most passionate group favoring one candidate doesn’t respond to election polls, the polls underestimate support for that candidate. In citizen surveys, those who don’t respond tend to be less involved in the community. That’s not to say they have strongly different opinions about the community than those more involved. Instead, life circumstances erode the priority for taking surveys among those non-responders.

 

5. Controversy

Political poll responses are driven by values that tend to be polarized in the U.S. Citizen surveys are about observed community quality, so residents are not motivated by doctrinaire perspectives that whiplash aggregate response depending on who participates. Those who participate in citizen surveys generally have similar perspectives to those who do not participate. So response rates, even if as low as those of polls, do not undermine the credibility of the citizen survey.

 

6. Response Rates 

Response rates for most telephone polls are much lower than response rates for citizen surveys conducted by mail. Typical phone response rates are about nine percent these days, but well-conducted citizen survey response rates range from 20 percent to 30 percent.

 

7. Purpose

Political polls must pick winners and losers, and those declarations occur within a generally modest margin of uncertainty. To stir excitement, talking heads usually ignore error ranges to name a winner who may as likely be a loser because the race is so close. Citizen surveys aim public sector decision-makers at differences that are larger – differences that are relevant to policy decisions. For example, whether 15 percent or 25 percent of residents give good ratings to street repair, government action may be required either way. The same is true for differences in survey sentiments over time or compared to other places. Properly interpreted citizen survey results assist government leaders by steering them clear of small differences, whereas the lifeblood of media polls is to make Alps out of anthills.

 


 

This is an updated article originally published in November of 2016.



What Does Margin of Error Mean? [VIDEO]

-NRC Q&A-

If you have dabbled in survey research and methodology, you’ve likely seen your fair share of industry jargon.  Terms like “confidence interval” or “sample” tend to sprinkle scientific conversations and reports.  Local governments that survey their residents want to understand how well the data reflects their entire community.  So “Margin of Error” is a particularly important term for local leaders to consider, and one researchers and analysts are often asked about.  NRC Survey Associate Jade Arocha has managed dozens of community surveys in cities and towns across the U.S. In this video, she explains what “Margin of Error” means and why it matters.

 

https://n-r-c.wistia.com/medias/a5tllj1h5v?embedType=async&videoFoam=true&videoWidth=640

Watch on YouTube

 

Why We Need a Margin of Error

Let’s talk about why we need a “margin of error,” or a way to measure how well survey results represent a population.  If we were to survey every person in a community, there would be no need for margin of error.  But surveying every single resident in a city of thousands is very costly in terms of money, time and staff.  Most local governments lack the resources to conduct a complete census of their residents.  (And not everyone will respond to the questionnaire.)

So for most cities, we only survey a randomly sampled portion of the population. This gives us an overall sense of how residents think without enumerating every individual. (You only need a spoonful to understand how an entire pot of soup tastes.)

 

How Margin of Error Works

For example, say we survey every person in a community and find that 80 percent of them rate their quality of life as excellent or good. We know for sure that number reflects exactly 80 percent of the population because we have the data from every person. Surveying only a proportion of the population, however, introduces a range of uncertainty.

So in this example, NRC’s methodology allows us to say that 80 percent of the population rated their quality of life as excellent or good, within a margin of error of plus or minus five percentage points. That means the results sit very near 80 percent, give or take up to five points. So somewhere between 75 and 85 percent of residents rated their quality of life as excellent or good.
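The video and the example above skip the arithmetic, so here is a minimal sketch of the standard formula for a simple random sample at 95 percent confidence: MOE = 1.96 × √(p(1 − p)/n). NRC’s actual calculations may differ (weighting and survey design affect the math), and the response count of 250 below is simply an assumption chosen to land near the ±5 points in the example.

```python
# Illustrative only: 95% margin of error for a simple random sample.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error (as a proportion) for an observed share p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.80, 250  # 80% positive from a hypothetical 250 responses
moe = margin_of_error(p, n)
print(f"+/- {100 * moe:.1f} points")                               # about +/- 5.0
print(f"range: {100 * (p - moe):.0f}% to {100 * (p + moe):.0f}%")  # about 75% to 85%
```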

 

Margin of Error Depends on the Number of Responses

The margin of error is determined by how many responses we get. Say the population contains 5,000 adults. Receiving 300 responses will yield a higher margin of error than 500, and 500 will yield a higher margin of error than 1,000, and so on. The lower the margin of error, the better.
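Continuing the sketch, the loop below uses the same formula with the conservative worst-case share of 50 percent and a finite population correction for the 5,000-adult community, to show how the margin of error shrinks as responses grow. Again, this is an illustration, not NRC’s exact procedure.

```python
# How the 95% margin of error falls as the number of responses rises,
# for a hypothetical community of 5,000 adults (worst-case p = 0.5).
import math

N = 5_000
for n in (300, 500, 1_000):
    moe = 1.96 * math.sqrt(0.25 / n)        # simple-random-sample margin
    fpc = math.sqrt((N - n) / (N - 1))      # finite population correction
    print(f"n = {n:>5}: +/- {100 * moe * fpc:.1f} points")
# n =   300: +/- 5.5 points
# n =   500: +/- 4.2 points
# n =  1000: +/- 2.8 points
```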

More responses mean a lower margin of error. So our recommended sample size is based on where we want that margin of error to be. NRC survey researchers take measures to produce data that are highly accurate, so you get results you can rely on.

 




Announcing the Premiere Leadership Trailblazer Award

-By Angelica Wedell-

The League of Women in Government (LWG) and National Research Center, Inc. (NRC) are proud to announce the inaugural Leadership Trailblazer Award. This award recognizes an accomplished leader in the local government profession who has championed and inspired other women to achieve as well.

“There are so many women in local government who are leading their organizations and their colleagues with courage and distinction. We wanted a way to acknowledge their efforts,” said Ashley Jacobs, President of the League. “We hope the award will not only spotlight the nominees and winner, but also honor the valuable work of all women in our profession.”

Nominations for this award are submitted by local government professionals who would like to see a dedicated colleague recognized on a national scale. A panel of judges from LWG and NRC will review each nomination and select the final winner. This year’s award recipient will also be the first person inducted into the LWG Hall of Fame.

“To win this award means that you've not only had a big impact on the community you've served, but also on your organization, and other women whom you've mentored and helped succeed,” said award judge and NRC survey researcher Damema Mann. “Women still make up less than 20 percent of executive level local government staff. It's important to salute the accomplishments they have made, and their work in blazing the trail for future female leaders.”

“NRC President Dr. Tom Miller, and the entire team, has supported the League since our creation. It is no surprise that they are sponsoring the premiere Leadership Trailblazer Award,” said Pamela Antil, CAO of the League. “We are grateful for NRC’s continuing support of the League’s mission to advance and recognize women in local government.”

The first-ever Leadership Trailblazer Award will be presented at the 3rd Annual League of Women in Government Symposium (in conjunction with the International City/County Management Association (ICMA) Conference) at the Baltimore Convention Center in Baltimore, Maryland on Saturday, September 22nd.



Do Incentives Increase Response Rate?

-NRC Q&A-

Incentives may seem like a sure way to increase your survey response rate, but the data show otherwise. National Research Center, Inc. (NRC) explains the impact of incentives on survey responses, and how they affect your budget.

 

Watch on YouTube

Impact of Incentives

Throughout NRC’s vast database of resident opinion responses, we have found that incentives don’t actually have much of an impact on your community survey’s response rate. Additionally, including incentives may increase the overall cost of the survey in the city’s budget.

Civic Duty

We find that residents typically want to engage and provide feedback about the quality of life in their community to their local government; therefore citizen surveys tend to have consistent response rates, which render the use of incentives quite unnecessary.

Publicize the Survey

To increase response rate, NRC recommends crafting a strong promotional plan to publicize your survey. Publicizing the survey through social media, local news outlets and on the city’s website can really make a difference in your final response rate.

Low-Cost Incentives

However, if you would like to include an incentive, consider a low-cost option such as a free pass to your city’s recreation center or another city program like a day at the aquatic center.

 

This is an updated article originally published in 2017.



Palo Alto's Secret to Making Survey Data Actionable

-By Keifer Johnson-

A good recipe requires quality ingredients. But why buy ingredients for a recipe and never use them? If there is one rule of gathering survey data for your municipality, it would be to take action on that data.

Harriet Richardson, the City Auditor of Palo Alto, CA, understands that concept very well. She came into her role in 2014 and saw a decade’s worth of survey data. Palo Alto has conducted The National Citizen Survey™ (The NCS™) with National Research Center, Inc. (NRC) every year since 2003. Richardson realized how overwhelming the amount of data could be for city leaders, so she began producing executive summaries to tell the story of the survey data in a clear and concise way.

“We needed to pull some of the important numbers to the forefront of our City Council and Executive Leadership Team,” Richardson said.

Every year, Richardson presents these executive summaries to city leaders at their annual retreat. It has become a key factor in identifying the council’s priorities for the upcoming year. “Several of our councilmembers have come to see the survey report as an important component of understanding residents’ concerns,” Richardson said.

The survey data acts as a catalyst for necessary change in the community. Richardson sees how vast the amount of data is, and shrinks it down into actionable reports that focus on the most pressing information. City leaders have a million things to worry about. By creating the executive summary, Richardson directs their attention to pressing items that need to be discussed.

NRC Senior Survey Associate Damema Mann says that Palo Alto’s continued engagement with the data greatly improves the effectiveness of their processes. The executive summaries bring public opinion into the discussion in a strategic way. “It’s one of the most important things to factor in because you’re there to serve your community and to serve the community as a whole. The NCS is bringing in the voice of your community residents,” Mann said.

Utilizing Survey Report Data for Improving Quality of Life and More

The highlighted information that Richardson brings to decision-makers helps shape their priorities in several ways. The executive summaries identify areas for necessary improvement. Any area of the survey that has an average rating below 50 percent positive is automatically included in the summary. Therefore, anything that bothers a majority of residents is brought up to council members.
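As a concrete (and entirely hypothetical) illustration of that selection rule, the snippet below pulls every survey item with less than 50 percent positive ratings into a summary list; the item names and percentages are invented, not Palo Alto’s data.

```python
# Hypothetical data: percent of residents rating each item excellent or good.
percent_positive = {
    "Street repair": 42,
    "Traffic flow": 38,
    "Police services": 81,
    "Parks": 88,
    "Affordable housing": 21,
}

# The executive-summary rule described above: include anything under 50% positive.
summary_items = {item: pct for item, pct in percent_positive.items() if pct < 50}
print(sorted(summary_items.items(), key=lambda kv: kv[1]))
# [('Affordable housing', 21), ('Traffic flow', 38), ('Street repair', 42)]
```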

While some issues highlighted by the data may already be a part of council discussions, the data confirm which issues are most important to the residents. The summary also shows areas that improved over time, which supports performance measurement and helps justify new initiatives. An example of this is the City’s response to low ratings for street repair.

In 2012, only 42 percent of residents gave a positive rating to street repair in the city. City leaders could see that this rating fell below national benchmarks, so they identified street repair as a priority. As a result of efforts to improve street conditions, the City has seen a 13 percent increase in resident satisfaction with street repair over the last five years.

Some other items Richardson adds to her executive summaries include:

  • A list of community characteristics that received significant changes in ratings from the prior year
  • A list of questions that have changed over the past ten years
  • A summary of results by facet: The overall average ratings alongside the positive rating percentages

Annual Surveys Offer More Information

Palo Alto demonstrates how running annual surveys provides an increased understanding about its residents’ wants and expectations. With a history of over a decade of annual surveys, the council has trend data that highlights the changing opinions of its 64,000 residents. Because trends can be identified more accurately with more data, Damema Mann says usefulness is all a matter of how much data you can collect.

“A survey is a snapshot in time, and by conducting The NCS annually, Palo Alto has built a very long trend line,” Mann said. “The longer that line is, the better understanding you can have of resident perception of the services you are providing to them.”

Public opinion is the driving force of local government. As local officials look to set priorities, survey data affords them an open bank of feedback from the community. So when public opinion changes, it is important to analyze those fluctuations and reevaluate current priorities in a timely manner. Palo Alto demonstrated this with its response to opinions about street repair. It was easier for Palo Alto officials to recognize the issue because they could see the data going back so many years. Not only could they compare recent survey results to the last year’s data, but they could juxtapose it to five or ten years ago in order to get a long-term view of what was happening. They could see if resident ratings were declining and decide to initiate changes to address falling opinions.

The sheer amount of data created by running annual surveys can be difficult to manage, and that is why Richardson’s executive summaries deserve a spotlight. Not every council member or city leader has time to read through full reports on each issue. By pruning down the information provided in NRC’s comprehensive reports, the most important data gets into the necessary hands. In this way, those readers have actionable information to work with right away.

Culling the quantity of data with a consumable summary is like having the perfect recipe alongside all the best ingredients: certain to taste better and leave people satisfied.

 


 



Damema Details: Answers to Top Five Questions About Benchmarking Surveys

- By Damema Mann -

We receive plenty of questions from both local governments new to survey research and long-time returning clients. Today I’m happy to answer the top five most frequently asked questions we get about our suite of benchmarking surveys, like our flagship community assessment tool, The National Citizen Survey™ (The NCS™). There is a lot of overlap in terms of community survey methodology and best practices between The NCS and other standardized surveys including The National Employee Survey™ (The NES™), The National Business Survey™ (The NBS™), the Community Assessment Survey for Older Adults™ (CASOA™), and even our custom survey work.


1. How much does the survey cost?

That answer can vary based on the scope of the project.  We have multiple cost models to choose from for our surveys.  The good news is we always have current pricing for our benchmarking tools posted on our website.  Even if you are planning for the next calendar year and you are wondering how the costs might change, give us a call or send us an email.  We are happy to provide a quote, talk you through the different options that are available, and let you know if you qualify for a discount.

2. What is the best timing for conducting a survey?

The best time of year to survey can depend on your unique community. For instance, if you are a university city, or a community with a lot of snowbird residents, you will want to take those residents into consideration. Timing may come down to whom you want to capture responses from.

Ultimately, your jurisdiction should plan for a time when you know you will need the results and what you’ll do with the data.  If you’re planning to use the results for budgeting or strategic planning, and you need the data by a specific date, give us a call.  We can work out a timeline for you.

For resident surveys, the most common frequency for cities and towns is every other year. We also work with several municipalities that are able to survey annually. Those are usually larger communities with more staff and resources devoted to incorporating the data into all of their planning processes each year. For those jurisdictions with smaller budgets and fewer resources, every two years is still a great cadence for a comprehensive assessment.

3. What is NRC’s community survey methodology?

Twenty years ago, phone surveys were the gold standard. That is no longer the case. Response rates for phone surveys have plummeted over the years, and they are very expensive.  Phone surveys also elicit messier data than surveys that are self-administered online or by mail.  For these reasons, National Research Center, Inc. (NRC) recommends mail and web surveys.  And most of our surveys are conducted by those modes.  We’re happy to talk to you about pros and cons of different methodologies, why we recommend what we do, and our findings of best practices in survey research.

4. Can benchmarking surveys be customized?

Clients often ask if there is any flexibility with NRC’s standardized benchmarking surveys. “Can we change the wording? Can we add or remove questions that don’t work for us?”

The answer is yes, to an extent. The wording and items are intentionally fixed on our standardized questionnaires. This allows us to provide you with a high-quality, tried-and-true resource at a relatively low cost. It also gives you benchmarks (or average ratings) for each item on the survey to put your results into context, and it helps establish a trend line, because the wording and question scales are the same year after year.

However, there is room on our benchmarking surveys to remove certain questions that absolutely don’t apply to your community. For instance, we would not make clients in South Florida ask about the quality of snow removal, but that question is very important for our clients in Minnesota. There is also a good portion of The NCS allotted for completely custom questions, included in the basic service. We will help you wordsmith these questions and give examples of what other jurisdictions have asked. We will also help you make sure the questions are neutral and clear, so you get clean and actionable data.

5. Are there add-on options?

Each benchmarking survey has a menu of options you can add to the basic service.  These add-ons are detailed on our website, where you can find pricing and information about each one.  You don’t have to purchase any of these extras if you do not need them, as the basic service already supplies a ton of information.  But these add-on options can be great tools to dig deeper into the data.

For example, some add-ons can drill further into the data by looking at geographic or sociodemographic differences within your city. We can also come out and present the results at a town or stakeholder meeting, or facilitate a strategic planning workshop with city leaders.  We are always happy to chat with you more about these options.

 




Does Copyright Apply to Surveys?

- By Angelica Wedell -

Just about everything – from books to music and everything in between – is easy to access, download and post online. With memes and other intellectual commodities freely traversing the Internet, copyright laws are easily overlooked. Most people think of published articles, books, pictures and music when they consider media protected by copyright laws. But is there such a thing as survey copyright?

Does Copyright Law Apply to Surveys?

The short answer is yes. A great example of a copyrighted survey would be any one of the templated benchmarking survey products, owned by National Research Center, Inc. (NRC). These surveys are intellectual property created and administered by NRC, and thus we remain the copyright holders. Copyright protections apply to printed and digital surveys equally.

Trademarks

The official titles of NRC’s benchmarking surveys are trademarked: The National Citizen Survey™ (The NCS™), The National Employee Survey™ (The NES™), The National Business Survey™ (The NBS™) and the Community Assessment Survey for Older Adults™ (CASOA™). So when you see these titles, you know the survey is authentic and carries the imprimatur of quality our clients have come to know and trust.

Survey copyright licenses

When you enroll in an NRC survey, a limited-time license to use that survey instrument is included. This gives you full rights to publish, promote, and disseminate the survey for the duration of the license.  Once that license has expired, you will need to enroll with NRC to get a new license and conduct the survey.

What About Our Survey Results?

NRC owns the survey copyright privileges over all survey data. However, you maintain permission to use survey data and reports of your own results in perpetuity. Clients must adhere to our request to direct other interested parties to NRC for permission to use our data. This is to avoid use of NRC data or materials for others’ financial gain. NRC also respects the privacy of your data and will not distribute individual client or jurisdiction information to any third party without first receiving express permission to do so.

How does NRC comply with Copyright Law?

You’ll notice our website has a great deal of content – articles, videos, webinars, etc. We create our own content in-house, ensuring that we own the copyright privileges to it. When we post content created by others – only with permission or after purchasing the rights to use that media – we include attribution whenever necessary. If we do not have permission to use a particular piece of media, we will not post it on our website, which is why we generally do not use Internet memes.

For more on copyright, privacy, and how it applies to NRC, check out our Terms of Use page.
