From AJR, March 1999

Continuation of What Do Readers Really Want?   

By Charles Layton
Charles Layton (charlesmary@hotmail.com) is a former editor and reporter at the Philadelphia Inquirer and a former AJR senior contributing writer.     


As the discussion draws to a close, Cherie Sion invites the News staffers to come out and meet their critics, for a few minutes of Q&A. The group members are excited at the prospect of this, and initially it's a light-hearted little scene. A couple of people commend the journalists for having the spunk to show their faces in the lions' den. When Jennifer Files introduces herself as a business reporter, somebody asks, "Are you an intern?"

The journalists press for more particulars about what they've heard. Mike says he wants more news about his neighborhood. When Stallings names some stories the paper has run recently, Mike says, "I want more. I want more."

Finally, Stallings gets to the issue that's obviously sticking in his craw. Addressing the whole group but looking at one particular participant, he says, "Fred said earlier that we didn't cover the arena. If I'm not mistaken, then you mentioned the story about Mayor Kirk and his wife profiting on the stock options."

Fred jumps in and takes it from there. "I don't even recall that being covered in the Morning News," he says. "Maybe it was. I remember seeing Kirk on TV and nobody challenged him on this." Fred holds forth for another half minute or so, recounting the financial elements of the story in rather impressive detail.

"We worked on that story for literally three months," Stallings informs him. "And we broke the story on the front page of the Morning News, the story about Mrs. Kirk getting the options.... So, for some reason, you guys aren't aware of that or didn't notice it or--that's what I'm trying to figure out. Like, we're trying to get stories that have impact and show that we're trying to cover both sides, and yet it doesn't seem like you're picking up on those stories."

Fred is unregenerate. "Well, why did you let him get away with it?" He launches another filibuster, longer than the last one.

Nobody in the focus group says, oops, we didn't realize you'd covered this thing as well as you have. Instead, Mike begins praising Laura Miller, a columnist for the Observer who wrote impassioned accounts of Dallas politics and then ran for city council and won.

"I think the election of Laura Miller is a totally black eye on the Dallas Morning News," Mike says. "For someone to come from a small local newspaper that doesn't have the money behind it like the Dallas Morning News, to not only get citywide recognition but to win acceptance and then to run for political office--to me, this was a glowing thing. I was so happy."

Files says, "But we don't want to be politicians. We want to be journalists."

"I understand that," says Mike. "I don't think she wanted to either. But...she felt that she had to step up. I admire her for that. To me, that's a black eye. How could the Observer's reporter get more attention for what she's doing than what the Dallas Morning News is?"

Stallings: "We don't want our reporters to get attention. We don't work that way."

Mike: "Well how did she get attention? By covering stories that we wanted to know about."

Stallings: "I would take exception with that. I think she got attention by putting her viewpoint in her stories, which is what we don't do in the newspaper."

On and on it went, and it didn't stop until Cherie Sion's husband and business partner, Berrien Moore, entered the room and said, in his courtly Georgia accent, "Excuse me for interrupting. The people outside, who run this facility, they say they've got to go." Everyone laughed.

Afterward, Stallings, Files, Delgado and Harris held a little debriefing session with Barbara Quisenberry, the News' research director, who had organized the focus group. Stallings wondered what to make of the fact that the News' own readers didn't know the paper had investigated the arena. All the other media in town had picked up the story from the News. "And radio," Delgado said, "they just read our stuff right off the page."

"Does the Dallas Morning News' reporting just get to be part of the landscape?" Stallings wondered.

"Or," said Delgado, "do they forget they learned it from us?"

Files noted that when Channel 5 had an investigative report, the station promoted it. Maybe the News should do more of that.

Everybody was disturbed, too, that the group hadn't understood the paper's journalistic values--that people admired the Observer's reporter for being so partisan. Here was the News trying to keep its reporters free from bias, and here were readers, it seemed, demanding bias. At one point, grasping at straws, Stallings said, "If people don't like the paper but they read it, that's not all bad."

Later, standing over by himself against the wall, he mused, to no one in particular, "I wonder what they meant by good news."

This was one of a set of six focus groups Barbara Quisenberry organized last October. While the group with Fred and Mike and Jenny was by far the most cantankerous, each group was problematic in its own way. If there was a common thread, it was that local news means so many different things to different people. Cherie Sion began every session by asking group members to define local news and give examples. In the first focus group, a woman said "local" meant the state of Texas. Her example was coverage of the floods then occurring in South Texas, about 250 miles from Dallas. To another participant local news meant "Dallas and the suburbs." To one it meant Dallas and Fort Worth, while to another it meant Dallas but not Fort Worth. One man said any news about the corporation he works for was local news. Others mentioned real estate ads, news about brides and marriages ("to see if any old high school classmates got married"), news about garage sales, news about telephone rate increases, news about the oil business, about the zoo, about "what local community leaders are doing." One woman said she didn't think of local as a geographic term; local was whatever affected people she had common interests with.

Such a dizzying array of answers is not unique to Dallas. It's an article of faith in the newspaper business these days that people want more local news--all the surveys say so--but it's also true that the term is nebulous and ever-changing.

"It used to be that local news was pretty much defined by geographic boundaries," said Tony Casale, president of American Opinion Research, based in Princeton, New Jersey. "That's not true any longer." When people think about what's local, he said, "they'll think about what affects them and their families.... My wife got Lyme disease a couple of summers ago. That became a local story for me."

A recent report by the Newspaper Management Center at Northwestern University spoke of the Internet's impact on all this. On the World Wide Web, the report said, people "are able to find others who share their interests and backgrounds, which is redefining the term 'community.' "

Vivian Vahlberg, director of journalism programs at the McCormick Tribune Foundation, is concerned about the perennial problem of declining newspaper readership, and about why newspaper executives don't do very much about it. As she explained it to me, there are probably things the industry could do to reverse the declines in circulation, but publishers don't have the confidence to carry them out. Partly, she said, it's that the research findings still aren't convincing enough.

"For example, research study after research study says local news is the key," she said. "But can you tell me for sure that anybody knows what we mean by local news? That you really know enough to go out tomorrow and change a newspaper by giving people local news? There's a feeling that we need to understand more deeply what we mean by local news. For someone who grew up in El Salvador but lives here now, local news might be news about El Salvador."

Based on what I saw in Dallas, it will be a long time before focus groups settle this or any other major issue. Although an entire industry has grown up around the focus group--there are businesses that recruit participants, other businesses that rent out rooms with the see-through mirrors, and others that provide moderators--the reliability of the focus group is highly suspect.

The modern focus group evolved during World War II as a way to test the effectiveness of films and radio programs used for training and morale. Soldiers watching the films would be asked to press a red button on their chairs when anything evoked a negative response, and a green button for a positive response. The buttons were wired to a set of fountain pens on a primitive polygraph machine, which recorded the ups and downs of the soldiers' responses across a timeline. After the film was over, an interviewer would question the group as a whole, asking why they'd reacted as they did.

The researchers found that they needed both kinds of input. The polygraph record had the character of a controlled experiment. It was "quantitative"--the squiggles could be toted up and studied in a statistical way. The "qualitative" interviews added human detail to the bare-bones polygraph records. But since the qualitative comments lacked scientific rigor, they had to be used as hypotheses for further study. To leave out the quantitative step and just rely on the soldiers' recollections would have been inadequate.

One of the researchers on those early projects was sociologist Robert K. Merton, whom many now consider the father of the modern focus group. After the war, he saw the use of focus groups explode when the method was embraced by market researchers for the business sector. But he also saw it come under attack, because so many people tried to draw conclusions based on focus groups alone, without the corroboration of other, more "scientific" methods. In 1989, when Merton was invited to speak at a New York meeting of the American Association of Public Opinion Research, he seemed disappointed at how his brainchild had turned out. "I gather that much focus group research today...does not involve this composite of both qualitative and quantitative inquiry," he said. "One gains the impression that focus group research is sometimes being mercilessly misused."

Here is a smattering of the criticisms lodged against focus groups over the years:

• "Focus group interviewing...has not generated any significant body of research concerning its reliability and validity."--Albert E. Gollin of the Newspaper Association of America.

• "The focus group is the most abused research technique in all phases of market research."--Daniel T. Seymour, a consultant to industry and higher education.

• "People who can be enticed [into a focus group] do not always represent a true cross-section of potential customers...and loudmouths can dominate and sway the discussion."--Leo Bogart, the veteran newspaper researcher, as quoted by Robert Merton.

• "Samples are invariably small and never selected by probability methods. Questions are not asked the same way each time. Responses are not independent. Some respondents inflict their opinions on others; some contribute little or nothing at all. Results are difficult or impossible to quantify and are not grist for the statistical mill. Conclusions depend on the analyst's interpretative skill. The investigator can easily influence the results."--William D. Wells, in "Handbook of Marketing Research."

• "Because of the persuasiveness of the technique, unsophisticated clients may believe they have witnessed The Truth, and may make precipitous decisions on slender evidence."--Jane Farley Templeton, author of the book "The Focus Group."

In spite of all this, focus groups remain very much with us--and not just because they're quicker and cheaper than other research, although they are. One of the weaknesses of the formal "quantitative" survey is that people with strong, insightful opinions may never get to express them, because the interviewer isn't asking the right questions. As far back as 1931, a social scientist named Stuart Rice was complaining about this problem. He wrote that the results of an interview "are as likely to embody the preconceived ideas of the interviewer as the attitudes of the subject." In theory at least, a focus group strips away the straitjacket of the formal survey and lets people just speak their minds.

Another point in favor of qualitative research--the kind that comes from focus groups--is that newspaper people find it easier to understand than a survey. Even if the participants in a focus group are full of bull, which is often the case, what they say is more accessible than the tables and statistics of a quantitative research study. Belden's Deanne Termini told me that newsroom people "are very suspicious of the quantitative stuff sometimes."

And they should be. One day last fall I went to the Princeton University library, hoping the social science journals could shed light on those perplexing contradictions in Orange County. Maybe something was wrong with the questions they were asking out there.

The day was an eye-opener. One of the most confounding documents I came upon was a report in Public Opinion Quarterly about the wording of comparative questions. In 1995, in a poll conducted in Germany, the authors asked people whether they found tennis more or less exciting than soccer to watch on TV. Thirty-five percent judged tennis more exciting, while 65 percent judged it less so. Then the authors reversed the wording, asking people whether they found soccer more or less exciting than tennis. This time, only 15 percent found soccer more exciting, down from 65. Seventy-seven percent found it less exciting, up from 35. And while zero percent were undecided on the first wording, 8 percent were undecided on the second. Which sport these people would really rather watch, God only knows.

The authors conducted two other, similar experiments, and in both cases the order of the words again changed the outcome. When they asked, "Would you say that news reporting in your newspaper is better or worse than the news reporting on television?" 52 percent said the reporting in the paper was better. When they asked, "Would you say that news reporting on television is better or worse than the news reporting in your newspaper?" only 43 percent thought newspaper reporting was better.

Another journal article, from 1992, examined the wording of questions in a national health survey. In seven out of 60 cases, people had trouble understanding the question. When asked how many times a week they ate butter, many people didn't know whether to count margarine as butter. When asked how many "servings of eggs" they had "in a typical day," many didn't know what was meant by "a serving of eggs" (does a single egg count as a serving?) and others didn't know what was meant by a typical day. Asked whether they exercised or played sports regularly, many didn't know whether to count walking as exercise. And when asked whether their last consultation with a doctor took place at an HMO, many didn't know what an HMO was. None of these misunderstandings had been foreseen by the people who designed the survey.

A 1995 study found that often, when people don't know the answer to a question, they guess. Asked, for instance, about U.S. policy toward Nicaragua, a respondent who knows nothing about that policy may nonetheless form an opinion on the spot. In one classic case from the 1950s that has since been replicated, large percentages of people expressed opinions about the Heavy Metals Act, even though no such act had ever been passed, debated or proposed.

A study in 1994 found that survey questions often contain unstated assumptions about people. "How long does it take you to drive to work?" assumes that the person has a job. These unwarranted assumptions "are pervasive in all forms of social intercourse, including standardized surveys," the researchers wrote.

People are typically asked how intensely they hold some attitude or belief--"How important to you is protecting the environment?"--before it's been established that they hold any such belief at all.

A possible way around this kind of bias is to precede the main question with what's called a filter question. "Is protecting the environment important to you?" would be the filter question, and if the answer comes back yes, then you follow up with the main question, "How important is protecting the environment?" In this example, though, the filter introduces another kind of bias, known in the trade as socially acceptable response bias. How many people want to come out and say they don't give a damn about protecting the environment?

When the authors of this report ran experiments using the same questions with and without filters, they got different results. In three out of four cases, the use of a filter question doubled the percentage of respondents who said they were "not concerned" about a problem. In other words, without the filter, this survey would have been tainted with the opinions of a lot of people who really didn't care.
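For readers who want to see the mechanics spelled out, here is a minimal sketch in Python of how a filtered item differs from an unfiltered one. The question wording, the answer options and the helper names are illustrative assumptions, not drawn from any actual questionnaire.

```python
# A minimal sketch of a filtered survey item (illustrative only; the question
# wording and helper names are assumptions, not from a real instrument).

def ask(question, options):
    """Prompt the respondent until one of the allowed options is given."""
    answer = None
    while answer not in options:
        answer = input(f"{question} {options}: ").strip().lower()
    return answer

def environment_item_unfiltered():
    # Intensity is asked of everyone, whether or not they hold the attitude at all.
    return ask("How important to you is protecting the environment?",
               ["very", "somewhat", "not at all"])

def environment_item_filtered():
    # The filter first establishes whether the attitude exists;
    # only then does the intensity question follow.
    holds_attitude = ask("Is protecting the environment important to you?",
                         ["yes", "no"])
    if holds_attitude == "no":
        return "not concerned"   # recorded without any follow-up probing
    return ask("How important is protecting the environment?",
               ["very", "somewhat"])
```

The trade-off the researchers observed is visible in the two functions: the unfiltered version forces an intensity answer out of people who may hold no view, while the filtered version invites the socially awkward admission of not caring at all.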

Surveyors have another big problem: getting people to cooperate. Increasingly, people don't like to be bothered by pollsters. In two-career families, it's hard for telephone surveyors to catch anyone at home. When they are at home, people may suspect that the survey is one of those telephone solicitation scams in which a caller poses as an opinion pollster but turns out to be a salesperson.

Accordingly, the "response rate"--the proportion of people who cooperate with phone and mail surveys--has been going down and down. Newspaper researchers say it's very hard now to get a response rate higher than 50 percent, which used to be considered the lowest acceptable level. The problem is that as response rates fall, a self-selection bias is introduced. The people most likely to answer a survey questionnaire and mail it in, for instance, are those who feel most strongly about the issues in the survey. People who feel less strongly just pitch it. Two Ohio State professors, writing about nonresponse bias in Public Opinion Quarterly, felt moved to warn, "Trusting the results of a mail survey is like lowering yourself into a dark pit in search of treasure and trusting that you won't be bitten by a snake."

You can raise the response rate by paying people a cash incentive, usually from $5 to $20, to return the questionnaire. But this can overweight your sample with people who need the five bucks.

Yet another kind of bias seems to have been created by the use of telephone answering machines. Households that screen nuisance calls with these machines tend to have higher-than-average incomes, more young residents and higher levels of education. Such households also tend to be urban or suburban, not rural. David Neft, Gannett's chief researcher, told me he foresees a time "when we're going to pretty much lose the phone as the basic mode of interviewing." He isn't sure what will replace it, although the Internet comes naturally to mind.

"Any good survey is only as good as the questions you ask," says Tony Casale, the Princeton-based researcher. "There's a lot of bad questions asked on a lot of surveys."

Casale is a hands-on guy, not a scholar. He started out as a reporter for Gannett, then was an editor at USA Today for a while, and then he became the company's director of news research, where he got to see a lot of the work done for newspapers by outside vendors. Finally, with the formation of his own company, he became an outside vendor himself. So when the talk turns to built-in biases, leading questions and misinterpretation, Casale has seen it from all sides.

"I saw this in a survey once," he says, "this exact question: 'What do you believe in most--radio, television or large, prestigious newspapers such as the New York Times and the Los Angeles Times?'

"What they're saying is, 'Well, whaddaya think, moron?' "

Most of the problems Casale encounters are more subtle than that. He says he's noticed some questions get different responses when you ask them on the phone than when you ask them in a survey booklet mailed to people's homes. Questions about columnists, for instance. If you ask about a newspaper column over the phone, people may say they haven't read it, but if you show them a picture of the column with the logo, they'll remember that they did. "The visual awareness helps get more accurate data."

How did Casale learn that? I ask. "A lot of mistakes," he says. "Believe me, a lot of mistakes."

A typical modern survey has two parts--a telephone interview and then a follow-up mailing that contains questions in a booklet. "Things in need of probing we do on the telephone interview," he says. If people tell you in the mail survey they don't read the paper because they don't have time, you're stuck with that vague answer. "On the telephone interview, you can say, 'What do you mean you don't have time? Is it you really don't have time? Is the paper not worth your time? Is it not delivered early enough?' So the probing type of questions we need to put in the telephone interview."

It's also possible, theoretically, to correct for "order bias" in a telephone survey. In the case of the tennis-soccer question, for instance, one could reverse the order of the terms with every other respondent, so the built-in biases balance out.
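As a rough illustration of that balancing idea, here is a small Python sketch that alternates which sport is named first, so each wording goes to half the respondents. The question text and variable names are hypothetical; only the alternation technique matters.

```python
# Sketch of correcting for order bias by alternating the comparative wording
# across respondents (question text is invented for illustration).

def comparative_question(respondent_index):
    """Return the wording for this respondent, reversing the order every other time."""
    if respondent_index % 2 == 0:
        return "Do you find tennis more or less exciting than soccer to watch on TV?"
    return "Do you find soccer more or less exciting than tennis to watch on TV?"

# Over a full sample, half the respondents see each wording, so whatever bias
# comes from which sport is named first should roughly cancel out.
for i in range(4):
    print(i, comparative_question(i))
```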

"Question order is an absolute art form," Casale says. "Once when I was at Gannett, a vendor gave us a survey. He'd asked about 10 minutes' worth of questions about his local newspaper. 'What do you read in it? How much do you like it?' And then he says, 'OK, what's your primary source of local news and information?' Well, you've already sensitized him to say that the local newspaper is. So you have to put that earlier in the questionnaire, before the sensitizing."

Another general rule is to save the purely factual questions--those about people's interests and activities--for later in the questionnaire, because these are less subject to question order bias.

On the other hand, a person's patience starts to wear thin after too long on the phone. "If you go past 30 minutes," Casale says, "people start giving you answers that they think you want to hear, just to get you off the phone."

One of the most common mistakes of all, in Casale's opinion, is to ask people to speculate about how they might respond if a newspaper changed the way it presents the news. "I've seen too many times where people would say, 'If we put five more stories on page one would you read it?' And the increased story count didn't have any effect. Or the advertising question, 'How likely are you in the next 12 months to buy a new car, a major appliance?' Well, most people don't buy a major appliance until theirs breaks down, let's face it. But you'll get 40 or 50 percent saying they're going to do it. And then when you ask the question on the survey, 'During the past 12 months have you done it?' it's really like 15 percent....

"Questions like, 'Is the paper better or worse or about the same as it was two years ago?' I've just seen paper after paper after paper where the answers are the same every single year, no matter what they do."

Given all this, it's easy to understand why pollsters did not foresee that the Democrats would gain seats in last fall's congressional elections. Or why so many television programs, after being vetted by focus groups, fail in the market. The focus group and the sample survey are blunt instruments at best. Unfortunately, they are just about the only basic tools market researchers have.

In his book "Focus Groups: A Practical Guide for Applied Research," Richard A. Krueger explains how the focus group and the survey can be used in such a way that one complements the other. He cites a study of morale problems among student volunteers at the University of North Carolina. The first step in assessing this problem was to survey the volunteers with a short questionnaire. The key question was: How satisfied are you with your volunteer position?

[ ] 5. Very Satisfied

[ ] 4. Satisfied

[ ] 3. Neither Satisfied nor Dissatisfied

[ ] 2. Dissatisfied

[ ] 1. Very Dissatisfied

The analysis showed that one subgroup of the volunteers, those with the most experience, had an average score of 1.7--a cause for concern. The second step was to interview the students in a "qualitative" way, asking them to explain in their own words why they were satisfied or dissatisfied. This produced a more complete picture than the bare statistics.
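The arithmetic behind that 1.7 is simple: code each checkbox from 1 to 5 and take the mean for each subgroup. Here is a minimal sketch, with invented response values chosen only to show how such a figure might arise.

```python
# Sketch of the "quantitative" step: code each response 1-5 and average by subgroup.
# The answers below are hypothetical, chosen only to illustrate a mean near 1.7.

most_experienced = [2, 1, 2, 2, 1, 2, 2]   # coded answers from one subgroup

average = sum(most_experienced) / len(most_experienced)
print(f"Average satisfaction: {average:.1f}")   # prints 1.7 for this made-up data

if average < 3:
    print("Subgroup falls below 'Neither Satisfied nor Dissatisfied' -- worth probing qualitatively.")
```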

Although the North Carolina experiment stopped there, it could have been taken further. One could have made a list of the issues the students raised, written those into a formal questionnaire and then asked everyone to rate their relative importance. This is the same dialectical, back-and-forth approach Robert Merton and his colleagues used with the soldiers in World War II.

It's also, in a general way, what has been happening in newspaper research with the issue of local news. First, newspapers discovered that readers rate local news very high in surveys. Next, probing this finding in focus groups, they found that the term "local news" is extremely vague. Now, in surveys, many researchers try to use more specific language, asking not about "local news" but about "news of your neighborhood" or "news about your city."

At the Philadelphia Inquirer, because readers gave local news such a high rating on surveys, executives wondered whether the paper should play more local stories on page one. So in 1997, they produced six front-page prototypes and showed them to focus groups. The prototypes were:

• The Inquirer's actual front page for June 26, 1997, which had three local and four national and foreign stories, sky boxes (prominent graphic keys to inside stories) at the top and a dominant photo taken at a Phillies game.

• A page with only one local story, all national and foreign stories above the fold, but three local sky boxes.

• A page containing mainly "soft news," with an informational graphic as the dominant piece of art.

• A page with only five stories, two of which did not jump.

• A page on which all but one of the stories were local.

• A page containing only four stories, plus an expanded index.

After running these prototypes through 10 focus groups, Tom Holbein of Belden Associates concluded that Inquirer readers liked the existing front page--the one with a mix of local, national and foreign stories--better than any of the other samples. True, these readers wanted plenty of local news, but they were happy to have it back in the local sections. If a local story made page one, they thought it should be important. In fact, any story that made page one was expected to be important. They didn't want any fluff out there.

Holbein's report said readers objected to photo features displacing more substantive news on page one, that they preferred six or seven stories out front rather than four or five, that they found sky boxes helpful and that they followed the jumps but wanted them to be clear and easy to find.

"Basically, people expect the Inquirer to be like the Washington Post," Holbein told me. "It has that national and international prestige, and essentially they want a mix of news on the front page." This is not the conclusion one might have drawn based solely on the mail and telephone surveys, with their high ratings for local news.


###