From AJR, December 2002

Fat City   

Is fat really bad for you? Articles in two top newspapers reached opposite conclusions. No surprise--readers and viewers often find themselves whipsawed by reporting on health issues. How can journalists keep confusion to a minimum?

By Rachel Smolkin

Related reading:
   » Health-Writing Tips
   » Helpful Web Sites

A succulent slab of steak, the kind that we all long ago learned would make us fat and give us heart attacks, graced the cover of the New York Times Magazine. An equally naughty, mouth-watering glob of butter swam atop the steak.

"What if Fat Doesn't Make You Fat?" asked the July 7 cover in bold, black lettering. Next to that provocative question, in smaller orange type, came these startling words: "Influential researchers are beginning to embrace the medical heresy that maybe Dr. Atkins was right."

In a contentious 7,909-word article, freelance science writer Gary Taubes asserted that conventional medical wisdom advocating low-fat, high-carbohydrate diets actually might have contributed to America's obesity epidemic. He advanced a largely untested hypothesis that high-fat, low-carbohydrate diets--the kind recommended by the disdained and unorthodox Robert Atkins--may well be healthier and more successful when it comes to shedding unwanted pounds.

On August 27, the Washington Post struck back. "What If the Big Fat Story Is Wrong?" the Post asked. Health reporter Sally Squires examined the research that experts had accused Taubes of "ignoring or downplaying," reinterviewed many people Taubes had talked to and concluded that a heap of good science suggests Taubes was mistaken. Squires declared that a "significant amount of high-quality research" contradicts Taubes' central arguments, including his contention that low-fat diets are proven failures at weight loss.

"Probably both journalists selectively used information to make their point," says Walter Willett, chairman of the department of nutrition at the Harvard School of Public Health, who talked with both reporters for their stories. "Maybe if you put [the stories] together, they're adequately balanced. I think together they would still leave the public confused and without a balanced view of the limitations of information on both sides and also the fact that there is this huge middle area that emphasizes healthy forms of fat and healthy forms of carbohydrates."

The dueling fat stories and opposing messages from two of the nation's top newspapers illustrate the challenge of communicating controversial and important health information without hopelessly confusing readers. On many major health issues, contradictory studies are as common as the cold, and experts disagree over how to interpret the data. Reporters confront the difficult tasks of deciding which studies they should include, what study limitations to highlight and how much credence to give minority opinions in medical disputes.

Reporters covering disagreements over the benefits of mammograms must figure out how to handle conflicting information. Journalists who reported that scientists unexpectedly halted a major hormone replacement therapy trial for women had to convey persuasive conclusions from a large and well-executed study but also communicate the uncertainty that still surrounds the issue. Stories exposing emerging questions about low-fat diets must inform readers that the minority view, though intriguing, has not been rigorously studied, and the majority view, while accepted, is not completely supported by research.

The dangers of incomplete or misleading health stories are more personal for consumers than those of imperfect pieces about political debates or court decisions.

A June article in the Journal of the American Medical Association cites the news media's "powerful influence on public perceptions." The article, which questions whether coverage of preliminary studies presented at scientific meetings is "too much, too soon," concludes: "Press coverage at this early stage may leave the public with the false impression that the data are in fact mature, the methods valid, and the findings widely accepted. As a consequence, patients may experience undue hope or anxiety or may seek unproved, useless, or even dangerous tests and treatments."

Marcia Angell, then executive editor of the New England Journal of Medicine, explained the risks of reporting on medical studies in a 1997 New York Times Magazine piece. "No sooner do we publish a study on diet or lifestyle than news of its conclusions, though virtually none of its qualifying details, hits the airwaves," Angell wrote. "Within 24 hours, millions of people consider eating fewer egg yolks or more oat bran to fend off disease. This meritocratic faith ignores one critical fact: Science has hardly begun to touch the big mysteries about diet and other habits. We simply do not know much about what is risky and what isn't, and what we do know is often distorted or misinterpreted."

And a 2001 article in the now-defunct journal Effective Clinical Practice says: "Distorted journalistic reports can generate both false hopes and unwarranted fears.... News about such topics as diet, cholesterol, the toxic shock syndrome, and breast implantation affects individual behavior and sometimes causes panic."

The axioms of good science--that a single study seldom proves something, that breakthroughs are infrequent and cures rare indeed--don't necessarily make for compelling news stories, complicating the mission of health reporters.

"In the final analysis, good health and medical writing and hot news don't really go together very well," says Andrew Holtz, an independent health reporter and president of the Association of Health Care Journalists. "There's a culture clash that you have to deal with. You want your story to be clear, exciting, interesting, and a lot of times real medical research is much more complicated and equivocal than that."

Health reporters and physicians alike caution journalists to avoid the temptation of a flashy headline. Not all studies are high quality, and those that are usually advance scientific understanding rather than provide a breakthrough. Not all sides of a debate deserve equal space in a story if the evidence supporting them is not equal. And while scientists are smart, they may not have all the answers.

"If you do stories that leave people with a good sense that there are uncertainties and caveats and things that are unknown, that's good," Holtz says. "That's a mature, healthy way to look at a lot of these issues."

When Gary Taubes wrote his New York Times Magazine article on fat, he wanted to expose what he felt were uncertainties in an accepted scientific viewpoint. "We have been bombarded for two decades with low-fat dogma that could be wrong," says Taubes, a contributing correspondent for the journal Science. "The goal was to show that the established science doesn't stand up to scrutiny, that the evidence is a house of cards, and that there's an alternative hypothesis which potentially fits the evidence better."

Taubes intended to "shake up" the health establishment, and he certainly succeeded. The "author now finds himself in the center of a scientific stew of his own making," Squires wrote on August 27. "Experts criticized Taubes for suggesting the Atkins diet has merit when it has not been rigorously studied, much less been proven effective or safe.... Many members of the professional nutrition community, whose work was directly or indirectly questioned by the Times piece, were incensed."

Squires, a full-time health reporter since 1981, read Taubes' story and believed it created an impression at odds with much of what is known about low-fat diets and saturated fats--those fats in meat and whole-milk dairy products associated with clogged arteries, heart conditions, strokes and other blood vessel diseases.

"We got a lot of questions from readers who were curious about it, and I heard from a number of scientists that some things hadn't been as well explained as they might have been, that it might have created some misperceptions that might be confusing to the public," Squires says. "We thought it was worth a closer look."

Other health and science writers echoed the Post's concerns about Taubes' piece.

Paul Raeburn, a senior writer at BusinessWeek and president of the National Association of Science Writers, credits Taubes with exploring an emerging viewpoint in the nutrition field--a valuable endeavor for a science writer. But Raeburn objects to Taubes' presentation. Although Taubes acknowledges at the outset that he's writing about the views of a small but growing minority, Raeburn says Taubes should have emphasized throughout the article that he was advancing an unproved viewpoint and that many studies support the other side.

"I do think that Gary Taubes' piece was misleading," says Raeburn, who covers science, medicine and the environment. He notes that the cover art and the story's opening anecdote suggest that people can eat as much fat as they want. "Not many people believe you can eat all the fat you want, and people quoted in the article didn't make that case," Raeburn says. "If you just look at the article, you might think that."

James O. Hill, director of the Center for Human Nutrition at the University of Colorado Health Sciences Center, agrees that Taubes presented a misleading message to the public. "If it wasn't affecting the consumer, and it was just a dialogue among the experts, I'd say this is great" for stimulating dialogue, says Hill, a proponent of low-fat, high-carbohydrate diets. "But a lot of people change their diet based on an article like that."

Hill spoke with both Taubes and Squires. He says Squires presented "a more balanced view, but she was writing in response [to Taubes]. She was generally true to the data."

But Harvard's Willett, who advocates testing the high-fat hypothesis, says both reporters chose information selectively, including quotes from him.

He praises Taubes for doing a "very good job" explaining how scientists came to advocate the low-fat diet. "He actually made a pretty good case there that would be useful to many readers to try to understand how this almost religious belief in low-fat diets came to be," Willett says. But he adds that Taubes made an unsubstantiated leap to the other extreme by positing high fat as the ideal diet.

"The cover on the magazine was pretty inflammatory," Willett adds. "It was probably meant to be, but if it had olive oil and salmon instead of butter and steak, it would have been a whole lot better."

Janet Froelich, art director of the New York Times Magazine, says she and her colleagues created the cover art because steak and butter symbolize fat in readers' minds. "It's not an infographic," Froelich says. "A piece of fish is in people's minds not aligned with fat at all." She says the cover's function is to pull readers inside, where the text details the complexities of the issue.

In the Post story on fat, Squires quoted Willett as saying that Taubes hadn't included his cautions about unhealthy fats, but Willett says she didn't include his positive statements about Taubes' article. "Reading [Squires' story] you came away with the idea that low-fat, high-carbohydrate diets were good because various experts had been against Gary Taubes," Willett says.

Addressing the notion that he had concentrated on material that supported his thesis, Taubes says: "The point is, when you're questioning a dogma in limited space, you're not going to use half of it to repeat what we have been told we should believe with religious certainty for the past 20 years. You raise the questions that the dogma can't answer." Asked about the same charge regarding her work, Squires says simply, "The story speaks for itself."

Willett suggests a third journalistic approach: delving into the existing studies but also exposing their limitations and exploring a dietary middle ground that might prove healthier and more effective at weight loss than either the Atkins diet or the low-fat, high-carbohydrate approach.

"There's a spectrum of evidence, and in between a whole spectrum of uncertainty," Willett says. "It's very important to point out where the evidence is simply lacking one way or another, and that's very often where people have the strongest positions."

Another important health topic that evokes fervent debate is the value of mammography screening as a cancer-detection tool. While the controversy is not new, the press coverage has intensified.

In a paper published by The Lancet, a British medical journal, on October 20, 2001, two researchers in Denmark asserted that existing evidence does not show a benefit to mammography. In a new analysis of data from a study done in 2000, they concluded that mammograms promote more aggressive treatment yet have not been shown to lower the mortality rate from breast cancer. The researchers, Peter Gotzsche and Ole Olsen of the Nordic Cochrane Center in Copenhagen, analyzed seven large studies of mammography and concluded that five were designed so poorly that the results were untrustworthy.

When their paper appeared, it received media coverage in England but virtually none in the United States, where health reporters were scrambling to cover the anthrax attacks. Then, on December 9, 2001, the New York Times ran a front-page Sunday story headlined, "Study Sets Off Debate Over Mammograms' Value."

"The question is dividing experts and women's health advocates, many of whom acknowledge that they do not know what to think about the new report," science and medical reporter Gina Kolata wrote. "For more than two decades, annual mammograms have been part of life for millions of women, with the American Cancer Society and the National Cancer Institute urging women to have them. Experts are still digesting the new findings...and few if any authorities in the United States are suggesting that women abandon routine mammography on the basis of this study."

Kolata explained that Gotzsche and Olsen say studies showing benefits from mammography were flawed and that recent studies, "more rigorously designed and conducted, found no such effects."

She also included perspective from experts unconvinced by Gotzsche and Olsen's analysis and approach. Robert A. Smith, director of the cancer screening division at the American Cancer Society, defended the studies that Gotzsche and Olsen criticized.

The moment the Times placed the story on page one, it exploded into other newspapers and onto television broadcasts. "When the phones started ringing, it wasn't on our radar at all," says Michael McCarthy, the North American editor of The Lancet. "I thought Gina Kolata did a very good job. She was writing about the controversy caused by the paper, not the paper itself."

Kolata had been slow in noticing the article because she was mired in reporting on anthrax. When she finally noticed the study in The Lancet, she began calling experts in the field to ask what they thought. Some told her that the study had prompted them to rethink the evidence on mammograms, and even some ardent advocates of mammography said the benefits had been oversold to women.

"As a reporter, if there's something out there that's important to our readers and is newsworthy, it's not for me to tell them what to think," Kolata says. "If there's a debate, I can't censor it. All I did was report what's in The Lancet and what people said about it."

But at least one mammography proponent argues that the New York Times erred in providing such a prominent airing of the dispute. "The medical journal Lancet irresponsibly published a letter that called into question whether mammographic screening could save lives," Daniel B. Kopans, a radiology professor at Harvard Medical School, contended in a January op-ed in the Boston Globe. "Ignored by most physicians and scientists, the letter was given credence two months later by an unconscionable front-page article in the New York Times, highlighting its conclusions. Like a stone dropped in the water, waves of confusion spread from the article."

New York Times Science Editor Cornelia Dean responds: "Obviously he disagrees with the Danish researchers. We felt that the report was interesting and newsworthy." She considered it worthy of page-one play because of the issue's significance. The idea that a "challenge to the value of mammography would be taken seriously seems like an important news development," she says.

On April 9, the Times tackled the mammography controversy in even greater detail, devoting an issue of Science Times to exploring many aspects of "Confronting Cancer." Kolata wrote several stories explaining how experts could look at the same clinical data on mammography screening and disagree over its meaning.

"The topic is confusing. There's a lot of uncertainty," Holtz, of the Association of Health Care Journalists, says of the mammography debate. "An honest journalist can only lay out that, 'Look folks, there are questions here. There are concerns. There is debate.' "

Health reporters faced a somewhat different challenge in covering the discontinuation of a major hormone replacement therapy trial.

"The announcement yesterday that a hormone replacement regimen taken by 6 million American women did more harm than good was met with puzzlement and disbelief by women and their doctors across the country," wrote New York Times reporters Kolata and Melody Petersen on July 10.

Added Los Angeles Times reporter Rosie Mestel: "Three years ahead of schedule, scientists have unexpectedly halted a critical clinical trial testing the effects of hormone replacement therapy on women after menopause because of a slight but significant increase in the risk of breast cancer, heart attacks, blood clots and strokes." The trial, part of a government-financed research program called the Women's Health Initiative, tracked more than 16,000 women and found that a combination of estrogen and progestin caused small increases in breast cancer, heart attacks, strokes and blood clots. "Those risks outweighed the drugs' benefits--a small decrease in hip fractures and a decrease in colorectal cancer," the New York Times reported.

Some stories, including the New York Times article, carefully pointed out that the risks were "minuscule" for an individual woman. Those stories reported the numbers in absolute terms, showing, for example, that if 10,000 women take the hormones for one year, eight more would have breast cancer than a comparable group not taking the hormones.

The Los Angeles Times adopted a cautious tone, telling readers, "Because the effect is small, there is plenty of time for women to digest the new information and carefully weigh the options with their doctors. Some may decide, after doing so, to continue taking hormones." That article also noted that many women "may still opt to take hormones for short spells" to treat menopause symptoms such as hot flashes, night sweats, mood swings and vaginal dryness.

But not every news report made such a deliberate effort to avoid scaring women taking the hormones.

"Doctor's offices and clinics across the country today were flooded with phone calls from concerned women responding to the alarming news regarding hormone replacement therapy," ABCNews.com reported on July 9.

CBSNews.com breathlessly declared: "The hormones harm, not protect, the heart--they actually increase previously healthy women's risk of a heart attack by 29 percent and a stroke by a stunning 41 percent." Only much later did the reporter explain that if 10,000 women take the hormones for one year, eight more will have strokes than a similar group not taking the hormones and seven more will have heart attacks.
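The gap between the two framings is simple arithmetic: a relative increase sounds dramatic, while the absolute number of extra cases it implies can be small. As a rough illustration (the baseline rates below are approximations inferred from the figures quoted in the coverage, not taken from the study itself):

```python
def extra_cases_per_10k(baseline_per_10k, relative_increase):
    """Extra cases per 10,000 people implied by a relative risk increase."""
    return baseline_per_10k * relative_increase

# Approximate baselines consistent with the numbers in the coverage:
# roughly 24 heart attacks and 20 strokes per 10,000 women per year.
print(round(extra_cases_per_10k(24, 0.29)))  # a 29% relative rise: ~7 extra heart attacks
print(round(extra_cases_per_10k(20, 0.41)))  # a 41% relative rise: ~8 extra strokes
```

The same data thus support both "a stunning 41 percent" and "eight more per 10,000 women," which is why reporters who gave only the relative figure left readers with an inflated sense of individual risk.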

And the "CBS Evening News" added: "In medical news tonight, they were hailed as a wonder drug combination for older women--building bones, even warding off heart attack. But tonight the wonder has come out of hormone replacement therapy."

Holtz, of the health journalists association, describes the trial as "the biggest and best study to date that really clarified things." Although the study results contributed important new information about the risks, "it didn't come out of the blue. There had been some smaller studies that hinted the benefits weren't as big as had been believed, and that the risks might be larger than had been thought. However, some of the news coverage [of the latest study] made those risks appear bigger than they really are."

William L. Lanier, editor in chief of the journal Mayo Clinic Proceedings, agrees the news coverage tended to emphasize the adverse effects of hormone replacement therapy without adding context and balance. In September, Lanier co-chaired the Mayo Clinic National Conference on Medicine and the Media, which examined medical news coverage.

The reporting "could possibly have placed into better perspective the risk-benefit ratio of hormone replacement therapy and could have emphasized that it wasn't hormone replacement therapy in general that this report investigated, but it was a study of a single two-drug combination," Lanier says. "Those results may not be generalizable to all hormone replacement therapy and all ages of patients and all populations."

Raeburn, of the science writers association, adds that stories about hormone replacement therapy were not accurate and complete unless they made clear that scientific uncertainty persists and that many people aren't sure what conclusions to draw.

"It will probably turn out that hormone replacement therapy is good for some people in some circumstances and not others," Raeburn says. "Stories that said flatly that it's dangerous and harmful may have gone too far."

Health journalists and doctors warn against inflammatory stories that could panic readers. Often, it is not the story itself, but the headline that sounds an alarmist tone. JAMA Editor Catherine DeAngelis says that a headline might announce the discovery of a cure when the story itself doesn't indicate that at all. Headline writers "have to say, 'Wait a minute. This may sell a paper, but it's not true; it's misleading,' " DeAngelis advises.

Joanne Silberner, a health policy correspondent at National Public Radio, recommends that reporters check their headlines even if that involves staying late or calling the copy desk after work. She recalls a story she wrote as a reporter at U.S. News & World Report that nearly received the cover line: "HOW YOU CAN BEAT BREAST CANCER." Silberner objected--and was told she knew nothing about selling magazines. Then a senior editor complained. The final cover line read: "HOW TO BEAT BREAST CANCER."

Silberner says she didn't love the final choice, but at least it no longer implied that "you personally could beat breast cancer if only you read what was in that magazine."

Careful choices about which stories to cover also contribute to responsible health reporting. "Some stories are just not ready to be done," Silberner says. "At NPR I have wonderful editors who are very knowledgeable about how science works. We might not think that the conclusions [of a study] are solid or really well worked out." But if a study in that category is clearly destined to generate news or already has received media coverage, Silberner says she and her editors may opt to run a "corrective" story explaining that a particular study is being published but that it has deficiencies.

Ivan Oransky, who teaches medical journalism at New York University, tells his students to evaluate each study's quality. "It's kind of important for everyone to know, and for reporters who are covering medicine to know, that not all studies are created equal," says Oransky, who is also Web editor of The Scientist magazine. "That sounds almost trite, but if you read a lot of coverage, reporters don't seem to grasp that. There's a perception that once something is published, it can stand with any other study."

Oransky recommends that reporters start with three questions in evaluating a study: Where is it being published? Who are the patients or subjects? What kind of study is it?

Peer-reviewed journals, in which other experts in the field evaluate a study before it is accepted, don't always block shoddy research from being published, but they provide a way to promote quality studies. Evaluating the study subjects also helps reporters assess researchers' claims. A study about high-blood-pressure medication in 50-year-old women, for example, may not have much significance for 70-year-old men. In many medical studies, the subjects quite literally aren't human, and the results don't necessarily apply to people.

"Say Vitamin E protected rats against heart disease," Oransky says. "That's really great if you're a rat, and if I were a rat, I'd want to know that. But it isn't necessarily going to help me. The road of medical advances is littered with studies that worked great in rats but didn't look so great in humans. None of this means that doctors or researchers are doing the wrong thing. But humans are tough. You can't do things to humans that you can do to rats."

Oransky suggests looking carefully at the type of study. Is it a randomized controlled trial? In these trials, regarded as the gold standard of research, patients are randomly divided into groups, with each group taking a different treatment or a placebo. Does the study really present new data, or does it reevaluate data, sometimes by combining various studies with different methodologies and subjects? The mammography study in The Lancet that provoked a delayed media furor was actually a reanalysis. That does not negate a study's news value, Oransky says, but reporters should not mislead readers into thinking the information is new.
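Random assignment is the heart of the design Oransky describes: chance, not patient or doctor preference, decides who receives the treatment, so the groups are comparable. A minimal sketch of that idea (the function name and two-arm setup are illustrative, not drawn from any particular study):

```python
import random

def randomize(patients, arms=("treatment", "placebo"), seed=None):
    """Assign each patient to one arm by chance, as in a randomized controlled trial."""
    rng = random.Random(seed)
    shuffled = list(patients)
    rng.shuffle(shuffled)
    # Deal the shuffled patients round-robin into the arms.
    return {arm: shuffled[i::len(arms)] for i, arm in enumerate(arms)}

groups = randomize(range(100), seed=1)
# Each arm receives 50 patients; neither doctors nor patients chose the split.
```

Because assignment is random, differences in outcome between the arms can be attributed to the treatment itself rather than to who happened to choose it, which is why such trials outrank reanalyses of older data.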

Nascent data present other challenges for journalists. A JAMA article on coverage of scientific meetings, written by three researchers from the Department of Veterans Affairs Medical Center in White River Junction, Vermont, suggests raising the threshold for reporting on studies presented at such meetings. If reporters do cover these presentations, the authors advise that they emphasize the preliminary nature of the results and apply the same level of skepticism they bring to political coverage.

"In this way, the press might help readers to develop a healthy skepticism about the breakthroughs they repeatedly encounter in the news," wrote Lisa Schwartz, Steven Woloshin and Linda Baczek.

Kolata of the New York Times recommends that reporters be willing to call sources repeatedly for clarification or to solicit their opinions about the value of a published study. "I try not to be embarrassed to call somebody up, even if I've called them 10 times," Kolata says. "I never stop checking with people." She asks experts to cut to the heart of an argument and simplify their quotes. "I'll call them back and say it's just not going to be clear the way you said it," she says.

But not all reporters enjoy the luxury of space and editing provided in the New York Times. In the rush to deliver health news to a public fascinated by medical advances, local television reports on complex medical studies are squeezed into an average of 71 seconds, according to a Project for Excellence in Journalism study of TV news. Half of all health and medical television news stories on local newscasts are 42 seconds or less.

"With most medical studies, as they evolve through media sources until they finally reach the local television station, the content of the report gets distilled and attenuated," says Lanier of the Mayo Clinic. "By the time you get to the most distilled report, the patient may hear, 'Hormone replacement therapy is bad for you,' which is far from the conclusion of the original study."

Barbara Cochran, president of the Radio-Television News Directors Association, says there's no reason television health reports can't be accurate, but they do summarize news by necessity. "Broadcasting is an abbreviated window on the world, and people who tune in expect to get a summary of the important news of the day," Cochran says. "If they're interested, they fully expect to turn to sources of more detailed information."

Even a lengthy and detailed print story can cause confusion, as Taubes' fat story in the New York Times demonstrates. And, Taubes says, that's not necessarily bad.

"If the data are confusing, if the issue is unclear, then is it better that the public think they know the answer, even if the answer might be wrong?" Taubes asks. "Or is it better that the public know what the issues are?... The public should be confused. They should know with certainty what they've been told for 20 years is still controversial."

The Post's Squires has a slightly different approach. She often writes that there is scientific debate on an issue, but she tries to make the evidence as clear as possible. "We don't leave the reader hanging," Squires says. "We may not be able to give all the answers they would like, but we try to give as much as possible."

The contrasting approaches of two experienced science and health writers illustrate how, when the complex kingdom of science enters the realm of journalism, the result can be a fat lot of confusion for the public.

###