AJR Columns
From AJR, April 2001

Hit Me, View Me   

Confusing terms make it difficult to track traffic on the Web.

By Barb Palser
Barb Palser (bpalser@gmail.com), AJR's new-media columnist, is vice president, account management, with Internet Broadcasting.     



MAY MARKS ONE OF THE four major ratings periods for U.S. television stations, when audience numbers and demographics are compiled by Nielsen Media Research for each minute of programming in the broadcast day. These numbers are used to rank stations within each market and to order them by audience size, ultimately setting advertising rates for the coming months. It's sort of like the SATs of broadcasting.
Traffic tracking on the Web more resembles a politically correct kindergarten class, where everyone can be a winner--and usually believes they are.
Web sites can track their own audiences with detail that would make a news director drool. Software programs spit out scads of data ranking the most popular headlines at the moment, mapping traffic volume throughout the day and listing the external sites and search engines that are sending viewers there.
With this data, sites can deliver news with precision: arranging staff schedules to cover peak traffic times, making quick choices about which stories should remain in prominent positions, even ensuring that the layout is optimized for the most popular Web browsing software.
While a site can easily demonstrate its own improvement, letter grades--standard measures that allow comparison across sites--are a different story.
In the established media, terms like "circulation" and "gross ratings points" work like currency. Publications and stations know how they rate, in absolute terms and in relation to the competition.
On the Web we scarcely speak the same language, let alone believe one another when we do. Because Internet traffic can be measured in so many different ways and the glossary used to describe it is so mangled and misunderstood, audience information is usually subject to dispute and cautiously guarded.
The differences are most apparent in mainstream conversations and news reports.
One term often used to describe site traffic is "hits." Used properly, "hits" refers to the number of individual files requested from a Web server: each picture, document or frame counts as a separate file hit.
By that definition, "hits" would be a ridiculous standard of site traffic comparison, since a single screen on a sophisticated site might use dozens of different image and text files, while another site could use as few as one. Today the only appropriate use is to quantify the number of times a single file (say, a video clip) is requested.
A better way to track the popularity of a site or story is "page views." A page view is counted when a single page of content is requested, no matter how complex or simple.
Unfortunately, "hits" sounds hipper than "page views," and the two are often used interchangeably or combined in terms like "page hits" or "URL hits." To that mix, add terms like "visitors" and "user sessions," which try to describe the number of individuals looking at a site.
Because of this confusion, it's difficult to be certain what is being discussed in press releases or news reports. And there's a tremendous difference between a site that gets 10 million file hits in a month and one that gets 10 million page views. No wonder sites are shy about sharing their numbers.
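To make the hits-versus-page-views gap concrete, here is a minimal sketch. The log entries are invented for illustration (real server logs vary in format), but the counting logic reflects the definitions above: every file request is a hit, while only requests for pages count as page views.

```python
# Toy illustration of "hits" vs. "page views" from a web server log.
# The request paths below are invented for this example; real log
# formats (e.g. the Common Log Format) carry more fields.

PAGE_EXTENSIONS = (".html", ".htm")  # what we treat as a "page" here

log_requests = [
    "/news/index.html",    # the page itself:  1 page view, 1 hit
    "/images/logo.gif",    #   inline image:               1 hit
    "/images/photo1.jpg",  #   inline image:               1 hit
    "/styles/main.css",    #   stylesheet:                 1 hit
    "/sports/scores.htm",  # a second page:    1 page view, 1 hit
    "/images/logo.gif",    #   inline image:               1 hit
]

hits = len(log_requests)  # every file requested counts as a hit
page_views = sum(1 for path in log_requests
                 if path.endswith(PAGE_EXTENSIONS))

print(f"hits: {hits}, page views: {page_views}")
# A visit to just two pages produced 6 hits but only 2 page views.
```

The same visit yields a number three times larger under one definition than the other, which is why "10 million hits" and "10 million page views" describe very different audiences.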
This hasn't stopped anyone from doing business. For big sites and sophisticated advertisers, the problem is addressed by third-party auditing companies and number-crunching software that provide accurate reports of the number of times a banner is viewed or clicked.
But the lack of clear terminology has made it harder for sites to compare themselves with one another or to keep the confidence of advertisers, particularly in smaller markets where the sales force and clients are less familiar with online ad conventions.
A few companies are attempting to monitor Web users the same way that Nielsen monitors TV viewers (Nielsen//NetRatings is one of them) by tracking the surfing behavior of a sample group.
This approach faces real obstacles: the industry's youth, the Web's lack of geographic boundaries and the high turnover rate among sites. And sites that don't get top rankings are likely to dispute the results when they believe they have more detailed and accurate data in-house.
The first step toward accurate reporting is to agree on a language and to educate old-media partners to use it properly. Because the Web has no geographical market boundaries, a valid system of local rankings may never be possible. But we should be able to evaluate total traffic to sites, sections and stories with confidence.
Some might ask why reporters should care about ratings that ultimately determine ad rates. Anyone who would suggest that probably has never stood in a TV newsroom when the Nielsen numbers come in. The accuracy with which reporters reach and serve their audience--whatever the medium--is a matter of professional pride and confidence. The score does matter.

###