AustLit

Audience Research — single work, companion entry
Issue Details: First known date: 2014

Notes

  • AUDIENCE RESEARCH

    Most immediately identified with regular, large-scale, industry-sponsored sample surveys of radio listeners and television viewers (known as the ‘ratings’), audience research helps determine what programs go to air and when, and how much advertisers pay. Similar surveys cover newspaper and magazine readers. These surveys, organised by the commercial stations and paid for principally by them, are designed to provide advertisers, media buyers and media managers—commercial and non-commercial—with data about the size and composition of various audiences. Stations or networks that top the ratings overall can charge a premium for access to their audience. However, what the ratings measure, and how well they measure it, remains contentious.

    Ratings surveys have a long history. From 1934 to 1936, W.A. McNair (1902–79) from the advertising agency J. Walter Thompson (JWT) attempted to establish who—by age, sex and breadwinner’s occupation—was listening to what, each quarter-hour of every selected day, mainly in Sydney. After trialling various methods—interviewing by telephone (only 15 per cent of households had landlines), having children answer questionnaires in schools, distributing survey forms to offices and factories—McNair settled on face-to-face interviews, predominantly with housewives. In 1944, McNair established McNair Survey Pty Ltd within JWT, and by the late 1940s surveys were being conducted in all mainland capitals. Surveys were infrequent; in Adelaide, Brisbane and Perth, they were no more than twice yearly. And the method, ‘aided recall’, assumed that respondents could remember the household’s listening patterns the day before. In 1952, McNair became independent of JWT. Once television arrived, McNair widened its focus from ‘yesterday’s listening’ to ‘yesterday’s viewing’.

    Though clients bought both surveys, McNair’s biggest competitor was the Anderson Analysis of Broadcasting, established by George Anderson (1897–1974) in 1944. Daily, for two weeks, Anderson Analysis interviewed 22 different respondents in each of 12 Sydney ‘zones’. Using aided recall, Anderson graphed the proportion of sets in use every 15 minutes, from 6 a.m. to midnight, by day of week. The graphs were accompanied by data on the ‘leading’ sponsored programs, which enabled advertisers to calculate their expenditure on a cost-per-thousand-radio-homes basis.
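The cost-per-thousand-homes arithmetic that Anderson’s graphs supported can be sketched as follows (a minimal illustration; the function name and all figures are invented for the example):

```python
def cost_per_thousand(spot_cost: float, audience_homes: int) -> float:
    """Advertising cost per 1,000 radio homes reached: the buyer's basic
    efficiency measure for comparing stations and time slots."""
    return spot_cost / (audience_homes / 1000)

# Hypothetical comparison: a dearer spot can still be the cheaper buy per home
print(cost_per_thousand(40.0, 80_000))   # 0.5 per thousand homes
print(cost_per_thousand(25.0, 20_000))   # 1.25 per thousand homes
```

On this metric, the larger audience makes the more expensive spot the better value, which is why stations topping the ratings could charge a premium.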

    In 1947, Anderson switched to diaries. These were left with, completed by and collected from what was supposed to be a representative sample of listeners (and, later, viewers). Eventually, diaries would show the frequency with which individuals listened to particular programs—though, until survey periods were extended, only for programs that went to air more than once a week—and the number of listeners a station reached over a week (the ‘cumulative audience’, especially important to assessing the appeal of the ABC and SBS).
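The ‘cumulative audience’ a week of diaries yields is distinct from any single day’s audience; the distinction can be sketched with hypothetical diary data (names and figures invented):

```python
# Hypothetical diary records: person -> set of days on which they tuned to a station
diaries = {
    "A": {"Mon", "Tue", "Wed"},
    "B": {"Sun"},
    "C": set(),          # never tuned in during the survey week
    "D": {"Sat", "Sun"},
}
week = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

# Cumulative (weekly) audience: everyone who listened at least once in the week
cumulative = sum(1 for days in diaries.values() if days)

# Average daily audience: mean number of listeners per day
avg_daily = sum(
    len([p for p, days in diaries.items() if day in days]) for day in week
) / len(week)

print(cumulative)   # 3 people reached over the week
print(avg_daily)    # well under 1 listener on an average day
```

A station few people hear on any given day can still reach most of them across a week, which is why the cumulative measure mattered for assessing the ABC and SBS.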

    Initially, diaries simply reported how many men, women and children under 16 listened to a station at home for at least five minutes in any quarter-hour. The system allowed for no more than one radio per home. Later, diaries would cover listening done in cars, at work and elsewhere; with the introduction of television, they were extended to viewing as well.

    In the mid-1960s, under industry pressure, McNair switched to diaries, despite concerns about the reliability of diary-keepers. In 1973, Anderson and McNair merged, sample sizes were increased to 400 in Sydney and in Melbourne, and diaries extended to two weeks; however, demographic data remained restricted to sex and age. In 1980, the majority of McNair Anderson was acquired by AGB, with the US ratings conglomerate ACNielsen (later Nielsen) acquiring AGB McNair in 1994.

    Introduced by Roy Morgan Research in 1979, and later trialled by both AGB McNair and ACNielsen, ‘PeopleMeters’ became the industry standard for measuring audiences after ACNielsen won the commercial television industry tender to install them across the capital cities in 1991. A way of recording the station to which the television was tuned and, less reliably, the presence of individual viewers, meters enabled viewing data to be transmitted by telephone (by then owned by 95 per cent of households) and processed overnight. Second by second, they tracked when viewers turned on, switched off or changed channels. Meters appealed to the industry because they suggested that diarists generally under-reported viewing—especially by ABC viewers, by viewers on weekday afternoons and after 7.30 p.m., by viewers on Saturdays, and by teenagers on weekends. Meters also allowed for visitors in homes. Given access to sophisticated software, and data over periods longer than a week, industry players developed databases to model programming.
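The kind of overnight aggregation meters made possible can be sketched as follows: collapsing a stream of tuning events into quarter-hour totals per channel. This is a simplified illustration, not the actual ACNielsen or OzTAM processing, and the event data are invented:

```python
from datetime import datetime, timedelta

# Hypothetical meter events for one household: (timestamp, channel or None for 'off')
events = [
    (datetime(2014, 1, 1, 18, 0), "Seven"),
    (datetime(2014, 1, 1, 18, 20), "Nine"),
    (datetime(2014, 1, 1, 18, 50), None),   # set switched off
]
end_of_session = datetime(2014, 1, 1, 19, 0)

def quarter_hour_viewing(events, end):
    """Seconds tuned to each channel within each quarter-hour block."""
    totals = {}  # (quarter_hour_start, channel) -> seconds viewed
    for (start, channel), (nxt, _) in zip(events, events[1:] + [(end, None)]):
        if channel is None:      # set is off until the next event
            continue
        t = start
        while t < nxt:
            # Snap to the enclosing quarter-hour boundary
            block = t.replace(minute=(t.minute // 15) * 15, second=0, microsecond=0)
            block_end = min(block + timedelta(minutes=15), nxt)
            key = (block, channel)
            totals[key] = totals.get(key, 0) + int((block_end - t).total_seconds())
            t = block_end
    return totals

print(quarter_hour_viewing(events, end_of_session))
```

Summing such records across a metered panel, and weighting them to the population, is what turned second-by-second events into the published quarter-hour ratings.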

    In 1999, OzTAM (owned by the Seven, Nine and Ten Networks) was established to manage and market television audience measurement (TAM). In 2001, its first contract was won by ATR (Advanced Television Research) Australia, a subsidiary of AGB Italia, later Nielsen TAM. Using 3000 meters across the five mainland capitals, it was to measure audiences for the free-to-air channels. Subsequently, its remit expanded to cover the national pay television service, digital terrestrial television and—recognising the importance of personal video recorders—the viewing of recorded content played back within seven days of the original broadcast. It does not include television viewing on mobile and portable media: personal computers, laptops, phones and tablets. Differences in the early results produced by OzTAM, which owned the data, and ACNielsen, which operated independently of the industry, generated controversy—particularly as both used meters. Regional television (dominated by Prime Television, Southern Cross Television and WIN Television) remained ACNielsen’s domain, but small samples and wobbly numbers displeased its clients.

    Radio ratings, once organised by the Federation of Australian Radio Broadcasters (now Commercial Radio Australia), have also changed. Their frequency has increased: there are now eight survey periods a year, compared with four for Melbourne and Sydney and two or three in the other mainland capitals in the mid-1970s under McNair Anderson. Roughly 60,000 individuals aged 10 or over—one per household—complete a diary each year. Under GfK, the German firm that replaced Nielsen in 2014, one in five diary-keepers are recruited online and keep an e-diary. Results from Newcastle, Canberra and the Gold Coast are released three times a year. Elsewhere, including Hobart, stations continue to commission research ad hoc. The ABC supplements ratings data with its own audience research.

    The impact of research on the editorial direction of newspapers and magazines has been much less marked than the impact of the ratings on the programming of radio and television stations, where even small shifts in audience numbers can have real consequences. Until the late 1960s, readership surveys, published at six-monthly intervals, had limited impact even on advertisers. For many years, figures published by the Audit Bureau of Circulations, also at six-monthly intervals, were regarded as a more credible measure by many advertisers. One problem was that readership data were not linked to a wide enough range of consumption data. Another was the lack of detail about where newspaper buyers actually lived; a large proportion of readers were commuters who bought their papers from vendors distant from their homes. News Limited commissioned its own research to tackle this. Studies of advertising readership and brand recognition were also undertaken by Australian Consolidated Press (ACP) as early as the 1960s.

    Syndicated surveys in the 1960s were conducted by Anderson Analysis, using diaries, with readers classified in terms ‘such as occupations, home-owners, car-owners, smokers, and beauty-conscious women’, and by McNair, using aided recall, with additional items related to shopping. From 1970, this enabled McNair to approach advertisers with its single source Prime Prospect Profiles. In 1968, John Braithwaite’s Survey Research Centre, using larger samples, developed an approach that captured a wider range of data on income, life cycle, company purchases and product use.

    From the early 1970s, readership figures were also supplied to the industry on a quarterly basis by the Roy Morgan Research Centre. Morgan, which had begun its readership surveys in 1968, interviewed 1000 people over 14 years of age almost every weekend. By generating more conservative and, it was felt, more credible levels of readership, Morgan’s survey became the first to be endorsed as the industry standard. Respondents were asked whether they had ‘read or looked into’ particular dailies ‘at least three times a week’, ‘any issue’ of any of the weeklies ‘in the last seven days’ and any monthly ‘in the last month’—later shifting to specific issues of each monthly in the last 10–12 weeks, prompting respondents with miniaturised covers reproduced in black and white to remove the ‘prestige’ effect of colour. Disparities between Morgan’s figures and those of McNair Anderson, which also gathered its data face to face, were highlighted in 1983 after the Australian Women’s Weekly switched from a weekly to a monthly. McNair Anderson argued that Morgan’s measures were affected by its failure to call back when respondents were not available, and for monthlies by ‘memory loss’.

    Later, Morgan switched to diaries. These tracked print and online newspaper readership day by day, asked respondents to indicate the sections they ‘usually read or look into’, and recorded readers’ reports of the most useful newspaper for information about different products. They also tracked weekly magazines ‘read or looked into’ in the last two weeks, fortnightlies and advertising catalogues over the last four weeks, monthlies in the last two months, and other magazines in the last six months. And they tracked radio listening and television viewing, half-hour by half-hour; preferred media—including cinema and the internet—by time of day; and the medium respondents found most useful when making purchases. The diaries were incorporated into Morgan’s omnibus survey, the basis of its unrivalled single-source database that would eventually cover ‘lifestyle and attitudes, media consumption habits, brand and product usage, purchase intentions, retail visitation, service provider preferences, financial information and even recreation and leisure activities’, as well as ‘values segments’. Diaries defrayed costs. But the extraordinary length of the omnibus raised questions about how many respondents actually completed its surveys, and how representative they were of the original sample.

    The biggest newspaper companies (APN News and Media, Fairfax Media (now the Nine Entertainment Co.), News Limited and Seven West Media) had long criticised Morgan’s reports for being ‘too infrequent, lacking depth and transparency, and generating confusing results’—readership and circulation numbers sometimes moving in opposite directions. In 2006, these companies formed Newspaper Works (subsuming the Newspaper Publishers’ Association) to promote the press and to provide new measures of readership. From November 2013, EMMA (Enhanced Media Metrics Australia) provided monthly readership data on individual sections, across print, websites, smartphones and tablets, linked to Nielsen Online Ratings data. Morgan, which had collected detailed sectional reading in a form not sanctioned by the newspaper industry, had already moved to publish data on individual sections, including readers’ ‘engagement’. Now it started releasing data on a monthly basis. But the old story about audience surveys hadn’t changed: the figures generated by EMMA—generally higher, especially for magazines—proved difficult to reconcile with the Morgan numbers.

    Even if valid and reliable, ratings and readership data are inherently limited. Apart from focusing on English-speakers and demographics of interest to advertisers (regional, ethnic and Indigenous audiences, as well as audiences for community media, are under-sampled), they don’t measure affect, media effects or what the audience actually accepts; audiences are ‘consumers’, but other metaphors might fit better. There are alternative approaches, including the use of distant ‘memories’, as in Martyn Lyons and Lucy Taksa’s Australian Readers Remember (1992) and Kate Darian-Smith and Sue Turnbull’s Remembering Television (2012), and the use of contemporaneous sources—letters to television stations or newspapers—as well as ‘favourite television memories’, assayed by Alan McKee in Australian Television (2001).

    Much of the work that goes beyond the syndicated surveys is not publicly available. News Limited from the 1960s, and ACP in the 1980s and 1990s, had substantial budgets for confidential research. However, surveys undertaken by opinion pollsters and academics, or for media regulators and public broadcasters, are generally available. Henry Mayer et al. (1983) listed over 500 items derived from the major opinion polls and national academic surveys conducted between 1942 and 1980, as well as from regulatory and industry bodies.

    Excluded because of small samples or limited populations, but important because data from the immediate post-war years are scarce, are A.P. Elkin’s Our Opinions and the National Effort (1941), with its information about radio and questions on the readership, credibility and impact of newspapers; Alan Walker’s anthropological study of Coaltown (Cessnock) in New South Wales (1945); and both A.J. and J.J. McIntyre’s Country Towns in Victoria (1944) and A.J. McIntyre’s survey of Sunraysia (Mildura) (1948), which consider radio listening along with other forms of leisure. Mayer notes other surveys in The Press in Australia (1964).

    An updated compendium of questions and answers might include items from: the Australian Bureau of Statistics’ ‘Time Use Survey’ (1992– ); several reports commissioned by the Australian Broadcasting Authority (ABA) and its successor the Australian Communications and Media Authority (ACMA), most notably, Sources of News and Current Affairs (2001), Media and Communications in Australian Families 2007 (2007), and Community Attitudes to the Presentation of Factual Material and Viewpoints in Commercial Current Affairs Programs (2009); and the Morgan series (1976– ) on the perceived ‘ethics and honesty’ of journalists. It might document surveys commissioned by the ABC for The ABC in Review (1981), again in 1990 and, more regularly and recently, from Newspoll. It might encompass studies of the audience for ethnic broadcasting—those noted, for example, in the SBS annual reports. It might cover studies like A Report on Migrant Education Television in Australia (1979), for the Commonwealth Department of Education, or those commissioned on Multicultural Television (1986) by the Australian Institute of Multicultural Affairs. It might also include surveys with Indigenous respondents, such as Lelia Green’s Television and Other Frills (1988). And it might cover surveys like Kevin Durkin and Kate Aisbett’s Computer Games and Australians Today (1999), published by the Office of Film and Literature Classification (OFLC), and that on Community Perceptions of Sex, Sexuality and Nudity in Advertising (2010) produced for the Advertising Standards Bureau.

    There is also work funded by academic bodies. Attitudes to newspapers, radio and television, based on surveys conducted in 1966 and 1979, are reported in J.S. Western and C.A. Hughes, The Mass Media in Australia (2nd edn, 1983). Data on the use of these media, and the internet, during election campaigns are tracked by the Australian Election Study (1987– ). A mid-1990s survey exploring a wider range of media is reported in Tony Bennett et al.’s study, Accounting for Tastes (1999). Overlooked by Bennett is the taste for videos, DVDs, magazines and the internet as sources of pornography—the subject of Hugh Potter’s Pornography (1996) and The Porn Report (2008) by Alan McKee et al. Also overlooked is research into book tastes. Hans Guldberg’s Books—Who Reads Them? (1990) is one of a series of studies commissioned by the Australia Council since 1978 on books, television and other leisure activities.

    Qualitative research, inevitably omitted from such compendia, should also be noted: the ABA’s Living with Television (1992); Toni Johnson-Woods’ audience study on the Big Brother phenomenon, Big Bother (2002); the ethnic audience’s experience of the mass media, reported in Bronwyn Coupe et al.’s report for the Office of Multicultural Affairs, Next Door Neighbours (1993); the diasporic audience analysed in Stuart Cunningham and John Sinclair’s Floating Lives (2000); the report for the ABA on The People We See on TV (1992), which focused on attitudes to the representation of Aborigines and people of non-English-speaking background; and the report by Michael Meadows et al., Community Media Matters (2007), which includes Indigenous broadcasting.

    With the development of radio in the 1930s, much academic media research came to focus on children. Children also became the main focus of content regulations for broadcasters. In Growing Up in an Australian City (1957), the first large-scale study of its kind, W.F. Connell and his students at the University of Sydney examined adolescents’ use of radio, films, newspapers, books and comics. In 12 to 20 (1975), Connell and his colleagues compared the reading, listening and viewing behaviour of boys and girls in and out of school; the discovery that newspaper reading at ages 13–14 was little different from that at 17–18 helped lower the minimum age in audience surveys. Newspapers and television, though not radio, also figure in R.W. Connell’s The Child’s Construction of Politics (1971).

    Other studies, mostly of children in Sydney and/or Melbourne, followed: John Blizard’s Individual Differences and Television Viewing Behaviour (1972); Kevin Tindall and David Reid’s Television’s Children (1975); Patricia Edgar’s Children and Screen Violence (1977); Edgar and Ursula Callus’s The Unknown Audience (1979); Mary Nixon’s TV is Funny, Boring, Exciting but I Love It (1981); Patricia Palmer’s The Lively Audience (1986); and Bob Hodge and David Tripp’s Children and Television (1986) and their reading of cartoons, the most theoretically sophisticated study since Raewyn Connell’s. Later, the ABA and the OFLC funded Margaret Cupitt and Sally Stockbridge’s Families and Electronic Entertainment (1996); the ABA sponsored Cupitt and colleagues’ study Infants and Television (1998), and Linda Sheldon and colleagues’ ‘Cool’ or ‘Gross’ (1994) and Kids Talk TV (1996), as well as Children’s Views about Media Harm (2000); while ACOSS published Young People, Gambling and the Internet (1997). More recently, ACMA has published studies of children’s viewing patterns.

    Educational researchers, alive to the moral panics television engendered, were attuned to the medium from the start. The first research monograph, Television and the Australian Adolescent (1962) by W.J. Campbell, was sponsored by the Australian Broadcasting Control Board (ABCB). Groups of Sydney adolescents were interviewed in 1956 and re-interviewed in 1959; other groups were interviewed for the first time in 1959. As well as answering questionnaires, students kept daily diaries for a week. By 1959, television had emerged as the biggest leisure activity, with the time spent listening to the radio halved. Other data shed light on the effects of television on family behaviour, neighbourhood relations and the impact television personalities had as ‘models’. The report was mostly reassuring. In 1958, R.J. Thomson’s Melbourne study of the impact on children and adolescents of Television Crime Drama (1959), also sponsored by the ABCB, was equally reassuring about media and violence: there was ‘no evidence’ in ‘the great majority of viewers’ that crime films ‘provoked any criminal or psychopathic tendencies’.

    REFs: G.M. Anderson, Radio Audience Research in Australia (1944); M. Balnaves et al. (eds), Mobilising the Audience (2002); M. Balnaves et al., Rating the Audience (2011); H. Henry (ed.), Readership Research (1984); K. Inglis, This is the ABC (1983) and Whose ABC? (2006); J.F. Kiernan (ed.), A Forum on Australian Media (1975); H. Mayer et al., The Media (1983); W.A. McNair, Radio Advertising in Australia (1937); E. More and K. Smith (eds), Case Studies in Australian Media Management (1992); R.R. Walker, Communicators (1967).

    MURRAY GOOT

Last amended 29 Jul 2021 13:13:49