|
MEDIA CONTENT RATING SYSTEMS: INFORMATIONAL ADVISORIES OR JUDGMENTAL RESTRICTIONS?
Donald F. Roberts
Thomas More Storke Professor in the Department of Communication
Stanford University
The Wally Langenschmidt Memorial Lecture
South African Broadcasting Corporation
Johannesburg, RSA, August 28, 1996
Today I’m going to talk about what has become one of the hotter issues in the world of
mass media in the U.S. Over the past two or three years in the states, we have conducted a
remarkable debate over whether and how to control media content. The discussion has included
most of the media – film, television, popular music recordings, computer games and video games,
and of course, the internet and the World Wide Web (ironically, books have been largely ignored),
and has ranged from arguments about whether controls are needed at all, to what kinds of controls
best fit U.S. political and social needs. The most recent upshot of this debate, although I
suspect not the end of the discussion, has been Federal legislation mandating that a
"V-chip" be installed in every new television set sold in the U.S., and that some kind
of rating system be implemented by January of next year (1997), a point to which I will
return.
In the next few minutes, I will try to explain why the ratings issue has gained such
momentum, present some preliminary results from the National Television Violence Study, discuss
how the V-chip works, and close with a discussion of what is called the RSAC rating system and my
reasons for preferring an informational advisory system to an evaluative rating system.
Protecting Children
For the most part, calls for controls on media content are cast as efforts to
"protect the children." That is, attempts to do anything about media content, whether
to label it, to restrict access to it, or to censor it totally, are generally justified in terms
of keeping children from harm. Children are presumed, quite justifiably, to be different from
adults -- to be more vulnerable, less able to apply critical judgmental standards, more at risk.
Such arguments are not new. Consider these comments by a psychiatrist, Dr. Edward Podolsky
to a U.S. Senate Subcommittee on Juvenile Delinquency. He spoke following the Committee’s
viewing of excerpts from several televised crime shows.
Seeing constant brutality, viciousness and unsocial acts results in hardness, intense
selfishness, even in mercilessness, proportionate to the amount of exposure and its play on the
native temperament of the child. Some cease to show resentment to insults, to indignities, and
even cruelty toward helpless old people, to women and other children (in Starker, p.
137).
I selected that particular quote because it implicates several of the consequences I will be
discussing in a few minutes – and because of its date – 1954. I cannot help but think that the
programs that committee viewed 42 years ago would elicit smiles, or yawns, if they were held up
as examples of television violence today.
Here’s another statement about children and the mass media:
The tendency of children to imitate the daring deeds seen upon the screen has been illustrated
in nearly every court in the land. Train wrecks, robberies, murders, thefts, runaways, and other
forms of juvenile delinquency have been traced to some particular film. The imitation is not
confined to young boys and girls, but extends even through adolescents and to adults (quoted in
Starker, 1989).
That is taken from a now defunct periodical entitled Education, commenting on the new
mass medium – film – in 1919.
I could continue moving back through history in hundred year chunks, reading similar
expressions of concern about media content that refer to each and every new medium – including
print. But let me end with one final quote:
Then shall we simply allow our children to listen to any story anyone happens to make up, and
so receive into their minds ideas often the very opposite of those we shall think they ought to
have when they are grown up? (Plato, The Republic)
The classicists among you may have recognized that: it’s Plato, giving his justification for
censorship as a necessary condition for building the ideal citizen to inhabit the Republic.
My point is simply that fear of what the media may do to children is nothing new.
Humans have always wrestled with the issue of what kinds of media content might be inappropriate
for children -- and what should be done about it.
Calls for Ratings
Responses to the question seem always to have ranged from "do nothing" at one
extreme, to "burn the books" (films/games/records—authors!) at the other. The middle
ground, in the U.S. at least, has taken the form of calls for implementation of some kind of
rating or labeling system – some means to identify the appropriateness of media content for
children, and then to use that system either to empower parents, or to legislate children’s
access, or some combination of the two.
Most of us are familiar with motion picture ratings. They have been around in many countries,
the U.S. and South Africa included, for a long time (see Federman, 1996). So why all the recent
concern? Why have ratings become such a social and political issue in the 90s?
There are probably many reasons that public concern with "doing something" about
media content has reached such a crescendo in the past few years. In the case of the U.S., I
believe two of the more important factors are that several disturbing social trends began to peak
at the same time that advances in communication technology enabled popular media to present
content in new and more disturbing ways than ever before. That is, just when our society was
experiencing dramatic (and unconscionable) increases in real-world violence and violent crime,
in teenage pregnancies and venereal disease, and in just plain incivility in general -- at just
that time, the media also began to portray violence, sex, and incivility in greater amounts and
more graphically than ever before. Film and television have now developed techniques to make
bodies explode and blood spray right before – if not into – our eyes; video games now reward kids
for the number of on-screen enemies they can decapitate, with bonus points for extra blood and
gore; some popular music lyrics, Web sites, and premium cable channel films make available –
indeed, make almost commonplace – sexual content that once resided almost exclusively within
"brown paper wrappers."
Given that adults have always worried that media exert undue influence on children, it is not
surprising that the co-occurrence of these two trends led to accusations that the mass media are
"obviously" having a negative impact on society, and thus that controls or restrictions
are needed. Since the U.S. constitution guarantees freedom of expression, one of the few viable
options for exercising some kind of control seems to lie with a ratings or advisory system – as
long as it is not implemented by the government. A New York Times poll published in July, 1995,
found that over 80% of all adult Americans and 91% of all parents favor the establishment of a
ratings system for television; 80% of parents believed that music recordings should be rated, and
86% of parents thought video tapes and video games needed ratings (Sex and power in popular culture,
1995).
Research on Media Violence
Before considering the kinds of ratings systems that have been proposed and
implemented in the States, I want to define some boundaries, make clear a premise or two, and
briefly consider several of the issues that have been discussed in the ongoing debate about
ratings, issues that I believe are central to understanding what a good rating system will look
like. For the most part (albeit not exclusively), I will focus on television and television
content; that’s what most scientific research has concerned itself with, and that’s what I know
best. Nevertheless, and this is one of the premises I want to make explicit, basic to almost
everything I say is my belief that a screen is a screen is a screen. That is, viewers,
especially children, do not respond differently to movie screens, television screens, and
computer screens; what holds for one probably holds for the others. I also believe that most of
the psychological principles that guide human responses to screen portrayals of violence, also
guide responses to portrayals of any other kind of behavior, from sexual to altruistic to how to
kick a football; the same kinds of things that increase a child’s learning of a violent act from
television will also increase learning of an altruistic act. Obviously there are some differences
across media and across types of content, but the evidence indicates that the similarities are
far more important than the differences.
Given those caveats and assumptions, let us look at some of the issues in the debate over
whether ratings are needed, and if so, what kind. We need to spend a few minutes looking at what
the research tells us about the impact of television (i.e., screen) violence on children, and
about what is currently on U.S. television.
First let’s consider research on the consequences of exposure to media violence. Most
research has focused on whether or not viewers learn aggressive behaviors or attitudes
through exposure to media portrayals of entertainment violence. Several exhaustive reviews of
over 2000 scientific studies conducted during the past 40 years lead to the unequivocal
conclusion that exposure to mass media portrayals of violence contribute to aggressive
attitudes and behavior in children, adolescents, and adults. Obviously media violence is not
the only cause of violent social behavior, but few social scientists would debate that it plays a
contributory role. Indeed, as long ago as 1982, a National Institute of Mental Health report on
television and behavior concluded: "In magnitude, television violence is as strongly
correlated with aggressive behavior as any other behavior variable that has been measured"
(National Institute of Mental Health, 1982). Studies conducted in the intervening 15 years have
not altered that judgment (Comstock & Paik, 1991). More often than not, those who still
claim that there is no evidence for such a causal connection are representatives of the media
industry – and simply have not read (or choose to ignore) the scientific research literature.
What the past decade and a half of research has added, however, is evidence that exposure to
media violence can have negative consequences beyond increasing the likelihood of viewers’
aggressive behavior. We now know that prolonged violence viewing can lead to emotional
desensitization, engendering callous attitudes toward real-world violence and decreasing
the likelihood of helping real victims. In addition, a third consequence of violence viewing
is increased fear of becoming a victim, which in turn leads to such things as
mistrust of others and to increases in self-protective behavior. In short, there is now
research evidence that excessive exposure to media violence can lead to learning
aggressive behavior, to desensitization, and to fear.
Given that such consequences of viewing are well-documented, the more interesting research
questions (particularly when faced with developing a media content rating system) concern
identification of the contextual factors within media content that seem to make a difference.
What are the ways of portraying violence that increase or decrease the likelihood of a negative
effect? Both intuitively and on the basis of scientific research, we know that some violent
programs are more problematic than others, that some ways of displaying violence are likely to
increase learning, fear, or desensitization, but that other depictions are quite likely to
decrease these outcomes. It does not take a scientific background to sense that the consequences
to viewers of a film like Schindler’s List (a man saves numerous Jews from the Nazi
concentration camps during World War II) are probably quite different than the consequences of a
film like Natural Born Killers (in which young adults blast a bloody swath across the
U.S.). Both films portray brutal violence, both show a number of killings, both are
relatively graphic – yet one is generally thought of as an anti-violence statement while the
other has been accused of celebrating violence. The interesting question is: "Why?"
What are the differences in context that make the two films so different? If we are to design a
content rating system that will differentiate between two such different media portrayals, such
questions are critical. A simple body count will not do the job.
Experiments over the past several decades have begun to provide some answers. As part of a
massive analysis of violence in U.S. television that I will describe in a moment (The National
Television Violence Study, 1996), Barbara Wilson and her colleagues (1996) at the University of
California at Santa Barbara recently completed an extensive review of extant experimental
research on media violence, with an eye to identifying those contextual factors that make a
difference in how viewers respond to violent content. Nine such contextual factors emerged from
the experimental research literature: 1) the nature or qualities of the perpetrator; 2) the
nature or qualities of the target or victim; 3) the reason for the violence – whether it is
justified or unjustified; 4) the presence of weapons; 5) the extent and/or graphicness of the
violence; 6) the degree of realism of the violence; 7) whether the violence is rewarded or
punished; 8) the consequences of the violence as indicated by harm or pain cues; 9) whether humor
is involved. Although the amount of research on each individual factor varies (i.e., we know a
great deal about the role of rewards and punishments but not a great deal about the role of
humor), there is adequate experimental evidence safely to conclude that each contextual factor
can either increase or decrease the probability that a violent portrayal will pose a risk
to viewers on at least one of the three outcomes: learning, desensitization, or fear. When
these contextual elements are mapped over the three outcomes, the matrix shown in Table 1
results.
Insert Table 1 about here
The arrowheads show what research says about how each contextual factor affects each outcome.
Thus, for example, when violence is rewarded we expect an increase in both learning and fear;
when violence is portrayed as unjustified, we expect a decrease in learning but an increase in
fear; humor should increase both learning and desensitization; and so on. The spaces where no
arrowhead occurs indicate a lack of evidence concerning how that particular contextual factor
affects that particular outcome. For example, no research examining how harm and pain cues
affect either fear or desensitization was located.
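The stated cells of this matrix can be set down as data. The sketch below encodes only the factor-to-outcome relationships the text gives explicitly ("+" for an expected increase, "-" for an expected decrease); missing keys stand for the blank cells where no experimental evidence was located. It is an illustration of the matrix's logic, not a reproduction of the full Table 1.

```python
# Partial encoding of Table 1: only the example cells stated in the text.
UP, DOWN = "+", "-"
TABLE1 = {
    ("rewarded", "learning"): UP,
    ("rewarded", "fear"): UP,
    ("unjustified", "learning"): DOWN,
    ("unjustified", "fear"): UP,
    ("humor", "learning"): UP,
    ("humor", "desensitization"): UP,
}

def effect(factor, outcome):
    """Return '+', '-', or None (no evidence located) for a Table 1 cell."""
    return TABLE1.get((factor, outcome))

print(effect("unjustified", "learning"))   # "-": unjustified violence decreases learning
print(effect("harm_pain_cues", "fear"))    # None: a blank cell in the matrix
```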
The National Television Violence Study
This matrix served to guide the National Television Violence Study, an ongoing,
multifaceted, three-year study of violence as portrayed on U.S. television funded by the National
Cable Television Association. The first year of the study was conducted under the administrative
auspices of Mediascope, an independent, non-profit media advocacy organization located in
Hollywood, and different facets of the research were carried out by faculty and staff from
Departments of Communication of four major U.S. universities: The University of California at
Santa Barbara, the University of Texas at Austin, The University of Wisconsin at Madison, and the
University of North Carolina at Chapel Hill. I will limit my description to the work carried out
by the Santa Barbara group which performed a content analysis of a representative week of U.S.
entertainment television portrayals of violence.
Think about that. They began with a data base consisting of a representative week of all
U.S. television. In order to get this, they sampled 23 channels of television available in
the Los Angeles area, including broadcast networks, independent channels, public television,
basic cable stations, and premium cable channels. For each of these channels they randomly
selected two daily, half-hour time slots between 6:00 A.M. and 11:00 P.M. over a period of 20
weeks, ultimately taping a total of 3,185 programs. After eliminating news programs, game shows,
religious programs, sports, instructional programs and "infomercials" (none of which
fell within their definition of entertainment programming), they were left with a sample of 2,693
programs – 2,737 hours of programming. This resulted in a representative 7-day composite week of
programming for each of the 23 channels. I think it is safe to say that there has never been a
more representative sample of television programs.
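The composite-week figure can be checked with simple arithmetic: 23 channels, a 6:00 A.M. to 11:00 P.M. window (17 hours per day), and seven days per channel.

```python
# Checking the composite-week arithmetic reported above.
channels = 23
hours_per_day = 23 - 6   # 6:00 A.M. to 11:00 P.M. is a 17-hour window
days = 7
total_hours = channels * hours_per_day * days
print(total_hours)  # 2737, matching the 2,737 hours of programming cited
```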
Not only is the sample impressive, but so is the coding scheme developed by this research
team. They defined violence as:
…any overt depiction of a credible threat of physical force or the actual use of such force
intended to physically harm an animate being or a group of animate beings. Violence also
includes certain depictions of physically harmful consequences against an animate being that
occur as a result of unseen violent means. Thus there are three primary types of violent
depictions: credible threats, behavioral acts and harmful consequences.
(National Television Violence Study, Content Analysis Codebook, 1994-1995, p. 3).
But more important than the definition of violence per se, they also developed precise
definitions for all of the contextual factors we have just noted. That is, they created coding
instructions that enabled content coders reliably to identify such things as harm, pain, humor,
justification, attractiveness of the target, and so forth. In other words, they coded not just
the occurrence of violence, but more important, they looked at the contextual factors associated
with violence. And finally, their coding scheme operated at three distinct levels – that of the
overall program, of the scene, and of the act, enabling independent inferences about the nature
and context of violent acts, violent scenes, and violent programs. This is necessary if we are to
be able to differentiate between a program that glorifies violence and one that condemns
it.
A study of this magnitude produces extensive results, so numerous that I’m not even going
to try to detail them for you. Rather, let me just summarize a few findings – a few that help
form the foundation of my argument about what kind of ratings and advisory system will best serve
the television audience.
The first conclusion from the study will not surprise you -- there is a great deal of violence
on U.S. entertainment television. Fifty-seven percent of all entertainment programs contained
violence. That’s a lot, but please don’t ignore the fact that 43% had no violence whatsoever.
More interesting to me than the total amount of violence is the nature of that violence. I’ve
summarized a few of the results in Table 2.
Insert Table 2 about here
My point should be clear. Although a substantial proportion of U.S. entertainment
television contains no violent content whatsoever, over half of the programs do portray violence.
Moreover, and this is what I find particularly disturbing, when violence does occur, it is often
portrayed in ways that are more likely than not to increase the chances of some kind of negative
effect on viewers. Violence often goes unpunished, seldom seems to result in either immediate
pain or negative long-term consequences, is often portrayed as something to laugh about, and so
on – all factors that have been shown to increase the likelihood of viewers learning to be more
aggressive, or becoming more fearful, or more desensitized.
Another way of putting this is that if one were developing a rating system to advise
parents about the likely problematic content within individual television shows (or films, or
video-games, or WEB sites), the contextual factors characteristic of much (not all, but much)
violent programming on U.S. television are just those that should increase the need to
label that content as inappropriate for children.
The V-Chip
Let me now turn directly to the issue of the new V-chip legislation in the U.S. and
the ongoing discussion about how it will be implemented, that is, to the rating system (or
systems) that will make the chip work.
As some of you may be aware, in February, President Clinton signed into law the
Telecommunications Act of 1996, a far-reaching piece of legislation which we are told is destined
to change the face of the telecommunications industry in the U.S. One small part of that
legislation is intended to empower parents by providing a way for them to control the television
content to which their children can have access. This was accomplished by mandating that within
two years of the signing of the bill, all new television sets sold in the U.S. must contain a
V-chip and that within one year the television industry must develop a system to implement V-chip
capabilities (otherwise, the Federal Communications Commission will appoint an independent
committee to do it for them). Now what does this mean? What is the V-chip, and what system
might implement it?
Well, a V-chip is simply a piece of hardware – a very tiny piece of hardware – to be
included in the electronics of new TV sets (or added to existing TV sets). It allows consumers
to block shows depending on their content rating. The chip reads a signal which is not visible
to viewers (it is embedded in the vertical blanking interval, the portion of the signal that
currently carries "closed caption" services for the hearing impaired). That signal,
which is to be included within every television program, will carry information about the content
rating of the program. The consumer can program the chip to any particular rating or level of
intensity; programs that exceed the selected level are blocked. Thus, for example, a show might
be rated anywhere from 0 for no violence to 5 for a great deal of violence, or from G
(General Audience) to NC-17 (no child under 17) (as is currently the case with the Motion Picture
Association of America system). Parents decide what kinds of shows are to be allowed in their
home on the basis of the ratings, and set the V-chip to block anything in excess of the selected
level. Once that selection is made, the chip automatically decodes the ratings signal embedded
in each program and acts in accordance with parental (or other consumer) decisions. If the
program exceeds the rating, the V-chip will pick up the signal and the screen simply goes blank.
In short, the chip is nothing more than a device that enables consumers to decide what kinds of
television content they want to allow into their homes at any given time, and to block out any
content that does not meet their standards.
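The blocking decision itself is a simple comparison. Here is a minimal sketch, assuming the single 0-to-5 numeric violence scale used as an example above; the function name and parameters are illustrative, not part of any actual V-chip specification.

```python
# Sketch of the V-chip blocking decision on a 0 (no violence) to 5 scale.
def should_block(program_rating: int, parental_threshold: int) -> bool:
    """Block any program whose embedded rating exceeds the chosen level."""
    return program_rating > parental_threshold

print(should_block(4, 2))  # True: the screen simply goes blank
print(should_block(2, 2))  # False: the program is shown
```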
Several other things about this technology are important to note: 1) the V-chip is capable
of accommodating any one of a number of different ratings systems; 2) the chip can accommodate
several different systems simultaneously (there is no requirement to settle on a single
approach); 3) a single program can have independent ratings for different kinds of content; that
is, there can be one rating for violence, another for sex, and another for language, all
pertaining to the same program; 4) the chip can be turned on or off, or reprogrammed, at any
time.
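Point (3) above, independent ratings per content dimension, can be sketched as well: each dimension is checked against its own parental threshold, and the program is blocked if any one is exceeded. The dimension names follow the text; the numeric scale and function name are assumptions for illustration.

```python
# Sketch of per-dimension blocking: one program, independent ratings
# for violence, sex, and language, each with its own threshold.
def exceeds_any_threshold(program_ratings: dict, thresholds: dict) -> bool:
    return any(program_ratings.get(dim, 0) > limit
               for dim, limit in thresholds.items())

thresholds = {"violence": 2, "sex": 1, "language": 3}
# Violence is acceptable here, but the sex rating exceeds its threshold:
print(exceeds_any_threshold({"violence": 1, "sex": 3, "language": 0}, thresholds))  # True
```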
For these reasons, I believe the chip has been misnamed. Initially the V was appended to
indicate violence chip, but since it can do much more than respond to violence levels, I think a
more appropriate name would be C-chip, standing for choice chip. In order to make it a choice
chip, of course, requires giving consumers the necessary information to make reasoned choices –
what I call an informational content advisory.
The RSAC System
Clearly, the electronic capabilities of the V-chip are only part of the story. The
value of the chip depends at least as much on the system used to rate program content as on its
electronic screening and blocking capabilities, so let me turn to different approaches to how the
ratings might be assigned. The main distinction to be made is between descriptive versus
evaluative systems.
I’ll begin by asking how many of you would be happy if a program’s rating consisted of a
simple G (good for children) or NG (not good for children)? Prior to viewing, that’s all you
would know about the program – that it is either G or NG. Would the situation be better if there
were four or five rating levels – say from 0 to 5, or from G through PG, PG-13, and R, to NC-17
(again, the MPAA system)? Is that enough? Would your answer to these questions change depending
on who gave the rating – on whether, for example, the G or NG was assigned by a leader in your
educational system or by a prison convict serving time for child abuse? What if the rating was
always assigned by one of two educators, one of whom was obsessed with keeping violence off
television while the other made keeping children safe from nudity his life’s work – but you never
knew which gave a particular rating? Would it make a difference to you which one assigned the
rating? Are there some of you who are more concerned with portrayals of violence than with
portrayals of sex? Or vice versa? Do some of you have different feelings about depictions of
nudity? Of bad language? And even if you can agree on which kind of content is of most concern,
are you sure you can agree on how any particular content should be rated? Perhaps what you see
as brutal violence someone else will judge to be little more than a friendly tussle.
I began to realize the importance of such questions when I was asked by the U.S. Software
Publishers Association to develop a system to be used to label computer games. As I indicated
earlier, in the US the debate over ratings has implicated almost all media. Three years ago, in
response to the release of some video games that were particularly violent and bloody, several
members of the U.S. Congress brought pressure to bear on the video game and computer game
industries to develop some kind of parental advisory label to be placed on game packages. At
minimum, the argument went, consumers (parents) should have some indication of what was in the
game before they purchased it for their children. I won’t detail the history of that particular
debate, except to tell you that in order to preclude threatened government action, each of the
two industries (video games and computer games) developed a rating system. Ultimately, when the
system I worked on was complete, it was turned over to a non-profit advisory board that is
independent of the computer industry -- the Recreational Software Advisory Council (RSAC). The
rating system is now known as the RSAC system. In the past few months, a revised version, called
RSACi (i for internet) has gone into effect on the World Wide Web, and over 2000 WEB sites are
currently rated using it.
Several factors influenced the shape of the RSAC ratings system. Perhaps most important was
the whole issue of whether a rating should be judgmental or informational – (evaluative or
descriptive). That is, should a rating make an evaluative judgment about what a child should
see, or should it provide descriptive information about what is in the game, leaving the parent to make
the evaluative judgment? Is it better to label a program as "inappropriate for children
under 13 years," or to say "this program depicts violence that goes unpunished and
that results in injury to humans," leaving parents to decide whether their children should
view? I found that many parents object to judgmental, age-based ratings because they believe
such ratings are not appropriate for their own children. Some felt their 10-year-olds were
perfectly capable of handling some kinds of content likely to get a PG-13 rating, but not other
kinds; some felt that their 14-year-old should not see a PG-13 movie, but had a great deal of
trouble defending their position in the face of such "expert" ratings; and most
complained that the simple lettering system simply did not tell them enough to enable them to
exercise informed judgment. Since a PG-13 can be assigned on the basis of violence, or sex, or
language, they often are uncertain about what is at issue in a particular film. All they really
know when faced with such a rating is that someone has made an evaluative judgment that the
content is inappropriate for younger children.
Of course, to the extent that an advisory can do both, parents can exercise their own judgment
based on the information, with the additional knowledge of someone else’s evaluative judgment
about appropriate age levels. Whether that evaluative judgment is valuable depends on who
that someone else is and on the criteria underlying the judgment.
In any case, given that most game developers did not want others making evaluative judgments
about their products, and that many parents pleaded for more information, we decided to opt for a
descriptive, informational advisory system as opposed to an evaluative rating
system. Ultimately, we took as our model the food labeling system in the U.S., a system
requiring food packagers to provide on the package label information about all ingredients in the
package. Consumers are not told what they should or should not eat. Rather, they are given
adequate information and the decision is left to them. We developed the RSAC content advisory
system using the same principles.
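The contrast between the two approaches can be put as data. An evaluative rating collapses everything into one judgment whose reasons are hidden; a descriptive advisory lists the "ingredients" and leaves the judgment to the parent. The field names below are illustrative, not the actual RSAC schema.

```python
# One program, two labeling philosophies.
evaluative = {"rating": "PG-13"}   # someone else's judgment; reasons unknown
descriptive = {
    "violence": "unpunished, results in injury to humans",
    "sex": "none",
    "language": "none",
}
# The evaluative label cannot say WHY; the descriptive one can.
print("violence" in evaluative)    # False
print(descriptive["violence"])     # the information a parent can act on
```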
Another important factor that shaped the final form of the advisory system was logistical.
The nature of computer games makes it very difficult to require that they be screened by
independent raters. Unlike films or videotapes which can be viewed in 90 minutes or so, it can
take upwards of 100 hours to review a computer game (make that 200 hours if you are over 40).
Given the hundreds of games that need to be rated each year, that is simply impossible.
Therefore, we needed to develop a self-rating system – that is, a system whereby the game
developers themselves rate their own games. This, of course, created a new problem. To ask a
developer to rate his or her own game, particularly when developers tend to believe that lower
ratings for violence will decrease sales, is like asking the fox to guard the hen house…it would
seem to invite the developers to bend the rules. We had to find a way to keep them accurate and
honest, and just as important, a way that would also convince the public that self-administered
ratings were accurate and honest. The solution turned out to be quite simple. We simply took the
norms and canons of science and moved them into the public arena. That is, we developed a rating
system that was highly reliable and completely public, and those two attributes largely solved
the problem. Let me explain.
A highly reliable system means that any two individuals using the procedures correctly
will rate a game identically. This requires very concrete, very detailed definitions of
everything to be rated, and a set of questions about the content that ask for nothing other than
yes/no responses once those definitions are understood. The idea is that no matter how different
the individuals, if they use the same objective definitions correctly, and answer the questions
honestly, they cannot help but assign the same rating to a game or program.
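The reliability requirement amounts to making the rating a pure function of the yes/no answers: any two raters who answer identically must receive the identical advisory. The sketch below illustrates that property; the particular questions and scoring are hypothetical, not the actual RSAC questionnaire.

```python
# Reliability as a pure function: same honest answers, same rating.
# Question wording and scoring here are hypothetical illustrations.
def rate(answers: dict) -> int:
    level = 0
    if answers["depicts_violence"]:
        level = 1
        if answers["target_injured"]:
            level = 2
        if answers["blood_and_gore"]:
            level = 3
    return level

rater_a = {"depicts_violence": True, "target_injured": True, "blood_and_gore": False}
rater_b = dict(rater_a)               # a second rater giving the same answers
assert rate(rater_a) == rate(rater_b)  # identical answers force identical ratings
print(rate(rater_a))
```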
A public system means open to public oversight -- that anyone and everyone has access
to the system, its definitions, and its procedures. To the extent that open access to a reliable
system is guaranteed, then anyone can check the rating given to any game at any time. The idea
is that if we make it easy for anyone in the public to raise questions or objections in those
instances when they do not agree with the rating (using, of course, the same rating system), the
threat of such checks keeps game developers honest. If the developers misuse the system, they
face loss of their rating (which can cost them access to retail outlets) and heavy fines.
Finally, with the decision that our system would be informational, self-administered, public,
and designed to be as objective as possible, we turned to what to rate and how to rate it.
Both public opinion and prodding from Congress dictated that the games would be rated on each
of four dimensions – violence, sex, nudity, and language. (Final ratings combine sex and nudity,
but they are still rated separately.) For today, I will focus on violence. We decided that
there would be five levels – from 0 for no violence to 4 for the most potentially harmful
portrayals of violence. I reviewed the research literature in much the same way as Wilson and
her colleagues did, and identified dimensions of content – what they called contextual factors –
that were most likely to increase negative effects and that seemed most appropriate to the
content of games (since games do not have the kinds of story lines found in dramatic
narratives, the ratings focus on slightly different things). By this procedure we settled on
five primary factors that would make a difference in the level of the rating. These are:
- the nature of the target – is the target human-like, non-human, or an object?
- the stance of the target – is the target threatening or non-threatening?
- consequences to the target – death v. injury v. disappearance v. no consequences?
- depiction of blood and gore;
- consequences to the player – is the player rewarded or not rewarded for aggressive
behavior?
To the extent that one or more of these attributes occurs within a computer game, the level of
the violence advisory increases. For example, if the game portrays a threatening human who is
attacked but not injured, the game gets a 1; if the threatening human is injured, it gets a 2,
and so on. The combination of these various dimensions results in the logic chart shown in Table
3, which illustrates the various attributes in a computer game that result in different violence
advisories.
Insert Table 3 about here
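The logic chart amounts to a lookup from combinations of content attributes to an advisory level. A partial, illustrative encoding in Python follows; the attribute names are invented for the sketch, while the level assignments follow the chart as described in this lecture.

```python
# A partial, illustrative encoding of the logic chart: combinations of
# (target type, stance, consequence, player rewarded?) map to a violence
# advisory level. Attribute names are invented; levels follow the chart
# as described in the lecture.

LOGIC = {
    # Threatening human targets: the reward dimension does not apply (None).
    ("human", "threatening", "no_damage", None): 1,
    ("human", "threatening", "damage", None): 2,
    # Non-threatening human targets: whether the player is rewarded matters.
    ("human", "non_threatening", "damage", False): 2,
    ("human", "non_threatening", "damage", True): 3,
    ("human", "non_threatening", "death", False): 2,
    ("human", "non_threatening", "death", True): 4,
    # Non-human threatening targets rate lower across the board.
    ("non_human", "threatening", "no_damage", None): 1,
    ("non_human", "threatening", "damage", None): 1,
}

def violence_advisory(target, stance, consequence, rewarded=None):
    """Look up the advisory level for one combination of attributes."""
    return LOGIC[(target, stance, consequence, rewarded)]

# Injuring a non-threatening human for reward earns a higher advisory
# than injuring a threatening one:
assert violence_advisory("human", "non_threatening", "damage", True) == 3
assert violence_advisory("human", "threatening", "damage") == 2
```

A table-driven design like this is what makes the system auditable: the entire mapping is data that anyone can inspect, rather than judgment hidden in a rater's head.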
Of course, we don’t ask the person rating the content to make his or her way through that
chart. Rather, we created a set of highly concrete, highly objective definitions for every term
used in the chart, and a set of yes/no questions employing those definitions. For example, one
question is:
"Does the software title depict blood and gore of sentient beings?"
That question results in a straightforward "Yes" or "No" response from the
person doing the rating because the terms "depict," "blood and
gore," and "sentient beings" are each explicitly defined. For example,
here is part of one definition – for blood and gore:
Blood & Gore: Visual Depiction of a great quantity of a Sentient Being’s blood or
what a reasonable person would consider as vital body fluids, OR a visual Depiction of innards,
and/or dismembered body parts showing tendons, veins, bones, muscles, etc., and/or organs, and/or
detailed insides, and/or fractured bones and skulls.
The depiction of blood or vital body fluids must be shown as what a reasonable person
would classify as flowing, spurting, flying, collecting or having collected in large amounts or
pools, or the results of what a reasonable person would consider as a large loss of the fluid
such as a body covered in blood or a floor smeared with the fluid… etc. ….
There are literally dozens of pages of such definitions.
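Each defined term, in effect, becomes an objective predicate over concrete features of a depiction, so that the rater's task reduces to a yes/no answer. A rough, partial rendering of the quoted Blood & Gore definition, with feature names invented for illustration (the real definitions, again, run to dozens of pages):

```python
# Sketch: an explicitly defined term rendered as an objective predicate.
# Feature names are invented; this is a rough, partial rendering of the
# quoted Blood & Gore definition, not the actual RSAC wording.

def depicts_blood_and_gore(depiction: dict) -> bool:
    """Rough, partial rendering of the quoted Blood & Gore definition."""
    sentient = depiction.get("sentient_target", False)
    large_fluid = depiction.get("fluid_amount") in ("large", "pooled")
    innards = depiction.get("shows_innards", False)
    return sentient and (large_fluid or innards)

# A floor smeared with a large amount of a sentient being's blood qualifies:
assert depicts_blood_and_gore({"sentient_target": True, "fluid_amount": "pooled"})
```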
The questions are all arranged in a branching format, which is usually administered on a
computer. That is, depending on the response to a question, the system either gives a rating or
determines what the next questions should be. Depending on the amount and nature of violence in
the game being rated, the person doing the rating may respond to as few as 2 or as many as 15
questions. The same procedure is followed for the Sex/Nudity section, and for the language
section. Finally, depending on how the questions have been answered, the program determines what
the rating advisory should be and what the information explaining that advisory should be. That
is, in addition to the number on the thermometer, there is always a phrase explaining why that
number was assigned. As you can see in the logic chart shown above, that explanation may vary.
For example, there are six different ways a game can receive a violence rating of 2, and four
different ways it can earn a violence rating of 3, and the advisory informs the consumer about
the specific kind of content that led to each determination. A copy of the RSAC advisory label
is shown in Figure 1. Note the descriptive information in the violence section. Another
descriptive phrase paired with a level 2 advisory might have been: "Humans injured."
Insert Figure 1 about here
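The branching procedure described above is essentially a decision tree whose leaves carry both the advisory level and the descriptive phrase that appears on the label. A minimal sketch, with question wording and phrases invented for illustration:

```python
# A minimal sketch of the branching questionnaire: each node holds a yes/no
# question; each leaf carries the advisory level plus the descriptive phrase
# printed on the label. Question wording and phrases are invented.

TREE = {
    "question": "Does the title depict violence of any kind?",
    "no": {"level": 0, "phrase": "No violence"},
    "yes": {
        "question": "Does the title depict blood and gore of sentient beings?",
        "yes": {"level": 3, "phrase": "Blood and gore"},
        "no": {
            "question": "Are human targets injured?",
            "yes": {"level": 2, "phrase": "Humans injured"},
            "no": {"level": 1, "phrase": "Humans threatened"},
        },
    },
}

def rate(node, answer_fn):
    """Walk the tree, asking questions until a leaf gives level and phrase."""
    while "question" in node:
        node = node["yes" if answer_fn(node["question"]) else "no"]
    return node["level"], node["phrase"]

# A rater's answers for a game in which human targets are injured:
answers = {
    "Does the title depict violence of any kind?": True,
    "Does the title depict blood and gore of sentient beings?": False,
    "Are human targets injured?": True,
}
level, phrase = rate(TREE, answers.get)
assert (level, phrase) == (2, "Humans injured")
```

The branching structure also explains why a rater may face as few as 2 or as many as 15 questions: the path length through the tree depends on the answers given along the way.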
Finally, Figure 2 shows an advisory label earned by one of the more violent games
available in the states a year or so ago (I understand that even more violent games have since
reached market), a game called "Doom." As you can see, the game received the next to
highest advisory for violence – in this case because it portrays blood and gore – and a very mild
rating for language (mild expletives). There were no instances of either sex or nudity in the
game.
Insert Figure 2 about here
Now, how does this RSAC system relate to the V-chip described earlier? Well,
clearly some modifications would have to be made in the dimensions employed and some of the
questions. But for the most part, the approach strikes me as ideal for the new V-chip
technology. It would be inexpensive and quick, because the producers/writers of each television
show could rate their own programs. And more important, it would serve the consumer well because it has
the advantage of being descriptive and informational rather than judgmental.
Joel Federman concluded his recent book on media ratings with the recommendation that
whatever rating system is adopted, it should make every effort to maximize information and
minimize judgment. Another way of saying this is that a descriptive system is preferable to an
evaluative system. Of course, "descriptive" and "evaluative" are relative
terms. Since even the act of choosing to rate implies evaluation, no rating system can be purely
descriptive. Nevertheless, I would contend that because the RSAC system I have just described
leans far more in the direction of description than evaluation, it has several valuable
advantages over evaluative systems. First, and most important, it puts the decision-making power in
the hands of the parents rather than some outside agency with which the parent may or may not
agree. It presumes that children are different from each other and that parents know the needs
and capabilities of their own children far better than anyone else can. Second, it has the advantage
of consistency because the criteria for labeling any content are objective, concrete, and public.
And this, in turn, means that it can be used in highly flexible ways. Parents whose primary
concern might be media violence and parents whose primary concern might be language or sexuality
can all use the system with confidence.
Of course, there is probably no such thing as a perfect solution to the problem of
protecting a highly vulnerable audience such as children while simultaneously protecting people’s
right to say/write/film/program freely. Nevertheless, providing parents with descriptive
information on which they can base informed judgments is a big step in the right direction – a
step which attempts to respond to the needs and rights of all concerned parties.
References
Comstock, George, with Paik, Haejung (1991). Television and the American child. San
Diego, CA: Academic Press.
Federman, Joel (1996). Media ratings: Design, use and consequences. Studio City, CA:
Mediascope.
National Television Violence Study (1996). Studio City, CA: Mediascope
Executive Summary: 1994-95
Scientific Papers: 1994-1995
Content Analysis Codebooks: 1994-1995
Sex and violence in popular culture (1995). New York Times, July 23-26, p.
6.
Starker, Steven (1989). Evil influences: Crusades against the mass media. New
Brunswick, NJ: Transaction Publishers.
Wilson, Barbara J., Donnerstein, Ed, Linz, Dan, Kunkel, Dale, Porter, James, Smith, Stacy L.,
Blumenthal, Eva, & Gray, Tim (1996). Content analysis of entertainment television: The
importance of context. Paper presented to the Duke University Conference on Media Violence and
Public Policy, June 28-29, Raleigh-Durham, North Carolina.
Table 1
Predicted Impact of Contextual Factors
on Three Outcomes of Exposure to Media Violence

                                    OUTCOMES OF MEDIA VIOLENCE
                                 Learning
CONTEXTUAL FACTOR                Aggression    Fear    Desensitization
Attractive Perpetrator               ↑
Attractive Target                              ↑
Justified Violence                   ↑
Unjustified Violence                 ↓         ↑
Presence of Weapons                  ↑
Extensive/Graphic Violence           ↑         ↑            ↑
Realistic Violence                   ↑         ↑
Rewards                              ↑         ↑
Punishments                          ↓         ↓
Pain/Harm Cues                       ↓
Humor                                ↑                      ↑

From the National Television Violence Study, 1996. Predicted effects are based on review of social
science research on contextual features of violence. Blank cells indicate that there is
inadequate research to make a prediction.
↑ = likely to increase the outcome
↓ = likely to decrease the outcome
Table 2
Selected Findings from the National Television Violence Study

57% OF ALL ENTERTAINMENT PROGRAMS CONTAINED VIOLENCE

Violent Programs
- 33% contained 9 or more violent interactions
- 51% portrayed violence in realistic settings
- 16% showed long-term consequences of violence
- 4% had an anti-violence theme

Scenes within Violent Programs
- 15% of violent scenes portray blood and gore
- 39% of violent scenes use humor
- 73% of violent scenes portray violence as unpunished

Interactions within Violent Scenes
- 25% of interactions employed a gun
- 35% of interactions depicted harm unrealistically
- 44% of interactions showed violence as justified
- 58% of interactions did not depict pain

Adapted from the National Television Violence Study, Executive Summary, 1996.
Table 3
RSAC Methodology Logic Chart
(the violence advisory level assigned when a game contains each attribute;
"All" means the attribute may appear at every level)

                                                          Level
Maximum Violence
  Rape                                                      4
  Wanton and Gratuitous Violence                            4
  Blood/Gore                                                3
Human Threatening Victims
  No Apparent Damage/No Death                               1
  Damage With or Without Death                              2
  Death/No Damage                                           2
Human Non-Threatening Victims
  Damage/No Death
    Player Not Rewarded (unintentional act)                 2
    Player Rewarded                                         3
  Death With or Without Damage
    Player Not Rewarded (unintentional act)                 2
    Player Rewarded (gratuitous violence)                   4
Non-Human Threatening Victims
  No Apparent Damage/No Death                               1
  Damage With or Without Death                              1
  Death/No Damage                                           1
Non-Human Non-Threatening Victims
  Damage/No Death
    Player Not Rewarded (accidental)                        1
    Player Rewarded (intentional)                           2
  Death With or Without Damage
    Player Not Rewarded (accidental)                        1
    Player Rewarded (intentional)                           3
Natural/Accidental Violence
  Damage/Death – Human Victims                              2
  Damage/Death – Non-Human Victims                          1
  Blood/Gore (humans and non-humans)                        3
Objects (Aggressive & Accidental Violence)
  Damage and/or Destruction of Symbolic Objects             All
  Realistic Objects
    Disappear w/o Damage or Implied Social Presence         All
    Disappear w/o Damage with Implied Social Presence       1
    Damage With or Without Destruction                      1