

National Captioning Institute: Commemorative History

  Chapter 1

Imagine how it must feel to be deaf. Conversation is inaudible; why others are laughing, often unfathomable. Someone has to explain everything to you, either with scribbles on paper or through sign language. The biggest mystery and disappointment of all might be that glowing information machine of the electronic age—television. For people who are deaf, watching television can be somewhat like seeing a silent movie from the 1920s with the dialogue frames removed—an incomplete experience at best.

Actress Marlee Matlin didn’t have to picture all of this in her mind. She was herself deaf, had been since early childhood, and remained so when she addressed the U.S. Senate Subcommittee on Communications in 1990 in sign language. She told eloquently of her life, her hopes, and her aspirations before and after the arrival of closed captions—those white-on-black capital letters that render legible the televised spoken word.

“You know, as a little girl growing up in Chicago,” Matlin recounted at a Senate hearing into closed captioning for television, “I had dreams just like any other child—to be a policeman, to be a dancer, to be a teacher, to be an actress. I was always told to follow my dreams and be what you want to be, that no dream was beyond my reach.

“But in many of my dreams, I sat by and watched without understanding a single word of what was being said. As a child, only through my mother could I understand the antics of The Electric Company, or only through my father could I understand what Mannix was saying to his Girl Friday. Only through my brother could I understand the laughter in that program All in the Family.

“One day, then-President Ford was speaking on television,” Matlin continued, “and my mother had to explain what the President was saying. By that time, I was ten years old, and I thought: Why does Mom have to tell me what he is saying? Why doesn’t he tell me himself? So I wrote a letter and asked for closed captioning.

“When television finally began to be closed captioned,” remembered Matlin, “I watched everything that came on, and I’m telling you, everything. Who wouldn’t? Imagine being able to watch TV all your life without any sound, and then someone, suddenly, one day turning on the sound for one hour. It was really a shock.

“So I watched everything from Three’s Company all the way to Masterpiece Theater, which, for a ten-year-old… I guess I didn’t understand what was going on. The words were the most important thing. The words connected to the dreams I had, and the dreams finally became true for the first time. It was really thrilling for me.

“Now the words of the people who make the news—the artists, the actors, the small and the large—have been coming through. From ‘There’s a hostage crisis in Iran,’ to ‘Blake Carrington! I’ll get even with you,’ the world has finally opened up to 24 million hearing-impaired and deaf individuals.”

Marlee Matlin’s encounters with television before and after the arrival of closed captions sound like the difference between night and day. Moreover, her experience was far from unique. Soon after the National Captioning Institute introduced the closed-caption television service in the early 1980s, there were first thousands, then tens of thousands, of Marlee Matlins throughout the United States. Twenty-five years later, the beneficiaries of closed captioning number in the tens of millions, all of them now able to make sense of television, arguably the most powerful communications medium yet devised.

Closed captions, which nowadays can be rendered visible with the touch of a button on a television remote control, had been preceded by other methods of helping the deaf and hard-of-hearing community to appreciate television. But all fell short in one way or another. Sign language, for example, showed potential, but could be distracting at a moment of intense drama. Besides, only about ten percent of deaf and hearing-impaired Americans are fluent in American Sign Language, the standard in the United States. Lip reading, difficult to master even when the speaker faces the audience, turned out to be all but useless for following television drama, where characters often speak in profile. Other approaches intruded too much on the hearing world. Sure, someone with weak hearing could turn up the volume on the TV set, but doing so could drive hearing folk from the room.

So the search began for a technology to let individuals with hearing loss enter the world of television without compelling the acute of hearing to participate in the same way. In 1970, experiments conducted at the National Bureau of Standards proved the feasibility of closed captioning. But nearly ten years would pass before the National Captioning Institute produced the first closed captions for nationally televised programming. Another decade and more would slip by before decoders, the electronic gadgets that lift the cloak of invisibility from closed captions, would become at a stroke both inexpensive and ubiquitous. And not until 1996 was there a law requiring all television programming, with few exceptions, to be accompanied by closed captions.

Along the way, this quest to bring America’s cultural mainstream to the deaf community would involve the efforts of a host of people, some of them famous and powerful, but most of them unknown even to those who most benefited from their work. At times, the whole enterprise would seem in doubt, as proponents clashed with opponents in the halls of Congress, in the marketplace, and in the courts.

Yet, as the National Captioning Institute celebrated its 25th anniversary in 2004, closed captions had come of age. They not only supplement virtually every television show produced—including live, unscripted events like football’s Super Bowl and baseball’s World Series—but also appear on many Web sites that offer spoken information. And beyond appealing to the hearing-impaired population, closed captioning seems poised to revolutionize the teaching of English to immigrants and of reading to everyone. The idea of access to the media has even extended beyond the deaf and hard-of-hearing communities to embrace individuals who are blind or have poor vision. A technique called described video, reminiscent of the golden age of radio drama, features a narrator who tells the audience, during gaps in dialogue, what’s happening on the screen.


Long before television came movies, and that’s where Emerson Romero first attempted captioning for deaf people. Using the screen name Tommie Albert, Romero had been an actor in silent films during the 1920s. He was himself deaf, and as movies with soundtracks began to outnumber and ultimately displace silent films, he felt increasingly excluded from the art form he still loved. In 1947, when he was working for an aircraft company, Romero decided to remedy the situation if he could, not only for himself but also for the entire deaf and hard-of-hearing community. With the aid of his cousin, well-known actor Cesar Romero, Emerson acquired prints of a handful of movies that he could use for captioning experiments.

By the time Emerson started work on his movie project, subtitles in films were well established. One of the earliest movies to be subtitled was The Jazz Singer, with French subtitles for its Paris opening in 1929. But no movie studio would make the effort or spend the money to caption American films in English. From the studios’ point of view, captioning made no sense for movies filmed with dialogue in the public’s native tongue. It is doubtful that Hollywood ever considered America’s deaf audiences in its decision, and fifty years later, America’s television networks would exhibit a similarly dismissive posture toward the nation’s deaf and hard-of-hearing communities.

With little in the way of resources—and no support from the film industry—it is perhaps not surprising that Romero chose to resurrect the silent-film technique of interleaving frames of dialogue and explanation between action sequences in the movie. Romero’s technique was old-fashioned, the audience he could reach small. Nonetheless, his effort remains the first to make the medium of film fully accessible to America’s deaf population.

As the 1940s drew to a close, British movie magnate J. Arthur Rank tried his hand at helping deaf people enjoy movies. Rank had entered the film business more than two decades earlier. A devoted Methodist churchman, he had founded a company called British National (Film) to distribute religious movies. By the time of his movie-captioning experiments, he headed the Rank Organization, destined to become a major force in British film production.

Rank devised a system in which dialogue from a movie was etched onto hundreds of thin glass slides that a specially adapted projector beamed to a small screen below the one on which the movie played out. No doubt tedious and stressful for projectionists, Rank’s procedure, like Romero’s, would fall by the wayside, but not before American educators of deaf children replicated it.

Not long after Rank began captioning films, Dr. Ross Hamilton, assistant superintendent of The Lexington School for the Deaf in New York City, used the Briton’s slide method to caption a short film. Unbeknownst to Hamilton, Dr. Edmund Boatner, who headed the American School for the Deaf in West Hartford, Connecticut, had begun to investigate how movies might be made to explain themselves to his students. In a foreshadowing of the bewilderment that deaf audiences would experience early in the coming television age, Boatner had seen his own students utterly confused as they sat through an exciting adventure film. “I took our basketball team into town for dinner and a movie one evening,” he recounted later. “I recall that the movie was The Son of Monte Cristo. As I watched the boys’ reactions, I could see the looks of bafflement on their faces. In one scene, for example, a group of men were casually sitting around a table talking when suddenly they jumped up and started [fighting] with their swords. Why? Our boys couldn’t see any reason for their behavior; they hadn’t heard the conversation. It was then that I made the resolution to see that understandable films were provided for the deaf.”

Visiting his colleague Dr. Clarence O’Connor, superintendent of The Lexington School, to discuss the matter, Boatner saw for himself the inadequacies inherent in Hamilton’s glass-slide system of captioning. Chief among them was that viewers had to look away from the action to read the captions. Clearly, a more workable method of captioning movies would have to be found.

The method turned out to be a form of subtitling. By the time of Boatner’s meeting with O’Connor, several methods of adding subtitles to film had been perfected in Europe. All required that a subtitle be added to virtually every frame of the action. Given that sound film runs at 24 frames per second, a sixty-minute film contains more than 86,000 frames, so any method of subtitling, even when partially automated, was labor-intensive and no doubt costly.

When is a Subtitle a Caption?

The process of subtitling a movie is identical to that of captioning one, but different meanings have evolved for the terms. Subtitling refers to the practice of translating the dialogue of a movie into another language before printing it on the film. Captions render the dialogue in the same language as the movie’s spoken words. Captions may also indicate who is speaking and display offscreen sounds (voices, music, and sound effects, for example) as words, usually in brackets—[BANG!]


But that didn’t deter Boatner and O’Connor. In 1948, the two educators established Captioned Films for the Deaf, Inc. (CFD). At first, the company was more dream than reality. The incorporation documents named Boatner president of the company; O’Connor was to serve as vice-president. There was neither staff nor money to purchase movies or put words to them. Working from office space donated by the American School for the Deaf in West Hartford, the two men began to assemble a board of directors from some of the best-known names in industry and in Hollywood.

As would happen decades later in the context of captioned television, movie stars would play important supporting roles. Among the board members, for example, were actors Spencer Tracy and Katharine Hepburn. Whether by accident or design, the Hepburn connection proved fortuitous; Katharine Hepburn’s sister, Marion Hepburn Grant, served as president of the Hartford Junior League. In short order, the community volunteer organization pledged $5,000 to the new company and followed it up with a second pledge of $2,500, all of the money to come from Junior League fund-raising activities.


To showcase this new captioning initiative, CFD chose for its debut a 25-minute short feature, America the Beautiful. Made in the early 1940s by Warner Bros. studios to sell war bonds, the film had a number of advantages for captioning. Paid for by the government, it probably was inexpensive for CFD to acquire. At less than a half hour in length, it would not be as costly to caption as a full-length feature. Moreover, its patriotic theme would offend few prospective donors. For the premiere, Boatner assembled an audience of deaf people. The production was a hit. “One woman, Mrs. Elsie Durian, wept,” he recalled later. “It was the first time she had understood [a movie] in more than 20 years.”

Despite the appeal of America the Beautiful, financing future captioning projects would become a hand-to-mouth endeavor. So difficult was money to find that by 1958, when Captioned Films for the Deaf, Inc. was nearly a decade old, the company had succeeded in captioning no more than 30 titles. As it happened, the U.S. government, through the Department of Health, Education, and Welfare (HEW), had established a program called Talking Books for the Blind, wherein Congress appropriated many thousands of dollars a year to record books on tape and to establish a free lending library to distribute them to the blind community in the United States. Reasoning that captioning movies for deaf people was no different from recording books for those who cannot see, Boatner and O’Connor set out for the halls of Congress.

Meanwhile, businessman and CFD board member Graham Anthony had recently become an advisor to HEW’s vocational rehabilitation division. Suddenly, Boatner and O’Connor had a friend in high places to open doors for their film-captioning endeavor. In short order, the two were invited to present a sample of their work to the head of the division, who showed it to her boss, Secretary of HEW Marion Folsom. Next, Senator William Purtell of Connecticut, where CFD had its offices, persuaded 40 of his colleagues to become co-sponsors of a bill to establish a movie captioning-and-distribution service for the nation’s deaf community. The Captioned Films for the Deaf Act passed handily in the Senate, and then went to the House Committee on Education and Labor.

There it stalled, however, a victim of political strife over a labor bill. Determined to kill the labor bill, committee chairman Graham Barden refused to let the House vote on it, a tactic that effectively tied up all other pending legislation on the committee’s docket. If the Captioned Films Act were not sent to the House for a vote before Congress adjourned, it would die in committee, wasting all of the energy expended on the bill.

Seeking to head off defeat, Anthony sought a meeting with Barden, to no avail. But the clerk in Barden’s office, the funnel for all of the chairman’s paperwork, was a friend of Anthony’s. As boys, the two had hunted birds together in North Carolina. Moreover, the clerk was deaf. In a last-ditch effort to rescue the captioned-film act, Anthony paid a visit to his longtime friend. Boatner later described what happened next:

“Anthony told him about the bill, but the clerk said, ‘Graham, you know I can’t do anything about the bill.’

“Anthony replied, ‘Joe, you have got to think of the need of our deaf people. If you don’t you will never be able to live with your conscience. I know you can do something!’

“After considerable argument the clerk finally said, ‘Graham, I ought not to do this, but for the deaf, I will, just this once.’

“The clerk directed a girl to find the bill and bring it to his desk. He then reached in his drawer, pulled out the committee stamp, slapped it on the bill, and put the bill in the chairman’s ‘out’ bin. Shortly thereafter, the bill was routinely passed by the House!”

Presaging federal support for captioned television in the coming years, the Captioned Films for the Deaf Act provided $78,000 per year for the captioning and distribution of films. Having achieved their goal, Boatner and O’Connor dissolved their company and transferred all of the captioned movies in their library to the government, which shortly thereafter opened a new agency also named Captioned Films for the Deaf.

As this free lending library of captioned movies became widely known, there seemed always to be a greater demand for its services than time and money could provide. Subsequent revisions of the Captioned Films Act substantially raised funding for the program to $1.5 million in 1962, $3 million in 1965, $7 million by 1969, and $13 million by the early 1970s. The United States Office of Education, a component of the Department of Health, Education, and Welfare (HEW), distributed these funds through a section of its Bureau of Education for the Handicapped called the Media Services and Captioned Film Branch. Chief of this corner of the federal bureaucracy was Malcolm Norwood, deaf himself and destined to become a driving force in the captioning of television.

Over the years, Captioned Films for the Deaf made accessible to deaf people thousands of educational films and other kinds of movies. The audience loved them, both for the movies themselves and for the social occasion they offered to the deaf community. Wrote web developer Jamie Berke, an appreciative consumer of captioned movies in those days: “We saw the films in a community college classroom. People brought their own snacks—popcorn, candy, or even real food. After the movie was over, everyone socialized, especially the families with children. All the movies that were shown were films suitable for all ages.”

The Weitbrecht Revolution

No less disappointing to deaf people than uncaptioned movies was the telephone. Its ring inaudible, a conversation over it impossible, the telephone was useless to the deaf community. They could communicate with each other or with anyone else only in a face-to-face meeting. When such encounters were possible, they tended to be time-consuming, inconvenient and, if the parties lived far apart, rare and costly. As the telephone became more and more a social and business necessity, those who could hear poorly or not at all became increasingly isolated.

Robert Weitbrecht changed all of that. A deaf scientist at the Stanford Research Institute in the early 1960s, Weitbrecht was a man of many talents and interests. Among them was amateur two-way radio—his means of communicating with distant deaf friends by telegraphy, substituting long and short flashes of a light for the more familiar beeps of Morse code.

In 1964, frustrated by his inability to converse with a friend who was not an amateur radio enthusiast, Weitbrecht invented a new way for deaf people to communicate with each other. By this time, Teletype systems had largely replaced Morse code and the telegrapher’s key. With a keyboard similar to that of a typewriter, a Teletype machine at one end of a connection allowed virtually anyone to compose a telegram; the resulting message appeared as text on a machine at the other end. Teletype systems might have been a boon for deaf people, but for many potential users they were prohibitively expensive: each required its own communications line, different from the telephone network connection in most homes and businesses.

Weitbrecht solved this problem by devising a way for Teletype machines to communicate over telephone lines. In essence, his invention converted electrical signals representing letters from a Teletype keyboard into sound and sent them to a device called an acoustic coupler, which held an ordinary telephone handset. At the other end of the connection, the sounds passed through another coupler to be reconverted to Teletype code and printed as words on paper. He called his new system a TTY, the abbreviation for Teletypewriter. In the 1980s, the name changed to telecommunications device for the deaf (TDD). Although TTY and TDD are used interchangeably, TTY remains the preferred term.
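In modern terms, Weitbrecht’s coupler worked like a frequency-shift-keying modem: each bit of a Teletype character became a short burst of one of two audio tones that an ordinary telephone line could carry. The Python sketch below illustrates only the principle; the tone frequencies, the 45.45-baud rate, and the sample Baudot codes are figures commonly associated with later TTY equipment, assumed here for illustration rather than drawn from this history.

```python
import math

# Illustrative parameters (assumptions, not historical specifics):
# one tone stands for a binary 1 ("mark"), another for a 0 ("space").
MARK_HZ = 1400     # tone for a 1 bit
SPACE_HZ = 1800    # tone for a 0 bit
BAUD = 45.45       # bits per second, the traditional Teletype rate
SAMPLE_RATE = 8000 # audio samples per second

# A few entries from the 5-bit Baudot letter code used by Teletype machines.
BAUDOT = {"A": 0b00011, "E": 0b00001, "T": 0b10000}

def frame_bits(code):
    """Wrap a 5-bit character code with a start bit (0) and a stop bit (1)."""
    data = [(code >> i) & 1 for i in range(5)]  # least significant bit first
    return [0] + data + [1]

def fsk_samples(bits):
    """Turn a bit sequence into audio: each bit becomes a burst of either
    the mark or the space tone, one bit-period long."""
    samples = []
    per_bit = int(SAMPLE_RATE / BAUD)  # samples in one bit period
    for bit in bits:
        freq = MARK_HZ if bit else SPACE_HZ
        for n in range(per_bit):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

# Encode the letter "E" as it might leave the sending coupler.
bits = frame_bits(BAUDOT["E"])
audio = fsk_samples(bits)
```

Fed to a loudspeaker pressed against a telephone mouthpiece, such samples would produce the warbling tones familiar to TTY users; the coupler at the far end would simply detect which of the two frequencies was present during each bit period and reassemble the character.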

The earliest machines were heavy and bulky, and in the mid-1960s a new Teletype machine cost about $1,300, the equivalent of more than $7,500 in 2005. For those reasons, only 50 of them were in use by 1967. After American Telephone and Telegraph agreed to release some of its surplus Teletype machines, prices declined. By the mid-1980s, more than 100,000 TTYs were in service, mostly in the homes of deaf people. Since then, prices have declined dramatically, and TTY devices fit easily on a desktop. Others outside the deaf community also benefit from TTY; people with severe speech impediments, for example, are enthusiastic about the technology.

Many government offices and an increasing number of businesses have a TTY for their deaf and speech-impaired constituencies. Another innovation, relay services, permits someone who has a TTY to talk with someone who does not. The service employs an intermediary who reads typed messages from the TTY of one party to the other, then types that person’s oral responses. Special equipment and software allow a personal computer to serve as a TTY.



“Today,” said U.S. Secretary of Commerce Herbert Hoover on April 9, 1927, “we have, in a sense, the transmission of sight for the first time in the world’s history. Human genius has now destroyed the impediment of distance in a new respect, and in a manner hitherto unknown.” Hoover had just witnessed the first long-distance transmission of a television signal, in this case between the American Telephone and Telegraph Company’s Bell Laboratory offices in New York City and Washington, D.C. A booklet published by Bell Labs a few years later described the event: “In the telephone conversation between guests at Washington and those in the Laboratories at New York, the television equipment permitted an individual listener in New York to see as well as to hear the person in Washington with whom he was conversing.”

The 1927 inter-city transmission revealed the true value and potential of television, which until then had been little more than a series of increasingly provocative laboratory experiments. The following year, the Federal Radio Commission issued the first U.S. television license to Charles Jenkins, who had been developing an apparatus similar to that used in the Bell Labs demonstration. Paralleling progress in America, John Baird, a Scotsman, also had invented a television system. In 1928, he used it for the first transatlantic television transmission, and in 1930 the British Broadcasting Corporation (BBC) launched the world’s first scheduled television programming. Among its earliest offerings was a play entitled The Man with the Flower in His Mouth.

Today, Jenkins’s and Baird’s devices are unrecognizable as television receivers. Instead of a picture tube, they used a rotating disk perforated in a way that produced an image from an electrical signal. This so-called mechanical system ultimately gave way to a fully electronic approach to television, which became the standard.

The new entertainment medium took off slowly at first. Six years passed before the number of television sets in use worldwide climbed to 200. But television would soon begin to grow at what would become an astronomical rate. In 1947, the year that Emerson Romero began his experiments with captioning movies for deaf people, there were some 60,000 television sets in the United States. Except for the 3,000 or so of them in bars, they graced the homes of well-to-do Americans. Twenty years later, the number of television sets in the United States was fast approaching 75 million. That amounted to nearly one set for every three Americans.

At the same time, theatrical films were losing their luster. It is difficult nowadays to appreciate how popular they once were in the United States. In the benchmark year of 1947, 72 million Americans went to the movies every week. Put another way, every seven days half the people in the U.S. paid admission to a neighborhood movie theater. Thereafter, weekly movie attendance fell steadily, stabilizing at about one in ten Americans in the mid-1960s. That the ascent of television paralleled the decline of the movies is no coincidence: television was the single greatest cause of the film industry’s shrinking audience.

Ironically, just as federal coffers were opening ever wider to Captioned Films for the Deaf—$7 million in 1969—movies began to wane. To be sure, much of the footage captioned by CFD consisted of educational films for the classroom. Moreover, for the deaf community, television was as disappointing and frustrating as an uncaptioned movie, if not more so.

Television had begun as a commercial enterprise, and that is where the growth of the medium was concentrated. Yet as early as 1951, the U.S. government had seen the educational potential of television. At that time, the Federal Communications Commission reserved nearly 250 channels for noncommercial broadcasters. The first educational television station went on the air in 1953. By 1965, there were well over 150 such channels.


It was in this context that the Carnegie Corporation of New York established a 15-member commission to examine educational television in America. Its report, published on January 26, 1967, concluded that the country needed a “well-financed and well-directed educational television system.” The document continued: “The programs we conceive to be the essence of Public Television are in general not economic for commercial sponsorship, are not designed for the classroom, and are directed at audiences ranging from the tens of thousands to the occasional tens of millions.” Near the top of the twelve actions the report recommended was that Congress “establish a federally chartered, nonprofit, nongovernmental corporation, to be known as the ‘Corporation for Public Television.’ The Corporation should be empowered to receive and disburse governmental and private funds in order to extend and improve Public Television programming.”

Educational television stations were indeed struggling at the time. “Practically all noncommercial stations,” wrote President Lyndon B. Johnson, lobbying Congress to enact the recommendations of the Carnegie report just 33 days after it appeared, “have serious shortages of the facilities, equipment, money, and staff they need to present programs of high quality. There are not enough stations. Interconnections between stations are inadequate and seldom permit the timely scheduling of current programs. Noncommercial television today is reaching only a fraction of its potential audience—and achieving only a fraction of its potential worth.”

In the fall, Congress passed the Public Broadcasting Act of 1967, which established the Corporation for Public Broadcasting (CPB) as a not-for-profit company to be overseen by Congress, but not to be part of the government. President Johnson, in signing the bill, set forth the mission he envisioned for the new organization. “The Corporation,” he said at the signing ceremony on November 7, “will assist stations and producers who aim for the best in broadcasting good music, in broadcasting exciting plays, and in broadcasting reports on the whole fascinating range of human activity. It will try to prove that what educates can also be exciting.” Furthermore, he continued, the Public Broadcasting law “announces to the world that our Nation wants more than just material wealth; our Nation wants more than a ‘chicken in every pot.’ We in America have an appetite for excellence, too.”

The Public Broadcasting Act laid out in exacting detail how CPB would conduct its affairs in support of public television. Deep within the law’s provisions for the structure of the corporation’s board of directors, how it could receive and disburse funds, and when it must submit reports to Congress, among other things, stood a clause defining what kind of television equipment CPB could help educational TV stations acquire. Among the considerable variety of hardware listed was apparatus for the captioning of television programs. Even so, several years would pass before any television program appeared with captions, and the volume of captioned programming would remain small for many years beyond that.

Although focused not at all on the captioning of educational television programs for the deaf community, CPB got right to work fulfilling its mission. The first fruit of its funding was a national audience for Mr. Rogers’ Neighborhood. The creation of Fred Rogers, a 15-year veteran of children’s television, the Neighborhood sought to help preschool-age children feel good about themselves by addressing them directly at a pace they could follow, to prepare them for learning to read, and to promote appreciation and respect for others. Produced in Pittsburgh and originally seen only in the eastern part of the United States, Mr. Rogers’ Neighborhood ultimately became a national icon. In 1968, funds from the Ford Foundation helped National Educational Television (NET)—a network best known for its somewhat arid, academic offerings for adults—distribute the program nationwide.

Sesame Street debuted the following year. Featuring puppeteer Jim Henson’s lovable Muppets, this groundbreaking program entertained children while teaching them how to count, the letters of the alphabet, and basic concepts such as “in” and “out.” Over the next 30 years and more, Sesame Street would win 22 “Best Program” Emmys—television’s highest award for excellence—in the children’s series category, more first places by far than any other series of any genre. CPB seemed to be well on the way toward realizing its goal of providing the highest possible quality of educational television, which included not only programming for children but for adults, as well.


Not long after Sesame Street went on the air, in November 1969 the Corporation for Public Broadcasting founded a new non-profit organization called the Public Broadcasting Service, or PBS, which supplanted NET as America’s chief arbiter and distributor of educational programming. Signing on to PBS’s standards for public television were scores of educational television stations throughout the United States. Many of them created programming that aired only locally, but some became producers of material seen by a much wider audience. Among them was Boston’s public television station, WGBH.

Co-founder—along with WENH in Durham, New Hampshire—of the Eastern Educational Network, a consortium of educational television stations in the northeastern United States, WGBH had been producing The French Chef with Julia Child since 1962. Distributed nationally by NET starting in 1964, the culinary demonstration series migrated to PBS shortly after its incorporation. Viewers loved the show, inundating it with 200 letters in the 20 days following its first broadcast.

Malcolm J. Norwood, who as chief of HEW’s Media Services and Captioned Film Branch had for years been financing the captioning of films for deaf and hard-of-hearing Americans, began to focus his attention on television in 1971. For this first captioning endeavor, he turned to WGBH, which had become preeminent in the field of educational TV. Phil Collyer, a producer in the Education Division at WGBH, recalled that Norwood was more than pleased that WGBH proposed The French Chef, the station’s most popular fare, as the program to be captioned. To make matters easier, WGBH, having produced the show, needed no one’s permission to caption it. Through Norwood, HEW agreed to fund the project.

Some of the underlying technology for captioning television images already existed. For example, when a batter came to the plate in a televised baseball game, his name could be made to appear on the screen. A computerized device with a keyboard, called a character generator, produced the letters and sent the player’s name to be mixed with the rest of the television signal as it went out over the airwaves. However, merely naming a baseball player turned out to be a relatively simple job when compared to captioning an entire television program.

Unlike the player’s name, which soon disappeared from view, one or another caption would be onscreen most of the time during an episode of The French Chef. Full-time captions led WGBH producers to consider where on the screen to place captions so that they would not obscure the audience’s view of Julia Child’s cooking techniques. When to switch from one caption to the next became an issue, as did how much Julia’s monologue could be condensed, without losing the substance of the cooking lesson, to fit the interval reserved for a caption.

Phil Collyer became the producer at WGBH to take charge of captioning The French Chef. The process began with transcribing Child’s monologue into text and dividing it into two- and three-line captions that might be lightly edited, the better to serve their purpose. Recalled Collyer: “I passed the scripts past Ruth Lockwood, who was Julia’s producer at the time, and she would check over the changes we make to be sure they didn’t alter the basic content and instructions.”

When the captions were ready, decisions had to be made about placement on the screen. Typically, The French Chef had three basic camera shots: a wide view showing Child and her work area, a close-up of the star, and a shot taken with a mirror suspended above her stove, so that viewers could see into the pots and pans. In the long view, said Collyer, “we brought the captions up in the middle of the screen so they’d be closer to her mouth, even though her mouth was not that readable.” For the close-ups, “we put the captions down in the bottom of the screen, because that did not block what she was doing.” In the mirror shots, “it just so happened that Julia’s hand came in from the bottom of the screen. So it was not appropriate, to my way of thinking, to put the captions down there over her hands if that was part of the action, so in those shots, I put the captions at the top of the screen.”

Caption timing became all-important. “We began to discover,” remembered Collyer, “that if you had a shot change and a caption change at close to the same time, it was a visually disturbing event. It would cause you to blink, and to the extent that you do that, a person who is relying on the captions loses a bit of reading time.”  The solution was to have no caption change within about 1/10 second of a shot change.
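The timing rule Collyer describes lends itself to a simple procedure. The sketch below is purely illustrative (the Caption Center timed its captions by hand, not by program; the function name and values here are invented): it shifts any caption change that lands within a tenth of a second of a shot change.

```python
# Illustrative only: the Caption Center worked by hand, not by software.
# All times are in seconds from the start of the show.
MIN_GAP = 0.1  # no caption change within ~1/10 second of a shot change

def nudge_caption_times(caption_times, shot_changes, min_gap=MIN_GAP):
    """Shift any caption-change time that falls too close to a shot change."""
    adjusted = []
    for t in caption_times:
        for s in shot_changes:
            if abs(t - s) < min_gap:
                # Move the caption change to just past the forbidden window.
                t = s + min_gap
        adjusted.append(t)
    return adjusted
```

A caption cued at 12.05 seconds against a shot change at 12.0 seconds, for example, would be pushed back to 12.1 seconds, preserving the viewer’s reading time.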

In the wake of these decisions, WGBH caption writers entered into a computer the words for each caption, to be produced by the character generator, along with each caption’s position on the screen. Following a timing script that cued each caption to onscreen events, a caption writer watched a tape of the show that displayed a visible time code, which facilitated the coordination of captions and program. As all of this took place, a video recorder taped the resulting captioned version of The French Chef for later broadcast.

The first French Chef program to be captioned was a remake of the inaugural episode, in which Julia Child showed her viewers the secrets of preparing coq au vin. When, after about three months of writing captions and making the computer work with the character generator, the captioned version of the show was complete, the WGBH team showed the results to Malcolm Norwood at HEW. More than pleased, he enthusiastically agreed in 1972 to seek funding for the captioning of 25 additional episodes. To do the work, WGBH established its Caption Center, headed by Phil Collyer.


WGBH first broadcast the captioned The French Chef without fanfare in the summer, during the eight weeks in which the program customarily aired as reruns. The experiment seemed to have little impact on the deaf community, since neither WGBH nor PBS had alerted these Americans that something of potential interest and benefit to them was about to appear on TV. The response from the hearing community, however, was in some ways unexpected. “We thought,” said Collyer, “that everyone would know why the captions are there—because of deaf people. Well, boy, was that off-base. Some of the letters we got were just an indication of how many people were unaware of deafness, or a deaf community, or anything like that.” One of the letters, recalled Collyer, accused WGBH of making busy work for someone who would have been unemployed had he or she not been hired to put Julia Child’s cue cards on the screen. Another letter acknowledged that captioning “was helpful to the deaf” but its writer was “not in favor of having captions on their television set. So they told us that all the money we might spend on captioning should be spent on hearing aids and—get this—footstools,” presumably for deaf people to sit very close to the television screen, perhaps the better to hear what was said or to read lips.

As WGBH captioned more and more episodes of The French Chef, the crew became increasingly adept at the process. Soon the captioning work was taking less time—and money—than had been budgeted. So, by the time of President Richard Nixon’s second inauguration in January, 1973, enough surplus had accumulated for WGBH to caption the inaugural address for rebroadcast later in the day. Over at HEW, Norwood gave the go-ahead; he thought it a great idea to begin captioning current events for the deaf community. Moreover, the price was right. Yet, a substantial obstacle stood in the path of this initiative. Believing that it would be redundant for PBS to air the inauguration when all of the television networks were doing so, the Public Broadcasting Service had chosen not to share in the cost of press pool coverage. As a result, public television stations, WGBH included, would not be permitted to broadcast Nixon’s speech, with captions or without.

In an effort to overcome this barrier, Collyer telephoned the NBC producer in charge of inauguration coverage and told him of WGBH’s wish to receive the network “feed” of the President’s speech and why. Though sympathetic to the idea of captioning the event, the producer said that NBC’s hands were tied. Under pool rules, the network could not donate to PBS or WGBH a service that public television had so recently declined to buy. On the other hand, NBC had never offered to sell PBS broadcast rights to the video portion of the coverage without the sound, so the producer offered Collyer what amounted to a “silent movie” of the speech. Collyer snapped it up. His team could pull captions from the text of the speech released ahead of time by the White House.

But the removal of one roadblock revealed another. Federal Communications Commission regulations required that a television soundtrack closely match the video part of the signal. So WGBH could not, for example, play music during the speech as a substitute for Nixon’s voice or have the words read simultaneously by someone else. But Collyer had an idea. “I don’t know if I dreamed it,” he recounted many years later, “or whether I just woke up in the morning and said, ‘I know what I’ll do. I’ll do the sound in Spanish and serve two audiences at one time. That will solve it.’” So, as Collyer’s team labored over the captions, two translators from Berlitz, the language school, working from the advance script, converted the speech into Spanish, a feat that to Collyer’s surprise took more time than the four hours needed to write the captions and feed them into the character generator. Nevertheless, all was ready in time for the 6 p.m. PBS broadcast of Nixon’s captioned address in Spanish.

Nixon had spoken for about 20 minutes, which just happened to be the length of a typical news broadcast of the era, minus the commercials. Preparing captions for the speech had taken four hours or so. What all of this meant, Collyer realized, was that WGBH could caption an early-evening newscast and rebroadcast it later the same evening. Excited about the possibilities, Collyer wrote up a proposal for Malcolm Norwood at the Captioned Films Branch who, the producer already knew, had wanted for some time to fund the captioning of news for deaf people. Before long, WGBH got the green light from Norwood.

The station wasn’t in the nightly news business, so to make this idea work, Collyer would have to persuade one of the networks to let WGBH caption its newscast for later rebroadcast over educational television stations. CBS and NBC resisted the idea of captioning anything for a deaf audience. They feared that success with captioned news, for example, eventually would lead to demands that they caption their primetime programming. This they were loath to do—with some justification. A 1970 field study in Centre County, Pennsylvania revealed that about ten percent of local public television viewers objected to captions superimposed on the handful of movies they were shown. Neither CBS nor NBC wished to make uncomfortable such a large minority of their viewers.


At ABC, however, the idea of a captioned news program received a warmer reception. Much smaller than its rivals, the network had a history of doing things differently. ABC, for example, had been the first to look to Hollywood rather than radio for programming, signing a seven-year agreement to purchase television shows from Walt Disney Studios and to help fund the construction of Disneyland. In return, ABC would receive a percentage of the theme park’s profits. ABC’s chairman, Leonard Goldenson, one of whose daughters had died at an early age of cerebral palsy, probably exerted an influence. He described himself in his autobiography as wishing to bring persons with disabilities “back into America’s life stream.” Julius Barnathan, ABC’s top engineer at the time, later reported that Goldenson’s interest in helping those populations extended beyond the victims of cerebral palsy. In any event, it soon was arranged that the WGBH Caption Center would add captions to the ABC Nightly News and rebroadcast it at 11:00 p.m. Launch date: December 3, 1973.

It was not to be. As the Caption Center revved up the news-captioning operation, Senator Sam Ervin, a Democrat from North Carolina, convened the Senate’s Watergate hearings, an investigation into the infamous Watergate scandal that ultimately would bring down Richard Nixon’s presidency. PBS stations across the country were rebroadcasting the Watergate hearings every night at 11 p.m. Thus, for the moment, there was no time slot available for the captioned newscast. The Caption Center turned this disappointing delay into an opportunity to practice captioning The Nightly News in the four-and-a-half hours between the end of ABC’s live broadcast at 6:30 p.m. and the transmission of the captioned version at 11:00.

During this period, Phil Collyer decided to show some of the Caption Center’s work to Jim Reina, his contact in ABC’s special projects department. Wishing to let Reina know that captioning, though superbly accurate, could never be perfect, Collyer chose to take with him to ABC an edition of the news that contained an error. At the time, Israel’s Prime Minister Golda Meir “was in the news,” recounted Collyer, “and there was some talk of disagreement between Israel and the U.S. around this time.” During her appearance in that evening’s news, Meir commented on the relationship between Israel and the United States using a word that was unintelligible in her heavily accented English. “We had a better recording on our master videotape,” said Collyer. “I went down and listened to it on better speakers. We’re rocking the tape over the heads of the tape machine trying to get it out. And it sounds like fracture.” With a shorter word substituted for fracture, the Israeli head of state’s comment appeared in the caption as: “The split between Israel and the United States still exists.”

As Collyer reviewed the tape before showing it to Reina at ABC, he became convinced that the caption had to be wrong. Listening to the original videotape on ever better-quality playback equipment, he at last understood that instead of fracture, Golda Meir had said friendship. “Oh, my God,” thought Collyer, “we’re 180 degrees off, here.” Realizing that a mistake like this would all but inevitably become part of a captioned newscast at some point, Collyer decided to show the tape to Reina, blooper and all. When it appeared on the screen, Collyer pointed it out to him. “You know,” said Reina, “we have people here transcribing the news every single night, and that’s a word they couldn’t get either.”

The Captioned ABC News debuted in the wake of the Watergate hearings, on December 3, 1973. Because PBS stations could not broadcast the advertising that accompanied the network’s edition of the program, WGBH had to fill six, one-minute commercial breaks with other information. Every afternoon, a team at the Caption Center researched topics to fill these voids in the program. Some, like a weather map scattered with temperatures for major cities and sports scores, were regular features. Short takes on items of special interest to the deaf community—a piece on cochlear implants is one example—filled many of the breaks. Once, presidential candidate Jimmy Carter did a 60-second campaign pitch, which WGBH captioned and inserted between news segments. (Carter’s opponent, President Ford, declined PBS’s offer of a similar opportunity.) In 1980, WGBH even sent a 10-person production crew to film the Winter World Games for the Deaf, which preceded the Winter Olympics at Lake Placid, New York that year. Every day for a week, reporters, camera crews, and captioners produced a six-minute segment of Games coverage. Each night, WGBH edited the captioned broadcast of The Nightly News, joining the commercial breaks into a single six-minute gap in which to show highlights from competition earlier in the day.

WGBH’s rebroadcast of The Captioned ABC News was about as much programming as the networks would allow to be burdened with a feature that, in their view, a large number of viewers found irritating. Television executives simply refused to diminish the viewing experience for the vast majority of their audience to serve the special needs of a comparatively small deaf community. Even ABC’s Barnathan, sympathetic to the idea of captioning television shows so that deaf Americans could enjoy them, would not endorse the captioning of other programs, especially not during the after-dinner, primetime viewing hours. Some other way would have to be found. Remarkably, just such a development had been taking shape at the Time and Frequency Division of the National Bureau of Standards in Boulder, Colorado.


The Time and Frequency Division (TFD) was responsible for the care and feeding of the most accurate timepieces of the era—atomic clocks that used the vibrations of atoms as a precision pendulum to achieve accuracy within a few millionths of a second (microsecond). To distribute this information to those depending on it and to others who wanted to know what the time really was, TFD broadcast a time signal by short-wave radio from a station with the call letters WWV, near Fort Collins, Colorado. Anyone having a radio with a short-wave band potentially could tune in the signal to hear the distinctive electronic-sounding tick-tock of a simulated pendulum leading to an announcement of the time: “At the tone, the time will be 14 hours, 23 minutes.”

This method of distributing the time had its drawbacks. Not everyone in every place who tried to tune in the signal could receive it, either because the distance to the transmitter was too great or because atmospheric conditions such as thunderstorms or sunspot radiation could disrupt the signal. Beyond that, there was no predicting the path that the radio waves would take to a radio receiver from one day to the next as the signal ricocheted between earth and the electrically charged layers of the upper atmosphere. Slight differences in this path could affect the apparent time—not a satisfactory state of affairs for an agency charged with providing the same time, with equal precision, to everyone, everywhere.

In casting about for a better solution, James Jespersen, chief of the TFD—along with colleagues Dick Davis and George Kamas—took note of experiments performed in Germany and Czechoslovakia, which seemed to show that television signals could be used, in effect, to transmit time information accurately and reliably over considerable distances. Here’s why: whenever a television network such as ABC, NBC, or CBS transmitted a television signal from studios in New York to their affiliated stations across the country, it almost invariably followed the same route as all of the signals before it and after it.

This happy state of affairs arose because TV programs traveled to their destinations partly over wires and partly by way of extremely high-frequency, ricochet-free transmissions between radio towers separated by a mere 30 miles or so. Close observation by Jespersen, Davis, and Kamas over a period of 18 months or so demonstrated to their satisfaction that a signal took five thousandths of a second to travel 1,000 miles and that the arrival times of signals transmitted from network studios cross-country to a distant TV station varied by no more than a few microseconds.
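The five-thousandths-of-a-second figure squares with basic physics. As a quick check (my arithmetic, not the team’s), a signal traveling at roughly the speed of light covers 1,000 miles in a little over five milliseconds:

```python
# Back-of-the-envelope check of the propagation delay the TFD team measured.
SPEED_OF_LIGHT_MI_S = 186_282  # miles per second, in vacuum

def travel_time_ms(miles, speed=SPEED_OF_LIGHT_MI_S):
    """Ideal propagation delay in milliseconds."""
    return miles / speed * 1000.0

delay_ms = travel_time_ms(1000)  # roughly 5.4 ms; cables and relay hops add a bit
```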

Television transmissions, in addition to pictures and sound, included time pulses used to synchronize lip movements with the words heard by a viewer. If the pulses were generated according to an atomic clock installed at network headquarters, accurate time might be sent to any television set capable of receiving a network signal.

Television is much like a movie filmed at the rate of 30 images (frames) per second. Between each frame there is a short span of time lasting little more than a thousandth of a second, during which the picture tube prepares to display the next image in the stream. This pause, known as the vertical blanking interval or VBI, makes space for the TV signal to carry time pulses without disturbing the picture seen by the television audience. (Television sets of the era required manual adjustment of the picture’s vertical placement on the screen. Purposely misadjusting the set revealed the VBI as a black band. Time pulses and affiliate messages appeared as sparks of light in the band.)
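The length of that pause can be checked with standard NTSC numbers (the arithmetic below is an illustration, not a passage from the original): 525 scan lines per frame at 30 frames per second gives a line time of about 63.5 microseconds, and the roughly 21 blanked lines of each field add up to a bit more than a millisecond.

```python
# Rough arithmetic behind the vertical blanking interval (standard NTSC values).
LINES_PER_FRAME = 525
FRAMES_PER_SECOND = 30
VBI_LINES_PER_FIELD = 21  # scan lines blanked while the beam retraces

line_time_s = 1 / (LINES_PER_FRAME * FRAMES_PER_SECOND)  # ~63.5 microseconds
vbi_duration_s = VBI_LINES_PER_FIELD * line_time_s       # ~1.3 milliseconds
```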

Davis, Jespersen, and Kamas of the Time and Frequency Division saw in the vertical blanking interval a solution to an ongoing problem: how to set a distant atomic clock to match the time of a master atomic clock hundreds or thousands of miles away. In a practice typical of the era, a technician would first set the time of a portable atomic clock to that of the master, then travel by air with the 100-pound portable timepiece to the distant one. After setting it, the technician would fly home. Since all clocks, even the atomic variety, gain or lose time, this method required several such visits annually to every remote clock—an expensive and time-consuming process. The VBI would enable these same technicians to reset a remote clock as often as desired at little cost beyond a few hundred dollars for an electronic device called a decoder, which could display the time on a television screen.

The TFD team first published a description of their process in 1970, calling it TvTime. They began to promote their idea enthusiastically within the National Bureau of Standards, pointing out that the plan would provide a source of accurate time to anyone who wanted it. They lobbied the television networks, for without their cooperation, minimal as that might be, in placing time codes in their signals the idea could go nowhere. American Telephone and Telegraph (AT&T), whose wires and radio relay towers the networks used to carry television signals cross country, would have to allow the time codes to be transmitted without a surcharge to the television industry. Letters were written, evidence of TvTime’s usefulness and reliability offered, demonstrations arranged. Wrote one NBS staff member for a 1971 presentation of the system, “We have found that the TV distribution network is so good that it’s almost impossible to tell whether you’re looking at the atomic clock or the TV signal. In other words, TvTime can bring an atomic clock to anybody.”

But initial hope within the Time and Frequency Division that the concept would succeed ultimately faded to disappointment and even dismay as TvTime encountered opposition that proved to be insurmountable. In 1972, the National Association of Broadcasters, which represented network interests, recommended after studying the matter that TvTime not be adopted. Among other technical issues, the TV networks cited as arguments against TvTime the potential confusion caused by America’s multiple time zones, as well as the delayed broadcast of some programs from videotape, which would embed incorrect time codes in the vertical blanking interval. With the networks opposed, it became less and less likely that the Federal Communications Commission, the agency in charge of television’s technical standards, would agree to reserve space in the VBI for a precise time signal. It seemed that TvTime was dead, at least for the purpose originally envisioned for it.

However, Jespersen and his team were not prepared to abandon TvTime. From the earliest days of their research in 1967, they had demonstrated that the vertical blanking interval could carry information other than a time signal. For example, a network could send text bulletins, concealed within the VBI, to its affiliates, which would be equipped with devices that could display the hidden communications. Unseen by viewers, such messages could alert network television stations to programming changes, news bulletins, and the like. And that’s the capability that the TFD team emphasized when they again pitched their technology to the television networks. Traveling to New York in October, 1971, they staged a test of the technology at the headquarters of the American Broadcasting Company. ABC’s Julius Barnathan was on hand for the demonstration, which included the transmission of timing information as well as text messages to ABC affiliates. Upon seeing the demonstration—and much to the surprise of the TFD team—Barnathan immediately suggested that the National Bureau of Standards had on its hands what appeared to be an ideal way to provide captions for deaf and hard-of-hearing members of the TV audience while inconveniencing none of the hearing audience.


Elsewhere, plans were already well under way for the first National Conference on Television for the Hearing Impaired, to be held in December at the University of Tennessee in Knoxville. The hope behind the conference was that, when leaders of television met with leaders of the deaf and hard-of-hearing community, some way might be found to make TV more accessible to those who could see it but who could hear it poorly or not at all. The National Bureau of Standards staff immediately began a successful effort to add a demonstration of TvTime to the conference agenda.

As a University of Tennessee press release noted somewhat dryly of the event, “the flip of a switch on a special television monitor at UT allowed deaf viewers to see subtitles superimposed over a regular television program.” But the TvTime demonstration of hidden captions surely upstaged the sample episode of The French Chef—in which the captions could not be turned off—that WGBH had brought to the colloquium. TvTime also overshadowed an alternative system for hidden captions, developed by a company called HRB-Singer, that threatened both to encroach on the television picture and to shorten the lives of television tubes. One viewer, writing in the periodical Television Watching, enthused that TvTime “was the highlight of the conference. This technical breakthrough might be considered the ‘moon shot’ for the millions who never heard the words ‘one small step for a man.’”

Malcolm Norwood from HEW’s Captioned Films Branch also witnessed the demonstration. At Jespersen’s behest, he set wheels in motion at HEW that would lead to $1 million in federal support for hidden-caption initiatives. Norwood also encouraged the TFD team to stage a demonstration of TvTime at Gallaudet University, a school for deaf students in Washington, D.C. Gallaudet’s proximity to the seat of government would make it a convenient location for congressional leaders and other influential individuals to see the demonstration.

Whereas in Knoxville only a short segment of a TV program had captions, for the exhibition at Gallaudet a full, half-hour episode of The Mod Squad—a contemporary ABC crime drama featuring a trio of “hippie” undercover police officers—would be the fare. Sandra Howe, Jespersen’s secretary, worked from an advance copy of the script and videotape of the program to write all of the captions, which were in turn punched into a strip of paper Teletype tape. As the demonstration began on February 15, 1972, Howe sat in the ABC television studio in New York. Her job was to regulate the speed of the paper tape through a tape reader so that the captions appeared on the screen, synchronized with the action as ABC transmitted the program to its viewers.

Jespersen had traveled to Washington to connect the decoding equipment to the television set at Gallaudet and watch the show. It was a profoundly moving experience. “Nothing,” said Jespersen in Television Watching, “could match the growing excitement of the students as a whole new world opened up to them. Many motioned with their hands; others had tears in their eyes as they watched the show. For the first time, they could actually understand the story.”

Yet, just as the National Bureau of Standards and its Time and Frequency Division seemed poised to bask in the limelight, NBS decided to abandon the project. Try as he might, Jespersen could not persuade his organization to continue. “TvTime,” he wrote to the NBS Director in April, “has received much national publicity; has generated inquiries to Congress, to the FCC, and even to the White House; and has received the enthusiastic endorsement of at least one major TV network. Various organizations for the deaf are especially excited about the TV captioning prospects. If the proposed system dies at this point from lack of funding support, it seems unavoidable that NBS will receive somewhat of a ‘black eye.’”

All to no avail. Captioning, in the Director’s opinion, simply fell outside the Bureau’s purview. For several years, the Time and Frequency Division would continue to provide support in the form of technical advice to whoever wished to pursue the matter. They also would design and build prototype caption encoders and decoders, but that was to be the extent of their involvement. TvTime, if it were to fulfill its apparent promise as a captioning technology, would have to find another mentor—and other sponsorship. Beginning in September 1972, the Public Broadcasting Service would become the mentor, and the Captioned Films Branch of HEW under Malcolm Norwood, the sponsor.


Although the National Bureau of Standards had demonstrated conclusively that the vertical blanking interval could carry captions to the television audience, many of the details of implementing such a system had yet to be worked out. Of the 21 lines available in the VBI, none had been thoroughly tested yet for the job of caption-bearer. No one knew how many lines of the VBI captioning would require in order for the words to keep pace with the images on the screen. Furthermore, the Federal Communications Commission had not approved the use of the VBI for captioning. The challenges of mass producing and of marketing decoders numbering in the thousands had yet to be addressed. Many months passed before there was agreement even on what to name the new technology. This awkwardness ended when Willard Rowland, director of Long-Range Planning and Research at PBS, coined the term closed captions—closed to viewers without a decoder—to describe the service for the deaf community that PBS had committed itself to. Henceforth, the kind of captions that WGBH had added to The French Chef and The ABC Nightly News would be called open captions.
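As an aside for technically minded readers: the caption coding that line 21 eventually carried, standardized years later as CEA-608, sends two characters per video field, each a 7-bit code topped with an odd-parity bit. The sketch below illustrates that later scheme; it is not a description of the 1972-era prototypes.

```python
# Illustration of line-21 character coding as later standardized in CEA-608:
# two 7-bit characters per video field, each carrying an odd-parity bit.

def add_odd_parity(char: str) -> int:
    """Pack a 7-bit character code with an odd-parity bit in bit 7."""
    code = ord(char) & 0x7F
    ones = bin(code).count("1")
    parity = 0 if ones % 2 == 1 else 1  # force an odd total number of 1 bits
    return code | (parity << 7)

def encode_field(pair: str) -> bytes:
    """One video field carries exactly two caption bytes."""
    assert len(pair) == 2
    return bytes(add_odd_parity(c) for c in pair)
```

A decoder can then reject any byte whose bit count is even, a simple guard against transmission errors.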

In its effort to launch a closed-captioning service, PBS advanced on all of these fronts more or less simultaneously. Within weeks of having taken over from the National Bureau of Standards, PBS began testing not only NBS’s TvTime system, but also another method of closed-captioning that had appeared in the meantime. Developed by Hazeltine Research, Inc., this competing technology made no use of the VBI. Instead, it added a new component to the television signal that would carry the captions. Internal testing continued for two years or so.

PBS, satisfied by 1974 that it had perfected the two systems in the laboratory, was ready to try them in the real world of television. From the FCC, PBS sought and received temporary permission to transmit captions on line 21 of the VBI, as well as by means of the Hazeltine process, to selected PBS stations distributed across the United States—from Topeka, Kansas to Spokane, Washington to Jacksonville, Florida. More than 1,400 deaf and hearing-impaired viewers, invited by the stations to watch the programs, filled out market-research questionnaires about the experience.

The results of the tests steered PBS away from the Hazeltine approach and toward adopting the line-21 modification of the original NBS idea. When Gallaudet University analyzed the survey of those who participated, statisticians found that 90 percent of respondents who were deaf or had significant hearing loss felt that captions were a marvelous enhancement of their television-watching experience. An even greater percentage expressed interest in buying a decoder when it became available.

Parsed more closely, the survey results suggested that initially as many as 400,000 decoders at a price of $250 might be sold to those who had no hearing at all. The much larger audience with some ability to hear appeared ready to purchase an additional 350,000 of the devices. Television sets with a built-in decoder could replace worn-out sets within the deaf and hard-of-hearing community at the rate of at least 100,000 per year. These numbers were encouraging, certainly large enough to realize important economies of scale in the production of decoders, either as stand-alone components to be connected to existing TVs or as circuitry built into the sets themselves. With these results in hand, PBS petitioned the Federal Communications Commission to reserve line 21 of the vertical blanking interval for the purpose of transmitting closed captions. On December 8, 1976, the Commission agreed.

Assured of a home for closed captions in the VBI, the Public Broadcasting Service turned to the matter of hardware. For the decoder’s electronic innards, PBS journeyed down two roads. They first asked Texas Instruments, a well-known maker of ICs, or integrated circuits—silicon chips engraved with complex circuitry—to design and prepare prototypes of ICs around which to build decoders. Later, PBS hired Rockwell International to pursue a different approach, in which a small computer chip, or microprocessor, instructed by suitable software, would do the decoding. Using mostly off-the-shelf circuitry, the Rockwell decoder promised to be less expensive than the customized ICs from Texas Instruments. But a decoder based on a microprocessor might also be slower.

In the end, PBS awarded the contract for the first decoders to Texas Instruments. General Instrument Corporation, which specialized in television technology, would assemble the ICs and other components into a set-top decoder christened TeleCaption. Sears, Roebuck and Company—the country’s largest retailer at the time, with hundreds of stores and a prosperous nationwide mail-order operation—would become the exclusive dealer for the set-top decoders, as well as a 19-inch television set with the decoder circuitry built into it.

While arranging for the decoder side of closed-captioning, the Public Broadcasting Service also developed a caption-editing console that facilitated the captioning of TV programs. Working with funds provided by HEW, PBS assembled the console around caption encoders, built by an electronics firm called EEG Enterprises, and a small computer for which PBS developed suitable software. These programs enabled a caption editor to type an advance copy of a script as a series of captions—each with its own number—that the computer stored on a floppy disk. From there, the editor called up the captions on a video monitor displaying the television program to be captioned. By means of a second keyboard and a light pen, the editor could alter the captions to better fit the space available for them and change the position of a caption on the screen, details that became part of each numbered caption on the floppy disk. The final step of the process was to synchronize the captions with the picture. An editor accomplished this by pressing a button on the console first to display and then to remove each caption in turn, according to the progress of the action on the screen. The result was a collection of captions, each of which included time codes gleaned from the corresponding video. They acted as cues to the captioning system at the television studio to call for each caption as its turn came to be encoded into line 21 of the vertical blanking interval. PBS arranged for the construction of 20 such consoles.
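The console’s output can be thought of as a list of numbered captions, each carrying its edited text, its screen position, and the display and removal time codes captured when the editor pressed the cue button. A minimal sketch of that data model in Python follows; the field names and timing scheme are illustrative assumptions, not NCI’s or PBS’s actual disk format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Caption:
    number: int   # caption's sequence number on the floppy disk
    text: str     # caption text as edited to fit the screen
    row: int      # screen row chosen by the editor with the light pen
    start: float  # time code (seconds) when the editor displayed it
    end: float    # time code (seconds) when the editor removed it

def active_caption(captions: list[Caption], timecode: float) -> Optional[Caption]:
    """Return the caption cued for the given video time code, if any.
    This mirrors the studio system calling up each caption in turn."""
    for cap in captions:
        if cap.start <= timecode < cap.end:
            return cap
    return None

captions = [
    Caption(1, "HELLO, HOW ARE YOU?", row=15, start=1.0, end=3.5),
    Caption(2, "FINE, THANKS.", row=15, start=3.5, end=5.0),
]
```

With time codes attached, playback at the studio reduces to a lookup: as the program runs, the system encodes whichever caption is active at the current time code into line 21.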

An important question remained. What organization would do the actual captioning of TV shows once the equipment, processes, and procedures were ready to go? PBS had no interest in immersing itself in the day-to-day minutiae of captioning; it saw itself as the steward of a television quasi-network that left the creation of television programming to others. The major networks were more than reluctant to take on the task, arguing that it would add intolerably to the already hectic business of keeping to a tight broadcast schedule.

To address this problem, PBS hired Arnold & Porter, a Washington, D.C., law firm. Myron Curzan, one of the firm’s attorneys, teamed up with John Ball, Vice President for Engineering at PBS, to organize a study group. From their investigations emerged a number of considerations that would begin to suggest an answer. For example, not only were the TV networks unenthusiastic about captioning their own programs, they intended that whoever did the work should not profit from it, inasmuch as the networks would offer their programs free of charge for captioning. The initial scarcity of captioning consoles argued for a single captioning service where the equipment would all reside. Uniform training of caption writers and editors in this new craft could best be accomplished if captioning equipment and expertise were concentrated in one place.

The WGBH Caption Center might have seemed an obvious choice. As a public television station, WGBH was a non-profit organization. It had years of captioning experience under its belt—albeit with open captions—and the Boston station was eager to take on the work. But the major networks saw WGBH, despite its not-for-profit status, as a competitor in the quest for viewers. For that reason, the networks would not accept WGBH as the provider of closed captioning. Furthermore, the study group recommended that the captioning be based in Washington, D.C., in order to simplify the new organization’s communication and consultation both with PBS, its parent, and with the Department of Health, Education and Welfare. HEW, which had already given PBS more than $5 million toward the development of closed-captioning technology with more to come in the future, would have to approve any plan for a captioning agency.

After months of study-group meetings and deliberations, HEW agreed not only with Curzan’s and Ball’s broad conclusions, but also with the details of the emerging organization’s goals, responsibilities, and governance. HEW even adopted the name proposed for the captioning outfit. Despite vigorous remonstrations from the WGBH Caption Center, HEW secretary Joseph Califano announced in the spring of 1979 the formation of the National Captioning Institute—NCI.

“The mission and importance of NCI were clear from the beginning,” notes NCI’s thumbnail history of itself. “It was to promote and provide access to television programs for the deaf and hard-of-hearing community through the technology of closed captioning.” Yet the National Captioning Institute, like many new undertakings, consisted of little more than a skeletal outline of what NCI was to become. Fleshing out these bones in the ensuing years would present many challenges.


Copyright National Captioning Institute, 2004
