Museums and the Web 1999

Best of the Web

Archives & Museum Informatics
2008 Murray Ave., Suite D, Pittsburgh, PA 15217 USA


Published: March 1999.


From the Mountains of the Moon to the Neon Paintbrush: Seeing and Technology

Peter Walsh, Davis Museum and Cultural Center, USA

My starting quotation is this one, first published in 1929:

One of my never-to-be-forgotten experiences was circumnavigating New York in a boat. The trip took all day… One who has not seen New York in this way would be amazed at the number of people who live on the water. Someone has called them 'harbour gypsies.' Their homes are on boats - whole fleets of them, decorated with flower boxes and bright-colored awnings. It is amusing to note how many of these stumbling, awkward harbour gypsies have pretty feminine names - Bella, Floradora, Rosalind, Pearl of the Deep, Minnehaha, Sister Nell. The occupants can be seen going about their household tasks - cooking, washing, sewing, gossiping from one barge to another, and there is a flood of smells which gives eyes to the mind. The children and dogs play on the tiny deck, and chase each other into the water, where they are perfectly at home. (Keller, 1998, p. 506)
What is remarkable about this passage is not so much its content as its author. It was written by Helen Keller. Keller, I hope you will remember, lost both her sight and hearing in very early childhood. She learned to "hear" others by interpreting the letters they traced on her hands. I have, in fact, omitted this sentence from Keller's account: "I had with me four people who could use the hand alphabet - my teacher, my sister, my niece, and Mr. Holmes."

Part of the scene Keller is able to glean from sensation and smell, which, she notes, gives "eyes to the mind." The rest she has constructed from the laborious hand notations of her companions.

I cite this case to make a fundamental point about seeing. It is not done just with the eyes. I am making a distinction here between eyesight or light perception through the optic nerve, and seeing. In humans, seeing involves all the senses and the mind.

This is hardly a new observation. Marshall McLuhan pointed it out and he, in turn, was referring back to the work of Francis Bacon (Marchand, 1998). But it is something that we have a strong tendency to overlook.

To people with normal eyesight, achievements such as Keller's can even seem suspect, a species of fraud or parlor trick. But Keller's case is hardly unique. Another writer who lost his eyesight in early childhood is the distinguished Indian-American author Ved Mehta. Not wanting to be known as a "blind author," Mehta writes, even more than Keller does, as if he could see with his eyes as well as his mind.

In one of his memoirs, Mehta quotes Herbert L. Matthews's New York Times review of his second book. He found Matthews's comments on his blindness particularly painful. Matthews says:

Ved Mehta plays an extraordinary trick on his prospective readers. Mr. Mehta, a Punjabi Hindu, now 25 years old, has been completely blind since the age of 3. He has written this book about his return to India after ten years' absence as if he had normal vision… He cannot help his blindness and has, indeed, turned it by a miracle of will power and courage into something resembling an asset, but he could not hope to write about India as if he were not blind. (Mehta, 1998, p. 51-2)
Mehta comments: "Matthews's main point was: How dare a blind person write as if he could see? Isn't writing in that way dishonest?" (Mehta, 1998, p. 53) To a friend, Mehta answers his own question: "I live among the sighted. I dress, I eat, I walk with the sensibilities of the sighted in mind. I hear the talk of the sighted from morning to night. My whole inner life is made up of visual assumptions." (Mehta, 1998, p. 56) Later, he writes of "piecing together a world of five senses by the diligent use of four." To prevent him from writing as he chose would be like preventing a deaf Beethoven from composing. (Mehta, 1998, p. 59)

Mehta, moreover, seems not only to see the visual world but to understand and value its full range of signs and symbols, even at great expense to himself. Early in his career, he moved into the famous Dakota on New York's Central Park West. He writes that:

The entire apartment needed a coat of paint and I had to pay for that. I felt that for my books I must have built-in bookcases with a special kind of molding to match that of the apartment, and obtaining them required the services of a cabinetmaker. More important, none of my… furniture was appropriate: the Dakota seemed to call for antique furniture… no sooner had I moved in, with nothing but my books and records and a couple of pictures, than I was caught in a web of rug merchants and antique dealers, drapers and upholsterers, and the money was draining out of my savings account as if I were a rich traveller recklessly throwing gold coins in the Fontana di Trevi. (Mehta, 1998, p. 229)
Though Mehta cannot see in the conventional sense, he finds himself caught up in the full symbolic world of visual status. A few months later, he reports,

I was just settling in with some of my newly acquired possessions - an eighteenth-century dining table and some Georgian silver. Given my precarious life as a writer, I have no simple explanation of why I went in for such trappings; either I was always trying to put up a dazzling front in order to guard against any unwarranted pity or I was just born extravagant. (Mehta, 1998, p. 234)
But just as it is possible to see, and even act visually, where there is no eyesight, it is possible not to see where there is. The American writer Paul Theroux tells the story of "a certain New Yorker" who, because Mehta's writing described the visual world with such vividness and nuance, doubted that he was really blind.

Seeing Mehta at a party, holding forth in front of an attentive audience, the doubter decided to test him. He crept up to Mehta, began making faces at him, waved his hands over his eyes, thumbed his nose, and did everything he could to distract and interrupt him. Finally he put his face right up to Mehta's and stuck his tongue out. Through all of this, Mehta went right on talking, calmly and articulately, without the slightest sign he was distracted.

Finally, the doubter, humiliated by his own behavior, left the party. As he went out, he said to his hostess, "I had always thought Ved Mehta was faking his blindness, or at least exaggerating. I am now convinced that Ved Mehta is blind."

"That's not Ved Mehta," the hostess replied. "It's V.S. Naipaul." (Theroux, 1998, p. 279)

I ran across a quotation recently that succinctly states this first point I am trying to make. John Dugdale is a photographer who lost most of his vision from AIDS-related CMV retinitis but continues to work. "I absolutely have a full, clear visual picture of everything that I photograph," says Dugdale. "That really starts inside my head, because eyesight and vision are completely different." (Photography in New York, 1998)

This brings me to the subject of technology. The essential point of this talk is that technology and seeing are closely linked. Technology changes and, in some cases, even forms what we see.

I am defining technology very broadly here. I am going back to the Greek root τέχνη (technē), which means "art" - art in the broadest sense that encompasses fine art, craft, and technique. I will be talking about technology as any artificial means that relates to seeing. Cave painting, under this definition, is technology, as is the written alphabet. And of course photographs, moving pictures, television, and the World Wide Web are also technology. All such technologies are directly linked to the natural capacities of human beings.

To show you what I mean, let me use the case of Galileo and the mountains of the moon.

In 1610, Galileo published a book with the title Sidereus Nuncius, usually translated as "The Starry Messenger." In the summer of 1609, Galileo had heard of an odd Dutch invention, a piece of optical technology that made distant things appear closer than they really were. In the Starry Messenger, Galileo describes in detail how he adapted, improved, and greatly enlarged the invention and used it to look at the moon.

What happened next is one of the great moments in the history of science for, in examining in detail the various spots on the moon's surface, Galileo came to an astonishing conclusion. He wrote:

I have been led to the opinion and conviction that the surface of the moon is not smooth, uniform, and precisely spherical as a great number of philosophers believe it (and the other heavenly bodies) to be, but is uneven, rough, and full of cavities and prominences, being not unlike the face of the earth, relieved by chains of mountains and deep valleys. (Drake, 1957, p. 31)
Galileo's conclusion doesn't seem so remarkable to us because we take his telescope, and much later technologies, for granted. So we see the mountains of the moon, even with the naked eye, without much difficulty. But Galileo's contemporaries saw the moon and the entire universe differently because, for many centuries, they had been told that the cosmos consists of a series of perfect spheres, all of them revolving around the earth, which stood motionless at the center of creation.

Besides the moon, Galileo discovered a number of other things, including several moons of Jupiter and many previously unseen stars, that challenged the view of the cosmos that had been established since Aristotle. Those of you who remember your history of science know that Galileo's observations came in the middle of what has been called "The Copernican Revolution," which changed the way human beings see the universe.

The Copernican Revolution is, I think, a great deal more complicated than people generally think. It is not quite the clear-cut "paradigm shift" that Thomas Kuhn and others would have us believe. I'm going to attempt a quick summary of what was really going on because I think it makes an interesting illustration about the relationships between seeing, vision, thinking, and technology.

The idea that the sun, not the earth, was the center of the solar system had actually been around for a very long time, probably since the Greeks and certainly since the late Middle Ages. The problem was that the idea was very hard to see: the optical illusion that the sun and stars revolve around the earth is very strong. The clue to the truth, however, was the planets. Because the planets actually revolve around the sun, not the earth, they behave oddly when viewed from the earth. Instead of rising and setting like the sun and moon, they sometimes appear to slow down, stop in their courses, and even move backwards from time to time. This is why the ancient Greek word for planet, πλανήτης (planētēs), also means "wanderer."

How to explain the movements of the planets if they are supposed to revolve around the earth, as was believed, in perfectly circular orbits (actually, they were imagined as set in revolving crystal spheres)? Well, it wasn't easy. You had to imagine perfectly circular suborbits or "epicycles" within the orbits. As observations of the movements of the planets became more precise and detailed, the number of epicycles needed to account for them grew. By the time of Copernicus, the system required 83 epicycles within eight perfect crystal spheres, all kept in motion by a hierarchy of angels. (Hanson, 1967, p. 221-2)

Everyone realized that the epicycles were pretty untidy. Alfonso X of Castile famously summed up the whole mess: "If the Lord Almighty had consulted me before embarking on the Creation, I should have recommended something simpler."

Copernicus' model of the universe was only partly right. He put the sun instead of the earth in the center. But, because the heavens were believed to be perfect and unchanging, he still assumed that all the orbits were perfectly circular. So he still needed 17 epicycles to explain how the solar system worked.

This was progress, you might say, but not quite the full revolution. It wasn't until 54 years after Copernicus' death, when Johannes Kepler, after an immense amount of work, figured out that the orbits were not circular but elliptical, that the epicycles were no longer needed.

As a young man Kepler was assistant to the Danish astronomer Tycho Brahe and later used Brahe's very precise astronomical observations in his work. Brahe's observations were actually so accurate that he refused to believe their implications: his calculations indicated that, if the earth moved, the stars would have to be millions of miles away. That was clearly impossible. So he concluded that the planets revolved around the sun, but the sun and everything else revolved around the earth, which stood still at the center of the universe.

Copernicus circulated his ideas in manuscript form for years. But he published his famous book placing the sun in the center of the solar system in 1543, the year he died. No great controversy followed. The book was a dud. Its first printing of a thousand copies never sold out and it had altogether four reprintings in 400 years.

Copernicus' work has been called "an unreadable book describing an unworkable system." (Koestler, 1967, p. 329) Part of the reason may have been that Copernicus was not trying to start a revolution but merely to clean up the messiness of the old model of the universe. The result was as much a fudge as a revelation.

Twenty-one years after Copernicus' death Galileo is born, in 1564. Thirty-three years later Kepler publishes his book on planetary motion. Another dozen years pass and Galileo builds his first telescope. It is now sixty-six years after Copernicus' death and Galileo himself is forty-five. Where is the revolution?

The first printing of "The Starry Messenger" sold out immediately. The book had two major effects. It made Galileo an instant celebrity and it created a technological fad. The demand for telescopes, especially ones made by Galileo, increased dramatically. The fascination with the new technology was so great that, in Florence, a courier was mobbed by people who thought his package from Galileo must contain a telescope; they demanded he open it at once. (Drake, 1957, p. 59)

The early controversies around Galileo's discoveries focused on the telescope, not so much on his observations. Galileo's discoveries were said to be illusions created by the technology, not things that really existed. One adversary said the moon only appeared to have a rough surface. It really was as smooth and spherical as Aristotle had claimed: the mountains and craters were just covered with a smooth, transparent material, which the telescope could not detect. Others argued that nothing new could exist in the heavens because astrologers had already accounted for everything in the sky that could have any effect on the earth. Still others just refused to look through the telescope at all. (Drake, 1957, p. 73)

It was not until Galileo published his book on sunspots, in which he fully endorsed the Copernican system and reinforced it with his observations, that he got into serious trouble with the authorities. Even then, the process of officially condemning his ideas took a long, tortuous, and even half-hearted path over two decades, during which he published several more works.

Central to Galileo's dangerous idea of scientific truth was his notion that it was accessible to any reasonably intelligent person. Of these ordinary observers, he wrote, "I want them to see that just as nature has given to them, as well as to philosophers, eyes with which to see her works, so she has also given them brains capable of penetrating and understanding them." (Drake, 1957, p. 84)

Galileo understood that his version of truth was created out of a combination of technology, logic, and the senses. Just as Helen Keller's friends conjured up for her the Manhattan waterfront, so Galileo conjured up for the world a whole new cosmos.

Thus follows my second point: technology can change the way people see, completely and forever. The telescope has changed so utterly the way we see the universe that the old system of perfect spheres and epicycles is difficult for us even to imagine much less see in the sky.

So, you might say, technology helps us to see things as they really are. Not necessarily, as this comparison will explain.

Here we have two European maps of Africa. The one on the left was made in the late Renaissance. Although its outlines are rather crude, you can make out the main features of the continent: Lake Victoria, the Nile River, and its sources in the east, the Niger River in the west. Even some of the major cities of the interior, such as Benin, are clearly marked out.

The map on the right is a French map made in the nineteenth century. Written across the middle of the continent are large letters spelling out the French words meaning "INTERIOR PARTS almost entirely UNKNOWN."

When Professor Craig Murphy, a political scientist at Wellesley College, first showed me this comparison I had a reaction that is probably similar to the one you are having right now. "Wait a minute," I thought. "If Europeans knew the interior of Africa in the Renaissance, how could it be 'unknown' in the nineteenth century?"

In fact, Professor Murphy explained, Europeans before the Age of Discovery were quite familiar with the interior of Africa.

In 1858, John Speke, the British explorer, traced the White Nile to its origins and found Africa's largest lake, Victoria Nyanza. Europe hailed him as the victor in the centuries-long search for the source of the Nile. But if Speke had bothered to check the map collection at the University of Edinburgh before he left for Africa, he would have found at least half a dozen European printed maps, all of them more than two centuries old, that already located Victoria Nyanza and the source of the Nile just where he was about to "discover" them. (Murphy, 1996)

How did modern Europe manage to forget what it had known in the Renaissance? Part of the reason is political. European knowledge of Africa came largely through Arab traders who traveled there on business. At the end of the fifteenth century, with the fall of the Arab dynasty in Spain and the nearly simultaneous collapse of the Byzantine Empire in the East, centuries of contact between the European and Arab worlds were broken.

These political events, of course, did not destroy all the maps. The maps were discredited - by the new technology of map-making.

The Renaissance maps were not maps in the sense we use today. They were in effect illustrations of travelers' accounts, cataloguing major landmarks and putting them in rough approximation in symbolic space. They were comparable to the kind of maps you jot down to remind yourself of a friend's verbal directions: "follow route 10 south to the big Mobil station and then ask." Such maps are perfectly suitable to travelers over land, who plot their progress from town to town and landmark to landmark with the help of local informers. But they are not nearly so helpful if you are traveling by sea.

When the Portuguese sailors began to explore the coast of Africa, the old land travelers' accounts were pretty useless. Instead, Portuguese navigators developed their own maps. They used the modern science of cartography, which uses observations with precise optical instruments in conjunction with mathematics to create carefully scaled representations of landmasses and ocean. These new maps were linked with modern navigation techniques that used further technology to make careful astronomical observations. The whole process was an enormously effective tool in the European exploration of the world.

The Portuguese did not map the interior of Africa because they could not see it from their ships, and rarely traveled very far from the coast. Thus the centers of their African maps were blank and, since they were no longer considered "scientific" and were no longer confirmed by new travelers' accounts, the old maps were forgotten. Thanks to the new technology, Europeans were no longer able to "see" Africa.

This "erasure" of Africa had, in turn, some very important political implications. Africa, for Europeans, became the "Dark Continent," just so much empty unclaimed space. This spurred the nineteenth century's "rush for Africa," in which almost the entire continent was "discovered" and carved up by European powers.

In Asia, Europeans tended to conquer intact kingdoms and nation-states. Africa was divided "by the map," that is, in straight lines that disregarded pre-existing language, ethnic, or political divisions. This process had devastating effects on colonial and post-colonial Africa, effects that continue to this day. And, to me, the most astonishing thing of all is that this all happened from the effects of a new technology that no one even noticed was having an effect.

The story of the African maps illustrates my third point: technology can have unexpected, unpleasant, and even unnoticed effects on the way we see. Even an apparently benign or at least politically neutral technology like cartography can have real and devastating implications for the lives of millions and millions of people.

In the last two hundred years, a cascade of new technologies - lithography, photography, motion pictures, television, radar, computers, even sound recordings and radio - has helped change the way human beings see. I'm going to skip over most of these to get to the World Wide Web. One of the axioms of this paper is that the World Wide Web is a kind of apotheosis of visual technologies, uniting such media as motion pictures, the printed book, the computer, and the sound recording, all with their attendant effects. (see Walsh, 1998)

I tried to think of a catchphrase that encompassed the emerging social psychology of the World Wide Web. In the end, I came up with two words that surprised me. I want to make clear before I say them that I don't intend any value judgement here.

The two words are "children's literature."

Let me repeat, I don't intend the catchphrase to have pejorative implications.

"Around the Campfire."

Inna Costache of Loyola University has pointed out that there is an important difference between contemporary computer images and earlier ones: that is, they glow. "Unlike other forms of reproduction," she has written, "the seductive brightness of the [computer] screen and attractive graphic patterns… have transformed the interface into a surrogate primary source." (Costache, 1998).

The advent of the World Wide Web coincided with the introduction of cheap, high-quality color computer monitors, so that much of the Web appears as if painted with a neon paintbrush. Thus, the Web is cozy, warm, and small. It is a flickering substitute for a nice wood fire and seems to draw the same rapt, uncritical attention.

I wonder also if this effect has anything to do with the dramatic changes in the public image of the computer over the last few years. In popular literature and film, up to and including Stanley Kubrick's 2001: A Space Odyssey, computers were huge, distant, cold, and inclined towards dangerous malfunctions. They were inhuman and counter-human, often resentful and subversive towards their creators.

Little of this fear seems to attach to the web, which so far is largely seen as helpful, friendly, imaginative, comforting, unthreatening, and, if anything, slightly comic. The Web and the Teletubbies seem to have a lot in common.

"The Picture Book."

A number of commentators on the new technologies have suggested we are moving from a text-based culture to a more visual one. In fact, as McLuhan points out (McLuhan, 1962), written language is essentially a visual medium and the advent of printed text in particular helped convert Western culture from a "hot" or emotional oral and aural culture to a "cool" or more detached visual culture.

What I'd like to suggest is that what is really happening on the World Wide Web is something more like a picture book, in which written words and visual images have equal weight and roughly equivalent symbolic value. The typical World Wide Web page is a flashing, glittering collage of text and images, and the boundaries between the two tend to dissolve. Text is treated visually; images are treated as signs and codes. A Web page is not meant to be read sequentially - or diachronically, as the structuralists put it - but simultaneously and synchronically, much like the pages of an especially frenetic picture book. The images are clues to the text and the text is a clue to the meaning of the images. Both are intuited as much as consciously understood. Browsing through the Web thus is a bit like walking through Times Square.

"The Magic Wardrobe."

This theme is named for C. S. Lewis's classic children's fantasy, The Lion, the Witch, and the Wardrobe. The children in the book walk into the back of an old wardrobe and find themselves magically transported into a fantasy world called Narnia, where magic exists and where the children, moreover, are really kings and queens. The passage of time in Narnia and on earth is such that the children are able to have fantastic adventures, grow up, and rule for many years as Narnia's royal family. Yet, when they go back through the wardrobe, they find that they are still children and only a few hours have passed back on earth.

In other books, the children are able to return to Narnia for long periods of time without aging or missing out on the normal chronology of their ordinary lives. In one of the books in the saga, Lewis even describes the fantasy equivalent of a hyperlinked web page: a mysterious in-between world filled with pools. To jump into any one of the pools is to enter a different world. (Lewis, 1988)

In the same way as in C. S. Lewis's fantasy, the Web promises magic portals to other worlds in which one can play many roles more exciting than the ones offered us in normal life, yet which take nothing - not even time - from our off-line existence. This strange sense that on-line existence is somehow suspended from off-line life seems even to be a factor in the newly-identified syndrome known as "computer addiction." Psychiatric professionals are even beginning to offer treatment for the addiction (McLean's, 1997).

"Ghost Stories."

The Catholic philosopher of modernism, Teilhard de Chardin, described the ghost story aspect of technology as early as 1959. He wrote that: "…thanks to the prodigious biological event represented by the discovery of electro-magnetic waves, each individual finds himself henceforth (actively and passively) simultaneously present, over land and sea, in every corner of the earth." (Teilhard de Chardin, 1959, p. 240)

What Teilhard de Chardin describes here is the realization, through technology, of the age-old fantasy of being a spirit, that is, of mind and perception separated from the physical limitations of a body. McLuhan described the same phenomenon as "disembodied man." (McLuhan, 1962)

During the twentieth century, technology has allowed people to float, more and more completely, to distant places and there to witness events, large or small, as they were actually taking place. At first, with radio, these technological ghosts were only able to hear; later, with the advent of television, they were able to see as well. Now, with the Internet, they have begun to speak and are gradually forming the illusion of a physical form, just like the ghosts in folk tales and children's stories.

The significance of this ghostly existence, paralleling our physical existence, should not be underestimated. Radio allowed human beings to listen in on the crash of the Hindenburg and World War II. Television permitted real-time travel to the assassination of Lee Harvey Oswald, the riots of the 1968 Democratic Party Convention, and the Vietnam War. Now the Internet allows out-of-body voyages to museums and outer space alike and even permits us, like witches and vampires, to materialize inside private homes. The vividness of these electronic visitations has had a profound effect on our social and political consciousness, as these few examples should make clear. The notion of "face to face" begins to take on new meanings and even to seem rather quaint.

"The War of the Worlds."

In 1962, as if anticipating the current state of the World Wide Web in the late 1990s, McLuhan wrote that "instead of tending toward a vast Alexandrian Library, the world has become a computer, an electronic brain, exactly as in an infantile piece of science fiction." (McLuhan, 1962, p. 32) Just such a scenario is the subject of a classic science fiction story written by Harlan Ellison and originally published in 1967. (Ellison, 1979) In Ellison's dark fable, all the computers in the world link themselves into one vast, malevolent intelligence that turns on its human creators. Apocalyptic science fiction seems to linger throughout the Web, from the Y2K bug to the "Heaven's Gate" fantasies about extraterrestrial beings. Perhaps this slightly darker side of the Web provides the enjoyably scary tales that balance the coziness I mentioned earlier.

"Jack the Giant Killer."

Bruno Bettelheim pointed out the great significance, in children's stories, of the weak, small, and powerless standing up to and overcoming the strong, great, and mighty. (Bettelheim, 1976) The story of Jack and the Beanstalk is only one example of many children's stories in which the hero manages to triumph over beings that are far more powerful. Bettelheim explains that the fantasy helps children compensate for their dependency on the adult world and imagine the time when they come into their own adult powers.

The quintessential Jack of the Internet beanstalk is Matt Drudge, creator of the infamous Web site, The Drudge Report. Matt Drudge, who before the advent of the World Wide Web was a convenience store clerk, created The Drudge Report in a cramped Hollywood apartment. Not constrained by normal journalistic standards or ethics, Drudge has been able to scoop the media giants with his blend of email-based gossip and Internet innuendo. His role in breaking the Monica Lewinsky story shook up the White House and traditional journalism alike. By the sudden, reverse lens effect of the Web, he made himself a person to be noticed and to contend with. (McClintock, 1998)

As the story of Galileo's telescope testifies, this is not the first time technology has aided those who would challenge established authority. With the Web now surrounded by would-be censors, and Drudge himself the subject of libel suits, it remains to be seen how long the Jack aspect of the Web will survive.

"The Cat in the Hat."

Childhood and the Web are both fertile grounds for what I call the "instant symbol." Children, perhaps because their minds are generally less cluttered than adult minds, are quick to pick up on visual signs. Children's stories, Disney movies, and Saturday morning cartoons alike create and exploit dozens of instant symbols. Whether it appears on Dr. Seuss's original creation or on a pre-teen on a ski slope, the Cat in the Hat's hat is instantly recognized, as are such other symbols and characters as the Ninja Turtles, the Teletubbies, R2D2, the White Rabbit, and the Red Queen.

The Web and the other visual media that feed into it create a similar set of instant symbols by virtue of their ability to repeat images over and over.

Few Americans have ever seen either Saddam Hussein or Monica Lewinsky in person. Yet because of electronic technologies, a cartoon like this one can amuse us. It also shows how these technologies can convert a simple piece of headgear-a black beret---into a potent political statement.

I doubt that anyone in the United States is now able to wear a black beret in public without evoking comments on some of the political subjects alluded to in this cartoon. A similar fate is waiting for any object or image that becomes unwittingly attached to a major public event.

In short, the Web presents a child's eye view of the world, where roles are fluid and partly imaginary, fantastic wishes come true, parental figures are easily overthrown, and the relationships between image and meaning are quickly and easily created.

I don't quite know what to make of this revelation of mine except that biologists point out that human beings are, as a species, somewhat "neotenic." A neotenic species retains into adulthood characteristics that, in related species, belong to childhood. For example, the facial features of adult humans more closely resemble those of baby chimpanzees than those of adult chimpanzees.

There is some prima facie evidence that human beings are neotenic in their social evolution as well. Such evidence includes professional sports and the U.S. Congress, among other things. Childhood and the characteristics of childhood are more and more prized in our culture. If childish behavior is frowned upon, a flexible and childlike mind is often considered a sign of genius. It has also been shown that when two linguistic groups come together, it is the children who spontaneously create the grammar that blends them into a new, united language.

At this point I leave the theme of children's literature and come, at long last, to the part of this paper that actually deals with the theme of this conference: museums and the World Wide Web. Let me start by considering something close to most museums: collecting. Children love to collect. In this, they are usually assumed to be imitating adults. In light of the above, I'd like to suggest that perhaps collecting is yet another neotenic characteristic of humankind.

It is common for the differences among human cultures to be defined by geographic boundaries, or nation-states, or language, or religion. Visual cultures, different ways of looking at the world, are rarely mentioned. Yet, I would argue that in some ways differences in visual cultures are all the more important for being so often overlooked.

Earlier in this paper, I talked about Galileo's discoveries with the telescope. Galileo, in fact, created a new visual culture, one that created deep divisions within the linguistic, national, and religious culture in which he lived. A similar change in visual culture, one with profound implications for global politics, took place when the technology of map-making changed at the dawn of the age of discovery.

Another major change in visual culture, one that led directly to the creation of the modern museum, was Tradescant's Ark. There were actually two Tradescants: John Tradescant the Elder, and his son, John Tradescant the Younger. The Elder, probably not coincidentally, was a near contemporary of Galileo.

As gardener to the first Earl of Salisbury, the Duke of Buckingham, and finally King Charles I, Tradescant the Elder traveled widely in search of plants. In his travels, in addition to plant specimens, he began to collect rare, unusual, or historically significant objects from around the world. Sea captains, ambassadors, and traders traveling overseas gave other items to him. Eventually, the collection grew so large and famous that Tradescant began to display it to the public in his house at Lambeth, which became known as "The Ark."

In some ways, Tradescant can be thought of as the "Galileo of objects," discovering and bringing together things that had not been seen and known before and presenting them to the world at large, so that people could see them and study them for themselves.

John Tradescant the Elder passed on his profession and his collection to his son, John Tradescant the Younger. The Ark continued to be open to the public for a fee. In 1656, a catalogue of the collection was printed, the first such catalogue to be published in England, just as the Ark had been the first museum to admit the public at large.

Elias Ashmole, the Younger Tradescant's friend, collaborator, and heir, gave the collections to Oxford University. In 1683, Oxford opened the Ashmolean Museum, which was built to publicly display the collections. The Ashmolean, of course, still exists today as one of the oldest public museums on earth and in some ways the "mother of all museums." Parts of the Tradescants' eclectic collection, including King Henry VIII's stirrups, Chief Powhatan's mantle, an African drum and trumpet, and a Chinese rhino-horn cup, are still on display there. (Ashmolean, n.d.)

Charlie Gere of Birkbeck College, University of London, has presented a paper that points out the increasing interest in "irrational" cabinets of curiosity such as the one created by the Tradescants. (Gere, 1998) He points out that a number of artists, including Joseph Kosuth, Joseph Cornell, Susan Hiller, and Damien Hirst, refer to such collections in their work. He also maintains that the great inventions of modernism were the museum catalogue and the modern map, both of which began to rationalize and order a rapidly expanding world of knowledge.

Finally, Gere proposes that "the so-called 'irrational' cabinet is an appropriate model for the representation of visual and material culture in digital technology." I find this a fascinating idea and offer it for your consideration. It places museum Web sites in a peculiarly important position, as a bridge between one dividing line in the creation of visual cultures and a new one. This is a bridge, as I hope I have shown, to territories holding both exciting new discoveries and unexpected dangers.

As you move through this conference, tread softly, but please keep your eyes open.


Ashmolean (n.d.). The Tradescant room [undated pamphlet]. Oxford: Ashmolean Museum.

Benedetti, P. (Ed.) (1996). Forward Through the Rearview Mirror: Reflections on & by Marshall McLuhan. Cambridge, MA: MIT Press.

Bettelheim, B. (1976). The Uses of Enchantment: The Meaning and Importance of Fairy Tales. New York: Knopf.

Costache, I. D. (1998) The work of art (historians) in the age of electronic (re) production. Paper presented at the Fourteenth Annual Conference, Computers and the History of Art, Victoria and Albert Museum, London.

Drake, S. (Ed. & Trans.) (1957). Discoveries and Opinions of Galileo. Garden City, NY: Doubleday and Company.

Ellison, H. (1979) I have no mouth, and I must scream. In H. Ellison The Fantasies of Harlan Ellison (Rev. Ed.) Boston: Gregg Press.

Gere, C. (1998) Hypermedia and emblematics. Paper presented at the Fourteenth Annual Conference, Computers and the History of Art, Victoria and Albert Museum, London.

Hanson, N.R. (1967) Nicholas Copernicus. In P. Edwards (Ed.) The Encyclopedia of Philosophy. New York: Macmillan.

Keller, H. (1998) I go adventuring. In P. Lopate (Ed) Writing New York: A Literary Anthology. New York: The Library of America.

Koestler, A. (1967) Johannes Kepler. In P. Edwards (Ed.) The Encyclopedia of Philosophy. New York: Macmillan.

Lewis, C.S. (1988) The Chronicles of Narnia (7 Vols.). New York: Macmillan

McClintock, D. (1998). Matt Drudge, town crier for the new age. Brill's Content, November.

McLean’s (1997). Outpatient services brochure. Waltham, MA: McLean’s Hospital.

McLuhan, M. (1962) The Gutenberg Galaxy: The Making of Typographic Man. Toronto: The University of Toronto Press.

Marchand, P. (1998). Marshall McLuhan: The Medium & the Messenger (Rev. Ed.). Cambridge, MA: MIT Press.

Mehta, V. (1998). Remembering Mr. Shawn's New Yorker: The Invisible Art of Editing. New York: The Overlook Press.

Murphy, C. (1996) in conversation.

Photography New York (1998). November-December.

Teilhard de Chardin, P. (1959). Phenomenon of Man. Trans. B. Wall. New York: Harper.

Theroux, P. (1998). Sir Vidia's Shadow: A Friendship Across Five Continents. Boston: Houghton Mifflin Co.

Walsh, P. (1998) The headless curator: art history in the age of universal access. Paper presented at the Fourteenth Annual Conference, Computers and the History of Art, Victoria and Albert Museum, London, forthcoming in the CHArt Journal.