Towards Teleportation, Time Travel and Immortality
Raj Reddy
ACM 50th Anniversary Conference
March 5, 1997
Introduction by James Burke
I was going to say that our next speaker is going to take another way-out look at things, but having heard Bruce, let's say relatively way-out. He came to America in 1963 from his native India, via another degree in Australia, to earn a doctorate in computer science. After teaching for a while at Stanford, he moved to Carnegie Mellon, where he was named a professor in 1973. He is now the Herbert A. Simon University Professor of Computer Science and Robotics. And he is recognized worldwide for his work on speech recognition and as the founder of the Carnegie Mellon Robotics Institute, which he ran until he took up his present position as dean of the School of Computer Science there. He’s a member of the National Academy of Engineering and the American Academy of Arts and Sciences. He was president of the American Association for Artificial Intelligence from 1987 to 1989. In 1984, he was awarded the French Légion d’honneur for his work on bringing advanced technology to developing countries, and he was awarded the ACM Turing Award in 1995. His ongoing interest is in human-computer interaction. He has projects running at the moment on speech recognition, multimedia collaboration techniques, just-in-time lectures and an automated machine shop. With a background like that, it’s all the more interesting that he should choose to talk about something like teleportation, time travel and immortality. I think it promises to tickle the fancy. Ladies and gentlemen, Raj Reddy.
Raj Reddy
As we look forward to the next fifty years, it is interesting to note that when the Association for Computing Machinery was being formed fifty years ago, the sense of excitement was no less palpable than it is today. Vannevar Bush had proposed MEMEX with hyperlinks between documents. Turing, having successfully broken the German code using a special-purpose digital computer, proposed the construction of a universal computing engine called ACE. John von Neumann had recently formalized the idea of a stored-program computer. Eckert and Mauchly had created ENIAC, the first electronic digital computer in the U.S. There’s no question that the last fifty years have been exciting, dramatic and, in many ways, full of unanticipated events which have changed our lives.
What will the next fifty years bring? Given the continuing exponential rate of change, it is reasonable to assume that the next fifty years will be even more dramatic than the last hundred years. When you recall that a hundred years ago there were no cars and no highways, no electric utilities, no phone system, no radio or TV, and no airplanes, you can well imagine the magnitude of the change that awaits us!
In this talk, I’d like to share my thoughts on how our dreams about teleportation, time travel and immortality are likely to be realized. One of our most compelling, enduring fantasies of the future has been Star Trek, where the themes of teleportation, time travel and immortality have captured the imagination of generations. Will technology make this possible in the next fifty years? We’ve heard several possible futures in the last two days. I’d like to provide you with one more.
Technology over the next 50 years
By the year 2000, we can expect to see a giga-PC: a billion operations per second, a billion bits of memory and a billion bits per second of network bandwidth, available for less than two thousand dollars. Barring the creation of a cartel or some unforeseen technological barrier, we should see a tera-PC by the year 2015 and a peta-PC by the year 2030--well before 2047.
The question is, what will we do with all this power? How will it affect the way we live and work? Many things will hardly change; our social systems, the food we eat, the clothes we wear and our mating rituals will hardly be affected. Others, such as the way we learn, the way we work, the way we interact with each other and the quality and delivery of health care, will undergo profound changes. First and foremost, we can hope that Microsoft will use some of this computing power to create computers that never fail and software that never needs rebooting. And yes, I can do without the leisure I get during the boot and shutdown times of Windows 95, thank you.
The improvement in secondary memory will be even more dramatic. Many of you know that while the processor and memory technologies have been doubling every twenty-four months or less, disk densities have been doubling every eighteen months or so, leading to a thousandfold improvement every fifteen years. Today, you can buy a four-gigabyte disk memory for less than four hundred dollars. Four gigabytes can be used to store about ten thousand books of five hundred pages each--larger than most of our personal libraries at home. By the year 2010, we should be able to buy four terabytes for about the same price. At that cost, each of us can have a personal library of several million books, a lifetime collection of music and a lifetime collection of all our favorite movies thrown in--on our home PC. What we don’t have on our PC will be available at the click of the mouse from the universal digital library containing all the authored works of the human race.
If you choose to, you will be able to capture everything you ever said, from the time you are born to your last breath, in less than a few terabytes. Everything you ever did and experienced can be stored in less than a petabyte. All of this storage will only cost you a hundred dollars or less by the year 2025.
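The arithmetic behind these storage claims is easy to check. Here is a back-of-envelope sketch in Python; the eighteen-month doubling period is from the talk, while the speech bitrate, hours of talking per day and lifespan are my own illustrative assumptions:

```python
# Disk density: doubling every 18 months over 15 years.
DOUBLING_MONTHS = 18
YEARS = 15
doublings = YEARS * 12 / DOUBLING_MONTHS   # 10 doublings
growth = 2 ** doublings                    # ~1024x, i.e. "thousandfold"

# Lifetime speech capture (assumed: 16 kbit/s compressed speech,
# 3 hours of talking per day, an 80-year life).
BYTES_PER_SEC = 16_000 / 8
bytes_per_day = BYTES_PER_SEC * 3 * 3600
lifetime_tb = bytes_per_day * 365 * 80 / 1e12

print(f"density growth per {YEARS} years: {growth:.0f}x")
print(f"lifetime speech: {lifetime_tb:.2f} TB")
```

Under these assumptions, a lifetime of speech comes to well under a terabyte, comfortably within the "few terabytes" quoted above.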
So how will all this affect our lives? We’ve heard a number of scenarios for the future in the past few days. I’d like to share some of my dreams on how this technology will be used to save lives, provide education and entertainment on a personalized basis, provide universal access to information and improve the quality of life for the entire human race.
The first invention that will have a major impact on society will be the accident avoiding car. Let us look at the current state of this technology.
Video of Navlab narrated by Dr. Charles Thorpe. The Carnegie Mellon Navlab Project brings together computer vision, advanced sensors, high-speed processors, planning and control to build robot vehicles that drive themselves on roads and cross-country. The project began in 1984 as part of ARPA’s Autonomous Land Vehicle program--the ALV. In the early ‘80s, most robots were small, slow, indoor vehicles tethered to big computers. The Stanford Cart took fifteen minutes to map obstacles, plan a path and move each meter. The CMU Imp and Neptune improved on the Cart’s top speed, but still moved in short bursts separated by long periods of looking and thinking. In contrast, ARPA’s ten-year goals for the ALV were to achieve eighty kilometers per hour on roads, and to travel long distances across open terrain.
With the Terragator, our first outdoor robot at CMU, we began to make fundamental changes in our approach. The Navlab, built in 1986, was our first self-contained test bed. It had room for onboard generators, onboard sensors, onboard computers and, most importantly, onboard graduate students. The next test bed was the Navlab II, an army ambulance HMMWV. It has many of the sensors used on earlier vehicles, plus cameras on pan-tilt mounts and three aligned cameras for trinocular stereo vision. The HMMWV has high ground clearance for driving on rough terrain and a one hundred and ten kilometer per hour top speed for highway driving. Computer-controlled motors turn the steering wheel and control the brake and throttle.
Perception and planning capabilities have evolved with the vehicles. ALVINN is the current main-road-following vision system. ALVINN is a neural network that learns to drive by watching a human driver. ALVINN has driven as far as a hundred kilometers and at speeds over a hundred and ten kilometers per hour. Ranger finds paths through rugged terrain. It takes range images, projects them onto the terrain and builds Cartesian elevation maps. Smarty and D* find and follow cross-country routes. D* plans a route using A* search. As the vehicle drives, Smarty finds obstacles using Ganesha’s map, steers the vehicle around them and passes the obstacles to D*, which adds the new obstacles to its global map and replans the optimal path.
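The route planner described above rests on A* search. A minimal sketch of A* on an obstacle grid is below; this is my illustration, not the Navlab code, and D* extends the idea with incremental replanning as new obstacles are discovered while driving:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; 1 = obstacle, 0 = free."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: an admissible heuristic on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Priority queue of (f = g + h, g, position, path so far).
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (0, 2)))  # route around the wall of obstacles
```

The admissible heuristic lets A* expand far fewer cells than uninformed search while still returning a shortest path; D*’s contribution is repairing that path cheaply when the map changes mid-drive.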
Currently, Navlab technology is being applied to highway safety. In a recent trip from Washington, D.C. to San Diego, the Navlab 5 Vision System steered autonomously more than ninety-eight percent of the way. In a driver-warning application, the vision system watches as a person drives and sounds an alarm if the driver falls asleep and the vehicle drifts off the road. The same autonomous navigation capability is a central part of the automated highway system, a project that is building completely automated cars, trucks and buses. Automated vehicles will improve safety, decrease congestion and improve mobility for the elderly and disabled.
Every year, about forty thousand people die in automobile accidents, and the annual repair bill is about fifty-five billion dollars! Even if this technology helps to eliminate half of these accidents, the savings would pay for all basic research in information technology that has been done since the founding of ACM fifty years ago.
Towards Teleportation
The second area of major potential impact on society is telemedicine. Remote medical consultation is already beginning to improve the quality of care for people located in remote areas. With increased bandwidth and computational capabilities, it will become possible to perform 3-D visualization, remote control of microrobotic surgery and other sophisticated procedures. It’s not quite teleportation in the classical sense of Star Trek, but consider the following: If you can watch the Super Bowl from the vantage point of a quarterback in the midfield, or repair a robot that has fallen down on the surface of Mars, or perform telesurgery three thousand miles away, then you have the functional equivalent of teleportation--bringing the world to us, and bringing us to the world, atoms to bits. Let us look at some recent advances in 3-D modeling and multibaseline stereo theory that are essential for being able to do these functions. Can we show this short video please?
Video of 3-D modeling narrated by Dr. Takeo Kanade. A real-time, 3-D modeling system using multibaseline-stereo theory has been developed by Professor Takeo Kanade and other researchers at Carnegie Mellon University. The virtualized reality studio dome is fully covered by many cameras from all directions. The range, or depth, of every point in an image is computed using the same multibaseline-stereo algorithm used in the video-rate stereo machine. The scene can be reconstructed from the depth and intensity information by placing a virtual, or soft, camera at the front, the left, the right or the top, or by moving the soft camera as the user moves freely. For this baseball scene, we can create a ball’s-eye view. A one-on-one basketball scene has also been virtualized from a number of viewpoints.
Currently, this system requires about a teraflop of computation for the 3-D reconstruction of the basketball scene at video rate. Instrumenting a football field with a dome consisting of ten thousand high-definition cameras will require twenty petaflops of computation and a hundred gigabytes per second of bandwidth to transmit the 3-D model.
Universal access to information and knowledge
Another area that will have a major impact on society will be the creation of a digital library. We already have access to a broad base of information through the Web, but it is less than one percent of all the information that is available in the archives. We can envision the day when all the authored works of the human race will be available to anyone in the world instantaneously. Not just the books, not just the journals or newspapers on demand, but also music, paintings, and movies. Once you have music on demand, you can throw away all of your CDs and just use the Web to access anything you want. You may just have to pay five cents each time you listen to it--that could be the way it works. This will, in turn, lead to a flood of information competing for the scarce resource of human attention. With the predictable advances in summarization and abstraction techniques, we should be able to see Gone With The Wind in one hour or less, and the Super Bowl in less than a half hour and not miss any of the fun, including the conclusion in real time.
Besides providing entertainment on demand, we can expect the Web to provide learning and education on an individualized basis. The best example of this is demonstrated by the reading tutor, which provides help to students who might otherwise run the risk of growing up illiterate. Can we show the next videotape please?
Video of the LISTEN Project narrated by Dr. Jack Mostow. Illiteracy costs the United States over 225 billion dollars annually in corporate retraining, industrial accidents and lost competitiveness. If we can reduce illiteracy by just twenty percent, Project LISTEN could save the nation over 45 billion dollars a year.
At Carnegie Mellon University, Project LISTEN is taking a novel approach to the problem of illiteracy. We have developed a prototype automated reading coach that listens to a child read aloud and helps when needed. The system is based on the CMU Sphinx II speech-recognition technology. The coach provides a combination of reading and listening, in which the child reads wherever possible, and the coach helps wherever necessary -- a bit like training wheels on a bicycle.