The University of Chicago Magazine
TOP OF HIS GAME
Endorphins alone don't explain how Wos persevered through an almost disastrous undergraduate experience at the U of C. Wos admits he hadn't even thought of college when, at age 14, he was encouraged by his homeroom teacher, a Miss Van Vliet, to take the University of Chicago's college-entrance exam, just to find out what it was like. ("I was bad in the vocabulary," he says of his exam results, "but deadly with the number sequences.") Wos was amazed to be not only accepted but awarded a full scholarship.
He was summoned to campus for "what they called a general medical exam, and it must have been shocking for the person because I walked in, I'm blind, I look like I'm 10 years old. One of the questions he asked was what about my social life. `Let's see: I'm skinny, short, blind, and poor,' I told him. `I don't have one.'" Wos and the examiner concurred that it might be a good idea for him to take more time to grow emotionally and physically before tackling the U of C.
Wos returned two years later, a few inches taller but without a much clearer idea of what to expect. He was, he discovered, the only blind student on campus. Few of the assigned materials were available in Braille, and no one was provided to read him the materials that weren't, although a proctor did read him the exams.
"When I think back," he says, "I wonder why the hell I just didn't flunk out. My competitive nature, I suppose." Getting mostly C's, Wos passed his required bachelor's courses, and decided to get his master's in a to-be-determined field. When a professor told him the desire to have better relationships with girls was probably not sufficient cause to pursue a psychology degree, he picked math. "I knew [it] was based on arithmetic, and I was good at arithmetic. I used to race the adding machines at the bookstore, and I always won."
Wos quickly learned that Chicago's math department "wasn't about numbers. It was about abstract mathematics. Groups and rings and fields and topological spaces and set theory." Being blind placed Wos further into uncharted territory. "They didn't even have a good Braille code for mathematics. So we invented our own. And I got through. I got great grades."
Two professors, in particular, took a liking to him: Paul Halmos and I.J. Kaplansky, "who will probably go down in the history of mathematics as one of the century's greatest," says Wos. Both challenged him to use the mental gifts he'd previously taken for granted.
When Wos finished his master's, Halmos offered him some advice: "In my judgment, do not do all your work at the same institution." At Halmos's suggestion, Wos enrolled at the University of Illinois at Urbana-Champaign (the fabled birthplace of the HAL 9000) for his doctoral work.
At Illinois, the famed algebraist Reinhold Baer agreed to take him on as a student. "I was treated like a star," Wos says, "because anyone trained at the University of Chicago was considered so damned smart. Chicago's was the best department around."
The "star" soon found he had a questionable future. Although "the whole department was behind me," his application for a teaching assistantship was rejected by a dean who, without ever meeting Wos, decided a blind man couldn't handle the job. Recently married and determined to become employable, Wos called the chair of Illinois's computer-science department on a hunch. Despite the fact he hadn't taken even one course in computing, he told the chair, "I'd like to learn about programming and run some jobs."
Intrigued when Wos made the unprecedented claim that no "debugging" would be necessary for his programs because he didn't intend to make any errors, the chair told Wos he could start right away. "We kept attacking problem after problem, and after a couple of weeks I told him, `Sir, something's missing. You haven't told me about my errors.'" It took a few more weeks, Wos says, for the chair to finally admit he'd found no errors. Wos loved that.
Suddenly, the "skinny, short, blind, and poor" kid was hot property. It was 1957, when companies investing in computers "began to realize they were going to need people who understood mathematics also." Without much effort, Wos got offers for interviews in Texas and California, but the chance to come back to Chicago proved irresistible. He accepted a job at Argonne, assuming it would involve writing and debugging programs. The division director thought his talent might be better used as a consultant to "interface" between the laboratory's scientists and its programmers. "I designed algorithms and methodology for doing this, and I was doing okay." He'd soon be doing better.
Attempts to use computers to prove mathematical theorems had been made since the late 1950s, with scant success. When Wos entered the field in 1963, he did so at the suggestion of his division director, William F. Miller, who rightly deduced that Wos was numbingly bored by his regular work.
After reading one paper on the subject, Wos immediately sensed why the results were so paltry. Researchers had plugged all the facts about a particular theorem into their computers and then just let them run. Without any guidance, the computer examined every possibility, even those that weren't important to the overall solution. Conclusions begat more conclusions until, generations of conclusions later, the computer was sorting through a population explosion of information, most of which pertained only indirectly and non-essentially to the original problem. "The computer was attacking the problem," says Wos, "but it wasn't attacking the problem effectively."
It's as if a stranger to Chicago were assigned the task of getting from 57th and Cottage Grove to State and Jackson, but wasn't given a compass, map, or directions. True, she might get lucky, but chances are greater she'd wind up lost.
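The blow-up is easy to see in a toy setting. The sketch below is purely illustrative (the names `unguided_closure` and `combine`, and the implication-chaining rule, are invented for this example, not drawn from Wos's actual programs): every pair of known facts is allowed to combine each generation, and everything is kept.

```python
from itertools import combinations

def unguided_closure(facts, combine, generations=3):
    """Unguided forward inference: every pair of known facts is
    combined each generation and every conclusion is kept, so the
    fact set grows combinatorially."""
    known = set(facts)
    sizes = [len(known)]
    for _ in range(generations):
        new = set()
        for x, y in combinations(known, 2):
            new.update(combine(x, y))
        known |= new
        sizes.append(len(known))
    return sizes

# Toy rule: implications (a, b) and (b, c) chain into (a, c).
def combine(x, y):
    out = []
    for p, q in ((x, y), (y, x)):
        if p[1] == q[0]:
            out.append((p[0], q[1]))
    return out

axioms = [(i, i + 1) for i in range(1, 20)]  # a chain 1 -> 2 -> ... -> 20
sizes = unguided_closure(axioms, combine)
```

Even this tiny problem balloons from 19 facts to well over a hundred in three rounds, most of them intermediate chains that no particular goal ever asked for.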
Wos the gambler didn't like long-shot odds. What was needed was a way to allow the computer to sort the relevant data from the red herrings. To do that, it would need to be taught strategy. Wos's gaming experiences had taught him that "playing according to the rules without evaluating the consequences usually loses the money. Strategy is essential for winning."

Wos applied two basic strategies to what was then called mechanical theorem-proving. In the first, he merely had the program direct its reasoning based on which facts were syntactically simplest.
The second, far more powerful, strategy involved restricting the program's reasoning. The goal here was to "figure out a way to tell the program, look, you must divide the information into three classes--general information, special information, and the information that pertains to the actual goal--and always when you draw a conclusion make sure it's never just based on general information alone." Wos's innovation transformed automated reasoning from an interesting sideline into a legitimate field of research--acknowledged in 1983 when the American Mathematical Society awarded Wos and colleague Steve Winker its first-ever prize for achievement in automated theorem-proving.
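The restriction Wos describes is essentially what became known as the set-of-support strategy. A minimal Python sketch of the idea follows; the function names (`saturate`, `chain`) and the toy implication-chaining domain are invented for illustration, not Wos's actual code.

```python
from collections import deque

def saturate(axioms, goal_support, infer, max_steps=1000):
    """Saturation loop with a set-of-support restriction: every
    inference must use at least one fact descended from the
    goal-related set, so general facts never combine purely
    with each other."""
    usable = set(axioms)       # general and special information
    sos = deque(goal_support)  # information pertaining to the goal
    derived = set(goal_support)
    while sos and max_steps:
        given = sos.popleft()  # always reason from a supported fact
        usable.add(given)
        for other in list(usable):
            for c in infer(given, other):
                if c not in derived and c not in usable:
                    derived.add(c)
                    sos.append(c)  # conclusions inherit support
        max_steps -= 1
    return derived

# Toy rule: implications (a, b) and (b, c) chain into (a, c).
def chain(x, y):
    out = []
    for p, q in ((x, y), (y, x)):
        if p[1] == q[0]:
            out.append((p[0], q[1]))
    return out

axioms = {('a', 'b'), ('b', 'c'), ('c', 'd')}
facts = saturate(axioms, [('d', 'goal')], chain)
```

Because every conclusion must descend from the goal-related fact `('d', 'goal')`, the program derives only the chains that feed the goal; side conclusions such as `('a', 'c')` are never generated at all.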
In teaching computers to prove theorems, Wos thought he'd learn something by asking mathematicians how they performed the same task. Curiously, not one could describe the process to him in any precise way; the solving of a problem came mostly as a flash of inspiration. Wos himself recalls how he and Argonne colleague Brian Smith worked with no progress for months in the mid '70s on a question in Jordan algebra. Then he awoke one morning with a start. "Ohhhhh," he recalls exclaiming, "I know how to do the whole damn problem." Seemingly out of nowhere, Wos's insight led to an entirely new type of arithmetic, called quad arithmetic, that enabled automated-reasoning programs to tackle in five minutes problems that had previously required 9.5 hours.
Though armed with impressive methodological tools, Wos and his colleagues spent years of trial-and-error before taking on automated reasoning's Great White Whale: the Robbins conjecture.
First posed by Herbert Robbins at Harvard University in the 1930s, the problem is in an area called Boolean algebra, a mathematical model of some of the basic rules of logic and thought. The conjecture Robbins proposed was that, in mathematical terms, three particular equations were powerful enough to capture all of the laws of Boolean algebra. The renowned logician Alfred Tarski worked on the problem, included it in a book, and handed it out to his Berkeley graduate students, but Wos didn't hear about it until 1979, when his colleague Steve Winker brought it to his office.
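Concretely, the three equations (writing \(\lor\) for the Boolean join and \(n\) for complementation) are the familiar laws of commutativity and associativity plus what is now called the Robbins equation:

```latex
\begin{align*}
x \lor y &= y \lor x \\
(x \lor y) \lor z &= x \lor (y \lor z) \\
n\bigl(\, n(x \lor y) \lor n(x \lor n(y)) \,\bigr) &= x
\end{align*}
```

The conjecture asserts that any algebra satisfying these three equations must be a Boolean algebra.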
Wos suggested Winker "strengthen the hypotheses" by adding various Boolean conditions that, if true, would solve the problem. Winker asked what such an attack would tell him as a mathematician. "Nothing," Wos replied. "But as a gambler it tells you a lot." With the help of automated-reasoning programs, Wos's backdoor approach yielded many such conditions, indicating the conjecture was probably true, but it remained unsolved.
Despite the failure, Wos remained bullish that automated reasoning would one day crack a major problem like the Robbins conjecture. Bigger, faster computers helped, but far more important, says Wos, were the "brilliant strides" being made by his Argonne colleagues Ross Overbeek and William McCune "in designing computer programs that could do much more sophisticated things": programs based on powerful strategies that Wos himself developed.
Still, McCune startled Wos by bringing up the Robbins problem again last September, asserting, "I think we can get it." McCune suspected that a new program he'd developed called EQP (for "equational prover") just might do the trick, but confesses he was as amazed as anyone when, eight days later, the computer spewed out a proof. Hand-checking by McCune and several outside mathematicians confirmed that it was indisputably correct.
Although Robbins himself, now 81, told the New York Times he was thrilled with the news, other mathematicians, Wos admits, have taken a defensive posture. "People don't want to think any machine can do something they can't do. They don't want to feel like they're becoming obsolete. They want to do it themselves."
But Wos has never seen EQP and other automated-reasoning programs as replacements for mathematicians. Their potential, he believes, is as loyal assistants, willing and able to do researchers' grunt work, "and sometimes much more," while researchers pursue more stimulating questions. Wos was pleased when his old U of C mentor, I.J. Kaplansky, conveyed precisely this thought to him in the late '70s, predicting that automated reasoning would give 21st-century mathematicians the power to tackle problems so sophisticated that a mathematician from today's world would likely find the work incomprehensible.
The potential of automated reasoning, Wos predicts, will grow far beyond the realm of mathematics. One reason why Wos successfully campaigned to change the name of his field from automated theorem-proving to the more general automated reasoning was to get across the point that "we don't just prove theorems. We look at conjectures, we design circuits, we solve puzzles, we prove properties of other programs." He proudly notes that a University of Texas group recently unveiled a computer-chip design done exclusively by a computer using automated reasoning. With progress likely to continue, couldn't something like a HAL 9000 be just around the corner?
Doubtful, says Wos. But, he conjectures, there may soon be hand-held "thought processors" using automated reasoning to help humans avoid such logistical errors as scheduling two meetings at the same time, or designing a building but forgetting to add waste disposal. In any event, he's happy to leave the challenge of developing HAL-like computers that recognize speech or express emotion to his colleagues in artificial intelligence: None of it, he says, has much to do with automated reasoning.
Which is, he argues, precisely its power. Ever the gambler, Wos is willing to bet that automated reasoning will deliver far more than artificial intelligence, and in a relatively short period of time, precisely because it avoids all attempts to emulate the mysteries of the human mind. "Anyway, why would you want to program a computer to be vicious, crabby, selfish, and inconsiderate," Wos asks, "when humans do all of those things so very well?"
Return to April 1997 Table of Contents