So, what does it mean? Well, let's look at the equation more closely. (Relax, it won't hurt.) On stage left, we have E. E stands for energy, and energy is the ability to move things. When Sachin Tendulkar's bat strikes the ball, we say it has a lot of energy because it can move something—in this case that poor battered ball. Meanwhile, to the right we have m, for mass, which you can think of roughly as weight. Mass is multiplied by c², the square of the speed of light. Ignore the distracting speed of light for a moment. What the equation is telling us, pure and simple, is that energy and mass can be equated, converted into each other—like dollars and euros.
This was a profound revelation. Here's why. We are used to thinking of fast-moving objects (an express train, a rocket) as highly energetic. Similarly, we had previously thought that an object that had come to a standstill had used up all its energy. But the equation says that there's actually a vast untapped reservoir of energy left, stored up in the mass itself. It's as if we've discovered that a gas tank that we thought was empty in fact holds a secret reserve.
Now, because light travels so absurdly fast—seven rounds of the earth in one second flat—mass is multiplied by a huge number, in our usual units, when converted to energy. In terms of our analogy, it's as if the conversion is not between dollars and euros but between dollars and some tinpot currency, like the old Turkish lira. Just as you could get over a million Turkish liras for one US dollar, you can get a lot of energy for a tiny bit of mass. And so it is: the devastation unleashed by the Hiroshima atom bomb came from converting less than a single gram of mass into energy.
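That exchange rate can be checked with a line of arithmetic. A minimal sketch in Python, using rounded figures for the speed of light and the Hiroshima yield (both values are assumptions for illustration):

```python
c = 3.0e8    # speed of light in metres per second (rounded)
m = 0.001    # one gram, expressed in kilograms

E = m * c**2          # Einstein's E = mc^2
print(E)              # 9e13 joules from a single gram

# The Hiroshima bomb released roughly 6e13 joules, so converting
# less than a gram of mass is indeed enough for that devastation.
```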
Nor is it all nuclear. Every breath you take, every move you make, involves E=mc². As you turn this page, a minuscule amount of your mass is used up to provide the energy for that action. As you blink, some more mass is converted. To power your heartbeat, it's again the omnipresent E=mc² in action.
Beautifully, the very equation that destroys also holds the key to creation. Indeed, what appeared in Einstein's paper was not E=mc² but m=E/c². Although that's an equivalent equation, the emphasis is rather different. Whereas E=mc² tells us how much energy is released when mass is destroyed, m=E/c² tells us how much mass can be created from energy. Creation and annihilation, in one.
Einstein was after creation, not annihilation. He wanted to know where mass came from. And in his glorious equation an important part of the answer lay revealed: mass comes from energy. After the Big Bang, our universe was filled with intense hot radiation. Magically, that energy turned itself into matter, using m=E/c², matter that would later become you and me. Indeed, the equation also applies to you in another way. As you may know, your atoms are made up of protons, neutrons and electrons. And the protons and neutrons in turn are made up of tiny particles called quarks. But here's the funny thing: the individual quarks themselves weigh almost nothing. So then where does your weight come from? The answer: it comes from the energy of the interacting quarks. It is the quarks' energy that, through m=E/c², gives you your mass. Think of quarks the next time you step on a weighing scale.
As for nuclear weapons, this is what Einstein, an ardent pacifist and a deep admirer of Mahatma Gandhi, had to say: "I made one great mistake in my life when I signed the letter [...] recommending that atom bombs be made." Creation, and not annihilation.
Sixteen centuries after these musings, cosmology has become a precision science. On Sunday, the lead astronomers for a satellite called COBE will be awarded the Nobel prize in physics for work that all but confirmed the Big Bang theory of the universe.
This view of the universe starts with the astronomer Edwin Hubble. In 1929, peering through his telescope, Hubble noticed that other galaxies outside our own Milky Way were not randomly moving about, as you might expect, but were mostly going away from us. Not only that, but the farther the galaxies were, the faster they were speeding away. Hubble's observation was astounding: it meant that the universe was expanding. Here's why. Imagine a sheet, a huge infinite sheet, made of some stretchy material, say spandex. On our sheet there's a printed design of a grid, an endless tic-tac-toe pattern of squares. Now place a little ant at the corner of each square and then gently stretch the sheet in all directions. As the squares get stretched bigger, each ant sees all the other ants getting stretched away.
What does this have to do with Hubble's observation? Well, the ants are galaxies and the stretchy sheet corresponds to expanding space. Every ant sees its neighbours getting farther away. What's more, every ant sees that the more distant ants are moving away faster than the nearby ants because there is more expanding space in between. That is exactly what Hubble observed with his galaxies. So space—the universe—must be expanding. Also, as all ants are equivalent, every ant is entitled to regard itself as being at the centre of the expansion. So you can either think of yourself as an irrelevant point, one of infinitely many ants, or you can imagine yourself as the centre of the universe. (I know a few people who think of themselves like that.) Both are valid viewpoints.
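The ant picture is easy to simulate. A small sketch (the 10% stretch rate is an arbitrary assumption) showing that each ant's recession speed, as seen from the ant at the origin, grows in proportion to its distance:

```python
# Place ants along one line of the spandex grid, stretch all distances
# by the same factor, and measure how fast each ant recedes from ant 0.
positions = [0.0, 1.0, 2.0, 3.0]   # ants at grid corners
stretch_per_second = 1.1           # the sheet grows 10% each second

new_positions = [x * stretch_per_second for x in positions]
speeds = [new - old for new, old in zip(new_positions, positions)]

for x, v in zip(positions, speeds):
    print(f"ant at distance {x}: receding at {v:.1f} per second")

# The ant at distance 2 recedes twice as fast as the ant at distance 1,
# and the ant at distance 3 three times as fast: exactly Hubble's law.
```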
The expansion of the universe has a striking implication. If the squares are continually getting bigger, then at some earlier time they must have been smaller. So, going farther and farther back in time, the squares must have been ever smaller, ever more compressed, until everything was crammed together. That moment of ultimate compression is what we call the Big Bang. Out of that moment, some 13.7 billion years ago, rushed forth all the matter in existence, swept out by expanding space, as if in an explosion, bang!
The Big Bang theory also says that the universe was once a fiery inferno. As space expanded, this fireball cooled. Today all that's left of that initial blaze is a fading afterglow, a cold faint light invisible to the naked eye. But the light is not undetectable; indeed, you can detect it yourself with an old TV. When your TV is not tuned to any station, the screen shows a blizzard of white dots. About one per cent of that blizzard is in fact a signal that comes to you straight from the afterglow of that ancient fireball, the Big Bang. COBE, the satellite for which the Nobel prize is being awarded, was specifically designed to detect this signal, this whisper from the distant past. It was a grand success. Fifteen years in the making, COBE confirmed the Big Bang theory after only nine minutes of operation.
Was there anything before the Big Bang? Unfortunately, neither theory nor experiment tells us how the universe came to be, only what happened after the universe already was. The Big Bang theory is a tale of the infancy and maturity of a baby universe, not of the baby's birth. All our current theories break down at the moment of the Big Bang. It's very frustrating. It's as if we're watching a detective story whose reel snaps tantalisingly just before the mystery is revealed. So Augustine's inquiry into what happened before the creation, and whether there even was a "before" then, is a question that remains unanswered, for now.
From Rahel's point of view, she has travelled twenty years into the future. It's stranger than fiction. But true: in 1971, ultra-precise atomic clocks flown on jet planes around the world came back shifted by 59 billionths of a second compared to clocks on earth; the clocks had been transported ever-so-slightly into the future. And after spending nearly two years whizzing around the earth aboard the Mir space station, Russian cosmonaut Sergei Avdeyev has returned to a planet that has aged one-fiftieth of a second more than him; from his point of view, he has travelled one-fiftieth of a second into the future. Time travel to the future is not only possible, it has been done.
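That one-fiftieth of a second can be checked on the back of an envelope. A sketch assuming a typical low-orbit speed of about 7.7 km/s and a round two years aloft, keeping only the special-relativistic slowdown (gravitational effects, a smaller correction, are ignored):

```python
c = 3.0e8                     # speed of light, m/s (rounded)
v = 7.7e3                     # typical orbital speed of Mir, m/s (assumed)
t = 2 * 365.25 * 24 * 3600    # about two years, in seconds

# For v much smaller than c, the time lag is approximately t * v^2 / (2 c^2)
lag = t * v**2 / (2 * c**2)
print(round(lag, 3))          # about 0.02 seconds, one-fiftieth of a second
```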
Slowing down your clock relative to others is one thing, but can you make it run backwards? What many of us would like to do is to return to the past and, preferably, change it. We'd like to go back and invest in that meteoric stock before it took off rather than after it had peaked, choose A rather than B as the answer to that exam question we got wrong, deliver the perfect comeback to an insult rather than just standing there speechless. And if only we could use the wisdom of our adulthood to put right the folly of our youth. Is this possible? Perhaps. The trick is to travel faster than light. As Einstein showed, time runs slower as you go faster. At exactly the speed of light, time comes to a stop. And if you could move even faster than light then, according to the theory of relativity, you could actually move backwards in time compared with stay-at-home types. But there's a catch: the same theory also states that, unfortunately for would-be time travellers, it's impossible to cross the speed of light.

There might, however, be a way out. If we could take a shortcut, we might be able to overtake light despite moving more slowly. Imagine a mountain road race, say the Tour de France. A racer who discovers a hidden tunnel cutting through a mountain would appear to have gone faster than his competitors, who are forced to cycle all the way around the mountain. Similarly, if wormholes—shortcuts through space—exist, one might be able to use them to out-race light, which is our main requirement for time travel. Wormholes are not for the faint-hearted. Some are narrow passages, filled with potentially dangerous forms of energy, while others pass perilously close to the mouths of black holes. One of the uncertainties in this already highly speculative subject is whether wormholes can be traversed at all. This is partly why theoretical physicists have not been able to confirm or rule out the possibility of time machines.
The possibility of time travel suggests all kinds of cause-and-effect paradoxes. Suppose you went back in time and played cards with your grandfather all night on the very evening that he would otherwise have met your grandmother. By obstructing the union of your grandparents you would have prevented your own birth—but then how could you have gone back in time in the first place? Even more perversely, can you go back in time to give birth to yourself, like some temporal ouroboros, the mythical snake that springs from its own mouth? Thinking about the logical consistency of time travel can quickly give you a headache worse than any New Year's Day hangover. So let's go back to the future, to 2007. As for 2006... I'll be back.
This ability to manipulate atoms individually will lead to staggering advances. For example, both diamonds and the lead in your pencil are made of carbon atoms; the vast difference between these two materials comes from how the carbon atoms are arranged. By rearranging atoms, it is possible to turn pencil lead into diamond. But that's nothing. Recently, fantastic carbon configurations called fullerenes have been discovered that are ultra-light and a thousand times stronger than steel. There's now serious talk of using filaments of fullerenes to create a cable for a space elevator. Future satellites, rather than being launched by rocket, might simply ascend this elevator, this stairway to heaven.
As Feynman understood, it is through knowing the laws of science that we can say what is and is not possible. Here's what's possible. It's possible to fly (with some discomfort) from Delhi to London in under 20 minutes. It's possible to create minuscule robots that would be injected through a needle, swim through your veins and, like a surgeon in a submarine, fix your body from the inside, eliminating operations. No scientific law would be violated if such things happened. If they never do, it will be because they're too difficult or too expensive—but not because they're impossible.
From a scientific point of view, it is perhaps more interesting to ask not what is possible, but what is impossible. We know certain things are impossible because the equations of our best scientific theories forbid them. For example, it's impossible to travel faster than light, as that would violate the equations of the theory of relativity. It's virtually impossible for a broken glass to reassemble itself, as that would violate the laws of thermodynamics. There are other things, such as time machines, whose possibility we simply don't yet know. By discovering scientific laws, we set boundaries on what even an infinitely advanced civilisation would be able to do.
Still, a word of caution is due. Physical laws are not the same as mathematical theorems. In mathematics, it is utterly impossible for two plus two to equal five. But scientific laws are only as good as their assumptions. And as we have shed our assumptions, our laws have become more comprehensive, more liberal. Take alchemy. In the days before chemistry, before the atomic elements were known, medieval alchemists tried hard to turn one substance into another. Even the great Isaac Newton spent 30 years persistently, but fruitlessly, trying to turn lead into gold. Not knowing chemistry, Newton did not know that in all chemical reactions the basic elements are unaltered; like Lego pieces, atoms of the elements can be reassembled into different compounds, but the building blocks—the elements themselves—do not change. The laws of chemistry say the transmutation of one element into another—the holy grail of alchemy—is simply impossible.
Impossible? Well, no. The discovery of radioactivity showed that it is, after all, possible for one element to turn into another, uranium being the classic example. As scientific knowledge deepened from chemical to nuclear processes, what had been considered impossible became possible. (Similarly, mass was thought to be conserved, until E=mc² converted it to energy.) The lesson here is not that we should abandon chemistry, but that scientific theories can be limited in scope. Laws have their jurisdiction.
As the purview of science increases, the illegal can become legal. What then is truly impossible? The ultimate laws of science will be determined by a theory of fundamental physics. But without a final all-encompassing theory of physics in hand, it's hard to say which of the present scientific laws will survive. Until then, we can still dream of the impossible.
Because of their one-way nature, black holes raise fascinating questions: What happens to something that falls into a black hole? (It gets torn to bits.) How does a black hole form? (From the corpse of a dead star.) What happens when two black holes collide? (They fuse into a bigger hole.) Recently, despite their invisibility, black holes have even been detected by astronomers. Indeed, an immense black hole called Sagittarius A*, having already devoured the weight of three million stars, has been found hiding at the centre of our own galaxy.
Then, in 1974, the physicist Stephen Hawking discovered something unbelievable: his equations showed there was a way to escape from black holes after all. Hawking's discovery relied on the freakish laws of quantum mechanics, which permit particles to "tunnel", that is, to occasionally pass through otherwise impenetrable barriers. As a result, black holes actually leak a steady stream of escaping particles, now known as Hawking radiation. As more and more particles escape, a black hole must eventually disintegrate. Far from being the ever-growing, ever-fattening brutes we had thought they were, black holes, if not constantly fed, can lose weight and disappear in a puff of Hawking radiation.
This was surprising enough. But Hawking's equations also suggested something else, something deeply troubling: the escaping particles seemed to contain hardly a trace of what made up the black hole. This is alarming because, in physics, nothing is ever lost. Toss this page into a fire, and physics says that from the flicker of the flames, from the way the air currents flow, from the glow of the ashes, from these and a myriad other observations, you could—in principle—reconstruct every single detail of the page: that it was the science column from Outlook, the position of your fingerprint on the page, indeed the position of every molecule. In theory, you could read the page just by staring at the fire.
And yet, for black holes, this most basic tenet—that the past and the future are uniquely connected—is supposedly violated. Particles radiating out of a black hole seem to carry virtually no information about what had fallen into it. Throw in a ton of feathers or a ton of bricks: the particles leaking out of the black hole are the same; there's no sign of what has been destroyed. Once the black hole disappears, all knowledge, all information about its contents is obliterated. Gone.
This may seem an abstract concern, but at stake is the very power of physics to match the future with the past. And so, for thirty long years, theoretical physicists have wrestled with this black hole information puzzle. Some have searched Hawking's particles, in vain, for subtle signs of the black hole's contents. Others seem resigned to the idea that black holes cause the most sacred laws of physics to break down. String theory, a theory that describes all known forces, gives compelling evidence that information is not lost. But where the information is and how it gets out is still a mystery. For now, the case of the vanishing black hole remains unsolved.
Consider the coin toss at the beginning of a match. You might think there's a 50 per cent chance that the coin will land heads up. But that's not truly a matter of chance. If you were to carefully observe the way in which you flicked the coin with your thumb, if you were to account for the size and weight and shape of the coin, you would be able to predict—correctly, with 100 per cent accuracy—just how the coin would land. The seeming randomness of the coin toss comes about merely as a result of our ignorance of the precise details of the toss. Similarly, when we say there's a 30 per cent likelihood of rain, the indefiniteness of that forecast only reflects our incomplete data on the weather. Randomness, like guessing on an exam where you don't know the answer, stems from a lack of knowledge. As the great mathematician Laplace put it, if we knew everything about the present, "nothing would be uncertain and the future just like the past would be present before [our] eyes".
Indeed, the great power of physics is its ability to use knowledge of the present to predict the future. Using observations made today, we can predict the height of the tide at noon tomorrow, the time of the first solar eclipse of the year 4000, even the ultimate fate of the universe. And sure enough, experiments confirm that the world is predictable. Drop a stone a hundred times from the top of the Qutub Minar and every time it will hit the ground in 3.8 seconds. It's not a matter of luck or chance. In fact, virtually all our technology relies crucially on predictability. We can drive our cars and microwave our food with assurance because we are confident that when we use technology—itself a kind of experiment—it will yield the same result it did during its design and testing phase.
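That 3.8 seconds follows from schoolbook mechanics. A sketch assuming a height of about 72.5 metres for the Qutub Minar (an assumed figure) and ignoring air resistance:

```python
import math

# For free fall from rest through height h, the fall time is
# t = sqrt(2h / g).
g = 9.81       # acceleration due to gravity, m/s^2
h = 72.5       # height of the Qutub Minar in metres (assumed)

t = math.sqrt(2 * h / g)
print(round(t, 1))   # about 3.8 seconds, as in the text
```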
Yet there is one phenomenal exception. In the early years of the twentieth century, while exploring the properties of atoms, physicists stumbled upon a strange and utterly different reality. Bewildered and confused, they cobbled together a new theory to explain what they found. This theory—quantum mechanics—has survived its uncertain beginnings to become the most successful theory in physics today. It is a theory like no other. For the central revelation of quantum mechanics is this: the world is ruled by chance. Yes, the microscopic world is random—not just random because of ignorance, but fundamentally random.
For example, if you try to pinpoint the location of an electron of some atom, you may find it in one place one moment but in a completely different place the next moment. Far from staying put or even following a nice, smooth orbit, the electron jerks around frenetically, haphazardly, like a dancer seen under a stroboscope. That's because—so says quantum mechanics—the position of the electron is a matter of chance. There's even a small chance the electron may pop up on the moon.
Quantum mechanics has transformed the way we do physics. Today, using gigantic particle accelerators, we violently smash particles again and again, over and over, in exactly the same way, literally billions of times a minute, for years. But, unlike the stone dropped from the Qutub Minar, we don't get the same result every time. Because quantum mechanics is based on chance, it offers the possibility of a rare lucky discovery, a Nobel-prize-winning jackpot. Einstein, who despised quantum mechanics, scoffed at this mode of inquiry, saying he would rather be "an employee in a gaming house than a physicist".
The philosophical implications of a world run by chance are hard to accept. "God does not play dice with the universe," Einstein famously complained. And why is it that the macroscopic world of big things appears so predictable, so definite, when the underlying microscopic world of atoms is itself random? All this is very disturbing. Ah, but if only the Indian cricket team had been microscopic, they could really have blamed their defeats on quantum mechanical bad luck.
It's natural also to think of time as a dimension. As Hermann Minkowski, Einstein's maths teacher in college, put it, "Nobody has ever noticed a place, except at a time, or a time except at a place." To arrange a get-together with your friends, it's not enough to give them the three spatial coordinates of the location; you also have to say at what time. That's one more number, so chalk up time as the fourth dimension. Actually, there's more to it than that. Imagine the four directions: left-right, back-forth, up-down and past-future. In 1905, Einstein showed that just as what we mean by left and right depends on which way in space we are facing, what we mean by past and future depends on which way in space-time we are facing. That may not be obvious but, by uniting space and time, it does justify our inclusion of time as another dimension.
So we have four so far. What about the fifth? Our minds cannot directly visualise more dimensions. But luckily, there's a clever trick we can use to aid our imagination. A fourth dimension of space (that is, a fifth dimension of space-time), sometimes called hyperspace, must appear to us 3D beings much as a third dimension would appear to a 2D creature. To imagine what hyperspace would look like to us, imagine what we look like to a flat amoeba living on a 2D sheet of paper. (This would have to be a most intelligent amoeba, of course.) For example, where is hyperspace? Answer: it's all around us, just as 3D space is all around a 2D piece of paper. Is it different from ordinary space? No, it's just another direction. What would a being from hyperspace be able to do? It would have the god-like ability to reach into your body without going through your skin, in the same way that you can touch the centre of the 2D amoeba without going through its edge, simply by raising your finger off the paper.
Well, that's all fine in the abstract, but does hyperspace really exist? Nobody knows for sure, but for close to a century physicists have seriously contemplated its existence. String theory—our best theory for describing matter, forces, space and time—boldly asserts that there are as many as eleven space-time dimensions! These 11 would include not only our familiar three space and one time but also an additional seven dimensions of space, all waiting to be discovered.
If they're there, the obvious next question is why we haven't seen them yet. There are two possible explanations. One is that hyperspace may have only the tiniest extent. Just as our amoeba might mistakenly think that a piece of paper is purely two-dimensional only because its third dimension—along its thickness—is so limited, we might simply have failed to observe that all around us is a thin sliver of hyperspace. The fabric of space-time, like the sheerest of silk chiffons, may have only the most imperceptible thickness in the other dimensions. If this is true, you shouldn't plan to spend your next family holiday visiting hyperspace.
But, in 1999, physicists Lisa Randall and Raman Sundrum proposed another explanation. Maybe the other dimensions were not so small. Instead, they suggested that the reason we haven't seen them is that we are stuck to our three dimensions. Just as the amoeba is stuck to the paper by the force of gravity, so too we—along with everything else in our familiar universe—might be confined by a force to our familiar dimensions and be prevented from entering the other dimensions that are all around us. Excitingly, this theory can be experimentally tested. This time next year, a giant experiment near Geneva will try to find evidence of hidden dimensions of space. If we find them (a big if), it will be one of the most sensational discoveries in human history.
Sixty-five million years ago, an asteroid or comet about 10 km wide came hurtling through space from one of these regions at some 20 km per second and slammed into the earth near what is now the coast of Mexico. The kinetic energy delivered by the impact was beyond comprehension—ten billion times the energy of the Hiroshima atom bomb—and it immediately unleashed an all-out apocalypse on earth. Computer simulations and geological evidence suggest that the collision triggered massive earthquakes worldwide, as well as possibly volcanic eruptions and 100-metre-high mega-tsunamis. Meanwhile, the debris from the collision flew up in a plume that rose almost to the moon. As the debris fell back down, it heated up the atmosphere, caught fire and pelted the earth with a hail of burning rocks. These in turn ignited a global forest fire. Those who managed to survive the earthquakes, the volcanoes, the tsunamis and the fires were plunged into pitch darkness and below-freezing temperatures as the dust from the collision blocked out the sun for almost a year. Without sunlight, plants failed to photosynthesise and food chains collapsed.
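The "ten billion Hiroshimas" figure can be sanity-checked to order of magnitude. A sketch using the size and speed above, plus an assumed rock density of 3,000 kg per cubic metre; only the ballpark matters here, and the answer lands within a factor of two of the text's figure:

```python
import math

radius = 5_000          # metres (half of a 10 km wide object)
speed = 20_000          # impact speed, metres per second
density = 3_000         # kg per cubic metre, typical rock (assumed)

# Treat the impactor as a sphere: mass = (4/3) * pi * r^3 * density,
# and its kinetic energy = (1/2) * m * v^2.
mass = (4 / 3) * math.pi * radius**3 * density
energy = 0.5 * mass * speed**2

hiroshima = 6.3e13      # joules released by the Hiroshima bomb, roughly
print(f"{energy / hiroshima:.1e} Hiroshimas")   # a few billion
```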
Around seventy per cent of living species were wiped out, just like that. On land, no species weighing more than 25 kg survived; all that was left of the mighty dinosaurs, who had walked the earth for over a hundred million years, were their bones. (Not exactly: a few flying dinosaurs did survive; they evolved into today's birds.) The obvious question is: can such things—shudder—happen again? The answer is written on the face of the moon. Because the moon has no weather (being too light to hold an atmosphere) and no tectonic plates, its surface is not subject to erosion. But because there is no erosion, the moon's scars do not heal. And yes, one of the first things we notice about the moon's blemished face is that it is pitted with craters, each one gouged out by a past collision.
Evidently, the collision that finished the dinosaurs was not a one-off event. Indeed, shortly after 7 am on June 30, 1908, a meteor blew up in the earth's atmosphere over Tunguska, Siberia, flattening some 2,000 square kilometres of forest. Luckily, the region was so remote that the only casualties were several hundred reindeer and one reindeer herdsman. And in 1994 astronomers watched as comet Shoemaker-Levy 9 crashed spectacularly into Jupiter. But the mother of all accidents took place about four-and-a-half billion years ago, when the nascent earth was hit so hard that a large amount of it was blasted out into orbit. The blasted material later coalesced to form the earth's sister, the moon.
Crater analysis indicates that collisions obey a "power law" distribution. This means that if a one-metre object strikes the earth every year, then a two-metre object strikes only every four years and a four-metre object every sixteen years: double the size, and the wait grows fourfold. The kind of object that wiped out the dinosaurs appears only about once in 100 million years. Unfortunately, there's no posted schedule, so we have no idea when the next big one will show up. At the other extreme, tiny dust-like micro-meteoroids are constantly entering our atmosphere; we often see these streaking beautifully across the night sky in the form of shooting stars.
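All the quoted intervals follow one rule: the wait between strikes grows as the square of the object's size, normalised so that a one-metre object strikes once a year. A sketch:

```python
# Interval between strikes, in years, for an object of a given size,
# under the square-law reading of the crater statistics in the text.
def interval_years(size_metres):
    return size_metres ** 2

for size in [1, 2, 4, 10_000]:
    print(f"{size} m object: once every {interval_years(size):,} years")

# A 10,000 m (10 km) object, dinosaur-killer class, comes out at once
# every 100,000,000 years, matching the figure in the text.
```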
If a large stray object were to be spotted approaching the earth, could we do anything about it? There are a few ideas out there, ranging from trying to blow it up with a nuclear weapon to trying to divert its trajectory with a nudge. But given that these things speed along at several kilometres per second, we'll need to have pretty good aim. I myself would probably have one last party with my friends.
Stupid Math Tricks
Paul Erdős (1913-1996) was one of the greatest mathematicians of all time, best known for his work in number theory. He has been called the mathematician of mathematicians and the oddball of oddballs. He lived for mathematics. He had no life, no home, no possessions, no interests other than mathematics, and he only spoke to people who loved mathematics. Everyone else, to use his description, was “dead”. He slept three hours a day and spent the rest on mathematics. He ate very little (no time) and traveled widely with all his belongings in a plastic bag, looking for fellow mathematicians to live with for a few days (he really never had a home, but lived with whoever he could find). His hosts have said that he had no idea how to cut an apple or how to wash his underwear.
Mathematicians pride themselves on being pursuers of the purest of all sciences. So pure and so sublime is the purest form of mathematics that it is blasphemy to ask “what is the use of this?”. Practical applications are impure. But of course, mathematics shapes our lives every day. From buildings to toilets, from cheap Chinese toys to supersonic aircraft, everything is crafted by mathematics.
An old joke is quite relevant. A man flying a hot-air balloon got lost. So he descended and asked a woman walking in a field, “Where am I?” She thought for some time and then replied, “In a hot-air balloon.” Immediately the balloonist realized she was a mathematician, for three reasons: (1) she thought before replying, (2) what she said was absolutely correct, and (3) her reply was totally useless.
However, much to the chagrin of real mathematicians, there is fun in mathematics. Paul Erdős would, of course, turn over in his grave if he heard that.
Marriage by Mathematics
People meet people, people marry people. When and how should someone decide to marry? Every time John meets a suitable lady, he must make an important decision: attempt to marry her, or move on to find a better mate. The problem is, how does he know he will ever meet a better mate?
Let's do this mathematically. Suppose someone has 100 cards with 100 random numbers on them. He shows them to you one by one, and you have to call out the moment you believe you have just seen the highest number in the whole pack (analogous to the best lady). The goal is to do it as soon as possible. You could wait till you have seen them all, and then you would certainly know the largest number (in the case of ladies, John would have to wait his entire life to decide), but the sooner you make your move, the better it is for you. Mathematicians have worked this out, and the best strategy is to let the first 37 cards go by and then pick the first number that beats everything you have seen so far. That is about one-third of the way into the game. Actually, it's not one-third but 1/e, where e is about 2.71828.
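The 37-card rule is easy to test by simulation. A sketch of the card game (the number range, trial count and the threshold of 37 are illustrative choices): look at the first 37 cards without committing, then take the first card that beats them all.

```python
import random

def play(n=100, lookahead=37):
    cards = random.sample(range(10**6), n)     # n distinct random numbers
    best_seen = max(cards[:lookahead])         # benchmark from the first 37
    for card in cards[lookahead:]:
        if card > best_seen:
            return card == max(cards)          # committed to this card
    return cards[-1] == max(cards)             # forced to take the last one

trials = 20_000
wins = sum(play() for _ in range(trials))
print(wins / trials)   # hovers around 0.37, i.e. roughly 1/e
```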
Suppose we assume the range of marriageable ages for women is about 18 to 40 and for men about 20 to 45. Then a woman has 22 years to look for a mate and a man has 25, but if a person waits too long, the good ones will all be taken. Using the above strategy, the 1/e point for women is at age 26.1 and for men at age 29.2. Hence, this is the best time to take the plunge. The same theory, with a twist, can be used for arranged marriages: if you have gathered n prospects, meet the first n/e of them without committing, and then pick the first one who is better than all of those.
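The quoted ages drop straight out of the 1/e rule applied to each search window: start at the beginning of the window and commit after a fraction 1/e of it has passed. A sketch (tiny rounding differences aside):

```python
import math

# The 1/e point of a search window running from age `start` to `end`.
def plunge_age(start, end):
    return start + (end - start) / math.e

print(round(plunge_age(18, 40), 1))   # women, window 18-40: about 26.1
print(round(plunge_age(20, 45), 1))   # men, window 20-45: about 29.2
```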
This technique is, of course, quite useful in a variety of optimization problems.
The Birthday paradox
We are at a party, with lots of people in a room. How many people must there be to guarantee that two of them share the same birthday (ignoring leap years)? Of course, even if there are just 2 people, it is possible, though unlikely, that they have the same birthday.
If there are 366 people, you are guaranteed to find at least 2 people sharing a birthday. This follows from the Pigeon Hole Principle, which states, “If you stuff n pigeons into n-1 holes, then there must be a hole with more than one pigeon.”
However, there is an even more interesting situation. If there are just 23 people in a room, the probability that two of them share a birthday is more than half. This means that, quite often, in a gathering of 23 or more people, some of them share a birthday. In fact, if there are 40 people in a room, the probability of a shared birthday is nearly 90%. Quite difficult to believe, but it's true. This is called the Birthday Paradox.
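The numbers are simple to verify: the chance that all birthdays are distinct is 365/365 × 364/365 × 363/365 × …, and the paradox is one minus that product. A short sketch (the function name is my own):

```python
def birthday_collision_probability(people, days=365):
    """Probability that at least two of `people` share a birthday."""
    p_distinct = 1.0
    for i in range(people):
        p_distinct *= (days - i) / days  # person i avoids all earlier birthdays
    return 1 - p_distinct

print(f"23 people: {birthday_collision_probability(23):.1%}")  # 50.7%
print(f"40 people: {birthday_collision_probability(40):.1%}")  # 89.1%
```

At 366 people the product picks up a factor of zero and the probability becomes exactly 1, which is the Pigeon Hole Principle again.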
Apart from the fun factor, the Birthday Paradox happens to be an important principle used to gauge the difficulty of cracking certain encryption codes. That is, a code that looks difficult to crack can be shown, using the Birthday Paradox, to be quite weak.
Measuring a Wall
Watch Jim measure a wall. Jim is a Mathematician. He has a wall in his backyard, about 30 feet long, that he needs to measure. If he were an Engineer, he would get a foot-ruler and use it repeatedly until he found the length of the wall. Since he is a Mathematician, he does not possess a foot-ruler.
So Jim goes to his neighbor, borrows a foot-ruler, and places it against one end of the wall. Now Jim realizes that one foot-ruler will not be enough; a lot of wall remains to be measured. So he goes to another neighbor to get another foot-ruler. He places this ruler after the first one, and realizes he needs more. By the time he finishes measuring the wall, he has borrowed 30 foot-rulers from 30 neighbors.
Very pleased with himself for having successfully performed an engineering feat, Jim goes out to return all the rulers he has borrowed, but then realizes he has no idea which ruler belongs to whom. Being a good mathematician, he randomizes their order (the mathematical term for shuffling) and returns them to random neighbors.
Now, here is the question: what is the chance that some neighbor gets his own ruler back? Probably pretty small. So let's ask the opposite question: “What is the chance that not a single neighbor got his own foot-ruler back?” The probability of this should be quite high, and it should get higher the more neighbors there are—that is, if he borrowed 100 rulers, the chances of everyone getting the wrong ruler would be even higher.
Actually, it is not. The chance of no one getting the right ruler is quite low, about 37%. Which means it is very likely (63%) that at least one neighbor got the correct ruler back. And this probability barely changes as the number of neighbors increases—in fact, the larger the number of neighbors, the closer the probability is to 36.79%. Strange but true. Where did the 36.79% come from? It came from the very famous number e. The probability of everyone getting the wrong ruler rapidly approaches 1/e, which is about 0.3679.
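Jim's predicament can also be simulated. The sketch below (the function name is mine) shuffles 30 rulers a hundred thousand times and counts how often nobody gets their own back; the fraction hovers near 1/e.

```python
import math
import random

def nobody_matches(n=30):
    """Shuffle n rulers; True if no neighbor gets their own ruler back."""
    rulers = list(range(n))
    random.shuffle(rulers)
    return all(owner != ruler for owner, ruler in enumerate(rulers))

trials = 100_000
p = sum(nobody_matches() for _ in range(trials)) / trials
print(f"simulated: {p:.4f}   1/e: {1 / math.e:.4f}")  # both near 0.3679
```

Change the 30 to 100 or 1000 and the answer barely moves, just as the text claims.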
Six Degrees of Separation
Legend has it that any two people are separated by only a small degree, typically about 6. That is, if we take John in New York and Rajiv in Delhi, it is very likely that we can find six other people forming a chain of acquaintances from John to Rajiv. This story is also called the Small World Paradox. This surprising (almost-)fact cannot be mathematically proven, as it is obviously not universally true. For example, there is probably no connection between a Russian village dweller and an isolated tribal in the Amazon forest.
Statistical studies have shown, however, that the small number of connections does hold almost always. The degree of separation between most pairs of people tends to be about six, rising in rare cases to 10. Another study, involving web pages, shows that most web pages are also related by a small degree—that is, going from one web page to another needs only a small number of clicks (on hyperlinks).
Movie actors talk of the Kevin Bacon game, where each actor tries to find the separation between himself or herself and Kevin Bacon (a not-too-well-known actor). Of course, it is not quite surprising that every actor is a few steps away from Kevin Bacon, given the huge number of collaborators each person in the acting business has.
The “Erdos Number” was invented in honor of Paul Erdos. Erdos has Erdos number 0. Any person who coauthored a paper with Erdos has Erdos number 1; there are 507 of them (a measure of the prolific collaboration Paul Erdos was known for). Coauthors of those people have Erdos number 2, and there are estimated to be about 6,000 of them. The largest group of people with an Erdos number have the number 5 (it should have been 6, to fit the legend). As the number increases beyond that, the population drops off. In the year 2000 there was only one person known to have an Erdos number of 15. Of course, this will change as time progresses.
Yours truly has an Erdos number of 3.
The Number e
Marrying and measuring led us to the inverse of e. What is this e? In mathematics, e is a very famous number, much more famous than the ubiquitous pi (π). Engineers use π; mathematicians prefer e.
It is hard to characterize e as a real-life number. One of the best examples compares simple interest with compound interest. Let us say you keep Rs. 100 in a bank account earning x% simple interest (that is, the interest does not earn interest), and the money doubles in y years. If, however, you were earning continuously compounded interest at the same rate, then in those same y years the money would grow e times. Of course, we know e is about 2.71828. And, surprisingly, the result is independent of the Rs. 100, and of x and y.
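To see this numerically, pick any rate, say 5% (my arbitrary choice; any rate works). Simple interest at 5% doubles the money in 20 years, since x·y = 100. Compounding that same rate ever more frequently over those 20 years pushes the balance toward e times the principal:

```python
# Rs. 100 at 5% simple interest doubles in 20 years (x * y = 100).
# Compounding the same rate more and more often approaches e times.
r, y, principal = 0.05, 20, 100.0

for n in (1, 12, 365, 1_000_000):  # yearly, monthly, daily, ~continuous
    amount = principal * (1 + r / n) ** (n * y)
    print(f"compounded {n:>9} times a year: Rs. {amount:.2f}")
# the amounts climb toward 100 * e = Rs. 271.83
```

The limit behind this, (1 + r/n)^(n·y) → e^(r·y) as n grows, is in fact one of the standard definitions of e; the answer is e exactly whenever x·y = 100, whatever x and y are.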
Mind over Matter (Part 1)
Which tastes better, Coke or Pepsi? Must be easy to decide: have a sip of Coke and a sip of Pepsi, and then decide. Of course, some would choose Coke and some would choose Pepsi. Right? Wrong!
Strangely, most people cannot tell the difference unless they drink both in succession. That is, give someone a glass of the dark fizzy stuff and ask what it is; quite likely, they will guess wrong. Give someone a glass of water and a glass of water with a few spoons of sugar dissolved in it, and everyone will correctly identify which is which. Since many cannot tell whether a drink is Coke or Pepsi, there cannot be much of a difference.
If you serve both drinks, the taster tastes the (minor) difference but finds it hard to identify the brands or to state a clear preference. Suppose your friend John is an ardent Pepsi drinker. You pour him a cup of Coke and a cup of Pepsi and ask him which tastes better (John should prefer the Pepsi). Very likely he will get it right. Now do it a bit differently: you pour the drinks, but get Mary to serve them to John. Chances are, this time John will get it wrong. Why?
When you serve the drinks, you are performing a “single-blind test”: you know which cup contains which brand of cola, but John does not. When Mary serves the drinks, it is a “double-blind test”: neither Mary nor John has any idea of the contents. The difference between the results of single- and double-blind tests has been shown to be statistically significant. The server transmits subconscious cues to the taster.
Why do people have strong preferences between such similar fizzy drinks as Coke and Pepsi? Why are taste tests so complicated? The answer to both questions lies in the complexity of the human mind.
Preferences are often a cultivated phenomenon, deeply embedded in the psyche. If a child is told by his or her parents that Coke is better than Pepsi, he or she will believe it, internalize it and then adhere to it forever. Reality may never override this perception. This phenomenon transcends taste and encompasses most of our perceptions, likes and dislikes, behaviors, emotions and choices.
Double-blind testing tries to eliminate the many subtle cues that science cannot pinpoint but that exist in real life. The smile, the gesture and the demeanor are all things we pick up even when we think we do not. A simple thing like preference turns out to be immensely complex to measure. And even if, during a taste test, John picks Coke over Pepsi, he will not change his mind: the next time he knowingly drinks Coke, it will taste horrendous to him.
We all watch in amazement as Olympic figure skaters pirouette on the ice. They glide, they swoop, they spin, they jump. They seem to fly and sail; they seem to defy the laws of gravity. How do they do it? How does a fast-moving woman jump up and land in the arms of a fast-moving man without falling, all on a surface where friction is nearly non-existent? It is an unquantifiable coordination of body and mind. (On a personal note, I once observed a young lady do a jump at the ice-skating rink. It looked so simple that I tried it. About a millisecond after my feet left the surface, my entire body made intimate contact with the ice. Every part of my body (including parts I did not know existed) hurt. For two weeks.)
Most people can ride a bike and consider it easy; those who cannot find it terribly difficult. You are on a bike, riding down a winding path. Other people are walking; an occasional dog scampers across. As you speed up and glide along, you see everything, yet things are blurred. You notice various moving objects in front of you, but you do not hit them. The objects move unpredictably; the dog crossing the road stops and then starts walking again. Yet you do not hit the dog. Is that difficult to do? Are you really thinking about the possible mishaps, computing the timing and motion needed for collision avoidance? You must be, but not consciously.
The brain can be trained to predict, perceive and act without the person actively thinking about it, and it often gets things quite right. Throw a ball into the air, and a dog can run to the point where it will land, then jump and catch it in mid-air. It is a phenomenal coordination of time, motion, muscle control and body control. Making a computer-controlled arm catch a ball has been a challenge for scientists and engineers, and the results have been a mixed bag. In 1998 a team from MIT built a robotic arm that could catch a ball, and found that the computation and physical dexterity required were quite complex. The contraption worked, but the average dog does it better.
Experiments to make computers act like dogs may seem like fun, but making machines more cognizant of the physical environment is important. A major challenge is enabling aircraft to land under automated control. Statistically, the most accident-prone segment of a flight is the landing, and landings under human control are subject to pilot error. Taking the human out of the loop could improve aviation safety.
Prototype aircraft have landed without human control. However, the reliability of such technology still cannot be trusted with human lives. The best reported landing automation (by NASA, in 2001) actually uses a human: a camera mounted on the unmanned plane sends pictures to a screen in front of a trained pilot. Electrodes wired to his arms and feet pick up his muscle movements and transmit them to the controls of the drone. The pilot watches the video, and his body impulses actually land the plane.
About a million humans are capable of landing airplanes. Commercial, private and recreational flying generates millions of safe landings every day, but every one of them is handled by a real live human, not a machine. The computers and instruments aboard a modern jetliner may navigate the plane to its destination, but when it comes in for the touchdown, a human takes total control.
Landing a plane involves managing a few key parameters: airspeed, glide slope, drift and alignment. First, the airspeed has to be right. We want to land at the slowest possible airspeed, a bit above the aircraft's stall speed (the speed at which the airflow separates from the wings and the plane drops fast). The glide slope determines how fast the plane descends and must be a line that leads the plane from its current position to the touchdown point. Drift is evil; it is caused by winds that keep pushing the plane off the intended path. Finally, the plane not only has to be traveling in the right direction, it has to be pointed right: planes can fly slightly sideways (due to drift), but they have to land perfectly aligned.
You are behind the controls of an aircraft as we make the final approach to the landing strip. As the pilot, you have two feet on the rudder pedals, one hand on the yoke, one hand on the throttle and one hand on the trim. One eye must be planted on the aim point, constantly judging the glide slope, drift and alignment. The second eye must glance down to judge your height and correlate it with the glide-slope indicator and altimeter on the dash. If you have a third eye, it should watch the engine RPM, the manifold pressure and the vertical speed indicator. The fourth and most important eye must be glued to the airspeed indicator, to ensure the beast does not stall. A stall close to the ground is very irritating to your friends; they will be obliged to attend your funeral.
To cut a long story of wrestling with wind, physics and machine short: good coordination of your three hands and four eyes, coupled with intense electrical pings flowing through the nerves and twitching many a muscle in harmony, brings you flying over the runway, a few feet off the ground, at a little above the stall speed. The incessant chatter on the radio distracts you while you pull the nose up to stop the descent (a descending plane hitting the ground will break off its wheels). As the plane levels out, a cushion of air (called ground effect) bounces it up towards the sky. Some of your hands compensate for the bounce while others compensate for the irritating drift. You move the throttle and the trim to help in the compensation; you have no idea which way you moved them. Unknown to you, your feet are working overtime, managing alignment. You raise the nose and the airspeed drops to the verge of a stall, just a few inches off the ground. The imminent stall reduces lift and the plane settles onto the ground. Through this intense maneuver (about 2 seconds), you did not have time to think: the body was reacting to commands from the brain, with the conscious mind turned off.
[To be continued]
Partha Dasgupta is on the faculty of the Computer Science and Engineering Department at Arizona State University in Tempe. His specializations are in the areas of Operating Systems, Cryptography and Networking. His homepage is at http://cactus.eas.asu.edu/partha