Yes, we have no bananas

by Russ Roberts on November 30, 2011

in Data

Dave Hebert alerts me to this manuscript that illustrates the monkey experiment mentioned in this post of long ago. The bottom line is that a trillion monkeys typing for a trillion years are unlikely to come up with more than a few words of Shakespeare. You really might need an infinite number of monkeys typing forever. And where would you find an infinite number of bananas to sustain them?

Comments

Jon Murphy November 30, 2011 at 2:35 pm

That little book cost 2,000 pounds? That’s well worth the money.

Economic Freedom November 30, 2011 at 2:40 pm

It is a mistake to think that we can’t learn a lot from studying monkeys. I know you didn’t say that, but Shakespeare was an ape. So we were lucky to get a Shakespeare or a Cervantes from just two apes without the bananas.

http://en.wikipedia.org/wiki/Ape

Economic Freedom November 30, 2011 at 3:58 pm

It is a mistake to think that we can’t learn a lot from studying monkeys.

For example, I’m a monkey. And we’ve all learned that, given enough time, a smart monkey can evolve into an ignorant troll. Just read my recent posts for proof.

Jon Murphy November 30, 2011 at 5:24 pm

That was a good one! Will the real Economic Freedom please stand up.

Greg Webb November 30, 2011 at 5:27 pm

No need for the real Economic Freedom to stand up…intelligence, not socialist dogma, indicates the real one.

Jon Murphy November 30, 2011 at 11:29 pm

For the record, I didn’t type that. At 5:24 PM, I was at a party.

Greg Webb November 30, 2011 at 11:35 pm

At 5:24 pm, I was still at work. Don’t ever leave the university! It’s a nice place….:)

Jon Murphy December 1, 2011 at 7:25 am

Dude, I’ve graduated. I have a full-time job. I have to work late tonight to make up for leaving early today. But I don’t mind. I enjoy my job.

Brian November 30, 2011 at 3:34 pm

Jon Murphy November 30, 2011 at 3:39 pm

Fascinating.

Ubiquitous November 30, 2011 at 4:08 pm

Sauce for goose, sauce for gander.

If pure dumb luck is highly unlikely to guide the hands of even trillions of monkeys typing away to lead to an information-rich result such as a Shakespearean sonnet, then — for exactly the same reason — pure dumb luck is highly unlikely to guide the molecular motions of trillions of amino acids (or nucleotides, if you wish to begin with DNA instead) to lead to an information-rich result such as a functional protein.

Abiogenesis, guided by nothing but pure dumb luck, is a dead end.

vikingvista November 30, 2011 at 6:35 pm

“highly unlikely”

That is actually an incredible understatement. But the universe is old and big (another incredible understatement). The trillion monkey experiment is no longer statistically absurd when you realize how many stars and planets there are in the universe, and how long the universe has had them.

These magnitudes allow unlikely events to occur frequently. That is, to make your final claim, you should be using the word “impossible” rather than “unlikely”.

Ubiquitous December 1, 2011 at 5:45 am

@vikingvista:
“the universe is old and big…monkey experiment is no longer statistically absurd…magnitudes allow unlikely events to occur…”

Alas, amigo, the universe is not nearly big enough or old enough to recreate by chance the unique sequence of 643 character-symbols (letters, spaces, punctuation) comprising this famous sonnet:

When my love swears that she is made of truth,
I do believe her (though I know she lies)
That she might think me some untutored youth,
Unskillful in the world’s false forgeries.
Thus, vainly thinking that she thinks me young,
Although I know my years be past the best,
I, smiling, credit her false-speaking tongue,
Outfacing faults in love, with love’s ill rest.
But wherefore says my love that she is young?
And wherefore say not I, that I am old?
O, love’s best habit’s in a soothing tongue,
And age in love loves not to have years told.
Therefore I’ll lie with love, and love, with me,
Since that our faults in love thus smothered be.

A simple calculation will prove this. A monkey must randomly type one character at a time from a keyboard that has the following choices: 26 letters, 1 spacebar, 10 punctuation marks = 37 character-symbols (we’ll assume, for simplicity, the keyboards have no number keys or special symbol keys like &, %, $, #, @, *). Since any one of those 37 keys has an equal chance of being struck by the monkey, the symbol that actually does appear had a probability of doing so of one-in-thirty-seven, or 1/37. That probability would pertain to each of the 643 character-symbols comprising the sonnet; so the probability of an entire sequence of 643 character-symbols is 1/37 multiplied by itself 643 times.

(1/37)^643 is approximately (1/10)^1008. Meaning: given a keyboard with 37 symbols, there are 37^643, or roughly 10^1008, unique combinations with a length of 643 symbols, and only one of them exactly matches the Shakespeare sonnet. The monkeys have to find that one combination out of all possible ones.

The problem can now be stated more precisely:

Assuming the Big Bang model of the universe is true, the age of the universe is about 12 billion years, which is on the order of 10^17 seconds. Is it reasonable to conclude that 10^12 monkeys typing randomly on keyboards of 37 symbols, over a time period of 10^17 seconds, could locate, by chance alone, one unique string of 643 character-symbols out of a total of 10^1008 combinations?

Answer: It is entirely unreasonable to conclude that, even though the odds, in a formal sense, are non-zero. We should still reject chance as an explanation.

With a trillion monkeys typing away for 10^17 seconds, the total probabilistic resources you’re using are 10^12 * 10^17 = 10^29 monkey-seconds of searching. Compare the orders of magnitude — just the exponents — of the monkey/time resources to the sonnet search space: 29 vs. 1008! And remember that even if you luxuriously allow your hypothetical to increase by a factor of 10 — i.e., 10 trillion monkeys, or 120 billion years of existence since the Big Bang — the exponent simply increases by 1; i.e., 10^30 instead of 10^29. You’re still a long, long way from 10^1008.
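A quick sanity check of both numbers, as a minimal Python sketch, using the 643-character count and 37-key keyboard assumed above:

    from math import log10

    ALPHABET = 37   # 26 letters + 1 space + 10 punctuation marks
    LENGTH = 643    # character count assumed for the sonnet

    # Size of the target space: 37^643. Work in log10 to avoid overflow.
    space_exp = LENGTH * log10(ALPHABET)
    print(f"sequences of length 643: about 10^{space_exp:.0f}")   # ~10^1008

    # Probabilistic resources: a trillion monkeys for ~12 billion years,
    # generously counting one full 643-character attempt per monkey-second.
    trials = 10**12 * 10**17
    print(f"attempts available: 10^{log10(trials):.0f}")          # 10^29
    print(f"fraction of the space searched: ~10^{log10(trials) - space_exp:.0f}")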

Think of these search spaces as actual volumetric spaces or rooms filled with unit cubes (e.g., atomic-sized dice). We can pretend to fit 10^1008 very tiny dice in a room the size of the Milky Way galaxy — only one die represents the unique sequence of symbols we recognize as the sonnet above; but 10^29 dice would only fill a mere speck of dust floating around the galaxy. Our trillion monkeys and their 12 billion years would be able to reasonably search through a volumetric space the size of a speck of dust to look for the sonnet-sequence, but what about the rest of the space in the galaxy? You don’t have the “probabilistic resources” for that; i.e., you neither have enough monkeys, nor do you have enough time, to do all the random searching, given the total number of possible sequence combinations.

In other words, you’re searching for a needle in a haystack, and you’ve got too much hay to search through in the allotted time.

I’m not saying it cannot reasonably be done at all; I’m saying it cannot be reasonably done by chance+time alone.

On deciding whether or not chance is a good causal explanation for an event, statisticians establish a threshold, above which they concede that chance could have been the cause (even if unlikely), but below which they reject chance as the explanation (even if non-zero). The region below the threshold is the “rejection region” and it’s established by means of non-mathematical criteria such as empirical evidence, experience, etc.

For example, if someone is tossing a coin, how many times in a row would he have to toss “heads” before you’d begin to suspect that the coin was not a fair one? Ten? Twenty? Fifty? 250? What would that suspicion be based on? Prior experience tossing coins yourself, perhaps? Or maybe prior experience with dishonest coin-tossers? Whatever threshold you choose, I don’t think you’d say “Irrespective of how many tosses in a row land ‘heads’, I will believe the coin is fair, unless I have direct physical evidence to the contrary.” No one would ever be that naive.
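For what it’s worth, the arithmetic behind that intuition is simple. A minimal Python sketch for the run lengths mentioned above:

    from math import log10

    # Probability that a fair coin lands heads k times in a row: (1/2)^k
    for k in (10, 20, 50, 250):
        print(f"{k} heads in a row: about 10^{k * log10(0.5):.0f}")
    # 10 -> ~10^-3, 20 -> ~10^-6, 50 -> ~10^-15, 250 -> ~10^-75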

Instead of a sequence of 643 character-symbols comprising a sonnet, assume we’re speaking about a sequence of 643 amino acids comprising a polypeptide. Instead of monkeys typing 37 symbols randomly over 12 billion years, assume blind physical forces jostling 20 amino acids over the same period. Same problems and same considerations apply.

If we reject the monkey hypothesis and chance as unreasonable for explaining the emergence of the sonnet, then we should reject the abiogenesis hypothesis and chance as unreasonable for explaining the emergence of life.

vikingvista December 1, 2011 at 2:38 pm

Whoa whoa whoa. I know how to multiply numbers together. By your reasoning, the current configuration of particles in the universe is even less likely, yet here we are–planets, stars, galaxies, clusters of galaxies, Giant’s Causeway, rings of Saturn, cubic zirconia, atoms, quarks, …the works.

The real issue is how an existing environment (and there is always an existing environment) directs the outcome of seemingly random events. That is, you are ignoring selective pressures. An environment biases these events towards those things more likely to exist in a particular environment.

So you need to explain how, in your mechanism, the environment which imparts meaning to the works of Shakespeare is imparting influence upon the seemingly random monkey-typing processes. For without such a meaning-imparting environment, the works of Shakespeare are no more special to anything in the universe than any other string of characters.

Ubiquitous December 1, 2011 at 6:30 pm

By your reasoning, the current configuration of particles in the universe is even less likely, yet here we are–

Yep. Kinda does away with the chance hypothesis. I agree completely with astrophysicist Sir Fred Hoyle, who said that the universe “appears to have been monkeyed with by a super-intelligence.”

The real issue is how an existing environment (and there is always an existing environment) directs the outcome of seemingly random events.

But any existing environment is itself the product of random events, so it obviously can’t direct other environments to a non-random outcome. “Random” means “highly probable.” The real issue is how an outcome that is both specific and complex can appear when there are many more outcomes that are non-specified and non-complex (i.e., random). Chance doesn’t explain it.

If you set off a bomb in someone’s apartment, the reason the explosion doesn’t neatly put the coffee mugs in the cupboard, and carefully fold the dish towels, and alphabetically arrange all the spices on the spice rack, is not that such an outcome is mathematically zero — it isn’t mathematically zero; but all the other possible outcomes — such as shattered glass on the floor, shards of coffee-mug ceramic embedded in the sheetrock walls, light fixtures torn out of the ceiling and hanging by frayed wiring — have a much higher probability of occurring. In other words, there are many more possible combinations of shattered glass and crockery on the floor and the walls than there are possible combinations of intact glass and crockery neatly stacked; the explosion makes the environment in the apartment “play the odds” by taking a highly unlikely configuration (intact, neatly arranged) and putting it into one of many more likely configurations (shattered, broken, random).

So talk of “environments directing an outcome” to anything but a highly probable, randomized result, is just that: talk.

With the exception of teleology — “mind”, something capable of projecting into the future an end, and capable of selecting shorter-term goals as means — I don’t know of any kind of environment or natural physical cause that can overcome the odds of entropy. Random = highly probable. It makes no difference how often you set off a bomb in someone’s apartment; the random outcome will always have higher probability of occurring than the complex specified one. Which means that when you walk into someone’s impeccably neat apartment, you cannot assume that its neatness was caused by a bomb explosion creating an environment that directed an unlikely outcome. You assume teleology and purpose; you assume a maid.

vikingvista December 1, 2011 at 9:10 pm

DU: Yep. Kinda does away with the chance hypothesis. I agree completely with astrophysicist Sir Fred Hoyle, who said that the universe “appears to have been monkeyed with by a super-intelligence.”

I merely look at nature. I observe order where there is no observable consciousness acting upon it, and assume nothing. Nature is what it is, not what I want it to be. It would be as presumptuous for me (or you) to assume that order is always the exclusive product of consciousness as to assume disorder is, with my evidence being merely that either can be.

ME: The real issue is how an existing environment (and there is always an existing environment) directs the outcome of seemingly random events.
DU: But any existing environment is itself the product of random events, so it obviously can’t direct other environments to a non-random outcome.

1. What law of nature says that random cannot beget nonrandom, or vice versa?

2. You are missing the reality that there is nowhere any such thing as pure randomness. What you perceive as random (by virtue of not having identified a pattern in it) is always present alongside something that has meaningful structure to you (is nonrandom). If that were not the case, your consciousness would not even be able to function.

3. Pure randomness is not even imaginable, yet you assume it to be the genesis of all things? Why is the genesis not pure order? What exactly is this strange natural relationship between random and nonrandom that you presume to know?

DU: “Random” means “highly probable.”

No it doesn’t. If that is what you believe, then you have been misusing the term, as well as misreading me and anyone else who uses it.

DU: The real issue is how an outcome that is both specific and complex can appear when there are many more outcomes that are non-specified and non-complex (i.e., random).

First, “random” also does not mean “non-specified and non-complex”.

Every outcome, and every state, is “specific”. A rock doesn’t care if it rests upon a straight flush or a 9 high. That particular 9 high configuration has precisely the same probability as that particular straight flush. Its relevance is the context that you give it.

Complexity too is highly dependent upon the context you choose. There is no objective measure to say that your brain is more complex than the atomic configuration of the neighboring supercluster of galaxies.

So the real issue is why some configurations take on more important meanings for you than others do.

DU: “Chance doesn’t explain it.”

Chance doesn’t explain anything. It is our model for that which we cannot, or don’t need to, explain. Natural selection is how these events that we can’t explain or relate to other environmental events are shaped by those other events.

DU: If you set off a bomb in someone’s apartment, the reason the explosion doesn’t neatly put the coffee mugs in the cupboard, and carefully fold the dish towels, and alphabetically arrange all the spices on the spice rack, is not that such an outcome is mathematically zero — it isn’t mathematically zero; but all the other possible outcomes…have a much higher probability of occurring.

Yes. This is exactly what I mean about the influence of the surrounding environment. Just as with natural selection, the environment changes the probability of outcomes of these events. It makes some outcomes more likely (those selected by the environment) and others less likely. Natural selection is the story of causal forces shaping seemingly random events. It isn’t about violations of causality. Nor is it about treating unlikely events as likely, like a bomb neatly arranging a room. It is about looking at those random events, such as the rare or seemingly insignificant ones that we observe today, and imagining what a shaping environment would do to them over the course of trillions of experiments.

DU: So talk of “environments directing an outcome” to anything but a highly probable, randomized result, is just that: talk.

Now it sounds as though you are biasing yourself to an outcome–the one of which you are necessarily a part. The current state of matter in the world may be extremely improbable. But so would any other particular state. What is considerably more probable is that *some* state (somewhere in the universe) would emerge constrained by its particular causal forces into something that a conscious being would consider highly “ordered”, “complex”, or “interesting”.

DU: With the exception of teleology — “mind”, something capable of projecting into the future an end, and capable of selecting shorter-term goals as means — I don’t know of any kind of environment or natural physical cause that can overcome the odds of entropy.

This is simply a fundamental misunderstanding of entropy. I doubt even a devoutly religious intelligent design physicist would make this error. Local decreases in entropy are commonplace in the universe (obviously) and quite mundane.

DU: Random = highly probable.

No. Absolutely untrue in any corner of the English language that I have encountered. I don’t even know where you get this.

“You assume teleology and purpose; you assume a maid.”

Well, clearly you do. Not just upon seeing an ordered apartment, but apparently upon seeing anything that your mind finds to be ordered. But as much as you would like it to be the case, your assumptions do not determine external reality.

Ubiquitous December 1, 2011 at 11:25 pm

What law of nature says that random cannot beget nonrandom, or vice versa?

The 2nd law of thermodynamics, known as “entropy.”

You can imagine all the physical miracles you want, but you cannot so easily get away with imagining a mathematical miracle. Things in nature flow from the less probable to the more probable; from higher degrees of structure, differentiation, and information, to lower degrees of structure, differentiation, and information. When you put a bowl of hot soup on the dinner table, the soup gradually cools and the air around it gradually warms; in a small enough closed system, the hot soup and the cool surrounding air would randomize — essentially, commingling — until everything is one bland temperature; i.e., no differentiation. That’s a statistical law. The reason it happens is that one bland uniform temperature has a much higher probability of occurring in nature than a sharp gradient, with hot on one side and cool on the other. What entropy tells us is that you will never see the opposite: in a closed system, you will never see the soup maintaining its hotness and the surrounding air cooling of its own accord. You can accomplish this, of course, by adding information from outside the system — essentially, extending the boundaries of the system — but then you’re merely pushing the process back one step. As soon as you run out of energy or fuel, the enlarged system will begin to randomize until everything is again one, even temperature.
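The soup example is just relaxation toward the most probable (uniform) state, which is easy to caricature numerically. A toy Python sketch, with made-up units, a made-up coupling constant, and equal heat capacities assumed:

    # Hot soup and room air exchanging heat in a closed box (toy model).
    soup, air = 80.0, 20.0   # starting temperatures, arbitrary units
    k = 0.1                  # made-up heat-transfer coefficient per step

    for step in range(60):
        flow = k * (soup - air)   # heat always flows down the gradient
        soup -= flow
        air += flow

    print(round(soup, 1), round(air, 1))   # both end near 50.0: one bland temperature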

Entropy explains why an exploding bomb — which is simply gas expanding in a rapid, random way (sort of like muirgeo’s posts) — never creates order while exploding, but always disorder. Entropy also explains why structures that start out highly ordered and in highly improbable configurations — a man-made brick wall, for example — always deteriorate over time rather than get stronger: the randomized form of the brick wall is called “rubble”, and it’s a more highly probable configuration for clay, mortar, and water — because there are so many forms that rubble can assume — than the form known as a “sound brick wall”, which can assume pretty much only one form.

If you wanted to clean your apartment in preparation for the holidays, would you attempt to do so by setting off a bomb in the hopes that “random would beget non-random”, or would you do it by putting in new organizational information from outside the original closed system — a fancy way of saying “hire a maid”?

vikingvista December 1, 2011 at 11:44 pm

ME: What law of nature says that random cannot beget nonrandom, or visa versa?

DU: The 2nd law of thermodynamics known as “entropy.”

That simply is false. And it is false in any thermodynamics textbook, and it is clearly false in the everyday universe that we observe. It is a fundamental misunderstanding that is easily rectified before you get to the end of the first page of the first chapter that discusses entropy.

And the 2nd law isn’t known as “entropy”. Entropy is known as “entropy”, which is a quantity that can increase, decrease, or remain unchanged. The 2nd law can be stated *in terms of* entropy by saying that the entropy of a closed system is always increasing. It does not say “uniformly” increasing. It does not say that nonclosed subsystems of the closed system cannot be decreasing. If it did say that, then the law would be proven wrong a thousand times a day.

So I will answer my own question. There is NO law that dictates whether or not randomness can arise from nonrandomness, or vice versa. Nor can there be. To think so would reflect a misunderstanding of what “random” means.

Ubiquitous December 2, 2011 at 1:20 pm

The 2nd law can be stated *in terms of* entropy by saying that the entropy of a closed system is always increasing.

Actually, the 2nd law cannot be stated any other way; ergo it is known as the law of entropy. You’re quibbling.

It does not say “uniformly” increasing.

Neither did I. I said that highly ordered states tend toward uniform states of lower order. “Uniform” = “random” = “highly probable”. They all mean the same thing as far as the 2nd law is concerned. If your textbooks don’t say that, then you wuz robbed. Return the books and get a refund because these are standard notions in thermodynamics, statistics, and probability.

It does not say that nonclosed subsystems of the closed system cannot be decreasing.

It says that in an open system, any decrease in entropy — meaning, any increase in order — must be “pre-existing order” that has entered through a barrier; and the increase in order cannot be greater than the pre-existing order that is entering through the barrier.

Example: a 2-room house, separated from the rest of the universe, with a door separating the rooms. One room is heated, the other not. Over time, the house as a whole must decrease its “thermal order”; the two rooms will slowly become one even temperature. If we open the door, the two rooms are now an “open system.” Evolutionists — always eager to believe in a miraculous universe where, apparently, anything can happen — claim that even in this scenario, it is possible for the cold room to get colder, provided the hot room gets even hotter, so that the “total entropy” of the house as a whole can still be said to have increased. (i.e., local decreases in entropy, compensated by even greater increases in entropy someplace else.) This, however, is not what the 2nd law actually says. The law says that in an open system (2 rooms with the door open between them), the cold room can get colder IF something is passing through the barrier — through the doorway — and entering the cold room that would increase the probability of its getting colder.

As mathematician Granville Sewell puts it:

“if an increase in order is extremely improbable when a system is closed, it is still extremely improbable when the system is open, unless something is entering which makes the increase not extremely improbable.”

That thermal order might be hypothetically flowing from the hot room, through the open door, into the cold room (making it even colder) does not in any way make it easier for the thermal order in the cold room to “morph” into some other kind of order such as that required for the appearance of computers, books, television sets, or biological organisms — because thermal order doesn’t increase the probability of computers, et al., appearing. To increase the probability of highly ordered things such as computers and televisions appearing in the cold room, you would need something as highly ordered as computers and televisions flowing out of the hot room, through the barrier, and into the cold room. “Thermal order” flowing from one room to the next wouldn’t increase the probability of something else appearing.

…order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here (it would have been violated somewhere else!). But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here. What happens in an open system depends on the initial and boundary conditions; what happens in a closed system depends only on the initial conditions.

Everything the second law predicts, it predicts with such high probability that it is as reliable as any other law of science . . . And since the second law derives its authority from logic alone, and thus cannot be overturned by future discoveries, Sir Arthur Eddington called it the “supreme” law of Nature.

More generally, the second law predicts that, in a closed system where only natural forces are at work, every type of order is unstable and will eventually decrease, as everything tends toward more probable (more random) states*–not only will carbon and temperature distributions become more random, but the performance of all electronic devices will deteriorate, not improve. Natural forces, such as corrosion, erosion, fire and explosions, do not create order, they destroy it. The second law is all about probability. The reason natural forces may turn a spaceship into a pile of rubble but not vice-versa is probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back.

…the second law does not simply require that any increase in thermal order in an open system be compensated for by a decrease outside the system, it requires that the increase in thermal order be no greater than the thermal order entering the open system. The thermal order in an open system can decrease in two different ways: it may be converted to disorder … or it may be exported through the boundary (second term). It can increase in only one way–by importation through the boundary.

http://www.iscid.org/papers/Sewell_EvolutionThermodynamics_102802.pdf

Granville Sewell
University of Texas, El Paso

*Note well: “everything tends toward more probable (more random) states”

Random states are highly probable states. Non-random states are low-probability states. Proof? Take your Scrabble letters, put them in the cardboard box whence they came, shake well, toss into the air, and note the patterns the letters make on the floor. They will all be random patterns — gibberish. Why? Because there are many such random patterns, so they have a high probability of appearing when they land on the floor. But Scrabble squares that form the non-random pattern “Shall I compare thee to a summer’s day?” have effectively zero probability of forming without intelligent, teleological, goal-directed input from outside the system of “Scrabble squares + cardboard box + tossing motion”. I.e., to overcome the huge odds of getting gibberish, you would have to have the mental goal in mind — “Shall I compare thee to a summer’s day?” — and then you would have to select the letters that matched your target goal. Intelligent Selection, as opposed to Natural Selection.

Finally (as an aside, relevant to the discussion above), the leftist wing-nut, Noam Chomsky, asked an important question in linguistics back in the 1960s. We can appreciate his question even if we reject his answer. The question was this:

Given 26 letters and 1 space, how many “well formed” sentences can one create in English that are 100 characters long? The total number of combinations is easy to calculate: it’s simply 27^100, or approximately 10^143. Most of those combinations will be gibberish, of course. So using a few assumptions (e.g., in English, a “u” will follow a “q” with a probability of 100%; “e” is the most frequent letter; etc.) and statistical techniques, Chomsky and his colleague (Morris Halle) showed that there are actually about 10^25 well-formed English sentences having a length of 100 characters.

Now 10^25 is a big number, but it’s completely dwarfed by the immensity of a number like 10^143, which represents the total number of combinations of 100 characters, not just the well-formed intelligible ones. Chomsky’s question, therefore, was this: how does an infant, in the process of language acquisition, learn to distinguish well-formed constructions from gibberish, given that there are relatively few intelligible combinations compared to the total number of possible ones? If the total number of possible combinations (10^143) were likened to a planet such as Earth, the paltry few intelligible combinations (10^25) could be likened to a spot on the earth the size of a pinhead. How does the child zero in on that pinhead — especially (claimed Chomsky) given the paucity of good examples coming from adults (the so-called “motherese” or “baby talk” that parents usually use when speaking to infants)?

Chomsky’s answer, unfortunately, was to claim that grammatical knowledge of correct, well-formed constructions must be inborn and “hardwired”, because the search space of 10^143 would be too big for any child to sort through and test for acceptability by trial and error, or any sort of empirical process. We don’t have to accept his answer to see the relevance of his question. It’s the same problem with the trillion monkeys trying to recreate a Shakespeare sonnet: there are so many non-sonnet gibberish combinations compared to the one unique sonnet combination that the odds of hitting the latter are drowned out by the much greater odds of hitting the former.
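Chomsky’s combinatorial numbers are easy to check, at least on the counting side (a minimal Python sketch; the 10^25 count of well-formed sentences is Chomsky and Halle’s estimate, taken on faith here):

    from math import log10

    total_exp = 100 * log10(27)   # 27 symbols (26 letters + 1 space), length 100
    print(f"all 100-character strings: about 10^{total_exp:.0f}")   # ~10^143

    well_formed_exp = 25          # Chomsky & Halle's estimated exponent
    print(f"intelligible fraction: about 10^{well_formed_exp - total_exp:.0f}")  # ~10^-118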

vikingvista December 2, 2011 at 2:16 pm

DU: Actually, the 2nd law cannot be stated any other way; ergo it is known as the law of entropy. You’re quibbling.

False again. It can be, and has been, stated in several different ways. Here is an easy place for you to start:

http://en.wikipedia.org/wiki/Second_law_of_thermodynamics

These are demonstrable facts that you keep getting wrong here. Whoever is your source of this nonsense, shun them immediately, and start doing some independent reading.

ME: It does not say “uniformly” increasing.
DU: Neither did I.

Great! Then you agree that entropy can (and hopefully recognize that it commonly does) decrease throughout nature. It is therefore of absolutely no use in your argument. The 2nd law road is a dead end for you.

DU: I said that highly ordered states tend toward uniform states of lower order. “Uniform” = “random” = “highly probable”. They all mean the same thing as far as the 2nd law is concerned.

“Uniform” absolutely does not = “random”, and “random” absolutely does not = “highly probable”. These definitions are not only false, but absurd. They don’t mean the same thing as far as the 2nd law is concerned or as far as the English language is concerned.

U, I’m sure you’re a perfectly nice guy, but if you can’t get some basic facts and definitions straight, there is no way to advance this argument. Being an advocate of self-learning, I used to think a formal science education was unnecessary. Thanks for changing my mind on that, at least.

Ubiquitous December 2, 2011 at 6:43 pm

You might want to actually read the posts you link to. The Wikipedia article proves my case, not yours. This quote at the end summarizes it succinctly:

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

—Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

And see the following:

http://www.ams.org/notices/199805/lieb.pdf

A Guide to Entropy and the Second Law of Thermodynamics
by Elliott H. Lieb and Jakob Yngvason

[Elliott H. Lieb is professor of mathematics and physics at Princeton University.
Jakob Yngvason is professor of theoretical physics at Vienna University.]

We shall abuse language (or reformulate it) by referring to the existence of entropy as the second law. This, at least, is unambiguous. The entropy we are talking about is that defined by thermodynamics (and not some analytic quantity, usually involving expressions such as −p lnp, that appears in information theory, probability theory, and statistical mechanical models).

The Second Law. Three popular formulations of this law are:

1. Clausius: No process is possible, the sole result of which is that heat is transferred from a body to a hotter one.

2. Kelvin (and Planck): No process is possible, the sole result of which is that a body is cooled and work is done.

3. Carathéodory: In any neighborhood of any state there are states that cannot be reached from it by an adiabatic process.

All three formulations are supposed to lead to the entropy principle (defined below). These steps can be found in many books and will not be trodden again here.

In sum:

Entropy is the 2nd law of thermodynamics.

The 2nd law of thermodynamics has 3 popular formulations (Clausius; Kelvin; Caratheodory)

Ergo, entropy has 3 popular formulations (Clausius; Kelvin; Caratheodory).

The statements above show that entropy and the 2nd law are the same thing. The statements by mathematician Granville Sewell I posted earlier show that the idea of an “open system” doesn’t allow you to posit mathematical miracles to explain difficult things like life, the fine-tuning of physical constants, etc. An “open system” does not mean that entropy reverses itself locally; it does not mean that the laws of probability in the local system are temporarily suspended; it means that additional order can pass a barrier and increase the order in one part of the system, but it had to already exist in the other part of the system whence it came. The famous hypothetical called “Maxwell’s Demon” shows this clearly. You can reread it in the Wikipedia article you linked to.

I’ll continue to believe what professional mathematicians and physicists tell me about entropy and the 2nd law of thermodynamics; you can continue to believe what the little voice emanating from the lint in your navel tells you. I certainly don’t want to disabuse you of your naive belief in mathematical-miracles-on-demand. The universe is big enough to accommodate my believing in the truth and your believing in fairy tales.

Variety is the spice of life, and besides: I really am a perfectly nice guy.

vikingvista December 2, 2011 at 11:51 pm

DU: You might want to actually read the posts you link to. The Wikipedia article proves my case, not yours.

False again. You said “the 2nd law cannot be stated any other way”. But it is stated in many other ways, without reference to entropy. OF COURSE all such statements are equivalent, because they are all referring to the same thing. But they ARE stated in very different ways. In particular, I think you’d find less room for theology in the statistical mechanical description (though theologians never fail to find it somewhere).

DU: The statements above show that entropy and the 2nd law are the same thing.

Wrong again. Apparently you didn’t even read them. The 2nd law of thermodynamics says a particular thing ABOUT entropy (of a closed system). It is not the same as entropy.

DU: the idea of an “open system” doesn’t allow you to posit mathematical miracles to explain difficult things like life,

That’s exactly right. Just because you see life, or order, or patterns, doesn’t allow you to posit that some conscious entity created them. Correct. Why do you disagree with that statement?

DU: An “open system” does not mean that entropy reverses itself locally;

That’s not what an “open system” “means” (you and your weird definitions), but decreases in entropy are what is observed ALL THE TIME in nature. As a chemical engineer I not only observed them daily (as we all do), but quantified them almost as often.

DU: It does not mean that the laws of probability in the local system are temporarily suspended;

Whoever said it did?

DU: it means that additional order can pass a barrier and increase the order in one part of the system, but it had to already exist in the other part of the system whence it came.

A local decrease in entropy must correspond to an increase at least as large in the closed system as a whole. THAT is the 2nd law. The second law says nothing about a “barrier”, or “whence it came”. Entropy is, after all, a state variable.

DU: The famous hypothetical called “Maxwell’s Demon” shows this clearly.

I’m not sure what you are implying that Maxwell’s Demon showed. A “barrier” is an integral part of the hypothetical, not of the 2nd law. It was just a thought experiment to show how the second law might be violated. But as a natural law, it is a matter of fact that nobody has ever observed it violated. The demon therefore has to be interpreted in the context of the presumed law–meaning we assume the entropy of the demon increases by at least the amount he’s decreasing entropy in the box–which has always been the case in any realized experiment. It’s not really a very useful thought experiment, anyway.

And of course, for Maxwell, only a statistical interpretation was intended. It was not meant to imply that only a clever entity can decrease local entropy–an observation that is common and mundane, without any evidence of millions of little demons running around causing it.

DU: I’ll continue to believe what professional mathematicians and physicists tell me about entropy and the 2nd law of thermodynamics;

I’ll give you credit for at least admitting that you are unable to understand these things for yourself, and therefore must take the word of authorities. But you might at least try not to misrepresent the words that you are taking on faith. Lack of understanding is disabling, but I assure you, you’ve got it all wrong in any classroom in the world. These aren’t my esoteric notions.

DU: your naive belief in mathematical-miracles-on-demand.

Huh? But I said that I *didn’t* assume the miracle of a complex intelligent mover, or magical random-to-nonrandom functions, but merely take observations as they are. Are you sure you are not confusing me with you?

SaulOhio November 30, 2011 at 6:39 pm

Have a look at the link Brian posted. Abiogenesis would be a process similar to that.

WhiskeyJim November 30, 2011 at 5:47 pm

I believe the same kind of reasoning suggests the idea of random selection as espoused by Darwin is also impossible.

No, I am not a creationist.

But I do believe complex systems are adaptive and self-organizing and therefore find a way to direct their development.

Notice hierarchy destroys complexity, which destroys adaptability, emergence and therefore innovation. Hey, that is what government does.

SaulOhio November 30, 2011 at 6:40 pm

Darwin did not espouse random selection. It’s called NATURAL selection for a reason.

WhiskeyJim December 1, 2011 at 12:15 pm

And it is random.

vikingvista December 1, 2011 at 2:53 pm

Random is the inability of a thinking mind to reliably identify a pattern. But both you and I can identify countless patterns even in those parts of the universe untouched by man. Nonrandomness is at least as common as randomness. Natural selection is about the influence of those nonrandom forces. It is about how the outcome of a series of unrelated or perhaps even countervailing events (such as random mutation) is shaped by them.

Ubiquitous December 2, 2011 at 11:35 am

Random is the inability of a thinking mind to reliably identify a pattern.

Not so. The inability of a thinking mind reliably to identify a pattern is called “lack of imagination”; in extreme cases, “plain stupidity.”

The word “pattern” does not necessarily imply non-randomness; there can be random patterns and there can be non-random patterns. So the concept “pattern” doesn’t help us define the concept “random.”

“Random”, as used in the sense of a “random sequence of characters”, refers to the fact that to describe the sequence in an algorithm, one must reproduce the sequence in full. Conversely, a non-random sequence of numbers can be described with fewer characters in an algorithm by means of compressing it.

Example:

A non-random pattern, or sequence of numbers:

2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113

This sequence comprises 119 characters. This is a non-random pattern because it can be compressed using fewer characters into an algorithmic step such as:

List first 30 primes

This uses 20 characters. (A programmer could probably compress this still further.)

Since the original sequence was compressible into the algorithm, it’s said to be non-random.

Conversely, a sequence such as this:

70, 82, 84, 38, 56, 69, 64, 74, 66, 1, 30, 88, 2, 37, 35, 53, 49, 62, 63, 41, 86, 98, 48, 8, 26

is a random pattern. It uses 95 characters and cannot be compressed into an algorithmic instruction of shorter length that could recreate the original string. To describe the character string, one must rewrite it in full, using all 95 characters.

If a character string is compressible into an algorithm of fewer characters, it’s non-random. If the string is incompressible and has to be reproduced in full to describe it, it’s random.

Algorithmic compressibility is also used to distinguish orders of complexity. If a sequence is highly compressible into a short algorithm, it’s simple; if it cannot be compressed and must be reproduced in full to describe it, it’s complex.
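A rough way to see compressibility in action: a Python sketch using zlib as a crude stand-in for algorithmic compressibility. True Kolmogorov complexity is uncomputable, so a general-purpose compressor is only an approximation of the idea:

    import random
    import string
    import zlib

    repetitive = "ab" * 500   # generated by a very short "program": "ab" * 500
    rng = random.Random(42)
    jumbled = "".join(rng.choice(string.ascii_lowercase) for _ in range(1000))

    for name, s in (("repetitive", repetitive), ("jumbled", jumbled)):
        packed = zlib.compress(s.encode(), 9)
        print(f"{name}: {len(s)} chars -> {len(packed)} bytes compressed")
    # The repetitive string shrinks to a handful of bytes; the jumbled one
    # barely shrinks. (Note: "jumbled" itself came from a short, seeded,
    # deterministic program, so the compressor's verdict is only a proxy.)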

vikingvista December 2, 2011 at 2:49 pm

ME: Random is the inability of a thinking mind to reliably identify a pattern.
DU: Not so. The inability of a thinking mind reliably to identify a pattern is called “lack of imagination”; in extreme cases, “plain stupidity.”

No amount of imagination has allowed anyone as of yet to identify a reliable pattern in the sequence of registered electron locations on a screen in a double slit experiment. In fact, it has been so removed from anyone’s imagination that the quantum theory explaining it supposes that it is random and that random is a fundamental feature of nature.

And there are perfectly usable, wholly deterministic algorithms for producing a completely repeatable series of numbers that you and just about anybody would call “random”–that is, you would be unable to decipher the pattern produced by the algorithm.

The premises of quantum theory notwithstanding, there is nothing in nature that can reliably be called random to any certainty beyond the claim that a pattern has not yet been realized. That is, there is no test for randomness that some wholly deterministic pseudorandom number-generating algorithm cannot be devised to fool.

So I state again, for your edification, random is the inability of a thinking mind to reliably identify a pattern. That is precisely what it is. And it sure as hell doesn’t mean “highly probable”. I don’t even know where you get that.
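A minimal sketch of that claim, using the classic Numerical Recipes linear congruential generator constants: a fully deterministic rule whose output most readers would call random.

    def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
        """Wholly deterministic generator: same seed, same 'random' sequence."""
        x = seed
        for _ in range(n):
            x = (a * x + c) % m
            yield x % 100   # reduce to 0..99 for readability

    print(list(lcg(7, 10)))   # looks patternless...
    print(list(lcg(7, 10)))   # ...yet repeats exactly: nothing "random" inside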

DU: The word “pattern” does not necessarily imply non-randomness; there can be random patterns and there can be non-random patterns. So the concept “pattern” doesn’t help us define the concept “random.”

As I have already explained to you, there is no such thing as purely random. In anything perceived as random, there is always a nonrandom context. There must be, because without *some* discernible context, your consciousness cannot even function. For example, even the best pseudorandom number generator generates a random sequence according to a distribution. There is *no* discernible pattern in the event-to-event sequence, but there *is* a discernible pattern in the limiting distribution of all such events.

So, pattern is fundamental to understanding what we mean by “random”. Random cannot be understood without it.

We can use statistical mechanics to define, in a particular context, what is meant by “order”, but the theory assumes randomness, because that is how we model those observations for which we have not identified a pattern. That is, the universe contains things for which we have not been able to discern a pattern. We theorize randomness to model those things. But it is no contradiction that totally nonrandom pseudorandom algorithms are perfectly suitable for implementing most such models. That’s because it isn’t an actual objective state of nature called “random” that matters, but just that the algorithm sufficiently models our PERCEPTION of random.

This is not to say there isn’t an objective state of nature that is random. It is just that even if there were, it could never be distinguished from our mere inability to identify a nonrandom underlying mechanism. So the most we can ever say, is that:

Random is the inability of a thinking mind to reliably identify a pattern.

Ubiquitous December 1, 2011 at 7:00 pm

Darwin did not espouse random selection. Its called NATURAL selection for a reason.

Darwin did not define Natural Selection in “Origin of Species.” What he had in mind, however, was a material, non-conscious, non-teleological selection process that mimicked the conscious, teleological selection process of the various animal breeders whom he knew. He assumed that blind nature could imitate without intention what conscious human animal breeders did with intention, if only given enough time. What he also needed was a non-goal-directed source of variation in living systems that blind nature could select or not select. That was a stumbling block for classical Darwinists until they met with geneticists at an international conference in the 1940s. The synthesis of classical Darwinism and Mendelian genetics — in which gene variation was surmised to result from random environmental factors and therefore provide a source of variation — became officially known as the Synthetic Theory, unofficially as “neo-Darwinism.”

After the discovery of the structure and function of DNA, gene variation was materially reduced to copying errors in DNA, in which a base nucleotide in the copy gets substituted for a different one in the original. These are known as “point mutations” and are by theoretical necessity assumed to have occurred completely by chance. Thus, the two causal mechanisms in neo-Darwinism for evolution are claimed to be “random mutation” and “natural selection.”

vikingvista November 30, 2011 at 6:25 pm

For some reason, I can’t get the manuscript to come up. But I can say that I have myself coded the monkey experiment (albeit for considerably less than the entire works of Shakespeare) from scratch. The code is simple, as are the mathematics. Perhaps the experiment in the manuscript left out any selective pressures, which in nature are ubiquitous.

And while I agree that it is unlikely there have ever been trillions of typewriters with trillions of dedicated monkeys working for trillions of years, that does miss the point about spontaneous complex orders (as perceived by a human mind) emerging from non-conscious (“random”) phenomena, particularly if constrained by selective pressures. And if such orders were themselves further constraining, or even generative of further orders, the process would naturally accelerate.
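A toy version of that idea, in the spirit of Dawkins’s “weasel” program (a hypothetical Python sketch, not vikingvista’s actual code), shows how even crude selective pressure collapses the search from cosmological timescales to a few dozen generations:

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def mutate(parent, rate=0.05):
        # Copy the parent, occasionally miscopying a character.
        return "".join(random.choice(CHARSET) if random.random() < rate else ch
                       for ch in parent)

    def fitness(s):
        # Selective pressure: count characters matching the target.
        return sum(a == b for a, b in zip(s, TARGET))

    parent = "".join(random.choice(CHARSET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        # Each generation, keep the best of 100 mutated offspring.
        parent = max((mutate(parent) for _ in range(100)), key=fitness)
        generations += 1

    print(f"found the target in {generations} generations")
    # Pure chance would need ~27^28 (about 10^40) attempts on average.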

Steve November 30, 2011 at 8:12 pm

If you have an infinite number of monkeys, why do they need to type forever? Infinity is a whole lot more than a trillion. They’ll reproduce Shakespeare’s entire work in precisely the amount of time it takes a single monkey to make the required number of keystrokes.

vikingvista November 30, 2011 at 8:50 pm

Can’t we just genetically engineer one monkey with a trillion fingers?

Ubiquitous December 1, 2011 at 5:59 am

Can’t we just genetically engineer one monkey with a trillion fingers?

They’ve already done that, but each finger is a middle finger. The monkey’s name is “Invisible Backhand.”

vikingvista December 1, 2011 at 6:11 pm

How ironic for him. So many fingers, so few brains.

Ubiquitous December 1, 2011 at 7:04 pm

How ironic for him. So many fingers, so few brains.

I mentioned that to the genetic engineers. “We’re working on it, but we need more funding” was their reply.

Government-funded science. What do you expect?

Seth November 30, 2011 at 9:08 pm

It took the universe over 10 billion years to spin out some Shakespeare.

vikingvista December 1, 2011 at 2:55 pm

Nice.

khodge December 1, 2011 at 7:26 pm

Let’s see…10 billion divided by infinity =

vikingvista December 1, 2011 at 9:25 pm

So, you don’t think Shakespeare exists? I would’ve thought Seth’s statement was completely without controversy.

Chucklehead November 30, 2011 at 9:52 pm

The Banana Story: The price of everything.
A lady complains to the grocer that the price of bananas is too high. “It is 8 cents,” she exclaims. “The store across the street has them for 5 cents.”
“Why didn’t you buy them?” the grocer answers.
“They didn’t have any,” she replied.
“Well, if I didn’t have any, I would sell them for 2 cents.”

kyle8 November 30, 2011 at 9:53 pm

A trillion Krugmans, writing a trillion columns for the NYTimes, will still never come up with an honest or realistic economic analysis.

vikingvista November 30, 2011 at 11:19 pm

True. They’d have better luck with a trillion monkeys. Or even just 1 monkey.

Greg Webb November 30, 2011 at 11:33 pm

Spot on!

Hasdrubal December 1, 2011 at 10:36 am

We may not have a way to feed an infinite number of monkeys, but the good folks at the IETF have ensured that we do have a protocol to handle all their typing: RFC 2795, the Infinite Monkey Protocol Suite (IMPS). http://tools.ietf.org/html/rfc2795
