Evolution vs Intelligent Design

Ten cubits from rim to rim = a ten-cubit diameter = a five-cubit radius

Circumference = 2πr = 2 × π × 5 ≈ 31.416 cubits

sounds close enough to me
Close does not matter; I was trying to show J-man that even if he chooses to believe the Bible, he cannot take every word of it literally, as he was doing in his second post.
 

Tangerine

Close does not matter; I was trying to show J-man that even if he chooses to believe the Bible, he cannot take every word of it literally, as he was doing in his second post.
That's a bad example, though, especially if you're going with the "literal" argument because nowhere does it say the measurements are precise.

"Oh god the Bible rounded shit you can't take it literally"

great argument
 
I'll answer this in words... if I'm perceiving this correctly, then what you're saying is that God locally destroyed and then promised not to destroy the earth with a flood... It's kind of like me saying, "I promise never to kill a person with a knife again" after I accidentally stabbed you in the foot with a knife
I'm not saying anything, that website is.

But there could have been a flood on a scale never seen since, one that inundated an entire civilization but did not cover the Earth. Indeed, it is reasonably established scientifically that there have been absolutely colossal floods, like the filling of the Black Sea, or the emptying of glacial Lake Missoula that carved out the Scablands. (I don't know of any major scientifically attested floods in the right time and place to match the Biblical story, though.)

EDIT: The meaning of the word "Earth" is the issue also. It is not clear that the original Hebrew refers to the entire planet; it may instead refer to a region, 'all that is known', 'the land', kind of thing.

But less about the Bible. Back to the original issue:
J-man said:
Cantab said:
I think those in favour of ID fall into two groups: those who are religiously motivated, and those who do not understand evolution and natural selection.
I certainly believe there are those, like me, who subscribe to ID because we understand the faults in the Evolutionary Hypothesis. I would be careful putting things into groups if I were you.
Looks to me like you're one of the first group - in favour of ID because it is in accord with your religious beliefs.
 
I hate to interject, but I somewhat agree with Ira. I don't believe that "God" created the first single-celled organism, but I really DO NOT understand the current theories about how it happened. Right now, the only thing that sounds slightly plausible to me is that other intelligent creatures contaminated Earth or something similar.
And how did these other intelligent creatures come to be? You're just running in circles.

So, would someone please explain to me in a way that I will understand how the first single-celled organism came to be? Because, I simply don't get how lightning + organic material in water = single-celled organism.
Well, it's not that simple. What you describe has only been shown to produce amino acids. As far as I can tell, abiogenesis is not yet very well understood. But in principle, imagine a large vat of chemicals. Some reactions are going to happen naturally, producing new chemicals, like amino acids. Then, some other reactions are going to happen with these new chemicals, producing other chemicals, like proteins, and so on. There are probably dozens of ways through which simple organisms can be formed through such a process, some of which are more probable than others.

What's important to think about is that just like living organisms can reproduce themselves rather consistently, there may be very simple chemicals which, when produced even once, increase the likelihood of being produced again - and through such "bootstraps", they can prosper and combine themselves into more complex organisms. This helps understand how events with extremely low probability can cascade into something like life, one step at a time.

Furthermore, I believe that the concept of "scaffolding" is very important (this is one of the things ID proponents do not grasp). Essentially, imagine that an organism "evolves" as follows: first, it is A+B. Then, a new feature, C, is evolved: A+B+C. But it so happens that B and C go pretty well together, and eventually A becomes useless, its job having been progressively taken up by C: B+C. Then we have more features: B+C+D+E, and D and E end up filling B's niche: C+D+E. In the end, A and B have disappeared from the organism, leaving only C+D+E. When studying current organisms, one might be puzzled at how a thing as complex as C+D+E could have evolved, but the answer is that C, D and E never evolved alone. They were supported by A and B, a sort of scaffold, and then they didn't need the scaffold anymore. Primordial single-celled organisms might have arisen in such a context.
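The A+B to C+D+E story can be sketched as a toy simulation. Everything here (the part names, the "coverage" map of which new part takes over which old job, and the event order) is invented purely for illustration:

```python
# Which older parts' jobs each newer part can take over (invented for this toy).
covers = {"C": {"A"}, "D": {"B"}, "E": {"B"}}

def step(organism, event):
    kind, part = event
    if kind == "add":
        return organism | {part}
    # A part may only be dropped once its job is covered by what remains:
    # that is the "scaffold" becoming removable.
    remaining = organism - {part}
    assert any(part in covers.get(p, set()) for p in remaining), part
    return remaining

org = {"A", "B"}
events = [("add", "C"), ("drop", "A"), ("add", "D"), ("add", "E"), ("drop", "B")]
for event in events:
    org = step(org, event)
print(sorted(org))  # ['C', 'D', 'E'] -- the scaffold A and B is gone
```

At no point was C+D+E built "all at once"; every intermediate organism was viable, which is the whole point of the scaffolding argument.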

To put it another way, imagine that in 2,000 years, all organic lifeforms are extinct and all that remains are sentient machines that we built and that eventually prevailed due to greater strength and/or intelligence. If these machines somehow completely forgot about their origins as they settled on some arid planet with no water or atmosphere, *they* might want to believe in God or in super-intelligent aliens "seeding" them there. The latter would be close to the truth, incidentally. But in fact they would be the continuation of humanity, which they eventually outgrew because they were better adapted. Organisms do not necessarily come from an intelligence superior to theirs - they perfect themselves through simple, mechanical processes inherent to nature. As far as I can tell, abiogenesis, evolution and natural selection are a solid framework to explain our own origins.

About the Bible: please, don't bother. To believe that the Bible is inerrant is so deeply misguided that arguing about it is next to hopeless. You can be sure that anything the Bible says can be "interpreted" in a variety of ways... except when it can't. So essentially interpretation can be used to selectively twist the meaning of the text when it goes against completely overwhelming evidence, while avoiding reinterpreting other passages. It is difficult to make people who do that understand how retarded it is.
 
Intelligent Design doesn't explain the origin of life either.

What it says is "Well, this is too complicated to have happened by itself, something intelligent must have created it." This just shifts the question to "What created the intelligent something?", instead of "what created us?".

The same is true of the earth-seeded-by-aliens explanations.

Intelligent Design (taken purely as a theory competing with Evolution) says that all the species on the planet were created by an intelligent being as they are. Evolution says that the species alive today have been formed by diverging genetic circumstances over a long period of natural selection. Intelligent Design is not the standard religious argument that there is a God; it is a description of the manner in which God created things. Similarly, evolution does not prove there is no god.

Neither of these two explain life.
 
Intelligent Design doesn't explain the origin of life either.

What it says is "Well, this is too complicated to have happened by itself, something intelligent must have created it." This just shifts the question to "What created the intelligent something?", instead of "what created us?".
Ironically, it so happens that the "intelligent designer" doesn't actually have to be intelligent.

Imagine an entity with very average intelligence (on par with the average human), but eternal, with infinite patience, and with the ability to assemble atoms and molecules however he wishes. That entity could assemble atoms and molecules randomly, then drop the result in an enclosure and watch. If the resulting "thing" he made randomly isn't what he wants, he can just get rid of it and try again. Should he keep doing this forever, eventually, after a horrifyingly long time, he'd have made plants, animals, and humans, and he could populate a planet with them. If he put the universe on "pause" while he was doing it, from our perspective, he'd have done it in an instant.

Trial and error is a valid (albeit time consuming) creation technique. No intelligence required. Heck, a machine could do it!
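As a toy illustration of that blind trial and error, here is a sketch where the target string and alphabet stand in for "the configuration of molecules he wants" (both are arbitrary placeholders):

```python
import random

# Blind trial and error: assemble a random candidate, check it against
# the goal, retry on failure. No feedback, no learning, no intelligence.
def blind_assembly(target, alphabet="abc", seed=0):
    rng = random.Random(seed)
    attempts = 0
    while True:
        attempts += 1
        candidate = "".join(rng.choice(alphabet) for _ in range(len(target)))
        if candidate == target:
            return attempts

# With 3 symbols and length 3 there are only 27 possibilities, so this
# terminates quickly. Scale the alphabet and length up and the wait
# becomes horrifyingly long, but the procedure itself never gets smarter.
print(blind_assembly("cab"))
```

The loop body is the entire "creation technique"; everything interesting is in how long you are willing to wait.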
 
Lati0s: you're trying too hard. The easiest example to disprove Biblical literalism is the following.

New American Bible:

Before reading the following passages, here's a question:

What was told to Peter? That "Jesus was Risen" or that "Jesus isn't in the Tomb"? In John's Gospel, Mary Magdalene was not told by the angels until after Peter visited the tomb. In Luke's Gospel, Mary Magdalene was told before she told Peter.

John 20
1 On the first day of the week, Mary of Magdala came to the tomb early in the morning, while it was still dark, and saw the stone removed from the tomb. 2 So she ran and went to Simon Peter and to the other disciple whom Jesus loved, and told them, "They have taken the Lord from the tomb, and we don't know where they put him." 3 So Peter and the other disciple went out and came to the tomb. 4 They both ran, but the other disciple ran faster than Peter and arrived at the tomb first; 5 he bent down and saw the burial cloths there, but did not go in. 6 When Simon Peter arrived after him, he went into the tomb and saw the burial cloths there, 7 and the cloth that had covered his head, not with the burial cloths but rolled up in a separate place. 8 Then the other disciple also went in, the one who had arrived at the tomb first, and he saw and believed. 9 For they did not yet understand the scripture that he had to rise from the dead. 10 Then the disciples returned home. 11 But Mary stayed outside the tomb weeping. And as she wept, she bent over into the tomb 12 and saw two angels in white sitting there, one at the head and one at the feet where the body of Jesus had been. 13 And they said to her, "Woman, why are you weeping?" She said to them, "They have taken my Lord, and I don't know where they laid him."
Luke 24
1 But at daybreak on the first day of the week they took the spices they had prepared and went to the tomb. 2 They found the stone rolled away from the tomb; 3 but when they entered, they did not find the body of the Lord Jesus. 4 While they were puzzling over this, behold, two men in dazzling garments appeared to them. 5 They were terrified and bowed their faces to the ground. They said to them, "Why do you seek the living one among the dead? 6 He is not here, but he has been raised. Remember what he said to you while he was still in Galilee, 7 that the Son of Man must be handed over to sinners and be crucified, and rise on the third day." 8 And they remembered his words. 9 Then they returned from the tomb and announced all these things to the eleven and to all the others. 10 The women were Mary Magdalene, Joanna, and Mary the mother of James; the others who accompanied them also told this to the apostles, 11 but their story seemed like nonsense and they did not believe them. 12 But Peter got up and ran to the tomb, bent down, and saw the burial cloths alone; then he went home amazed at what had happened.
Biblical Literalism is dangerous to your faith. So let's not have any of this "There was a creator because it's in Genesis" nonsense.

EDIT: for you Christians out there, know where your Church stands on Biblical Inerrancy. Part of the reason there are so many faiths is the varying degree of belief in Biblical Literalism between them.

Trial and error is a valid (albeit time consuming) creation technique. No intelligence required. Heck, a machine could do it!
Note: Just because a Machine can do it does not necessarily mean it is not intelligent. Machines play Chess better than any human now. 50 years ago, Chess was THE defining element of intelligence.

Anyone who has followed the history of AI will note that "Intelligence" is always defined as "that problem where computers epic phail at but humans are good at". For example: the Turing Test is still accepted because no computer has passed it yet. The "Chess" test for intelligence is no longer accepted as a true test of intelligence.

Strangely enough, Mathematical Proofs are also no longer accepted as a measurement of intelligence, at least not since part of the Four Color Theorem was proven by a computer (and, so far, can only be verified by a computer). A proof now has to be "simple" to count as intelligent, not merely correct. After all, computers can prove Theorems now.
 
It exists even today. If someone plays chess in a movie, book, or Anime (*cough* Code Geass), then they're absurdly intelligent. 50 years ago, good chess programs didn't exist. So it was acceptable to say "Oh yeah, when computers can beat me in chess, then they're more intelligent than me", you know, as an objective test.

Today, computers ARE better than everyone in chess, so you can't say that anymore.
 
Don't dwell on this issue too much. It's merely one of semantics, not of testing processes. When people say "Intelligent Design", they don't mean the same thing as when they refer to "Intelligence Quotient", or even an Intelligent Person.
 
Perhaps: but then... Evolution itself is being used successfully in the AI field. It happens to be a very efficient search algorithm in a great many cases.

If "Intelligence" can be interpreted... then I could argue that "Survival of the Fittest" (in AI / Algorithms, it would be classified as "Greedy") is "intelligent". After all, Greedy Algorithms work in a great number of cases. It may not be "very intelligent", but it is clearly intelligent enough for AI work.

Also... humans don't have the eyes of a Hawk or a Cuttlefish, or the strength of a Gorilla, and hell... humans are pretty piss poor at monogamy compared to Wolves. Just about every human trait, compared to the rest of the animal kingdom, shows that we were "designed" pretty poorly... or at least, that better "designs" exist. If anything, this is a cue for a piss-poor greedy rule like "Survival of the Fittest".
 
It exists even today. If someone plays chess in a movie, book, or Anime (*cough* Code Geass), then they're absurdly intelligent. 50 years ago, good chess programs didn't exist. So it was acceptable to say "Oh yeah, when computers can beat me in chess, then they're more intelligent than me", you know, as an objective test.
oh yeah, an appeal to anime. really strengthening your case here
 
Note: Just because a Machine can do it does not necessarily mean it is not intelligent. Machines play Chess better than any human now. 50 years ago, Chess was THE defining element of intelligence.
No, look: when I say a machine can do it, I mean five short lines of code could do it.

Anyone who has followed the history of AI will note that "Intelligence" is always defined as "that problem where computers epic phail at but humans are good at". For example: the Turing Test is still accepted because no computer has passed it yet. The "Chess" test for intelligence is no longer accepted as a true test of intelligence.
We need tests to gauge progress. If a test is passed, a more difficult one must be made. In any case, no serious researcher will define "intelligence" in such a narrow way. Intelligence is not something that can be defined precisely or measured meaningfully. Adaptivity is an intelligence trait. So is efficiency at managing resources: a lot of problems can be trivially solved in exponential time, so intelligence isn't so much the ability to solve them as the ability to solve them quickly.

In any case, there does not exist a universally agreed upon definition of intelligence (although it is generally agreed that brute force is not intelligence, because it lacks efficiency). The definition I would personally give is that the intelligence of an algorithm is a measure relative to a problem set: more or less the fraction of the problem set that the algorithm can solve, divided by the resources (time, memory, energy, etc.) it takes to solve it. Choose the problem set according to your preferences.
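That definition can be written down directly. The problem counts and resource figures below are made up for the sake of the example:

```python
# Intelligence relative to a problem set, per the definition above:
# (fraction of problems solved) / (resources spent solving them).
def intelligence(solved, total, resources):
    return (solved / total) / resources

# A clever solver gets 8 of 10 problems for 4 units of resources;
# brute force solves all 10 but burns 50 units.
clever = intelligence(8, 10, 4.0)    # 0.8 / 4  = 0.2
brute = intelligence(10, 10, 50.0)   # 1.0 / 50 = 0.02
print(clever > brute)  # True: brute force scores low despite solving everything
```

Note how the measure deliberately punishes exhaustive search: solving everything does not help if the resource bill is enormous.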
 
No, look: when I say a machine can do it, I mean five short lines of code could do it.
Not necessarily. From my experience... "Trial and Error" is formulated as a search on an arbitrary graph. Your trial tests the current node, and the "next" trial is chosen based on a heuristic function on the error.

How do you deal with cycles in this graph? If you remember every node you visit, then you'll need O(n) memory, where n is the number of nodes... which is potentially infinite. (For example: searching for a proof of a mathematical theorem has a countably infinite search space.)

There are other problems with traversing an infinite space, even with trial and error. If you go down the wrong path... you'll never find the right answer. So... you also need to figure out how to enumerate the infinite number of candidates in such a way that you'll eventually reach the answer.

We need tests to gauge progress. If a test is passed, a more difficult one must be made. In any case, no serious researcher will define "intelligence" in such a narrow way. Intelligence is not something that can be defined precisely or measured meaningfully. Adaptivity is an intelligence trait. So is efficiency at managing resources: a lot of problems can be trivially solved in exponential time, so intelligence isn't so much the ability to solve them as the ability to solve them quickly.
Well, this is getting outside of the scope of Evolution vs Intelligent Design, so I will resign this point of discussion. I accept your point.
 
Not necessarily. From my experience... "Trial and Error" is formulated as a search on an arbitrary graph. Your trial tests the current node, and the "next" trial is chosen based on a heuristic function on the error.
I specified the "heuristic", if we can even call it that. The nodes are just random every time. If we bound the size of the thing that is randomly assembled every time, shove molecules on a fine discrete grid and pick with uniform probability between all possibilities, we expect every possibility to come up at some point. We can also methodically explore the space in order. In either case, it's extremely simple, although it's clearly exponential time.
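That methodical exploration is genuinely simple to write down. Here is a sketch on a deliberately tiny grid, where the "molecules" are placeholder labels:

```python
from itertools import product

# Enumerate every way to fill a discrete grid of `cells` positions with
# one molecule (or nothing) per cell, in lexicographic order. No
# configuration is ever missed or repeated; the cost is simply
# exponential in the grid size.
def enumerate_grid(cells, molecules):
    return product(molecules + [None], repeat=cells)

configs = list(enumerate_grid(2, ["H", "O"]))
print(len(configs))  # 9: three choices per cell, two cells
```

The enumeration itself carries no cleverness at all; the entire difficulty is the exponential size of the space, exactly as stated above.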
 
I dunno enough physics at this point to answer this, so I'll ask you: is space/time discrete or continuous? If space/time is an uncountable infinity, then it is impossible to iterate through all of the possibilities with that method. (On the other hand, I can imagine it working with a countable infinity, such as one on a discrete grid)

EDIT: actually, it doesn't matter. Either way requires you to prove space/time discrete or continuous, which is non-trivial. Ultimately, those 5 lines of code will require a massive proof to show that it works out as expected. I would call that reasonably intelligent.
 
I dunno enough physics at this point to answer this, so I'll ask you: is space/time discrete or continuous? If space/time is an uncountable infinity, then it is impossible to iterate through all of the possibilities with that method.
I would lean towards discrete, but that's irrelevant. I did specify you could place the molecules on a "fine discrete grid", which we can do even if space/time is continuous (just pick a spacing). Given how noisy the universe is, a robust mechanism should be able to cope with inaccuracies of a certain magnitude and thus it does not seem productive to go for a higher precision than that.

The proof that the program would work is trivial. The algorithm can produce a human body. Assuming a good source of randomness (we can say it's a given), it will occur eventually.
 
As far as I'm aware, the entirety of existence is considered to be discretely quantised. The planck time, 10^-43 seconds, is considered to be the smallest unit of time there is. Any two events that are separated by less than that are considered to be coincident in time.

Similarly for the planck length and space, but I can't remember the index of that one off the top of my head.
 
A conservative (lower limit) estimate of the required precision would probably be the Planck length.

Edit: @MrIndigo: Not quite. The Planck scale is the point at which quantum fluctuations would produce sufficient energy densities to create miniature black holes. This probably signals the breakdown of quantum physics at these scales. As far as I know, there is no successful physical theory using quantised spacetime. Many physicists have tried, but it seems to run into problems. Indeed, while energy is quantised, that doesn't mean distance and time are. (In fact, if spacetime curvature, being gravitational energy, is quantised, then I've a feeling spacetime itself has to be continuous or you'll run into a 'misalignment' problem on a grid. But I'm not certain on this.)
 
The proof that the program would work is trivial. The algorithm can produce a human body. Assuming a good source of randomness (we can say it's a given), it will occur eventually.
Nope.

Let's say I have an algorithm for picking a number. The algorithm is as follows:

2*(Random Integer) = picked number.

If it isn't the number you guessed, then I'll try again. Even with a "sufficient amount of randomness", I'll never be able to produce an odd number with this algorithm.

You need a better proof than just "Oh, I rely on randomness to get me the correct result". Sometimes, you can randomly try all of the WRONG stuff, which is especially bad when there's an infinite number of wrong answers.
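The point is easy to check empirically with the hypothetical picker above:

```python
import random

# The "2 * (random integer)" picker described above. However good the
# randomness, every output is even, so an odd target is unreachable.
def pick_number(rng):
    return 2 * rng.randrange(0, 1000)

rng = random.Random(42)
samples = [pick_number(rng) for _ in range(10_000)]
print(any(n % 2 == 1 for n in samples))  # False: no odd number ever appears
```

Randomness only guarantees eventual success if every correct answer actually lies in the range of the generator.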

---------------------

The correct proof, which we appear to be creating, would involve showing that the Planck Length is good enough, and all that good stuff. At this point, though, I admit that I'm dismissing the form of your argument. Either way, my point is that it's not nearly as trivial as you're making it out to be.

EDIT: Case in point: how will you handle collision detection? On a sufficiently fine grid, I guess pixel-perfect can be used, but good luck fitting that within "5 lines of code".
 
Nope.

Lets say I have an algorithm for picking a number. The algorithm is as follows:

2*(Random Integer) = picked number.

If it isn't the number you guessed, then I'll try again. Even with a "sufficient amount of randomness", I'll never be able to produce an odd number with this algorithm.
If the only numbers choosable are even numbers, then the algorithm will hit the answer. This corresponds to discrete spacetime.
If we only require the numbers to match to the nearest 2, the algorithm also hits the answer. This corresponds to continuous spacetime, but finding a distance below which we can neglect the differences.

You ignore collision detection. On the microscale, either you get rid of the previous atom to place another, or you just shove 'em in the same place and let the thing blow up. On the macroscale, you again simply don't care if you hit the same thing more than once.
 
Lemme build this from the ground up then, to show you how non-trivial this is.

Assuming a good source of randomness (we can say it's a given), it will occur eventually.
You're assuming more than just a source of randomness.

A good source of randomness needs to be cleaned up by getting rid of the bias. Now, there are algorithms to clean up the bias, but the proofs of such algorithms are non-trivial, deep computer science. In particular, you assumed uniform random numbers, which need particular care to generate.

At the very least, removing the bias (so that you ensure that all random numbers are possible from your source of randomness) is a non-trivial problem (it's a solved problem, but it's certainly non-trivial).
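For the record, one classic debiasing trick is von Neumann's extractor: read the biased bits in pairs, emit the first bit of each unequal pair, and discard equal pairs. Assuming the source bits are independent, the output is exactly unbiased, whatever the source's bias. A sketch:

```python
import random

def von_neumann(bits):
    # Pairs (0,1) -> emit 0; (1,0) -> emit 1; (0,0) and (1,1) -> discard.
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# A heavily biased source: 1 with probability 0.9.
rng = random.Random(1)
biased = [1 if rng.random() < 0.9 else 0 for _ in range(100_000)]
unbiased = von_neumann(biased)
print(round(sum(unbiased) / len(unbiased), 2))  # close to 0.5
```

The extractor itself is tiny; the non-trivial part, as noted above, is proving the independence assumption holds for your physical source and accounting for the bits thrown away.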

And at this point, the code is beginning to show some inkling of "intelligence". At the very least, it would take intelligence to put together code that unbiases a random number generator for use by a device that iterates over all possible molecular configurations in search of one that shows signs of Life.

(Not that I agree with Intelligent Design, I just don't like the form of your argument)

Or hell: what about this "Trial and Error". How do you detect the error? What is the code that detects a successful trial? These are NON TRIVIAL pieces of information you're just brushing away. Certainly not something you can do in "5 lines of code".

I mean, I can see what you are trying to do.

Code:
Object o;
do{
     o = new RandomObject();
} while (!isLife(o));
return o;
What I'm trying to say is that "new RandomObject()" is non-trivial, and "isLife" is non-trivial. RandomObject() needs to be written in such a way that you KNOW it traverses all possible configurations of molecules (i.e., you need to make sure that the search will eventually finish as time goes to infinity...), and "isLife" somehow needs to... detect life.
 
Nope.

Lets say I have an algorithm for picking a number. The algorithm is as follows:

2*(Random Integer) = picked number.

If it isn't the number you guessed, then I'll try again. Even with a "sufficient amount of randomness", I'll never be able to produce an odd number with this algorithm.

You need a better proof than just "Oh, I rely on randomness to get me the correct result". Sometimes, you can randomly try all of the WRONG stuff, which is especially bad when there's an infinite number of wrong answers.
Oh, christ. Okay, forget randomness. Use lexical order. Exploring a grid isn't a difficult problem at all.

The correct proof, which we appear to be creating, would involve showing that the Planck Length is good enough, and all that good stuff.
The Planck Length is overkill. If you think this much precision is needed, you are out of your mind. The reason is simple: at every moment, random quantum noise happens everywhere in everybody's brains. Neurons fail all the time. Bodies move, they collide, they jitter, they take shocks, and so on. At room temperature, every atom moves around randomly by far more than a Planck length at every instant. If you misplace all the molecules in a human body by a billion Planck lengths (not even an electron's radius), it will not change jack shit. You just don't need that much precision at all.

At this point though, I admit that I'm dismissing the form of your argument. Either way, my point is that its not nearly as trivial as you're making it out to be.
On the contrary, it is every bit as trivial as I'm making it out to be. You are raising completely irrelevant points.

EDIT: Case in point: how will you handle collision detection? On a sufficiently fine grid, I guess pixel-perfect can be used, but good luck fitting that within "5 lines of code".
Collision detection? What does that even mean? Who gives a shit?

You're assuming more than just a source of randomness.

A good source of randomness needs to be cleaned up, by getting rid of the bias. Now there are algorithms to clean up the bias, but the proofs of such algorithms are non-trivial, and are deep computer science. Particularly, you assumed uniform random numbers, which needs particular care to generate.
I said it's a given. You have *God* powers. You can place molecules at a precision of a planck length. You just don't know how to place them to get what you want. At this point, it's not a stretch to suppose you got a stash of truly random numbers lying around as well. I mean we're clearly not working on a limited budget here.

Or hell: what about this "Trial and Error". How do you detect the error? What is the code that detects a successful trial?
Pit the creature in a maze. Give it an exam. You know what the creature should be able to do, perhaps what it should look like, it's not difficult to test for that. You've got all the time in the world, it's not like you give a shit about false negatives either.
 
A conservative (lower limit) estimate of the required precision would probably be the Planck length.

Edit: @MrIndigo: Not quite. The Planck scale is the point at which quantum fluctuations would produce sufficient energy densities to create miniature black holes. This probably signals the breakdown of quantum physics at these scales. As far as I know, there is no successful physical theory using quantised spacetime. Many physicists have tried, but it seems to run into problems. Indeed, while energy is quantised, that doesn't mean distance and time are. (In fact, if spacetime curvature, being gravitational energy, is quantised, then I've a feeling spacetime itself has to be continuous or you'll run into a 'misalignment' problem on a grid. But I'm not certain on this.)
Oh, I didn't mean to say that it was a unified theory or anything; but I think even the non-relativistic quantum people use the Planck scale as the limit (disregarding gravitational interactions and whatnot).

In fact, one of the more popular GUTs (Loop Quantum Gravity) was recently rendered extremely unlikely by experiments with Gamma Ray Bursts. It was in Nature (October).
 
