QUOTE(Ebe)
Yes yes (a - b) = 0, and hence in that line we are dividing by 0 right?
Yeah, like I'm really going to believe you knew that before you posted.
QUOTE(Ebe)
And how can 0.9999... = 1 make sense? It opens up a slippery slope. Suddenly 0.8999... = 0.9, which means 0.799.. = 0.8
It makes sense because there's no reason why it shouldn't: nothing in the definition of decimals requires each number to have exactly one decimal expansion. In fact, every terminating decimal can now also be written as a repeating decimal, so the notation is generalized for the better.
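To see that the "slippery slope" is really just consistency, here is a minimal sketch using nothing beyond the standard geometric-series formula:

\[ 0.8999\ldots = 0.8 + \sum_{n=2}^{\infty} \frac{9}{10^{n}} = 0.8 + \frac{9/100}{1 - 1/10} = 0.8 + 0.1 = 0.9 . \]

So 0.8999... = 0.9 isn't a new problem; it's the same fact about repeating 9s, shifted one decimal place.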
Regardless, this isn't basic arithmetic anyway. Decimals exist chiefly as a way to write irrational numbers; rational numbers can already be written as fractions, so basic arithmetic doesn't even need decimal notation.
QUOTE(Ebe)
This all assumes that there is eventually a smallest number and hence that would equal 1. But that admits that math and it's infinites are wrong.
No, it only shows that, once again, you have no idea what you're talking about.
First, the infinities are built up from an explicit axiom (in set theory, the axiom of infinity). Second, and this is important, if .999... = 1, their difference is 0, so there is no need for a smallest positive number equal to 1 - .999... (and in fact there is no smallest positive real number, which is proven by dividing your assumed smallest number by 2).
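For the record, that halving argument spelled out is a one-line sketch: suppose \( \varepsilon > 0 \) were the smallest positive real. Then

\[ 0 < \tfrac{\varepsilon}{2} < \varepsilon , \]

which contradicts \( \varepsilon \) being smallest. So no smallest positive real exists, and there is no "gap" for 1 - .999... to live in.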
QUOTE(Ebe)
You are saying that math is consistent, but it is consistent with the parameters that it itself defines.
You seem to have a fundamental misunderstanding of math, and another of science: you assume the real world can somehow show that proven mathematical results are wrong. Math is always done in the context of axioms that make logical sense, chosen so that they do not lead to contradictions (a self-contradictory system could prove literally anything along with its negation). Everything derived from an axiom set follows by basic logic, which assumes things like, "If this implies that, and this is true, then that must be true." This is basic, low-level reasoning.
You can't prove or disprove this with science, because math doesn't say anything about the world, and science is knowledge only about the world. Science requires a theory to make a prediction before it can be proven correct, and math simply doesn't predict anything about the world. As my teacher said about the man who told the lost traveler in a hot air balloon that he was in a hot air balloon, "That man must be a mathematician. Completely correct, and completely useless."
QUOTE(Ebe)
The fact you constantly use math in order to validate math is very frustrating.
Math is validated by basic logic. Do you have something against basic logic?
The more you write, the more ignorance you reveal.
QUOTE(Ebe)
You will only convince people who already agree with you.
On the contrary. You will only convince people who believe in a few basic truths, such as: 1) Something can't be true and false at the same time. 2) If this implies that, and this is true, then that is also true.
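In symbols, those two "basic truths" are just the law of non-contradiction and modus ponens, written here in standard propositional notation:

\[ \neg (P \land \neg P), \qquad \big( (P \rightarrow Q) \land P \big) \rightarrow Q . \]

Anyone who rejects those can't be reached by any argument, mathematical or otherwise.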
QUOTE(jcdietz)
That's correct. They're called postulates.
Wrong. It's a definition that something is "equal" to itself.
QUOTE(Ebe)
Right, I understand that you can do that with Math, I'm just stating that using a mathematical proof to show that something in math is correct only makes it consistent, not proven.
Given a theorem, what's proven is that, under the assumptions, the theorem is true. That's all that math says, and all it needs to say. Math doesn't claim the conclusion is true unconditionally (which seems to be the point you can't grasp); it only says that, under the assumptions, it's true.
What other way would you use to prove something? Science is based on a laughably weak foundation for a system of knowledge. It requires the assumption that the future will resemble the past, and it requires empirical evidence of causation, which is subject to error, unlike a correct mathematical proof.
QUOTE(Ebe)
The same with the infinite repeating decimals, they keep getting smaller, and can never, ever equal a whole, unless in fact, there is a smallest unit at some point, in which case math is once again wrong, and infinity is shown to be a crock.
God, why can't you stop being wrong?
First of all, this is a thought experiment about the real world, not about a mathematical setting. In the real world there may well be a "smallest distance", and if so, only a mathematical model that builds in that assumption applies to it.
Second, it IS consistent with math, the same way .9 + .09 + .009 + ... = .999... = 1. There is no gap between them.
Third, you haven't even taken calculus, have you? Calculus tells you exactly how to deal with things getting closer and closer to a point; see the short computation after these points.
Fourth, and probably most importantly, you're making a completely baseless assumption: that each step of the addition takes time. In fact, there is no reason why the addition should take time.
Fifth, the paradox of the paradox. If the arrow isn't shot, where is it in the next second? The second after that?
Finally, that's not the arrow paradox. The arrow paradox has to do with assuming that at a fixed time, the arrow can be considered not moving. So you didn't even get that right.
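As noted above, here is the calculus version in one line, just the standard partial-sum and limit computation:

\[ s_N = \sum_{n=1}^{N} \frac{9}{10^{n}} = 1 - 10^{-N}, \qquad \lim_{N \to \infty} s_N = 1 , \]

so .999... is defined as that limit, and the limit is exactly 1, not "getting closer and closer but never reaching" 1.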
QUOTE(Sauce)
The point I'm making is that one equaling one doesn't need to be disputed because it CAN be proven.
You can't prove a definition.