So technically, the probability of something happening along an infinite point is 1/infinity, or a .0(infinite number of zeros)1 ... which is mathematically the same as dividing by zero. It's fallacious in the extreme.
You are wrong. In mathematics, a **probability measure** is a **real-valued function** defined on a set of events in a **probability space** that satisfies **measure** properties such as *countable additivity*. [3] The difference between a probability measure and the more general notion of measure (which includes concepts like *area* or *volume*) is that a probability measure must assign value 1 to the entire probability space.

Probability measures are real-valued functions, which means their codomain (output) is contained in the set of real numbers. The real numbers form an Archimedean field, which means they contain no infinitesimal numbers.

This means that expressions like "1/infinity" and ".000...1" either aren't real numbers, or are precisely equal to 0.

".000...1" isn't even a decimal representation:

A decimal representation of a nonnegative real number *r* is an expression of the form

*r* = *a*_{0}.*a*_{1}*a*_{2}*a*_{3}… = *a*_{0} + *a*_{1}/10 + *a*_{2}/10^2 + *a*_{3}/10^3 + …

where *a*_{0} is a nonnegative integer, and *a*_{1}, *a*_{2}, … are integers satisfying 0 ≤ *a*_{i} ≤ 9, called the digits of the decimal representation. The sequence of digits specified may be finite, in which case any further digits *a*_{i} are assumed to be 0.

There can't be anything "after" an infinite number of 0's in a decimal representation, since the terms in the series form a sequence, and between any two terms in a sequence there are only a finite number of other terms. I can provide a rigorous proof if you like.

Alternatively, if you define 0.000...1 as the limit of {0.1, 0.01, 0.001, ...}, then 0.000...1 = 0. I can provide a rigorous proof if you like.
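
To see concretely what "the limit is 0" means here, a quick numeric sketch in Python (my own illustration; the function name is hypothetical): the terms 0.1, 0.01, 0.001, … fall below any positive tolerance you pick, which is precisely the epsilon-style definition of the limit being 0.

```python
# The sequence 0.1, 0.01, 0.001, ... is 10**-n for n = 1, 2, 3, ...
# Its limit is 0 because, for ANY epsilon > 0, the terms are eventually
# smaller than epsilon. This function finds where that happens.
def first_term_below(epsilon):
    """Return the first index n at which 10**-n drops below epsilon."""
    n = 1
    while 10.0 ** -n >= epsilon:
        n += 1
    return n

print(first_term_below(0.003))   # the terms are below 0.003 from n = 3 on
```

No term in the sequence is ever followed by a trailing "1", so the only consistent value for "0.000...1" under this definition is the limit itself, 0.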

I've specified a state space (S = [0,1], the closed unit interval, as well as its Lebesgue-measurable subsets) and a probability measure (defined by a probability density function) for what's called a *continuous uniform distribution*. This is an incredibly commonplace random variable. The method for computing a probability in this context is taking an integral of the density function, and for the event that X = 0.5 (or equivalently that X is in the set {0.5}), the probability is 0.
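
That integral computation can be sketched in plain Python (a minimal illustration with hypothetical function names, using the fact that the uniform CDF on [0, 1] is F(x) = x):

```python
# CDF of the continuous uniform distribution on [0, 1]: F(x) = x, clamped.
def uniform_cdf(x):
    if x < 0.0:
        return 0.0
    if x > 1.0:
        return 1.0
    return x

# For a continuous distribution, P(a <= X <= b) = F(b) - F(a);
# single points carry no mass, so the degenerate interval [0.5, 0.5] gets 0.
def prob_interval(a, b):
    return uniform_cdf(b) - uniform_cdf(a)

print(prob_interval(0.5, 0.5))   # P(X = 0.5) = 0.0
print(prob_interval(0.0, 1.0))   # P(0 <= X <= 1) = 1.0
```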

For more information/examples involving the continuous uniform distribution, see here, here, or here.

Here's an introductory text for mathematical statistics. Chapter 4 introduces continuous random variables.

---------- Post added at 03:52 PM ---------- Previous post was at 02:25 PM ----------

Originally Posted by

**gree0232**
Its not a claim that I have made - my claim is that I asked you to support it with a citation. You cannot challenge me to support a claim that I did not make, prideful one. A citation please.

First, you said:

Already conceded - when we start involving infinities - **and the initial example provided was not a zero probability event.**

Emphasis mine. The event in question was X=0.5, where X is a uniformly distributed continuous random variable on the closed unit interval. Your claim is that this event--that X = 0.5--is not a probability 0 event. Equivalently, you are claiming that P(X = 0.5) is not equal to 0.

All the ones you listed without support.

Quote and link.

The ones from modal logic that you apparently are just walking around with in your head? Should be no problem for you to list a citation for the equations you present.

What equations are you talking about? You mean like the argument I gave? A good introduction to modal proofs in K logics is given here, which includes this remark:

Modal logic is not the name of a single logical system; there are a number of different logical systems that make use of the signs ‘□’ and ‘◊’, each with its own set of rules.

And if so, then we can get to the meat of the puzzle here because those equations WITH citations will give us more than you are letting on.

The proofs and equations stand on their own. What do I need to cite for you to understand that P(X=1) = P(X=2) = ... = P(X=6) = 1/6 is a legitimate discrete probability distribution? I could prove it for you; would you need a citation for each line of the proof? Someone else would have needed to make this particular argument before in order for you to believe that the proof makes sense? You can't look at the logic on your own?

In short, the CLAIM I MAKE OPENLY, is that you are pulling equations off the internet without fully understanding them. In short, plagiarism. It's up to you to prove that this is not the case.

Who fully understands anything in mathematics? Not even Gauss could truthfully make such a claim.

In any case, the proofs and equations I'm writing are elementary, mathematically speaking. You'd run into this level of math around freshman or sophomore year in university.

As equations are all on the internet, shouldn't be a problem.

What kind of bizarre statement is this? Would you need a citation to understand that the polynomial (x-1591359154298357439857942859218341343)(x -e^5723957239867239602975943258943259847230985742) has two real roots? Someone else would have needed to have written this specific equation before on the internet for you to believe that? You wouldn't even accept a proof?

Again, just to make the disjunction clear, you are pulling often complex equations out of your head, and yet your argumentation is ... 1+1=3 ...

Kinda a mismatch there. We'll address that one shortly.

Back on target, you can see, the basics of modal logic are pretty easy.

| Logic | Symbols | Expressions Symbolized |
|---|---|---|
| Modal Logic | □ | It is necessary that … |
| | ◊ | It is possible that … |
| Deontic Logic | O | It is obligatory that … |
| | P | It is permitted that … |
| | F | It is forbidden that … |
| Temporal Logic | G | It will always be the case that … |
| | F | It will be the case that … |
| | H | It has always been the case that … |
| | P | It was the case that … |
| Doxastic Logic | Bx | x believes that … |

Which is how we get to know that modal logic deals primarily with ... drum roll ... computer programming.

http://plato.stanford.edu/entries/logic-modal/
It's of use to computer programming, but much of the interest in modal logic (including historically its development by philosophers like Saul Kripke) is philosophical and mathematical.

You will also note that in the citation with each equation comes a paragraph or two of explanation, and as the basis of modal logic is qualifiers in PROBABILITY, a point you cannot concede, one is left with the rather pallid reflection that you are simply cutting the equations off the internet, using the paragraph of explanation out of context, to justify a point that is not even germane to the discussion.

Modal logic's quantifiers come from logic, not probability. They're frequently compared to universal quantification ("for all") and existential quantification ("there exists"), which aren't specifically *probability* quantifiers, but rather quantifiers in general.

Your point about different subsets of logic all require INDUCTION or DEDUCTION to function, yet another point you failed to concede.

Why would I concede a claim I never made? Where did I say that there was a subset of logic that didn't require induction or deduction? Indeed, my point was quite to the contrary: that each of the very different kinds of logics had their own rules of deduction, resulting in a wide diversity of deductive systems.

Additionally, as we see above, the forms of modal logic are far more expansive than you let on, so, having now made the point, I will bash you about continuously until you concede that there is more to modal logic than the term modal ... for no apparent purpose whatsoever.

I didn't say that there *wasn't* more to modal logic than the term modal. You are strawmanning here.

And I'm quite happy to agree that there are many forms of modal logic. It only further supports my argument that there are many different kinds of logics, contrary to your claim that logic is simple.

**In short, every equation you listed without support and citation has not been properly supported.** The error is yours, Clive, and the challenge is not on me to prove that there are no citations; that is what is called prima facie - it's self-evident. Modal logic is well known and easily cited, as I demonstrated EVERY TIME I listed it with a source - which is how one avoids accusations of plagiarizing - by properly supporting their work.

I'll go through every equation I've written, right now.

Let X be a continuous random variable with a probability density function given by f:R->R where

f(x) =

1 for 0 < x < 1

0 otherwise

This function is a specific case of the general family of probability density functions for continuous uniform random variables, as you can see here:

From the same post, we have the following statement:

Then P(a <= X <= b) is given by the integral of f(x) from a to b. Thus P(X = .5) = P(.5 <= X <= .5) = the integral of f(x) from .5 to .5, which is trivially 0.

The first statement follows from the definition of the cumulative distribution function...

The cumulative distribution function of a real-valued random variable *X* is the function given by

F_X(x) = P(X <= x),

where the right-hand side represents the probability that the random variable *X* takes on a value less than or equal to *x*. The probability that *X* lies in the semi-closed interval (*a*, *b*], where *a* < *b*, is therefore

P(a < X <= b) = F_X(b) - F_X(a).

...which by the second part of the Fundamental theorem of calculus

Let *f* and *F* be real-valued functions defined on a closed interval [*a*, *b*] such that the derivative of *F* is *f*. That is, *f* and *F* are functions such that for all *x* in [*a*, *b*],

F'(x) = f(x).

...is equal to the integral of the probability density function:

P(a < X <= b) = F(b) - F(a) = the integral of f(x) from a to b.

I made the following statement here:

Technically results like this can occur because measures in general (and probability measures in particular) are only required to be countably additive, i.e. the measure of a disjoint countable union of sets is the countable sum of the measures of each set in the union. This entails that if the measure of each set is 0, then the measure of the union is forced to be 0.

However, when you have a disjoint *uncountable* union of sets, the measure of each set being 0 doesn't allow you to conclude that the measure of the union must also be 0. So e.g. P(X in [0,1]) = P(X in the union of {r} for 0 <= r <= 1) = 1, but P(X in {r}) = 0 for each r in [0,1].
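
Restating that dichotomy compactly (the same definitions as above, just in symbols):

```latex
% Countable additivity: for pairwise disjoint A_1, A_2, ...
P\left(\bigcup_{n=1}^{\infty} A_n\right) = \sum_{n=1}^{\infty} P(A_n)
% hence P(A_n) = 0 for all n forces the countable union to have probability 0.
% No analogous identity is required of an uncountable disjoint union, which is
% why P(X \in [0,1]) = 1 coexists with P(X = r) = 0 for every r \in [0,1].
```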

This follows from the definition of a measure:

- Countable additivity: for any countable collection {E_i} of pairwise disjoint measurable sets, μ(⋃_i E_i) = Σ_i μ(E_i).

...along with the fact that ⋃_{r∈R} {r} = R, which follows from the definition of union:

The most general notion is the union of an arbitrary collection of sets, sometimes called an *infinitary union*. If **M** is a set whose elements are themselves sets, then *x* is an element of the union of **M** if and only if there is at least one element *A* of **M** such that *x* is an element of *A*. In symbols: x ∈ ⋃**M** if and only if there exists A ∈ **M** with x ∈ A.

...and the fact that probability measures must return 1 for the event space itself:

*μ* must return results in the unit interval [0, 1], returning 0 for the empty set and 1 for the entire space.

...as well as the fact that the event space for a continuous uniform distribution is the set of real numbers (since its pdf is defined for all real numbers, as previously cited).

I gave the following statement here:

1/2^100 (=7.89 x 10^-31)

This statement is false as written; 7.89 x 10^-31 has been rounded to 2 decimal places, but is easily verified as being the correct 2-decimal estimation by a suitably powerful calculator.
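
That rounding is easy to check with exact arithmetic (a quick sketch in Python; `fractions.Fraction` keeps the value exact before converting):

```python
# 1 / 2**100 as an exact rational, then compared to the 2-decimal figure.
from fractions import Fraction

p = Fraction(1, 2 ** 100)
approx = float(p)                # 2**-100 is a power of two, so this is exact
print(approx)                    # about 7.8886e-31
print(round(approx * 1e31, 2))  # 7.89, matching the rounded value quoted
```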

I gave the following statement here:

Let X be the number face-up after rolling a fair 6-sided die. Then P(X=1) = P(X=2) = ... = P(X=6) = 1/6. P(X != 1) = P(X=2) + P(X=3) + ... + P(X=6) = 5/6.

This is a standard presentation of a random variable with a discrete uniform distribution with parameters a = 1, b = 6:

http://en.wikipedia.org/wiki/Uniform...%28discrete%29

The verification that P(X != 1) = P(X=2) + P(X=3) + P(X=4) + P(X=5) + P(X=6) follows from the fact that the event space is equal to S = {1,2,3,4,5,6}, and P(X in S and X not in {1}) = P(X in {1,2,3,4,5,6} - {1}) = P(X in {2,3,4,5,6}), along with the countable additivity of the probability measure and the fact that {1,2,3,4,5,6} is the (finite, and therefore countable) union of the pairwise disjoint sets {1},{2},{3},{4},{5},{6}.
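
That bookkeeping is mechanical enough to sketch in a few lines of Python (my own illustration, with exact fractions so nothing is lost to rounding):

```python
# Discrete uniform distribution on S = {1, ..., 6}: each singleton has
# probability 1/6, and P(X in E) is the sum over the disjoint singletons in E.
from fractions import Fraction

support = {1, 2, 3, 4, 5, 6}
pmf = {k: Fraction(1, 6) for k in support}

def prob(event):
    """P(X in event) by finite additivity over pairwise disjoint singletons."""
    return sum(pmf[k] for k in event & support)

print(prob({2, 3, 4, 5, 6}))   # P(X != 1) = 5/6
print(prob(support))           # P(X in S) = 1
```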

I gave this statement here:

(1) Necessarily (p & q) [premise]

(2) Necessarily, [box proof]

(2a) p & q ["Necessarily" in]

(2b) Therefore p [Conjunction elimination]

(3) Therefore, Necessarily p ["Necessarily" out]

(4) Necessarily,

(4a) p & q ["Necessarily" in]

(4b) Therefore q [Conjunction elimination]

(5) Therefore, Necessarily q ["Necessarily" out]

(6) Therefore, Necessarily p & Necessarily q. [&-introduction]

This establishes **Necessarily (p & q) -> [(Necessarily p) & (Necessarily q)]**

(1') Necessarily p & Necessarily q [premise]

(2') Necessarily p [conjunction elimination]

(3') Necessarily q [conjunction elimination]

(4') Necessarily, [box proof]

(4'a) p ["Necessarily" in]

(4'b) q ["Necessarily" in]

(4'c) p&q [&-introduction]

(5') Necessarily (p&q) ["Necessarily" out]

This establishes **Necessarily p & Necessarily q -> Necessarily (p&q)**

Combined with the previous result, we have:

**Necessarily p & Necessarily q <=> Necessarily (p&q)**

This proof follows the system described here.
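
The equivalence the proof establishes is also semantically valid, and that can be sanity-checked mechanically. Below is a small brute-force sketch in Python (my own illustration, not from the thread): it enumerates every accessibility relation and valuation over two worlds and confirms □(p & q) and □p & □q agree at every world.

```python
# Evaluate the box operator over a Kripke model: box(phi) holds at w iff
# phi holds at every world accessible from w.
from itertools import product

worlds = [0, 1]

def box(phi, world, rel, val):
    return all(phi(w2, val) for (w1, w2) in rel if w1 == world)

ok = True
pairs = list(product(worlds, worlds))
for edges in product([False, True], repeat=len(pairs)):    # every relation
    rel = [pair for pair, e in zip(pairs, edges) if e]
    for pv in product([False, True], repeat=2):            # truth of p per world
        for qv in product([False, True], repeat=2):        # truth of q per world
            val = {"p": pv, "q": qv}
            for w in worlds:
                lhs = box(lambda w2, v: v["p"][w2] and v["q"][w2], w, rel, val)
                rhs = (box(lambda w2, v: v["p"][w2], w, rel, val) and
                       box(lambda w2, v: v["q"][w2], w, rel, val))
                ok = ok and (lhs == rhs)
print(ok)   # True: no 2-world countermodel exists
```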

That about sums it up, I think.

However, as mentioned above, you are reduced to 1+1=3 - and I am going to introduce you to another form of invalid logic:

**the fallacy.**
**Fallacy**: a mistaken belief, especially one based on unsound arguments.

Specifically, we call this an argument from absurdity/ignorance.

"There are lots of "proofs" that claim to prove something that is obviously not true, like 1 + 1 = 1 or 2 = 1. All of these "proofs" contain some error that most people aren't likely to notice. The most common trick is to divide an equation by zero, which is not allowed (in fact, you cannot ever divide by zero.) If a "proof" divides by zero, it can "prove" anything it wants to, including false statements.

It's important to recognize that while these "proofs" may be funny and cute,

**they always contain some error, and are therefore not real proofs**."

http://mathforum.org/dr.math/faq/faq.false.proof.html
Right, not all sequences of sentences form a valid argument. I've never claimed otherwise. None of this contradicts anything I've written.

So now we have both mathematicians and historians laughing at your version of logic.

You haven't quoted any mathematician or historian (other than perhaps yourself) who has even addressed either the logic systems I've mentioned or the claims I've made about them.

Congrats. You cannot divide by infinity or zero - as ... drum roll ... you can then fallaciously prove ANYTHING AT ALL!

I'm glad you agree. I take it that you therefore retract your previous claim that probabilities include numbers like "1/infinity", which would involve dividing by infinity.

But ... another drum roll please ... it's mathematically fallacious! (Insert roaring crowd).

#1 - maybe you should address the thesis of your opponent rather than introducing irrelevant minutia.

#2 - Support your work.

#1 - All of my points have addressed specific claims made by you.

#2 - Proofs should speak for themselves, but I'm glad to link to wikipedia articles or other introductory texts if you're not yet comfortable with the technicalities.

If the outcome is not certain - its inductive. If you have to manipulate something with a claim that its 100%, an axiom, when it clearly is not, then its inductive. An argument is only as strong as its support, and as you have no point other than minutia and pride ... what is the point?

I'm certain that the probability that a uniform discrete random variable equals an event in its support is equal to 1/n, where n is the number of elements in its support. This can be proved deductively from the definitions of the event space and the probability measure. Do you agree, therefore, that there is a deductive argument whose conclusion is a statement about probability?

Who knows, because we can't get past you demanding people accept minutia that is not only not relevant, its often wrong. Really Clive, this discussion is not about you - and I for one would like it to return to the subject - whether or not logic supports God.

Then simply retract your claims about the uniqueness of inductive reasoning, that 1/infinity is a probability, that logic is simple, that modal logic isn't deductive, etc.

1+1=2 and 3 as proof? Is laughably fallacious. But thanks for a proof that is basely insulting to the intelligence of others. I'll ring Richard Dawkins now, and let him know that God is conclusively proved so he can get his meal of crow ready - its deductive and certain. Richard will be SO glad.

It's a valid argument, since it is an example of modus ponens.

If you think it is unsound, then please identify which premise is false.
