Are differences between uniformly distributed numbers uniformly distributed?


We roll a 6-sided die a large number of times.



If we calculate the difference (absolute value) between each roll and its preceding roll, are the differences expected to be uniformly distributed?



To illustrate with 10 rolls:



roll num   result   diff
   1          1       0
   2          2       1
   3          1       1
   4          3       2
   5          3       0
   6          5       2
   7          1       4
   8          6       5
   9          4       2
  10          4       0


Would the diff values be uniformly distributed?
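
For reference, the diff column can be reproduced with a couple of lines of R (a small sketch added for illustration; the 0 for roll 1 simply records that it has no preceding roll):

rolls <- c(1, 2, 1, 3, 3, 5, 1, 6, 4, 4)
diffs <- c(0, abs(diff(rolls)))  # absolute difference from the preceding roll; 0 for the first roll
cbind(roll_num = seq_along(rolls), result = rolls, diff = diffs)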










Tags: distributions, uniform

asked Aug 11 at 9:47 by HeyJude; edited Aug 11 at 16:30 by Nick Cox

Comments:

  • Plot a histogram to at least get a sense. – gunes, Aug 11 at 10:54
  • Check out the Poisson distribution. – leftaroundabout, Aug 12 at 9:42
  • This looks like homework.... – Manu H, Aug 16 at 8:16
  • @Manu H, I assure you homework days are way behind me. – HeyJude, Aug 16 at 8:23

6 Answers

Answer (36 votes) – Henry, answered Aug 11 at 9:54

No, it is not uniform.



You can count the $36$ equally likely possibilities for the absolute differences:

second   1  2  3  4  5  6
first
  1      0  1  2  3  4  5
  2      1  0  1  2  3  4
  3      2  1  0  1  2  3
  4      3  2  1  0  1  2
  5      4  3  2  1  0  1
  6      5  4  3  2  1  0


which gives a probability distribution for the absolute differences of



diff   probability
 0      6/36  =  1/6
 1     10/36  =  5/18
 2      8/36  =  2/9
 3      6/36  =  1/6
 4      4/36  =  1/9
 5      2/36  =  1/18
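
This enumeration is easy to check in R (a small sketch added for illustration; the same kind of simulation appears in a later answer):

# tabulate |first - second| over all 36 equally likely pairs of rolls
diffs <- abs(outer(1:6, 1:6, "-"))
table(diffs)        # counts 6, 10, 8, 6, 4, 2 for differences 0..5
table(diffs) / 36   # the probabilities 1/6, 5/18, 2/9, 1/6, 1/9, 1/18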





Comments:

  • @onurcanbektas The table in this answer clearly contradicts your assertion: for instance, it shows that only one of the possible differences is 5 whereas 6 of them are 0. Since all 36 possibilities are equally probable, that's non-uniform. – whuber, Aug 11 at 17:42
  • @onurcanbektas I invite you once again to contemplate the table. Since it has only two absolute differences of 5, isn't it obvious that no more than two differences can equal 5? – whuber, Aug 11 at 17:50
  • @onurcanbektas For simple differences (i.e. with signs, so integers from -5 through to +5), the distribution is a symmetric discrete triangular distribution with the mode (most likely value) at 0. For absolute differences as shown in my answer, the mode is 1. – Henry, Aug 11 at 18:39
  • Might be worth noting that the signed difference modulo 6 is uniformly distributed, though. – Federico Poloni, Aug 12 at 16:35
  • @FedericoPoloni Is this not trivially obvious? I mean I never really thought about it before reading the comment, but it is quite obvious that this simply has to be true. – Cruncher, Aug 12 at 17:37


Answer (21 votes) – whuber, answered Aug 12 at 17:05, edited Aug 13 at 13:01

Using only the most basic axioms about probabilities and real numbers, one can prove a much stronger statement:




The difference of any two independent, identically distributed nonconstant random values $X-Y$ never has a discrete uniform distribution.




(An analogous statement for continuous variables is proven at Uniform PDF of the difference of two r.v.)



The idea is that the chance $X-Y$ is an extreme value must be less than the chance that $X-Y$ is zero, because there is only one way to (say) maximize $X-Y$ whereas there are many ways to make the difference zero, because $X$ and $Y$ have the same distribution and therefore can equal each other. Here are the details.



First observe that the hypothetical two variables $X$ and $Y$ in question can each attain only a finite number $n$ of values with positive probability, because there will be at least $n$ distinct differences and a uniform distribution assigns them all equal probability. If $n$ is infinite, then so would be the number of possible differences having positive, equal probability, whence the sum of their chances would be infinite, which is impossible.



Next, since the number of differences is finite, there will be a largest among them. The largest difference can be achieved only when subtracting the smallest value of $Y$--let's call it $m$ and suppose it has probability $q = \Pr(Y=m)$--from the largest value of $X$--let's call that one $M$ with $p = \Pr(X=M).$ Because $X$ and $Y$ are independent, the chance of this difference is the product of these chances,



$$\Pr(X-Y = M - m) = \Pr(X=M)\Pr(Y=m) = pq \gt 0.\tag{*}$$



Finally, because $X$ and $Y$ have the same distribution, there are many ways their differences can produce the value $0.$ Among these ways are the cases where $X=Y=m$ and $X=Y=M.$ Because this distribution is nonconstant, $m$ differs from $M.$ That shows those two cases are disjoint events and therefore they must contribute at least an amount $p^2 + q^2$ to the chance that $X-Y$ is zero; that is,



$$\Pr(X-Y=0) \ge \Pr(X=Y=m) + \Pr(X=Y=M) = p^2 + q^2.$$



Since squares of numbers are not negative, $0 \le (p-q)^2,$ whence we deduce from $(*)$ that



$$\Pr(X-Y=M-m)=pq \le pq + (p-q)^2 = p^2 + q^2 - pq \lt p^2 + q^2 \le \Pr(X-Y=0),$$



showing the distribution of $X-Y$ is not uniform, QED.



Edit in response to a comment



A similar analysis of the absolute differences $|X-Y|$ observes that because $X$ and $Y$ have the same distribution, $m=-M.$ This requires us to study $\Pr(X-Y=|M-m|) = 2pq.$ The same algebraic technique yields almost the same result, but there is the possibility that $2pq=2pq+(p-q)^2$ and $2pq+p^2+q^2=1.$ That system of equations has the unique solution $p=q=1/2$ corresponding to a fair coin (a "two-sided die"). Apart from this exception the result for the absolute differences is the same as that for the differences, and for the same underlying reasons already given: namely, the absolute differences of two iid random variables cannot be uniformly distributed whenever there are more than two distinct differences with positive probability.



(end of edit)
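
To see the fair-coin exception concretely, here is a quick R check (added for illustration, not part of the original answer): coding the two faces as 0 and 1, the absolute difference takes the values 0 and 1 with equal probability.

faces <- c(0, 1)                           # a fair coin, i.e. a "two-sided die"
table(abs(outer(faces, faces, "-"))) / 4   # both 0 and 1 occur with probability 1/2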




Let's apply this result to the question, which asks about something a little more complex.



Model each independent roll of the die (which might be an unfair die) with a random variable $X_i,$ $i=1, 2, \ldots, n.$ The differences observed in these $n$ rolls are the numbers $\Delta X_i = X_{i+1}-X_i.$ We might wonder how uniformly distributed these $n-1$ numbers are. That's really a question about the statistical expectations: what is the expected number of $\Delta X_i$ that are equal to zero, for instance? What is the expected number of $\Delta X_i$ equal to $-1$? Etc., etc.



The problematic aspect of this question is that the $\Delta X_i$ are not independent: for instance, $\Delta X_1 = X_2-X_1$ and $\Delta X_2 = X_3 - X_2$ involve the same roll $X_2.$



However, this isn't really a difficulty. Since statistical expectation is additive and all differences have the same distribution, if we pick any possible value $k$ of the differences, the expected number of times the difference equals $k$ in the entire sequence of $n$ rolls is just $n-1$ times the expected number of times the difference equals $k$ in a single step of the process. That single-step expectation is $\Pr(\Delta X_i = k)$ (for any $i$). These expectations will be the same for all $k$ (that is, uniform) if and only if they are the same for a single $\Delta X_i.$ But we have seen that no $\Delta X_i$ has a uniform distribution, even when the die might be biased. Thus, even in this weaker sense of expected frequencies, the differences of the rolls are not uniform.
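
A direct simulation of the question's setup (a minimal R sketch added for illustration) agrees: the relative frequencies of the successive absolute differences track Henry's enumeration rather than a uniform distribution.

set.seed(1)
rolls <- sample(1:6, 1e5, replace = TRUE)
round(prop.table(table(abs(diff(rolls)))), 3)   # observed frequencies of successive |differences|
round(c(6, 10, 8, 6, 4, 2) / 36, 3)             # probabilities from the enumeration in Henry's answer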






Comments:

  • @Michael Good point: I answered the question as asked (which is about "differences"), rather than as illustrated (which clearly refers to absolute differences). The same technique applies--one just has to consider both the max and min differences. In the case where those are the only two possibilities (along with zero), we can get equality, which is where the Bernoulli$(1/2)$ result comes from (showing it's the unique such example). – whuber, Aug 12 at 19:43
  • Another answer proving a particular version of this is here. – Reinstate Monica, Aug 13 at 9:32
  • Thanks, @Ben: I had forgotten that thread. Because it's a better reference, I now link directly to it in this answer. – whuber, Aug 13 at 13:02


Answer (12 votes)

On an intuitive level, a random event can only be uniformly distributed if all of its outcomes are equally likely.



Is that so for the random event in question -- absolute difference between two dice rolls?



It suffices in this case to look at the extremes -- what are the biggest and smallest values this difference could take?



Obviously 0 is the smallest (we're looking at absolute differences and the rolls can be the same), and 5 is the biggest (6 vs 1).



We can show the event is non-uniform by showing that 0 is more (or less) likely to occur than 5.



At a glance, there are only two ways for 5 to occur -- if the first die is 6 and the second 1, or vice versa. How many ways can 0 occur?
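
Counting those cases explicitly (a quick R check added for illustration):

d <- abs(outer(1:6, 1:6, "-"))  # all 36 equally likely outcomes
sum(d == 5)                     # 2 ways to get a difference of 5
sum(d == 0)                     # 6 ways to get a difference of 0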






Comments:

  • +1 I think this gets to the heart of the matter. I have posted a generalization of the question that ultimately relies on the same observation. – whuber, Aug 12 at 18:34


Answer (5 votes)

As presented by Henry, differences of uniformly distributed random variables are not uniformly distributed.



To illustrate this with simulated data, we can use a very simple R script:



barplot(table(sample(x=1:6, size=10000, replace=T)))


[Bar plot: the 10,000 simulated rolls are spread roughly evenly over the six faces]



We see that this indeed produces a uniform distribution. Let's now have a look at the distribution of the absolute differences of two random samples from this distribution.



barplot(table(abs(sample(x=1:6, size=10000, replace=T) - sample(x=1:6, size=10000, replace=T))))


[Bar plot: the absolute differences are clearly not uniform; 1 is the most common value and larger differences become increasingly rare]
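
To compare the simulated frequencies with the exact probabilities from Henry's answer, the script could be extended as follows (an illustrative addition, not part of the original answer):

x <- sample(x = 1:6, size = 10000, replace = TRUE)
y <- sample(x = 1:6, size = 10000, replace = TRUE)
round(prop.table(table(abs(x - y))), 3)   # simulated relative frequencies of the differences 0..5
round(c(6, 10, 8, 6, 4, 2) / 36, 3)       # exact probabilities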






Comments:

  • Why does this have anything to do with the CLT, which concerns the asymptotic distribution of means of large numbers of iid values? – whuber, Aug 11 at 17:43
  • I like the connection you originally made with CLT. Let $n$ be the number of samples to be added (or subtracted) from the original uniform distribution. CLT implies that for large $n$ the distribution will tend toward normal. This in turn implies that the distribution cannot remain uniform for any $n>1$, such as $n=2$ which is what OP is asking. (If this isn't self-explanatory, consider that if the sum were uniformly distributed when $n=2$, reindexing would imply that it is also uniform when $n=4$, etc, including for large $n$.) – krubo, Aug 11 at 19:08
  • @Krubo The original question asks about the distribution of differences between successive rolls of a die. The CLT has nothing to say about that. Indeed, no matter how many times the die is rolled, the distribution of those differences will not approach normality. – whuber, Aug 12 at 12:23
  • Does this distribution tend to uniform as the number of die faces tends to infinity? Not sure how to go about showing that, but intuitively it feels like it heads in that direction, but I don't know if it gets asymptotically "blocked" somewhere before flattening enough. – Cruncher, Aug 12 at 17:44
  • @Cruncher you can easily change the number of die faces in the R code. The more faces there are, the more apparent the staircase nature of the distribution becomes. '1' is always the peak of that stair and with larger differences the probabilities approach zero. Additionally, a difference of '0' is distinctly rarer than '1' (at least if the die's smallest value is '1'). – LuckyPal, Aug 13 at 6:43


Answer (2 votes)

Others have worked out the calculations; I will give you an answer that seems more intuitive to me. You want to study the sum of two uniform r.v.s, $Z = X + (-Y)$; the overall distribution is the (discrete) convolution product:



$$ P(Z=z) = \sum_{k=-\infty}^{\infty} P(X=k) \, P(-Y = z-k) $$



This sum is rather intuitive: the probability of getting $z$ is the sum of the probabilities of getting some value (noted $k$ here) with $X$ and the complement $z-k$ with $-Y$.



From signal processing, we know how the convolution product behaves:



  • The convolution product of two uniform functions (two rectangles) gives a triangle. This is illustrated by Wikipedia for continuous functions:

    [Animation from Wikipedia: the convolution of two rectangular pulses is a triangular function]



  • You can understand what happens here: as $z$ moves (the vertical dotted line), the overlap between the two rectangles first grows and then shrinks, which corresponds to the probability of getting $z$.


  • More generally, we know that the only functions that are stable under convolution are those of the Gaussian family, i.e. only Gaussian distributions are stable under addition (or more generally, linear combination). This also means that you don't get a uniform distribution when combining uniform distributions.


As to why we get those results, the answer lies in the Fourier decomposition of those functions: the Fourier transform of a convolution product is the plain product of the Fourier transforms of the individual functions. This gives a direct link between the Fourier coefficients of the rectangle and triangle functions.
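
As a small numeric illustration of this discrete convolution (added here as a sketch, assuming a fair six-sided die):

# P(Z = z) for Z = X - Y, obtained by summing P(X = a) P(Y = b) over all pairs with a - b = z
p <- rep(1/6, 6)                                        # uniform pmf on 1..6
pmf_Z <- tapply(outer(p, p), outer(1:6, 1:6, "-"), sum)
round(pmf_Z, 4)                                         # triangular shape, peaking at z = 0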






Comments:

  • Please check the validity of your claims and the logic of your answer. The question isn't whether the convolution of two uniform distributions is uniform: it's whether the convolution of some distribution and its reversal can be uniform. And there are far more distributional families than the Gaussian that are stable under convolution (modulo standardization, of course): see en.wikipedia.org/wiki/Stable_distribution – whuber, Aug 13 at 17:59
  • You are right about stable distributions. For the question, I am pretty sure this is about the difference of two random values with uniform distribution (as indicated by the title). The question whether the convolution of some distribution and its reversal can be uniform is larger than what is asked here. – lcrmorin, Aug 22 at 12:25


Answer (1 vote)

If $x$ and $y$ are two consecutive dice rolls, you can visualize $|x-y| = k$ (for $k = 0, 1, 2, 3, 4, 5$) as follows, where each color corresponds to a different value of $k$:



[Figure: the 6 x 6 grid of (x, y) outcomes, colored by the value of |x - y|]



As you can easily see, the number of points for each color is not the same; therefore, the differences are not uniformly distributed.
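
A figure of this kind can be recreated with a few lines of R (a sketch added for illustration, not the original image):

d <- abs(outer(1:6, 1:6, "-"))  # |x - y| for every cell of the 6 x 6 grid
image(1:6, 1:6, d, col = terrain.colors(6), xlab = "first roll", ylab = "second roll")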






share|cite|improve this answer










$endgroup$
















    Your Answer








    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "65"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );














    draft saved

    draft discarded
















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f421676%2fare-differences-between-uniformly-distributed-numbers-uniformly-distributed%23new-answer', 'question_page');

    );

    Post as a guest















    Required, but never shown


























    6 Answers
    6






    active

    oldest

    votes








    6 Answers
    6






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    36
















    $begingroup$

    No it is not uniform



    You can count the $36$ equally likely possibilities for the absolute differences



     second 1 2 3 4 5 6
    first
    1 0 1 2 3 4 5
    2 1 0 1 2 3 4
    3 2 1 0 1 2 3
    4 3 2 1 0 1 2
    5 4 3 2 1 0 1
    6 5 4 3 2 1 0


    which gives a probability distribution for the absolute differences of



    0 6/36 1/6
    1 10/36 5/18
    2 8/36 2/9
    3 6/36 1/6
    4 4/36 1/9
    5 2/36 1/18





    share|cite|improve this answer










    $endgroup$










    • 27




      $begingroup$
      @onurcanbektas The table in this answer clearly contradicts your assertion: for instance, it shows that only one of the possible differences is 5 whereas 6 of them are 0. Since all 36 possibilities are equally probable, that's non-uniform.
      $endgroup$
      – whuber
      Aug 11 at 17:42






    • 13




      $begingroup$
      @onurcanbektas I invite you once again to contemplate the table. Since it has only two absolute differences of 5, isn't it obvious that no more than two differences can equal 5?
      $endgroup$
      – whuber
      Aug 11 at 17:50







    • 14




      $begingroup$
      @onurcanbektas For simple differences (i.e. with signs, so integers from -5 though to +5), the distribution is a symmetric discrete triangular distribution with the mode (most likely value) at 0. For absolute differences as shown in my answer, the mode is 1.
      $endgroup$
      – Henry
      Aug 11 at 18:39







    • 2




      $begingroup$
      Might be worth noting that the signed difference modulo 6 is uniformly distributed, though.
      $endgroup$
      – Federico Poloni
      Aug 12 at 16:35






    • 2




      $begingroup$
      @FedericoPoloni Is this not trivially obvious? I mean I never really though about it, before reading the comment, but it is quite obvious that this simply has to be true
      $endgroup$
      – Cruncher
      Aug 12 at 17:37















    36
















    $begingroup$

    No it is not uniform



    You can count the $36$ equally likely possibilities for the absolute differences



     second 1 2 3 4 5 6
    first
    1 0 1 2 3 4 5
    2 1 0 1 2 3 4
    3 2 1 0 1 2 3
    4 3 2 1 0 1 2
    5 4 3 2 1 0 1
    6 5 4 3 2 1 0


    which gives a probability distribution for the absolute differences of



    0 6/36 1/6
    1 10/36 5/18
    2 8/36 2/9
    3 6/36 1/6
    4 4/36 1/9
    5 2/36 1/18





    share|cite|improve this answer










    $endgroup$










    • 27




      $begingroup$
      @onurcanbektas The table in this answer clearly contradicts your assertion: for instance, it shows that only one of the possible differences is 5 whereas 6 of them are 0. Since all 36 possibilities are equally probable, that's non-uniform.
      $endgroup$
      – whuber
      Aug 11 at 17:42






    • 13




      $begingroup$
      @onurcanbektas I invite you once again to contemplate the table. Since it has only two absolute differences of 5, isn't it obvious that no more than two differences can equal 5?
      $endgroup$
      – whuber
      Aug 11 at 17:50







    • 14




      $begingroup$
      @onurcanbektas For simple differences (i.e. with signs, so integers from -5 though to +5), the distribution is a symmetric discrete triangular distribution with the mode (most likely value) at 0. For absolute differences as shown in my answer, the mode is 1.
      $endgroup$
      – Henry
      Aug 11 at 18:39







    • 2




      $begingroup$
      Might be worth noting that the signed difference modulo 6 is uniformly distributed, though.
      $endgroup$
      – Federico Poloni
      Aug 12 at 16:35






    • 2




      $begingroup$
      @FedericoPoloni Is this not trivially obvious? I mean I never really though about it, before reading the comment, but it is quite obvious that this simply has to be true
      $endgroup$
      – Cruncher
      Aug 12 at 17:37













    36














    36










    36







    $begingroup$

    No it is not uniform



    You can count the $36$ equally likely possibilities for the absolute differences



     second 1 2 3 4 5 6
    first
    1 0 1 2 3 4 5
    2 1 0 1 2 3 4
    3 2 1 0 1 2 3
    4 3 2 1 0 1 2
    5 4 3 2 1 0 1
    6 5 4 3 2 1 0


    which gives a probability distribution for the absolute differences of



    0 6/36 1/6
    1 10/36 5/18
    2 8/36 2/9
    3 6/36 1/6
    4 4/36 1/9
    5 2/36 1/18





    share|cite|improve this answer










    $endgroup$



    No it is not uniform



    You can count the $36$ equally likely possibilities for the absolute differences



     second 1 2 3 4 5 6
    first
    1 0 1 2 3 4 5
    2 1 0 1 2 3 4
    3 2 1 0 1 2 3
    4 3 2 1 0 1 2
    5 4 3 2 1 0 1
    6 5 4 3 2 1 0


    which gives a probability distribution for the absolute differences of



    0 6/36 1/6
    1 10/36 5/18
    2 8/36 2/9
    3 6/36 1/6
    4 4/36 1/9
    5 2/36 1/18






    share|cite|improve this answer













    share|cite|improve this answer




    share|cite|improve this answer










    answered Aug 11 at 9:54









    HenryHenry

    21.1k1 gold badge42 silver badges73 bronze badges




    21.1k1 gold badge42 silver badges73 bronze badges










    • 27




      $begingroup$
      @onurcanbektas The table in this answer clearly contradicts your assertion: for instance, it shows that only one of the possible differences is 5 whereas 6 of them are 0. Since all 36 possibilities are equally probable, that's non-uniform.
      $endgroup$
      – whuber
      Aug 11 at 17:42






    • 13




      $begingroup$
      @onurcanbektas I invite you once again to contemplate the table. Since it has only two absolute differences of 5, isn't it obvious that no more than two differences can equal 5?
      $endgroup$
      – whuber
      Aug 11 at 17:50







    • 14




      $begingroup$
      @onurcanbektas For simple differences (i.e. with signs, so integers from -5 though to +5), the distribution is a symmetric discrete triangular distribution with the mode (most likely value) at 0. For absolute differences as shown in my answer, the mode is 1.
      $endgroup$
      – Henry
      Aug 11 at 18:39







    • 2




      $begingroup$
      Might be worth noting that the signed difference modulo 6 is uniformly distributed, though.
      $endgroup$
      – Federico Poloni
      Aug 12 at 16:35






    • 2




      $begingroup$
      @FedericoPoloni Is this not trivially obvious? I mean I never really though about it, before reading the comment, but it is quite obvious that this simply has to be true
      $endgroup$
      – Cruncher
      Aug 12 at 17:37












    • 27




      $begingroup$
      @onurcanbektas The table in this answer clearly contradicts your assertion: for instance, it shows that only one of the possible differences is 5 whereas 6 of them are 0. Since all 36 possibilities are equally probable, that's non-uniform.
      $endgroup$
      – whuber
      Aug 11 at 17:42






    • 13




      $begingroup$
      @onurcanbektas I invite you once again to contemplate the table. Since it has only two absolute differences of 5, isn't it obvious that no more than two differences can equal 5?
      $endgroup$
      – whuber
      Aug 11 at 17:50







    • 14




      $begingroup$
      @onurcanbektas For simple differences (i.e. with signs, so integers from -5 though to +5), the distribution is a symmetric discrete triangular distribution with the mode (most likely value) at 0. For absolute differences as shown in my answer, the mode is 1.
      $endgroup$
      – Henry
      Aug 11 at 18:39







    • 2




      $begingroup$
      Might be worth noting that the signed difference modulo 6 is uniformly distributed, though.
      $endgroup$
      – Federico Poloni
      Aug 12 at 16:35






    • 2




      $begingroup$
      @FedericoPoloni Is this not trivially obvious? I mean I never really though about it, before reading the comment, but it is quite obvious that this simply has to be true
      $endgroup$
      – Cruncher
      Aug 12 at 17:37







    27




    27




    $begingroup$
    @onurcanbektas The table in this answer clearly contradicts your assertion: for instance, it shows that only one of the possible differences is 5 whereas 6 of them are 0. Since all 36 possibilities are equally probable, that's non-uniform.
    $endgroup$
    – whuber
    Aug 11 at 17:42




    $begingroup$
    @onurcanbektas The table in this answer clearly contradicts your assertion: for instance, it shows that only one of the possible differences is 5 whereas 6 of them are 0. Since all 36 possibilities are equally probable, that's non-uniform.
    $endgroup$
    – whuber
    Aug 11 at 17:42




    13




    13




    $begingroup$
    @onurcanbektas I invite you once again to contemplate the table. Since it has only two absolute differences of 5, isn't it obvious that no more than two differences can equal 5?
    $endgroup$
    – whuber
    Aug 11 at 17:50





    $begingroup$
    @onurcanbektas I invite you once again to contemplate the table. Since it has only two absolute differences of 5, isn't it obvious that no more than two differences can equal 5?
    $endgroup$
    – whuber
    Aug 11 at 17:50





    14




    14




    $begingroup$
    @onurcanbektas For simple differences (i.e. with signs, so integers from -5 though to +5), the distribution is a symmetric discrete triangular distribution with the mode (most likely value) at 0. For absolute differences as shown in my answer, the mode is 1.
    $endgroup$
    – Henry
    Aug 11 at 18:39





    $begingroup$
    @onurcanbektas For simple differences (i.e. with signs, so integers from -5 though to +5), the distribution is a symmetric discrete triangular distribution with the mode (most likely value) at 0. For absolute differences as shown in my answer, the mode is 1.
    $endgroup$
    – Henry
    Aug 11 at 18:39





    2




    2




    $begingroup$
    Might be worth noting that the signed difference modulo 6 is uniformly distributed, though.
    $endgroup$
    – Federico Poloni
    Aug 12 at 16:35




    $begingroup$
    Might be worth noting that the signed difference modulo 6 is uniformly distributed, though.
    $endgroup$
    – Federico Poloni
    Aug 12 at 16:35




    2




    2




    $begingroup$
    @FedericoPoloni Is this not trivially obvious? I mean I never really though about it, before reading the comment, but it is quite obvious that this simply has to be true
    $endgroup$
    – Cruncher
    Aug 12 at 17:37




    $begingroup$
    @FedericoPoloni Is this not trivially obvious? I mean I never really though about it, before reading the comment, but it is quite obvious that this simply has to be true
    $endgroup$
    – Cruncher
    Aug 12 at 17:37













    21
















    $begingroup$

    Using only the most basic axioms about probabilities and real numbers, one can prove a much stronger statement:




    The difference of any two independent, identically distributed nonconstant random values $X-Y$ never has a discrete uniform distribution.




    (An analogous statement for continuous variables is proven at Uniform PDF of the difference of two r.v.)



    The idea is that the chance $X-Y$ is an extreme value must be less than the chance that $X-Y$ is zero, because there is only one way to (say) maximize $X-Y$ whereas there are many ways to make the difference zero, because $X$ and $Y$ have the same distribution and therefore can equal each other. Here are the details.



    First observe that the hypothetical two variables $X$ and $Y$ in question can each attain only a finite number $n$ of values with positive probability, because there will be at least $n$ distinct differences and a uniform distribution assigns them all equal probability. If $n$ is infinite, then so would be the number of possible differences having positive, equal probability, whence the sum of their chances would be infinite, which is impossible.



    Next, since the number of differences is finite, there will be a largest among them. The largest difference can be achieved only when subtracting the smallest value of $Y$--let's call it $m$ and suppose it has probability $q = Pr(Y=m)$--from the largest value of $X$--let's call that that one $M$ with $p = Pr(X=M).$ Because $X$ and $Y$ are independent, the chance of this difference is the product of these chances,



    $$Pr(X-Y = M - m) = Pr(X=M)Pr(Y=m) = pq gt 0.tag*$$



    Finally, because $X$ and $Y$ have the same distribution, there are many ways their differences can produce the value $0.$ Among these ways are the cases where $X=Y=m$ and $X=Y=M.$ Because this distribution is nonconstant, $m$ differs from $M.$ That shows those two cases are disjoint events and therefore they must contribute at least an amount $p^2 + q^2$ to the chance that $X-Y$ is zero; that is,



    $$Pr(X-Y=0) ge Pr(X=Y=m) + Pr(X=Y=M) = p^2 + q^2.$$



    Since squares of numbers are not negative, $0 le (p-q)^2,$ whence we deduce from $(*)$ that



    $$Pr(X-Y=M-m)=pq le pq + (p-q)^2 = p^2 + q^2 - pq lt p^2 + q^2 le Pr(X-Y=0),$$



    showing the distribution of $X-Y$ is not uniform, QED.



    Edit in response to a comment



    A similar analysis of the absolute differences $|X-Y|$ observes that because $X$ and $Y$ have the same distribution, $m=-M.$ This requires us to study $Pr(X-Y=|M-m|) = 2pq.$ The same algebraic technique yields almost the same result, but there is the possibility that $2pq=2pq+(p-q)^2$ and $2pq+p^2+q^2=1.$ That system of equations has the unique solution $p=q=1/2$ corresponding to a fair coin (a "two-sided die"). Apart from this exception the result for the absolute differences is the same as that for the differences, and for the same underlying reasons already given: namely, the absolute differences of two iid random variables cannot be uniformly distributed whenever there are more than two distinct differences with positive probability.



    (end of edit)




    Let's apply this result to the question, which asks about something a little more complex.



    Model each independent roll of the die (which might be an unfair die) with a random variable $X_i,$ $i=1, 2, ldots, n.$ The differences observed in these $n$ rolls are the numbers $Delta X_i = X_i+1-X_i.$ We might wonder how uniformly distributed these $n-1$ numbers are. That's really a question about the statistical expectations: what is the expected number of $Delta X_i$ that are equal to zero, for instance? What is the expected number of $Delta X_i$ equal to $-1$? Etc., etc.



    The problematic aspect of this question is that the $Delta X_i$ are not independent: for instance, $Delta X_1 = X_2-X_1$ and $Delta X_2 = X_3 - X_2$ involve the same roll $X_2.$



    However, this isn't really a difficulty. Since statistical expectation is additive and all differences have the same distribution, if we pick any possible value $k$ of the differences, the expected number of times the difference equals $k$ in the entire sequence of $n$ rolls is just $n-1$ times the expected number of times the difference equals $k$ in a single step of the process. That single-step expectation is $Pr(Delta X_i = k)$ (for any $i$). These expectations will be the same for all $k$ (that is, uniform) if and only if they are the same for a single $Delta X_i.$ But we have seen that no $Delta X_i$ has a uniform distribution, even when the die might be biased. Thus, even in this weaker sense of expected frequencies, the differences of the rolls are not uniform.






    share|cite|improve this answer












    $endgroup$














    • $begingroup$
      @Michael Good point: I answered the question as asked (which is about "differences"), rather than as illustrated (which clearly refers to absolute differences). The same technique applies--one just has to consider both the max and min differences. In the case where those are the only two possibilities (along with zero), we can get equality, which is where the Bernoulli$(1/2)$ result comes from (showing it's the unique such example).
      $endgroup$
      – whuber
      Aug 12 at 19:43











    • $begingroup$
      Another answer proving a particular version of this is here.
      $endgroup$
      – Reinstate Monica
      Aug 13 at 9:32











    • $begingroup$
      Thanks, @Ben: I had forgotten that thread. Because it's a better reference, I now link directly to it in this answer.
      $endgroup$
      – whuber
      Aug 13 at 13:02















    21
















    $begingroup$

    Using only the most basic axioms about probabilities and real numbers, one can prove a much stronger statement:




    The difference of any two independent, identically distributed nonconstant random values $X-Y$ never has a discrete uniform distribution.




    (An analogous statement for continuous variables is proven at Uniform PDF of the difference of two r.v.)



    The idea is that the chance $X-Y$ is an extreme value must be less than the chance that $X-Y$ is zero, because there is only one way to (say) maximize $X-Y$ whereas there are many ways to make the difference zero, because $X$ and $Y$ have the same distribution and therefore can equal each other. Here are the details.



    First observe that the hypothetical two variables $X$ and $Y$ in question can each attain only a finite number $n$ of values with positive probability, because there will be at least $n$ distinct differences and a uniform distribution assigns them all equal probability. If $n$ is infinite, then so would be the number of possible differences having positive, equal probability, whence the sum of their chances would be infinite, which is impossible.



    Next, since the number of differences is finite, there will be a largest among them. The largest difference can be achieved only when subtracting the smallest value of $Y$--let's call it $m$ and suppose it has probability $q = Pr(Y=m)$--from the largest value of $X$--let's call that that one $M$ with $p = Pr(X=M).$ Because $X$ and $Y$ are independent, the chance of this difference is the product of these chances,



    $$Pr(X-Y = M - m) = Pr(X=M)Pr(Y=m) = pq gt 0.tag*$$



    Finally, because $X$ and $Y$ have the same distribution, there are many ways their differences can produce the value $0.$ Among these ways are the cases where $X=Y=m$ and $X=Y=M.$ Because this distribution is nonconstant, $m$ differs from $M.$ That shows those two cases are disjoint events and therefore they must contribute at least an amount $p^2 + q^2$ to the chance that $X-Y$ is zero; that is,



    $$Pr(X-Y=0) ge Pr(X=Y=m) + Pr(X=Y=M) = p^2 + q^2.$$



    Since squares of numbers are not negative, $0 le (p-q)^2,$ whence we deduce from $(*)$ that



    $$Pr(X-Y=M-m)=pq le pq + (p-q)^2 = p^2 + q^2 - pq lt p^2 + q^2 le Pr(X-Y=0),$$



    showing the distribution of $X-Y$ is not uniform, QED.



    Edit in response to a comment



    A similar analysis of the absolute differences $|X-Y|$ observes that because $X$ and $Y$ have the same distribution, $m=-M.$ This requires us to study $Pr(X-Y=|M-m|) = 2pq.$ The same algebraic technique yields almost the same result, but there is the possibility that $2pq=2pq+(p-q)^2$ and $2pq+p^2+q^2=1.$ That system of equations has the unique solution $p=q=1/2$ corresponding to a fair coin (a "two-sided die"). Apart from this exception the result for the absolute differences is the same as that for the differences, and for the same underlying reasons already given: namely, the absolute differences of two iid random variables cannot be uniformly distributed whenever there are more than two distinct differences with positive probability.



    (end of edit)




    Let's apply this result to the question, which asks about something a little more complex.



    Model each independent roll of the die (which might be an unfair die) with a random variable $X_i,$ $i=1, 2, ldots, n.$ The differences observed in these $n$ rolls are the numbers $Delta X_i = X_i+1-X_i.$ We might wonder how uniformly distributed these $n-1$ numbers are. That's really a question about the statistical expectations: what is the expected number of $Delta X_i$ that are equal to zero, for instance? What is the expected number of $Delta X_i$ equal to $-1$? Etc., etc.



    The problematic aspect of this question is that the $Delta X_i$ are not independent: for instance, $Delta X_1 = X_2-X_1$ and $Delta X_2 = X_3 - X_2$ involve the same roll $X_2.$



    However, this isn't really a difficulty. Since statistical expectation is additive and all differences have the same distribution, if we pick any possible value $k$ of the differences, the expected number of times the difference equals $k$ in the entire sequence of $n$ rolls is just $n-1$ times the expected number of times the difference equals $k$ in a single step of the process. That single-step expectation is $Pr(Delta X_i = k)$ (for any $i$). These expectations will be the same for all $k$ (that is, uniform) if and only if they are the same for a single $Delta X_i.$ But we have seen that no $Delta X_i$ has a uniform distribution, even when the die might be biased. Thus, even in this weaker sense of expected frequencies, the differences of the rolls are not uniform.






    share|cite|improve this answer












    $endgroup$














    • $begingroup$
      @Michael Good point: I answered the question as asked (which is about "differences"), rather than as illustrated (which clearly refers to absolute differences). The same technique applies--one just has to consider both the max and min differences. In the case where those are the only two possibilities (along with zero), we can get equality, which is where the Bernoulli$(1/2)$ result comes from (showing it's the unique such example).
      $endgroup$
      – whuber
      Aug 12 at 19:43











    • $begingroup$
      Another answer proving a particular version of this is here.
      $endgroup$
      – Reinstate Monica
      Aug 13 at 9:32











    • $begingroup$
      Thanks, @Ben: I had forgotten that thread. Because it's a better reference, I now link directly to it in this answer.
      $endgroup$
      – whuber
      Aug 13 at 13:02













    21














    21










    21







    $begingroup$

    Using only the most basic axioms about probabilities and real numbers, one can prove a much stronger statement:




    The difference of any two independent, identically distributed nonconstant random values $X-Y$ never has a discrete uniform distribution.




    (An analogous statement for continuous variables is proven at Uniform PDF of the difference of two r.v.)



    The idea is that the chance $X-Y$ is an extreme value must be less than the chance that $X-Y$ is zero, because there is only one way to (say) maximize $X-Y$ whereas there are many ways to make the difference zero, because $X$ and $Y$ have the same distribution and therefore can equal each other. Here are the details.



    First observe that the hypothetical two variables $X$ and $Y$ in question can each attain only a finite number $n$ of values with positive probability, because there will be at least $n$ distinct differences and a uniform distribution assigns them all equal probability. If $n$ is infinite, then so would be the number of possible differences having positive, equal probability, whence the sum of their chances would be infinite, which is impossible.



    Next, since the number of differences is finite, there will be a largest among them. The largest difference can be achieved only when subtracting the smallest value of $Y$--let's call it $m$ and suppose it has probability $q = Pr(Y=m)$--from the largest value of $X$--let's call that that one $M$ with $p = Pr(X=M).$ Because $X$ and $Y$ are independent, the chance of this difference is the product of these chances,



    $$Pr(X-Y = M - m) = Pr(X=M)Pr(Y=m) = pq gt 0.tag*$$



    Finally, because $X$ and $Y$ have the same distribution, there are many ways their differences can produce the value $0.$ Among these ways are the cases where $X=Y=m$ and $X=Y=M.$ Because this distribution is nonconstant, $m$ differs from $M.$ That shows those two cases are disjoint events and therefore they must contribute at least an amount $p^2 + q^2$ to the chance that $X-Y$ is zero; that is,



    $$Pr(X-Y=0) ge Pr(X=Y=m) + Pr(X=Y=M) = p^2 + q^2.$$



    Since squares of numbers are not negative, $0 le (p-q)^2,$ whence we deduce from $(*)$ that



    $$Pr(X-Y=M-m)=pq le pq + (p-q)^2 = p^2 + q^2 - pq lt p^2 + q^2 le Pr(X-Y=0),$$



    showing the distribution of $X-Y$ is not uniform, QED.



    Edit in response to a comment



    A similar analysis of the absolute differences $|X-Y|$ observes that because $X$ and $Y$ have the same distribution, $m=-M.$ This requires us to study $Pr(X-Y=|M-m|) = 2pq.$ The same algebraic technique yields almost the same result, but there is the possibility that $2pq=2pq+(p-q)^2$ and $2pq+p^2+q^2=1.$ That system of equations has the unique solution $p=q=1/2$ corresponding to a fair coin (a "two-sided die"). Apart from this exception the result for the absolute differences is the same as that for the differences, and for the same underlying reasons already given: namely, the absolute differences of two iid random variables cannot be uniformly distributed whenever there are more than two distinct differences with positive probability.



    (end of edit)




    Let's apply this result to the question, which asks about something a little more complex.



    Model each independent roll of the die (which might be an unfair die) with a random variable $X_i,$ $i=1, 2, ldots, n.$ The differences observed in these $n$ rolls are the numbers $Delta X_i = X_i+1-X_i.$ We might wonder how uniformly distributed these $n-1$ numbers are. That's really a question about the statistical expectations: what is the expected number of $Delta X_i$ that are equal to zero, for instance? What is the expected number of $Delta X_i$ equal to $-1$? Etc., etc.



    The problematic aspect of this question is that the $Delta X_i$ are not independent: for instance, $Delta X_1 = X_2-X_1$ and $Delta X_2 = X_3 - X_2$ involve the same roll $X_2.$



    However, this isn't really a difficulty. Since statistical expectation is additive and all differences have the same distribution, if we pick any possible value $k$ of the differences, the expected number of times the difference equals $k$ in the entire sequence of $n$ rolls is just $n-1$ times the expected number of times the difference equals $k$ in a single step of the process. That single-step expectation is $Pr(Delta X_i = k)$ (for any $i$). These expectations will be the same for all $k$ (that is, uniform) if and only if they are the same for a single $Delta X_i.$ But we have seen that no $Delta X_i$ has a uniform distribution, even when the die might be biased. Thus, even in this weaker sense of expected frequencies, the differences of the rolls are not uniform.






    share|cite|improve this answer












    $endgroup$



    Using only the most basic axioms about probabilities and real numbers, one can prove a much stronger statement:




    The difference of any two independent, identically distributed nonconstant random values $X-Y$ never has a discrete uniform distribution.




    (An analogous statement for continuous variables is proven at Uniform PDF of the difference of two r.v.)



    The idea is that the chance $X-Y$ is an extreme value must be less than the chance that $X-Y$ is zero, because there is only one way to (say) maximize $X-Y$ whereas there are many ways to make the difference zero, because $X$ and $Y$ have the same distribution and therefore can equal each other. Here are the details.



    First observe that the hypothetical two variables $X$ and $Y$ in question can each attain only a finite number $n$ of values with positive probability, because there will be at least $n$ distinct differences and a uniform distribution assigns them all equal probability. If $n$ is infinite, then so would be the number of possible differences having positive, equal probability, whence the sum of their chances would be infinite, which is impossible.



    Next, since the number of differences is finite, there will be a largest among them. The largest difference can be achieved only when subtracting the smallest value of $Y$--let's call it $m$ and suppose it has probability $q = Pr(Y=m)$--from the largest value of $X$--let's call that that one $M$ with $p = Pr(X=M).$ Because $X$ and $Y$ are independent, the chance of this difference is the product of these chances,



    $$Pr(X-Y = M - m) = Pr(X=M)Pr(Y=m) = pq gt 0.tag*$$



    Finally, because $X$ and $Y$ have the same distribution, there are many ways their differences can produce the value $0.$ Among these ways are the cases where $X=Y=m$ and $X=Y=M.$ Because this distribution is nonconstant, $m$ differs from $M.$ That shows those two cases are disjoint events and therefore they must contribute at least an amount $p^2 + q^2$ to the chance that $X-Y$ is zero; that is,



    $$Pr(X-Y=0) ge Pr(X=Y=m) + Pr(X=Y=M) = p^2 + q^2.$$



    Since squares of numbers are not negative, $0 le (p-q)^2,$ whence we deduce from $(*)$ that



    $$Pr(X-Y=M-m)=pq le pq + (p-q)^2 = p^2 + q^2 - pq lt p^2 + q^2 le Pr(X-Y=0),$$



    showing the distribution of $X-Y$ is not uniform, QED.



    Edit in response to a comment



    A similar analysis of the absolute differences $|X-Y|$ observes that because $X$ and $Y$ have the same distribution, $m=-M.$ This requires us to study $Pr(X-Y=|M-m|) = 2pq.$ The same algebraic technique yields almost the same result, but there is the possibility that $2pq=2pq+(p-q)^2$ and $2pq+p^2+q^2=1.$ That system of equations has the unique solution $p=q=1/2$ corresponding to a fair coin (a "two-sided die"). Apart from this exception the result for the absolute differences is the same as that for the differences, and for the same underlying reasons already given: namely, the absolute differences of two iid random variables cannot be uniformly distributed whenever there are more than two distinct differences with positive probability.



    (end of edit)




    Let's apply this result to the question, which asks about something a little more complex.



    Model each independent roll of the die (which might be an unfair die) with a random variable $X_i,$ $i=1, 2, ldots, n.$ The differences observed in these $n$ rolls are the numbers $Delta X_i = X_i+1-X_i.$ We might wonder how uniformly distributed these $n-1$ numbers are. That's really a question about the statistical expectations: what is the expected number of $Delta X_i$ that are equal to zero, for instance? What is the expected number of $Delta X_i$ equal to $-1$? Etc., etc.



    The problematic aspect of this question is that the $Delta X_i$ are not independent: for instance, $Delta X_1 = X_2-X_1$ and $Delta X_2 = X_3 - X_2$ involve the same roll $X_2.$



    However, this isn't really a difficulty. Since statistical expectation is additive and all differences have the same distribution, if we pick any possible value $k$ of the differences, the expected number of times the difference equals $k$ in the entire sequence of $n$ rolls is just $n-1$ times the expected number of times the difference equals $k$ in a single step of the process. That single-step expectation is $Pr(Delta X_i = k)$ (for any $i$). These expectations will be the same for all $k$ (that is, uniform) if and only if they are the same for a single $Delta X_i.$ But we have seen that no $Delta X_i$ has a uniform distribution, even when the die might be biased. Thus, even in this weaker sense of expected frequencies, the differences of the rolls are not uniform.







    share|cite|improve this answer















    share|cite|improve this answer




    share|cite|improve this answer








    edited Aug 13 at 13:01

























    answered Aug 12 at 17:05









    whuberwhuber

    221k35 gold badges483 silver badges879 bronze badges




    221k35 gold badges483 silver badges879 bronze badges














    • @Michael Good point: I answered the question as asked (which is about "differences"), rather than as illustrated (which clearly refers to absolute differences). The same technique applies--one just has to consider both the max and min differences. In the case where those are the only two possibilities (along with zero), we can get equality, which is where the Bernoulli$(1/2)$ result comes from (showing it's the unique such example). – whuber, Aug 12 at 19:43

    • Another answer proving a particular version of this is here. – Reinstate Monica, Aug 13 at 9:32

    • Thanks, @Ben: I had forgotten that thread. Because it's a better reference, I now link directly to it in this answer. – whuber, Aug 13 at 13:02












    On an intuitive level, a random event can only be uniformly distributed if all of its outcomes are equally likely.



    Is that so for the random event in question -- absolute difference between two dice rolls?



    It suffices in this case to look at the extremes -- what are the biggest and smallest values this difference could take?



    Obviously 0 is the smallest (we're looking at absolute differences and the rolls can be the same), and 5 is the biggest (6 vs 1).



    We can show the event is non-uniform by showing that 0 is more (or less) likely to occur than 5.



    At a glance, there are only two ways for 5 to occur -- if the first die is 6 and the second 1, or vice versa. How many ways can 0 occur?
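    If you want to check the count directly, here is a small R sketch enumerating all 36 equally likely outcomes of two fair rolls:

    outcomes <- expand.grid(first = 1:6, second = 1:6)   # all 36 equally likely pairs of rolls
    table(abs(outcomes$first - outcomes$second))         # number of ways to obtain each absolute difference
    #  0  1  2  3  4  5
    #  6 10  8  6  4  2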






    – MichaelChirico (answered Aug 12 at 16:05)

    • +1 I think this gets to the heart of the matter. I have posted a generalization of the question that ultimately relies on the same observation. – whuber, Aug 12 at 18:34












    As presented by Henry, differences of uniformly distributed random variables are not uniformly distributed.



    To illustrate this with simulated data, we can use a very simple R script:



    # simulate 10,000 rolls of a fair six-sided die and plot how often each face occurs
    barplot(table(sample(x=1:6, size=10000, replace=T)))


    [bar plot: simulated frequencies of the six faces, approximately uniform]



    We see that this indeed produces a uniform distribution. Let's now have a look at the distribution of the absolute differences of two random samples from this distribution.



    # absolute differences between two independent samples of 10,000 rolls each
    barplot(table(abs(sample(x=1:6, size=10000, replace=T) - sample(x=1:6, size=10000, replace=T))))


    [bar plot: simulated frequencies of the absolute differences 0-5, clearly non-uniform]






    – LuckyPal (answered Aug 11 at 16:48, edited Aug 12 at 15:18 by smci)

    • Why does this have anything to do with the CLT, which concerns the asymptotic distribution of means of large numbers of iid values? – whuber, Aug 11 at 17:43

    • I like the connection you originally made with CLT. Let $n$ be the number of samples to be added (or subtracted) from the original uniform distribution. CLT implies that for large $n$ the distribution will tend toward normal. This in turn implies that the distribution cannot remain uniform for any $n>1$, such as $n=2$ which is what OP is asking. (If this isn't self-explanatory, consider that if the sum were uniformly distributed when $n=2$, reindexing would imply that it is also uniform when $n=4$, etc, including for large $n$.) – krubo, Aug 11 at 19:08

    • @Krubo The original question asks about the distribution of differences between successive rolls of a die. The CLT has nothing to say about that. Indeed, no matter how many times the die is rolled, the distribution of those differences will not approach normality. – whuber, Aug 12 at 12:23

    • Does this distribution tend to uniform as the number of die faces tends to infinity? Not sure how to go about showing that, but intuitively it feels like it heads in that direction, but I don't know if it gets asymptotically "blocked" somewhere before flattening enough. – Cruncher, Aug 12 at 17:44

    • @Cruncher you can easily change the number of die faces in the R code. The more faces there are, the more apparent the staircase nature of the distribution becomes. '1' is always the peak of that staircase, and with larger differences the probabilities approach zero. Additionally, a difference of '0' is distinctly rarer than '1' (at least if the die's smallest value is '1'). – LuckyPal, Aug 13 at 6:43












    Others have worked out the calculations; I will give you an answer that seems more intuitive to me. You want to study the sum of two uniform r.v.s ($Z = X + (-Y)$); the overall distribution is the (discrete) convolution product:



    $$ P(Z=z) = \sum_{k=-\infty}^{\infty} P(X=k)\, P(-Y = z-k) $$



    This sum is rather intuitive: the probability of getting $z$ is the sum of the probabilities of getting some value $k$ with $X$ and the complement $z-k$ with $-Y$.
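    For a fair six-sided die this convolution can be evaluated directly; here is a minimal R sketch (the triangular shape appears immediately):

    pX <- rep(1/6, 6)                                       # P(X = k) for k = 1, ..., 6
    vals <- outer(1:6, 1:6, "-")                            # every possible value of Z = X - Y
    joint <- outer(pX, pX)                                  # P(X = i) * P(Y = j)
    pZ <- tapply(as.vector(joint), as.vector(vals), sum)   # sum probabilities over pairs with the same difference
    round(pZ, 3)                                            # 1/36, 2/36, ..., 6/36, ..., 2/36, 1/36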



    From signal processing, we know how the convolution product behaves:



    • The convolution product of two uniform functions (two rectangles) gives a triangle. This is illustrated by Wikipedia for continuous functions:

    [Wikipedia animation: the convolution of two rectangular pulses is a triangular pulse]



    • You can understand what happens here: as $z$ moves (the vertical dotted line), the overlap of the two rectangles first grows and then shrinks, which corresponds to the probability of getting $z$.


    • More generally, we know that the only functions that are stable under convolution are those of the Gaussian family, i.e. only Gaussian distributions are stable under addition (or, more generally, linear combination). This also means that you don't get a uniform distribution when combining uniform distributions.


    As to why we get these results, the answer lies in the Fourier decomposition of these functions: the Fourier transform of a convolution product is the simple product of the Fourier transforms of each function. This gives a direct link between the Fourier coefficients of the rectangle and triangle functions.






    – lcrmorin (answered Aug 13 at 9:11)

    • Please check the validity of your claims and the logic of your answer. The question isn't whether the convolution of two uniform distributions is uniform: it's whether the convolution of some distribution and its reversal can be uniform. And there are far more distributional families than the Gaussian that are stable under convolution (modulo standardization, of course): see en.wikipedia.org/wiki/Stable_distribution – whuber, Aug 13 at 17:59

    • You are right about stable distributions. For the question, I am pretty sure this is about the difference of two random values with uniform distribution (as indicated by the title). The question whether the convolution of some distribution and its reversal can be uniform is larger than what is asked here. – lcrmorin, Aug 22 at 12:25












    If $x$ and $y$ are two consecutive dice rolls, you can visualize $|x-y| = k$ (for $k = 0, 1, 2, 3, 4, 5$) as follows where each color corresponds to a different value of $k$:



    [figure: the 6 × 6 grid of outcomes $(x, y)$, colored by the value of $|x-y|$]



    As you can easily see, the number of points for each color is not the same; therefore, the differences are not uniformly distributed.
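    The same grid can be reproduced in R (a small sketch); counting how often each value of $k$ appears in this matrix gives the unequal numbers of points per color:

    outer(1:6, 1:6, function(x, y) abs(x - y))   # the 6 x 6 grid of |x - y| values that the figure colors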






    – today (answered Aug 12 at 19:32)






























