Your point about the proportional difference not decreasing is an important one, because it brings up one of the things that people have a hard time understanding about random numbers. Statistical intuition would suggest that as the sample size gets larger, the sample should more closely reflect the population of numbers being sampled from (the uniform distribution in this case). The problem is that this is only correct on average, across a whole bunch of samples. I’ll try to illustrate this with the following example:

I took 1 million samples from a uniform distribution on the interval (0,1). For the first 10,000 samples, I computed the proportion that were less than 1/3 (which I would expect to be about 1/3). I then repeated this at increasing sample counts: the first 10,100, the first 10,200, and so on, up to all 1 million samples. The result is plotted below.
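The setup can be sketched in Python with NumPy. My original analysis code isn't shown here, so the seed and the checkpoint step of 100 samples are assumptions based on the description above:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the run is reproducible

n_total = 1_000_000
samples = rng.uniform(0.0, 1.0, size=n_total)

# Checkpoints: the first 10,000 samples, then 10,100, 10,200, ... up to 1 million.
checkpoints = np.arange(10_000, n_total + 1, 100)

# Running count of samples below 1/3, evaluated at each checkpoint.
below_third = np.cumsum(samples < 1/3)
proportions = below_third[checkpoints - 1] / checkpoints
```

Plotting `proportions` against `checkpoints` (with a log-scaled x-axis) gives a curve of the kind described below.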

As you can see from the scale of the y-axis, the proportion less than 1/3 is about 1/3, as expected (I deliberately didn't start below 10,000 samples because the proportion is too variable there). Note that the x-axis is on a log scale and that 1/3 is marked by the horizontal dashed line. As the number of samples increases, the proportion less than 1/3 stabilizes. Interestingly, though, it is not always moving toward 1/3 as the sample grows. For example, at about 100,000 samples (1e+05) the proportion is fairly accurate, but at around 316,000 samples it has become relatively inaccurate, before becoming more accurate again toward 1 million samples.

One thing to take away from this is that 1 million is not really large enough to get extremely accurate proportions, even though this is what we intuitively expect.

On average, however, the proportion will be more accurate with 1 million samples than with 10,000. But for any one particular series of numbers (or two, or ten), the proportion may not be more accurate at 1 million than at 10,000. You might need to average across a large number of independent series before you can see that 1 million samples is more accurate than 10,000. My point is that looking at just one series of numbers, even one containing 10 million samples, can be misleading.
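Here's a quick sketch of that averaging effect (again, not my original code). Rather than storing millions of draws for each series, it uses the fact that the count of uniform draws below 1/3 follows a Binomial(n, 1/3) distribution, and compares the average absolute error across many independent series at 10,000 versus 1 million samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n_series = 1000  # number of independent series to average over (arbitrary choice)
p = 1/3

# The count of samples below 1/3 among n uniform draws is Binomial(n, 1/3),
# so we can simulate each series' final proportion directly.
errors = {}
for n in (10_000, 1_000_000):
    props = rng.binomial(n, p, size=n_series) / n
    errors[n] = np.abs(props - p).mean()
    print(f"n = {n:>9,}: mean |proportion - 1/3| across series = {errors[n]:.6f}")
```

Averaged over the 1,000 series, the error at 1 million samples comes out smaller than at 10,000, even though any single series may buck that trend.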

One of the reasons that statistical intuition doesn't serve us well here is the (correct) idea that the proportion will become more accurate as the sample grows arbitrarily large. In particular, in an infinitely large sample, exactly 1/3 of the numbers will be less than 1/3. This is where the intuition breaks down: to a human, 1 million is indistinguishable from infinity, but there really is a big difference between the two.
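The gap between 1 million and infinity can be made concrete. For a proportion with true value p = 1/3, the typical deviation (the standard error) in a sample of size n is sqrt(p(1 - p)/n), which shrinks only like 1/sqrt(n) and reaches zero only in the infinite limit. A small sketch using just that standard formula:

```python
import math

p = 1/3
se = {}
for n in (10_000, 1_000_000, 100_000_000):
    # Standard error of a sample proportion: sqrt(p * (1 - p) / n)
    se[n] = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>11,}: typical deviation from 1/3 is about {se[n]:.6f}")
```

At 1 million samples the typical deviation is still about 0.0005, so going from 10,000 to 1 million samples only cuts the typical error by a factor of 10, and each further factor-of-10 improvement costs 100 times more samples.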