Monday, 27 October 2025

Homework 4

The Law of Large Numbers

The Law of Large Numbers is one of the most important theorems in probability theory. It states that when an experiment is repeated independently many times under the same conditions, the empirical average of the observed outcomes converges to the expected value of the underlying random variable.

In formal terms, let X_1, X_2, …, X_n be a sequence of independent and identically distributed random variables with mean E[X_i] = μ. The Law of Large Numbers states that the sample mean

X̄_n = (1/n) Σ_{i=1}^{n} X_i

converges to μ as the number of samples n increases, either in probability (the Weak Law of Large Numbers) or almost surely (the Strong Law of Large Numbers). Intuitively, this means that the more we repeat a random experiment, the more the observed average outcome stabilizes around its theoretical mean, so the "randomness" is averaged out in the long run.

For example, in the case of Bernoulli trials, where each random variable X_i takes the value 1 (success) with probability p and 0 (failure) with probability 1 − p, the sample mean represents the relative frequency of success:

f(n) = S_n / n = (1/n) Σ_{i=1}^{n} X_i,  where S_n = X_1 + ⋯ + X_n.

The Law of Large Numbers tells us that f(n) will converge to p as n increases. This convergence is the theoretical foundation of empirical probability estimation.
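As a quick illustration (a small sketch, not part of the original homework code; the function name and sample sizes are arbitrary choices), a few lines of JavaScript show f(n) approaching p as n grows:

```javascript
// Estimate the relative frequency of success f(n) = S_n / n
// for n Bernoulli trials with success probability p.
function relativeFrequency(n, p) {
  let successes = 0;
  for (let i = 0; i < n; i++) {
    if (Math.random() < p) successes++; // one Bernoulli trial
  }
  return successes / n;
}

const p = 0.5;
for (const n of [10, 100, 1000, 10000]) {
  console.log(`n = ${n}: f(n) = ${relativeFrequency(n, p).toFixed(4)}`);
}
```

For small n the printed frequencies fluctuate widely; by n = 10000 they are typically within a few hundredths of p.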

The test

To validate the Law of Large Numbers, a simulation was performed using independent Bernoulli trials. In each trial, the outcome X_i can take the value 1 (success) with probability p or 0 (failure) with probability 1 − p. For each sequence of trials, the relative frequency of success after k iterations is computed as

f(k) = S_k / k = (1/k) Σ_{i=1}^{k} X_i.
The experiment generates m independent trajectories, each consisting of n Bernoulli trials.
For each trajectory, the relative frequency f(k) is tracked step by step, and the final value f(n) is recorded.
The objective is to show, both visually and numerically, that as n increases:

  1. The trajectories of f(k) progressively stabilize around the theoretical probability p.

  2. The empirical distribution of the final f(n) values (across all m trajectories) becomes increasingly concentrated near p.

This behavior confirms that the average outcome of repeated independent random trials converges to the expected value, providing a visual and statistical demonstration of the Law of Large Numbers.

The simulation was implemented in JavaScript, allowing real-time visualization of multiple trajectories and an adaptive histogram of final relative frequencies.
The visualization makes it possible to observe the convergence process interactively and to compare empirical results with the theoretical variance predicted by the Central Limit Theorem.
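The binning behind such a histogram can be sketched as follows; the function name `binFrequencies` and the default bin count are illustrative assumptions, not the original code (which adapts the bins to the observed range):

```javascript
// Bin the final relative frequencies f(n) of all trajectories into a
// simple histogram. The number of bins (default 20) is an arbitrary choice.
function binFrequencies(finals, numBins = 20) {
  const min = Math.min(...finals);
  const max = Math.max(...finals);
  const width = (max - min) / numBins || 1; // avoid division by zero
  const counts = new Array(numBins).fill(0);
  for (const f of finals) {
    // Clamp so that f === max falls in the last bin.
    const idx = Math.min(numBins - 1, Math.floor((f - min) / width));
    counts[idx]++;
  }
  return { min, width, counts };
}
```

Each entry of `counts` then maps directly to the height of one histogram bar.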

We chose values of m, n, and p that make the convergence easy to observe: the trajectories start with large oscillations and settle toward p over time. For that reason, in our test we chose p = 0.5, n = 300, and m = 100 (so there are 100 trajectories, each a sequence of 300 independent Bernoulli trials, where each trial has probability p = 0.5 of success).

This is done with this function, present in the JavaScript code:
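A minimal sketch of how such a trajectory generator might look (the function name `simulateTrajectories` and the return shape are assumptions, not the original code):

```javascript
// Produce m trajectories, each recording the running relative frequency
// f(k) = S_k / k for k = 1..n, with Bernoulli success probability p.
function simulateTrajectories(m, n, p) {
  const trajectories = [];
  for (let t = 0; t < m; t++) {
    const freqs = [];
    let successes = 0;
    for (let k = 1; k <= n; k++) {
      if (Math.random() < p) successes++; // one Bernoulli trial
      freqs.push(successes / k);          // running frequency f(k)
    }
    trajectories.push(freqs);
  }
  return trajectories;
}

// Parameters used in our test: m = 100, n = 300, p = 0.5.
const trajectories = simulateTrajectories(100, 300, 0.5);
const finals = trajectories.map(f => f[f.length - 1]); // f(n) per trajectory
```

The array `finals` is exactly what the histogram of final frequencies is built from.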


Turning to the results, the following graph, produced with the same program,
shows how the relative frequency of each trajectory tends to converge towards p = 0.5 over time, starting with large oscillations and then settling.


The histogram below represents the empirical distribution of the final values f(n) across all 100 trajectories. Each blue bar corresponds to the number of trajectories that ended with a particular frequency of success. The orange vertical line represents the theoretical probability p = 0.5. 



As we can see, the distribution is approximately symmetric and centered around f(n) = 0.5, confirming that the empirical mean of all trajectories coincides with the theoretical probability. Most of the observed frequencies fall within the interval [0.44, 0.56], which corresponds closely to the theoretical range p ± 2σ, where

σ = √(p(1 − p) / n) = √(0.25 / 300) ≈ 0.0289,

so that p ± 2σ ≈ [0.442, 0.558].

The concentration of the final frequencies around p demonstrates the Law of Large Numbers: as the number of trials n increases, the dispersion of the sample means f(n) decreases, and their values converge toward the expected probability p.
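The theoretical interval quoted above is simple arithmetic and can be checked in a couple of lines:

```javascript
// Theoretical standard deviation of the sample mean for Bernoulli trials:
// sigma = sqrt(p * (1 - p) / n), with the parameters of our test.
const p = 0.5, n = 300;
const sigma = Math.sqrt(p * (1 - p) / n);
const lower = p - 2 * sigma;
const upper = p + 2 * sigma;
console.log(sigma.toFixed(4), lower.toFixed(3), upper.toFixed(3));
// → 0.0289 0.442 0.558
```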

Moreover, the bell-shaped form of the histogram reflects the Central Limit Theorem, according to which the distribution of the sample means approximates a normal distribution:

f(n) ≈ N(p, p(1 − p) / n).
Therefore, both visually and statistically, the experiment confirms that random fluctuations tend to average out as the number of trials increases, providing an empirical validation of the theoretical law.

This experiment successfully demonstrated the Law of Large Numbers through a computational simulation of Bernoulli trials. By analyzing multiple trajectories of relative frequencies f(k) = S_k / k and the empirical distribution of their final values f(n), it was observed that all trajectories progressively converged toward the chosen theoretical probability (p = 0.5).

The empirical histogram showed that the dispersion of the final frequencies decreases on the order of 1/√n (the standard deviation of the sample mean being √(p(1 − p)/n)), and that the mean of the observed outcomes coincided almost perfectly with the expected probability. These results confirm that, as the number of independent trials increases, random fluctuations average out and the empirical mean stabilizes around its theoretical expectation.







