This page calculates the probability of an error in n-bit data, given a bit error probability for each individual bit:
Error probability
Sample run
A sample run with a per-bit error probability of 0.01 and 8 bits gives an overall probability of an error of 0.07726. It can be seen that there are 8 one-bit errors, 28 two-bit errors, 56 three-bit errors, and so on. The probability of exactly two bits being in error is 0.00264.
P(error): 0.01   No bits: 8
--------------------------
P(no error):   0.922744694428
P(error):      0.0772553055721
--------------------------
Bits   No of errors   P(error)
1      8              0.0745652278326
2      28             0.00263614441832
3      56             5.32554427944e-05
4      70             6.72417207e-07
5      56             5.4336744e-09
6      28             2.74428e-11
7      8              7.92e-14
8      1              1e-16
Summation of errors: 0.0772553055721
Code
The following is the Python code; the per-bit error probability and the number of bits can be passed as command-line arguments:
import math
import sys

p_error = 0.001
n_bits = 8

if len(sys.argv) > 1:
    p_error = float(sys.argv[1])
if len(sys.argv) > 2:
    n_bits = int(sys.argv[2])

def comb(n, m):
    # Number of combinations of m items from n: n! / (m! (n-m)!)
    return math.factorial(n) // (math.factorial(m) * math.factorial(n - m))

def calc_p_error(p_error, n_bits, no_errors):
    # Probability of exactly no_errors bit errors in n_bits bits
    return comb(n_bits, no_errors) * pow(p_error, no_errors) * pow(1 - p_error, n_bits - no_errors)

prob_no_error = pow(1 - p_error, n_bits)

print("P(error):", p_error, "  No bits:", n_bits)
print("--------------------------")
print("P(no error):\t", prob_no_error)
print("P(error):\t", 1 - prob_no_error)
print("--------------------------")
print("Bits\tNo of errors\tP(error)")

p_error_total = 0
for i in range(1, n_bits + 1):
    p_calc = calc_p_error(p_error, n_bits, i)
    p_error_total += p_calc
    print(i, "\t\t", comb(n_bits, i), " \t", p_calc)

print("Summation of errors:", p_error_total)
Outline
If the probability of no error on a single bit is (1-p), then the probability of no errors in data with n bits will thus be:
Probability of no errors \(= (1-p)^n\)
The probability of an error will thus be:
Probability of an error \(= 1-(1-p)^n\)
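These two formulas can be evaluated with a minimal Python sketch (the function names are illustrative), using the sample-run values p = 0.01 and n = 8:

```python
# Assumes independent bit errors with per-bit error probability p.
def p_no_error(p, n):
    return (1 - p) ** n          # (1-p)^n

def p_any_error(p, n):
    return 1 - p_no_error(p, n)  # 1 - (1-p)^n

print(p_no_error(0.01, 8))   # ~0.922745, as in the sample run
print(p_any_error(0.01, 8))  # ~0.077255
```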
The probability of a single error can be determined by assuming that all the other bits are received correctly; the probability that the other n-1 bits are correct is:
\((1-p)^{n-1}\)
Thus the probability of a single error at a given position will be this probability multiplied by the probability of an error on a single bit, thus:
\(p(1-p)^{n-1}\)
As there are n bit positions then the probability of a single bit error will be:
Probability of single error \(= n p(1-p)^{n-1}\)
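The single-error formula can be checked numerically; a minimal Python sketch (the function name is illustrative), using the sample-run values p = 0.01 and n = 8:

```python
# Probability of exactly one bit error in n bits: n * p * (1-p)^(n-1).
# Assumes independent bit errors with per-bit error probability p.
def p_single_error(p, n):
    return n * p * (1 - p) ** (n - 1)

print(p_single_error(0.01, 8))  # ~0.0745652, as in the sample run table
```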
For example, if the received data has 256 bits and the probability of an error in a single bit is 0.001, then:

Probability of no error \(= (1-0.001)^{256} = 0.774\)

Thus the probability of an error is 0.226, and

Probability of single error \(= 256 \times 0.001 \times (1-0.001)^{255} = 0.198\)
Combinations of errors
Combinational theory can be used in error calculation to determine the number of combinations of error bits that occur in some n-bit data. For example, in 8-bit data there are 8 combinations of single-bit errors: (1), (2), (3), (4), (5), (6), (7) and (8). With 2 bits in error there are 28 combinations: (1,2), (1,3), (1,4), (1,5), (1,6), (1,7), (1,8), (2,3), (2,4), (2,5), (2,6), (2,7), (2,8), (3,4), (3,5)…(5,6), (6,7), (6,8), (7,8). In general, the formula for the number of combinations of m-bit errors for n bits is:
\(\binom{n}{m}=\frac{n!}{m!\,(n-m)!}\)

Thus the number of double-bit errors that can occur in 8 bits is:

\(\binom{8}{2}=\frac{8!}{2!\,6!}=28\)
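Python's standard library provides this combination count directly as math.comb (Python 3.8+), which can be used to reproduce the figures quoted here:

```python
import math

# Number of m-bit error patterns in n-bit data: n! / (m! (n-m)!)
print(math.comb(8, 2))                            # 28 double-bit errors
print([math.comb(8, m) for m in range(1, 9)])     # [8, 28, 56, 70, 56, 28, 8, 1]
print(sum(math.comb(8, m) for m in range(1, 9)))  # 255 error conditions in total
```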
Table 1 shows the combinations for bit errors with 8-bit data. Thus, it can be seen that there are 255 different error conditions (8 single-bit errors, 28 double-bit errors, 56 triple-bit errors, and so on).
Table 1 Combinations
No of bit errors   Combinations   No of bit errors   Combinations
1                  8              5                  56
2                  28             6                  28
3                  56             7                  8
4                  70             8                  1
To determine the probability of errors in m bits at specific places, first use the probability that the other (n-m) bits will be received correctly:
\((1-p)^{n-m}\)
Thus the probability that m bits, at specific places, will be received incorrectly is:
\(p^m(1-p)^{n-m}\)
The probability of an m-bit error in n bits is thus:
\(P_e = \binom{n}{m} p^m (1-p)^{n-m}\)
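A minimal Python sketch of this m-bit error probability (the function name is illustrative), checked against the two-bit figure from the sample run:

```python
import math

# Probability of exactly m bit errors in n bits (binomial PMF),
# assuming independent errors with per-bit probability p.
def p_m_errors(p, n, m):
    return math.comb(n, m) * p**m * (1 - p) ** (n - m)

print(p_m_errors(0.01, 8, 2))  # ~0.0026361, the two-bit entry in the sample run
```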
Thus the probability of error in n-bit data is:
\(\sum_{m=1}^{n} \binom{n}{m} p^m (1-p)^{n-m}\)
which is in the form of a binomial distribution.
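As a sanity check, the summation can be evaluated directly and compared against \(1-(1-p)^n\); a short sketch using the sample-run values p = 0.01 and n = 8:

```python
import math

p, n = 0.01, 8
# Sum the binomial terms for m = 1..n (at least one bit in error).
total = sum(math.comb(n, m) * p**m * (1 - p) ** (n - m) for m in range(1, n + 1))

print(total)             # ~0.0772553, as in the sample run
print(1 - (1 - p) ** n)  # the same value, via 1 - (1-p)^n
```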