Calculating Chao-Shen corrected Jensen-Shannon Divergence

I am trying to calculate the JS divergence of two distributions P and R with the Chao-Shen correction (https://link.springer.com/article/10.1023/a:1026096204727). P and R are simple arrays where each element is the count of a certain microstate; they have the same length. P may contain a lot of zero bins (hence the use of the Chao-Shen correction). Here's my code, followed by a minimal example of how I call it:

import numpy as np

def js_cs(P, R):
    
    M = (P+R)/2
    js = 0.5 * kl_cs(P, M) + 0.5 * kl_cs(R, M)
    
    return js

def kl_cs(P, R):
    
    # Convert to float
    P = P.astype(float)
    R = R.astype(float)
    
    yx = P[P > 0]  # remove bins with zero counts
    n = np.sum(yx)
    p = yx / n
    f1 = np.sum(yx == 1)  # number of singletons in the sample
    if f1 == n:  # avoid C == 0
        f1 -= 1
    C = 1 - (f1 / n)  # estimated coverage of the sample
    pa = C * p  # coverage adjusted empirical frequencies
    la = (1 - (1 - pa) ** n)  # probability to see a bin (species) in the sample
    H = -np.sum((pa * np.log(pa)) / la)
    
    # Normalize R to a probability distribution
    R /= np.sum(R)
    
    # Only consider entries where P > 0
    R = R[P > 0]
    
    # Compute corrected KL divergence (cross-entropy minus entropy), converted from nats to bits
    cross_entropy = -np.sum((pa * np.log(R)) / la)
    return (cross_entropy - H) / np.log(2)
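
For reference, this is how I call it. The counts below are made up just to illustrate the input format (plain integer count arrays of equal length, with some zero bins in P):

P = np.array([5, 0, 3, 1, 0, 1, 2, 0])  # counts per microstate, several zero bins
R = np.array([4, 2, 1, 3, 1, 0, 2, 1])  # counts per microstate, same length as P

print(js_cs(P, R))  # this is where I sometimes see a small negative value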

I am seeing that the JS divergence values come out negative, which is weird; as far as I know, this should not happen. What could be the reason? Am I calculating the JS divergence properly?

One thing to note: the negative values are not large in magnitude; they are very close to 0, but negative nonetheless.
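
As a sanity check, I also compare against the plain plug-in (uncorrected) JS divergence, using the same P and R arrays as above. If I understand the SciPy docs correctly, scipy.spatial.distance.jensenshannon normalizes its inputs and returns the square root of the divergence, so I square it to get a value on the same scale as my function:

from scipy.spatial.distance import jensenshannon

# Plug-in (uncorrected) JS divergence in bits: jensenshannon returns the JS
# *distance* (the square root of the divergence), so square it.
js_plugin = jensenshannon(P, R, base=2) ** 2
print(js_plugin)  # non-negative, unlike some of the values from js_cs above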
