Particle filter probability density not as expected


Playing a bit with a particle filter, I wonder why the probability density doesn't look the way I'm expecting:

I tried to implement a very simple model where $x_k = x_{k-1} + \text{noise}$ and the measurement is $z_k = x_k + \text{noise}$, with measurement values always toggling between 0 and 1.
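For reference, the per-particle log-weight update I'm trying to implement is the usual Gaussian measurement likelihood (writing $\sigma$ for stdMeasure):

$$\ln w_k^{(i)} = \ln w_{k-1}^{(i)} - \frac{\bigl(z_k - x_k^{(i)}\bigr)^2}{2\sigma^2}$$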

My expectations:

  • mean = 0.5 --- works as expected
  • probability density function with (normally distributed) peaks at 0 and 1 and roughly zero everywhere else --- does not work at all

The resulting probability density is just a single normal distribution centered around 0.5.

So, is that distribution correct, or is there a bug in my code?
What do I need to change in the code to get the bimodal distribution that I'm expecting?

#!/usr/bin/python3

import math
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

xMin = -1.15
xMax =  2.15
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], frameon=True, xlim=( xMin, xMax ), ylim=( -0.1, 0.5 ) )
color = 'k'
ims = []

stdModel = 0.05
stdMeasure = 0.15

# Number of particles
N = 1000
x_Particles = np.random.uniform( xMin, xMax, size=N )
x_weightsLn = np.ones(N) * math.log(1/N)  # uniform initial weights, stored in log space

for i in range( 100 ):
    measure = i%2 # toggle between 0 and 1

    # predict:
    # Random walk model: x_k = x_{k-1} + noise
    x_Particles[:] += np.random.randn(N) * stdModel

    ### calculate and display the probability density at this point
    # Density estimate: difference the weighted empirical CDF over a regular grid.
    x_ParticlesSortIndices = np.argsort( x_Particles )
    x_ParticlesSort = x_Particles[x_ParticlesSortIndices]
    x_weightsSort = np.exp( x_weightsLn[x_ParticlesSortIndices] )
    x_weightsSortCumSum = np.cumsum( x_weightsSort )  # weighted empirical CDF
    samplePos = np.linspace( xMin, xMax, 201 )
    sampleValIndices = np.minimum( np.searchsorted( x_ParticlesSort, samplePos ), N-1 )
    sampleVal = x_weightsSortCumSum[sampleValIndices]
    sampleVal = sampleVal[1:] - sampleVal[:-1]  # probability mass in each grid cell
    samplePos = samplePos[1:]
    sampleVal /= np.sum( sampleVal )
    thisplot = ax.plot(
        samplePos, sampleVal, '-'+color,
        x_Particles, np.random.uniform( -0.09, -0.01, size=N ), 'k.',
        [measure], 0, 'bx'
    )
    ims.append( thisplot )
    ###

    # measure:
    # direct measurement: z = x + noise
    z_Particles = x_Particles + np.random.randn(N) * stdMeasure
    # Normal Gaussian:
    #x_weights *= (1/(stdMeasure*math.sqrt(2*math.pi))) * np.exp( -(measure-z_Particles)**2/(2*stdMeasure**2) )
    # Logarithmic version, ignoring the prefactor as normalisation removes it anyway
    x_weightsLn += -(measure-z_Particles)**2/(2*stdMeasure**2)
    x_weightsLn -= np.log(np.sum(np.exp(x_weightsLn))) # normalize

    # resample when the effective sample size drops below N/2:
    nEff = 1. / np.sum(np.exp(2*x_weightsLn))
    doResample = nEff < N/2
    if doResample:
        # stratified resampling
        positions = (np.random.random(N) + np.arange(N)) / N
        indexes = np.zeros(N, 'i')
        cumulative_sum = np.cumsum(np.exp(x_weightsLn))
        i, j = 0, 0
        while i < N:
            if positions[i] < cumulative_sum[j]:
                indexes[i] = j
                i += 1
            else:
                j += 1
        x_Particles[:] = x_Particles[indexes]
        x_weightsLn.fill(math.log(1.0 / N))
        # toggle the plot colour to mark each resampling step
        color = 'r' if color == 'k' else 'k'

im_ani = animation.ArtistAnimation(fig, ims, interval=50, blit=True )
plt.show()

1 Answer

Answer by chtz:

Your expectation is wrong. Just calculate (manually) what would happen to particles at 0.0, 0.5 and 1.0 after two iterations (if there was no movement in between): a particle at 0 or 1 gets a high weight on one measurement but a very low one on the other, while a particle at 0.5 gets a moderate weight both times, and the product of the two favours 0.5.
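For instance, here is a minimal sketch of that calculation (using the same log-weight convention as your code, with the process and measurement noise left out for clarity):

import numpy as np

std = 0.15
particles = np.array([0.0, 0.5, 1.0])

# log-likelihood of measurement z for a particle at x, as in the question
def loglik(z, x):
    return -(z - x)**2 / (2 * std**2)

# two iterations, measurements toggling 0 then 1, no movement in between
logw = loglik(0.0, particles) + loglik(1.0, particles)
w = np.exp(logw - logw.max())
print(w / w.sum())  # the particle at 0.5 ends up with by far the largest weight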

To get your desired effect, try something like this as your measurement update (it has to be np.minimum, the element-wise version, since Python's built-in min does not work on arrays):

x_weightsLn += -np.minimum((0-z_Particles)**2, (1-z_Particles)**2)/(2*stdMeasure**2)

That would, over time, increase the weights of particles that are close to either 0 or 1. However, if the particles are badly distributed initially, you may end up with just one peak, or with two peaks of significantly different sizes.
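If you want a properly normalised bimodal likelihood instead of the min trick, one possible alternative (a sketch, assuming equal weight on both modes) is a two-component Gaussian mixture evaluated in log space with np.logaddexp:

import numpy as np

def loglik_bimodal(z_particles, std):
    # log of exp(d0) + exp(d1), i.e. a mixture of Gaussians at 0 and 1
    # (shared std, equal mixture weights, constant prefactor dropped)
    d0 = -(0.0 - z_particles)**2 / (2 * std**2)
    d1 = -(1.0 - z_particles)**2 / (2 * std**2)
    return np.logaddexp(d0, d1)  # numerically stable log(exp(d0) + exp(d1))

# usage in the question's loop:
# x_weightsLn += loglik_bimodal(z_Particles, stdMeasure)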