I've just started a DSP course and am trying to understand the big picture.
As I understand it, approximating signals comes up frequently in Digital Signal Processing. So, say I have a vector of N samples and I want to recreate the original signal from them. I go to MATLAB, plot the samples, and find a function (or series of functions) that seems reasonably close to the sample vector. Then I try to find a vector, c, of weights that adjusts that function so as to minimise the least-squares error between it and the sample vector.
How does the concept of Orthogonality help me in my endeavor to find this approximate function?
Thanks
This is a programming site and you might get better answers elsewhere, for example here.
But here's my two pence worth. Suppose in your example your signal is f, your chosen functions are b[1] .. b[n], and you want to find the weights w[] so that

    f ≈ w[1]*b[1] + w[2]*b[2] + ... + w[n]*b[n]

in the least-squares sense.
In general, to do this you need to calculate the inner products

    A[i][j] = <b[i], b[j]>    and    c[i] = <f, b[i]>
and then solve the linear system

    A*w = c
While this is easy enough to do in a suitable language, if the functions b[] are orthogonal it's even easier, for then the matrix A is diagonal and we can write the solution down directly:

    w[i] = <f, b[i]> / <b[i], b[i]>
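Here's a minimal sketch of both routes in Python/NumPy (the signal and basis are my own illustrative choices, not anything from your question):

```python
import numpy as np

# Example "signal": N samples of a smooth function.
N = 100
t = np.linspace(0, 1, N)
f = np.sin(2 * np.pi * t) + 0.5 * t

# A non-orthogonal basis: monomials 1, t, t^2 (one row per function).
b = np.vstack([t**0, t**1, t**2])            # shape (3, N)

# General case: Gram matrix A[i,j] = <b[i], b[j]>, right-hand side
# c[i] = <f, b[i]>, then solve A w = c.
A = b @ b.T
c = b @ f
w = np.linalg.solve(A, c)

# Orthogonal case: with an orthogonal basis A is diagonal, so each
# weight is just <f, b[i]> / <b[i], b[i]>.  Orthonormalise the same
# basis via QR so both span the same space.
q, _ = np.linalg.qr(b.T)
b_orth = q.T                                 # orthonormal rows
w_orth = (b_orth @ f) / np.sum(b_orth**2, axis=1)

# Same subspace, so the two fitted signals agree.
fit_general = w @ b
fit_orth = w_orth @ b_orth
assert np.allclose(fit_general, fit_orth)
```

The final assertion holds because both computations produce the orthogonal projection of f onto the same three-dimensional subspace; only the coordinates differ with the basis.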
There are other advantages to using orthogonal bases.
Suppose that you were unsure how many basis functions you needed. You might wonder, say, whether some subset of the b[] would be good enough. If the b[] are not orthogonal, you will need to solve for the weights afresh: the optimal weights for, say, b[1], b[2] and b[3] are not the same as the first three weights for b[1] .. b[7]. If the b[] are orthogonal, however, they are exactly the same, which simplifies the selection of which b to use.
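You can check this claim numerically; the setup below is my own small example:

```python
import numpy as np

N = 50
t = np.linspace(-1, 1, N)
f = np.exp(t)                                # signal to approximate

def ls_weights(basis, f):
    """Least-squares weights via the normal equations A w = c."""
    A = basis @ basis.T
    c = basis @ f
    return np.linalg.solve(A, c)

# Non-orthogonal basis: monomials t^0 .. t^6.
mono = np.vstack([t**k for k in range(7)])
w7 = ls_weights(mono, f)                     # weights for all 7 functions
w3 = ls_weights(mono[:3], f)                 # weights for just the first 3
mono_match = np.allclose(w7[:3], w3)         # False: must re-solve

# Orthogonal basis spanning the same spaces (QR orthonormalisation).
q, _ = np.linalg.qr(mono.T)
orth = q.T
v7 = ls_weights(orth, f)
v3 = ls_weights(orth[:3], f)
orth_match = np.allclose(v7[:3], v3)         # True: first 3 weights unchanged
```

With the orthogonal basis each weight is an independent inner product with f, so dropping later basis functions leaves the earlier weights untouched.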
Another issue is numerical stability, which in this case comes down to how much error is introduced by solving for the weights. In general this will be less, sometimes much less, when using an orthogonal basis.
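A quick way to see this (again an example of my own) is to compare the condition number of the matrix A being solved, which governs how much rounding error the solve can amplify:

```python
import numpy as np

t = np.linspace(0, 1, 200)

# Gram matrix of monomials t^0 .. t^7: Hilbert-like and notoriously
# ill-conditioned, so solving for the weights loses many digits.
mono = np.vstack([t**k for k in range(8)])
cond_mono = np.linalg.cond(mono @ mono.T)    # huge

# Gram matrix of an orthonormalised version of the same basis:
# numerically the identity, so the solve is essentially exact.
q, _ = np.linalg.qr(mono.T)
orth = q.T
cond_orth = np.linalg.cond(orth @ orth.T)    # close to 1
```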