I'm currently working with PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). I understand the geometric meaning and working process of both. Now I want to find an example in which both produce the same result, starting from the simplest case in two-dimensional space.
Something I tried is having class clusters that are "parallel" in some way, so that the directions PCA finds coincide with the direction LDA would choose, but I'm not sure about that.
For example, in this image, would LDA behave like PCA's second component?
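A quick way to check this numerically might be something like the sketch below (assuming scikit-learn; the cluster shapes and parameters are made up for illustration). It builds two clusters that are elongated along the y-axis but separated along the x-axis, then compares the LDA direction with PCA's second component:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
n = 500

# Within-class spread is large along y, small along x; the class means differ along x.
X0 = rng.normal(loc=[-1.0, 0.0], scale=[0.2, 3.0], size=(n, 2))
X1 = rng.normal(loc=[+1.0, 0.0], scale=[0.2, 3.0], size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

pca = PCA(n_components=2).fit(X)
lda = LinearDiscriminantAnalysis().fit(X, y)

lda_dir = lda.scalings_[:, 0]
lda_dir /= np.linalg.norm(lda_dir)

print("PCA component 1:", np.round(pca.components_[0], 3))  # ~ y-axis (largest variance)
print("PCA component 2:", np.round(pca.components_[1], 3))  # ~ x-axis
print("LDA direction:  ", np.round(lda_dir, 3))             # ~ x-axis, i.e. like PC #2
```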
The simplest case in two-dimensional space where LDA and PCA give the same result is two samples, e.g. [-1, 0] and [+1, 0], where the x-axis carries the class information.
Principal component #1 is [+1, 0].
This is also the subspace in which the projections of the samples can still be completely separated by class.
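Here is a minimal sketch of that case (assuming scikit-learn; a little isotropic noise is added around the two points so that LDA's within-class scatter is well defined). Both methods should recover the x-axis, up to sign:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200

# Two classes centred at [-1, 0] and [+1, 0] with small isotropic noise,
# so the between-class variance dominates the total variance.
X0 = rng.normal(loc=[-1.0, 0.0], scale=0.1, size=(n, 2))
X1 = rng.normal(loc=[+1.0, 0.0], scale=0.1, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

pca_dir = PCA(n_components=1).fit(X).components_[0]
lda_dir = LinearDiscriminantAnalysis(n_components=1).fit(X, y).scalings_[:, 0]

# Normalise and compare; the sign of each direction is arbitrary.
pca_dir /= np.linalg.norm(pca_dir)
lda_dir /= np.linalg.norm(lda_dir)
print("PCA direction:", np.round(pca_dir, 3))
print("LDA direction:", np.round(lda_dir, 3))
print("aligned:", np.isclose(abs(pca_dir @ lda_dir), 1.0, atol=1e-2))
```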