Cross Validated Asked on November 24, 2021
I have N small floating-point vectors of length K (typically N is in the millions and K = 9). I need to compute a very large number (millions upon millions) of squared Euclidean distances between those vectors. It would be great if I could reduce these 9-element vectors to, say, length 3 or 4. I was thinking about PCA: I could pre-compute and store the reduced vectors and then proceed with the distance computations. For instance, each of those vectors is a vectorized 3×3 patch around a given pixel of a natural image, with a patch created for every pixel.
In your opinion, could this work?
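The idea in the question can be sketched as follows. This is a minimal, hypothetical example (random data standing in for real patch vectors, and an assumed target dimension of 4): project the 9-dimensional patches with PCA, then compute squared Euclidean distances in the reduced space. Because the PCA projection is orthogonal, a reduced squared distance never exceeds the original one, and the gap is governed by the variance left in the discarded components.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for N vectorized 3x3 patches (K = 9); real patches would
# come from an image, one per pixel.
N, K = 10_000, 9
patches = rng.standard_normal((N, K))

# Reduce to 4 components (an assumed target dimension).
pca = PCA(n_components=4)
reduced = pca.fit_transform(patches)

# Compare squared Euclidean distances for one pair of patches.
d_full = np.sum((patches[0] - patches[1]) ** 2)
d_reduced = np.sum((reduced[0] - reduced[1]) ** 2)
print(pca.explained_variance_ratio_.sum(), d_full, d_reduced)
```

In practice you would check `pca.explained_variance_ratio_` on the real patches: if the first 3 or 4 components capture most of the variance, the reduced distances should be a good proxy for the originals.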
For the image problem, a common dimension-reduction technique is sparse coding (Andrew Ng ppt). If you feel sparse coding's performance is not powerful enough, you could try a sparse autoencoder.
It sounds like you are working toward something like a convolutional neural network. Did you try something along the lines of the description here and find it ineffective?
Sparse Coding Scikit-learn implementation
Edit: Sorry, I didn't read all the way through. I'm not sure what you are trying to achieve by computing the squared Euclidean distances between the vectors, so I'm not sure what is applicable anymore. Still, sparse coding might be a viable solution, and PCA is definitely worth trying since it is computationally less expensive.
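For reference, a minimal sparse-coding sketch with scikit-learn might look like the following. All specifics here are assumptions (random data in place of real patches, a dictionary of 16 atoms, a sparsity level of 3): a dictionary is learned from the patch vectors, and each patch is then encoded as a sparse combination of dictionary atoms.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# Stand-in for vectorized 3x3 patches (K = 9).
patches = rng.standard_normal((500, 9))

# Learn a dictionary and encode each patch sparsely via orthogonal
# matching pursuit; dictionary size and sparsity level are assumptions.
dico = MiniBatchDictionaryLearning(
    n_components=16,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=3,
    random_state=0,
)
codes = dico.fit_transform(patches)
print(codes.shape)  # one 16-dim code per patch, at most 3 non-zeros each
```

Note that the codes are higher-dimensional than the input but sparse, so they can still be stored compactly; for a strictly lower-dimensional dense representation, PCA (as discussed above) is the simpler choice.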
Answered by itzjustricky on November 24, 2021