Hacker News

That's actually correct and intentional. TurboQuant applies the same rotation matrix to every vector. The key insight is that any unit vector, when multiplied by a random orthogonal matrix, produces coordinates with a known distribution (Beta/arcsine in 2D, near-Gaussian in high-d). The randomness is in the matrix itself (generated once from a seed), not per-vector. Since the distribution is the same regardless of the input vector, a single precomputed quantization grid works for everything. I've updated the description to make this clearer.
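To make that concrete, here's a small numpy sketch of the idea (illustrative only; TurboQuant's actual implementation, matrix construction, and dimensions are assumptions on my part). It builds one seeded random orthogonal matrix and shows that two very different unit vectors end up with coordinates of the same scale after rotation:

```python
# Hedged sketch, NOT TurboQuant's real code: a single random
# orthogonal matrix Q, generated once from a seed, is applied to
# every input vector. For any fixed unit vector x, the coordinates
# of Q @ x look like a uniform point on the sphere, i.e. roughly
# N(0, 1/d) in high dimension, so one precomputed quantization
# grid works for all inputs.
import numpy as np

rng = np.random.default_rng(seed=42)  # hypothetical seed
d = 256                               # hypothetical dimension

# Random orthogonal matrix via QR decomposition of a Gaussian matrix;
# the sign fix makes Q uniformly (Haar) distributed.
Q, R = np.linalg.qr(rng.standard_normal((d, d)))
Q = Q * np.sign(np.diag(R))

# Two very different unit vectors: a one-hot spike and a flat vector.
spike = np.zeros(d); spike[0] = 1.0
flat = np.ones(d) / np.sqrt(d)

for x in (spike, flat):
    y = Q @ x
    # After rotation, coordinates of both vectors have std close to
    # 1/sqrt(d), the scale a fixed grid can be tuned to.
    print(f"std={y.std():.4f}  expected={1/np.sqrt(d):.4f}  max|y|={np.abs(y).max():.3f}")
```

The rotation preserves norms, so distances and inner products computed on the rotated vectors are unchanged; only the coordinate distribution is reshaped.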



Thanks. However, from this visualization it's not clear how the random rotation is beneficial. I guess it makes more sense for higher-dimensional vectors.

Yes, this is important in high dimensions, but sadly very hard to visualize. In 2D it looks unnecessary.
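One way to see why the 2D picture is misleading (a hedged illustration, not anything from TurboQuant itself): track the largest coordinate of a one-hot unit vector before and after rotation. Unrotated, it is always 1.0; rotated, it shrinks roughly like sqrt(2·ln(d)/d) as the dimension grows, which is exactly what lets a fixed fine-grained grid cover every vector:

```python
# Illustration (assumed setup, not TurboQuant's code): in 2-D a random
# rotation changes little, but in high dimension it flattens worst-case
# vectors, shrinking the largest coordinate toward sqrt(2*ln(d)/d).
import numpy as np

rng = np.random.default_rng(seed=0)  # hypothetical seed
for d in (2, 16, 256, 1024):
    # Seeded random orthogonal matrix, as in the parent comment.
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Q = Q * np.sign(np.diag(R))
    one_hot = np.zeros(d); one_hot[0] = 1.0
    rotated = Q @ one_hot
    print(f"d={d:5d}  max|coord| before=1.000  after={np.abs(rotated).max():.3f}")
```

At d=2 the rotated maximum can still be close to 1, so a 2D visualization genuinely cannot show the benefit; by d=1024 the coordinates are all small and near-Gaussian.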


