Hacker News

What they're saying is that the error for a vector increases with r, which is true.

Trivially, with r=0, the error is 0, regardless of how heavily the direction is quantized. Larger r means larger absolute error in the reconstructed vector.
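A minimal sketch of that point (illustrative, not from the thread): quantize only the direction of a vector, keep the radius r exact, and the absolute reconstruction error scales linearly with r while the relative error stays fixed. The quantizer here is a crude per-component uniform rounding, purely for demonstration.

```python
import numpy as np

def quantize_direction(v, bits=4):
    """Quantize the unit direction of v to `bits` bits per component;
    return the exact radius and the quantized unit direction."""
    r = np.linalg.norm(v)
    d = v / r
    levels = 2 ** bits
    # map components from [-1, 1] onto integer levels and back
    q = np.round((d + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    q /= np.linalg.norm(q)          # re-normalize to a unit direction
    return r, q

rng = np.random.default_rng(0)
v = rng.normal(size=8)
v /= np.linalg.norm(v)
for r in (1.0, 10.0, 100.0):
    scale, q = quantize_direction(r * v)
    err = np.linalg.norm(r * v - scale * q)
    # absolute error grows with r; err / r is the same at every scale
    print(f"r={r:6.1f}  abs error={err:.4f}  relative error={err / r:.6f}")
```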



Yes, the important part is that the normalized error does not increase with the dimension of the vector (which does happen when using biased quantizers).
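A toy illustration of why bias matters as dimension grows (my own sketch, not from the comment): with a biased quantizer such as always rounding down, the error in a sum over n components grows like n, while with unbiased stochastic rounding the per-component errors cancel and grow only like sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(2)
step = 0.01

def round_down(v):
    """Biased quantizer: always rounds toward -inf."""
    return np.floor(v / step) * step

def stochastic_round(v):
    """Unbiased in expectation: round up with probability equal to
    the fractional position between the two nearest levels."""
    lo = np.floor(v / step)
    p = v / step - lo
    return (lo + (rng.random(v.shape) < p)) * step

for n in (100, 10_000):
    v = rng.random(n)
    biased_err = abs(v.sum() - round_down(v).sum())        # ~ n * step / 2
    unbiased_err = abs(v.sum() - stochastic_round(v).sum())  # ~ sqrt(n) * step
    print(n, biased_err, unbiased_err)
```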

It is expected that bigger vectors have proportionally bigger errors; the quantizer can do nothing about that.


Except maybe by storing another, smaller vector for the difference from the original data, and also quantizing that, perhaps recursively.
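That residual idea can be sketched as follows (a hypothetical uniform quantizer with illustrative names, not any particular library): quantize, then quantize the difference with a finer step, repeating for a fixed depth.

```python
import numpy as np

def uniform_quantize(v, step):
    """Round each component to the nearest multiple of `step`."""
    return np.round(v / step) * step

def residual_quantize(v, step, depth):
    """Return a list of quantized codes whose sum approximates v,
    each level refining the residual left by the previous one."""
    codes, residual = [], v.astype(float)
    for _ in range(depth):
        q = uniform_quantize(residual, step)
        codes.append(q)
        residual = residual - q
        step /= 16            # each level stores a finer correction
    return codes

rng = np.random.default_rng(1)
v = rng.normal(size=6)
codes = residual_quantize(v, step=0.5, depth=3)
approx = sum(codes)
# the final error is bounded by half the finest step
print("max error:", np.abs(v - approx).max())
```

Each extra level costs storage but shrinks the worst-case error by the step ratio, which is exactly the "quantize the difference, maybe recursively" trade-off described above.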


