
If the person was deep enough into the system to have access to location data, then they'd probably be able to just directly look up customer details (likely easier).

Absolutely not. I have access to geo-located network telemetry. CRM data is completely off limits to anyone on my team.

Are you in a small company where most people wear lots of hats, or in a big company that has siloed-off groups? I'm guessing it's more of the big-company approach, with things siloed off?

As far as telcos go, I work at a pretty small one. We have fewer subscribers than, say, a single Chinese operator would have in a second-tier city.

Well, maybe it wasn't such a well-secured company, and this also seems like a story from the past.

Built-in positioning of network traces is relatively recent in mobile network equipment and dedicated probes.

If that happened more than 5-6 years ago, it would sound even less likely. Most telcos never bothered doing the processing needed to position raw events based on timing advances; they'd simply offload that to third-party companies. These solution providers aren't crazy; they don't touch data that isn't already anonymized. It's even less probable that a random employee would have access to the multiple datasets needed to piece someone's personal data together.
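For context on what that positioning involves: a timing advance on its own only gives a coarse ring of distance from the serving cell, not a location. A rough sketch of the LTE arithmetic (standard constants, though the exact granularity depends on the radio technology and is worth double-checking):

    # Rough one-way distance implied by an LTE timing advance (TA) value.
    # TA is reported in units of 16*Ts, where Ts = 1/(15000*2048) s,
    # so each TA step corresponds to roughly 78 m.
    C = 299_792_458          # speed of light, m/s
    TS = 1 / (15000 * 2048)  # LTE basic time unit, seconds

    def ta_to_distance_m(ta: int) -> float:
        """Halve the round-trip delay to get the one-way distance."""
        return ta * 16 * TS * C / 2

    print(f"TA=10 -> ~{ta_to_distance_m(10):.0f} m from the cell site")

Getting an actual position still means intersecting rings like that from several cells (plus per-antenna data), which is exactly the processing most telcos offloaded to third parties.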


Are you running it from your own gpu(s), or paying for the gpu usage in a likely-eye-watering way? ;)

I was using an old (2nd or 3rd Gen) Surface Pro for several months doing this, and apart from it being Windows based (ugh) it was pretty good. Until I dropped the thing. o_O

I have a Surface Book now, which I put Linux on for a while (bad idea, super flaky with Surface Linux). I'd probably recommend the Surface Pro again over the Surface Book, and just put up with Windows (ugh x2). I'm using the AtlasOS variant at least, so it's less crappy than stock Windows.


> Analyzing "emotion" in the model is completely anthropocentric.

Yeah, asking a text generator designed to sound as-human-as-possible about its "welfare" then actually giving credence to the output is a category error.

It's like asking a ceramic mug with "Best Dad!" written on the side if I'm the best dad, then uncritically just believing the words painted there. :( :( :(


> Qwen 3.6:27b uses 29/32gb of vram

What context size are you using for that?

Btw, are you using flash attention in Ollama for this model? I think it's required for this model to operate ok.


I squeezed it into 24 GiB VRAM (since I have an RX7900XTX):

-- Q5_K_M Unsloth quantization, running via llama.cpp on Linux

-- context 81k, flash attention on, 8-bit K/V caches

-- prompt processing (pp) 625 t/s, token generation (tg) 30 t/s
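For anyone wanting to reproduce roughly that setup programmatically, here's a minimal sketch using the llama-cpp-python bindings rather than the llama.cpp CLI. The model path is a placeholder, and the parameter names/values (flash_attn, type_k/type_v, where 8 is GGML_TYPE_Q8_0 for the 8-bit K/V caches) are assumptions worth checking against your installed version:

    from llama_cpp import Llama  # llama-cpp-python bindings over llama.cpp

    llm = Llama(
        model_path="model-Q5_K_M.gguf",  # placeholder path to the Q5_K_M GGUF
        n_gpu_layers=-1,                 # offload all layers to the GPU
        n_ctx=81920,                     # ~81k context, as above
        flash_attn=True,                 # flash attention on
        type_k=8,                        # 8 = GGML_TYPE_Q8_0 -> 8-bit K cache
        type_v=8,                        # 8-bit V cache
    )

    out = llm("Hello", max_tokens=16)
    print(out["choices"][0]["text"])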


Depends entirely on quantization. Q6_K with max context length (262144) is ~40GB of VRAM.

Q8 with the same context wouldn't fit in 48GB of VRAM; it did fit with 128k of context.
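The reason context length matters so much is the KV cache, whose size grows linearly with context. A back-of-the-envelope sketch (the layer/head/dim numbers below are made-up placeholders, not the real model's dimensions):

    def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_ctx, bytes_per_elem):
        """K and V tensors for every layer, KV head, and context position."""
        return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem

    # Placeholder dimensions for illustration only:
    gib = kv_cache_bytes(n_layers=48, n_kv_heads=4, head_dim=128,
                         n_ctx=262144, bytes_per_elem=2) / 2**30  # fp16 cache
    print(f"~{gib:.1f} GiB for the KV cache alone at 256k context (fp16)")

Quantizing the cache to 8-bit (bytes_per_elem=1) halves that, which is why the 8-bit K/V caches mentioned above buy so much headroom.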


The Tracy Chapman and Midnight Oil recordings on there are pretty good:

* Tracy Chapman: https://archive.org/details/@aadam_jacobs_collection?and[]=c...

* Midnight Oil: https://archive.org/details/@aadam_jacobs_collection?and[]=c...


Visually it looks nice, but it seems to be completely missing any volume control button(s) or widgets?

Looks like it. This quant ( https://huggingface.co/inferencerlabs/Kimi-K2.6-MLX-3.6bit ) says:

> Q3.6 typically achieves useable accuracy in our coding test and fits within a 512GB memory budget

This one ( https://huggingface.co/mlx-community/Kimi-K2.6-MoE-Smart-Qua... ) though says it fits on a 192GB Mac:

> M3/M4 Ultra 192GB+ (fits in ~150GB)


Wonder if stuff like this would affect it?

https://github.com/p-e-w/heretic

Guessing it probably would?


Neat project! I would be interested in a paper about this.

I think the tricky part with this type of technology is that it only works if the training data wasn't curated. What I mean is, if someone trains an LLM on data that simply doesn't include key events, it won't be able to reply about them.

Not being a hater. This is neato!


In that case you can use either RAG or fine-tuning. The entire premise of the Tiananmen Square argument is just Americans feeling inferior. I use Chinese models every day for work and my personal life; the model not knowing about this one historical event has had zero impact on me.
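For what it's worth, here's a toy sketch of the RAG idea: retrieve relevant text at query time and prepend it to the prompt, so the model can answer about material that was absent from (or curated out of) its training data. It uses TF-IDF as a stand-in for a real embedding model, and the documents are placeholders:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "Placeholder article about historical event A.",
        "Placeholder article about historical event B.",
    ]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)

    def build_prompt(question: str, top_k: int = 1) -> str:
        """Retrieve the most similar document(s) and prepend them as context."""
        q_vec = vectorizer.transform([question])
        scores = cosine_similarity(q_vec, doc_vectors)[0]
        best = scores.argsort()[::-1][:top_k]
        context = "\n".join(documents[i] for i in best)
        return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

    print(build_prompt("What happened in event A?"))

Fine-tuning bakes new facts into the weights instead, but for a handful of missing events the retrieval route is usually the lighter-weight fix.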

That's truly a wonderful collection of pelicans riding bicycles.

Much Win! ;)

