Hacker News | new | past | comments | ask | show | jobs | submit | lemming's comments

Related question: how are people handling adding family members of varying technical abilities to their tailnets? Does each family member get a separate user so you can manage their access? For my immediate family I was just logging Tailscale in as me on their devices, but that becomes a pain when they get logged out and need me to log in again before things start working again.

- For homes with close family (parents and siblings), I set up a subnet router and local DNS server on a Dell Wyse, making it one of their DNS servers so it can point them to services

- Yes, they should each have their own account. However, you can only add a few users before you need to move to a paid plan

- You can disable key expiry for nodes, which should keep them connected and prevent you from needing to sign in again for them, for the most part
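For anyone setting up the subnet-router piece, it is roughly the following (a sketch; the subnet here is an assumption and will vary by network, and the route approval and key-expiry settings live in the Tailscale admin console, not the CLI):

```shell
# On the always-on box (e.g. the Dell Wyse): enable IP forwarding,
# then advertise the home LAN to the tailnet
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
sudo tailscale up --advertise-routes=192.168.1.0/24
# Then approve the advertised route in the admin console, and disable
# key expiry per machine there to avoid the repeated re-login problem.
```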


I'm very familiar with Clojure, but even I can't make a good argument that:

    (tc/select-rows ds #(> (% "year") 2008))
is more intuitive than, or at least as intuitive as:

    filter(ds, year > 2008)
as cited above. I think there's a good argument to be made that Clojure's data processing abilities, particularly around immutable data, make a compelling case in spite of the syntax. The REPL is great too, and the JVM is fast. But I still to this day imagine infix comparisons in my head and then mentally move the comparator to the front of the list to make sure I get it right.

I am really not in data science, and I have decent Clojure experience. Is there a reason anyone would pick Clojure over something like K? From what I understand, those array languages are really good for writing safe but efficient code on rectangular data.

How about this?

    (filter ds (> year 2008))
That's a trivial Clojure macro to make work if it's what you find "intuitive."
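To sketch what that macro might look like (`filter-rows` is a hypothetical name, not part of tablecloth; it assumes string column keys, as in the snippet above):

```clojure
(ns example
  (:require [tablecloth.api :as tc]))

;; Rewrite an infix-style column predicate into the predicate-fn
;; form that tc/select-rows expects.
(defmacro filter-rows
  [ds [op col v]]
  `(tc/select-rows ~ds (fn [row#] (~op (row# ~(name col)) ~v))))

;; (filter-rows ds (> year 2008))
;; expands to
;; (tc/select-rows ds (fn [row] (> (row "year") 2008)))
```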

But if it doesn’t have access to the network, then it’s just not very useful. And if it does, then it’s just a prompt injection away from exfiltrating your data, or doing something you didn’t expect (eg deleting all your emails).


This argument is predicated on Anthropic losing money on the subs, but I'm not sure that's cut and dried. OpenAI have said publicly that they're (very) profitable on inference, and they're much cheaper than Anthropic. I suspect this is just an artificial attempt to create a moat. The problem is their moat is not as sticky as they think it is - I completely ditched Claude for Codex a while ago, my money now goes to OpenAI, and I'm very happy with it. For a while Claude was noticeably better, but that's not the case any more - in my case I prefer Codex.


They aren't public companies (yet). They are allowed to just lie about these things. It's also not really reasonable to only count inference compute as a cost, since it's not like any of these companies could stop doing R&D without being abandoned for having worse models within a year or two.


> They are allowed to just lie about these things.

That would turn into investment fraud the moment they IPO.


This actually produces more impressive results than I expected. My understanding was that models are quite poor at spatial reasoning/understanding, so I'm surprised it can generate such good assets. Do you use different models for the 3D generation?


It will be true as soon as it becomes official though, assuming they actually go through with it and this is not just a bargaining tactic.


Won’t that require an act of congress? How likely does that seem?


Huawei was not in the NDAA (the act-of-Congress part) until August 2019, well after companies started cutting ties in April/May of that year.


My interpretation of that is that I’m required to assume good faith on behalf of other commenters. So, if someone makes the same argument as the government, I’m supposed to assume good faith there, but nothing requires me to assume good faith on behalf of the government. So I can say that this is obviously a shakedown without breaking the rules.


"Assume good faith" does not mean "extend an unlimited amount of good faith to demonstrably bad-faith actors".


Could you go into a little more detail about the deep context - what does it grab, and which model is used to process it? Are you also using a Groq model for the transcription?


It takes a screenshot of the current window and sends it to Llama on Groq, asking it to describe what you’re doing and pull out any key info, like names with their exact spelling.

You can go to Settings > Run Logs in FreeFlow to see the full pipeline run on each request, with the exact prompt and LLM response, so you can see exactly what is sent and returned.
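From the outside, that pipeline probably looks something like the following (a sketch, not FreeFlow's actual code; Groq's chat API is OpenAI-compatible, so the screenshot goes in as a base64 data URL, and the model name in the usage comment is an assumption):

```python
import base64


def build_vision_message(png_bytes: bytes, prompt: str) -> list:
    """Package a screenshot plus instructions as one OpenAI-style
    chat message with an embedded data-URL image."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }]


# Hypothetical usage with the groq client:
#   client = groq.Groq()
#   resp = client.chat.completions.create(
#       model="llama-3.2-90b-vision-preview",  # assumed model name
#       messages=build_vision_message(
#           screenshot_bytes,
#           "Describe what the user is doing; list any names with exact spelling."))
```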


Is it possible to customise the key binding? Most of these services let you customise the binding, and also support a toggle for push-to-talk mode.


Tested at Mistral’s scale is a very different thing to tested at OpenAI’s scale.


Being "tested" at scale clearly convinced Meta (whose scale exceeds OpenAI's) [0], Hugging Face [1], Perplexity [2], and unsurprisingly many others in the AI industry [3] that require more compute than GPUs can deliver.

So labelling it "untested", even with a customer at Meta's scale (which exceeds OpenAI's), is quite nonsensical and frankly an uninformed take.

[0] https://www.cerebras.ai/customer-spotlights/meta

[1] https://www.cerebras.ai/news/hugging-face-partners-with-cere...

[2] https://www.cerebras.ai/press-release/cerebras-powers-perple...

[3] https://www.cerebras.ai/customer-spotlights


Meta didn't offer it. They offered the free Llama version on their cloud. Maybe now Zuck will be convinced to buy their chips, though.

