snthpy's comments | Hacker News

I like it, but it's confusing that there's also a similar but different loonlang.org.

I agree. I've switched to pls and tx just for my own typing because of how common they are.

> We found that high-quality constitutional documents combined with fictional stories portraying an aligned AI can reduce agentic misalignment by more than a factor of three despite being unrelated to the evaluation scenario.

tl;dr Fairy Tales are an effective teaching tool in vivo et in silico


Looks cool.

That's kind of my (not the project's) vision for PRQL - a general LINQ-style embeddable data transformation language.

Unfortunately no time to work on it these days.


YAML no thanks.

I want something that uses BPML for actual business workflows.


Thank you. I also did a double take and concluded this must be about arm64 emulation then.

A bit OT but what about the Netherlands?

No hurricanes there, so I guess they can just slowly rebuild walls to keep up with sea level and increase the number of pumps already used to get rid of excess rainwater. Not great, but manageable as long as they secure enough money.

Aha interesting. Thanks!

As PoW goes, this is cool.

One thing I didn't understand though is Light mode:

The reference README says:

> Fast mode is for mining. Light mode is for verification.

The post only describes Fast mode, right?

Presumably verification is done by miners who already have the memory set up so Fast mode would be faster for them.

Verification is still relatively fast because you don't have to try gazillions of nonces, so why is Light mode necessary and how does it work?


Light mode is used mostly by monerod when validating incoming blocks. But on a machine with sufficient free RAM, monerod can also be set to use Fast mode instead.

The post described both modes. The only difference is that Fast mode processes the cache to generate the full 2.1GB dataset, so subsequent programs can just reference it as needed. Light mode uses only the 256MB cache and generates the required dataset values individually, on each access. That saves RAM but costs more CPU time.


Yeah like the rhinos and elephants that I didn't know you used to get in that area. Maybe they were too efficient and that's what limited their proliferation when they hit resource limits?

I've been wanting to ask this:

Why isn't

    git clone --depth 1 ... 
the default?

I would guess that for at least 90% of the repos I clone, I just want to install something. Even for the rest, I might hack on the code but seldom look into the history. If I do then I could do a `git fetch` at that point and save the bandwidth and disk space the rest of the time.
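For example, I believe a shallow clone can be deepened later with `--unshallow` (the URL here is just a placeholder):

    git clone --depth 1 https://github.com/example/repo.git
    cd repo
    # later, if you decide you do want the full history:
    git fetch --unshallow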



Thanks. That's great! I especially like that it then lazy loads the blobs as you need them.

I was going to ask if there's a way to set that as the default but I guess I'll just set up an alias like I have for most of the subcommands I use daily.
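Something like this should work as an alias, I think (the alias name is just my own pick):

    git config --global alias.sclone 'clone --depth 1'
    # then:
    git sclone https://github.com/example/repo.git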


A question: why is git involved at all in this? You don't want a repository.


Good question! Idk and I don't make the rules. I guess people default to it because most people have git installed already?

I'm thinking of LazyVim for example which has [1]:

    git clone https://github.com/LazyVim/starter ~/.config/nvim
After that, once you do a sync or update, there's a whole lot more cloning going on.

The other projects I was going to mention have apparently all switched away from using git for their package management (homebrew, Go, cargo, ...). I can't help but wonder to what extent that might have been influenced by the default slowness of doing a full git clone?

Of course these all could add `--depth 1` to their instructions or internal package management tooling, and ofc we need both options to be available. I'm just pondering aloud that, in my observation, `--depth 1` is probably the option that I want more often than not, but YMMV.

1: https://github.com/aerys/gpm


This! The default was to have a link to download a tarball of the source. And if the user wanted to contribute (or check the devel version), you would add a link to the vcs.


Grabbing git repos instead of just tarballs is useful.

A) You can update them, because you can git pull to fetch changes.

B) If you want to apply patches on top, it's better to have version control so you can keep track of what you changed, especially useful if you want to rebase.
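For B), roughly this kind of flow is what I mean (assuming the upstream branch is called main):

    # local patches live as commits on top of upstream
    git fetch origin
    git rebase origin/main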


A) only valid if you want to stay with the devel version

B) See A

I use OpenBSD and before that, I was on Alpine, Debian, and Arch. If it was software I wanted to try, I downloaded the tarball. If it was something I wanted to keep for longer, I created a port or a custom package.


You should invert your framing.

It's only *not valid* if you intend to use a fixed version forever. Otherwise you might as well include versioning for any other case.


> Otherwise you might as well include versioning for any other case.

It’s easier to version a port and its patches than to try to keep a series of patches on top of a dev branch. Not saying that your use cases are invalid, but the point of the thread was using git for building software. If you’re not developing the software, there’s no need to go from something that is working well to an unstable build every week.


Of course it's valid for release versions too: just fetch and checkout the release tag you want. I do this all the time.
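For example, something like this should work (the tag name is just illustrative):

    # in an existing clone
    git fetch --tags
    git checkout v1.2.3

    # or shallow-clone only that release
    git clone --depth 1 --branch v1.2.3 https://github.com/example/repo.git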

Juggling multiple directories and tarballs is a pastime from a bygone era. It's even more commands if you want to reuse the existing directory!


I think gitignore solves a problem that is hard to solve with the traditional tarball approach.

Downloading a tarball, running ./configure or make, editing a config file here or there, and then running `make install` is the most common flow. Nowadays I find myself frequently editing the Dockerfile to my liking. With a git repo, the owners have excluded all the local files, build caches, etc., and you can keep pulling to get updates, stashing and reapplying your local changes. With tarballs, you have to figure it all out again each time: lose your build cache (language dependent, maybe), lose a change you made here or there, and so on.
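Roughly the update flow I have in mind, as a sketch rather than any project's documented workflow:

    git stash        # set aside local edits
    git pull         # grab upstream changes
    git stash pop    # reapply local edits on top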


What if that's only you? Git isn't made only for those who "just want to install something"


Fair enough. I do also work with a monorepo at work, but that one I cloned like 5 years ago.

If I think about what I've cloned over the last week or so (LazyVim, gstack, my dotfiles), most of the time I just want the current state and be able to pull updates. Even for my dotfiles or projects that I fork and hack on, most of the time I'm just adding commits and it's seldom that I want to go back to historical ones.

Given how often I see `git clone ...` instructions in Github README.md files, I was just wondering how many other people felt the same?

So my contention is that most of the time, `git clone --depth 1` or `git clone --filter=blob:none` is what you actually want, and in the case that you want the full history you could do `git clone --depth 0` (or `git clone --full` for even better UX, not that the git CLI is known for its UX).


It's not the default because that'd be counter-productive for developers who use git with larger repositories, which is how git started life in the first place - a depth-1 clone would be entirely useless for Linux kernel developers, for example, if it were the default.

