I didn't want to nitpick terminology, but yes, the tile-placement algorithm here is just a way of solving constraint satisfaction problems with DFS using a "minimum remaining values" heuristic [0]. The original use case for generating textures [1] is different in that the constraints are implicit in the input bitmap, but this project is a more straightforward tile placement with explicit constraints.
I think this algorithm is more efficient for generating maps with only local (adjacency) constraints, but setting this up as an integer linear program and plugging it into a constraint solver is more generalizable (say, if you wanted to enforce a constraint that rivers had to flow across the whole map and could not loop).
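A rough sketch of that search, with made-up tiles and adjacency rules (DFS over cells, always expanding the cell with the fewest remaining candidates, i.e. the "minimum remaining values" heuristic):

```python
# DFS tile placement with the "minimum remaining values" heuristic.
# The tiles and adjacency rules below are hypothetical examples.
TILES = {"land", "coast", "sea"}
ALLOWED = {  # which tiles may sit next to each other
    "land": {"land", "coast"},
    "coast": {"land", "coast", "sea"},
    "sea": {"coast", "sea"},
}
W, H = 4, 3

def neighbors(x, y):
    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        if 0 <= nx < W and 0 <= ny < H:
            yield nx, ny

def candidates(grid, x, y):
    # Tiles compatible with every already-placed neighbor.
    opts = set(TILES)
    for nx, ny in neighbors(x, y):
        placed = grid.get((nx, ny))
        if placed is not None:
            opts &= ALLOWED[placed]
    return opts

def solve(grid):
    empty = [(x, y) for x in range(W) for y in range(H) if (x, y) not in grid]
    if not empty:
        return grid
    # MRV: expand the cell with the fewest candidates ("lowest entropy").
    x, y = min(empty, key=lambda c: len(candidates(grid, *c)))
    for tile in candidates(grid, x, y):
        grid[(x, y)] = tile
        if solve(grid) is not None:
            return grid
        del grid[(x, y)]
    return None  # contradiction: backtrack

result = solve({})
```

Forward checking would additionally prune neighbors' candidate sets on each placement instead of recomputing them, which is one of the known improvements from the CSP literature.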
But I agree "wave function collapse" is not really the best name, for two reasons:
- the original repository notes that "it doesn't do the actual quantum mechanics, but it was inspired by QM", yet the name still implies something QM-related.
- as an ORIE major in college who loved optimization, I think constraint satisfaction problems are really cool and actually somewhat approachable! So calling the heuristic something else like "wave function collapse" may keep people from finding previous work and known improvements (e.g. forward checking).
Colloquially this is what gamedevs mean when they refer to WaveFunctionCollapse (though the constraints may or may not be inferred from tiles or 3D models, depending on the implementation). It may not match the academic terminology exactly.
Last-Write-Wins CRDTs are nice, but I wish the article talked about where CRDTs really shine, which is when the state truly converges in a non-destructive way, for example:
1) Counters
While not really useful on their own, counters demonstrate this well:
- mutations are +n and -n
- their order does not matter
- converging the state is a matter of applying the operations of remote peers locally
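A minimal operation-based counter sketch of the three points above (class name and shapes are mine):

```python
# Op-based counter CRDT: mutations are +n / -n, and they commute,
# so replicas converge no matter the order operations are applied in.
class Counter:
    def __init__(self):
        self.value = 0
        self.ops = []   # local operations, to be shipped to peers

    def add(self, n):
        self.value += n
        self.ops.append(n)

    def merge(self, remote_ops):
        # Converging is just applying the remote peer's operations locally.
        for n in remote_ops:
            self.value += n

a, b = Counter(), Counter()
a.add(3); a.add(-1)          # replica a reads 2
b.add(5)                     # replica b reads 5
a.merge(b.ops); b.merge(a.ops)
# both replicas now read 7
```

One caveat worth knowing: op-based CRDTs assume each operation is delivered to each peer exactly once, so the transport has to deduplicate.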
2) Append-only data structures
Useful for accounting, or replication of time-series/logs with no master/slave relationship between nodes (where writes would be accepted only on a "master" node).
- the only mutation is "append"
- converging the state is applying the peers' operations, then sorting by timestamp
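A sketch of that merge (entry shape and names are made up): entries are `(timestamp, replica_id, payload)`, the only mutation is append, and convergence is union plus sort.

```python
# Append-only log CRDT: converging is taking the union of all peers' entries
# and sorting; (timestamp, replica_id) gives a stable total order, with the
# replica id breaking timestamp ties the same way on every node.
def merge_logs(*logs):
    seen = {}
    for log in logs:
        for ts, replica, payload in log:
            seen[(ts, replica)] = (ts, replica, payload)  # dedupe redeliveries
    return sorted(seen.values())

log_a = [(1, "a", "deposit 10"), (3, "a", "deposit 5")]
log_b = [(2, "b", "withdraw 4")]
merged = merge_logs(log_a, log_b)   # same result in any merge order
```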
3) Multi-value registers (and maps)
Similar to Last-Write-Wins registers (and maps), but all concurrent writes are kept: the value becomes a set of concurrent values.
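A sketch of a multi-value register using version vectors (hypothetical class, simplified bookkeeping): a write supersedes every value the writer has seen, and merge keeps every value whose clock is not dominated by another's.

```python
def dominates(vc1, vc2):
    # vc1 strictly dominates vc2: >= on every component, and not equal.
    keys = set(vc1) | set(vc2)
    return vc1 != vc2 and all(vc1.get(k, 0) >= vc2.get(k, 0) for k in keys)

class MVRegister:
    def __init__(self, replica_id):
        self.id = replica_id
        self.entries = []   # list of (version_vector, value)

    def write(self, value):
        # Merge the clocks of everything seen, then advance our component:
        # this write supersedes all values currently visible on this replica.
        vc = {}
        for seen, _ in self.entries:
            for k, v in seen.items():
                vc[k] = max(vc.get(k, 0), v)
        vc[self.id] = vc.get(self.id, 0) + 1
        self.entries = [(vc, value)]

    def merge(self, other):
        combined = self.entries + other.entries
        kept = []
        for vc, v in combined:
            if any(dominates(vc2, vc) for vc2, _ in combined):
                continue                  # superseded by a newer write
            if (vc, v) not in kept:       # drop exact duplicates
                kept.append((vc, v))
        self.entries = kept

    def values(self):
        # Concurrent writes surface as a set the application must resolve.
        return {v for _, v in self.entries}
```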
4) Many more...
Each is useful for specific use cases. And since not everybody is making collaborative tools, but many are working on distributed systems, I think it's worth mentioning this.
On another note, the article talks about state-based CRDTs, where you need to share the whole state. The examples I gave above are operation-based CRDTs, where you share the operations performed on the state and recompute it when needed.
Delta-CRDTs are an optimization over state-based CRDTs where you share state diffs instead of the whole state (described in this paper: https://arxiv.org/pdf/1603.01529 ).
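A sketch of that delta idea as a grow-only counter (hypothetical class, not the paper's full construction): mutators return a small delta, and join is the same pointwise max whether it receives a delta or a full state.

```python
# Delta-state G-counter: inc() returns just the changed component,
# so peers can ship tiny deltas instead of the whole replica map.
class GCounter:
    def __init__(self, replica_id):
        self.id = replica_id
        self.counts = {}   # replica_id -> count

    def inc(self, n=1):
        self.counts[self.id] = self.counts.get(self.id, 0) + n
        return {self.id: self.counts[self.id]}   # the delta

    def join(self, state):
        # Pointwise max: works identically for deltas and full states.
        for k, v in state.items():
            self.counts[k] = max(self.counts.get(k, 0), v)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
delta = a.inc(3)       # ship only {"a": 3}
b.inc(2)
b.join(delta)          # delta merge
a.join(b.counts)       # full-state merge, same join
```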
You would have a process handling the calls to Postgres.
That process holds the database connection as local state and receives messages that are translated into SQL queries. Two scenarios are possible:
1) The query is invalid (you are trying to insert a row with a missing foreign key, or a wrong data type).
In that case, you send the error back to the caller.
2) There is a network problem between your application and the database (might be temporary).
You just let the process crash (local state is lost), the supervisor restarts it, the restarted process tries to connect back to the database (new local state). If it still fails it will crash again and the supervisor might decide to notify other parts of the application of the problem. If the network issue was temporary, the restart succeeds.
Before crashing, you notified the caller that there was a problem and that they should retry.
Now, for the caller. You could start a transient process in a dynamic supervisor for every query. That would handle the retry mechanism. The "querier process" would quit only on success and send the result back as a message. When receiving an error, it would crash and then be restarted by the supervisor for the retry.
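A very rough Python analogy of that caller-side pattern (the real thing would be an OTP DynamicSupervisor with a transient child; every name here is invented for illustration): the worker raises on failure, and the "supervisor" restarts it up to a limit before escalating.

```python
# Python analogy of a transient worker under a supervisor: a crash (exception)
# triggers a restart; repeated crashes escalate past the restart limit.
class TransientQueryWorker:
    """Runs one query; 'crashes' (raises) on failure so it gets restarted."""
    def __init__(self, query, run_query):
        self.query = query
        self.run_query = run_query   # stand-in for the DB-connection process

    def run(self):
        return self.run_query(self.query)   # raises on network error = crash

def supervise(worker, max_restarts=3):
    # Restart on abnormal exit, up to a limit, then escalate the failure
    # (in OTP the supervisor would notify its own supervisor instead).
    for _attempt in range(max_restarts + 1):
        try:
            return worker.run()
        except ConnectionError:
            continue   # "restart" the worker with fresh state
    raise RuntimeError("max restarts reached, escalating")
```

The key difference from a plain retry loop is that the worker holds no state across attempts: each "restart" begins from a clean slate, which is what makes let-it-crash safe.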
There are plenty of other solutions, and in Elixir you have "ecto" that handles all of this for you. "ecto" is not an ORM, but rather a data-mapper: https://github.com/elixir-ecto/ecto
Hi! I found that it was the easiest language to get started with. I looked at other languages such as `HCL` (the Terraform language), but I thought it would be too complex to learn and to get started with. I really want this project to be easy to work with.
What language did you have in mind that you'd rather use instead of YAML?
YAML is actually tricky, especially with multiline strings and space handling.
I do find HCL to be simpler. But for things like Ansible, I'd rather have a real programming language, like PyInfra uses (which unfortunately I haven't gotten to try yet).
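Two classic examples of that trickiness, as a YAML 1.1 parser reads them (the comments show the resulting values):

```yaml
# '|' (literal) keeps newlines; '>' (folded) turns them into spaces:
literal: |       # -> "line one\nline two\n"
  line one
  line two
folded: >        # -> "line one line two\n"
  line one
  line two

# The "Norway problem": unquoted no/yes/on/off are booleans in YAML 1.1,
# so this country code parses as the boolean false rather than "NO":
country: no
```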
Yeah, at the beginning I really wanted to use HCL... But its parser, and implementing it, was actually a little more difficult than just doing raw YAML parsing.
It also doesn't help that I'm not that familiar with the language.
> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software
Key words are:
- permission is [...] granted
- free of charge
- without restriction
- use, copy, …
Then:
> may not be used for the purposes of […]
The license contradicts itself.
> Don't we have to ask for permission before feeding someone's years of work into an AI?
That's the point of an OpenSource license: to give permission.
This kind of stuff makes me think very few people really understand what OpenSource is about. The very same people will fall back to licenses such as the BSL as soon as people/companies use the permissions they granted, and then complain that "no one wants to pay for the thing I did for free and nobody asked for".
I understand these points. As someone who truly loves open source, I can see open-source projects becoming just free training material for AI. After training LLMs on open-source projects, AI may one day build far superior software, and that software may not be free, and may not be replaceable by any non-AI software project. We all know that day is not far off, and at that point all open-source software might be considered legacy, since no individual contributor will be able to implement things at the speed of AI. What you are protecting against is not only a legacy system built to decade-old requirements, but also the death of the very purpose for which people build free software.
What we have to focus on is why we created free software, not word-by-word terms that don't fulfill the requirements of this and future time periods.
You can't say you love opensource and be mad that users are using the freedom you granted.
OpenSource projects are not becoming free training material for AI, AI companies are using a freedom OpenSource projects granted.
The claim that AI can build far superior software is dubious, and I don't believe it for one second. And even if it were true, that would not change anything.
With or without AI, permissive licenses (MIT, BSD, ISC, ...) have always allowed the code to be used and redistributed in non-opensource software. If you don't want that, use the GPL or a derivative. If you don't believe the GPL would be enforceable on derivative works produced by AI, don't release your code as opensource.
OpenSource is essentially an ideology: that software should be free to use, transparent, and freely shareable, without restriction. If you don't buy into that ideology, it's fine, but don't claim to love OpenSource when you don't. Just like a person who eats fish should not claim to be vegan.
AI will not be the end of OpenSource, firstly because it's a dead-end technology, it has already peaked years ago and is becoming worse with each new model. It does not have the ability to build complex software beyond a CRUD app (would you use a kernel that was entirely vibecoded? would you trust it the way you trust the Linux kernel?). Secondly, because OpenSource does not discriminate who gets to enjoy the freedom you granted.
You decided to "work for free" when you decided to distribute as OpenSource. If you don't want to work for free, maybe OpenSource is not for you.
The whole point of open-source licenses is that they are legal documents that can be enforced and have legal meaning. They're not just a feel-good article. Your argument is like drafting a contract for a client and saying "oh yeah, don't worry about the word-by-word terms in the contract, wink".
Also, this "non-AI" license is plainly not open source, nor is it permissive. You can't really say you are a fan of open source when you use a license like this. The whole point of the MIT license is that you can just take it with no strings attached. You can use the software for good or for evil. It's not the license's job to decide.
There is nothing wrong with not liking open source, btw. The largest tech companies in the world all keep their most critical software behind closed doors. I just really dislike it when people engage in double-speak and go on this open-source clout chasing. This is also why all these hipster startups (MongoDB, Redis, etc.) ended up enshittifying their open-source products, IMO: because culturally we are all chasing this "we ♥ open source" meme without thinking about whether it makes sense.
If people say they "truly love open source", they should mean it.
I've built a hobby OS around BEAM... BEAM doesn't require a whole lot from the OS. I built a minimal kernel that runs a single process, which you could consider a unikernel, or at least very close to one. I had originally wanted BEAM in ring 0, but I had a lot of trouble getting started; this way, I can just use a pre-compiled BEAM for FreeBSD and don't have to fight with weird compilation options. Anyway, with x86-32 at least, I can give my ring 3 process access to all the ioports and let it request an mmap of any address, so the only drivers I need in the kernel are IRQ controllers, timers, and a pre-BEAM console. Once BEAM is up, console I/O and networking are managed from Erlang code (with a couple of NIFs).
Emphasis on "one per partition"; if I understand "partition" correctly as "network partition", that means in the absence of a network partition there is one leader.
I do have only a surface understanding of Raft, and I'm learning while doing yes.
In Raft, the state space is split into partitions. Each partition gets its own leader. For example, in a cluster of 3 nodes and 65536 partitions, each node is the leader for 1/3 of the partitions, with the two others acting as replicas. This way each node is simultaneously a leader for some partitions and a replica for others.
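The layout described above can be sketched as follows (round-robin placement is a hypothetical policy picked for the sketch; real systems assign leadership via per-partition elections and rebalancing):

```python
# Each partition is its own Raft group with one leader; spreading leadership
# across N nodes gives every node roughly P/N partitions to lead.
NODES = ["node-0", "node-1", "node-2"]
PARTITIONS = 65536

def leader_of(partition):
    # Hypothetical static placement, just to show the even spread.
    return NODES[partition % len(NODES)]

led = {n: 0 for n in NODES}
for p in range(PARTITIONS):
    led[leader_of(p)] += 1
# each node leads about a third of the 65536 partitions
```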
I'd add, though, that the "one leader" thing was not the only reason why I ditched Raft. The Go library hashicorp/raft was quite complex to use, and I had a lot of situations where the cluster failed to elect a leader and ended up with a corrupted state.
It's a log management/processing software, with visual scripting.
Started out of frustration with OpenObserve and its inability (at the time) to properly/easily refine/categorize logs: we had many VMs, with many Docker containers, and some containers running multiple processes. Parsing the logs and routing them to different storages was crucial to ease debugging/monitoring.
It was initially built in Go + HTMX + React Flow encapsulated in a WebComponent, I then migrated to React (no SSR). It integrates VRL using Rust+CGO.
It is by far easier to use than Logstash and similar tools, and in fact it aims to replace them.
The goal of the original algorithm ( https://github.com/mxgmn/WaveFunctionCollapse ) is to infer the constraints from a sample, and then run a constraint solver.
Hard-coding the constraints skips the whole point of the algorithm (which is also badly named by the way).