CupricTea's comments

I was just in Europe this February. I took a bus from France to Germany and customs checked the passports of everyone on board.

>who want zig to "win over rust" for whatever reasons

I don't understand why this mentality is so common. Zig and Rust are both fine languages with markedly different design goals and they can coexist.


Honestly, I don't know. I think frustration is part of it, but so is community attitude. I've seen first hand how people frustrated with Rust move to Zig, find other people eager to pick on Rust, and find fertile ground for it (especially when moderators and heads of the community let that kind of behavior continue).

Rust has never been about outright eliminating unsafe code; it's about encapsulating that unsafe code behind a safe, externally usable API.

When creating a dynamically sized array type, it's much simpler to reason about its invariants when only its public methods have access to its length and capacity fields, rather than trusting the user to remember to update those fields themselves.

The above is an analogy, and it's obviously fixed by opaque accessor functions, but Rust takes it further by encapsulating raw pointer usage itself.

The whole ethos of unsafe Rust is that you encapsulate usages of things like raw pointers and mutable static variables in smaller, more easily verifiable modules rather than having everyone deal with them directly.
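A minimal sketch of what that looks like (a toy stand-in for std's Vec, with allocation-failure handling omitted): the raw pointer, length, and capacity are private, so the only code that can break the invariants is the small amount inside this type, and callers only ever see the safe methods.

    use std::alloc::{alloc, dealloc, realloc, Layout};

    pub struct TinyVec {
        ptr: *mut i32, // private: callers can't corrupt these fields
        len: usize,
        cap: usize,
    }

    impl TinyVec {
        pub fn new() -> Self {
            TinyVec { ptr: std::ptr::null_mut(), len: 0, cap: 0 }
        }

        // All raw-pointer work is confined here; the public API alone
        // can't break the `len <= cap` invariant.
        pub fn push(&mut self, value: i32) {
            if self.len == self.cap {
                let new_cap = if self.cap == 0 { 4 } else { self.cap * 2 };
                let new_layout = Layout::array::<i32>(new_cap).unwrap();
                self.ptr = unsafe {
                    if self.cap == 0 {
                        alloc(new_layout) as *mut i32
                    } else {
                        let old = Layout::array::<i32>(self.cap).unwrap();
                        realloc(self.ptr as *mut u8, old, new_layout.size()) as *mut i32
                    }
                };
                self.cap = new_cap;
            }
            unsafe { self.ptr.add(self.len).write(value) };
            self.len += 1;
        }

        pub fn get(&self, i: usize) -> Option<i32> {
            // Sound because `len` only ever grows alongside valid writes.
            (i < self.len).then(|| unsafe { self.ptr.add(i).read() })
        }
    }

    impl Drop for TinyVec {
        fn drop(&mut self) {
            if self.cap != 0 {
                let layout = Layout::array::<i32>(self.cap).unwrap();
                unsafe { dealloc(self.ptr as *mut u8, layout) };
            }
        }
    }

    fn main() {
        let mut v = TinyVec::new();
        v.push(1);
        v.push(2);
        assert_eq!(v.get(1), Some(2));
        assert_eq!(v.get(5), None); // no way to read past len
    }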


>when you a pass a pointer to my function, do I take ownership of your pointer or not?

It's honestly frustrating how prevalent this is in C, and the docs often don't even tell you. If you guess that the function takes ownership, make a copy for it, and you're wrong, you've just leaked memory; if you guess the other way, you now have the potential for a double free, a use after free, or the data being mutated behind your back.
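For contrast, here's a small Rust sketch of how the signature itself answers the ownership question (Buffer, consume, and inspect are made-up names): pass by value transfers ownership, pass by reference borrows, and both wrong guesses from the C scenario become compile errors instead.

    struct Buffer {
        data: Vec<u8>,
    }

    // Takes ownership: the caller can no longer use the value, so a
    // double-free is impossible by construction.
    fn consume(buf: Buffer) -> usize {
        buf.data.len()
    } // `buf` is dropped (freed) here

    // Borrows: the caller keeps ownership, and the compiler guarantees
    // `buf` outlives the call, so no use-after-free either.
    fn inspect(buf: &Buffer) -> usize {
        buf.data.len()
    }

    fn main() {
        let a = Buffer { data: vec![1, 2, 3] };
        println!("{}", inspect(&a)); // `a` is still usable afterwards
        println!("{}", consume(a)); // `a` is moved; it's gone now
        // println!("{}", inspect(&a)); // compile error: use after move
    }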


I remember talking about this concept with my brother a while back. Since LLMs have no neuroplasticity, they're locked into whatever they were trained on at the time they were trained. A model trained in 2026 would stay exactly the same for someone in 2126 to use to gain insight into our time. Like a book you can actually talk to.

Tiny Glade launched to 10,692 concurrent players with a 97% overwhelmingly positive score on Steam.

Calling that "not very notable" for an indie title is pretty ignorant.


GitHub is at the point where it rate limits me immediately if I try to look at a project's commit history without being logged in: the very first time I open a single commit-history URL, I get "Too Many Requests" thrown at me. I don't know if my work's antivirus stack is making GitHub suspicious of me, but it's definitely egregious.


It’s not you or your setup. I experience the same behavior. Tried with and without Private Relay, residential and commercial ISPs at different locations, and more to debug it. Same results.

I think GitHub has just gotten so aggressive with their rate limit policies that it’s straight up incompatible with their own product. The charitable interpretation is that they aren’t keeping good track of how many requests each page actually performs in order to calibrate rate limiting.


If you didn't specifically test without it, I'd attribute that to CGNAT


On the other side of the coin, they also punish people who have slow connections. The acceptable speed for downloading from GitHub on my connection is 90k/sec. No more, no less. Something (probably GitHub) prevents the rate from being any higher, and if it drops any lower for any length of time, the connection suddenly aborts right in the middle of the download. Since the dumpster fire that is git doesn't support resuming, welcome to hell. If I didn't have a fast server elsewhere to clone to, zip up, and re-download from, I'd be screwed.


My theory is that they rate limit that URL aggressively due to AI scrapers. At this point it's faster to just clone the repo and do your searching locally.


Your work traffic is probably all exiting through the same IP; competing with others on that IP is what triggers the rate limit.


The very same thing happens on my residential connection: I can do one search query, then I'm rate limited for 15+ minutes. Same if I access any list of commits.


I've considered this, but the company is small enough that the number of people who would be on GitHub at any moment (instead of our internal git forge) can be counted on one hand, and when I'm the first one there in the morning it still rate limits me.


Do you have any on-prem CI/CD jobs that access GitHub? Ours kept failing; we had to move over to the ECR release of some stuff.


Hm, I've also noticed sites being more aggressive about verifications after I started using LLMs locally. They think I'm a bot (which... fair), even on completely unrelated sites I seem to be getting prompted for human verification much more often.


Maybe your company's ISP is CGNAT'ing you?


May explain the IPv6 resistance. It's hard to do effective per-IP rate limiting with v6.


I don't understand; wouldn't it make it easier?


No, IPv6 as it is supposed to be implemented gives (say) a single server a /64, which is for all intents and purposes an inexhaustible supply of IPs. You could in principle have an IP per site you visit and have plenty left to spare.

Random Google result with a bit more:

https://www.captaindns.com/en/blog/ipv6-subnet-sizes-48-vs-5...

So if I wanted to annoy GitHub, I could connect to them without ever using the same IP twice. Their response would have to be banning my /64, or possibly /56.


> No, IPv6 as it is supposed to be implemented gives (say) a single server a /64, which is for all intents and purposes an inexhaustible supply of IPs. You could in principle have an IP per site you visit and have plenty left to spare.

No, as it's supposed to be implemented a single internet-routable /64 is used per *network* and then most devices are expected to assign themselves a single address within that network using SLAAC.

ISPs are then expected to provide each connected *site* with at least a /56 and in some cases a /48, so the site's admins can split that apart into /64s for whatever networks they may have running at the site. That said, I'm on AT&T fiber and am allocated a /60 instead, which IMO is still plenty for a home internet connection, because even the most insane homelab setups are rarely going to need more than 16 subnets.

> So if I wanted to annoy GitHub, I could connect to them without ever using the same IP twice. Their response would have to be banning my /64, or possibly /56.

Well yeah, but it's not exactly rocket science to implement IP rate limiting or blocking at the subnet level instead of per individual IP. For those purposes you can basically assume that a v6 /64 is equivalent to a v4 /32. A /56 is more or less comparable to the /25 through /29 block assignments from a normal ISP, and a /48 is comparable to a /24, the smallest network that can be advertised in the global routing tables.
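Something like this, say (a sketch, not anything GitHub actually runs): mask off the interface half of a v6 address before bucketing, and a whole /64 collapses to one rate-limit key, exactly like a single v4 address.

    use std::net::{IpAddr, Ipv6Addr};

    // Bucket v6 clients by their /64 and v4 clients by full address.
    fn rate_limit_key(addr: IpAddr) -> u128 {
        match addr {
            // v4: the whole address is the key.
            IpAddr::V4(v4) => u32::from(v4) as u128,
            // v6: zero the low 64 bits so every address in the same
            // /64 shares one bucket.
            IpAddr::V6(v6) => u128::from(v6) & !((1u128 << 64) - 1),
        }
    }

    fn main() {
        let a: Ipv6Addr = "2001:db8:1:2::1".parse().unwrap();
        let b: Ipv6Addr = "2001:db8:1:2::dead:beef".parse().unwrap();
        // Two addresses in the same /64 land in the same bucket.
        assert_eq!(rate_limit_key(a.into()), rate_limit_key(b.into()));
        println!("bucket {:#034x}", rate_limit_key(a.into()));
    }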


It's not harder to rate limit a /64, though.


It is, because the IPv6 rollout has not been consistent. Some assign a /64 per machine, some assign a /64 per data center. Some even go the other way and do a /56 per machine. We've had to build up a list of overrides to do some ranges by /64 and others by /128 because of how they allocate addresses. This creates an extra burden on server operators, and it's not surprising that some just choose not to deal with it.
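For the curious, a sketch of what such an override list can look like (the ranges below are made-up documentation-prefix addresses, not real hosters): each entry maps a known range to the prefix length used for limiting, with /64 as the default.

    use std::net::Ipv6Addr;

    // overrides: (range base, range prefix length, prefix length to
    // rate limit by within that range)
    fn limit_prefix(addr: Ipv6Addr, overrides: &[(Ipv6Addr, u32, u32)]) -> u32 {
        let a = u128::from(addr);
        for &(base, range_len, limit_len) in overrides {
            let mask = u128::MAX << (128 - range_len);
            if a & mask == u128::from(base) & mask {
                return limit_len;
            }
        }
        64 // default: treat a /64 like a single v4 address
    }

    fn main() {
        let overrides = [
            // Made-up example: a hoster whose whole data center shares
            // one /64, so limit by the full /128 address in its range.
            ("2001:db8:ffff::".parse().unwrap(), 48, 128),
        ];
        let shared: Ipv6Addr = "2001:db8:ffff:1::5".parse().unwrap();
        let normal: Ipv6Addr = "2001:db8:1::5".parse().unwrap();
        assert_eq!(limit_prefix(shared, &overrides), 128);
        assert_eq!(limit_prefix(normal, &overrides), 64);
    }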


This problem exists for IPv4 too: some machines have static addresses, others dynamic, so you end up implementing overrides there as well.


IPv6 is cheap, though. If I want to get past your per-IP or per-network limit, options abound.


What can you do to get a new IPv6 network that is easier than getting a new IPv4?

Stuff like bouncing a modem, getting a new VPS, or making a VPN connection, I would expect to be pretty similar. And getting a block officially allocated to you is a lot of work.


If you allocate a dedicated spam network, it will make spam easy to detect and block.


Why are we pretending that you're checking logs and adding firewall rules manually? Anything worth DDoSing is going to have automatic systems that take care of this. If not, put an AI agent on it.


Artifacts from 700kya were not left by anatomically modern humans.


Something I've never understood about public likes is why they ever existed in the first place.

Previously, retweeting would show something to your followers, and liking tweets would...show them to your followers...

Two ways to do the exact same thing. So it just added cognitive pressure over which action to pick.


PCIe is probably the most future-proof technology we have right now. Even if it's upended at the hardware level, from the software perspective it just exposes a device's arbitrary registers at some memory-mapped location. Software drivers for PCIe devices will continue to work the same.
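A sketch of what that means concretely (STATUS_REG and the fake BAR below are invented for illustration, not a real device): the driver just does volatile reads at fixed offsets from a base address the OS mapped for the device.

    use std::ptr::read_volatile;

    // Hypothetical register offset, in bytes; real devices document
    // theirs in a datasheet.
    const STATUS_REG: usize = 0x04;

    // Volatile, because the device may change the register at any time
    // and the compiler must not cache or elide the access.
    unsafe fn read_status(bar: *const u32) -> u32 {
        read_volatile(bar.add(STATUS_REG / 4))
    }

    fn main() {
        // Stand-in "device memory" so the sketch runs without hardware.
        let fake_bar = [0u32, 0xDEAD_BEEF, 0, 0];
        let status = unsafe { read_status(fake_bar.as_ptr()) };
        assert_eq!(status, 0xDEAD_BEEF);
    }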

