Hacker News | Tuna-Fish's comments

> HVDC

Sorry, no. Our recent experiences during the energy crisis caused by the Russian invasion of Ukraine showed us that we cannot trust energy sources outside our own borders.

> overbuild solar

The effective sunlight in November in Finland is measured in single-digit hours per month. That's not a joke, or an exaggeration. Solar is completely out of the question.

Right now, the only carbon-free solution is fission. Fusion potentially adds another, but that's far off still.


> Sorry, no. Our recent experiences during the energy crisis caused by the Russian invasion of Ukraine showed us that we cannot trust energy sources outside our own borders.

You could trust Sweden, Estonia, etc. since they're all in the EU. Also Norway. But overall good point.

> Right now, the only carbon-free solution is fission. Fusion potentially adds another, but that's far off still.

I've never been to Finland, but I'm sure there's some wind there too.

But on the subject of war, fission turns out to be a huge vulnerability for Ukraine. Fusion would be better but it'd still be extremely expensive infrastructure that could be very easily disabled. So from the war standpoint what's probably most beneficial is a very distributed usage of wind/solar.


> You could trust Sweden, Estonia, etc. since they're all in the EU. Also Norway. But overall good point.

Your neighbors have winter at the same time as you. HVDC only solves this problem if it goes very far.


> You could trust Sweden, Estonia, etc. since they're all in the EU. Also Norway. But overall good point.

No. I wasn't just referring to loss of supply from Russia. What I was referring to was that when supply from Russia was lost, every country in the EU scrambled to secure their own supply, essentially competing on who could fuck over their neighbors the most. (It was Germany. Germany wins that prize.) No supply outside our borders can be trusted.

> I've never been to Finland, but I'm sure there's some wind there too.

Finland is subject to a weather phenomenon where a stable anticyclone forms over the country, resulting in a high-pressure system that's essentially still. In winter, this can result in weeks of dead calm during the coldest temperatures experienced in the country. We already have a lot of wind capacity, and whenever this happens the electricity prices spike sky-high.

> But on the subject of war, fission turns out to be a huge vulnerability for Ukraine. Fusion would be better but it'd still be extremely expensive infrastructure that could be very easily disabled.

We are a NATO member, and we have our own long range strike capability. If Russia directly attacks, Moscow will burn, which is why they likely won't. But Putin likes to play these hybrid games, where he tries his best to fuck over everyone without directly attacking.


> Improvement in production processes and materials (e.g. magnetic coatings) allowing smaller tracks

Improvements in coatings improve the data per track, but no improvement was needed for increasing the number of tracks. On a 1.44MB drive there are 100,000 bits per track, but only 80 tracks per side. Or, in other terms, the length of a single bit along the track (on the innermost track) was ~1.2µm, and the width of that same bit, sideways to the track, was ~200µm, for an aspect ratio of 166:1. As far as the media was concerned, roughly a 10:1 aspect ratio would have been more than enough, so a normal 1.44MB floppy could have supported more than 1000 tracks per side.
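As a sanity check, the geometry above works out with quick arithmetic. The bit and track counts are from the comment; the radius of the innermost track and the width of the recordable band (~20mm each on a 3.5" disk) are assumed round numbers for illustration:

```python
import math

bits_per_track = 100_000
tracks_per_side = 80

# Innermost track at radius ~20mm (assumption):
inner_circumference_um = 2 * math.pi * 20 * 1000
bit_length_um = inner_circumference_um / bits_per_track   # ~1.26 µm

# ~20mm of recordable band shared by 80 tracks (assumption):
track_pitch_um = 20 * 1000 / tracks_per_side              # ~250 µm per track
aspect_ratio = track_pitch_um / bit_length_um             # roughly 200:1

print(round(bit_length_um, 2), round(track_pitch_um), round(aspect_ratio))
```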

The limiting factor was that old floppies had no way for the head to follow the track; it was just indexed into a fixed position by the drive mechanism. This meant that the tracks had to be ridiculously wide to support all the possible misalignment on both the reader and the writer. To improve track density, what was needed was some mechanism to make the head locate the tracks and follow them as the disk rotated underneath. Iomega solved this by etching shallow concentric circles for the tracks on the surface of the disk. These rings were essentially invisible to the magnetic head, but allowed a separate laser to pick them up and follow them.


"Finders keepers" is a legal principle common only in common law countries. In most of the world, in no way could you be construed to own something just because you found it in the ground.

In most civil law countries, everything always has a legal owner (usually reverting to the state when no other legal owner can be found), and if you just "find" something and take it, you have committed theft. In Germany, the antiquities law is clear that anything of significant historical value belongs to the state, with a monetary reward possible for the finder in some situations (and finding something and not reporting it is a crime). If an old coin is deemed to not be historically significant, it probably belongs to the landowner.


> If an old coin is deemed to not be historically significant, it probably belongs to the landowner.

According to § 984 BGB, a historically insignificant find belongs to the finder and landowner in equal shares.[1] If the find is so important that it is considered a "cultural monument" (Kulturdenkmal), the law of the individual German state determines who owns it and whether, and how much, compensation is paid to the finder.[2]

[1] https://www.gesetze-im-internet.de/bgb/__984.html (in German)

[2] For details see https://de.wikipedia.org/wiki/Schatzregal#Deutschland (in German)


The oldest written text that definitely refers to it is the Dream Stele by Thutmose IV, which describes him having it dug free of sand. The monument was more than a thousand years old at that point.

Young kings showing their piety by restoring old monuments was useful royal propaganda. This wasn't even the last time that the Sphinx was restored.


... But that's no different from IPv4. Sometimes you have one per user, sometimes there are ~1000 users per IP.

Most of the IPv4 world is now behind CGNAT; one user per IP is simply a wrong assumption.


Anonymous rate limits for us are skewed towards preventing abusive behavior. Most users do not have a problem, even if there is CGNAT on IPv4.

For IPv6, if we block on /128 and a single machine gets a /64, a malicious user has a near-infinite supply of IPs. In the case of Linode and others that do a /64 for a whole data center, it's easy to rate limit the whole thing.

Wrong assumption or not, it is an issue that is made worse by IPv6.
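A common mitigation is to key the limiter on the /64 prefix rather than the full /128. A minimal sketch using Python's stdlib `ipaddress` module (the bucketing policy itself is just an illustration, not anyone's actual implementation):

```python
import ipaddress

def rate_limit_key(addr: str) -> str:
    """Collapse an address to a rate-limit bucket:
    IPv6 -> its /64 prefix, IPv4 -> the address itself."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        # strict=False lets us pass a host address, not a network base.
        return str(ipaddress.ip_network(f"{ip}/64", strict=False))
    return str(ip)

# Two addresses from the same /64 land in the same bucket:
print(rate_limit_key("2001:db8::1"))          # 2001:db8::/64
print(rate_limit_key("2001:db8::dead:beef"))  # 2001:db8::/64
print(rate_limit_key("203.0.113.7"))          # 203.0.113.7
```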


I don't doubt your experience, but I wouldn't expect it to continue. I don't think Tuna-Fish is correct that "most" of the IPv4 world is behind CGNAT, but that does appear to be the trend. You can't even assume hosting providers give their subscribers their own IPv4 addresses anymore. On the other hand, there's a chance providers like Linode will eventually wise up and start giving subscribers their own /64 - there are certainly enough IPv6 addresses available for that, unlike with IPv4.

> I don't think Tuna-Fish is correct that "most" of the IPv4 world is behind CGNAT

~60%+ of internet traffic is mobile, which is ~100% behind CGNAT.

On desktop, only ~20% of US and European web traffic uses CGNAT, but in China that number is ~80%, in India ~70% and varies among African countries but is typically well over 70%, with it being essentially universal in some countries.

Overall, a bit over 80% of all IPv4 traffic worldwide currently uses CGNAT. It's just distributed very unevenly, with US and European consumers enjoying high IP allocations for historical reasons, and the rest of the world making do with what they have.
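For what it's worth, the overall figure follows from a rough weighted average of the shares quoted above (the regional split of desktop traffic is an assumed simplification, so treat this as a sanity check rather than a measurement):

```python
# Back-of-the-envelope check of the overall CGNAT share.
mobile_share = 0.60            # of all traffic, ~100% behind CGNAT
desktop_share = 0.40

# Assumed split of desktop traffic and its CGNAT rates:
desktop_mix = [
    (0.5, 0.20),   # US/Europe: ~20% behind CGNAT
    (0.5, 0.75),   # rest of world: ~70-80% behind CGNAT
]

desktop_cgnat = sum(weight * rate for weight, rate in desktop_mix)
total = mobile_share * 1.0 + desktop_share * desktop_cgnat
print(f"{total:.0%}")   # ~80% with these assumptions
```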


Oh wow, thanks for those numbers!

Since mmbleh mentioned Linode I'm guessing they're more concerned with traffic from servers, where CGNAT is uncommon. But even that may be changing - https://blog.exe.dev/ssh-host-header


Yeah, our traffic is more from automated systems/servers, nothing from mobile

Yeah, absolutely no expectations for the future. My point was more that while there may be clear benefits for users, IPv6 presents real problems for service operators with no clear solutions in sight.

Given that GitHub also offers free services for anonymous users, I can imagine they face similar problems. The easiest move is simply to just not bother, and I can't blame them for it.


If a single machine gets /64 and you rate limit by /64, what doesn't work?

>Linode and others that do /64 for a whole data center

That's how it's supposed to work.


> That's how it's supposed to work.

According to who?

It could fit best practices if your datacenter has one tenant and they want to put the entire thing on a single subnet? In general I would expect a datacenter to get something like a /48 minimum. Even home connections are supposed to get more than /64 allocated.

And Linode's default setup only gives each server a single /128. That's not how it's supposed to work. But you can request /64 or /56.
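For scale, the relationship between these prefix sizes is simple powers of two; a quick check with the stdlib (the prefixes below are documentation addresses, not real allocations):

```python
import ipaddress

# Documentation prefixes used purely for illustration:
site = ipaddress.ip_network("2001:db8::/48")         # typical site allocation
home = ipaddress.ip_network("2001:db8:1:ff00::/56")  # common home allocation

# Number of /64 subnets each contains:
print(2 ** (64 - site.prefixlen))  # 65536
print(2 ** (64 - home.prefixlen))  # 256
```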


If the OS uses SLAAC by default, then it will just work, but SLAAC is for humans and makes less sense for web servers (though it can make sense for VPN servers). For web servers, /128 is more meaningful.

HBF (High Bandwidth Flash) is coming fast, with the first parts expected to sample to customers this year.

The storage technology of flash memory can be optimized to be as fast as, and more energy-efficient than, DRAM at large linear reads; there was just little demand before, because doing so costs you ~half of your density and doesn't improve your writes at all. All the flash memory manufacturers realized that this is a huge opportunity for model weights and are now chasing it.

Or in other words, after the initial price peak stabilizes in a few years, it will be reasonable to put ~500GB of weights into a device for ~$100 in memory costs.


Give Claude a separate user, and make the tests not writable for it. Generally you should limit Claude to write access only to the specific things it needs to edit; this will save you tokens because it will fail faster when it goes off the rails.
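A minimal sketch of the "tests not writable" part using stdlib permission bits (the directory name is hypothetical, and a separate user or a sandbox remains the stronger isolation — this only stops non-root users):

```python
import stat
from pathlib import Path

def make_read_only(root: str) -> None:
    """Strip the write bits from a directory tree so an agent
    running as a non-root user cannot modify it."""
    for path in [Path(root), *Path(root).rglob("*")]:
        mode = path.stat().st_mode
        path.chmod(mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

# e.g. make_read_only("tests")  # hypothetical project layout
```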

You don't even need a separate user if you're on Linux (or WSL); just use the sandbox feature, which lets you specify allowed directories for read and/or write.

The sandbox is powered by bubblewrap (also used by Flatpaks), so I trust it.


That difference will fade with time.

The pictures were taken months later, so some fading had already happened.

Artemis II is basically a test mission for Orion. And while, flippantly put, Orion isn't doing anything that Apollo didn't do first, it definitely does it with a lot more margin: more living space, more safety and redundancy, and an actual toilet instead of gross poop bags you had to manipulate your waste into.

While the speed increases weren't as dramatic, do note that even in single-core speed, contrary to what the clocks would suggest, the Ryzen 7 is much, much more than 1.23x faster than the P4. The P4 was a particularly fragile architecture, and the achieved IPC on real code was typically well below 1, often closer to 0.5. The X3D variants of Ryzen have been measured running above 3 average IPC on real, complex loads. So the single-core uplift from that P4 to a modern AMD core is about the same as from that 300MHz Pentium to the 3.8GHz P4; it just took 20 years, not 8. Of course, now we also have 8 times the cores.
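A back-of-the-envelope check of that uplift, using the approximate clock ratio and IPC figures quoted above (all rough numbers, not benchmarks):

```python
# Single-core throughput scales roughly as clock * IPC.
p4_clock, p4_ipc = 3.8e9, 0.5     # Pentium 4: achieved IPC often ~0.5
ryzen_clock = 3.8e9 * 1.23        # "1.23x" the P4's clock
ryzen_ipc = 3.0                   # X3D Ryzen: ~3 IPC on real loads

uplift = (ryzen_clock * ryzen_ipc) / (p4_clock * p4_ipc)
print(round(uplift, 1))  # ~7.4x, far beyond the 1.23x the clocks suggest
```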
