Could we even catch up to them at all with the current propulsion technology? Not only did they have decades of head start but they took advantage of a unique planetary alignment that I don't think will come back around anytime soon.
Yes, easily. The alignment doesn't really matter for that. Almost all your speed gain comes from Jupiter alone. Saturn is 30% of the mass and about 2/3 of the orbital velocity, so your gain from Saturn is only about 20% of what you can get from Jupiter (and your potential gain is further limited by a minimum approach distance greater than the rings' extent, or you'd hit them). The ice giants are slower and smaller yet; Voyager barely gained from Uranus and actually slowed down at Neptune, since it wasn't routed to gain speed there.
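You can put rough numbers on this with the standard patched-conic bending-angle formula. All figures below are approximate, and the 10 km/s hyperbolic excess speed and the periapsis distances are assumptions for illustration:

```python
# Rough patched-conic flyby sketch. The gravitational parameters are
# standard values; v_inf and the periapsis radii are assumptions.
MU_JUPITER = 1.267e8   # km^3/s^2
MU_SATURN  = 3.79e7
R_P_JUPITER = 2.0e5    # km, assumed safe periapsis above the cloud tops
R_P_SATURN  = 1.4e5    # km, roughly the outer edge of the main rings

def flyby_delta_v(mu, r_p, v_inf):
    """Velocity change (in the planet's frame) from turning v_inf by
    the maximum bending angle: sin(delta/2) = 1/(1 + r_p*v_inf^2/mu)."""
    sin_half = 1.0 / (1.0 + r_p * v_inf**2 / mu)
    return 2.0 * v_inf * sin_half

v_inf = 10.0  # km/s, assumed approach speed
dv_j = flyby_delta_v(MU_JUPITER, R_P_JUPITER, v_inf)
dv_s = flyby_delta_v(MU_SATURN, R_P_SATURN, v_inf)
print(f"Jupiter: {dv_j:.1f} km/s, Saturn: {dv_s:.1f} km/s")
```

Note the planet-frame turn is only somewhat worse at Saturn; the bigger factor is that the heliocentric gain scales with the planet's own orbital speed, which is where Saturn's slower orbit (on top of the ring-limited periapsis) costs you.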
New Horizons achieved 80% of Voyager's velocity with just Jupiter, and it wasn't really trying to optimize for speed; it approached Jupiter only to about 2.3 million km (over 30x the planet's radius). A probe dedicated to a fast slingshot past Jupiter could easily overtake Voyager. We just haven't had any need to try, and likely won't unless a mission dedicated to studying the heliopause / interstellar boundary ever happens. It would still take a while to make up Voyager's head start, but it's doable.
The alignment for Voyager was captivating, but it really wasn't as important as people typically think. Jupiter alone can get you anywhere and launch windows for it come every 12 years. If the four-planet alignment hadn't happened then, realistically we would have just done separate Jupiter-Uranus and Jupiter-Neptune missions.
Correct, both of them are really, really old accuracy-wise. N64 emulation has improved a lot in the past 4-5 years, but old emulators haven't caught up.
Traditionally, emulators relied heavily on HLE. Low-level efforts are recent and not mature.
The MiSTer core for N64 (and ModRetro's M64 core effort by the same person) and Ares' N64 support are the only two serious efforts I am aware of. They tend to share compatibility issues and advance together as understanding of the platform grows.
Obviously this is just a personal judgment, but I believe N64 is currently understood at quite a good level. Most of the docs are on https://n64brew.dev/. Low level efforts are recent for sure, though I'm not sure I would rate them as "not mature". Ares is able to run most of the library (including 64DD) and all the homebrew library with zero per-game configurations or tweaks.
The standards I applied are not some subjective "good level" but bsnes-level. The way Near intended.
The one game I am aware of and keep checking is "Wonder Project J2 - Koruro no Mori no Jozet".
Broken in both Ares and the MiSTer core. AIUI nobody knows yet why it does not work, which shows there are still gaps in the understanding of the machine. Otherwise it's not an issue for me, as I can run it on the actual hardware, which I own.
Note that, in no small way, I do appreciate the efforts. The state of the art of N64 emulation is much better now than just a few years ago. But it sure is not there yet.
Unfortunately nowadays id Software doesn't seem to be at the cutting edge of engine technology anymore. Most interesting new developments now come from Unreal Engine as far as I can tell. Like virtual geometry (Nanite) or efficient ray traced direct illumination (MegaLights).
The id Tech 8 engine is a whole lot more performant than Unreal Engine 5, and it absolutely does what it needs to (fantastically, I would add) for the game it was made for.
No, they are only using ray traced global illumination, which Unreal Engine already had several years prior (Lumen). They are not second place either, because several other engines also had it before id Tech.
> They ripped our Carmack's texture streaming stuff outta the engine years ago though
I'm pretty sure they are still using texture streaming. There is no alternative to that.
The PS1 doesn't have an FPU but got a version of Quake 2, so it's possible. That said, it was somewhat different from the PC version, so it could be argued that it's not the same game.
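Without an FPU, ports like this typically replaced floating point with fixed-point integer math. A minimal sketch of 16.16 fixed point (the exact formats real PS1 code used varied; this is just the common general-purpose convention):

```python
# 16.16 fixed-point sketch: a 32-bit integer holds value * 2^16.
# The exact formats real PS1 code used varied; this is illustrative.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    # The double-width intermediate is the expensive part on a 32-bit CPU.
    return (a * b) >> FRAC_BITS

def to_float(a: int) -> float:
    return a / ONE

a = to_fixed(1.5)
b = to_fixed(2.25)
print(to_float(fixed_mul(a, b)))  # 1.5 * 2.25 = 3.375
```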
I can't speak on Quake, but I was a level designer on the failed effort to port Unreal to PSX.
My understanding from talking to the coders at the time was that Unreal's software renderer was a huge advantage as a starting point. They were able to reuse a lot of the portal rendering stuff as setup on the R3K CPU, but none of the rasterization. That had to go to the graphics core, which was a post-setup 2D engine that, in addition to the usual sprites, could do tris and quads.
We had a budget of about 3k polygons post-clipping, and having two enemies on screen would burn about half of that. The other huge limit was that the texture cache was tiny, so we couldn't do lightmaps. Our lighting was baked in at the vertex level and it just was what it was.
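Baking lighting at the vertex level boils down to evaluating the lights once per vertex offline and storing a color, instead of computing a grid of lightmap texels. A toy sketch of the idea (the Lambert model and all names here are assumptions for illustration, not the actual Unreal PSX code):

```python
import math

def bake_vertex_light(vertex, normal, light_pos, light_color, ambient=0.1):
    """Toy offline Lambert bake: one color stored per vertex, no lightmap."""
    # Direction from the vertex to the light, normalized.
    lx, ly, lz = (light_pos[i] - vertex[i] for i in range(3))
    dist = math.sqrt(lx*lx + ly*ly + lz*lz)
    l = (lx/dist, ly/dist, lz/dist)
    # Lambert term, clamped so back-facing light contributes nothing.
    n_dot_l = max(0.0, sum(n*d for n, d in zip(normal, l)))
    return tuple(min(1.0, ambient + c * n_dot_l) for c in light_color)

# Vertex directly below a white light, normal facing up: fully lit.
lit = bake_vertex_light((0, 0, 0), (0, 1, 0), (0, 10, 0), (1.0, 1.0, 1.0))
# Same vertex with a sideways normal: ambient only.
dark = bake_vertex_light((0, 0, 0), (1, 0, 0), (0, 10, 0), (1.0, 1.0, 1.0))
print(lit, dark)
```

At runtime the rasterizer just interpolates these per-vertex colors (Gouraud shading), which is why the texture cache size doesn't matter for it.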
I imagine the situation with Quake was comparable. The BSP stuff would carry right over, but I can't imagine they got lightmapping proper working at the time. They'd also need some sort of solution for overdraw, as Quake's PVS was a lot more loose than Unreal's portal clipping.
The PS1 version uses a custom engine based on technology built for the game Shadow Master, the previous title by Hammerhead Studios. It was a technical tour de force for the original PlayStation.
They removed all UT games from online stores and added UT3X, which is a free version of UT3 with Epic Online Services baked in instead of the original ones.
The only way one can legally get UT99 now is to buy a physical copy. And it had been like that for many years prior to the event above, which also disabled the server browser after 22 years of running intact.
Yeah that part didn't make sense, not to mention that neither the PS3 nor the 360 were running 64-bit software. They didn't have enough memory for it to be worth it.
You don't need memory to make 64-bit software worth it, just a requirement for 64-bit mathematics. Which basically no video game console has, as from what I understand 32-bit floating point continues to be the state of the art in video game simulations.
Fundamentally it's still a memory limitation, just in terms of memory latency/cache misses instead of capacity. If you double the size of your numbers you're doubling the space it takes up and all the problems that come with it.
No it isn't. The 64-bit capabilities of modern CPUs have almost nothing to do with memory. The address space is rarely a full 64 bits of physical address space anyway; a "64-bit" computer doesn't actually have the ability to address 64 bits' worth of memory.
If you double the size of your numbers, sure, it takes up twice the space. If the total size is still less than one page it isn't likely to make a big difference anyway. What really makes a difference is trying to do 64-bit mathematics with 32-bit hardware. This implies some degree of emulation with a series of instructions, whereas a 64-bit CPU could execute that in one instruction. That one instruction very likely executes in fewer cycles than a series of other instructions; otherwise no one would have bothered with it.
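That "series of instructions" can be sketched concretely: a 64-bit add on a 32-bit machine becomes two 32-bit adds with carry propagation between them. On real hardware this is an add / add-with-carry instruction pair; the Python below just models the register-width limit:

```python
MASK32 = 0xFFFFFFFF

def add64_on_32bit(a_lo, a_hi, b_lo, b_hi):
    """Model a 64-bit add using only 32-bit quantities: add the low
    halves, then propagate the carry into the sum of the high halves.
    On real 32-bit hardware this is an add / add-with-carry pair."""
    lo = (a_lo + b_lo) & MASK32
    carry = 1 if a_lo + b_lo > MASK32 else 0
    hi = (a_hi + b_hi + carry) & MASK32
    return lo, hi

# 0x1_FFFF_FFFF + 0x1 = 0x2_0000_0000
lo, hi = add64_on_32bit(0xFFFFFFFF, 0x1, 0x00000001, 0x0)
print(hex(hi), hex(lo))  # 0x2 0x0
```

Multiplies and divides fan out even worse, which is why a native 64-bit ALU wins when you genuinely need wide integers.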
"Bitness" of a CPU almost always refers to memory addressing.
Now you could build a weird CPU that has "more memory" than it has addressable width (the 8086 is kind of like this with segmentation and 8/16 bit) but if your CPU is 64 bit you're likely not to use anything less than 64 bit math in general (though you can get some tricks with multiple adds of 32 bit numbers packed).
But a 32 bit CPU can do all sorts of things with larger numbers, it's just that moving them around may be more time-consuming. After all, that's basically what MMX and friends are.
The original 8087 implemented 80-bit operands in its stack.
It would also process binary-coded decimal integers, as well as floating point.
"The two came up with a revolutionary design with 64 bits of mantissa and 16 bits of exponent for the longest-format real number, with a stack architecture CPU and eight 80-bit stack registers, with a computationally rich instruction set."
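The 80-bit format is simple enough to decode by hand: a 64-bit significand with an explicit integer bit, a 15-bit exponent biased by 16383, and a sign bit (the quote's "16 bits of exponent" presumably lumps the sign bit in). A sketch, ignoring NaNs and infinities:

```python
def decode_x87_extended(b: bytes) -> float:
    """Decode a little-endian 10-byte x87 double-extended value:
    64-bit significand (with explicit integer bit), 15-bit exponent
    (bias 16383), 1 sign bit. NaN/infinity handling omitted for brevity."""
    assert len(b) == 10
    mantissa = int.from_bytes(b[:8], "little")
    sign_exp = int.from_bytes(b[8:], "little")
    sign = -1.0 if sign_exp & 0x8000 else 1.0
    exp = (sign_exp & 0x7FFF) - 16383
    return sign * (mantissa / (1 << 63)) * 2.0**exp

# 1.0: integer bit set, fraction bits clear, exponent field == bias
one = bytes([0, 0, 0, 0, 0, 0, 0, 0x80, 0xFF, 0x3F])
print(decode_x87_extended(one))  # 1.0
```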
Typically, it doesn't have the ability to deal with a full 64 bits of memory, but it does have the ability to deal with more than 32 bits of memory, and all pointers are 64 bits long for alignment reasons.
It's possible but rare for systems to have 64-bit GPRs but a 32-bit address space. Examples I can think of include the Nintendo 64 (MIPS; apparently commercial games rarely actually used the 64-bit instructions, so the console's name was pretty much a misnomer), some Apple Watch models (standard 64-bit ARM but with a compiler ABI that made pointers 32 bits to save memory), and the ill-fated x32 ABI on Linux (same thing but on x86-64).
That said, even "32-bit" CPUs usually have some kind of support for 64-bit floats (except for tiny embedded CPUs).
The 360 and PS3 also ran like the N64. On PowerPC, 32-bit mode on a 64-bit processor just enables a 32-bit mask on effective addresses. All of the rest is still there, like the upper halves of the GPRs and instructions like ld.
Parts of the 360 did. The hypervisor ran in 64-bit mode, and used multiple simultaneous mirrors of the physical address space with different security properties as part of its security model.
It's not like the games weren't running in 64 bit mode too (on both consoles)
They had full access to the 64 bit GPRs. There wasn't anything technically stopping game code from accessing the 64 bit address space by reinterpreting a 64 bit int as a pointer (except that nothing was mapped there).
It's only the pointers that were 32 bit, and that was nothing more than a compiler modification (like the linux x32 ABI).
They did it to minimise memory space/bandwidth. With only 512 MB of memory, it made zero sense to waste the full 8 bytes per pointer. The savings quickly add up for pointer heavy structures.
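The per-pointer savings are easy to put numbers on. For something like a binary tree node with two child pointers and a small payload (a hypothetical layout, assuming the usual padding to pointer alignment):

```python
def node_bytes(pointer_size: int, payload_bytes: int = 8, n_pointers: int = 2) -> int:
    """Size of a hypothetical tree node, padded up to pointer alignment."""
    raw = n_pointers * pointer_size + payload_bytes
    align = pointer_size
    return (raw + align - 1) // align * align

million = 1_000_000
print(node_bytes(4) * million)  # 16,000,000 bytes with 32-bit pointers
print(node_bytes(8) * million)  # 24,000,000 bytes with 64-bit pointers
```

A 50% bump per node is a lot when the whole console has 512 MB, and the fatter nodes also mean fewer of them per cache line.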
I remember this being a pain point for early PS3 homebrew. Stock gcc was missing the compiler modifications, and you had a choice between compiling 32-bit code (which couldn't use the 64-bit GPRs) or wasting bandwidth on 64-bit pointers (with a bunch of hacky adapter code for dealing with 32-bit pointers from Sony libraries).
The difference is that on PowerPC, 32bit mode on 64bit processors (clearing the SF bit in the MSR) is just enabling a hardware 32bit mask on the effective address before it gets translated into a virtual address.
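That masking can be modeled in a few lines: with SF clear, the top 32 bits of the effective address are simply zeroed before translation, while the GPRs themselves stay 64 bits wide (the register name and helper here are illustrative, not real PowerPC semantics beyond the mask):

```python
MASK32 = 0xFFFFFFFF

def effective_address(base_reg: int, offset: int, sf_bit: bool) -> int:
    """Model PowerPC EA calculation: the full 64-bit sum is formed,
    then masked down to 32 bits when SF (64-bit mode) is clear."""
    ea = (base_reg + offset) & 0xFFFFFFFFFFFFFFFF
    return ea if sf_bit else ea & MASK32

r3 = 0x0000_0001_8000_0000  # a 64-bit value sitting in a GPR
print(hex(effective_address(r3, 0x10, sf_bit=True)))   # 0x180000010
print(hex(effective_address(r3, 0x10, sf_bit=False)))  # 0x80000010
```

The point being: the mask is applied by the hardware per access, so there's no per-instruction way to opt into it from 64-bit mode the way x86-64 and arm64 allow.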
Unlike on x86-64 and arm64, there's no free (or even that cheap) way to do an ILP32 abi purely in software. x86 and arm allow encodings for memory reference instructions that only use the bottom half of the registers (the E* registers on x86, and the W* registers on arm64). No such encoding exists on PowerPC for memory reference instructions, so you'd be stuck manually masking each generated pointer.
Because of that, the compiler hacks you're talking about are kind of the opposite of what you're describing. The hacks exist because in the upstream gcc PowerPC backend, having 32-bit pointers in hardware and having operations on 64-bit quantities shared the same feature flag, despite technically being separately enableable on actual hardware; it was just very rare to do so. So the goal of the hacks was to describe to the compiler that the target has 32-bit hardware pointers but can still issue instructions like ld to operate on the full 64-bit GPRs.