Perhaps it would make you feel a little better to learn that RDP was created from Citrix’s ICA protocol during some cross-licensing between the two companies.
I worked on using hardware acceleration to replace parts of the ICA client application’s raster functionality for “thin client” devices a lifetime ago.
I’ve also run into problems with tools that aren’t worktree-aware so often that I’ve stopped using worktrees.
I’ve been using jujutsu for about 6 months now, and the only time I’ve reached back for git was when I had to rebase and amend someone else’s branch to get it merged (when they weren’t available to do so themselves of course).
Switching between changes in jujutsu has been a pleasant experience for me thus far, although I’m not as good with it as I was with stacked-git at keeping local-only changes (things I’m hacking to match my workflow / local setup) out of change sets.
The way it displays diffs is also still something I’m getting used to, and I’ve made plenty of mistakes when pulling in changes from trunk. That’s probably more a case of “old dog, new tricks” than jujutsu’s fault.
Yeah, after the first month of jj, I abandoned git forever, because it's already so much better. There are some hiccups, though.
I switched over to colocation for all repos, because too many things expect git directories to be where they expect.
I think the revset language is cool and powerful, but if I'm honest, it's tempting me to spend too much time trying to master it, when 99% of the time all I need is, "show me the nearby ancestors and descendants within k revisions".
I think the diffs need work. Or I need to get comfy with 3-way diffs. It's unfamiliar, and an obstacle to fixing conflicts. Luckily I get maybe 1/10th the conflicts I used to under git.
> I think the revset language is cool and powerful, but if I'm honest, it's tempting me to spend too much time trying to master it, when 99% of the time all I need is, "show me the nearby ancestors and descendants within k revisions".
I just spend enough time to write a new function for what I want to do, and then just know the basics for regular day to day stuff. I feel like that gets me really far.
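For example, this kind of thing can go in jj's config as a revset alias (a sketch, not jj's built-in config: `nearby` is a name I'm making up, and it assumes a jj version where `ancestors()`/`descendants()` take an optional depth argument):

```toml
# In ~/.jjconfig.toml (or .jj/repo/config.toml); "nearby" is a
# hypothetical alias name, not something jj ships with.
[revset-aliases]
'nearby(x, n)' = 'ancestors(x, n) | descendants(x, n)'
```

Then `jj log -r 'nearby(@, 3)'` shows just the revisions within 3 steps of the working copy, and you can forget the rest of the language until you actually need it.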
> I think the diffs need work. Or I need to get comfy with 3-way diffs. It's unfamiliar, and an obstacle to fixing conflicts.
You should get comfy, you won't regret it. I haven't got around to trying jj yet, but I use them in git; I frequently see people messing things up or just having a hard time resolving a conflict that they wouldn't if they used (and understood) diff3.
In brief: a regular 2-way diff shows you the current state and what you want to change it to, right? Well, 3-way just adds an extra bit of information (the middle): the common ancestor, i.e. the state both sides were changing *from*.
So say you have:
<<<<<<< HEAD
def wazzle(widget):
    try:
        widget.wazzle()
    except Exception:
        return False
    return True
|||||||
def wazzle(widget):
    widget.wazzle()
=======
def wazzle(widget):
    from wazzler import wazzle
    wazzle(widget)
>>>>>>> deadbeef (Abstract wazzle implementation to own package)
If you didn't have the middle, it might not be at all clear why you were getting the conflict, or what the appropriate fix is. It allows you to see: "ah, OK, master (or whatever I'm rebasing onto) has changed wazzle to return a bool indicating success or failure; that's fine, I was just changing the wazzle method to delegate to a library function instead."
Or you might have it that the same change is already in the HEAD part at the top, but there's a conflict because they put the import elsewhere. The middle then allows you to see that you were making essentially the same change, you don't care where the import goes (or like their idea better), you can just remove it and stick with the changes on HEAD.
My point is that it's strictly more information, which can only help or make it easier to resolve the conflict. It shouldn't be confusing at all, because the same 2-part thing you're accustomed to is there too.
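If you want to try it in git, the conflict style is one config setting (a quick sketch; the `zdiff3` variant additionally needs git 2.35+):

```shell
# Show the merge base ("the middle") inside conflict hunks:
git config --global merge.conflictStyle diff3

# Or, on git 2.35+, zdiff3 trims lines common to both sides
# out of the conflict markers, keeping hunks smaller:
# git config --global merge.conflictStyle zdiff3
```

After that, every conflicted file gets the three-part markers shown above instead of the usual two-part ones.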
Yes, but if you had, for example, 1 ETH in FTX when it went down, you're not getting 1 ETH back now; you're only getting the dollar value of that ETH in November '22, which was much lower (by more than 50%) than current prices.
It sounds like you just want Intellij Ultimate. Fleet is explicitly aiming to be VS Code like while Ultimate is just regular Intellij with access to most language plug-ins.
It was disappointing to see, as I am really tired of running multiple IDEs to work on mixed language projects.
mixed language projects are the __norm__, not the exception. I get most often bitten by this when I have both C++ and Python IntelliJ projects configured in one tree, and it starts complaining about project name clashes.
Complete waste of time, and I wish they would fix outstanding bugs, especially small cosmetic ones in the existing products, which have the lowest risk of regressions. Case in point: https://youtrack.jetbrains.com/issue/DBE-11296 ... which after three years elicited the response "We have no plans to implement this feature in the near future."
Sounds like building something simple that aids in tracking “coverage” of reviewing ToSes could be useful to increase the odds of spotting something untoward?
Iirc there was (is?) a site which gives a rating to the various license agreements of popular services and the like, so maybe it’s a solved problem?
Do you suppose the elastic block stores of the public cloud providers operate in a non-polling manner?
The incredibly low latency and high IOPS preclude it being based on software interrupts, I think, even in scenarios where SR-IOV is available to the hypervisor.
Yes, SPDK for us sits adjacent to the VM, however, so our use of it for now is best compared to an alternative to specialized PCIe hardware operating in pass-through mode directly to guests (e.g. AWS Nitro), with enough industry heft (in the case of AWS especially, and GCP more recently via a probably-Intel collaboration) to get guests to have drivers for those cards. You can see this in action in the Linux drivers for ena (AWS) and gve (Google).
This is how the usual suspects get past software interrupts adjacent to the VMM, on the "client" side of things. Getting the driver stack integrated into Linux is no small task and letting it percolate into distributions in common use is necessarily a slow process. SPDK permits getting decent performance and gradual deepening of our functionality while still relying on virtio, which has already percolated.
As I understand it, on the storage side, these cards tend to expose an NVMe interface which is somewhat generic, so you don't see the same kind of driver siloing there. There's a related bit of SPDK and hypervisor functionality, vfio-user (vs vhost-user), but we elected not to use it at this time. They both use a similar shared-memory transport.
Azure is an interesting outlier, a large one, in that their reliance on Mellanox (a subsidiary of nvidia) drivers is documented, so they could be considered in a partnership to achieve the same aim. So you could read the mlx drivers in the same fashion as ena and gve.
I've been watching the technology "vdpa" with some interest to have a shot to also provide pass-through PCIe devices to the guest that do not add such a driver dependency so far outside our ability to influence: Microsoft is going to have a bit more equal a relationship with nvidia than we would as it comes to problematic changes in the Mellanox drivers. But I suspect it'll be some years before that can possibly happen, if it happens at all. It's not easy to get a Connect-X 6 DX card, for example. So, there are many problems for the foreseeable future trying to get into hardware, though I'd like to avoid precluding it.
I liked this blog post in getting a feel for this, but in brief, they're computers plugged into computers: https://www.servethehome.com/zfs-without-a-server-using-the-.... Our alternative is to carve off a core or two instead of plugging a computer into the computer.
Maybe at some future time...I suspect, no sooner than five years from now, but probably, should it come to pass, quite a bit later...there will be some commonality and availability in such PCIe cards and Ubicloud or something like it could consider a tangible development theory around them.
Definitely not a concern for “most” folks, but I’ve seen sharding used to separate users into smaller pools which do not have shared fate wrt infra and deployments.
As an extra benefit, these pools don’t push the limits of vertical scaling, which makes them both easier and cheaper to test under high load conditions.
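The core of that kind of sharding is just a stable user-to-pool mapping, so that a bad deploy or infra failure in one pool never touches users in another. A minimal sketch (the function name and pool count are made up; any stable hash works):

```python
import hashlib

def shard_for(user_id: str, num_pools: int = 8) -> int:
    """Deterministically map a user to a pool.

    Uses a cryptographic hash so the assignment is stable across
    processes and hosts (unlike Python's builtin hash(), which is
    salted per-process).
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_pools
```

Deployments then roll out pool by pool, and load tests only need to exercise one pool's worth of traffic rather than the whole fleet.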