> ...our team needs visibility into how features are being used in practice. We use this data to prioritize our work and evaluate whether features are meeting real user needs.
You should be able to see which features are being used by looking at which server endpoints are being hit and how often. You don't need intrusive telemetry. Yes, it's not perfect; many features could use the same endpoint. But you could totally anonymise this and still get a great understanding of which features users are using from endpoint stats alone.
Companies need to decide whether they want customer goodwill or very detailed insight into what their customers are doing. By having invasive telemetry you may end up with fewer customers (people leaving for GitHub competitors). Is it worth it?
> It seems that things wouldn't work without a BIOS update: PyTorch was unable to find the GPU. This was easily done on the BIOS settings: it was able to connect to my Wifi network and download it automatically.
Call me traditional, but I find it a bit scary for my BIOS to be connecting to WiFi and doing the downloading itself. It makes me wonder whether the new BIOS blob would be secure, i.e. did the BIOS connect securely over HTTPS? Did it check the appropriate hash/signature, etc.? I would suppose all of this is more difficult to do in the BIOS. I would expect better security if this were done in user space in the OS.
I'd much prefer it if the OS did the actual downloading, with the BIOS just doing the installation of the update.
I have never seen a BIOS that didn't allow offline updates. However, SSL is much less processing than a WPA2 WiFi stack, so I would certainly expect this to be fully secure, and would boycott a manufacturer who failed at it. Conversely, being able to update your BIOS without worrying whether your OS is rooted is nice.
Isn't this pretty much standard in this day and age? HP for example also has this option in BIOS for their laptops (but you still can either download the BIOS blob manually in Linux or use the automatic updater in Windows if you want).
> Isn't this pretty much standard in this day and age?
If something is "standard" nowadays, does that mean it is the right way to go?
One of my main issues is that this means your BIOS has to have a WiFi software stack in it, a TLS stack in it, etc. That is basically millions of lines of extra code, most of it in a blob never to be seen by more than a few engineers.
Though in another way, allowing the BIOS to perform self-updates is good, because it doesn't matter whether you've installed FreeBSD, OpenBSD, Linux, Windows, or any other OS: you will be able to update your BIOS.
> It's true that the lack of multithreading in PHP has been a persistent pain.
No, actually it's a joy to have no multithreading. It keeps the complexity budget lower. You could do multithreading through a custom PHP module if you had a very specific need. Maybe my requirements were too simple, but I've never really felt the need. PHP's shared-nothing architecture really helps you get away with this.
Anyways as the parent comment said:
> but if you're building microservices where parallelization is handled by an external orchestrator, then you can design around that pretty effectively.
I feel like I'm on a different planet when I see this kind of comment.
What if you need to call multiple external APIs at once with complex JSON? Sure, you can call them one after another, but if each takes (say) 2s to return (not uncommon IME), then you are in real trouble with only one thread - even if it is just for one "request".
I guess I'm spoilt in .NET with Task.WhenAll which makes it trivial to do this kind of stuff.
> What if you need to call multiple external APIs at once with complex json?
A few years ago, I had a PHP project that had grown by accretion from taking a single complex input and triggering 2-3 external endpoints to eventually making calls to about 15 sequentially. Processing a single submission went from taking 5-10 seconds to over five minutes.
This was readily solved by moving to ReactPHP (https://reactphp.org/), which implements async via event loops. I was able to reduce the 15 sequential external API calls to four sequential loops (which was the minimum number due to path dependencies in the sequence of operations). That reduced the five minutes back to an average of 20-30 seconds for the complete process.
It wasn't using true multithreading, but in a situation where most of the time was just waiting for responses from remote servers, an event loop solution is usually more than sufficient.
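For the curious, the shape of it was roughly this - a minimal sketch using ReactPHP's async HTTP client (react/http) and promises (react/promise), with made-up endpoints rather than the real ones:

    <?php
    // Sketch only: three API calls issued concurrently on the event loop.
    require __DIR__ . '/vendor/autoload.php';

    use React\Http\Browser;
    use function React\Promise\all;

    $browser = new Browser();

    // Each get() returns a promise immediately, so all three requests are
    // in flight at once instead of running one after another.
    $promises = [
        'users'  => $browser->get('https://api.example.com/users'),
        'orders' => $browser->get('https://api.example.com/orders'),
        'stock'  => $browser->get('https://api.example.com/stock'),
    ];

    all($promises)->then(function (array $responses) {
        // Total wall time is roughly the slowest single call,
        // not the sum of all of them.
        foreach ($responses as $name => $response) {
            printf("%s: %d bytes\n", $name, strlen((string) $response->getBody()));
        }
    }, function (Throwable $e) {
        echo 'A request failed: ' . $e->getMessage() . "\n";
    });
    // Recent ReactPHP versions run the default event loop automatically
    // when the script ends, so no explicit $loop->run() is needed here.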
Yeah, though AFAIK these event loops still suffer from blocking on (e.g.) complex JSON parsing or anything CPU-driven (where real multithreading shines).
But regardless, I agree. I'm just saying that these kinds of patterns _are_ needed in any moderately complex system, and taking the view that "it's great not to even have it" in the core framework is really strange to me. Especially given that every machine I have these days has >10 CPU threads and it won't be long before 100+ is normal.
> Yeah, though AFAIK these event loops still suffer from blocking on (e.g.) complex JSON parsing or anything CPU-driven (where real multithreading shines).
This is only a problem if the JSON parsing is being done inside the event loop itself. The idea here is that you'd have a separate JSON-parser service that the code in the event loop passes the JSON into, then continues executing the other operations in the loop while it awaits the response from the JSON parser.
Just translate anything you'd spawn a parallel thread for into something you'd pass to a separate endpoint -- that's what I was referring to when I said that the poor multithreading can be easily worked around if you're achieving parallelization by orchestration of microservices.
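To make that concrete, here's a rough sketch of the idea with ReactPHP again - the parser-service URL is hypothetical, but the point is that the CPU-heavy parse becomes just another I/O wait from the event loop's perspective:

    <?php
    // Sketch: offload CPU-heavy JSON parsing to a separate service so it
    // never blocks the event loop. The parser endpoint below is hypothetical.
    require __DIR__ . '/vendor/autoload.php';

    use React\Http\Browser;
    use function React\Promise\all;

    $browser = new Browser();
    $rawJson = file_get_contents(__DIR__ . '/huge-payload.json');

    // Instead of json_decode()-ing a huge document inside the loop (which
    // would stall every other pending operation), post it to a worker
    // service and keep the loop free while waiting for the result.
    $parsed = $browser->post(
        'http://json-parser.internal/parse',           // hypothetical worker
        ['Content-Type' => 'application/json'],
        $rawJson
    );

    // Unrelated I/O continues concurrently while the worker does the parsing.
    $status = $browser->get('https://api.example.com/status');

    all(['parsed' => $parsed, 'status' => $status])->then(function (array $r) {
        echo "Parse result and status check both finished\n";
    });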
> No actually it's a joy to have no multithreading.
To build CPU-bound applications in PHP, you have to install a bunch of packages, rely on Redis, and try to approximate what Python or Go can do in a dozen lines of code. Can that really be enjoyable?
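For what it's worth, the usual workaround looks something like this - a sketch using the phpredis extension, with a made-up queue key and job payload. The producer runs inside the web request:

    <?php
    // producer.php - the web request just queues the CPU-heavy job and returns.
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);
    $redis->lPush('cpu_jobs', json_encode(['task' => 'resize', 'id' => 42]));
    echo "queued\n";

...and the worker is a separate long-running process (start one per core if you want actual parallelism):

    <?php
    // worker.php - pulls jobs off the queue and does the heavy lifting.
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);
    while (true) {
        // brPop blocks until a job arrives and returns [key, payload].
        [, $payload] = $redis->brPop(['cpu_jobs'], 0);
        $job = json_decode($payload, true);
        // ... do the CPU-bound work here ...
    }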
Will this strategy work every time? Maybe for AI it will work (the market is competitive and Apple just purchases the best model for its consumers).
But this approach may not work in other areas: e.g. building electric batteries, wireless modems, electric cars, solar cell technology, quantum computing etc.
Essentially Apple got lucky with AI, but it needs to keep investing in cutting-edge technology in the various broad areas it operates in and not let others get too far ahead!
It works often enough for the company to be wildly successful. They can simply cut their losses and withdraw from industries where it hasn't, such as EVs.
Their focus is investing in areas where they see something being a competitive differentiator, or where the market has failed to create a competitive environment.
They do not make their own screens because they can source screens from multiple sources and work with those manufacturers to create screens with the properties they want. Same thing with them relying on others for electric batteries - there are plenty of manufacturers to provide batteries to Apple's spec.
They created their own wireless modems because there's only one company they were able to purchase modems from, and those modems did not necessarily have the features Apple wanted.
Apple hasn't announced any interest in selling electric cars, solar cell technology, or quantum computing platforms. I wouldn't expect them to do so until they had a consumer product ready for sale. I doubt they are planning to come out with products in any of these categories soon.
I think their M chips are a good example. They ran on Intel for so long, then did the impossible of changing the Mac's architecture, even without much transition pain.
Obviously that was built upon years of iPhone experience, but it shows they can lag behind, buy from other vendors, and still win when it becomes worth it to them.
How is changing the architecture of a platform that only you make hardware for doing the impossible?
They could change the architecture again tonight, and start releasing new machines with it. The users will adopt because there is literally no other choice.
Every machine they release will be the fastest and most capable on the platform, because there is no other option.
Exactly this! Rosetta, plus the whole app developer community, who really quickly released builds for the M chips (voluntarily or forced, but it did happen).
I had the initial M1 Air, and it was remarkable how usable it was. You'd expect all sorts of friction and issues, but mostly things just worked (very fast). Even with some Rosetta overhead it was still fast compared to Intel Macs.
Rosetta 1 delivered 50-80% of the performance of native, during the PPC->Intel transition. It turns out, you can deliver not particularly impressive performance and still not ruin your app ecosystem, because developers have to either update to target your new platform, or leave your platform entirely.
You can also voluntarily cut off huge chunks of your own app ecosystem intentionally, by giving up 32bit support and requiring everything to be 64bit capable.
...because users have no other choice when only one vendor controls both the hardware and the software. They can either use the apps still available to them, or they can leave. And for users, the cost of leaving is a lot higher.
Yes. Apple put custom hardware support in the M series chips based on the needs of Rosetta 2. The x86_64 performance on Rosetta 2 was often higher at launch than the prior generation of Intel chips running those same binaries natively.
Microsoft and Qualcomm already knew the performance of x86 app emulation on Windows was killing the ARM machine lineup, so Qualcomm was already working on extensions to their chips and Microsoft on having Windows support them, but ARM64EC and Prism didn't launch until two years after the M1 shipped.
It's also notably not the first time they've switched. They did the Motorola (68k, I believe) architecture, then IBM PowerPC, then Intel x86 (for a single generation, then x86_64), and now the Apple M series.
They do the things they think they can do very well.
Why would they try to build electric batteries, wireless modems, electric cars, solar cells, or quantum computers, if their R&D hadn't already determined that they would likely be able to do so Very Well?
It's not like any of those are really in their primary lines of business anyway.
They (Apple) bought out Intel's wireless modem business and are using those modems instead of Qualcomm's chips. IIRC, they aren't best in class when it comes to raw throughput, but are quite good in terms of throughput vs. power consumption.
This article seems relevant to me for the following scenario:
- You have faulty software (e.g. games) that happens to have split locks
AND
- You have DISABLED split lock detection and "mitigation", which would have hugely penalised the thread in question (so the lock becomes painfully evident and the program is forced to be fixed).
AND
- You want to see which CPU does best in this scenario
In other words, you just assume the CPU will take the bus lock penalty and continue, WITHOUT the culprit thread being actively throttled by the OS.
In the normal case, IIUC, Linux should helpfully throttle the thread so the rest of the system is not affected by the bus lock. The assumption in this benchmark is that the thread will NOT be throttled by Linux, via the appropriate setting.
So to be honest I don't see the merit of this study. This study essentially measures how fast your interconnect is, i.e. whether it can survive bad software that is allowed to run untrammelled.
On aarch64 the thread would simply be killed. It's possible to do the same on modern AMD / Intel also OR simply throttle the thread so that it does not cause problems via bus locks that affect other threads -- none of these are done in this benchmark.
> So to be honest I don't see the merit of this study. This study essentially measures how fast your interconnect is, i.e. whether it can survive bad software that is allowed to run untrammelled.
It seems like a worthwhile study if you want to know what CPU to buy to play specific old games that use bus locks. Games that will never be fixed.
It seemed to me that the issue with the games was that they did split locks at all, and when Linux detected that and descheduled the process, performance was trash. I didn't think they were doing split locks frequently enough to cause bad performance on their own.
You don't need to be a careful shopper for this; just turn off detection while you're playing these games, or tune the punishment algorithm, or patch the game. Just because the developer won't doesn't mean you can't; there are plenty of third-party binary patches for games.
I'd like to know more about what it takes to turn on PCI passthrough for laptop hardware. On desktops and servers it's typically the IOMMU setting in the BIOS. Is that also commonly available on laptops?
> I would get a new laptop because a laptop without WiFi is useless.
You can run Linux in a VM and pass your WiFi adapter through with PCI passthrough. The Linux drivers will be able to connect to your WiFi card, and you can then supply internet to FreeBSD.
Doing this manually is complicated, but the whole process has been automated on FreeBSD by "Wifibox".
Is there a similar thing for GPUs? I want to build a workstation and have it work on FreeBSD, but would prefer to use an Intel Arc card, which has no information about FreeBSD compatibility online.
If you put too much emphasis on (invasive) analytics you might end up flying empty, i.e. without customers.