ensmallen also supports batch optimization (e.g., SGD), constrained problems, and even SDPs. The set of implemented optimizers is quite large too (40+ algorithms).
Whoa dude! That's quite some inference based on your singular experience. If you look at it differently, you could take what you saw as evidence that everyone out there is a lizard person!
I've never understood this style of comment. You say you agree and make a claim, but would you care to substantiate it or otherwise elaborate on your point of view? I'm not saying you're wrong; it's just that I can't actually engage with the comment, because it doesn't provide anything to engage with.
I have published with Elsevier and Springer. Every single time they proof my article it comes back as unreadable garbage. Sometimes a non-native English speaker has 'corrected' my grammar by adding incorrect articles everywhere, sometimes they fail to wrap formulas that are wider than the page, sometimes they make tables or figures so small they are entirely unreadable.
Elsevier, Springer, and others hire the worst bottom-dollar outsourced 'proofers' they can find, and every single time it takes so long to get across to them what they've done wrong, and to pore through the paper to find all the errors, that it would have been better for everyone if they had just told me how they wanted the paper and I had supplied the PDF!
Anyway I don't disagree with what you've written but I had to get the rant out. Has anyone NOT had this experience??
Indeed, their "typesetting" is mostly done by people with no subject expertise, who are more likely to introduce a mistake than to correct something that matters.
1. None of the BLAS implementations I know of use Strassen's algorithm.
2. This is such a Stack Overflow answer, please make it stop. Q: "How do I do a thing?" A: "I have unilaterally decided your question is invalid and you should not do a thing." That's really useful!
To the author: hack away. Goto can probably write better ASM but tutorials like this are very helpful for people who'd just like to read some interesting case studies.
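For readers curious why Strassen's algorithm keeps coming up in these discussions, here is a toy sketch in pure Python (nothing like how a real BLAS works, and all names here are my own): it replaces the 8 block multiplications of a 2x2 split with 7, giving O(n^2.807) asymptotically, at the cost of many extra additions and worse numerical behavior, which is part of why production BLAS libraries skip it. Matrices are square lists-of-lists with power-of-two dimensions to keep the recursion simple.

```python
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def naive_mul(A, B):
    # Textbook O(n^3) multiply, used as the recursion's base case.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def strassen(A, B, leaf=2):
    n = len(A)
    if n <= leaf:  # fall back to the naive multiply below a cutoff
        return naive_mul(A, B)
    h = n // 2
    # Split both operands into quadrants.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # Strassen's 7 products (instead of the usual 8).
    M1 = strassen(mat_add(A11, A22), mat_add(B11, B22), leaf)
    M2 = strassen(mat_add(A21, A22), B11, leaf)
    M3 = strassen(A11, mat_sub(B12, B22), leaf)
    M4 = strassen(A22, mat_sub(B21, B11), leaf)
    M5 = strassen(mat_add(A11, A12), B22, leaf)
    M6 = strassen(mat_sub(A21, A11), mat_add(B11, B12), leaf)
    M7 = strassen(mat_sub(A12, A22), mat_add(B21, B22), leaf)
    # Recombine the quadrants of the result.
    C11 = mat_add(mat_sub(mat_add(M1, M4), M5), M7)
    C12 = mat_add(M3, M5)
    C21 = mat_add(M2, M4)
    C22 = mat_add(mat_sub(mat_add(M1, M3), M2), M6)
    return ([c1 + c2 for c1, c2 in zip(C11, C12)] +
            [c1 + c2 for c1, c2 in zip(C21, C22)])
```

Count the extra `mat_add`/`mat_sub` calls above and you can see where the large constant factor comes from: the asymptotic win only pays off at sizes where cache behavior already dominates, which is why blocked naive multiplication wins in practice.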
But the stackoverflow answer is right, and accounts for the fact that in the majority of cases the person who is asking a confused question is indeed confused.
If what you are trying to do is get the best performance, make use of work that other people have already done, rather than wasting your time on a solved problem.
If you want to learn about how efficient matrix multiplication can be implemented, that is a different problem.
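To make the "use existing work" point concrete, here is a hedged sketch (my own code, not from the thread) comparing a hand-rolled triple loop with delegating to whatever optimized BLAS NumPy was built against; both produce the same result, but on any nontrivial size the library call wins by orders of magnitude because of cache blocking, vectorization, and threading that the loop leaves on the table.

```python
import numpy as np

def naive_matmul(A, B):
    """Textbook O(n^3) triple loop over plain Python lists.
    Correct, but gets none of the BLAS-level optimizations."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = A[i][k]
            for j in range(p):
                C[i][j] += aik * B[k][j]
    return C

rng = np.random.default_rng(0)
A = rng.random((64, 64))
B = rng.random((64, 64))

# Delegates to the BLAS NumPy ships with (OpenBLAS, MKL, ...).
C_blas = A @ B
C_naive = naive_matmul(A.tolist(), B.tolist())
```

Time the two on, say, 1000x1000 inputs and the gap is dramatic; and that gap is exactly what the second comment above is pointing at when it tells performance-seekers to use an existing library.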
The problem with Stackoverflow answers of the form "You asked about doing X, but it looks like you are trying to do X to achieve Y, and X is not a good way to achieve Y. You want to do Z. Here's how to do Z" is that later when people who actually do need to do X are looking for help, their searches keep turning up these damned answers that don't tell how to do X.
SO needs a flag on questions that is set if there is an answer to the actual question as asked and clear otherwise, and that can be used as a search filter.
By the way "make use of work that other people already did" might be nice for getting something built fast but it may not be the best thing.
Any issue that involves writing fast code involves tradeoffs. If you sit down to write something new you may have different views on the tradeoffs than whoever wrote "the fastest" one.
Life ain't so black and white. In fact even these 'best of field' products tend to be ugly inside and unoptimized in places. (source: I develop linear algebra libraries)
Bonus: for low level ASM math, every "solved" problem (which by the way it wasn't) becomes unsolved the second Intel or AMD or whoever releases a new chip or coprocessor.
This is something I'm interested in contributing to. Can you name a few libraries (especially ones implementing new and interesting work) that would welcome open source contributors? Alternatively you can just contact me (via my profile info) if you're working on something in particular but would rather not be identified publicly.
Stop with the clickbait: an "AI" is not running for mayor. At best, if we want to stick with the term "AI" we may say "someone has entered an AI in a Japanese mayoral race". This kind of misleading hype is how previous AI winters happened... (and I haven't even addressed the capabilities of this "AI").
Hey, a step in the right direction! Thank you, mods. Now, whether Michihito (the "AI") actually qualifies as 'artificially intelligent' is an entirely different debate. Presumably voters would actually be electing the two people pushing the effort and writing the code. But I guess you have to use the word AI if you want any votes...