JSON Schema validation is CPU-expensive in Node.js, so you don't want it in production for every request. We made a proxy [1] that only allows requests that conform to the Swagger/OpenAPI spec, which is easy to hook between frontend and backend in dev mode.
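Minimal sketch of what such a dev-mode proxy does before forwarding a request (assumptions: the toy, hand-rolled schema check below stands in for a real JSON Schema validator like Ajv, which is where the CPU cost comes from; `validateBody` is my own name, not part of any library):

```javascript
// Toy validator: check an incoming request body against a simplified
// schema (required fields + primitive types) before forwarding it.
// A real proxy would compile and run the full JSON Schema / OpenAPI spec.
function validateBody(schema, body) {
  const errors = [];
  for (const field of schema.required) {
    if (!(field in body)) errors.push(`missing required field: ${field}`);
  }
  for (const [field, type] of Object.entries(schema.properties)) {
    if (field in body && typeof body[field] !== type) {
      errors.push(`field ${field} should be ${type}`);
    }
  }
  return errors; // empty array => request may pass through to the backend
}

const schema = {
  required: ["id", "name"],
  properties: { id: "number", name: "string" },
};

console.log(validateBody(schema, { id: 1, name: "alice" })); // []
console.log(validateBody(schema, { id: "1" }));              // two errors
```

In dev mode the proxy rejects the request with the error list instead of forwarding it, so contract violations surface immediately without paying the validation cost in production.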
I wrote a master's thesis comparing several realtime database products, and I promise you Amplify is nowhere near "getting there". It's a crippled version of Firebase with no fix in sight. For example, you can't even do sort queries with DataStore, and Amplify doesn't work with Angular because the typings have been broken for 9 months with no fix. And the list goes on..
Neither of those issues surprises me, from what I've seen of the project.
I'm not familiar with the project's history, as I've only been using it for about two months, but it seems to me like there's been an increase in activity since the start of the year - the package modularization, new UI components, and new docs (which are still pretty bad, to be honest). I totally agree it's subpar compared to Firebase, and if you stray anywhere off the beaten path (i.e. you're not using React, seemingly (!!)) you're completely on your own, but I do think it has the potential to become a viable alternative.
My primary issue with the platform is the choice of a NoSQL database, which, in my view, just doesn't match the majority of application requirements - if you want to do any sort of text search, you have to spin up a whole sodding Elasticsearch domain, which is expensive as hell for a new product and takes literally 20-30 minutes every time it's updated with an `amplify push` command. I also waited over a week for one to be deleted not too long ago. That's why my plan is to replace the API with a Lambda function running Postgraphile, with some JWK logic so Cognito can be used for authN, while keeping the identity management and file storage stuff.
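For context, the "JWK logic" here roughly means: read the `kid` from the incoming JWT's header, then find the matching key in the user pool's published JWKS document before verifying the signature. A rough sketch under those assumptions (`decodeJwtHeader` and `findSigningKey` are my own names, not Amplify or Cognito APIs; the JWKS below is a fake document shaped like Cognito's, and real code would still verify the signature with the found key):

```javascript
// A JWT is three base64url segments: header.payload.signature.
// The header carries the key id ("kid") that signed the token.
function decodeJwtHeader(token) {
  const [headerB64] = token.split(".");
  const json = Buffer.from(headerB64, "base64url").toString("utf8");
  return JSON.parse(json);
}

// Look up the signing key in a JWKS document, e.g. the one Cognito
// serves at .../<poolId>/.well-known/jwks.json
function findSigningKey(jwks, token) {
  const { kid } = decodeJwtHeader(token);
  return jwks.keys.find((k) => k.kid === kid) || null;
}

// Fake JWKS document with one RSA key (values elided).
const jwks = { keys: [{ kid: "abc123", kty: "RSA", n: "...", e: "AQAB" }] };

// Build a fake token whose header names that key.
const header = Buffer.from(
  JSON.stringify({ alg: "RS256", kid: "abc123" })
).toString("base64url");
const token = `${header}.payload.signature`;

console.log(findSigningKey(jwks, token).kid); // "abc123"
```

With the key in hand, the Lambda can verify the RS256 signature and then hand the validated claims to Postgraphile for authorization.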
I wrote the beginnings of a Gun implementation in Go based on my reverse engineering of the Gun JS code: https://github.com/cretz/esgopeta. I halted development as I am no longer needing it, but the code might give some insight (assuming it's even accurate, never made it to significant testing).
Old school JavaScript like this has an advantage: you can ship it as-is, without needing to obfuscate it under the excuse of performance.
If you load two hundred build dependencies and then write twenty-word-long variable names, it might look nice for you during development, but a user trying to debug or validate the code on the fly would be in a very bad position, since you'd have to 'minify' (i.e. obfuscate) it to avoid shipping a 70 MB production file.
Code like this is very readable if you know a few conventions, like `cb` being a callback. It's not even close to the cryptic knowledge needed to read assembly on weird platforms, for example. And a far cry from obfuscated.
> without the need to obfuscate under the excuse of performance
> load two hundred build dependencies and then write twenty word long variables [...] a very bad position as you would have to 'minify' (i.e. obfuscate) to not have a 70mb production file
You defeated your own argument here. You're saying it's nice that the code is already obfuscated (short variables) so you get better performance - it's really the same thing, except doing it by hand has a lot of downsides.
Minifiers and source maps have been around for a long time and give you the best of both worlds: understandable code in development, minified code in production (even though gzip alone gives you 80-90% of the gains). There is absolutely no reason to write obfuscated code like this [1], where you have to guess the meaning of thirty different one-letter variables. Grepping the code becomes impossible. This has nothing to do with 'old school' JS; it's about universal code standards.
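To make the point concrete, here's a hedged illustration (my own toy example, not from the code in question) of the same function written readably vs. hand-minified - a minifier plus a source map gives you the second form in production while you keep developing and debugging against the first:

```javascript
// Readable version: names document intent, easy to grep for.
function movingAverage(values, windowSize) {
  const result = [];
  for (let i = 0; i + windowSize <= values.length; i++) {
    const window = values.slice(i, i + windowSize);
    result.push(window.reduce((sum, v) => sum + v, 0) / windowSize);
  }
  return result;
}

// What a minifier (or hand-shortening) turns it into: same behavior,
// one-letter names, nothing left to grep for.
function m(a, w) {
  const r = [];
  for (let i = 0; i + w <= a.length; i++)
    r.push(a.slice(i, i + w).reduce((s, v) => s + v, 0) / w);
  return r;
}

console.log(movingAverage([1, 2, 3, 4], 2)); // [1.5, 2.5, 3.5]
console.log(m([1, 2, 3, 4], 2));             // [1.5, 2.5, 3.5]
```

The two are behaviorally identical, which is exactly why there's no reason for a human to ever write or maintain the second form by hand.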
There is a difference between the common meaning of parsing (text -> (AST | error)) and the generalized meaning Alexis uses in the post (less structured -> (more structured | error)).
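A quick sketch of the generalized meaning (my own toy example, not from the post): any function that takes a less structured input and returns a more structured output - or an error - counts as a "parser", even if no text or AST is involved:

```javascript
// string -> (non-negative integer | error)
// The output carries more structure than the input: callers downstream
// no longer need to re-check that it's a valid age.
function parseAge(input) {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 0) {
    throw new Error(`not a valid age: ${input}`);
  }
  return n;
}

console.log(parseAge("42")); // 42
// parseAge("forty-two")     // throws
```

The key property in both meanings is the same: the error path is handled once, at the boundary, and everything after it can rely on the stronger type.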
[1] https://github.com/EXXETA/openapi-cop