
JSON Schema validation is CPU-expensive in Node.js, so you do not want it running in production for every request. We made a proxy [1] that only allows requests conforming to the Swagger/OpenAPI JSON, and it is easy to hook between frontend and backend in dev mode.
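A minimal sketch of the underlying idea (the names here are illustrative, not openapi-cop's actual API): gate the schema check on the environment, so the per-request CPU cost of validation never hits the production hot path.

```javascript
// Toy, hand-rolled checker for a flat schema of the form { field: 'type' }.
// A real setup would use a full JSON Schema validator; this only shows
// the dev-only gating pattern.
function conformsTo(schema, body) {
  return Object.entries(schema).every(
    ([key, type]) => typeof body[key] === type
  );
}

function makeValidator(schema) {
  if (process.env.NODE_ENV === 'production') {
    return () => true; // skip validation entirely in production
  }
  return (body) => conformsTo(schema, body);
}

const validate = makeValidator({ name: 'string', age: 'number' });
console.log(validate({ name: 'Ada', age: 36 }));    // true (outside production)
console.log(validate({ name: 'Ada', age: 'old' })); // false (outside production)
```

The proxy approach takes this one step further: the check runs in a separate process, so even in dev mode the backend itself pays no validation cost.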

[1] https://github.com/EXXETA/openapi-cop


Can Supabase be used with an offline-first approach like Horizon, RxDB, etc., or does it require an ongoing connection to the server?


I wrote a master's thesis in which I compared several realtime database products. I promise you Amplify is nowhere near "getting there". It is a crippled version of Firebase, and there is no fix in sight. For example, you cannot even do sort queries with DataStore, and Amplify does not work with Angular because the typings have been broken for nine months with no fix. And the list goes on..


Neither of those issues surprises me, from what I've seen of the project.

I'm not familiar with the project's history, as I've only been using it for about 2 months, but it seems to me there's been an increase in activity since the start of the year - like the package modularization, new UI components, and new docs (which are still pretty bad, to be honest). I totally agree it's sub-par compared to Firebase, and if you stray anywhere off the beaten path you're completely on your own (i.e. not using React, seemingly (!!)), but I do think it has the potential to become a viable alternative.

My primary issue with the platform is the choice of a NoSQL database, which, in my view, just doesn't match the majority of application requirements - if you want to do any sort of text search, you have to spin up a whole sodding Elasticsearch domain, which is expensive as hell for a new product and takes literally 20-30 minutes every time it's updated with an `amplify push` command. I also waited over a week for one to be deleted not too long ago. That's why my plan is to replace the API with a Lambda function running Postgraphile, with some JWK logic to use Cognito for authN, while keeping the IdM and file storage parts.
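The authN step described there might look roughly like this (a hypothetical sketch: the issuer URL and helper names are made up, and the RS256 signature check against Cognito's JWKS is omitted - a real Lambda would do that with a JWT library before trusting any claims):

```javascript
// Decode a JWT's payload segment (header.payload.signature, base64url).
// NOTE: decoding alone proves nothing; the signature must be verified
// against the Cognito user pool's JWKS before these claims are trusted.
function decodeJwtPayload(token) {
  const payloadB64 = token.split('.')[1];
  return JSON.parse(Buffer.from(payloadB64, 'base64url').toString('utf8'));
}

// Claim checks a Cognito-issued token should pass after signature
// verification: correct issuer, correct app client, not expired.
function checkClaims(payload, { issuer, clientId, now = Date.now() / 1000 }) {
  return (
    payload.iss === issuer &&
    payload.aud === clientId &&
    payload.exp > now
  );
}

// Illustration only: a hand-built (unsigned) token.
const claims = {
  iss: 'https://cognito-idp.eu-west-1.amazonaws.com/pool', // illustrative
  aud: 'my-app-client',
  exp: 9999999999,
};
const token =
  'hdr.' + Buffer.from(JSON.stringify(claims)).toString('base64url') + '.sig';
console.log(checkClaims(decodeJwtPayload(token), {
  issuer: 'https://cognito-idp.eu-west-1.amazonaws.com/pool',
  clientId: 'my-app-client',
})); // true
```

On success, the Lambda would map the verified claims onto a Postgres role/settings for Postgraphile; on failure it returns 401 before the query ever runs.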


Would love to read the thesis if you can share!


Same here. Is it maybe available from a university website?


I do not understand this post.


Me neither. I _do_ test my JavaScript knowledge every time I look at a .js file, lol


I'm willing to give more input and ideas once the first questions show up.

Some hints: enhancing ES8 with a config system, state management (bulk config updates), extending classes & especially configs.


Another one is that the source code of OrbitDB is not completely obfuscated.


Can you elaborate? Source code to GUN is right here: https://github.com/amark/gun/tree/master/src


Yes, sorry for not doing this. In the source code folder, open any random file and try to understand what the code does.

I once tried to fix a bug I found in GUN, but I gave up while just trying to figure out what the code was supposed to do.


Wow. You are right. What the heck.


I wrote the beginnings of a Gun implementation in Go based on my reverse engineering of the Gun JS code: https://github.com/cretz/esgopeta. I halted development as I no longer need it, but the code might give some insight (assuming it's even accurate; it never made it to significant testing).


Indeed, the source looks like the output of a transpiler.


That's not an issue with the source code linked above, which is just basic JavaScript.


That is very readable JavaScript.

Old-school JavaScript like this has an advantage: you can ship it as is, without the need to obfuscate under the excuse of performance.

If you load two hundred build dependencies and then write twenty-word-long variable names, it might look nice to you during development, but a user trying to debug or validate code on the fly would be in a very bad position, as you would have to 'minify' (i.e. obfuscate) to avoid a 70 MB production file.

Code like this is very readable if you know some conventions, such as `cb` being a callback. It's not even close to the cryptic knowledge needed to read assembly on weird platforms, for example. And a far cry from obfuscated.
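For anyone unfamiliar with the conventions being referred to, here is a tiny hypothetical snippet (not taken from GUN) in that terse old-school style:

```javascript
// Terse old-school Node style: short conventional names.
// cb = callback, err = error, u = a deliberately undefined sentinel.
var u;

function get(key, store, cb) {
  var val = store[key];
  if (val === u) { return cb('not found'); } // error-first callback
  cb(u, val); // no error; pass the value through
}

get('a', { a: 1 }, function (err, val) {
  console.log(err, val); // undefined 1
});
```

Once you know `cb`, `err`, and the error-first convention, code like this reads quickly; without that context it looks cryptic.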


> without the need to obfuscate under the excuse of performance

> load two hundred build dependencies and then write twenty word long variables [...] a very bad position as you would have to 'minify' (i.e. obfuscate) to not have a 70mb production file

You defeated your own argument here. You're saying it's nice that the code is already obfuscated (short variables) so you can get better performance - it's really the same thing, except that doing it by hand has a lot of downsides.

Minifiers and source maps have been around for a long time and get you the best of both worlds: understandable code in development, minified code in production (even though gzip alone gives you 80-90% of the gains). There is absolutely no reason to write obfuscated code like this [1], where you have to guess the meaning of thirty different one-letter variables. Grepping the code becomes impossible. This has nothing to do with 'old school' JS; it's about universal code standards.

[1] https://github.com/amark/gun/blob/master/src/type.js


No, this is not readable. You can write readable old-school JavaScript, but just try to tell me what this line does: https://github.com/amark/gun/blob/master/src/type.js#L111


Have you tried reading it?

(Not sure "obfuscated" is the word I'd use, though.)


Check out the readme. There are many errors that can only be caught by validation and not by parsing.


There is a difference between the common meaning of parsing (text -> (AST | error)) and the generalized meaning that Alexis uses in the post (less structured -> (more structured | error)).
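The distinction can be sketched in plain JavaScript (names are illustrative; the post's own examples are in Haskell):

```javascript
// Validation: answers "is this ok?" but hands back nothing new -- the
// caller still holds the original, loosely structured data afterwards.
function isNonEmptyArray(xs) {
  return Array.isArray(xs) && xs.length > 0;
}

// Parsing in the generalized sense: transforms less-structured input into
// a more-structured value, or fails. The refined shape itself carries the
// guarantee (a head element always exists), so it never has to be
// re-checked downstream.
function parseNonEmpty(xs) {
  if (!Array.isArray(xs) || xs.length === 0) {
    return { error: 'expected a non-empty array' };
  }
  return { head: xs[0], tail: xs.slice(1) };
}

console.log(isNonEmptyArray([1, 2, 3])); // true
console.log(parseNonEmpty([1, 2, 3]));   // { head: 1, tail: [ 2, 3 ] }
console.log(parseNonEmpty([]));          // { error: 'expected a non-empty array' }
```

No text is involved in the second function, yet it fits the post's definition of parsing, which is exactly the generalization being pointed out.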

