I love Kysely! I use it for a couple of production applications and it works quite well. Type support is better than Knex's, and Kysely is really lightweight, so I can use it in my projects without being concerned about performance issues.
Really, when you look at options like this, you start to break them down into 3 distinct categories:
1. Raw adapter usage - writing SQL. Performant, but can get tedious at scale, and weird to add types to.
2. Knex/Kysely, lightweight query builders. Readable, composable, and support types well, but a step removed from the performance of (1). Some would argue (1) is more universally understandable, too, although query builders make things easier for those more familiar with programming languages than query languages.
3. Full-fledged ORMs like TypeORM, Sequelize, Prisma. Often require much more ecosystem buy-in and come with their own performance issues and problems.
I usually choose (1) or (2), depending on the project, to keep things simple.
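To make the contrast between (1) and (2) concrete, here's a toy sketch in TypeScript (not Kysely's or Knex's actual API) of the same query written in both styles:

```typescript
// Style (1): raw SQL. Explicit and fast, but the result type
// has to be asserted by hand, and composition means string surgery.
const rawQuery = {
  sql: "select id, email from users where active = $1",
  params: [true],
};

// Style (2): a minimal composable builder that tracks the SQL it will emit.
interface Query {
  table: string;
  columns: string[];
  conditions: string[];
  params: unknown[];
}

function selectFrom(table: string): Query {
  return { table, columns: [], conditions: [], params: [] };
}

function select(q: Query, ...columns: string[]): Query {
  return { ...q, columns: [...q.columns, ...columns] };
}

function where(q: Query, column: string, value: unknown): Query {
  return {
    ...q,
    conditions: [...q.conditions, `${column} = $${q.params.length + 1}`],
    params: [...q.params, value],
  };
}

function compile(q: Query): { sql: string; params: unknown[] } {
  const cols = q.columns.length ? q.columns.join(", ") : "*";
  const whereClause = q.conditions.length
    ? ` where ${q.conditions.join(" and ")}`
    : "";
  return { sql: `select ${cols} from ${q.table}${whereClause}`, params: q.params };
}

const built = compile(where(select(selectFrom("users"), "id", "email"), "active", true));
console.log(built.sql); // select id, email from users where active = $1
```

Both styles end up at the same parameterized SQL; the difference is whether the composition happens in your head or in the builder.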
I have pretty much had no issues with it so far. The only thing I would call out is that you _must_ run a migration initially to set things up, or your queries will hang. This has stumped me a few times (despite being obvious after the fact). It also interfaces really well with Postgres and has nice support for certain features (like jsonb).
You might also want to consider pgTyped (https://github.com/adelsz/pgtyped). It's supposed to make SQL and TS work together. I haven't gotten around to using it yet but I hear good things.
Can weigh in here: we use pgtyped heavily at work and it's really good. There are some constraints with the type inference on complex queries, but it's a decent trade-off IMO.
Do you have a bit more information about those constraints?
I'm guessing dynamic queries would of course not be suitable, but are there any other edge cases you've encountered?
I mostly work on the Python side of things, and SQLAlchemy is the de facto ORM there. I hate it. It's heavy and opinionated. The happy path seems to be doing things in sessions, where joins, foreign keys, and the like are evaluated in the app instead of at the DB level (you can define relationships in models for the ORM to understand without defining explicit foreign-key relationships at the DB table level, wtf? Thereby doing joins based on common columns…). And yet I can't fault ORMs for their ability to get you going fast and help you iterate very quickly.
I recently had the chance to start a new side project, and trying to go the pure-SQL route, which I love, was so slow in terms of productivity. When I could just model the tables via SQLAlchemy, I got to the meat of the thing I was making much quicker. What I didn't like was all the additional cognitive overhead, but I gained DB-agnostic ways of querying my data: I could use, say, SQLite for testing and Postgres for staging or production just by changing a config, whereas if I write pure SQL I might run into dialect differences that take that flexibility away.
In the end I am very conflicted. Oh, some context: I began my professional life as a DBA, and my first love was SQL. I like writing queries, optimizing them, and knowing exactly what they're doing.
We use a variant of (1) in prod [0]. Why? Because:
* it's SQL
* it's extremely lightweight (built on pure, functional combinators)
* it allows more complex patterns, e.g. a convention where every JSON field's name ends with `Json` and is parsed automatically, which (unlike the datatype alone) lets us write composable queries that fetch arbitrarily nested graphs; and promoting a single `[$]` key, e.g. returning a list of emails as `string[]` rather than `{ email: string }[]` with `select email as [$] from Users`
* it has convenience combinators for things like constructing WHERE clauses from MongoDB-like queries
* all the usual queries (CRUD, exists, etc.), plus some that are more complex SQL-wise but simple function-API-wise (insertIgnore for a list of objects, merge1n, upsert, etc.), have convenient function APIs and compose into whatever else the project needs
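The MongoDB-like WHERE-clause trick can be sketched roughly like this (a hypothetical toy, not the actual API of the library linked above):

```typescript
// Hedged sketch: turn a MongoDB-like filter object into a
// parameterized WHERE clause plus a matching parameter list.
type Ops = { $gt?: unknown; $lt?: unknown; $in?: unknown[] };
type Filter = Record<string, unknown | Ops>;

function toWhere(filter: Filter): { clause: string; params: unknown[] } {
  const parts: string[] = [];
  const params: unknown[] = [];
  // Push a value and return its positional placeholder ($1, $2, ...).
  const bind = (v: unknown) => {
    params.push(v);
    return `$${params.length}`;
  };

  for (const [column, cond] of Object.entries(filter)) {
    if (cond !== null && typeof cond === "object" && !Array.isArray(cond)) {
      const ops = cond as Ops;
      if (ops.$gt !== undefined) parts.push(`${column} > ${bind(ops.$gt)}`);
      if (ops.$lt !== undefined) parts.push(`${column} < ${bind(ops.$lt)}`);
      if (ops.$in) parts.push(`${column} in (${ops.$in.map((v) => bind(v)).join(", ")})`);
    } else {
      parts.push(`${column} = ${bind(cond)}`);
    }
  }
  return { clause: parts.join(" and "), params };
}

const w = toWhere({ status: "active", age: { $gt: 21 } });
console.log(w.clause); // status = $1 and age > $2
console.log(w.params); // [ 'active', 21 ]
```

The appeal is that the filter shape is plain data, so it can be built up programmatically and still compile to parameterized SQL.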
We resort to runtime type assertions [1], which work well for this and all other I/O; runtime type assertions are necessary for cases where your running service is attached to an old or future remote schema/API (there are other protections against that, but it still happens).
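A minimal sketch of what such a runtime assertion can look like (a hypothetical helper, not the actual library referenced above): validate each row at the I/O boundary so schema drift fails loudly instead of corrupting state downstream.

```typescript
// Expected row shape for a query result.
interface UserRow {
  id: number;
  email: string;
}

// Runtime assertion: narrows unknown driver output to UserRow or throws.
function assertUserRow(value: unknown): UserRow {
  const row = value as Record<string, unknown>;
  if (typeof row?.id !== "number" || typeof row?.email !== "string") {
    throw new TypeError(`unexpected row shape: ${JSON.stringify(value)}`);
  }
  return { id: row.id, email: row.email };
}

// Rows from the driver arrive as unknown; assert before use.
const rows: unknown[] = [{ id: 1, email: "a@example.com" }];
const users: UserRow[] = rows.map(assertUserRow);
console.log(users[0].email); // a@example.com
```

The same check catches both a stale service talking to a migrated database and a newer remote API returning fields you don't know about yet.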
I would prefer (1) if it had full type safety. I'm imagining some sort of build process that looks for .sql files in the project and spits out .sql.d.ts files in the same directory, with the input and output types derived from the current schema of the database. Another nice thing about a setup like this, IMO, is that the .sql files would have the editor's full support for completions and execution, unlike SQL fragments in a TypeScript file.
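A toy sketch of what that build step might do, with a hardcoded schema and a deliberately naive parser (a real tool would introspect the live database and handle full SQL):

```typescript
// Hardcoded stand-in for introspected database schema:
// table name -> column name -> TypeScript type.
const schema: Record<string, Record<string, string>> = {
  users: { id: "number", email: "string", created_at: "Date" },
};

// Very naive "parser": handles only `select a, b from t`.
// Emits the .sql.d.ts text for the query's result rows.
function generateDts(sql: string): string {
  const m = sql.match(/^select\s+(.+?)\s+from\s+(\w+)/i);
  if (!m) throw new Error(`unsupported query: ${sql}`);
  const [, cols, table] = m;
  const tableSchema = schema[table];
  if (!tableSchema) throw new Error(`unknown table: ${table}`);
  const fields = cols
    .split(",")
    .map((c) => c.trim())
    .map((c) => `  ${c}: ${tableSchema[c] ?? "unknown"};`)
    .join("\n");
  return `export interface Row {\n${fields}\n}\n`;
}

console.log(generateDts("select id, email from users"));
// export interface Row {
//   id: number;
//   email: string;
// }
```

Run over every `.sql` file on a watch loop, this would give raw SQL the same editor-level type feedback a query builder provides.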
F# can do this, either as part of the compilation via type providers [1] or, in a more lightweight manner, via Roslyn analyzers embedded in the editor [2].
Unsurprisingly, Haskell can also do this via Template Haskell [3], but I haven't used it.
On a similar note, there's also sqlx for Rust [1], and I find it extra impressive that they manage to support not only Postgres but SQLite and MySQL too.
I made a tool that generates TypeScript types from a live Postgres database. I've had a request for Kysely support ([link redacted]), but I more or less forgot about it. I would love to hear if that would be helpful.
I had this issue as well. I found an outstanding issue [0] on GitHub. Running `npm install -g npm` will update your npm to the absolute latest version, which resolves this erroneous removal.