Hacker News | andrewingram's comments

There's a _big_ continuum between disagreeing over something and an ethical hard line; it feels like a slippery slope to interpret a suggested approach for one end of that continuum as advocacy for applying the same approach to the other end.

I used something similar for my Super Mario Maker 2 level viewer (eg https://www.smm2-viewer.com/courses/1HH-CJ8-KYF)

In the level data, the start/goal tiles (approx. 10x10 blocks of solid ground under the player and goal) and slopes aren't represented by individual tile offsets, whilst all other "ground" tiles are. Instead, for slopes you're only given the start and end coordinates, and they often overlap.

So to render the slopes correctly I had to work out all the rules for which tiles were allowed next to each other, and resolve some ambiguities (for instance, shallow slopes take precedence over steep ones). Eventually I cracked it, but it took a week or so of iteration.
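Roughly, the precedence resolution looks like this (a simplified Python sketch; the function names and tile-expansion details are illustrative, not the actual SMM2 format):

```python
# Hypothetical sketch of resolving overlapping slopes, assuming each slope
# is given only as (start, end) coordinates. The real SMM2 rules involve
# many more tile-adjacency constraints; this shows only the precedence idea.

def slope_tiles(start, end):
    """Expand a slope's (start, end) coords into the set of tiles it covers."""
    (x0, y0), (x1, y1) = start, end
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy)) or 1  # guard against zero-length slopes
    return {
        (x0 + round(i * dx / steps), y0 + round(i * dy / steps))
        for i in range(steps + 1)
    }

def steepness(start, end):
    """Rise over run; a 1:1 slope is steeper than a 1:2 (shallow) one."""
    (x0, y0), (x1, y1) = start, end
    return abs(y1 - y0) / max(abs(x1 - x0), 1)

def resolve(slopes):
    """Assign each tile to exactly one slope, shallow slopes winning overlaps."""
    owner = {}
    # Paint steepest slopes first so shallow ones overwrite them on overlap.
    for start, end in sorted(slopes, key=lambda s: -steepness(*s)):
        for tile in slope_tiles(start, end):
            owner[tile] = (start, end)
    return owner
```

With a 1:1 slope and a 1:2 slope sharing a tile, the 1:2 (shallow) slope ends up owning it.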


I have a few parallel AI-authored side projects on the go that have quite different shapes, and I feel quite differently about each.

1. A survival horde game (like Vampire Survivors and Brotato). At the moment it's very primitive, very derivative (no new ideas) and not much fun. I have no sense of pride in it, but it is much further along than it would be if I'd been writing it from scratch. I expect once I invest in the fun side (gameplay innovations, graphics) I'll feel a greater sense of attachment, and I plan to do all the art assets myself.

2. A macOS web app for managing dev-env processes; it works, but it's ugly. I don't have confidence in AI making a remotely presentable UI, so I'll be doing that part myself.

3. A useful little utility library. The kind of thing that, pre-LLM, would've been too far outside my expertise for me to be motivated to try making. I'm steering the design of it quite heavily, but haven't written any code. It seems like it's already capable of doing very useful things, and I oddly feel quite proud of it. But I have a weird sense of unease in that I _think_ it's good, but I don't _know_ it's good.

I think the main thing I'm learning is to make sure there's always something of yourself in whatever you produce with the help of AI, especially if you want to feel a sense of accomplishment. And make sure you have a good testing philosophy if you're planning to be hands-off with the code itself.


I did a slightly less ambitious prototype a few weeks ago where I added lazy loading of GCS files into the just-bash file system, as well as lots of other on-demand files. It was a lot of fun.
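The lazy-loading pattern is roughly this (a generic Python sketch, not just-bash's actual API; `fetch_blob` is a stand-in for a real GCS client call):

```python
# Sketch of a lazily-materialised virtual file: the remote blob is only
# fetched on first read, then cached in memory. `fetch_blob` stands in
# for a real GCS download; this is not just-bash's actual API.

class LazyFile:
    def __init__(self, path, fetch_blob):
        self.path = path
        self._fetch = fetch_blob
        self._data = None  # nothing downloaded yet

    def read(self):
        if self._data is None:        # first access: hit the network
            self._data = self._fetch(self.path)
        return self._data             # subsequent reads are local

class LazyFS:
    def __init__(self, fetch_blob):
        self._fetch = fetch_blob
        self._files = {}

    def open(self, path):
        # Create the entry on demand; no download happens until read().
        if path not in self._files:
            self._files[path] = LazyFile(path, self._fetch)
        return self._files[path]
```

The point is that listing or opening a path is free; the blob is only transferred when something actually reads it, and at most once.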


Yeah, (optional) caching is interesting to think about, including write-through and write-back policies.


just-bash comes with Python installed, so in a way that's what this has done. I've used this for some prototypes with AI tools (via bash-tool), can't really productionise it in our current setup, but it worked very well and was undeniably pretty cool.


Yeah, whilst Git was more popular than Mercurial, I still think Mercurial would have won if Bitbucket had had a better UI.

It's interesting to me that the one thing that made me vastly prefer GitHub over Bitbucket is that GitHub prioritised showing the readme over showing the source tree. Such a little thing, but it made all the difference.


Ambiguity increasingly feels like the crux of estimation. By that I mean the extent to which you have a clear idea of what needs to be done before you start the work.

I do a lot of fussy UI finesse work; on the surface these are small changes, so people are tempted to give them small estimates. But they often take a while, because you're really learning what needs to be done as you're doing it.

On the other end of the spectrum I’ve seen tickets that are very large in terms of the magnitude of the change, but very well specified and understood — so don’t actually take that long (the biggest bottleneck seems to be the need to break down the work into reviewable units).

In the LLM age, I think the ambiguity angle is going to be much more apparent, as the raw size of the change becomes even less of an input into how long it takes.


I mean, the use of GraphQL for third-party APIs has always been questionable wisdom. I'm about as big a GraphQL fan as it gets, but I've always come down on the side of being very skeptical that it's suitable for anything beyond its primary use case — serving the needs of first-party UI clients.


Strongly agreed.


It's no secret that RSC was at least partially an attempt to get close to what Relay offers, but without requiring you to adopt GraphQL.


It is still a major problem, yes. Interestingly, if you go back to the talks that introduced GraphQL, much of the motivation wasn't about solving overfetching (they kinda assumed you were already doing that, because it was the peak of the mobile-app wave), but about solving the organisational and technical issues with existing solutions.

