
Dang, fun memories of when I was first getting into geo/data stuff and doing a lot of web mapping with D3, Leaflet, and friends. Seems like tools such as vector tiles/PMTiles have supplanted TopoJSON for a lot of visualization-oriented use cases.

I'm gonna have to dive into a rabbit hole! I was working on an ESRI Shapefile to GeoJSON converter back in those days. But D3 and Leaflet were such cool tech! Mapbox too. Linking SAGA GIS with PostGIS to do pre/post-wildfire analysis was my jam.

What has Meta ever done that would instill trust in you? From the very article you cited:

> The best thing you can do to preserve your privacy and security with your Meta messages is to use end-to-end encryption (E2EE) whenever possible. WhatsApp has E2EE built-in, and Meta has automatically started rolling it out for Messenger, but you might need to manually start an E2EE chat for existing conversations in the app. The same goes for Instagram: Meta offers E2EE, but you need to enable it yourself. In either app, tap the name of the chat to check whether or not that conversation is currently E2EE.


I didn't say that I trust Meta. My point was that saying they're doing it so they can read your messages just shows that the people commenting don't know how E2EE works, or that it's still not a 100% secure way of communicating, just a more secure one. Once either end is compromised, it's game over.

I really don't understand the point of the quote you're citing, or how it goes against what I was saying.

The best thing you can do would be to use E2EE. That would be the most secure option. It won't, however, prevent the makers of your E2EE product from reading the messages once they're decrypted, regardless of who makes it.


Just set up Pi after listening to Mario's talk at AIE Europe [0], and my initial impressions are solid! Especially on limited hardware like a MacBook Air, it seems a lot more resource-efficient.

[0] https://www.youtube.com/live/_zdroS0Hc74?t=3633s


The Stepchange show went fairly deep on this topic in their first episode (I listened to it recently). https://www.stepchange.show/coal-part-i


The DB seems like the main shortcoming in their stack. I don't want to deal with the limitations of D1. A serverless Postgres setup à la Neon/Supabase seems like it would be a slam dunk.


They have Durable Objects which should be enough for most use cases (it’s SQLite with no limitations). Have you tried that?


I've used DOs quite a bit. I'm a big fan... however, I find the database latency pretty hard to deal with. In the past 6 months I've seen upwards of 30s on little side projects running tiny (hundreds of KB) databases. Sometimes it's lightning fast... sometimes it's a disaster.

As a consequence I've had to build quite defensively, adopting a PWA approach with heavy caching and background sync. My hope is that latency improves over time, because the platform is nice to work with.


Yeah, but then I'm heavily coupled to their proprietary infrastructure. Maybe a good thing for them, but a nonstarter for building a real business on, for me and many others, I'd presume.


our open source system. We use this tool to serve a custom routing engine at my day job. It handles 100 req/s of Dijkstra queries in a 2GB pod, thanks to precalculated contraction hierarchies.
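For anyone unfamiliar with the baseline: contraction hierarchies are a preprocessing step layered on top of plain Dijkstra. A minimal sketch of the underlying algorithm (without the hierarchy precomputation, which is where the real speedup comes from) looks like this; the graph shape here is made up for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source.

    graph: {node: [(neighbor, edge_weight), ...]}
    """
    dist = {source: 0}
    pq = [(0, source)]          # (distance-so-far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue            # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 2}
```

Contraction hierarchies precompute shortcut edges so queries only explore a tiny fraction of the graph, which is how you get that kind of throughput on small hardware.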


> And I only mentioned options. How do you store "every stock quote and options trade in the past 4 years" in 263 GB!?

I think this would be pretty straightforward with Parquet, ZSTD compression, and some smart ordering/partitioning strategies.


DuckDB and SQL FTW.


Doesn’t matter. The point is that DuckDB can operate well on a wide range of infrastructure and is well suited for operating in resource constrained environments.


Post to HN apparently

