
To be fair that’s literally just a waste of resources. If you want 128 random bits, just get 128 random bits from the underlying source; unless your host language is amazingly deficient it’s just as easy.

That the problem is already solved does not mean the solution is good. Or that you can’t solve it better.

A uuidv4 is 15.25 bytes of payload (122 random bits) encoded in 36 characters (using the standard serialisation), in a format which is not conducive to GUI text selection.

You can encode 16 whole bytes in 26 characters of easily selectable content by using a random source and encoding in base32, or 22 characters by using base58.
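The size comparison above is easy to check; a minimal Python sketch using only the standard library (`secrets` for the random source, `base64` for the base32 encoding):

```python
import base64
import secrets
import uuid

# Standard UUIDv4 serialisation: 122 random bits in 36 characters.
u = str(uuid.uuid4())
print(len(u))  # 36

# 128 fully random bits, base32-encoded and stripped of '=' padding:
# 128 bits / 5 bits per character = 25.6, rounded up to 26 characters.
raw = secrets.token_bytes(16)
b32 = base64.b32encode(raw).decode("ascii").rstrip("=")
print(len(b32))  # 26
```

Base32 also stays within `[A-Z2-7]`, so a double-click selects the whole identifier, which the dash-separated UUID format breaks.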


The global uniqueness of a uuid v4 is the global uniqueness of pulling 122 bits from a source of entropy. Structure has nothing to do with it, and pulling 128 bits from the same source is strictly (if not massively) superior at that.

I stand corrected. I was thinking of the sequential nature of UUIDv7, or SQL Server's sequential IDs.

> Models in the past did not attempt to account for non-anthropogenic carbon emissions

They're literally mentioned by the first IPCC report already.


Early IPCC reports, all the way up to AR5, basically threw their hands up when it came to permafrost emissions. They admitted we didn't have the necessary data yet and for the most part didn't account for it at all in their models.

Check out the 1.5C special report. Go to section 2.2.1.2; the last paragraph says:

> The reduced complexity climate models employed in this assessment do not take into account permafrost or non-CO2 Earth system feedbacks, although the MAGICC model has a permafrost module that can be enabled. Taking the current climate and Earth system feedbacks understanding together, there is a possibility that these models would underestimate the longer-term future temperature response to stringent emission pathways

https://www.ipcc.ch/sr15/chapter/chapter-2/#:~:text=Geophysi...


The claim being discussed is not that they didn’t account for it, but that they didn’t attempt to account for it. Reading that text, I think they did attempt it, but chose not to include it (I guess because they didn’t need to in order to make their point, and by not including it they kept opponents from arguing about the validity of the result based on uncertainties in those models).

I don't get the distinction you're trying to make. It seems to me they considered it, but did not even attempt to account for it.

They admitted limitations of the data/research they had available. Their model explicitly does not attempt to account for it.


Is it fair to say they account for it, but don’t try to quantify it?

It did not factor into their models at all. They simply mentioned it, mostly as an asterisk for why their models are likely an underestimation.

> Edit, didn't realise it was this bad:

It's probably not bottomed out yet, some of those trips were booked months in advance and not cancellable without taking a financial hit.


Don't forget the evergreen "it's just politics it doesn't have to affect our relationship".

Oh yes, that's a classic line. They act as if we're just debating what the tax rate should be, or some other benign talking point.

Any business which exports, especially to Canada (because, oddly, between tariffs and repeated threats of invasion, US products and services are not seen in a positive light), is affected; likewise any business up- or downstream of mostly-immigrant workforces.

I’ve become ambivalent about RELAX NG. I used it a bunch because I like(d) the model, the “compact” syntax is really quite readable, and it’s a lot simpler than XML Schema.

However the error messages, at least when doing RNG validation via libxml2, are absolutely useless, so when you have a schema error, finding out why tends to be quite difficult. I also recall that trying to allow foreign schema content inside your document without validating it, while still validating your own schema, is a bit of a hassle.


> By default, jit_above_cost parameter is set to a very high number (100'000). This makes sense for LLVM, but doesn't make sense for faster providers. It's recommended to set this parameter value to something from ~200 to low thousands for pg_jitter (depending on what specific backend you use and your specific workloads).

Postgres’s PREPARE is per-connection so it’s pretty limited, and then connection poolers enter the fray and often can’t track SQL-level prepares.

And then the issue is not dissimilar to Postgres’s planner issues.
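The per-connection scope is easy to demonstrate: a statement prepared on one session simply doesn't exist on another. A sketch, assuming a running Postgres instance with a hypothetical `users` table (the table name and query are illustrative, not from the thread):

```sql
-- Session A
PREPARE get_user (int) AS SELECT * FROM users WHERE id = $1;
EXECUTE get_user(42);   -- works on this connection

-- Session B (a different connection, e.g. handed out by a pooler)
EXECUTE get_user(42);
-- ERROR:  prepared statement "get_user" does not exist
```

This is why transaction-mode poolers such as PgBouncer historically couldn't pass through SQL-level PREPARE: consecutive statements from the same client may land on different server connections.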


Oracle’s wasn’t, but I haven’t used it in a very long time so that may no longer be true.

The problem though was that it had a single shared pool for all queries and it could only run a query if it was in the pool, which is how our DB machine would max out at 50% CPU and bandwidth. We had made some mistakes in our search code that I had told the engineer not to make.

