I think it's important to think about architectural and domain bounds on problems and check if the big-O-optimal algorithm ever comes out on top. I remember Bjarne Stroustrup did a lecture where he compared a reasonably-implemented big-O-optimal algorithm on linked lists to a less optimal algorithm using arrays, and he used his laptop to test at what data size the big-O-optimal algorithm started to beat the less optimal algorithm. What he found was that the less optimal algorithm beat the big-O-optimal algorithm for every dataset he could process on the laptop. In that case, architectural bounds meant that the big-O-optimal algorithm was strictly worse. That was an extreme case, but it shows the value of testing.
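That experiment is easy to reproduce in spirit. Here's a toy sketch of my own (not Stroustrup's code; his demo was C++ vector vs. list, where the gap comes from cache locality, while CPython's constants differ for other reasons, so treat any timings as illustrative only):

```python
import bisect
import random
import time

class Node:
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def linked_insert(head, value):
    """Insert into a sorted singly linked list: O(1) splice, O(n) search."""
    if head is None or value <= head.value:
        return Node(value, head)
    cur = head
    while cur.next is not None and cur.next.value < value:
        cur = cur.next
    cur.next = Node(value, cur.next)
    return head

def linked_to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

data = [random.random() for _ in range(2000)]

t0 = time.perf_counter()
head = None
for v in data:
    head = linked_insert(head, v)
t1 = time.perf_counter()

arr = []
for v in data:
    bisect.insort(arr, v)   # O(log n) search, then an O(n) C-level shift
t2 = time.perf_counter()

assert linked_to_list(head) == arr == sorted(data)
print(f"linked list: {t1 - t0:.4f}s   array: {t2 - t1:.4f}s")
```

The point is the measurement loop, not either implementation: vary the data size and see where, if ever, the "better" asymptotics start paying for themselves.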
Domain bounds can be dangerous to rely on, but not always. For example, the number of U.S. states is unlikely to change significantly in the lifetime of your codebase.
Anecdotally, what we found in Austin was a combination of two factors:
First, awareness of the futility and selfishness of "growth elsewhere" as a solution is much higher in younger people — and by younger, I mean currently under fifty. Generational turnover in Austin had been eating away at the NIMBY majority, and conversations about housing in Austin have long been polarized more by age than by left/right political sentiment. There's a caricature, with a strong vein of truth, of the old Austin leftist who has Mao's little red book on their shelves and thinks apartment buildings are an abomination, and Austinites of that generation are experiencing mortality. At the same time, younger people are adopting more and more urbanist mindsets compared to their parents.
However, I think a much, much bigger factor was the influx of younger people, especially young people with experience of larger cities, diluting the votes of the older NIMBYs. Austin has been shaped by growth for half a century, but its "discovery" in the 2000s and very brief status as a darling of coastal hipsters (remember that term?) have had a lasting effect on Austin's popularity and demographics. It's been twenty years since it was the "it" place for Brooklynites to visit, but in those twenty years, it's had a lot of exposure among young urban dwellers, and some of them discovered they liked it and moved here, bringing their comfort with dense living and their appreciation that growth can bring a lot of positives.
For what it's worth, every homeowner I know in Austin has seen their home depreciate significantly this decade, and I don't think it changed a single person's mind about Austin's housing policy. People who opposed the reforms are bitter about the outcome, and people who supported the reforms say "it sucks for us personally, but it's what we set out to accomplish, and we're glad it worked."
> Austinites of that generation are experiencing mortality.
This is such a funny and novel way of saying "old people in Austin are dying" I just had to point it out.
Also, I like the way this comment is written in general. Felt easy to read for its length, and most importantly the tone stayed fun and personal while still being informative and on topic.
People see lower property taxes as a silver lining for short-term swings in the market, but I don't know anybody who thinks this is a short-term swing that they can ride out.
Nobody is happy about their property values going down long term. It exposes them to the risk of a big loss if they're forced to sell because of events in their life.
I think it's fine and generous that he credited these rules to the better-known aphorisms that inspired them, but I think his versions are better; they deserve to be presented by themselves instead of alongside the mental clickbait of the classic aphorisms. They preserve important context that was lost when the better-known versions were ripped out of their original texts.
For example, I've often heard "premature optimization is the root of all evil" invoked to support opposite sides of the same argument. Pike's rules are much clearer and harder to interpret creatively.
Also, it's amusing that you don't hear this anymore:
> Rule 5 is often shortened to "write stupid code that uses smart objects".
In context, this clearly means that if you invest enough mental work in designing your data structures, it's easy to write simple code to solve your problem. But interpreted through an OO mindset, this could be seen as encouraging one of the classic noob mistakes of the heyday of OO: believing that your code could be as complex as you wanted, without cost, as long as you hid the complicated bits inside member methods on your objects. I'm guessing that "write stupid code that uses smart objects" was a snappy bit of wisdom in the pre-OO days and was discarded as dangerous when the context of OO created a new and harmful way of interpreting it.
I guess what I mean by that is that Rob Pike was obviously aware that his rules were not as catchy and pithy as the aphorisms he credited, and his only reason for writing his rules was to improve on them by making them more explicit and less prone to user error. But presenting his versions alongside the more catchy ones means that every time people read them, the catchy ones distract attention and remain more memorable than the improved versions.
The other reasons given make sense to me, but I bet there is also some psychological benefit in having a regularly scheduled escape from home, and having a guilt-free excuse for it built in, which partly compensates for being forced to come in a few days a week. The contrast makes it easier to appreciate the company of your spouse and probably makes child-rearing seem less oppressive. People theoretically could manage this without work imposing it on them, but in practice, having to make and justify the choice creates stress.
Escape from home is healthy. But not when you are escaping into the office. It’s healthy to escape for a hike, for groceries, to take a walk, go to the gym, etc.
My degree is in math, I love Dijkstra, and I think a lot of my colleagues have often created more work than necessary for themselves by treating pieces of code empirically when they could have got a more precise understanding by spending an hour reading it carefully.
However, I think the most fascinating thing about Dijkstra is how wrong he turned out to be in his prediction that an empirical approach would not scale.
I suspect that approaching programming like Dijkstra might have paid off long-term, but it was rarely a good deal in the short term, both for bad reasons (the empirical approach is a quicker and cheaper way to create buggy software that we can sell and claim as achievements on our performance reviews) and valid reasons (the unreliability of humans and hardware ultimately forces us to approach real computer systems, which are always a composite of hardware, software, and humans, empirically anyway.)
Bullshit is so dangerous because it could mean something. That VP could mean: it's time to look beyond the set of mature technologies we've been considering and look at newer technologies that we would normally ignore because they come with risks and rough edges and a higher cost of ownership.
So it might be a substantive decision that affects how everybody in the room will do their jobs going forward. Or it could be a random stream of words chosen because they sound impressive, which everyone will nod respectfully at and then ignore. And like an LLM, he might have made it into his current position without needing to know the difference.
Correct, and in my opinion we have a cutting-edge machine, the best available. So it was BS. What was really troubling them is that for years the operational delivery part of the business has saved everyone else by finding more and more efficiencies. I had stated that it was no longer cost-effective to spend money on the diminishing returns of squeezing tiny percentages more out of it. The room fell completely silent, because their strategy (of leaving it to someone else) was gone. Much harder questions, what goes through the machine, how it is sold, need to come to the fore... and that is terrifying for people who PowerPoint for a living... so instead, they break the silence with BS, nod, and pretend it's not happening.
Most anime is either a guilty pleasure or a guilty displeasure for me. The stuff I like, I feel embarrassed of the part of me that likes it, and I feel embarrassed about what I'm willing to overlook to enjoy it. Then the stuff I don't like, I feel closed-minded about it, like what's wrong with me that I'm too stuffy to enjoy it or too dumb to get it. But I don't have friends or acquaintances who are into it, so it never comes up with other people, and I generally don't think about it.
Oh, I understand this one all too well as a fan of the Isekai genre. So much slop and so many poorly done power fantasies. But there's some amazing content in there. Then I look at something like One Piece and I'm not really vibing with it at all, despite it being overwhelmingly popular.
> In their 1872 papers, though, Cantor and Dedekind had found a way to construct a number line that was complete. No matter how much you zoomed in on any given stretch of it, it remained an unbroken expanse of infinitely many real numbers, continuously linked.
> Suddenly, the monstrosity of infinity, long feared by mathematicians, could no longer be relegated to some unreachable part of the number line. It hid within its every crevice.
I'm vaguely familiar with some of the mathematics, but I have no idea what this is trying to say. The infinity of the rational numbers had been known more than two thousand years prior by the Greeks, including by Zeno, whom the article already mentioned. The Greeks also knew that some quantities could not be expressed as rational numbers.
I would assume the density of irrational numbers was already known as well? Given x < y, it's easy to construct x + (y-x)(sqrt(2))/2.
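A quick numeric sanity check of that construction (a float can only approximate the true irrational value, so this only verifies the betweenness claim, not irrationality):

```python
import math
from fractions import Fraction

def irrational_between(x, y):
    """Given rationals x < y, return x + (y - x) * sqrt(2) / 2.
    Since 0 < sqrt(2)/2 < 1, the result lies strictly between x and y,
    and a rational result would force sqrt(2) itself to be rational."""
    assert x < y
    return float(x) + float(y - x) * math.sqrt(2) / 2

z = irrational_between(Fraction(1, 3), Fraction(1, 2))
assert Fraction(1, 3) < Fraction(z) < Fraction(1, 2)
```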
Take something like the integers (1,2,3,etc.). They are infinite; given an integer, you can always add 1 and get a new integer.
However, there are "gaps" in that number line. Between 1 and 2, there are values that aren't integers. So the integers make a number line that is infinite, but that has gaps.
Then we have something like the rational numbers. That's any number that can be expressed as a ratio of 2 integers (so 1/2, 123/620, etc.). Those are different, because if you take any two rational numbers (say 1/2 and 1/3), we can always find a number in between them (in this case 5/12). So that's an improvement over the integers.
However, this still has "gaps." There is no fraction that can express the square root of 2; that number is not included in the set of rational numbers. So the rational numbers by definition have some gaps.
The problem for mathematicians was that for every infinite set of numbers they were defining, they could always find "gaps." So mathematicians, even though they had plenty of examples of infinite sets, kind of assumed that every set had these sorts of gaps. They couldn't define a set without them.
Cantor (and it seems Dedekind) were the first to be able to formally prove that there are sets without gaps.
I just don't understand why this was disturbing. Prior to the construction of the reals, the existence of irrational and transcendental numbers was disturbing, because they showed that previous constructions (rational numbers and algebraic numbers) were incomplete. If those gaps were disturbing, a construction without gaps should have been satisfying, reassuring, a resolution of tension. Was there some philosophical or theological theory that required the existence of gaps, that claimed that a complete construction of the number line was mathematically impossible, because of some attribute of God or the cosmos?
I think the issue was that most irrational/transcendental numbers aren’t finitely representable. They are mathematical objects each of which somehow consists of an infinity (e.g. an infinite decimal expansion). Each is the end point of infinitely many steps (e.g. a converging sequence) that you can’t actually carry out in practice, and for most of them you can’t even write down a finite description of what steps to perform, so they arguably don’t “really” exist.
Another point of contention was the notion that the continuous number line would be formed out of dimensionless points. Numbers were thought of as residing on the line, but it was hard to grasp how a line could consist solely of a collection of points, since given any pair of points, there would always be a gap between them. “Clearly” they can’t be forming a contiguous line.
Right, but that's the opposite of what the Quanta article says. The article says that Cantor and Dedekind discovered infinity in bounded intervals. What they discovered (really, what they concocted) was uncountable infinity.
I don't like the way it's written, but what they are talking about is completeness in the sense of "Dedekind completeness"; i.e., that given any two sets A and B with everyone in A below everyone in B, there is some number which is simultaneously an upper bound for A and a lower bound for B.
Note that this fails for the rationals: e.g., if we let A be the rationals below sqrt(2) and B be the rationals above sqrt(2).
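A small sketch of why no rational can sit at that boundary (my own illustration, using exact rational arithmetic; not from the parent comment): for any positive rational q, bisection finds another rational strictly between q and sqrt(2), so q is never simultaneously an upper bound of A and a lower bound of B.

```python
from fractions import Fraction

def closer_to_sqrt2(q):
    """For a positive rational q, return a rational strictly between q and
    sqrt(2): evidence that q cannot be the boundary of the cut
    A = {x : x*x < 2}, B = {x > 0 : x*x > 2}."""
    if q * q < 2:                     # q is in A: find a bigger element of A
        hi = Fraction(2)
        while True:
            mid = (q + hi) / 2
            if mid * mid < 2:
                return mid
            hi = mid
    else:                             # q is in B: find a smaller element of B
        lo = Fraction(1)
        while True:
            mid = (lo + q) / 2
            if mid * mid > 2:
                return mid
            lo = mid

q = Fraction(3, 2)                    # 1.5, an element of B
r = closer_to_sqrt2(q)
assert 2 < r * r < q * q              # r is still in B but strictly smaller
```

The loop always terminates because no rational squares to exactly 2, so bisection eventually lands on the same side as q but closer to sqrt(2).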
In school, we talked about “Dedekind cuts” but we never formalized the definition. Kind of disappointed now because your explanation is very simple and elegant.
> Before their papers, mathematicians had assumed that even though the number line might look like a continuous object, if you zoomed in far enough, you’d eventually find gaps.
I'll try to interpret this sentence.
We all have some mental imagery that comes to mind when we think about the number line. Before Cantor and Dedekind, this image was usually a series of infinitely many dots, arranged along a horizontal line. Each dot corresponds to some quantity like sqrt(2), pi, that arises from mathematical manipulation of equations or geometric figures. If we ever find a gap between two dots, we can think of a new dot to place between them (an easy way is to take their average). However, we will also be adding two new gaps. So this mental image also has infinitely many gaps.
Dedekind and Cantor figured out a way to fill all the gaps simultaneously instead of dot by dot. This method created a new sort of infinity that mathematicians were unfamiliar with, and it was vastly larger than the gappy sort of infinity they were used to picturing.
We've known since Zeno that all of our ways of visualizing infinity in finite terms are incomplete and provably incorrect, despite being unavoidable in human thinking. In other words, we knew the "gaps" reflected incomplete reasoning, not real emptiness between "consecutive" numbers. If Dedekind and Cantor only changed how we visualize infinity, I don't understand why it would cause a stir.
> This method created a new sort of infinity that mathematicians were unfamiliar with, and it was vastly larger
I understand that the construction of the reals paved the way for the later revolutionary (and possibly disturbing, for people with strongly held philosophical beliefs about infinity) discovery that one infinity could be larger than another. But in the narrative laid out by the article, that comes later, and to me it's clear (unless I misread it) that the part I quoted is about the construction of the reals, before they worked out ways to compare the cardinality of the reals to the cardinality of the integers and the rationals.
"Knowing" something and proving it mathematically are two different beasts.
Zeno couldn't prove that there were no gaps; he showed that infinity was different from how we understood finite things, but that's not the same as proving there are no gaps.
Later, mathematicians proved the existence of irrational numbers. These were "gaps" in the rational numbers, but they weren't all the "same," if that makes sense? The square root of 2 and Euler's number are both irrational, but it's not immediately clear how you'd make a set that includes all the numbers like that.
I'm not sure everyone knew that gaps reflected incorrect reasoning. It would have been natural to assume that all infinite sets were qualitatively the same size, since uncountable infinity was not an idea that had been discovered yet. Zeno's own resolution wasn't that his reasoning was wrong, but that our perception of the world itself is wrong and the world is static and unchanging.
As for the importance of visualization (of the reals), I don't think you can cleanly separate it from formalism (as constructed in set theory).
I think we all have built in pre-mathematical notions of concepts like number, point, and line. For some, the purpose of mathematics is to reify these pre-mathematical ideas into concrete formalism. These formalisms clarify our mental pictures, so that we can make deeper investigations without being led astray by confused intuitions. Zeno could not take his analysis further, because his mental imagery was not detailed enough.
From clarity we gain the ability to formalize even more of our pre-mathematical notions like infinitesimal, connectedness, and even computation. And so we have a feedback loop of visualization, formalism, visualization.
I think the article was saying that Dedekind and Cantor clarified what we should mean when we talk about the number line, and dispelled confusions that existed before then.
> If Dedekind and Cantor only changed how we visualize infinity, I don't understand why it would cause a stir.
Because scientific progress is explicitly the process of changing the general mental model of how people approach a problem with a more broadly capable and repeatable set of operations.
I should have been more specific; I understand why it was a mathematical breakthrough. What I don't understand is why it would have triggered some kind of psychological horror or philosophical crisis. It was a new way of understanding numbers, but it didn't reveal numbers to be acting any differently than we had always assumed.
If anything, it seems like it would have been comforting to finally have mathematical constructions of the real numbers. It had been disturbing that our previous attempts, the rational and algebraic numbers, were known to be insufficient. The construction of the reals finally succeeded where previous attempts had failed.
I would invite you to be more open to the idea that people don’t live in a world where they operate inside a theoretical framework with localized test actions.
Major breakthroughs tend to cause existential crises because most people don’t have the full scope of their work in order to understand where it is broken.
Because painting those who objected to these definitions of mathematical infinity as "horrified" and "disturbed" was a form of character assassination, which was not uncommon at the time. The high moderns didn't play.
Extraordinary claims require extraordinary evidence. Can you cite any claims by mathematicians that there were "gaps"? It isn't even true for rational numbers that you can identify an unoccupied "gap".
Yeah, it took me a second, too. By "gaps" they mean numbers that can't be represented in a given construction. So irrational numbers are "gaps" in the rational numbers, and transcendental numbers are "gaps" in the algebraic numbers. Not the best spatial metaphor.
You’re thinking of this with the benefit of dedekind in your schooling - whether or not your calculus class told you about him.
Completeness - a gapless number line - was neither obvious nor easy to prove; the construction is usually elided even in undergraduate calculus unless you take an actual real analysis course.
The issue is this: for any given number you choose, I claim: you cannot tell me a number “touching” it. I can always find a number between your candidate and the first number. Ergo - the onus is on you to show that the number line is in fact continuous. What it looks like with the naive construction is something with an infinite number of holes.
I think you are getting away from my point, which pertains to what the article said, which is that mathematicians thought there were "gaps". What mathematician? Can I see the original quote?
The linguistic sleight-of-hand is what I challenge. What is this "gap" in which there are no numbers?
- A reader would naturally assume the word refers to a range. But if that is the meaning, then mathematicians never believed there were gaps between numbers.
- Or could "gap" refer to a single number, like sqrt(2)? If so, it obviously is not a gap without a number.
- Or does it refer to gaps between rational numbers? In other words, not all numbers are rational? Mathematicians did in fact believe this, from antiquity even ... but that remains true!
Regarding this naive construction you are referring to: did it precede set theory? What definition of "gap" would explain the article's treatment of it?
I don’t know the answers to all of your questions - but I believe you’d benefit from some mathematical history books around the formalization of the real analysis; I’m not the best person to give you that history.
A couple comments, though - first, all mathematics is linguistics and arguably it is all sleight of hand - that said the word “gaps” that you’ve rightly pointed out is vague is a journalists word standing in for a variety of concepts at different times.
The existence of the irrationals was itself a secret in ancient Greece - and hence known for thousands of years - but the structure of the irrationals was not well understood until quite recently.
To talk precisely about these gaps, if you’re not a mathematical historian, you have to borrow terminology from the tools that were used to describe and formalize the irrationals -> if former concepts about the line sound hand-wavy to you, it is because they WERE hand-wavy. And this hand-waviness is about infinity as well; the two are intimately connected. In modern terms, the measure of the rationals across any subset of the (real) number line is zero - that is the meaning of the “gaps”. There is, between any two rationals, a great unending sea where if you were to choose a point completely at random, the odds of that point being rational are zero.
EDIT: for a light but engaging read about topics like this, David Foster Wallace’s Everything and More is excellent.
I think you will agree that the bulk of your comment employs a post-set-theory nomenclature.
Regarding "if you were to choose a point completely at random, the odds of that point being another rational is zero", I ponder the question of how one might casually "choose" a value with infinite entropy.
> > Suddenly, the monstrosity of infinity, long feared by mathematicians, could no longer be relegated to some unreachable part of the number line. It hid within its every crevice.
Think of the number line stretching from negative infinity to positive infinity and let C represent the cardinality/size/count of numbers on that number line. Now take just the portion of the number line from 0 to 1. Let C1 represent the cardinality/size/count of numbers on the truncated line from 0 to 1. You would assume that C > C1. But in fact they are equal. There are just as many real numbers between 0 and 1 as there are on the entire number line. Even worse, this holds true for any portion of the number line, however small or big you make it. Rather than infinity being in some far distant place at the edge of the line in either direction, there is infinity everywhere along the number line.
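To make the C = C1 claim concrete, here's a quick numeric check (my own sketch; the tangent map is one standard choice of bijection between (0, 1) and the whole line, not anything from the article):

```python
import math

def to_line(x):
    """Bijection from the open interval (0, 1) onto the whole real line."""
    return math.tan(math.pi * (x - 0.5))

def to_interval(y):
    """Inverse map: the whole real line back into (0, 1)."""
    return math.atan(y) / math.pi + 0.5

# Every real, however large, corresponds to a point strictly inside (0, 1)
for y in (-1e6, -1.0, 0.0, 2.5, 1e6):
    x = to_interval(y)
    assert 0 < x < 1
    assert abs(to_line(x) - y) < 1e-6 * max(1.0, abs(y))
```

Because the map is invertible, no point of either set is left unpaired, which is exactly what "same cardinality" means.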
> I don't get what "suddenly" became apparent.
It appeared suddenly because, prior to Cantor/Dedekind, mathematics only understood the countably infinite (natural numbers, integers, rationals, etc.). By constructing a complete number line, Cantor/Dedekind showed there is a cardinality greater than countable infinity: the continuum.
Cantor also showed that there is an infinite number of cardinalities.
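The engine behind those results is Cantor's diagonal argument. A finite toy sketch of mine (purely illustrative): given any list of decimal-digit strings, you can always build digits that differ from the i-th row in the i-th place, so the list can't have been exhaustive.

```python
def diagonal_escape(rows):
    """Given rows of a claimed enumeration (digit strings for numbers in
    (0, 1)), build a digit string that differs from row i at position i."""
    digits = []
    for i, row in enumerate(rows):
        # Pick any digit other than row[i] (avoiding 0 and 9 sidesteps
        # dual decimal expansions like 0.0999... = 0.1000...)
        digits.append("5" if row[i] != "5" else "4")
    return "".join(digits)

rows = ["1415926535", "7182818284", "4142135623", "5772156649"]
escaped = diagonal_escape(rows)
# escaped differs from every row somewhere, so it was not on the list
assert all(escaped[i] != rows[i][i] for i in range(len(rows)))
```

Run against an infinite enumeration, the same construction shows the reals can't be listed at all.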
Complete just means the limit of every sequence is part of the set. So there’s no way to “escape” merely by going to infinity. Rational numbers do not have this property.
How to construct the real numbers as a set with that property (and the other usual properties) formally and rigorously took quite a long time to figure out.
... for "Cauchy sequences", which are basically sequences whose terms become "closer and closer together".
You can still have sequences with no limits (a_n:=n, going to infinity, where all successive terms differ by 1 and which does not have a limit in the usual metric), as well as sequences with multiple limit points (in which case, subsequences can be considered).
Btw this is "Cauchy completeness", so it is a bit different (but equivalent) way to approach the construction of the real numbers from Dedekind's, but it is also one that can apply to more general metric spaces.
You can construct sequences of rational numbers where the limit is not rational (eg it's sqrt 2)
Trivially, the sequence of truncated decimal expansions of root 2 (e.g. 1.4, 1.41, 1.414, ...), although I find this somewhat unsatisfying.
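Those truncations are easy to generate exactly with integer arithmetic (a side note of mine, using Python's isqrt, which floors the integer square root):

```python
from fractions import Fraction
from math import isqrt

def sqrt2_truncation(n):
    """The decimal expansion of sqrt(2) truncated to n digits,
    as an exact rational."""
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

terms = [sqrt2_truncation(n) for n in range(1, 5)]
# 1.4, 1.41, 1.414, 1.4142: every term's square stays below 2, and the
# terms creep upward toward a limit that is not itself rational
assert terms[0] == Fraction(14, 10)
assert all(t * t < 2 for t in terms)
assert all(a < b for a, b in zip(terms, terms[1:]))
```

The sequence is Cauchy (consecutive terms differ by less than 10^-n) yet has no limit inside the rationals, which is exactly the gap being discussed.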
With the real numbers there are no gaps. There are no sequences of reals where the limit of that sequence is not a real number
> Give x < y, it's easy to construct x + (y-x)(sqrt(2))/2.
That's only obviously irrational if x and y are rational. (But maybe you meant that, given an arbitrary interval a < b, you first shrink it to a rational interval a < x < y < b?)
Application design is still a challenge. I had Monday off and vibe-coded up an app that I've been wanting to use for years. The thing is, I can tell it's going to be challenging to make it something sticky that I actually use.
Which makes sense. The reason I wanted to make this app is that there are two very popular paid apps in the same category that I use every day that don't quite feel the way I want them to. It'll be easy to fix the little annoyances and missing features, but there's a feeling that's missing from them as well. I don't think it's wrong to say that I'm put off by a lack of taste, at least according to my taste. I don't know if I can do better, but I'm looking forward to trying, and I love that Claude makes me fast enough that the project has finally tipped from "I'd love to tackle this, but I know it's too big for me" (which is what I've been thinking for the last 5-10 years) to "I can make a credible attempt at this."
> You shouldn't be able to use AI or automation as the decider to ban someone from your business/service
That would mean dooming companies to lose the arms race against fraud and spam. If they don't use automation to suspend accounts, their platforms will drown in junk. There's no way human reviewers can keep up with bots that spam forums and marketplaces with fraudulent accounts.
Instead of dictating the means, we should hold companies accountable for everything they do, regardless of whether they use automation or not. Their responsibility shouldn't be diminished by the tools they use.