Frankly this whole thing is worth it if it scares Taiwan and Japan into building new nuclear capacity. Taiwan has been suicidally turning off nuclear generation for a decade despite it being the last country on earth that wants to rely on naval imports of essential goods.

Could it be because nuclear is highly centralized? I would expect that something like solar/wind power would be better for decentralization (in a war).

Even if you don't blow up a nuclear plant, it seems like cutting the power from one would be relatively easy.


Russia has refrained from hitting Ukraine's nuclear plants directly, and Ukraine has more or less kept them connected to the grid (albeit with nonstop repair efforts).

Transformer substations are more vulnerable targets but it's hard to be decentralized enough to not have those.


The tech companies don't really have any issue paying for the capacity; it's a negligible cost compared to the compute capital. They just want streamlined regulatory approvals to bring the plants online.

> The tech companies don't really have any issue paying

It reduces profit.


Wrong, using grid power without adding capacity will result in tech companies paying more for electricity too. They want to add capacity.

Are you absolutely sure they don't want us to add the capacity for them with a pathway for further government subsidies?

Almost everything in tech has been subsidized in one way or another via tax avoidance schemes or outright lobbying and manipulation of the market.

Why would this be any different?


S&P 500 is down 1.7%, you clowns.

The stock market has an unfathomable amount of wealth in it; you just have to be comfortable with big numbers. That's life.


It's currently down to about where it was on Feb 23, everybody panic.


Are you saying this happens routinely?

It doesn't feel routine but it's not something I watch daily



The sheer naivete to think the United States has any legal or moral obligation to respect the privacy of the global population.

Legal, no. Moral, absolutely. I mean, aren't other people human?

Under their moral framework, Israel has no moral obligation against bombing the shit out of Palestine.

Other people are human and I agree that violence used against non-Americans overseas should be used only with very serious consideration and with checks and safeguards (albeit not with the same absolute protections as when used domestically).

But privacy? Yeah, no. If it benefits the safety or interests of the US even slightly to know what the world is thinking, seeing, and planning, I have zero qualms about using all tools at our disposal.


Do you have any qualms about Chinese surveillance of the USA? Is it cool for them to gather info on you?

What is a qualm? It's their job to do it, and our job to stop them from doing it.

Does that apply to all moral obligations?

Does Iran have a moral obligation to not nuke the global population? What about Israel and Palestine?


THERE IS A NEARLY INFINITE DEMAND FOR SKEPTICAL AND COMFORTING TAKES ABOUT AI CODING

THE MARKET WILL FILL THAT VOID

IT DOES NOT MAKE IT TRUE


Yeah, and for every take like that, there’s an overhyped one that’s just trying to sell you the sensation.

Not a single one of them had reaching stable orbit in the flight plan.

The DoD can invoke the DPA on any company it wants. Not really sure how this becomes Anthropic's fault.

They knew what they were signing on to when they sought DoW funding. I guarantee Dario was briefed on the risks associated with high-profile govt contracting.

Even if not briefed, such a smart person surely knows that he owes his stash of gold to the willingness of others to spill their blood to protect it. Those willing to spill their blood have historically always had a claim on your gold.

I mean they got threatened with the Defense Production Act. Firmly standing their ground without an inch of give may backfire spectacularly too, if the DoD injects itself into model training.

I think they pretty clearly demonstrated good faith and where it ends up is a tactical choice I'm not in a great position to judge.


If DoD seizes the IP, the issue is they will need the cooperation of Anthropic's scientists, at least in the short term, if they want it to remain a frontier model. The labor angle isn't entirely guaranteed, though the white-collar worker has very little spine in this country.

> DoD seizes the IP

Almost certainly not on the table. If Hegseth did this he’d crash the market. That’s a red line he will only cross out of stupidity.


> That’s a red line he will only cross out of stupidity.

We've been saying this about many policies of this administration, only to be sorely disappointed.


> We've been saying this about many policies of this administration, only to be sorely disappointed

I'm not saying it's off the table completely, particularly with Hegseth, who is an insecure idiot. But the choices once it's enacted are (a) Trump ordering Hegseth to stop fucking around or (b) a market crash handing Democrats full control of Congress.


> That’s a red line he will only cross out of stupidity.

So you're saying it's highly likely then?


Like when seizing 10% of Intel crashed the market? This isn't exactly the same situation, but I really don't think it's safe to assume that this will be the issue on which the business community will grow a spine.

You might be surprised. When Harry Truman tried to nationalize US steel, it created massive pushback (bad for democracy, IMO, but the business community defends its interests to the teeth, even though in that case he wanted to advance the Korean War effort, which was in the business community's interests).

https://www.ebsco.com/research-starters/law/truman-orders-se...


Nationalizing predicate companies is often seen as a step towards centralization.

> That’s a red line he will only cross out of stupidity.

Do not underestimate Hegseth's stupidity. He's completely unqualified for the job he has and is way out of his depth. Ditto for many others in this administration.


yeah, but the question I'd be asking myself is,

Hey, so what you are saying is that unless we use the AI that we control to take control of the mass surveillance and autonomous drone strike systems, you will force us to take control of these systems?

I mean, did H just Open Claw the entire US military?


ChatGPT writing a blog post attacking Gemini security flaws. It's their world now, we're just watching how it plays out.

How do you know that this blog post was written by ChatGPT?

It feels generated to me too. It’s this:

    When you enable the Gemini API (Generative Language API) on a Google Cloud project, existing API keys in that project (including the ones sitting in public JavaScript on your website) can silently gain access to sensitive Gemini endpoints. No warning. No confirmation dialog. No email notification.

Specifically, the last bit - “No warning. No confirmation dialog. No email notification.” - immediately smells like LLM-generated text to me. Punchy repetition in a set of 3.

If you scroll through TikTok or Instagram you can see the same exact pattern in a lot of LLM-generated descriptions.


I think there's a lot more than just that, but I think part of the problem is that you just get an uncanny valley feeling. All of the phrases and rhetorical tricks that these tools use are perfectly valid, but together they feel somehow thin?

That said, some specific things that feel very AI-y are the mostly short, equally-sized paragraphs with occasional punchy one-sentence paragraphs interspersed between them; the use of bold when listing things (and the number of two-element lists); there are a couple of "it's not X, it's Y"-style statements; one paragraph ends with a "they say it's X, but it's actually Y" construct; and even the phrasing of some of the headings.

None of these are necessarily individually tells of AI writing (and I suspect if you look through my own comments and blog posts on various sites, you'd find me using many of the same constructs, because they're all either effective rhetorically, or make the text clearer and easier to understand). But there's something about the concentration of them here that feels like AI - the uncanny valley feeling.

I would put money on this post at least having gone through AI review, if not having been generated by AI from human-written notes. I understand why people do that, but I also think it's a shame that some of the individual colour of people's writing is disappearing from these sorts of blog posts.


Using threes is common in English writing and speaking. It has an optimal balance of expressiveness (three marking a pattern or breadth; creating momentum) without being overwhelming.

It’s not uncommon, as basic writing advice, to use sets of three for emphasis. That isn’t a signifier of LLM generation, in my opinion.


It's also seemingly the only way ChatGPT knows how to write, while being very uncommon in blog posts beforehand. Of course it's not 100% proof, but it's the most likely explanation.

It has a name. The Rule of Threes. https://en.wikipedia.org/wiki/Rule_of_three_(writing)

“The rule of three is a writing principle which suggests that a trio of entities such as events or characters is more satisfying, effective, or humorous than other numbers, hence also more memorable, because it combines both brevity and rhythm with the smallest amount of information needed to create a pattern.”

It’s how I was taught to write, but I understand that my personal experience can’t be generalized to make sweeping statements.

Do you have data that suggests it’s uncommon in human-authored blog posts and more common in LLM-generated text?


> It has a name. The Rule of Threes. https://en.wikipedia.org/wiki/Rule_of_three_(writing)

I don't think that's exactly it.

Speaking of LLM-writing in general, it seems to greatly overuse certain types of constructions or use them in uncommon contexts. So that probably isn't so much using the rule of threes, but overusing the rule of threes in certain specific ways in certain specific contexts.


I don’t necessarily doubt you or the grand-parent comment, but if it’s ‘obvious to even the most casual of observers’ (as my father would say) then it should be easy to have hard data.

This excerpt is demonstrating the use of a literary technique to write non-literary prose. It's an almost sure sign that an LLM is generating the text.

Of course, how could a writer have writing chops and use writing techniques? It boggles the mind that anyone thinks that would ever happen. Must have been aliens.

A good writer knows when to use literary techniques.

They work just fine in this post.

Yeah, it's a perfectly reasonable device that I often use. I love the circular reasoning being displayed:

  "this sounds like AI"
  "professional writers use this technique"
  "they can't be a professional writer, they're using AI"

No, it’s unpleasant to read. To be clear, it’s possible a person wrote this, and that would not change it being unpleasant.

I’m not a native speaker, so my level of AI recognition is already low. I find it very interesting what patterns people bring up to declare it’s AI. The punchy set-of-3 one, for instance, is a pattern I use while speaking. Can’t say I would write like this, though.

It's not so much the grouping of 3 or the way it's supposed to be punchy specifically that's the problem; that is just one example of what gives the article the "LLM-generated" feeling, since whatever cheap model people are using for this kind of spam has some common tics.

I use groupings of 3 and try to make things punchy myself sometimes, especially when I'm writing something intended to sway others. I think the problem with this article is the way it feels like the perfect average of corporate writing. It's sort of like the "written by committee" feel that incredibly generic pop music often has.

When I write things, I often go back and edit and reword parts. Like the brushstrokes in an oil painting, the flow of thought varies between paragraphs and even sentences. LLMs only generate things from left to right (or vice versa in RTL languages, I presume). I think that gives LLM generated text a "smooth" texture that really stands out to anyone who reads a lot.


I completely agree with you. There's something conspicuous about this particular use of the "group of three" device. It's trying but it's goofy and conspicuous. I think it's not human, it's 52 trillion parameters in a trenchcoat.

I'm not a native speaker and my level of AI recognition is higher than 99.999% of native speakers - and I'd be happy to be tested on it for proof.

The biggest factor is simply how long you've been using LLMs to generate text, how often, how much. It's like how an experienced UI designer can instantly tell that something is off by a single pixel off upon first seeing a UI, whereas if you gave me $200 to find it within 10 minutes I might well fail.


Aside from particulars like the set of 3, LLMs add a lot of emotive language which doesn't mean anything or is a repetition of already established points. Since they can't add any actual substance beyond what was in the prompt, the only thing they do is pad the prompt with filler language.

OK I've seen many people make this point on this site over just the last few months, but where do you think LLMs pick up these patterns? How did this rule of threes https://en.wikipedia.org/wiki/Rule_of_three_(writing) get into the LLM so they are so damn recognizable as LLMs and not as humans?

HN Note: Yes the rule of threes is broader than just this particular pattern here, but in my opinion this common writing and communication pattern is a specific example of the rule of threes.

Punchy repetition in a set of 3. Yes. LLMs are able to capably mimic the common patterns that how to write books have suggested for the last 100 years as ways to make your writing more "impactful" and attention-grabbing. So are humans. They learned it from watching us.

I am a little bit worked up about this, as I have felt insulted a couple of times at having something I've written accused of being LLM output. In one case it was because I had written something from the viewpoint of a depressed and tired character, and someone thought it had to be an LLM because it seemed detached from humanity! Success!

I too would like to be able to reliably detect when something has been written by an LLM so I can discount it out of hand, but frankly many of the attempts I see people make to detect these things seem poorly reasoned and actively detrimental.

People have learned in classes and from reading how to improve their writing. LLMs have learned from ingesting our output. If something matches a common writing 101 tip it is just as likely to be reasonably competent as it is to be non-human. The solution to escape being labelled an LLM is not to become less competent as a writer.

I have been overly verbose here, as I am somewhat worked up and angry and it is too late in the morning to go back to sleep but really too early to be awake. I know verbosity is also a symptom of being an LLM, but not giving a damn is a symptom of humanity.


>but where do you think LLMs pick up these patterns?

>LLMs are able to capably mimic the common patterns that how to write books have suggested for the last 100 years as ways to make your writing more "impactful" and attention-grabbing. So are humans. They learned it from watching us.

Don't forget that LLMs (at least the "instruct" versions) undergo substantial post-training to align them with the authors' objectives, so they are not a 100% pure reflection of the distribution seen on the internet. For example, it's common for LLMs to respond with "You're absolutely right!" to every second message, which isn't what humans usually do. It's a result of some kind of RLHF: human labelers liked to hear that they're right, so they preferred answers containing such phrases, and those responses became amplified. People recognize LLM-generated writing because LLMs' pattern distribution is different from the actual pattern distribution found in articles written by humans.
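
For the curious, the post-training step described above is typically built on pairwise preference data. Here's a minimal sketch of the Bradley-Terry-style loss commonly used to train reward models; all strings and reward scores below are invented for illustration:

    import math

    def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
        # Bradley-Terry pairwise loss: minimized when the reward model
        # scores the labeler-preferred response higher than the rejected one.
        margin = reward_chosen - reward_rejected
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    # Hypothetical labeled pair: if labelers systematically prefer
    # flattering openers...
    chosen = "You're absolutely right! Here's why that works..."
    rejected = "Actually, that's not quite correct. Consider..."

    # ...then low loss is achieved by scoring flattery higher, and policy
    # optimization against that reward amplifies the pattern in the model.
    print(preference_loss(reward_chosen=2.0, reward_rejected=-1.0))  # ~0.05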


It's too well structured and the message is too clear. HN (and the whole internet) is allergic to proper writing. We praise human sloppiness now.

No, I'm not being sarcastic. People have given up the em-dash, which is a standard punctuation mark used in proper writing. And it's all downhill from there.


> It's too well structured and the message is too clear. HN (and the whole internet) is allergic to proper writing. We praise human sloppiness now.

Yes. And it's only a matter of time before the model companies start trying to train in that "human sloppiness." After all, a lot of their customers want machines that can pass for humans.

> No, I'm not being sarcastic. People have given up the em-dash, which is a standard punctuation mark used in proper writing. And it's all downhill from there.

I wouldn't be surprised if the internet language of people devolves into a weird constantly-changing mish-mash of slang and linguistic fads. Basically an arms race where people constantly innovate in order to stay distinct from the latest models.

But the end result of that would be probably fragmentation, isolation, and a kind of dark ages. Different communities would have different slang, and that slang would change so fast that old text would quickly become hard to understand.


Strongly disagree. The post is really poorly structured and circles the drain a few times getting to the thesis.

The issues of style are annoying, but I find it much worse to wade through these 3000 word posts which are far longer than they need to be just because they're so damn cheap to compose.


> The Core Problem

> What You Should Do Right Now

> Bonus: Scan with TruffleHog.

> TruffleHog will verify whether discovered keys are live and have Gemini access, so you'll know exactly which keys are exposed and active, not just which ones match a regular expression.

I don't know exactly, but I'm sure. The cadence, the clarity, the bolding, the italics: it's all just crisp, clean, structured, and actionable in a way that a meandering human would not distill it down to.
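
As a side note on the quoted remediation advice itself, the "verify, don't just regex-match" idea is easy to sketch. A minimal Python sketch, assuming the well-known AIza key format and the public Generative Language models endpoint (both my assumptions, not details from the post, and not a TruffleHog replacement):

    import re
    import urllib.request

    # Well-known pattern for Google API keys: "AIza" plus 35 more characters.
    GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

    def gemini_enabled(key: str) -> bool:
        # A key with Gemini access can list models on the public endpoint;
        # 4xx responses raise HTTPError and are treated as "no access".
        url = f"https://generativelanguage.googleapis.com/v1beta/models?key={key}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except Exception:
            return False

    page_source = open("index.html").read()  # hypothetical scraped page
    for key in set(GOOGLE_KEY_RE.findall(page_source)):
        status = "LIVE for Gemini" if gemini_enabled(key) else "no Gemini access"
        print(key[:8] + "...", status)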


Yup, it was actually an interesting article but there are a few telltale parts that sound like every AI spam post on /r/webdev and similar. "No warning. No confirmation dialog. No email notification." is another. The three negatives repeated is present in so many AI generated promotional posts.

I don't even have a problem with the content itself, I think frankly the smell is that it's too good. It's just fascinating in the sense that it's one LLM attacking another LLM.

I've reached the point where if any blog post has a subheading with some variant of "The Problem", I assume it's been edited with an LLM, because it co-locates with other indicators so strongly.

It's far longer than it needs to be because the writing process was too cheap.

It's too structured and consistent. Imo. Has that AI smell to it, but I guess humans will eventually also start writing more like the AIs they learn from.

AI was trained on human writing.

> AI was trained on human writing.

AI output is not varied like real human writing. This is a very distinctive narrowing of style.


And now humans are trained on AI writing.

Like what happens to YouTube videos that go through the compression algorithm 20 times.


> guess humans will eventually also start writing more like the AIs they learn from.

With the AI feedback loop being so fast and tight for some tasks, the focus moves to delivery rather than learning. There is no incentive, space, or time for learning.


For me personally, both at work and in my free time, I spend _more_ time on writing things _that matter_ since I’ve freed up time by using LLMs for boilerplate tasks.

My motto is - If it wasn’t worth writing, it won’t be worth reading.

A good example of writing where I’d recommend using LLMs is product documentation. You pass the diff, the description of the task, and the context (existing documentation) with a prompt like ”Update the documentation…”.

Documentation is important, but it’s not prose. Writing a comment on Hacker News, however, is.
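
For concreteness, here's a minimal sketch of that documentation workflow. The git invocation, paths, and prompt wording are illustrative assumptions, and the actual model call is left to whichever LLM API you use:

    import subprocess

    def build_doc_update_prompt(task_description: str, docs_path: str) -> str:
        # Grab the most recent change as a diff (assumes a git checkout).
        diff = subprocess.run(
            ["git", "diff", "HEAD~1"], capture_output=True, text=True
        ).stdout
        with open(docs_path) as f:
            existing_docs = f.read()
        # Assemble diff + task + current docs into a single prompt.
        return (
            "Update the documentation to reflect this change.\n\n"
            f"Task: {task_description}\n\n"
            f"Diff:\n{diff}\n\n"
            f"Current documentation:\n{existing_docs}"
        )

    prompt = build_doc_update_prompt("Add retry logic to the HTTP client", "docs/http.md")
    # Send `prompt` to the model of your choice, then review by hand.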


Won't be well received here, but this is the truth.

This is the first time I've seen people accuse AI text of being "too structured and consistent" compared to human text. Usually it's about specific patterns or tons of repetition or outright mistakes.

One example of being "too structured" is that LLMs love an explicit introduction and conclusion even when one isn't really warranted. It's always telling you what it's going to say, and then what it just said.

Patterns = consistent?

Patterns like heavy use of certain words or dashes or bullet points don't change how consistent the overall post is.

The fact that according to this reply section most of HN can't tell means that predictably, all hope is lost and there's no point in writing anything by hand any more if you're in it for money/engagement.

While writing this I suddenly realized that marketers and writers probably do a better job at recognizing it than developers and engineers, so maybe all hope isn't lost.

For those who want to know the tells: overall cadence and the frequency of patterns - especially the infrequency of patterns - are the biggest ones. And that means we can't actually give you the best tells, because they're more about what is absent than what is present. What's absent is any sentence pattern that falls completely outside the LLM go-tos. Anything human-written has a good mix of both; LLM-written text just entirely lacks the off-pattern sentences. Humans do use the LLM-preferred patterns, but not for every single sentence. But anyway, here we go.

> Transparently, the initial triage was frustrating; the report was dismissed as "Intended Behavior”. But after providing concrete evidence from Google's own infrastructure, the GCP VDP team took the issue seriously.

^ Fun fact - The ";" would've originally been an em-dash but was either rewritten or a rule was included for this.

> Then Gemini arrived.

^ Dramatic short sentences: a pattern with orders-of-magnitude higher LLM frequency than human frequency, but one that hasn't reached the public consciousness yet a la "not just X but Y".

> No warning. No confirmation dialog. No email notification.

^ Another such pattern. Not just because it's three of them, but also because of the content and repetition. Humans rarely write like that because it again sounds overly dramatic. It's something you see in fiction rather than a technical writeup. In a thriller.

> Retroactive Privilege Expansion. You created a Maps key three years ago and embedded it in your website's source code, exactly as Google instructed. Last month, a developer on your team enabled the Gemini API for an internal prototype. Your public Maps key is now a Gemini credential. Anyone who scrapes it can access your uploaded files, cached content, and rack up your AI bill. Nobody told you.

This style of scenario writing is another one.

> Nobody told you.

Absolute drama queen.

>The UI shows a warning about "unauthorized use," but the architectural default is wide open.

Again.

> The attacker never touches your infrastructure. They just scrape a key from a public webpage.

Again.

> These aren't just hobbyist side projects. The victims included major financial institutions, security companies, global recruiting firms, and, notably, Google itself.

..

> A key that was deployed years ago for a completely benign purpose had silently gained full access to a sensitive API without any developer intervention.

Surprised it hasn't gained consciousness by now. Maybe that's a future plot point.

Here's a great example to train your skills on, because it's rare in that the ratio of "human : straight from LLM" shifts gradually as the article goes on: https://www.wallstreetraider.com/story.html

It starts with heavy human editing (or outright human writing), with less and less towards the end.

The author confirmed this when it was pointed out, FWIW [0].

[0] https://news.ycombinator.com/item?id=47013150


They don't. Many of these claims are due to illiteracy.

Someone is complaining that

> it's all just crisp, clean, structured, and actionable in a way that a meandering human would not distill it down to.

but this is a security report ... people intentionally write such things carefully and crisply with multiple edits and reviews.


They may have used ChatGPT or similar to help with the prose but the technical content (as discussed elsewhere on this page) is good, so does it really matter if they did?

The problem with AI slop (to me) is more that the technical content is not good or is entirely the product of the LLM. At that point, there's no point in me reading it, I can just prompt the question if I'm interested.

This is original research which wasn't public before, so the value is still there and I didn't think whichever combination of a human and LLM that generated it did a bad job.


I don't wanna be rude, but when someone spends months researching an issue (which systems work and which don't), you should probably give some level of grace and understand how they came to those numbers rather than spit out your first mindless critique.

Whenever I see an article, and the top comment is StudMan69 saying "Uh, no, the article's conclusions are all wrong!", I think to myself: "Gosh! If only the article's author had consulted StudMan69 before writing the article, he could have avoided making such a grave mistake!"

When the article is by StudMan420 I don't feel that way.

I agree.

This:

> I suspect that removing half of the bus stops in a city will piss people off and cause even less ridership.

is thrown out but how do we know it's true? That commenter throws it out as their opinion but my opinion is the opposite -- the stated preference will be that people think it's bad but the revealed preference will show even more ridership as travel times improve.


I suspect the evidence here would fall mostly on the side of "it increases ridership", though it's probably hard to study, as it's rarely done in isolation, but more commonly as part of route redesign.

I ride the bus and I can tell you right now that I would be pissed if this guy took away my bus stop. That's my critique. I think it's perfectly valid.

Only because you know your loss but cannot imagine your gains in time.

So 1/Nth of the ridership is gonna have their stop deleted at a sum total of X man years. But it's all gonna be worth it based on a projected possible upside that may not materialize dependent upon many factors?

This is even worse than the usual sleight of hand wherein one takes a widely diffuse, hard-to-quantify cost, rounds it to zero, and then dishonestly acts as though that justifies implementing a pet policy with some small upside. In this case the downside is known and the upside is less defined.

I'm open to the idea that we could improve the system by deleting stops, but in light of a quantifiable downside I don't see a convincing argument without having some quantification on what the upside looks like.


The downside is you have a longer walk. The upside is you get there faster - unless your trip is so short you should walk the whole thing. People value their time, and so the majority are better off.

The gains just mean that I sit on the bus while twice as many people are trying to board at every stop. The bus is stopped for twice as long.

> The bus is stopped for twice as long.

I'd like to see your math, as it isn't just the loading of passengers that takes time. It would seem that slowing down, completely stopping, lowering the bus, opening the doors, and then closing the doors takes up at least some of the time at each bus stop.


I've watched 30 kids get off at their school in the morning. It takes 15 seconds. By your logic, 30 stops adds 15 seconds to a bus's schedule, which is pants-on-head crazy.

Emptying a school bus completely is a lot faster than a city bus stop, where people are simultaneously trying to get off the bus while new people are trying to get on, jockeying for position and for a seat before the bus can start moving again.

So this used to happen on Dublin Bus, but a while back they solved it with an astonishing innovation... a second door! You get on at the front and off at the back. Given that this has been common elsewhere forever, it's unclear why it took them so long, but...

(Bafflingly, they went through a transition period where ~all of the buses had two doors, but the driver rarely opened the back door. It wasn't really until covid that using the back door became standard. Improved things greatly.)

> and jockey for position and for a seat before the bus can start moving again

Do urban buses where you are require people to be seated? Didn't realise that was a thing anywhere. Any (urban, non-intercity) bus I've ever been on takes off as soon as the last person gets in.


A second door is good. Make it even better by letting people get on and off through either door, with those getting off going first. Many systems work this way and it works great. Trains often have even more doors, but for a bus that often isn't possible.

The experience I shared was on a city bus.

My point is that you're totally disregarding everything a bus does to stop apart from waiting for passengers to board and de-board. At the very least it has to slow down, then accelerate. Half the time it has to swing the ramp out, which takes forever. Maybe someone has to load or unload a bike. Then it has to re-merge with traffic, and maybe every 10th car will let it in, so that can take a long time too. I don't even know if waiting for passengers is _half_ the time spent, let alone all of it.


I've never been on a city bus where the driver waits for people to be seated. Hell, when I lived in Vancouver, they would start moving before everyone had even paid their fare, basically as soon as the door was closed.

And now most (all?) busses have a fare tap at the back door, so you can board anywhere. Vancouver transit is absolutely top tier, at least for NA.

That would be true if busses didn't have to accelerate, decelerate, open doors, kneel and go through the many parts of stopping that aren't strictly people getting on or off.

The counterpoint is any bus route that has an express option that runs in parallel. Every time I have taken the express route, the bus can be full to the gills, but is always faster than the non-express bus.


That's simply not how it works, and quite obviously so. The stop time is absolutely not linear in the number of people who board the bus. Just think about all the time it takes to slow down, possibly make the whole bus kneel, and then sit up again. By your argument, there should be infinitely many bus stops, each allowing only a single person to board. Like, what? Surely we can think more critically than this...
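
To make the non-linearity concrete, here's a toy model (all numbers invented) where total stop time is a fixed per-stop overhead plus a per-passenger boarding cost:

    # Toy model: per-stop overhead (decelerate, kneel, open/close doors,
    # re-merge into traffic) is fixed, while boarding time scales with riders.
    OVERHEAD_PER_STOP = 30.0   # seconds of fixed cost per stop (invented)
    SECONDS_PER_RIDER = 2.5    # boarding/alighting time per passenger (invented)

    def route_stop_time(num_stops: int, total_riders: int) -> float:
        return num_stops * OVERHEAD_PER_STOP + total_riders * SECONDS_PER_RIDER

    riders = 120
    print(route_stop_time(40, riders))  # 1200 s overhead + 300 s boarding = 1500 s
    print(route_stop_time(20, riders))  # 600 s overhead + 300 s boarding = 900 s

    # Halving stops halves only the fixed overhead: each remaining stop serves
    # twice the riders, yet total time spent stopped still drops by 40%.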

So your counterargument is that we should actually only have two bus stops: one at the start of the route, and one at the end?

Surely we can think more critically than this...


No, we said fewer stops, not zero. You cannot take this to extremes and prove anything. Stops are a compromise, and most of the US has too many.

You would be pissed that you have to walk for an extra 2 minutes? I wouldn't, but sure. Would you also be pissed about overall bus travel time decreasing by a generous amount?

How far do you walk to your bus stop? How far would you have to walk to the next-closest bus stop?

Would it outweigh you having to stop half as often?

All that means is longer lines and congestion of people waiting to board. So the bus is stopped for longer. This seems like a net nothing to me.

Sections of lines that already have meaningful congestion at adjacent stops wouldn't be a good target for balancing. WMATA in D.C. recently eliminated about 5% of bus stops as part of their overhauled bus network, this is how they described their strategy[1]: "We thought carefully about each stop, looking at things like how many people use it, how far away it is from the next stops, and whether it's safe to walk there. We also listened to feedback from thousands of bus riders."

Additionally, many stops with a lot of people loading and unloading are hubs which would never be balanced away, and often are designated timing points where the bus will wait to get back on schedule, so loading/unloading time is often irrelevant because predictability is being prioritized over speed. Improving speed and consistency with techniques like removing unnecessary stops increases predictability and allows for tightening up timetables and minimizing average hold times.

[1] https://www.wmata.com/initiatives/plans/Better-Bus/frequentl...


Door-open time is actually possible to optimize and speed up; with modern tap-to-pay systems you can have all-door boarding, where even at the busiest stops dwells are measured in seconds.

The real killer for bus travel times is the time lost getting back up to speed, and the delay from finding a break in traffic when pulling out of a stop.


> longer lines and congestion of people waiting to board

True I've seen that first hand.


What if they removed only 33% of the stops? So per 3 stops, one is removed and the remaining ones are rearranged. It might even happen that the new bus stop is closer to your house. I agree that for the average person the distance to the stop increases, though.

It's a statement of religious belief, so other opinions are no less relevant than some "authority".

As a religious belief, it would be inappropriate for me to report stats from my local city's bus service. First of all, they didn't arrive at a religious opinion logically and rationally, so spouting numbers and facts at them will not make them change their mind. Secondly, my local city has multiple simultaneous changes in play, so it's almost impossible to estimate how its experiments with stop removal have affected ridership. The article falsely claims the only variable in the system is stop spacing, whereas bus service is in extreme turmoil in most communities.

Pre-covid vs post-covid is wildly different: there has been massive inflation in operating expenses; there's a long-term decline in passenger-miles in my area that predates covid and seems to be accelerating post-covid; and fares have increased by a factor of a little over 4x since 1990 while incomes have roughly stagnated. The article claims the opex of stops is "high," but our city invested $0 (this is a low-crime suburb, LOL). We got rid of 1/4 of our routes (and drivers) and increased the standard of stop spacing from never more than 950 feet to an average of about 1,100 feet now. The elderly and infirm were very mad and very loud about that, and they are the most reliable voters out there, but halving the fare quieted them down. We lose so much money on the bus service that giving it away for free wouldn't impact the budget very much.

Currently our opex per passenger mile is about $4.50. Fare for adults is $2. We lose about $7 per ride. The loss per rider would pay for two extra people to take an Uber on the same route, so there are continual demands to scrap the entire system to save money. Empty buses driving around are causing more, not less, road congestion, and more, not less, environmental damage. Our "Unlinked Passenger Trips per Vehicle Revenue Mile" figure is about 0.6, which boils down to: on average, every mile traveled by a bus driver results in 0.6 passengers stepping aboard. Our routes are about 4 miles long and run about once an hour, so on average a driver picks up about three passengers per 4-mile trip. Our drivers are usually alone in the bus. Another way of looking at it: on average we pay our bus drivers $23/hr, so an hourly route costs $23 in labor, and they pick up less than $6 in fares during each work hour. The ratios are better during rush hour... but worse outside of rush hour.

(Edited: I don't understand some of the numbers in the report. If it costs $23 to pay the driver to run a route that picks up three people, the fares can't be more than $6, so even if diesel and maintenance were free we'd lose $17 per hour per route. So why does the annual report claim opex per passenger mile traveled is only $4.50? After federal subsidies or similar?)
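
One possible reconciliation, as a sketch: "opex per passenger mile" divides by passenger-miles, not rides. The 2-mile average trip length below is an assumption of mine, not a number from the report:

    # Sketch of how the two figures could both be right. "Opex per passenger
    # MILE" uses a different denominator than "loss per ride".
    labor_per_hour = 23.0      # driver wage ($/hr), from the comment above
    boardings_per_hour = 2.4   # 0.6 UPT per revenue mile * 4-mile hourly route
    avg_trip_miles = 2.0       # ASSUMED average distance each boarder rides

    passenger_miles_per_hour = boardings_per_hour * avg_trip_miles  # 4.8
    print(labor_per_hour / passenger_miles_per_hour)  # ~4.79 $/passenger-mile

    # So ~$4.50 per passenger-mile is roughly consistent with $23/hr in labor
    # once you divide by miles ridden rather than boardings; the ~$7 loss per
    # ride is a per-boarding figure, so the two numbers don't contradict.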

In the long run, an unusable bus service is simply too expensive of a luxury to fund and we'll end up eliminating it. I don't think changing distance between stops matters if the stops, and the bus, are empty, other than it makes sick and old people very angry. If almost no one uses it, it doesn't cost any extra to stop quite literally on every street corner or even stop at every driveway, so increasing stop distance merely makes people suffer needlessly, which seems unusually evil.

