Hacker News | csnover’s comments

> real scientists’

This is a classic “no true Scotsman” fallacy.

Some people working in genetics allow their personal politics to draw conclusions about racial superiority. Some people working in medical science allow their personal politics to draw conclusions about the safety and efficacy of vaccines and other medicines, or about the health effects of certain diets or body weights. Some people working in geology allow their personal politics to draw conclusions about the age or shape of the Earth, or about the safety of extractive industries. Some people working in biology allow their personal politics to draw conclusions about the origin of life. Some people working in computer science allow their personal politics to draw conclusions about the likelihood of AGI and the emergence of machine sentience.

Does this mean genetics, medicine, geology, biology, and computer science are “fake” sciences? Of course not. The fact that some people in a field are engaged in scientific malpractice doesn’t invalidate the premise that some subject is worth studying in a scientific way. There are many in the social sciences who publish research that runs counter to their own personal politics because they are, in fact, doing science.

Setting that aside, what even is the relevance of a distinction between “personal politics”, and, say, someone who is willing to accept money in exchange for publishing favourable research in any direction? Or someone who engages in fraud in order to feed their ego? Or people who spend 30 years down the amyloid plaque rabbit hole due to fraud and what appear from the outside to be very unhealthy group dynamics (which might not be so unhealthy if they took some cues from the social sciences)?[0]

But let’s set all that aside for a moment.

If we were to do as you propose and “denounce and defund” whatever you define as the “social sciences”, what method should be used instead to guide our lawmaking and personal decision-making about important questions that fall under that umbrella? Majority rule? Might makes right? Whatever fable we learn as children must be true and remain unquestioned?

What you are proposing is to take a system that at least attempts to be objective some of the time, and say it should be destroyed in favour of… what, exactly?

I cannot object to the premise that there is a lot of junk science in the social sciences. I wish it were better. It is deeply ironic that you are here using research from a social science field as proof for your claim that social sciences should be denounced and defunded.

Getting back to your original claim. Perhaps a big reason why something like physics may seem as though “personal politics don’t determine the outcome” is because most of it is sufficiently abstract that, today, there is rarely some direct conflict with any deeply ingrained cultural belief. Social sciences, on the other hand, usually point the spotlight directly on things people hold as sacrosanct. This is a double-edged sword, since it means researchers are also more likely to put a thumb on the scale—which is exactly what this research suggests. But there is no fundamental error in the idea that the scientific method can and should be used to look at humans and human systems, so a call to “denounce and defund” is reactionary nonsense.

[0] https://www.statnews.com/2025/02/11/amyloid-hypothesis-alzhe...


So many of the questions we all really want answers to are in the social sciences area. While some of us want to see interesting work done in physics, humanity as a whole craves answers to the questions behind all those junk studies that at least attempted to apply rigorous methods. Immigration is used as an example in this article. If studying a topic like that is off the table, what are we even left with?


You aren’t the only one who remembers. But in that time it was a self-selecting process. The problem with “the algorithm”, as I see it, is not that it increases the baseline toxicity of your average internet fuckwad (though I do think the algorithm, by seeking to increase engagement, also normalises antisocial behaviour more than a regular internet forum by rewarding it with more exposure, and in a gamified way that causes others to model that antisocial behaviour). Instead, it seems to me that it does two uniquely harmful things.

First, it automatically funnels people into information silos which are increasingly deep and narrow. On the old internet, one could silo themselves only to a limited extent; it would still be necessary to regularly interact with more mainstream people and ideas. Now, the algorithm “helpfully” filters out anything it decides a person would not be interested in—like information which might challenge their world view in any meaningful way. In the past, it was necessary to engage with at least some outside influences, which helped to mediate people’s most extreme beliefs. Today, the algorithm successfully proxies those interactions through alternative sources which do the work of repackaging them in a way that is guaranteed to reinforce, rather than challenge, a person’s unrealistic world view.

Many of these information silos are also built at least in part from disinformation, and many people caught in them would have never been exposed to that disinformation in the absence of the algorithm promoting it to them. In the days of Usenet, a person would have to get a recommendation from another human participant, or they would have to actively seek something out, to be exposed to it. Those natural guardrails are gone. Now, an algorithm programmed to maximise engagement is in charge of deciding what people see every day, and it’s different for every person.

Second, the algorithm pushes content without appropriate shared cultural context into the faces of many people who will then misunderstand it. We each exist in separate social contexts with in-jokes, shorthands for communication, etc., but the algorithm doesn’t care about any of that; it only cares about engagement. So you end up with today’s “internet winner” who made some dumb joke that only their friend group would really understand, and it blows up because to an outsider it looks awful. The algorithm amplifies this to the feeds of more people who don’t have an appropriate context, using the engagement metric to prioritise it over other more salient content. Now half the world is expressing outrage over a misunderstanding—one which would probably never have happened if not for the algorithm boosting the message.

Because there is no Planet B, it is impossible to say whether things would be where they are today if everything were the same except without the algorithmic feed. (And, of course, nothing happens in a vacuum; if our society were already working well for most people, there would not be so much toxicity for the algorithm to find and exploit.) Perhaps the current state of the world was an inevitability once every unhinged person could find 10,000 of their closest friends who also believe that pi is exactly 3, and the algorithm only accelerated this process. But the available body of research leads me to conclude, like the OP, that the algorithm is uniquely bad. I would go so far as to suggest it may be a Great Filter level threat due to the way it enables widespread reality-splitting in a geographically dispersed way. (And if not the recommendation algorithm on its own, certainly the one that is combined with an LLM.)


> When they don't have outrage media they form gossip networks so they can tell each other embellished stories about mundane matters to be outraged and scandalized about.

But again in this situation the goal is not to be angry.

This sort of behaviour emerges as a consequence of unhealthy group dynamics (and to a lesser extent, plain boredom). By gossiping, a person expresses understanding of, and reinforces, their in-group’s values. This maintains their position in the in-group. By embellishing, the person attempts to actually increase their status within the group by being the holder of some “secret truth” which they feel makes them important, and therefore more essential, and therefore more secure in their position. The goal is not anger. The goal is security.

The emotion of anger is a high-intensity fear. So what you are perceiving as “seeking out a reason to be angry” is more a hypervigilant scanning for threats. Those threats may be to the dominance of the person’s in-group among wider society (Prohibition is a well-studied historical example), or the threats may be to the individual’s standing within the in-group.

In the latter case, the threat is frequently some forbidden internal desire, and so the would-be transgressor externalises that desire onto some out-group and then attacks them as a proxy for their own self-denial. But most often it is simply the threat of being wrong, and the subsequent perceived loss of safety, that leads people to feel angry, and then to double down. And in the world we live in today, that doubling down is more often than not rewarded with upvotes and algorithmic amplification.


I disagree. In these gossip circles they brush off anything that doesn't make them upset, eager to get to the outrageous stuff. They really do seek to be upset. It's a pattern of behavior which old people in particular commonly fall into, even in the absence of commercialized media dynamics.


> In these gossip circles they brush off anything that doesn't make them upset

Things that they have no fear about, and so do not register as warranting brain time.

> eager to get to the outrageous stuff.

The things which are creating a feeling of fear.

It’s not necessary for the source of a fear to exist in the present moment, nor for it to even be a thing that is real. For as long as humans have communicated, we have told tales about things that go bump in the dark. Tales of people who, through their apparent ignorance of the rules of the group, caused the wrath of some spirits who then punished the group.

It needn’t matter whether a person’s actions actually caused a problem, or whether they caused the spirits to be upset, or indeed whether the spirits ever existed at all. What matters is that there is a fear, and there is a story about that fear, and the story reinforces some shared group value.

> It's a pattern of behavior which old people in particular commonly fall into,

Here is the fundamental fear of many people: the fear of obsolescence, irrelevance, abandonment, and loss of control. We must adapt to change, but also often have either an inability or unwillingness to do so. And so the story becomes that it is everyone else who is wrong. Sometimes there is wisdom in the story that should not be dismissed. But most often it is just an expression of fear (and, again, sometimes boredom).

What makes this hypothesis seem so unbelievable? Why does it need to be people seeking anger? What would need to be true for you to change your opinion? This discussion thread is old, so no need to spend your energy on answering if you don’t feel strongly about it. Just some parting questions to mull over in the bath, perhaps.

Thank you for raising this idea originally, and for engaging with me on it.


The opposite question - why so insistent that people wouldn’t seek it out, when behavior pretty strongly shows it?

Why are you so insistent that people don’t do what they clearly seem to do?

Why is that hypothesis so unbelievable?

Is it the apparent lack of (actual) agency for many people? Or the concerning worry that we all could be steering ourselves to our own dooms, while convincing ourselves we aren’t?


> Why are you so insistent that people don’t do what they clearly seem to do?

I’m not rejecting the idea that people fixate on stimuli that produce anger. The question is why they do that, and the answer is unlikely to be “people just want to be angry”.

> Why is that hypothesis so unbelievable?

Because it runs counter to the best available literature I am aware of and is a conclusion based on a superficial observation which has no underlying theoretical basis, whereas the hypothesis I present is grounded in some amount of actual science and evidence. Even the superficial Wikipedia article on anger emphasises the role of threat response here. Mine isn’t, as far as I can tell, some fringe position; it is very much in line with the research. It is also in line with my personal experience. “People just want to be angry” is not.

It is important to understand that the things people try to avoid through gossip, exaggeration, and expressions of anger are not all mortal threats. They can also be very mundane things like not wanting to eat something that they just think tastes bad. So make sure not to take the word “threat” too narrowly when considering this hypothesis.

I don’t have any skin in the game here other than an interest in the truth of the matter and a willingness to engage since I find this sort of thing both interesting and sociologically very important. If you or anyone have some literature to shove in my face that offers some compelling data in support of the “people love feeling angry” hypothesis, then sure, I would accept that and integrate that into my understanding of human behaviour.


Your response is a non-sequitur that does not answer the question you yourself posed, and you are responding to yourself with a chatbot. Given that it is a non-sequitur, presumably it is also the case that no work was done to verify whether the output of the LLM was hallucinated or not, so it is probably also wrong in some way. LLMs are token predictors, not fact databases; the idea that it would be reproducing a “historical exploit” is nonsensical. Do you believe what it says because it says so in a code comment? Please remember what LLMs are actually doing and set your expectations accordingly.

More generally, people don’t participate in communities to have conversations with someone else’s chatbot, and especially not to have to vicariously read someone else’s own conversation with their own chatbot.


It used to be the case that a web developer could be reasonably expected to actually learn and know pretty much all of CSS, but it has reached the point where it is no longer possible for a single person to “learn” CSS in the way one could in the 2000s or 2010s.

Just as one example, there are now, by my count, at least eight[0] layout models (column, anchor, positioned, flow, float, table, flex, and grid), plus several things that sit in some ambiguous middle place (the inline versions of block types, sticky positioning, masonry grid layout, subgrid, `@container`, paged media), each of which is different and each of which interacts with the others in various confounding ways. Flow collapses margins; table elements can’t have margins at all, but tables can have `border-spacing`, which is like `gap`, but different. Flex has a different default `min-inline-size` than flow, and `flex-basis` overrides `inline-size` if it isn’t `auto`, which is its initial value, until you use the recommended `flex` shorthand, at which point it becomes `0%`, unless you redefine it explicitly. Table layout[1] uses a special shrink-wrapping algorithm, which the CSS authors noted back in CSS 2 might make sense to add a way to work more like a regular block-level element, and then that just never happened. Grid is a mix of implicit and explicit placements with competing ways to do the same things (named areas, number ranges, templates on the parent, properties on the child) and a bunch of special sizing algorithm keywords like `minmax` and `fit-content` which only work in grid, some of which also work in flex, most of which don’t work in flow, but some of them do now, but they didn’t before.
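To make the flex sizing quirks above concrete, here is a minimal sketch; the class names are invented for illustration:

```css
/* Sketch of the flex-basis/inline-size interaction described above.
   Class names are hypothetical. */
.row { display: flex; }

.item {
  inline-size: 200px;   /* respected, because flex-basis starts as `auto`… */
}

.item-b {
  flex: 1;              /* …but the `flex` shorthand resets flex-basis to 0%,
                           so inline-size no longer controls the main axis */
  min-inline-size: 0;   /* flex items default to min-inline-size: auto,
                           unlike flow; opt out so the item can shrink */
}
```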

You can select your elements with the old CSS 3 selectors, or `:where`, or `:is`, or `&`, or `:has` (but not if they’re nested), or `@scope`, or `@layer`. Definitely don’t try to put trailing commas on your selector lists, though, since that’s not syntactically valid in CSS, until it is, in some future revision.
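For instance, the same elements can now be selected several different ways, each with its own specificity behaviour (a sketch, not an exhaustive list):

```css
/* :is() adopts the specificity of its most specific argument;
   :where() always contributes zero specificity. */
article :is(h1, h2, h3) { margin-block-start: 1em; }
article :where(h1, h2, h3) { margin-block-start: 1em; } /* easily overridden */

/* :has() styles an ancestor based on its descendants: */
fieldset:has(input:invalid) { border-color: red; }

/* Native nesting with & */
.card {
  color: black;
  &:hover { color: grey; }
}
```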

To make sure your site works correctly with all scripts, all the directional keywords now have logical versions with `inline` and `block` keywords. Unless it’s a transform[2]. Or a gradient[3]. That’ll probably eventually be fixed; just keep checking the spec periodically until something that used to be false becomes true, and re-learn it. Which is how “learning” CSS works. There is never an end.

And this is just the tip of the iceberg. There are also all the CSS units, colours, the whole animation engine, forms, pseudo-classes, pseudo-elements, containment, paints, filter effects, environment variables (yes, those are a thing), maths functions, overflow, scroll snaps, backgrounds and borders, feature queries, font features, writing modes, the different-but-not-really CSS of SVG, the half-forgotten weirdo things like `border-image` and `clip-path`, or the half-dozen other major and minor CSS features which I am not even thinking of right now.

CSS doesn’t suck “because we don’t bother learning it”. CSS sucks because its core strength is its core weakness. It is infinitely flexible and extensible, and that means it has been flexed and extended to fulfil every design trend and address every edge case. Then it needs to support all of those things forever. Making CSS do what you want as a web developer has probably never been easier, but “learning” CSS has never been harder.

[0] Please, for my own sanity, resist the urge to pedantically nitpick in the responses about whether everything in my list is actually a “layout model”. I am aware that some of these things overlap more than others. This is just my list. You can make your own list. It’s fine.

[1] Tables also create their own anonymous layout block such that a child `<caption>` element is drawn outside the putative `<table>` in the actual layout. Framesets do a similar thing with `<legend>`. These are all things that are the result of having to retroactively shoehorn weirdo features into CSS in a backwards-compatible way, but that doesn’t make it any less insane to learn.

[2] https://github.com/w3c/csswg-drafts/issues/1544

[3] https://github.com/w3c/csswg-drafts/issues/1724


With the actual layout models, I see it more as an evolution thing. For someone starting on CSS today, you do not have to learn all eight now if you don't want to; just master grid. It was designed to be the last one, to rule them all.


As someone who uses Debian and very occasionally interacts with the BTS, what I can say is this:

As far as I know, it is impossible to use the BTS without getting spammed, because the only way to interact with it is via email, and every interaction with the BTS is published without redaction on the web. So, if you ever hope to receive updates, or want to monitor a bug, you are also going to get spam.

Again, because of the email-only design, one must memorise commands or reference a text file to take actions on bugs. This may be decent for power users but it’s a horrible UX for most people. I can only assume that there is some analogue to the `reportbug` command I don’t know of for maintainers that actually offers some amount of UI assistance. As a user, I have no idea how to close my own bugs, or even to know which bugs I’ve created, so the burden falls entirely on the package maintainers to do all the work of keeping the bug tracker tidy (something that developers famously love to do…).

The search/bug view also does not work particularly well in my experience. The way that bugs are organised is totally unintuitive if you don’t already understand how it works. Part of this is a more general issue for all distributions of “which package is actually responsible for this bug?”, but Debian BTS is uniquely bad in my experience. It shows a combination of status and priority states and uses confusing symbols like “(frowning face which HN does not allow)” and “=” and “i” where you have to look at the tooltip just to know what the fuck that means.


> As far as I know, it is impossible to use the BTS without getting spammed, because the only way to interact with it is via email, and every interaction with the BTS is published without redaction on the web. So, if you ever hope to receive updates, or want to monitor a bug, you are also going to get spam.

Do the emails from the BTS come from a consistent source? If so, it's not a good solution, but you could sign up with a unique alias that blackholes anything that isn't from the BTS.


The command is `bts` in devscripts. I wrote it in 2001.


The spam issue is probably one of the stronger arguments against email-centered design for bug trackers, code forges and the like. It's a bit crazy that in order to professionally participate in modern software development, you're inherently agreeing that every spammer with a bridge to sell you is going to be able to send you unsolicited spam.

There's a reason most code forges offer you a fake email that will also be considered as "your identity" for the forge these days.


> Also, allowing CSS inside SVG is not a great idea because the SVG renderer needs to include full CSS parser, and for example, will Inkscape work correctly when there is embedded CSS with base64 fonts? Not sure.

For better or worse, CSS parsing and WOFF support are both mandatory in SVG 2.[0][1] Time will tell whether this makes it a dead spec!

[0] https://www.w3.org/TR/SVG2/styling.html#StylingUsingCSS

[1] https://www.w3.org/TR/SVG2/text.html#FontsGlyphs


That's how OpenCL died. They made difficult-to-implement features mandatory.


> - cannot wrap text

This is possible, but only in the stupid way of using a `<foreignObject>` to embed HTML in your SVG (which obviously only works if your SVG renderer also supports at least a subset of HTML). SVG 2 fixes this by adding support for `inline-size`[0], so now UAs just need to… support that.
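A minimal sketch of the SVG 2 approach, which only wraps in UAs that have implemented `inline-size` on text:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 220 80">
  <!-- SVG 2: inline-size turns <text> into a wrapping line box,
       but only where the UA supports it -->
  <text x="10" y="20" style="inline-size: 200px">
    This sentence should wrap at 200px instead of running off the edge.
  </text>
</svg>
```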

> - cannot embed font glyphs - your SVG might be unreadable if the user doesn't have the font installed. You can convert letters to curves, but then you won't be able to select and edit text. It's such an obvious problem, yet nobody thought of it, how?

Somebody did think of it. SVG 1.1 added the `<font>` element[1]; SVG 2.0 replaced this with mandatory WOFF support.[2] A WOFF is both subsettable and embeddable using a data URI, and is supported by all the browser UAs already, so it’s obvious why this was changed, but embeddable SVG fonts have existed for a long time (I don’t know why/how they got memory holed).
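The SVG 2 route looks roughly like this; the base64 payload is elided and the font-family name is made up:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 60">
  <style>
    /* Subset the font, encode it as WOFF2, and inline it so the
       SVG stays self-contained. The payload here is elided. */
    @font-face {
      font-family: "Embedded";
      src: url("data:font/woff2;base64,...") format("woff2");
    }
    text { font-family: "Embedded", sans-serif; }
  </style>
  <text x="10" y="35">Selectable, editable text</text>
</svg>
```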

> - browsers do not publish, which version and features they support

It should be possible to use CSS `@supports` for most of this and hide/show parts of the SVG accordingly in most places.[3] The SVG spec itself includes its own mechanism for feature detection[4], but since it is for “capabilities within a user agent that go beyond the feature set defined in this specification”, it’s essentially worthless.

There are obvious unsolved problems with SVG text, but they are more subtle. For example, many things one might want to render with SVG (like graphs) make more sense with an origin at the bottom-left. This is trivial using a global transform `scaleY(-100%)`, except for text. There is no “baseline” transform origin, nor any CSS unit for the ascent or descent of the line box, nor any supported `vector-effect` keyword to make the transformation apply only to the position and not the rendering. So unless the text is all the same size, and/or you know the font metrics in advance and can hard-code the correct translations, it is impossible to do the trivial thing.
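A sketch of the problem: flipping the coordinate system flips the glyphs too, so every label needs a hand-positioned counter-flip (the coordinates below are hard-coded by hand, which is exactly the issue):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <!-- Flip so the origin is bottom-left and y grows upward -->
  <g transform="translate(0 100) scale(1 -1)">
    <rect x="10" y="0" width="20" height="40"/> <!-- a bar, drawn upward -->
    <!-- Text inherits the flip and renders mirrored; each label must
         undo it locally, re-introducing manual positioning: -->
    <text transform="translate(12 45) scale(1 -1)">40</text>
  </g>
</svg>
```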

There are other issues in a similar vein where scaling control is just ludicrously inadequate. Would you like to have a shape with a pattern fill that dynamically resizes itself to fill the SVG, but doesn’t distort the pattern, like how HTML elements and CSS `background` work? Good luck! (It’s possible, but much like the situation with text wrapping, requires egregious hacks.)

Some of the new `vector-effect` keywords in SVG 2 seem like they could address at least some of this, but those are “at risk” features which are not supported by UAs and may still be dropped from the final SVG 2 spec.

[0] https://www.w3.org/TR/SVG2/text.html#InlineSizeProperty

[1] https://www.w3.org/TR/SVG11/fonts.html

[2] https://www.w3.org/TR/SVG2/changes.html#fonts

[3] https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/A...

[4] https://www.w3.org/TR/SVG2/struct.html#ConditionalProcessing...


Interesting, I stumbled upon SVG fonts only as a format for webfonts in CSS.


As others have noted, this is not actually a Lua engine written in Rust. It is a wrapper over existing C/C++ implementations of Lua. There is, however, an actual Lua engine written in Rust. It is called piccolo.[0]

[0] https://github.com/kyren/piccolo


Is this one of those things that claims to be fast purely by virtue of being written in rust?


Considering one of the project goals at the top of the readme is "Don't be obnoxiously slow" obviously not - it doesn't claim to be fast at all.


The first line of text on the web page is literally "Blazingly-Fast Lua runtime".


You responded to a post about https://github.com/kyren/piccolo, not to the top level post about an entirely different project.

The word "blazingly" exists nowhere on piccolo's github page according to ctrl-f.


Last year I tried to extend Retro68 to support Palm OS and made quite a lot of progress, but in the end it ended up being too much work and I abandoned the attempt. I suppose now is as good a time as any to mention that it exists (in a very ugly state with lots of unsquashed commits) in case anyone wants to pick up the mantle.[0] At the least, it has an up-to-date and functioning copy of the Palm OS Emulator which, unlike cloudpilot, retains the debugger code so you can actually debug apps with it.[1] (I also reverse-engineered the dana HAL; this is as far as I know the only open-source version of POSE which supports that hardware.)

The thing that blocked me from being able to make any more progress was a bug in GCC that causes it to ICE when generating PC-relative code[2], and I absolutely could not understand GIMPLE nor the GCC internals quickly, nor could I commit any more energy to trying to learn them. (Generating PC-relative code is essential for Palm OS because, unlike Mac OS, code sections are in read-only memory and cannot have relocations, so this mode being broken means it can’t really work.)

It is fair to say that I have no idea how close to working things actually are in that fork since GCC/binutils are an absolute nightmare[3] and I am no compiler engineer. Retro68 has some scary-looking hacks to deal with exception handling which wouldn’t work as-is with Palm OS, at the least. (Retro68 is also itself quite a pile of hacks which were clearly made without a good understanding of how GCC works, and without any apparent care to making sure it is easy to rebase atop newer versions of GCC.)

If I were to restart the work again I would probably just have GCC do the bare minimum of emitting code with whatever relocations it can, and then just make Elf2Mac rewrite machine code. Elf2Mac is itself fairly clearly an attempt to avoid having to touch binutils as much as possible, since it redoes a lot of the work that libbfd normally does, which makes sense, because working on GCC/binutils is just awful.[4]

[0] https://github.com/csnover/Retro68/tree/palmos

[1] I tried to make the debugger work with the modern GDB remote protocol instead of the Palm-specific protocol which requires the ancient prc-tools version of GDB, but GDB is also broken <https://sourceware.org/bugzilla/show_bug.cgi?id=32120>, so that never worked very well. GDB’s remote protocol also does not support receiving symbols from the remote on-demand for whatever reason—it only allows receiving an ABI-compatible binary with DWARF symbols, which makes it next to impossible to get symbols out of the ROM, which uses the MacsBug format. Working with DWARF also sucks.

[2] I believe it was https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80786

[3] Just as a tiny example: never in my life have I ever considered that someone would solve the “tabs versus spaces” debate by making the rule “two spaces per indent, unless it is eight spaces, in which case use a tab”. What IDE even supports this??

[4] To be clear, I don’t mean to denigrate all the hard work that has been done over decades to create these tools. The GCC toolchain is a triumph and I am sure that my relative intelligence has something to do with why I struggle with it, compared to all the compiler people who happily work with it every day. Nevertheless, it is a forty-year-old codebase, and everyone seems quite content to continue to work more or less within constraints that made sense in the 1980s, and perhaps not so much in 2025.


> Last year I tried to extend Retro68 to support Palm OS

I did that 5 years ago and published it on reddit in /r/Palm:

https://old.reddit.com/r/Palm/comments/fu5870/announce_new_g...

and then

https://old.reddit.com/r/Palm/comments/p81m58/announce_new_g...


Yes, I saw that. It would have been really great if the source code had ever been released; instead I had to start from scratch…

My implementation does not require editing SDK headers and the goal was to support multiseg. If I had stopped at 32k single seg it would probably have been working. But I never got quite as far as being able to e.g. test libgcc, so who knows.


Click the second link. The source code is there. First comment. I only didn’t release it initially because I was really busy and sorting out a clean reproducible process to build it took too long. As soon as I was able to, I posted it.

And I have since made it not require any SDK changes.


Oh, how frustrating. I have no idea how I missed that since I feel like I spent quite a while looking for some later update that included the source. Well, thank you for making sure to release it! I did rewrite most of the Retro68 CMake code too, perhaps for similar reasons, so I can understand how that could have been a problem. At least the newer versions of GCC do not have race conditions in their Makefiles, unlike prc-tools-remix. :-)

The work I did was intended to eventually merge and live alongside the existing stuff in Retro68 instead of just blowing it away, with the hope that nothing like this would ever happen again to anyone else, but of course I failed to actually finish the work.


I never submit to OSS. It is the same as editing wikipedia -- every time I try, it is a political mess and nonsense galore. (My reasoning: If you are paying me for work, you are welcome to criticize, request amendments, etc. If you are not paying me for work, you thank me profusely for the free work I offered and take it...or don't. I am uninterested in your opinions in that case, or requests for changes unless they are bugs). Anyways, I never had goals of upstreaming anything. I was just trying to help others who wanted a working toolchain. My patches work well. People (not just me) have used them. There are also patches to PilRC I released that add some more bitmap compression modes and fix bugs with multi-depth fonts.


Is your POSE 64-bit fixed or still 32-bit? ISTR running into this problem trying to compile the original from source and have yet to find the round tuit.


It is 64-bit fixed (along with a bunch of other show-stopper bugs in the ancient FLTK code, and I got rid of the bizarre UI they used only for *nix and replaced it with the UI they used for Windows). There were a couple 64-bit safety issues in the prc compiler too which I also fixed.


Good to hear (I have my own 64-bit fixed pilrc but POSE was the missing piece). What do you mean by the Windows UI, though? Likely I'd compile this on my MacBook.


When opening POSE without a previous session, on Windows it would open a reasonable window with some buttons (New, Open, Download, Exit). On *nix, they instead decided to open a blank window that said “Right click on this window to show a menu of commands”. (And then, due to programming errors and bitrot, actually trying to use the context menu would access invalid memory and crash.) So I replaced that bad UI with the less bad one from Windows. :-)

