Hacker News | gdl's comments

That was my first assumption too, but Wireshark doesn't show anything going across the network as I type, and nothing that looks incriminating when I click "donate" with text in the password box. It looks like it's entirely client-side JavaScript as it claims to be. Kind of disappointing, actually.

edit: ...Unless it's clever enough to only be evil some fraction of the time. I didn't actually check through the code.


Theoretically, it could store the password in a cookie, and later retrieve it, along with (somehow) a gmail id.
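The store-now, retrieve-later idea is mechanically trivial. A minimal sketch of the cookie round-trip (pure helper functions here; in a browser you'd read and write `document.cookie` with these same strings — all names are illustrative, not from the site in question):

```javascript
// Serialize a name/value pair the way document.cookie expects it.
function serializeCookie(name, value) {
  return name + "=" + encodeURIComponent(value);
}

// Pull a named value back out of a cookie string like "a=1; b=2".
function parseCookie(cookieString, name) {
  for (const part of cookieString.split("; ")) {
    const eq = part.indexOf("=");
    if (part.slice(0, eq) === name) {
      return decodeURIComponent(part.slice(eq + 1));
    }
  }
  return null;
}
```

The point is that nothing needs to cross the wire at typing time; the value can sit in the cookie jar until some later request carries it out, which is why a quiet Wireshark capture during the test isn't conclusive.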


I disagree.

This is like the inevitable comparison between the US government stealing our freedoms and China (or some other dictatorship). The argument isn't that they are exactly the same thing - it is that one is edging closer to the other than we would like, and we would like to draw attention to the bad similarities in an attempt to fix them.

If we never compare the way things are to the worst ways we know, how can we know if we're falling down that slippery slope until we get to the end and it's too late? Maybe upon reflection we'll realize that we're already several steps down a bad path, even if we're still far from "evil" territory.


I've never looked into GreaseMonkey so I might be missing some context here, but "cruft" sounds like it would be difficult to draw a clear consensus on.

Ads are relatively cut-and-dried. Wikipedia, at least as an ideal, is a collection of objective facts with citations. But "removing cruft"? I'm not saying it can't be done, but I think it would be tough to get a clear consensus within a community on what exactly that entails. Wikipedia itself has flame wars on half of its discussion pages over any details that are less than totally objective, and many that are. It works because the differences being argued about are usually small points that most users wouldn't really notice either way. If you're trying to get consensus on a hundred "bike shed" issues on each page... well, good luck.

It's a very cool and worthwhile idea though. I'd like to see whatever might come of this after a few iterations.


If you're using the word "bad" to describe something, it's probably not good. But if it's going to be bad, then yes, a consistent style of badness is preferable to one that is bad and inconsistent. At least that way people can get used to the specific faults and ignore or work around them as needed.

What you can "get away with" depends on the application and the users. A simple app for internal use can be as ugly as you like, but for something that potential clients will see before choosing whether or not to give you money (even for something unrelated) it would probably be a wise investment to get it done right.


"Somebody should have stopped me before I voted on something I knew nothing about."

Yes: yourself. Is that really so hard?

I'd like to think that we can manage at least that much responsibility on our own without it being forced on us through technical restraints.


But people won't stop themselves. From a user's point of view: yes, you should stop yourself, but that takes effort to remember to read the article first. From pg's point of view (as this argument goes): no matter how many times you ask, people will still vote without reading the article, so if you don't want people to do so, the website needs to prevent them from doing so.

Your argument could be made for many features, like "please don't downvote comments before you have 500 karma", or "rather than protect against SQL injection, I'll ask the users not to do so".

Besides, this could be a way to detect accounts that are votebots -- why would they click on the article unless they know that you're tracking it?

A downside to this change would be that sometimes I'll read an article somewhere else first, and then see it here. I don't always re-open it before voting; I don't think this behavior is bad.


The problem is that the proposed solution is more of a mild nuisance than a solid block. If spambots or the human equivalent want to upvote worthless links, it's trivial to hit the link first. The same minor workaround would be required of legitimate users who, as you mention, may have seen the article elsewhere and already read it. And it might make impulsive people more likely to read the article before voting, but again, nothing is guaranteed (and is that really a serious problem anyway?)

Minimalism and simplicity work well for HN, and I don't think there'd be a clear enough benefit here to warrant the added complexity and occasional annoyance to legitimate users.


>Minimalism and simplicity work well for HN, and I don't think there'd be a clear enough benefit here to warrant the added complexity and occasional annoyance to legitimate users.

HN is less simple than you think: you can't vote or make polls until you have a certain level of karma; you may be hellbanned; you can't downvote responses to your comments; comment karma is displayed as max(-4, real karma), but the real value is still tracked below that floor; there's a delay -- not a constant one, but one that grows exponentially with nesting depth -- after a comment is posted before any replies can be made. HN is complicated; it just doesn't make that obvious to the user. In keeping with other design choices that have been made, pg might implement this by simply dropping votes made without loading the page; the user would never even know this happened. It wouldn't add any complexity for the user.
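The two hidden mechanics mentioned above are easy to sketch. HN's exact formulas aren't public, so the -4 floor and the exponential growth here are assumptions drawn from this comment, with an invented base delay:

```javascript
// Displayed karma is clamped at -4; the real value keeps falling
// below the floor and is still used internally.
function displayedKarma(realKarma) {
  return Math.max(-4, realKarma);
}

// Reply delay grows exponentially with nesting depth. The base of
// 10 seconds is a made-up placeholder, not HN's actual constant.
function replyDelaySeconds(nestingDepth, base = 10) {
  return base * Math.pow(2, nestingDepth);
}
```

A comment at -4 and one at -40 look identical, and a reply deep in a flame-war thread waits far longer than one at the top level -- neither fact is ever surfaced to the user.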

And yes, spambots or human spambots can click the link first: it is trivial. But -- assuming this is a problem; I don't know that -- if some don't know about it, their votes wouldn't count. Problems don't have to be solved in one step; lessening them is useful.


Good point. I was stuck imagining a clumsy "You must read the article before voting" page rather than pg's silent vote dropping trick. The former would hurt the perceived simplicity of the site, while the current uses of the latter work well enough that I usually forget it's even there.

It'd be interesting to see the actual stats on this - what percentage of story votes are made before clicking the link, and do some outliers (spam, or sensationalist titles) get a large enough fraction of pre-read upvotes to justify taking some sort of action? I've assumed that it's low enough to not make much difference, but I could be wildly mistaken.


You can protect against SQL injection, but you can't make anyone read the article. The best you could do is make them click the link first.

Just like you can't make anyone read a EULA before clicking "I agree". You can force them to scroll through it, but then you sometimes end up with the comical scroll-all-the-way-down button as well. (Can't remember where I've seen that, but it's more than once.)
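The scroll-all-the-way-down gate usually amounts to one comparison: keep "I agree" disabled until the license pane has been scrolled to its end. A minimal sketch as a pure predicate (in a browser the three numbers would come from the pane's `scrollTop`, `clientHeight`, and `scrollHeight`; the slack value is an assumption to absorb fractional-pixel rounding):

```javascript
// True once the visible window's bottom edge reaches the end of
// the scrollable content, within a small tolerance.
function hasScrolledToBottom(scrollTop, clientHeight, scrollHeight, slack = 2) {
  return scrollTop + clientHeight >= scrollHeight - slack;
}
```

Which, of course, proves only that the scrollbar moved -- not that anyone read a word.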


I agree. But I think the extra level of annoyance is enough of a "sin warning". If I really want to vote something up w/o reading, I'll sin.

But the annoyance of having 25 new tabs would dissuade me enough of the time to make the tweak worthwhile.

As for spammers -- you have to take a separate approach with them. What I'm targeting are not the malicious but the _careless_ rulebreakers.

(I didn't go into this level of detail in the article because I thought it would come up in comments.)


Ideally, you'd want users to read and understand the article, if it's understandable. However, as that is difficult to enforce, opening the page is the minimum you'd have to do to understand it. If users voting on articles without reading them is a problem, requiring users to at least click on the article would lessen it.
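The silent-drop variant discussed in this thread could be sketched as a small server-side filter: record which (user, story) links were clicked, and only tally votes whose link was visited first. Everything here is invented for illustration; nothing is known about how HN actually tracks clicks:

```javascript
// Sketch: track clicked links and silently discard unread votes.
function makeVoteFilter() {
  const clicked = new Set();
  return {
    recordClick(user, storyId) {
      clicked.add(user + ":" + storyId);
    },
    // The vote appears to succeed from the user's point of view
    // either way; it just isn't counted unless the link was visited.
    countVote(user, storyId) {
      return clicked.has(user + ":" + storyId);
    },
  };
}
```

Since nothing visible changes for the voter, this adds no perceived complexity -- which is exactly the design trade-off being argued for above.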


OK so just click through in a background tab and vote up.


We all make mistakes sometimes.

I believe that other people make the mistake too, and that the bias is in favor of exactly the stuff that HN & friends are trying to keep out of the top.


Interesting, but I don't think there's enough data on the Lisp side to interpret much. From memory, wasn't it a few dozen Lisp entries against well over a thousand for Java?

Raw statistical uncertainty aside, if we're trying to judge the languages themselves it would need to be done with programmers skilled with their chosen language. I'd guess that most of the Java entries were by people that knew Java reasonably well from working with it every day, but I wouldn't be surprised if many in the Lisp section were people that had little or no prior experience with Lisp and decided that this would be a fine opportunity to learn it.


There were only a handful of Lisp entrants while Java had more than any other language in the competition. I agree with you that we shouldn't read much into this. I also think we shouldn't be reading much into the posts about how it was Lisp that won the competition.


I agree with the possibility of increased control, and that it could be a very bad thing. I disagree with the notion that we should choose to not support Wikileaks based on that chance.

If the politicians are acting like children throwing tantrums, we need to curtly explain to them that this is not appropriate behavior. Giving them lollipops will appease them and shut them up for now, but will only encourage their bad habits and make things worse farther down the road when they've come to expect such treatment.


My mom thinks I'm Supreme Lord of all Technology, so I would be selling myself short if I got a job as a mere "senior developer", right?

Ignore the job titles and all that. Where will you be happiest? Where will you be challenged and be able to grow, and where will you stagnate and end up with an obsolete skillset in five years? (Hint: you won't be motivated to grow your skills as much if you're already resting at the "top")

I obviously don't know all the details, but if you're even asking this question I'm guessing that you want us all to tell you to break from this place and move on to better things. So I'll suggest you do exactly that.

Go.


That's what social bookmarking sites (like del.icio.us) are for. When I make the choice to come to a news site I'd rather see news, or at least current discussion relating to a "classic" link. If I instead see a bunch of old links without any active discussion attached, I'm quickly going to find another place to get my news.


Why wouldn't the old links have an active discussion? Perhaps not as active as the news, but still active.


New comments have to compete with old ones on the page. Once a page hits one million words it is pretty much done, unless there's a mechanism by which it gets edited, condensed, summarized, or updated.

One technique that has been tried is giving every visitor the power to edit, summarize, or update the entire page. That's the Wikipedia model. But, of course, you lose the history of the conversation [1], and the individual voices, and you're subject to the editorial whims of whoever happens by.

Another idea is to keep the content in the form of discrete comments but allow visitors to rearrange the comments. That's kind of how Stack Overflow works. The HN voting system also serves to rearrange comments on the page. These things are kind of indirect, though, and they do nothing to deal with the volume problem. Words take time to read; you have to cut down the supply somehow, and that inevitably requires some rewriting as well as cherry-picking.

Or you could just periodically archive the discussion and start it over with a fresh page, ending up with a series of discussions, each lasting only a day or two, but possibly related to or built on predecessor discussions that stretch back into history. This is pretty much how HN works.

---

[1] Yeah, there are edit logs. Most people don't read those for fun. Most of the events in an edit log are tedious and unenlightening, like reading raw Postscript source.


With news, or when old links are posted on news sites, the fact that it was just recently posted means that there will be a lot of people focused on it at that specific point in time, and in turn conversations can play out on the forum over the next several hours. There's a value to being able to have more-or-less realtime conversations like that going.

If the old links were just kind of hanging around the news page, without anything to suggest that they're specifically relevant to the moment, it would be hard for them to attract the kind of time-focused attention that you would get from a freshly-posted item. You could probably get by with a "featured classic link" that changes every day or so to keep people interested, but that would be more of a small feature or gimmick than a new type of site. Trying to push it much more than that would, I suspect, thin out the commenters too much, killing the ability to have back-and-forth conversations and turning it into more of a bookmark site.

There's a place for collecting good, old links, but news sites are for news. I think trying to force both ideas into one site would feel too awkward to work well.

(But by all means, prove me wrong. I thought Twitter was stupid, too.)


I like the concept. A lot of people ignore rebates entirely because they're so often not worth the time, frustration, and hassle. If a site like this could be set up to be a quick and easy process, at least on the seller's end, I could see a lot of people going for it. On the other end you've got easy money for stay-at-home moms or others with time on their hands.

I see it more as a logistical and maybe legal problem. Will companies honor rebates not submitted by the original buyer? If not, how easy would it be for them to find out that it's happening, and how big of a fuss might they make over the site? And as the parent brought up, what about fraud? Or not realizing some clause in a given rebate and not sending some vital piece of information with it?

It might all work out, or there might be some catch that renders it totally impractical, but I think it's worth looking into.

