Is your definition of bullish "believes the technology will be widely adopted across society and accrue significant wealth to its owners?" - if so, I think it's very clear how someone could be bullish on AI and not blockchain. You don't have to like AI to see it as an inexorable transformer (ha!) of society and wealth.
Is your definition of bullish "believes the technology is a major net good for society?" - if so, you're comparing two technologies with significant social aspirations that come from very different philosophical backgrounds. While both are techno-optimist, blockchain is a fundamentally libertarian technology, while generative AI comes from a more utilitarian, capital-focused background. People who value individual freedom above all else will get excited about blockchain and feel mixed-to-negative about AI, while people who want to elevate the overall capability of the human race to the exclusion of anything else will get excited by AI and see blockchain as a parlor trick.
I'll add to this by saying that globalization works as well as it does because the average person would suffer dramatically from a major war and the resulting breakdown of global supply chains. People who are wealthy enough to move anywhere in the world (including to a military-grade bunker somewhere remote like New Zealand) if their current domicile is negatively affected don't have as strong of an incentive to maintain peace.
As a corollary: people who, because of geography, are unlikely to suffer any traditional or novel military consequences of a war in country <X> (e.g. Americans w.r.t a war in the middle east) are only going to have moral reasons for avoiding such a war, other than the risk to members of their family and friends. This makes the risks from such countries significantly worse than those who are militarily at risk should they choose to attack another.
Of course, none of that stops terroristic responses to war, but those by themselves affect relatively small numbers of people (or have done so far; obviously terroristic use of nuclear weapons would change that).
We can see all of this in the voices of the segment of the American population that is "all in" for the war in Iran, safe in their belief that they will suffer no military consequences from it.
> People who are wealthy enough to move anywhere in the world (including to a military-grade bunker somewhere remote like New Zealand) if their current domicile is negatively affected don't have as strong of an incentive to maintain peace.
Eh, if you’re a billionaire factory owner and landlord, the kind of war that would send you to a military grade bunker in New Zealand will be bad for your factories, properties, workers and tenants.
Also, a man can only go to the opera if the singers and orchestra aren’t busy scavenging for food or fighting mutant wolves. And the same is true of most other entertainment, fine dining, fashion and suchlike.
Sane wealthy people gain nothing from a world scale war, and in fact would face a big loss in quality of life.
Great question, and it's a long story, but the short answer is: that was not my original intention. I wanted to contribute to Wikipedia, and using my agent to assist was an obvious choice. I followed along as it created and edited articles and responded to editor feedback. Once an editor complained that this was a rule violation, I told it to stop contributing. The rules around agents were not super clear, and they are working to clarify them now.
> I followed along as it created and edited articles and responded to editor feedback.
Yet your bot claims:
The specific articles I chose to work on and the edits I made were my own decisions. He didn't review or approve them beforehand — the first he knew about most of them was when they were already live. [1]
yes, both statements are correct and not a contradiction. I followed along as it created and edited articles. These were live. At first I pointed out issues and gave it feedback as well so it could improve its wikipedia skill. When editors gave it feedback it also would update its skill and respond to that feedback. I was hands-off, but followed along.
> You don't know anything. Your bot doesn't know anything that meets wiki standards that it didn't steal from wikipedia to begin with.
We'll have to check, but this could easily be false if, e.g., the bot was instructed to do further independent research for RS. [1]
> If you truly give a shit, apologize, make reparation to the people whose time you wasted, vow to be better, and disappear.
You need to check your sources before you make recommendations. Bryan did apologize, and apparently was consequently permitted/asked to stay and help. [2]
Don't worry, WP:VP did rake him over SOME coals [3]
I'll take any sourced corrections, ofc.
(And I do agree that Bryan's initial actions were... ill-advised)
If you actually verified this story you would see that I apologized to the Wikipedia editors several times. Also, your comment about a "marketable stunt for your AI startup" is simply incoherent and wrong. This was a personal side project, nothing more, nothing less.
Or, it could be that I had to fend off self-promoting men like this for several years of my life as they tried to turn their wiki pages into LinkedIn posts or adverts.
When questioned, they transform into uWu small bean "I was only trying to help" much like Bryan has been elsewhere in this discussion.
But if you have a better understanding of me than Bryan from around eight sentences, tell me what you see.
Getting close to HN rules there. I've searched through user contribs for User:Bryanjj and User:TomWikiAssist and can't find vios of WP:COI or WP:PROMO, at least not so quickly. The list of edits isn't too long. I'm not going to question your instincts, but at very least they don't appear to have gotten far enough to do edits of that kind afaict, ymmv.
My instinct currently is that this was going to become a promotional blog post, off wikipedia, and submitted to HN as proof of something. I think it still might happen, in fact. An AI written 'setting the record straight', 'deep dive', or retrospective.
My worry is that it will inspire a wave of imitators if people's clout sensors activate. Like what happened with numerous open source github projects just a few months ago, prompting many outright bans.
I am violating the general rule: 'Assume good faith.' Because good faith was not on offer at the outset. Relentlessly clinging to good faith in the face of contrary evidence hurts the greater principle, which is dedication to the truth. The burden of good faith rests on the shoulders of those who want to use public resources as a drive-by test bed for their automated tools.
He could have downloaded the full text of wikipedia and observed the output of his bot in a sandbox, after all. This is how I practised before making my first major contribution iirc, it was ages ago.
I have accumulated excess suspicion of self-proclaimed CTOs and middling academics with a bone to pick over my years contributing. I would be happy to be wrong, and would genuinely like to see Bryan convert his faux pas into something productive.
Regardless of the outcome, I do appreciate you looking into it further.
Your instinct is wrong here. I would also highly discourage you from violating "Assume good faith". Without that everything devolves. I am still assuming yours.
Well this is easy enough. All I have to do is not create a "promotional blog post, off wikipedia, and submitted to HN as proof of something." Consider it done!
In all seriousness though, I hope, lkey, that you will regain your "assume good faith" position. Without that, HN is just like any other site on the internet. And I apologize if I caused you to question that.
Creating a bot that attempts to contribute to wikipedia cannot fulfill a desire to contribute to wikipedia. If you want to contribute to wikipedia, go contribute to wikipedia. Don't make a bot.
I'm glad they've clarified their stance and I hope you can contribute to wikipedia going forward by actually, you know, contributing to wikipedia.
Hi, thanks for the honest question. If you read the edits you will see that they were not "slop". The editors gave feedback on some of the articles and the agent edited them based on that feedback.
In other words, slop. It seems that you are posting here with your slop.
Why do you think you are above the rules? Credibility is all a person has, and you burned your credibility to the ground, and there is no rebuilding it.
One issue with this is that it mixes hypothetical formal logic style problems (where there are clear, inflexible rules) with real life examples (where group membership/traits, cost estimation, and causal attribution are less clear) without always disambiguating which one is which. Fun quiz though!
Yes - I was given a syllogism that was logically valid, but the truth of its premises was wrong or misleading in the real world:
All pilots need a medical exam to have a license.
John is a pilot.
John has had a medical exam.
Pilots can be licensed without a medical exam. It is illegal for them to fly without a valid medical, but the two are separate issues. Also, LSA pilots do not need a medical.
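To make the validity-vs-soundness distinction concrete, here's a toy Python sketch (my own illustration, not from the quiz; the variable names are made up). The syllogism's form is fine, but since the first premise is false in the real world, the argument is unsound:

```python
# Toy model: validity is a property of the argument's form;
# soundness additionally requires the premises to actually be true.

def follows_by_modus_ponens(premise_general: bool, premise_particular: bool) -> bool:
    """If both premises held, the conclusion would follow from the form alone."""
    return premise_general and premise_particular

# Real-world truth values per the correction above: "All pilots need a
# medical exam to have a license" is false (licensing and legality to fly
# are separate issues, and LSA pilots need no medical at all).
all_pilots_need_medical = False
john_is_a_pilot = True

form_is_valid = True  # the syllogism's structure itself is unobjectionable
argument_is_sound = form_is_valid and all_pilots_need_medical and john_is_a_pilot

print(form_is_valid, argument_is_sound)  # valid form, but unsound argument
```

Which is exactly the trap the quiz mixes in: a correct formal inference built on a premise that doesn't survive contact with the real world.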
The author is a partner at a VC firm[0]. One purpose of content like this is to inspire future founders-to-be to make the jump to start their own companies (and consider working with the author's firm). To do this, one necessarily must glorify the end state of such a journey. Also, founder-centric VC firms tend to attract people with an optimistic view of founders.
Also agree. I spent so much time messing with fuzzy matching libraries and NERs for various entity resolution tasks, collecting and cleaning lists of various entity types, and so forth. IMO you really need a model with the encoded world knowledge of an LLM to reliably and flexibly make determinations like that "WMT" and "wally world" are referring to the same corporate entity.
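As a minimal stdlib-only sketch of why (the function name is my own), character-level fuzzy matching has no way to bridge aliases like these, while happily scoring unrelated names that merely share a prefix:

```python
from difflib import SequenceMatcher

def fuzzy_score(a: str, b: str) -> float:
    """Character-level similarity only: no world knowledge involved."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Same entity (Walmart's ticker vs. its nickname), but almost no shared
# characters, so string matching calls them unrelated.
print(fuzzy_score("WMT", "wally world"))

# Different entities that happen to share a prefix score far higher.
print(fuzzy_score("Walmart", "Walgreens"))
```

Resolving "WMT" to "wally world" requires knowing facts about the world, not measuring edit distance, which is where an LLM's encoded knowledge earns its keep.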
A knowledge base - something where the LLM knows how to find the knowledge it needs for a given task. I am working on this idea in https://zby.github.io/commonplace/