Hacker News | jph00's comments

He states in the article that they use LLMs for this purpose and find them extremely useful.

Which can be true without this also being true:

> using these tools interactively

I did read the article. It seems to me they're using LLMs in a prepared manner instead, as mere scanners that produce reports.


Perhaps I'm misreading something? From my reading of the article, it doesn't sound like Anthropic offered to let him use Mythos in any other way than that.

He explains in the article that he failed to actually secure access in the end, even though it was approved. Someone else prompted the model on his behalf, and just passed on the findings.

There's always someone making this claim when negative comments about AWS come up.

They almost always come from people who don't have experience running substantive infrastructure at scale without AWS, so they can't make an informed comparison. The complexity of doing so, for a lot of infra, turns out to be lower than using AWS. You also end up with transferable skills and a deeper understanding of the foundational protocols and systems. And you save a lot of money, both because you don't have to pay to manage that complexity, and because the systems themselves are cheaper.


Fully agreed.

It's super difficult on a pseudo-anonymous forum to discern whether a comment comes from a neutral place or a heavily biased one.

This is made even worse when there's a financial or reputational incentive for people to parrot something. If I had invested the bulk of my professional career in Microsoft, I would be genuinely uncomfortable with criticism. I would subconsciously feel threatened and would work to convince myself and others that it's not so bad, or that the criticisms are overblown.

This is even more true if the company actively spends (your company's) money to make you feel good. You don't feel the actual pinch of the financials, and you feel good about the company.

It's really clever, and we're emotional animals: so we don't always make the most pragmatic choices.


The statement “there exists a project where zig led to an extremely high amount of crashes/memory bugs” does not imply “all zig projects have an extremely high amount of crashes/memory bugs”.

This is a classic logic problem - eg “there is an orange cat” doesn’t imply “all cats are orange”.
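The distinction between "there exists" and "for all" can be sketched in a few lines of Python, using a hypothetical list of cats purely for illustration:

```python
# Existential vs. universal quantification:
# "there is an orange cat" does not imply "all cats are orange".
cats = ["orange", "black", "tabby"]

assert any(c == "orange" for c in cats)      # exists: at least one orange cat
assert not all(c == "orange" for c in cats)  # for-all fails: not every cat is orange
```

The same shape applies to the zig claim: `any(buggy)` over the set of zig projects says nothing about `all(buggy)`.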


"already substantially completed" isn't accurate. $450m of the eventual $1.65b cost had been spent at that point - roughly a quarter, well short of half.


I'd call that substantial


Indeed, considering that much of the eventual cost consists of carrying costs, litigation, and year-of-expenditure overruns caused by the delay.


Yeah this has always been the glaring blind spot for most of the "AI Safety" community; and most of the proposals for "improving" AI safety actually make these risks far worse and far more likely.


It makes quite a lot of sense to focus on reducing the risks of every human everywhere dying, rather than the risks of already existing oppression getting worse.


No, you are deeply misunderstanding the issue. Creating a rivalrous good that powers fight over and then use violence to maintain control of, producing a global feudalism, is not "existing oppression getting worse". It actually makes the risk of every human everywhere dying far higher, and even if that doesn't happen, it decreases global utility by a similar percentage (99% instead of 100%). It could actually be worse, if average human utility becomes negative.


GPL was created as a workaround for copyright - it wouldn’t have been needed if there wasn’t copyright. There are complex arguments both for and against copyright and there’s no reason to simply assume it must always be just as now even as circumstances change.


Nearly all my coding for the last decade or so has used literate programming. I built nbdev, which has let me write, document, and test my software using notebooks. Over the last couple of years we integrated LLMs with notebooks and nbdev to create Solveit, which everyone at our company uses for nearly all our work (even our lawyers, HR, etc).

It turns out literate programming is useful for a lot more than just programming!


This seems to be the best link? https://solve.it.com/

The name is quite hard to search for, as it's used by a lot of different things.

Jeremy, it's pretty hard to understand what this is from the descriptions, and the two videos are each ~1 hour long. Please consider showing screenshots and one or two short videos.


This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.

Centralizing power is dangerous and leads to power struggles and instability.


In the alternative, asymmetry is guaranteed.

When you only allow gov and big tech access to powerful AI, you create a much more dangerous and unstable world.


Ideally, sufficiently powerful AI would not be created unless the necessary safety mechanisms are established.

But also, that’s a different kind of asymmetry?


Yes there is. Lots of researchers are more interested in contributing to societal flourishing than in making incredible sums of money. That's why there are still lots of top AI researchers in academia.

