Hacker News | bluegatty's comments

I think it's only been about 10 weeks. I mean, that's forever in AI time, but not a long time in normal-people time.

The 'real world' analogy is much simpler: standards.

Canonical UX patterns are generally beneficial and most 'design' attempts are well-meaning dark patterns.

Xerox figured out windows, scroll bars, buttons, and groups in the 1970s, and most web interfaces are STILL not up to that standard!

Heck - they're not as good as Visual Basic apps from the 1990s.

Largely due to lack of design discipline.


The power saw makes average cuts. It didn't disemploy carpenters; we just made better homes.

We make more homes, but I would say the construction of the average home is worse after the invention of the power saw than before it.

Good gosh no.

That's like saying 'cars were better made in the 1950s because they used tons of steel'. They were 'heavier and more robust', but that doesn't mean better.

Foundations are way better, more robust, especially weatherized. Windows today are like magic compared to windows 100 years ago.

What we do more poorly now is that we don't use wood everywhere (e.g. doors), and certain kinds of workmanship are rarer - winding staircases, mouldings - but you can easily have those if you want to pay for them. That's a choice.

AI is power and leverage, it will make better things as long as it's directed by skilled operators.


Yes, houses got better because materials got better. Windows are better. But the construction of the houses is worse.

The precision of how the wood or material meets is worse (when cut at the site). There is a huge amount of sloppy work in modern construction.


I'm interested in how one would prove that one way or another.

It seems to me that in the past there probably was lots of shoddy workmanship and just no-one paid attention to it.

But I have no proof of that.


Fortunately, there are millions of buildings that remain standing as evidence of what was done in the past. So at least there's that!

Buildings don't get taken down because 'they were built poorly'; they get taken down because it's cheaper to rebuild than refurbish.

And we can accommodate for 'selection bias'.

We have all the historical evidence we could ever want for 'how things were built' - basically infinite examples.

I think some things were more robust, particularly some of the old framing, like in Europe, with non-load-bearing walls etc. Those will stand for 1,000 years, but that's arguably unnecessary.


Massive selection bias - only the good-quality ones remain standing; the low-quality ones do not.

You have to get a representative sample, that's the tricky part.

So there's that!


This is not true in my experience. Prefab kits of all sizes (from sheds to houses to barns, like you could once order from a Sears catalog) have worse tolerances than a carpenter working on site. You can measure three times and cut perfectly, and still end up with a few mm of gap (or sometimes worse) after tiny errors accumulate as you assemble piece after piece. It _requires_ measuring as you go and cutting on site to handle this small amount of drift and to really produce something of high quality. It doesn't come in a box.

Correct about large-scale kits. I had meant to head off that point: preassembled pieces like windows have improved a lot - things that used to be assembled on site but are now delivered as a unit or small kit.

No, it doesn't. The power saw makes perfect cuts. That's why carpenters use them.

Yes, but I would add: 1) a tiny bit of cheating for self-awareness - if you REALLY don't want to go, you don't have to, but you REALLY have to NOT want it; and 2) making it fun changes the threshold so much. The commenter above mentioned rollerblading - it was the same for me. I can rollerblade every day without having to convince myself; that is powerful.

But you do have a point and it is strong.

"Just Go" irrespective of how you feel about it, it's like "Going to Work" - like you get dressed, you drive there, you do the thing ... similar.


If you really don't want to go, just show up and do the little you can. Showing up is the hardest part. After showing up, you'll often find you want to do more than you originally anticipated you were capable of.

Codex with 5.4 xhigh. It's a bad communicator but does the job.

You mean Codex (client) with GPT 5.4 xhigh? I am using the Codex 5.3 model through Cursor, waiting for the Codex 5.4 model, as I've had a great experience so far with 5.3.

Yes, Codex. It has 5.4.

It's bad at long running tasks.

Yes and no. It's bad because of the shorter context, but it does have auto-compaction, which is much better than Claude's. If you provide it documentation to work from and re-reference, it works long-running.

Honestly - 'every inch of IQ delta' seems to be worth it over anything else.

I'm a long-time Claude Code supporter - and I'm ashamed to admit how instantly I dropped it when I discovered how much better 5.4 is.

I don't trust Claude anymore for anything that requires heavy thinking - Codex always finds flaws in the logic.

But this happens every few months.


I tried to use 5.4 for something pretty straightforward - create scripts to automate navigating a game UI and capturing the network traffic. 5.4 was super frustrating, constantly stopping and waiting for feedback etc, even after telling it to never wait and just iterate/debug. I quit and switched to Opus 4.6 and it did much more of the work by itself.

I've never run into that problem, but these were coding solutions in Codex with a strong plan and steps to work towards.

It could be that if you're using massive tokens on a 'plan' they want to limit you in some way, or even that if the objective is not perfectly clear they don't want semi-random token use.

See if the token/sub solution behaves differently. Make sure that when it 'compacts' it re-reads your instructions clearly.


nope

Well, I wish I could help, but things change so fast - Codex with Opus 4.7 is not very strong. You have to set the effort level relatively high, though.

I can totally see myself getting larked by this.

It literally sounds as if something is going on, to the point where I'm even questioning whether there are systematic patterns in the darn trains!

It's weirdly easy to listen to.


Seconded - please help us laypeople here.

It's potentially useful for computer algebra with complex numbers - we might be able to simplify formulas not by standard evaluation but via pattern matching. We might use this to represent exact numbers internally, and only produce an inexact result when we later reduce the expression.

Consider it a bit like a "Church encoding" for complex numbers. I'll try to demonstrate with an S-expression representation.

---

A small primer if you're not familiar. S-expressions are basically atoms (symbols/numbers etc), pairs, or null.

    S = <symbol>
      | <number>
      | (S . S)      ;; aka pair
      | ()           ;; aka null
    
There's some syntax sugar for right chains of pairs to form lists:

    (a b c)          == (a . (b . (c . ())))  ;; a proper list
    (a b . c)        == (a . (b . c))         ;; an improper list
    (#0=(a b c) #0#) == ((a b c) (a b c))     ;; a list with a repeated sublist using a reference
---

So, we have a function `eml(x, y)` and a constant `1`. `x` and `y` are symbols.

Let's say we're going to replace `eml` with an infix operator `.`, and replace the unit `1` with `()`.

    C = <symbol>
      | <number>
      | (C . C)      ;; eml
      | ()           ;; 1
We have basically the same context-free structure - we can encode complex numbers as lists. Let's define ourselves a couple of symbols for use in the examples:

    ($define! x (string->symbol "x"))
    ($define! y (string->symbol "y"))
And now we can define the `eml` function as an alias for `cons`.

    ($define! eml cons)

    (eml x y)
    ;; Output: (x . y)
We can now write a bunch of functions which construct trees, representing the operations they perform. We use only `eml` or previously defined functions to construct each tree:

    ;; e^x

        ($define! exp     ($lambda (x) (eml x ())))
        
        (exp x)
        ;; Output: (x)
        ;; Note: (x) is syntax sugar for (x . ())

    ;; Euler's number `e`

        ($define! c:e     (exp ()))
        
        c:e          
        ;; Output: (())
        ;; Note: (()) is syntax sugar for (() . ())

    ;; exp(1) - ln(x)

        ($define! e1ml    ($lambda (x) (eml () x)))
        
        (e1ml x) 
        ;; Output: (() . x)

    ;; ln(x)

        ($define! ln      ($lambda (x) (e1ml (exp (e1ml x)))))
        
        (ln x)
        ;; Output: (() (() . x))

    ;; Zero

        ($define! c:0      (ln ()))
        
        c:0
        ;; Output: (() (()))

    ;; -infinity

        ($define! c:-inf   (ln c:0))
        
        c:-inf
        ;; Output: (() (() () (())))

    ;; -x
        
        ($define! neg      ($lambda (x) (eml c:-inf (exp x))))

        (neg x)
        ;; Output: ((() (() () (()))) x)
        
    ;; +infinity
    
        ($define! c:+inf   (neg c:-inf))
        
        c:+inf
        ;; Output: (#0=(() (() () (()))) #0#)
        
    ;; 1/x
    
        ($define! recip    ($lambda (x) (exp (eml c:-inf x))))
        
        (recip x)
        ;; Output: (((() (() () (()))) . x))
  
    ;; x - y
    
        ($define! sub      ($lambda (x y) (eml (ln x) (exp y))))
    
        (sub x y)
        ;; Output: ((() (() . x)) y)
    
    ;; x + y
    
        ($define! add      ($lambda (x y) (sub x (neg y))))
    
        (add x y)
        ;; Output: ((() (() . x)) ((() (() () (()))) y))
    
    ;; x * y
    
        ($define! mul      ($lambda (x y) (exp (add (ln x) (ln y)))))
        
        (mul x y)
        ;; Output: (((() (() () (() . x))) ((() (() () (()))) (() (() . y)))))
        
    ;; x / y
    
        ($define! div      ($lambda (x y) (exp (sub (ln x) (ln y)))))
        
        (div x y)
        ;; Output: (((() (() () (() . x))) (() (() . y))))
        
    ;; x^y
    
        ($define! pow      ($lambda (x y) (exp (mul y (ln x)))))
  
        (pow x y)
        ;; Output: ((((() (() () (() . y))) ((() (() () (()))) (() (() () (() . x)))))))
  
I'll stop there, but we can continue implementing all the trig functions, pi, etc. using the same approach.

So basically, we have a way of constructing trees based on `eml`.

Next, we pattern match. For example, to pattern match over addition and extract the `x` and `y` values, we can use:

    ($define! perform-addition
        ($lambda (add-expr)
            ($let ((((() (() . x)) ((() (() () (()))) y)) add-expr))
                (+ x y))))  

    ;; Note, + is provided by the language to perform addition of complex numbers

    (perform-addition (add 256 512))
    ;; Output: 768
So we didn't need to actually compute any `exp(x)` or `ln(y)` to perform this addition - we just needed to pattern match over the tree, which in this case the language does for us via the destructuring `$let`.

We can simplify the definition of `perform-addition` by expanding the parameters of a call to `add` as the arguments to the function:

    ($define! $let-lambda
        ($vau (expr . body) env
            ($let ((params (eval expr env)))
                (wrap (eval (list* $vau (list params) #ignore body) env)))))
                
    ($define! perform-addition
        ($let-lambda (add x y)
            (+ x y)))

    ($define! perform-subtraction
        ($let-lambda (sub x y)
            (- x y)))


    ($define! sub-expr (sub 256 512))
    ;; Output: #inert
    sub-expr
    ;; Output: ((() (() . 256)) 512)

    (perform-subtraction sub-expr)
    ;; Output: -256

There's a bit more work involved for a full pattern matcher which will take some arbitrary `expr` and perform the relevant computation. I'm still working on that.

The examples are in the Kernel programming language, tested using klisp[1].

[1]: https://github.com/dbohdan/klisp


It'd be nice if they didn't use the term at all, because I don't think it's useful, relevant, or real.

If we thought of all of this as 'stochastic data systems', then our heads would be in the right place: we'd treat it just as 'powerful software' that can be used for good or bad purposes, and the negative externalities would derive from our use of it, not from some inherent property.


On the other hand, "magical new systems that provide almost unlimited capacity for intelligent work" is probably a more functional mental model. Genie can give you 1000 wishes till you reach your session limit.

Not quite 1000 on Codex as of the last day or two!

It would have been better if they didn't bootstrap it off the outright theft of a very large amount of IP only to lock it behind a paywall.

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Reverend Mother Gaius Helen Mohiam, Dune

'Rogue super intelligence' is the most ridiculous sci-fi nonsense of the AI hype, worse than the pro AI hype.

AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk.

- giving it too much trust, being lazy, improper guards, and accidents
- leveraging it for negative things (black hats, military targeting)
- states and governments using it as an instrument of control, etc.

That's it.

Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.

Democracy, vigilance, laws, responsibility are what we need, in all things.


Exactly what I tried to articulate yesterday in https://news.ycombinator.com/item?id=47718812#47719503

> 'Rogue super intelligence' is the most ridiculous sci-fi nonsense of the AI hype, worse than the pro AI hype.

In my view that line of argument is pro-AI hype. It's the Big Tech CEOs themselves who often share their predictions of the end of the world as we know it caused by AI. It's FUD that makes the technology sound more powerful and important than it is.


It’s like how the Viagra ads used to warn users to “seek medical help for erections lasting more than four hours.”

> ridiculous sci-fi nonsense

Give it a decade.

I think it may be like saying atomic bombs were sci-fi nonsense in the 1930s.


It's not ARC though. They use fancy threading mechanisms to avoid having to check on every access. Much faster.

"Swift uses Automatic Reference Counting (ARC) to track and manage your app’s memory usage."

https://docs.swift.org/swift-book/documentation/the-swift-pr...


I meant 'not like rust-Arc' :)

... which is what I thought people were referring to.

Yes - it's ref counting, but it's not like 'Arc' at all - ARC is way more thread-aware, and most accesses don't have to do thread checking. Much faster.

So - using Rust 'Arc' would not be at all like using Swift 'ARC' in the end.


It is still primarily ARC. There are simply some optimizations in place, but still no real GC, so you'll never get those dreaded pauses at runtime. It COULD be a real Rust competitor if a couple of issues were addressed, mostly devx (in the standard tooling sense, not this article's misuse of the term to mean expressiveness, correctness, etc., which Swift already has in spades).
