CrispinS's comments

> What's moon plus sun?

Eclipse, obviously.


That’s sun minus moon. Moon plus sun is a wildly more massive, nuclear furnace of a moon that also engulfs the earth.


Reminds me of this AI word combination game recently shared on HN, with almost exactly these mechanics:

https://neal.fun/infinite-craft/

For the record, Sun+Moon is indeed eclipse.


>Moon plus sun is a wildly more massive, nuclear furnace of a moon that also engulfs the earth.

I just looked up the mass of the sun vs the mass of the moon (roughly 2×10^30 kg vs 7×10^22 kg), and the elemental composition of the sun: the moon would entirely disappear into the insignificant digits of the trace elements, which are in the range of 0.01% of the sun. I could be off by orders of magnitude all over the place and it would still disappear.


Wait so moon plus sun != sun plus moon? :Thinking:


celestial objects don't need to obey algebraic commutativity!


I wonder if SCP-1313 does


This thread reminds me of Scribblenauts, the game where you conjure objects to solve puzzles by describing them. I suspect it was an inspiration for Baba Is You.


Scribblenauts was also an early precursor to modern GenAI/word embeddings. I constantly bring it up in discussions of the history of AI for this reason.


Could you explain? :3


Not sure about that. You can't have an eclipse without both the moon and the sun. Ergo, the eclipse is the totality (sorry!) of the sun and moon, or sun+moon (+very specific boundary conditions).

Still think it was a good response :)


Here I was, like an idiot, thinking it was moonlight


Or potentially a sun that lasts slightly longer?


but then eclipse + moon = sun, which doesn't make much sense either :/


Not obvious. Astronomers are actively looking for signatures of exomoons around exoplanets. So "sun plus moon" could mean that too.


The OP said moon + sun, rather than sun + moon. We have no idea yet if celestial math is non-communicative.


*commutative


Well, that too.


Well, you find the signature by looking for a dip in the sun's luminosity. So minus might be the better relationship here.


Moon plus sun would be sun because the sun would be an absorbing element.


Moon implies there is a planet the moon is orbiting. So unless the planet and its moon are too close to the sun, the long-term result could also be: a solar system.


This goes to show how awfully defined that plus operation is.


That's operator overloading for you.


The set of celestial objects visible to the naked eye during the day.


INSUFFICIENT DATA FOR MEANINGFUL ANSWER.


The thing I love about blog posts like these is how it reminds me that the tech world is a vast ocean that encompasses so many disciplines; it's not all full stack web development.

Related: I did not understand 95% of what she wrote.


On some of my cover letters I wrote "full stack from the transistors upwards", because at one point or another I have shipped code in:

- IC design software (at a startup bought by Cadence)

- an IC (contract out of Dallas semi)

- FPGA HFT acceleration

- fixing some OS drivers for Windows CE

- finding a compiler bug

- various bits of embedded firmware in C and assembly for various platforms

- debugging with a scope

- desktop applications

- a web server (defunct ZWS)

- web apps (Perl. Long time ago)

Somehow I've never written a react app.


> Somehow I've never written a react app.

Count your blessings.


lol


When I first came to HN, I didn't know what `hn`, `pg`, or other initialisms meant. But I saw people boasting in the new vocabulary of "full stack developer." And I assumed that if companies loved "javascript down to redis" that they would really love that I could do front end all the way down to embedded development. Think of the problems all my full stack knowledge could solve!

Never got an offer through "who's hiring" though.


I wrote here a couple days ago: "For a Hacker News degenerate, everything in the world revolves around bean-counting B2B SaaS CRUD crapps, but it doesn't mean it's all there is to the world, right?"


I didn't even know that 180nm was still a thing, but clearly it is, because apparently the cost difference is like USD 100M for 180nm vs USD 10B or more for the latest tech?

Is it true that we will likely have these 180nm chips for things like light bulbs for the foreseeable future?


Yes, actually 180 nm still represents a sizable share of the market in terms of volume! In more niche applications where chips contain lots of analog functionality, you can still find plenty of designs being done in 180, 130, 110, and 65 nm. Most companies don't disclose this, but I'd venture to guess the majority of integrated circuits in your home are made on these larger "process nodes". I work in 65nm and 130nm, for example. Feel free to ask if you want to know more!


I work in a similar market, and we're only just starting to phase out these larger nodes and move to 22nm simply for wafer availability.

It doesn't benefit from 22nm - analog blocks generally don't scale down at all; they have to be a particular size to meet particular current-handling, inductance, etc. requirements. But we need the production line availability.


I'm not OP, but perhaps you, or somebody else here, could answer my question, albeit one that is slightly off-topic. In recent years, in part courtesy of crypto-industry investment, there have been many advances in zero-knowledge mathematics and applied cryptography. I've been on-and-off researching computational approaches to liquid democracy[1], on the off chance that we may one day apply it in my country, Ukraine, and I came to the conclusion that open hardware as a public good is table stakes to that end. Modern computers are way too complex, and trust in them is at an all-time low. Bringing computation into politics is a tall order. However, if we could buy a fab, design some hardware transparently, and allow inspections from civil groups and scientists, maybe that could work... What kind of costs are we looking at for establishing something like a 130nm process, and would it be possible to buy out the necessary IP too, so that everything could be done in the open?

Does this even work long-term? I'd like to think transparent-by-design hardware manufacturing is not a pipe dream, but if it is, I would hate to give it too much thought.

[1] https://en.wikipedia.org/wiki/Liquid_democracy


Hey, I'm not a system-level digital designer, but as for government-level initiatives to provide 130nm and 65nm fabs for public benefit: yes, they exist!

From the 2025 Free Silicon Conference:

https://wiki.f-si.org/index.php?title=The_Transparent_Refere...

https://wiki.f-si.org/images/e/eb/OpenFab%40FSiC2025.pdf

The initiative started in Germany, where the research institute IHP already provides an open source 130nm PDK and associated foundry, but interest is spreading. Here's the abstract from that talk:

"The European Chips Act aims to double Europe’s share in global semiconductor manufacturing to 20% by 2030. However, most current investments focus on leading-edge nodes and pilot lines, which – while important – are not sufficient to achieve broad capacity scaling. At the same time, demand for mature nodes (≥65 nm) remains strong: over two-thirds of chips in automotive and industrial sectors still rely on nodes ≥90 nm, and this trend is expected to persist through 2030. This contribution introduces the concept of a Transparent Reference Fab – a fully open, scalable semiconductor fabrication model designed to serve as a blueprint for sovereign and trustworthy chip manufacturing in Europe. Unlike traditional pilot lines, the Transparent Reference Fab is production-ready and replicable. It includes open access to process design kits (PDKs), equipment configurations, process recipes, and operational know-how. The fab targets mature nodes, especially 65 nm CMOS, and is intended to be built on existing infrastructure to reduce time-to-market and technical risk. We argue that such a model can significantly multiply Europe’s production capacity by enabling private and public actors to replicate the reference fab across regions. This approach would not only strengthen Europe’s position in strategic semiconductor supply chains but also foster innovation, education, and security through transparency. The paper presents the strategic rationale, technical architecture, and implementation path, positioning the Transparent Reference Fab as a critical instrument for European resilience and competitiveness."


Wow, thanks! I was completely unaware of it, of course.


This project exists, here it is: https://opentitan.org/


I previously came across OpenTitan, but it's hardware design only, right? It doesn't actually concern itself with bringing up a transparent manufacturing process?

For example, I couldn't find anything about the costs necessary to bring up a fab?


A project that addresses that issue is Betrusted: https://betrusted.io/ Their plan for fab trust is not to bring up a fab, but to design for inspectability: https://bunnie.org/iris/


I happen to own a Precursor, and have indeed used it for some experiments, but it's unfortunately limited by Xilinx Spartan-7 availability. That's one of the few FPGAs that have been reverse-engineered, and they probably don't make it anymore... Another that has been RE'd is the Lattice ECP5, but it's in the same category. I'm pretty sure you couldn't make 50 million devices like that. I know they've been looking into alternatives, but they haven't caught up yet.


Their next one (https://baochip.com/) is going to be an SoC, piggybacked on another company's SoC. So not completely open-source RTL, but enough to prove their technology on a larger scale. Bunnie's presentation of it is here: https://media.ccc.de/v/39c3-xous-a-pure-rust-rethink-of-the-... (25 minutes in)


Thanks for offering. Do you do analog design, and which market niche are you targeting: low cost per part or something else?


I work in custom CMOS image sensor design, targeting scientific imaging applications like electron microscopes, X-ray microscopy, and detectors for high-energy physics. Our designs aren't that cost sensitive from a unit-cost perspective, because at most we're probably making several thousand of the chips. So the cost per chip can effectively range from $10-100 at this scale, after yield losses. But the fixed costs of engineering and 'mask creation' for a process node can range from $300k for nodes around 180 nm, to over $500k for 65nm, and above $1M for 28nm and below.

We can save money during initial prototyping by creating a small test structure, as small as 1 mm^2, which reduces the cost of a prototype run to $5k-10k. Some services that provide this are MOSIS [0] in the US and Europractice [1] in the EU. But when we go to a full production run, there's no way around creating a 'full reticle' design, as image sensors have a physical dimension determined by the focal-plane size requirement of the imaging application. For example, in a digital camera, if a sensor is 'full frame' then it obviously has to be 36mm x 24mm, regardless of whether the process node would have let you shrink it. And if you make a serious mistake, you need to do another production run, which means you pay the $300k-1M once again.

In terms of the circuit functionality, image sensors require a mixture of analog and digital design, but in this area, even many of the digital circuits are custom designed, rather than relying on foundry-provided 'standard cells' and an automatic place-and-route flow.

[0] https://www.mosis.org/ [1] https://europractice-ic.com/
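As a back-of-envelope check on the figures quoted above, you can amortize the fixed NRE/mask cost over the run size; `per_chip_nre` is a hypothetical helper name, and the numbers are the ones from this comment, purely illustrative:

```shell
#!/usr/bin/env bash
# Integer-math sketch: fixed NRE/mask cost divided across a production run.
per_chip_nre() {  # usage: per_chip_nre <nre_dollars> <units>
  echo $(( $1 / $2 ))
}

per_chip_nre 300000 5000    # ~180 nm mask set over 5k chips -> 60 ($/chip)
per_chip_nre 1000000 5000   # 28 nm-class mask set over 5k chips -> 200 ($/chip)
```

So even at the cheap end of the node range, the mask set alone adds tens of dollars per chip at these volumes, which is why the $10-100 unit cost figure is dominated by fixed costs rather than silicon area.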


Oh thanks, this is really interesting. Is there a limit to how far you can scale down your node to build the full frame image sensor: is 180nm the largest feasible node?


Modern commercial image sensors are made in process nodes down to 28nm [0], and for visible light have pixels measuring 0.7-1.5 μm. At [0] there's a diagram which gives a feel for which technology nodes are available and used for different applications. For example, RF ICs and power-management ICs also typically use larger process nodes, and not just for reasons of cost. In fact, a larger node doesn't necessarily even mean older: many technologies allowing better power-handling capabilities in integrated circuits have come exclusively to larger nodes.

Regarding node sizes for image sensors, TSMC recently built a 28nm fab for Sony exclusively to make their latest sensors. There was actually an HN post about that a couple years ago [1]. Also, it's important to note that in many applications the image sensor layer is now actually stacked, with a layer of DRAM (in 45 nm, for example) in between, and an ISP (image signal processor) chip on the bottom made in a smaller digital process. You can see an image of that stack-up here [2].

[0] https://image-sensors-world.blogspot.com/2020/08/tsmc-report... [1] https://news.ycombinator.com/item?id=24321804 [2] https://fuse.wikichip.org/news/763/iedm-2017-sonys-3-layer-s...


This is great: thanks for all this.


More than light bulbs. As you correctly pointed out, it's a matter of economics: 180nm is CHEAP! So a lot more things become economically viable; think of all the weird specialized ASICs that used to be too expensive to build.


Not only that, but 180nm/130nm are the only options that are open source as of now. Transistor libraries for ICs (or PDKs) have long been proprietary. I'm only aware of IHP and Sky130, which are actually banking on FOSSi / libre silicon design.


That's what is expected to finally kill Moore's law: the economics. At some point it will still be technically possible to fabricate smaller IC structures, stack more layers, etc., but the tech to do so (and the fabs to do it at scale) will be costly enough that it's just not worth it.

The other point is of course that a next-gen fab first needs to be built, and its yields brought up, while the previous-gen fab already exists, with all the fine-tuning already done and the kinks ironed out. Not to mention maaanny applications simply don't need complex ICs (the typical 32-bit microcontroller comes to mind, but even 8-bit ones are still around).


True, someone needs to build that computer after all.


I suppose it's fitting that an article concerned largely with AI was written largely by AI. (I noticed a lot of GPT-isms.)

I mean, it is mostly solid advice, e.g. asking AI to cite sources (and checking them!) and asking about the assumptions it's making.

And on the subject of automating things or making things more efficient, I'd extend that to a general reminder that just because things are the way they are, doesn't mean they have to be that way.

Which sounds obvious, but it's so easy to get used to a situation in your life that you don't like, but it's not so horrendous that you're motivated to do something about it. And then it just becomes background and you forget that there's the possibility of a better reality.

Speaking from many personal experiences here...


Did you use an AI to tweak/refine your comment? It's:

* Written more formally than the typical HN comment

* Uses uncommon language like "jocular asides" and "whimsical similes"

* Fails to recognize that those mentioned phrases are cliches that people have been using for ages, long before LLMs

In short, recalibrate your AI radar, it's malfunctioning.


Heh, I guess so. It's just an uneasy feeling I can't get rid of. Maybe I'm just being paranoid. Then again, I wonder if said greentexts are still AI-generated. At least the contents are likely to be fake.


AI isn't this funny.


Not everything you dislike is AI. I don't see any signs at all of AI authorship.


I actually didn't dislike the premise of the article at all, and agree with some/many of the points (I've even favourite'd it). It showed a perspective I hadn't explicitly thought of before.

The sentence structures I mentioned in my earlier comment are ones often associated with AI. Once you start noticing them, you'll find them a lot in online content. Let me know if you want to learn more; there's a YouTube video on identifying AI comments. I had independently found many of them myself, which would be very unlikely if these were genuinely not AI language traits.


I agree with your last sentence, but on the subject of positive portrayals of the US armed forces, the studios actually have an incentive to play nice. The DoD will let film productions use real equipment and personnel, but only after vetting the script and making changes as they see fit.

For example, the Transformers movies: https://www.wired.com/2008/12/pentagon-holl-1/

The general concept: https://en.m.wikipedia.org/wiki/Military-entertainment_compl...


I can't believe a software developer is using an operating system/PDF viewer that isn't patched for security vulnerabilities as major as an RCE.

Unless this was a zero-day, but I would have assumed the article would mention that fact...


I really wish we had details here too, but someone made a good point:

"Hey, you need a PDF viewer with scripts enabled for the digital signing.. can you install Adobe XXX?" would be a good line to get the mark to use a less-than-secure PDF viewer.

But also, since it was the North Korea hacking group, I'm not ruling out a 0-day... hopefully more details will come at some point.


Emojis? For the current weather, a single emoji could convey quite a lot; e.g. a snowflake for sub-60 weather (I have a low tolerance for cold), a sun for 60-80, fire emoji for 80+...

Now, I don't know if anyone truly needs the weather in their terminal prompt, but it is doable.
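A minimal bash sketch of that mapping, using the thresholds above; `weather_emoji` is a hypothetical helper name, and the glyph choices are my own:

```shell
#!/usr/bin/env bash
# Map a Fahrenheit temperature to a single weather glyph for a prompt.
weather_emoji() {
  local temp=$1
  if (( temp < 60 )); then
    printf '❄'       # sub-60: snowflake
  elif (( temp <= 80 )); then
    printf '☀'       # 60-80: sun
  else
    printf '🔥'       # 80+: fire
  fi
}

# Prompt usage, e.g.: PS1="\$(weather_emoji 72) \w \$ "
```

In a real prompt you'd feed it a temperature fetched (and cached) from some weather service rather than a hard-coded number.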


Microcharts or sparklines are another option. I've seen a few implementations along these lines for shell prompts / shell use.

This might be useful for temperature, humidity, wind, precipitation, and similar measures, either as quantities or timelines.

https://en.wikipedia.org/wiki/Sparkline

https://github.com/deeplook/sparklines

Similar:

https://www.linux-magazine.com/Issues/2016/183/Calc-Conditio...
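For a feel of how little is needed, here's a self-contained bash sketch of a sparkline renderer; `spark` is a hypothetical name and the min/max scaling is my own choice, not the API of the library linked above:

```shell
#!/usr/bin/env bash
# Render a list of integers as a one-line sparkline of Unicode blocks.
spark() {
  local chars=(▁ ▂ ▃ ▄ ▅ ▆ ▇ █)
  local n min=$1 max=$1 out=""
  # find the range of the data
  for n in "$@"; do
    (( n < min )) && min=$n
    (( n > max )) && max=$n
  done
  local range=$(( max - min ))
  (( range == 0 )) && range=1   # avoid divide-by-zero on flat data
  # scale each value to one of the eight block heights
  for n in "$@"; do
    out+="${chars[(n - min) * 7 / range]}"
  done
  printf '%s\n' "$out"
}

spark 3 1 4 1 5 9 2 6   # e.g. an hourly temperature or wind series
```

Dropping the output of something like this into PS1 or RPROMPT gives a tiny weather timeline without leaving the terminal.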


Perhaps the RPROMPT would be better. I usually use it to show the time with %T.


curl 'wttr.in/?format=%c'

see the readme here for all options:

https://github.com/chubin/wttr.in


> The second link returned on him was from ADL. No way that's an organic result.

It might be, actually. I understand why you'd think that, but look at the results for other search engines.

Kagi: ADL in 2nd place

Bing: ADL in 3rd place

Yandex: ADL not on the first page, but SPLC[1] is the 6th result

[1]: https://www.splcenter.org/fighting-hate/extremist-files/indi...


This logic kind of fails quickly. I bet you wouldn't use it to show that Tiananmen Square did not happen by showing that all Chinese search engines are in apparent agreement on it not happening.


Well, no, which is why I threw in Kagi and Yandex as well. I can imagine Google and Microsoft altering rankings for certain results for political reasons, but Kagi seems too small to care about that, and Yandex isn't operating from the same political playbook as western corporations.

Now, in defense of your theory, I did double check Kagi and found out that they use Bing and Google for some queries, so the only truly "untainted" one is Yandex, which doesn't have ADL on the first page, or the next five that I checked.

That said, as I mentioned they do surface SPLC, which is similar in tone and content.

Limited sample size, but I think it's still plausible that ADL is an organic result.

I also checked Yahoo, and it has ADL as the third result.

I checked Baidu and Naver, and didn't see ADL, but I assume they're prioritizing regional content.


Does it often happen to you that you talk about AI and, three minutes later, find yourself arguing with every search engine on the planet that it's impossible that someone would say nasty things about your favorite fascist?


Guess it depends on the "algorithm", but if we were still in the PageRank era there's no way in hell ADL or SPLC would be anywhere near the top results for "Alex Jones", considering how many other news stories, blogs, comments, etc. about him exist.


The PageRank era ended almost immediately. Google has had a large editorial team for a long, long time (probably before they were profitable).

It turns out PageRank always kind of sucked. However, it was competing with sites that did “pay for placement” for the first page or two, so it only had to be better than “maliciously bad”.

