
tmux by itself lets you create any number of sessions, windows and panes. You can arrange them for anything you want to do.

Having a pane dedicated to some LLM prompt split side by side with your code editor doesn't require additional tools, it's just a tmux hotkey to split a pane.

There are also plugins like tmux-resurrect that let you save and restore everything, including across reboots. I've been using this setup for 6-7 years; here's a video from ~5 years ago that still applies today: https://www.youtube.com/watch?v=sMbuGf2g7gc&t=315s. I like this approach because you can use tmux normally; there's no layout config file you need to define.

It lets me switch between projects in 2 seconds and everything I need is immediately available.
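A minimal version of this workflow can be scripted with plain tmux commands (the session and window names here are arbitrary):

```shell
# start a detached scratch session and split it side by side;
# interactively this is just prefix+% (and prefix+" for a horizontal split)
tmux new-session -d -s scratch -n work
tmux split-window -h -t scratch:work

# run something in the new pane, e.g. an LLM CLI next to your editor
tmux send-keys -t scratch:work.1 'echo hello from the right pane' Enter

tmux list-panes -t scratch:work    # confirms both panes exist
# tmux attach -t scratch           # jump in; detach again with prefix+d
```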


Oh, I mixed up tmux and termux... whoops.

> properly defining the spec

Why do you often need to re-prompt things like "can you simplify this and make it more human readable without sacrificing performance?". No amount of specification addresses this on the first shot unless you already know the exact implementation details in which case you might as well write it yourself directly.

I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a 1st draft base to refactor into something I would consider worthy of a git commit.

I sometimes use AI for tiny standalone functions or scripts so we're not talking about a lot of deeply nested complexity here.


> I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a 1st draft base to refactor into something I would consider worthy of a git commit.

Are you stuck entering your prompts in manually or do you have it set up like a feedback loop, like "beautify -> check beauty -> if not beautiful enough, beautify again"? I can't imagine why everyone thinks AIs can just one shot everything like correctness, optimization, and readability; humans can't one shot these either.


I do everything manually. Prompt, look at the code, see if it works (copy / paste) and if it works but it's written poorly I'll re-prompt to make the code more readable, often ending with me making it more readable without extra prompts. Btw, this isn't about code formatting or linting. It's about how the logic is written.

> I can't imagine why everyone thinks AIs can just one shot everything like correctness, optimization, and readability, humans can't one shot these either.

If it knows how to make the code more readable and / or better for performance by me simply asking "can you make this more readable and performant?" then it should be able to provide this result from the beginning. If not, we're admitting it's providing an initial worse result for unknown reasons. Maybe it's to make you as the operator feel more important (yay I'm providing feedback), or maybe it's to extract the most amount of money it can since each prompt evaluates back to a dollar amount. With the amount of data they have I'm sure they can assess just how many times folks will pay for the "make it better" loop.


Why do you orchestrate the AI manually? You could write a BUILD file that just does it in a loop a few times, or I guess if you lack build system interaction, write a python script?
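A sketch of such a loop in shell; `refine` here is a stand-in for whatever actually rewrites the code (e.g. a call to an LLM CLI), and just simulates convergence so the fixed-point structure is visible: keep feeding the output back in until a pass changes nothing.

```shell
# refine() stands in for an LLM call such as an LLM CLI prompt like
# "make this more readable"; here it removes one "UGLY" marker per pass
refine() {
  printf '%s' "$1" | sed 's/UGLY//'
}

code="UGLY UGLY local x = 1"
for pass in 1 2 3 4 5; do
  new=$(refine "$code")
  [ "$new" = "$code" ] && break   # converged: a pass changed nothing
  code="$new"
done
printf '%s\n' "$code"
```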

> If it knows how to make the code more readable and / or better for performance by me simply asking "can you make this more readable and performant?" then it should be able to provide this result from the beginning.

This is the wrong way to think about AI (at least with our current tech). If you give AI a general task, it won't focus its attention at any of these aspects in particular. But, after you create the code, if you use separate readability and optimization feedback loops where you specifically ask it to work on those aspects of the code, it will do a much better job.

People who feel like AI should just do the right thing already without further prompting or attention focus are just going to be frustrated.

> Btw, this isn't about code formatting or linting. It's about how the logic is written.

Yes, but you still aren't focusing the AI's attention on the problem. You can also write a guide that it puts into context for things you notice that it consistently does wrong. But I would make it a separate pass, get the code to be correct first, and then go through readability refactors (while keeping the code still passing its tests).


> Why do you orchestrate the AI manually?

I have zero trust in any of these tools and usually I use them for 1 off tasks that fit well with the model of copy / pasting small chunks of code.

> But, after you create the code, if you use separate readability and optimization feedback loops where you specifically ask it to work on those aspects of the code, it will do a much better job.

I think that's where I was going with the need to re-prompt. Why not provide the result after 5 internal rounds of readability / optimization loops as the default? I can't think of times where I wouldn't want the "better" version first.


Make (or whatever successor you are using, I'm sure no one actually uses nmake anymore) is pretty reliable at filling in templates that feed into prompts. And AI is pretty efficient at writing makefiles, lowering the effort/payoff threshold.

> I think that's where I was going with the need to re-prompt. Why not provide the result after 5 internal rounds of readability / optimization loops as the default? I can't think of times where I wouldn't want the "better" version first.

I don't think this would work very well right now. I find that the AI is good at writing code, or maybe optimizing code, or maybe making the code more readable (that isn't one I do often, but optimization all the time), but if I ask it to do it all at once it does a worse job. But I guess you could imagine a wrapper around LLM calls (ClaudeCode) that does multiple rounds of prompting, starting with code, then improving the code somewhat after the code "works". I kind of like that it doesn't do this though, since I'm often repairing code and don't want the diff to be too great. Maybe a readability pass when the code is first written and then a readability pass sometimes afterwards when it isn't in flux (to keep source repository change diffs down?).


There are two secret sauces to making Claude Code your b* (please forgive me, future AI overlords). One is to create a spec. The other is to prompt not merely WHAT you want, but HOW you want it done (you can get insanely detailed or stay just vague enough); in some cases the WHY is useful for it to know and understand, and sometimes WHO it's for as well. Give it the context you know. Don't know anything about the code? Ask it to read it, all of it; you've got 1 million tokens, go for it.

I have one-shot prompted projects from empty folder to full-featured web app with accounts, login, profiles, you name it. Insanely stable, maybe an oops here or there, but for a non-spec single prompt shot, that's impressive.

When I don't use a tool to handle the task management, I have Claude build up a markdown spec file for me and specify everything I can think of. Output is always better when you specify the technology you want to use and the design patterns.


Besides having a script to run `myip`, `myip --local` or `myip --public`, the post and video go over using strace to monitor network traffic to make sure the command to help get your local IP address isn't accessing the public internet.
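The check described can be reproduced with something like the following; `myip` is the author's own script, so substitute any command you want to audit:

```shell
# log only network-related syscalls (socket, connect, sendto, ...) to a file;
# an empty or near-empty log means the command never reached for the network
strace -f -e trace=network -o /tmp/myip-net.log myip --local
cat /tmp/myip-net.log
```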

Being able to scale an image without losing quality is going to be handy. I always found it odd that scaling down an image now and then scaling it back to its original size 2 seconds later with the same tool resulted in a loss of quality and having to delete the layer, then re-import the image to get the original quality back.

This plugin https://github.com/LinuxBeaver/Gimp_Layer_Effects_Text_Style... also makes adding text effects with GIMP pretty good. This is unrelated to 3.2 but turned out to be a necessity for me.


It's because each transform was "destructive" (like filters used to be by default). What link & vector layers do instead is store a transform matrix, so each transform just updates the matrix instead of actually re-rasterizing the layer each time.

We were hoping to expand that feature to all layer types for 3.2, but we ran out of time to properly test it for release. It'll likely be finished for the next minor release.


It sounds like you are a gimp developer. Curious about the use of AI to work on it. Do the gimp devs use AI to write code?

I can't speak for all of us, but generally no (in terms of GenAI at least). There are concerns about generated code not being compatible with GPL, and honestly a lot of the drive-by GenAI coded merge requests tend to not work.

I see you are getting downvoted but I don't blame you for this question. I've been curious about what developers of established products are doing with LLM assisted coding myself.

Like most of us, they're certainly using AI-assisted auto-complete and chat for deep thinking. I highly doubt they're vibe coding, which is how I interpret the parent's question and probably why they are being downvoted.

This is insulting to our craft, like going to a woodworkers convention and assuming "most of [them]" are using 3D-printers and laser cutters.

Half the developers I know still don't use LSP (and they're not necessarily older devs), and even the full-time developers in my circle resist their bosses forcing Copilot or Claude down their throats and in fact use zero AI. Living in France, I don't know a single developer using AI tools, except for drive-by pull-request submitters I have never met.

I understand the world is nuanced and there are different dynamics at play, and my circles are not statistically representative of the world at large. Likewise, please don't assume this literally world-eating fad (AI) is what "most of us" are doing just because that's all the cool kids talk about.


> Half the developers I know still don't use LSP

Your IDE either uses an LSP or has its own baked-in proprietary version of one. Nobody, and I mean nobody, working on real projects is "raw dawgin" a text file.

Most modern IDEs support smart auto-complete, a form of AI assistance, and most people use that at a minimum. Further, most IDEs support advanced AI-assisted auto-complete via Copilot, Codex, Claude or a plethora of other options, and many (or most) use them to save time writing and refactoring predictable, repetitive portions of their code.

Not doing so is like forgoing wheels on your car because technically you can just slide it upon the ground.

The only people I've seen in the situation you've described are students at university learning their first language...


I guess I'm nobody then.

I write code exclusively in vim. Unless you want to pretend that ctags is a proprietary version of an LSP, I'm not using an LSP either. I work at a global tech company, and the codebase I work on powers the datacenter networks of most hyperscalers. So, very much a real project. And I'm not an outlier, probably half the engineers at my company are just raw dawgin it with either vim or emacs.


Ctags are very limited and unpopular. Most people do not use them, by any measurement standard.

Using a text editor without LSP or some form of intellisense in 2026 is in the extreme minority. Pretending otherwise is either an attempted (and misguided) "flex" or just plain foolishness.

> probably half the engineers at my company are just raw dawgin it with either vim or emacs

Both vim and emacs support LSP and intellisense. You can even use copilot in both. Maybe you're just not aware...


When your language has neither name-mangling nor namespaces, a simple grep gets you a long way, without language-specific support. My editor (not sure if it counts as an IDE?) uses only words in open documents for completions and that is generally enough. If I feel like I want to use a lot of methods from a particular module I can just open that module.
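To illustrate the grep-based navigation this describes (file and function names here are made up), `grep -rn` already gives file-and-line "jump to definition" by eye:

```shell
# a stand-in source tree with one definition to find
mkdir -p /tmp/grep_demo
cat > /tmp/grep_demo/events.lua <<'EOF'
local function handle_click(btn)
  return btn
end
EOF

# -r recurse, -n show line numbers: each hit is file:line:match
grep -rn 'handle_click' /tmp/grep_demo
```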

I don't use an IDE under the common definition. All my developer friends use neovim, emacs, helix or Notepad++. I'm not a student. The people i have in mind are not students.

Your ai-powered friends and colleagues are not statistically representative. The world is nuanced, everyone is unique, and we're not sociologists running a long study about what "most of us" are doing.

> forgoing wheels on your car

Now you're being silly. Not using AI to program is more akin to not having a rocket engine on your car. Would it go faster? Sure. Would it be safer? Definitely not. Do some people enjoy it? Sure. Does anyone not using it miss it? No.


Like 99.9999% of woodworkers already cheat by using metal and not wooden tools.

I didn't say using different technology was cheating, and metal tools have been part of woodworking for thousands of years, so that's not really comparable.

It's also very different because there's a qualitative change between metal woodworking tools and a laser cutter. The latter requires electricity and massive investments.


Metal tools also require massive investments compared to plain wood tools.

I take it you also mean vibe coding to be one shot and go?

Many years ago I tested a native OS/2 image editor with this feature. It also made it possible to undo an individual transform or effect in the current stack while leaving the rest untouched. Will that be possible in Gimp as well?

Yes, it's planned for transform tools and already possible with filters. Technically our transform tools are already capable of this (they use GEGL operations the same as our non-destructive filters). We just need to tweak it to not immediately commit the transform, and then implement a UI.

When does the final calculation happen then, at file save/export? That would be unexpected. Or does it end up in the final format? That's going to be a nightmare, because then you can't use GIMP to redact data anymore.

That's up to you. Right now filters work the same way - you can merge them automatically on creation, merge them at some point while working, or merge them on export. For formats like PSD, we'll eventually add the option to export as non-destructive filters as well.

We don't want to take away choices - we just want to add more options for people's workflows.


> I always found it odd that scaling down an image now and then scaling it back to its original size 2 seconds later with the same tool resulted in a loss of quality

Maybe it's because I grew up with Paint Shop Pro 6 and such, but that seems completely normal and expected to me


I was using Photoshop; I don't remember exactly when, but non-destructive scaling has probably been available for 15-20 years. I don't remember not having it. Glad to see GIMP is moving in this direction.

> I always found it odd that scaling down an image now and then scaling it back to its original size 2 seconds later with the same tool resulted in a loss of quality

I'm honestly baffled at your surprise... say, if you crop an image, and 2 seconds later you enlarge it to its original size; do you expect to get the initial image back? Or a uniform color padding around your crop?

Scaling is just cropping in the frequency domain. Behaviour should be the same.


From a developer perspective you're obviously correct, but from a user perspective it doesn't make sense that the tool discards information, especially when competing tools don't do that.

Of course as a developer that makes it all the more impressive - kudos to the team for making such big progress, I can't wait to play around with all the new improvements!


Cropping IS a destructive operation. If the program isn't throwing information away, then it doesn't actually do cropping, but some different operation instead.

From a user perspective I wouldn't like it, if I were to crop something and the data would be still there afterwards. That would be a data leak waiting to happen.


I genuinely can't empathize with this objection. To me it's basically the same as arguing against Undo/Redo in a text editor because someone could come along and press Undo on my keyboard after I've deleted sensitive data.

What percentage of users sends around raw project files from which they've cropped out sensitive data to users who shouldn't see that data, vs. what percentage of users ever wants to adjust the crop after applying other filters? The latter is basically everyone, the former I'm guessing at most 1%?


but nobody argues against undo/redo in gimp!

Going by your text editor analogy, we are arguing against implementing undo/redo as a "non-destructive delete", based on adding backspace control characters within the text file. I want infinite undo/redo, but I also want that when I delete a character it is really gone, not hidden!


Sorry, but I still don't see it - the text editor analogy is stretched far too thin. If I share a project file, I want the other user to see all this stuff. If I don't want them to see all this stuff, I send them an export.

It would be a true shame if every useful feature was left out due to 1% of use cases becoming slightly different.


Nice - all actions performed on a layer should retain a hidden "raw original" so we get non-destructive transforms.

I don't think it's that

Here's an example from Gemini with some Lua code:

    label = key:gsub("on%-", ""):gsub("%-", " "):gsub("(%a)([%w_']*)", function(f, r) 
      return f:upper() .. r:lower() 
    end)

    if label:find("Click") then
      label = label:gsub("(%a+)%s+(%a+)", "%2 %1")
    elseif label:find("Scroll") then
      label = label:gsub("(%a+)%s+(%a+)", "%2 %1")
    end
I don't know Lua too well (which is why I used AI) but I know programming well enough to know this logic is ridiculous.

It was to help convert "on-click-right" into "Right Click".

The first bit of code to extract out the words is really convoluted and hard to reason about.

Then look at the code in each condition. It's identical. That's already really bad.

Finally, "Click" and "Scroll" are the only 2 conditions that can ever happen and the AI knew this because I explained this in an earlier prompt. So really all of that code isn't necessary at all. None of it.

What I ended up doing was creating a simple map and looked up the key which had an associated value to it. No conditions or swapping logic needed and way easier to maintain. No AI used, I just looked at the Lua docs on how to create a map in Lua.

This is what the above code translated to:

    local on_event_map = {
      ["on-click"] = "Left Click",
      ["on-click-right"] = "Right Click",
      ["on-click-middle"] = "Middle Click",
      ["on-click-backward"] = "Backward Click",
      ["on-click-forward"] = "Forward Click",
      ["on-scroll-up"] = "Scroll Up",
      ["on-scroll-down"] = "Scroll Down",
    }

    label = on_event_map[key]
IMO the above is a lot clearer on what's happening and super easy to modify if another thing were added later, even if the key's format were different.

Now imagine this. Imagine coding a whole app or a non-trivial script where the first section of code was used. You'd have thousands upon thousands of lines of gross, brittle code that's a nightmare to follow and maintain.


This sounds like moving the goal posts but Gemini is generally not a good model. Try Sonnet

I have 2 wired headphones.

My main ones are Sony MDR-V6s which I've had for 10 years. They are the best headphones I've ever owned and they sound just as good today as they did a decade ago. They were originally made in 1985 and the wire never tangles.

The other are crappy $8 earbuds / mic combo that are maybe 7 years old and work just fine.

I have wireless earbuds that I occasionally use since the Pixel 9a has no 3.5mm jack. They are worse in every way that I care about. I have to babysit them to make sure they are charged.

Sure the wired earbuds get tangled sometimes but it's not a big deal to address that. I also think wired is an advantage for portable usage. For example, for running or doing any activity the wire ensures if they fall out of your ear you won't lose them. They also don't need a case so you can stuff them anywhere without a bulge.


I really admire Carmack and followed everything id software since the beginning.

They really did put a lot of things out in the open back then but I don't think that can be compared to current day.

Doom and Quake 1 / 2 / 3 were all on the cusp of what computing could do (a new gaming experience) while also being wildly fun. Low competition, unique games and no AI is a MUCH different world than today, where there's high competition, not-so-unique games and AI digesting everything you put out to the world only to be sold to someone else to become your competitor.

I'm not convinced what worked for id back then would work today. I'm convinced they would figure out what would work today but I'm almost certain it would be different.

I've seen nothing but personal negative outcomes from AI over the last few years. I had a whole business selling tech courses for 10 years that has evaporated into nothing. I open source everything I do since day 1, thousands of stars on some projects, people writing in saying nice things but I never made millions, not even close. Selling courses helped me keep the lights on but that has gone away.

It's easy to say open source contributions are a gift and deep down I do believe that, but when you don't have infinite money like Carmack and DHH the whole "middle class" of open source contributors have gotten their life flipped upside down from AI. We're being forced out of doing this because it's hard to spend a material amount of time on this sort of thing when you need income at the same time to survive in this world.


Yes, I remember writing a VB6 driven editor. I was so happy when I got find and replace to work.

I still have the marketing page copy from 2002:

    <UL>
      <LI>Unlimited fully customizable template files</LI>
      <LI>Fully customizable syntax highlighting</LI>
      <LI>Very customizable user interface</LI>
      <LI>Color coded printing (optional)</LI>
      <LI>Column selection abilities</LI>
      <LI>Find / Replace by regular expressions</LI>
      <LI>Block indent / outdent</LI>
      <LI>Convert normal text to Ascii, Hex, and Binary</LI>
      <LI>Repeat a string n amount of times</LI>
      <LI>Windows Explorer-like file view (docked window)</LI>
      <LI>Unlimited file history</LI>
      <LI>Favorite groups and files</LI>
      <LI>Unlimited private clipboard for each open document</LI>
      <LI>Associate file types to be opened with this editor</LI>
      <LI>Split the view of a document up to 4 ways</LI>
      <LI>Code Complete (ie. IntelliSense)</LI>
      <LI>Windows XP theme support</LI>
    </UL>
Back then we used uppercase HTML tags.

Windows XP theme support! That was advanced!

Haha thanks.

I went all-in developing that editor. It had a website and forums but it wasn't something I sold, you could download it for free. Funny how even back then I tolerated almost no BS for the tools I use. I couldn't find an editor that I liked so I spent a few weeks making one.

Fast forward 20 years and while I'm not using my own code editor the spirit of building and sharing tools hasn't slowed down. If anything I build more nowadays because as I get older the more I want to use nice things. My tolerance has gotten even stricter. It's how I ended up tuning my development environment over the years in https://github.com/nickjj/dotfiles.


This is definitely aging me, but I'm still disappointed that all caps didn't win. That style made it so much easier to visually parse tags when scanning through the HTML code. I admit that syntax highlighting has mostly done away with that benefit, and now that I'm used to the lower case I don't mind it anymore, but the uppercase always felt better to me. Even reading that example above it feels more natural. Style is a hard thing.

I agree, even with syntax highlighting it visually looks more appealing in caps.

Is it only possible to have success with paid versions of these LLMs?

Google's "Ask AI" and ChatGPT's free models seem to be consistently bad to the point where I've mostly stopped using them.

I've lost track of how many times it was like "yes, you're right, I've looked at the code you've linked and I see it is using a newer version than what I had access to. I've thoroughly scanned it and here's the final solution that works".

And then the solution fails because it references a flag or option that doesn't even exist. Not even in the old or new version, a complete hallucination.

It also seems like the more context it has, the worse it becomes and it starts blending in previous solutions that you explained didn't work already that are organized slightly different in the code but does the wrong thing.

This happens to me almost every time I use it. I couldn't imagine paying for these results, it would be a huge waste of money and time.


It depends.

Google's AI that gloms on to search is not particularly good for programming. I don't use any OpenAI stuff but talking to those that do, their models are not good for programming compared to equivalent ones from Anthropic or google.

I have good success with free gemini used either via the web UI or with aider. That can handle some simple software dev. The new qwen3.5 is pretty good considering its size, though multi-$k of local GPU is not exactly "free".

But, this also all depends on the experience level of the developer. If you are gonna vibe code, you'll likely need to use a paid model to achieve results even close to what an experienced developer can achieve with lesser models (or their own brain).


Set up mmap properly and you can evaluate small/medium MoE models (such as the recent A3B from Qwen) on most ordinary hardware, they'll just be very slow. But if you're willing to wait you can get a feel for their real capabilities, then invest in what it takes to make them usable. (Usually running them on OpenRouter will be cheaper than trying to invest in your own homelab: even if you're literally running them on a 24/7 basis, the break even point compared to a third-party service is too unrealistic.)

Subjectively, but with tests using identical prompts, I find the quality of qwen3.5 122b below claude haiku by as much as claude haiku is below claude sonnet for software design planning tasks. I have yet to try a like-for-like test on coding.

> But, this also all depends on the experience level of the developer. If you are gonna vibe code,

Where I find it struggles is when I prompt it with things like this:

> I'm using the latest version of Walker (app launcher on Linux) on Arch Linux from the AUR, here is a shell script I wrote to generate a dynamic dmenu based menu which gets sent in as input to walker. This is working perfectly but now I want to display this menu in 2 columns instead of 1. I want these to be real columns, not string padding single columns because I want to individually select them. Walker supports multi-column menus based on the symbol menu using multiple columns. What would I need to change to do this? For clarity, I only want this specific custom menu to be multi-column not all menus. Make the smallest change possible or if this strategy is not compatible with this feature, provide an example on how to do it in other ways.

This is something I tried hacking on for an hour yesterday and it led me down rabbit hole after rabbit hole of incorrect information, commands that didn't exist, flags that didn't exist and so on.

I also sometimes have oddball problems I want to solve where I know awk or jq can do it pretty cleanly but I don't really know the syntax off the top of my head. It fails so many times here. Once in a while it will work but it involves dozens of prompts and getting a lot of responses from it like "oh, you're right, I know xyz exists, sorry for not providing that earlier".

I get no value from it if I know the space of the problem at a very good level because then I'd write it unassisted. This is coming at things from the perspective of having ~20 years of general programming experience.

Most of the problems I give it are 1 off standalone scripts that are ~100-200 lines or less. I would have thought this is the best case scenario for it because it doesn't need to know anything beyond the scope of that. There's no elaborate project structure or context involving many files / abstractions.

I don't think I'm cut out for using AI, because if I paid for it and it didn't provide the solution I asked for, I would expect a refund, the same way I would if I bought a hammer from the store and it turned into spaghetti when I tried to use it. That's not what I bought it for.


What LLM are you using? What you describe should be no problem for gemini free or claude haiku and above. Other models, I dunno.

Both ChatGPT's anonymous one as well as Google's "AI mode" on their search page which brings you to a dedicated page to start prompting. I'm not sure if that's Gemini proper because if I goto https://gemini.google.com/app it doesn't have my history.

The "AI mode" in Google search is pretty bad for programming. It is not Gemini.

I don't have direct experience with ChatGPT but those that do that I've talked to place it behind Gemini and Claude models.

Try free Claude or Gemini on the web and see if you have a better experience. Claude free is better than Gemini free. (actually, Gemini free seems extra dumb lately).


Thanks, I tried both and the results were not good IMO.

I gave them the same prompts. Both failed to give a working solution. I lost track of how many times it said "This is the guaranteed to work final solution" which still had the same problem as the 5 previous failures.

I gave up after around 40 failed prompts in a row where it was "Absolutely certain" it will work and is the "final boss" of the solution.


I personally didn't get good results until I got the $100/mo claude plan (and still often hit $180/mo from spending extra credits)

It's not that the model is better than the cheaper plans, but experimenting with and revising prompts takes dozens of iterations for me, and I'm often multiple dollars in when I realize I need to restart with a better plan.

It also takes time and experimentation to get a good feel for context management, which costs money.


I bought the $200 plan after my extras started routinely exceeding that. Harsh.

But, let me suggest that you stop thinking about planning and design as "prompts". I work with it to figure out what I want to do and have it write a spec.md. Then I work with it to figure out the implementation strategy and have it write implementation.md. Then I tell it I am going to give those docs to a new instance and ask it to write all the context it will need with instructions about the files and have it write handoff.md.

By giving up on the paradigm of prompts, I turned my focus to the application and that has been very productive for me.

Good luck.


plan.md / implementation.md is just a prompt.

You're not telling me to do anything different.


Yes, unfortunately the free version of Claude, Gemini or ChatGPT coding models can't compare with the paid ones, and are just not that useful. But, there are alternatives like GLM and Grok that can be quite useful, depending on the task.

PS: The cheapest still very useful alternative I've found is GitHub's Copilot at €10/m base price, with multiple models included. If you pick manually between cheap models for low complexity and save Opus 4.6 for specific things, you can keep it under budget.

At least from what I’ve seen, yes you do have to pay for anything useful. But just the cheaper plans seem worth the price.

> Sure a lot of niceties are missing but compared to the experience most people have with their $500 laptops, this is going to be night and day.

In September I picked up a laptop for $575.

Its specs: 15.6" 1080p IPS display, AMD Ryzen 7 6800H (8 cores, 16 threads), 32 GB of DDR5 memory, a Radeon 680M iGPU that can allocate 8 GB of GPU memory, a 1 TB SSD and a backlit keyboard. It weighs about 3.5 pounds and has (5) USB ports plus an HDMI port. It comes with a 2 year warranty as well.

Running Arch Linux on it with niri and it's really nice for what it is.

There are decent laptops out there at affordable prices.


What’s it made of? How’s the touchpad feel?


It's metal "A-shell", I got mine in black. It also comes in blue or rose gold.

I'm not a huge touchpad fan, but it's very usable. It doesn't misclick.

When picking it, you can choose up to 64 GB of RAM and a 2 TB SSD. It was $50 more for 2 TB instead of 1 TB.

The only problem is it seems to be out of stock. It's a Nimo N155. Amazon resellers are also marking it up like crazy compared to its official listed price.


You literally just compared a laptop running arch to a mac. You’re not the target audience lmao


The laptop ships with Windows 11 but its parts are compatible with Linux too.

That is an important note though, the price includes a valid Windows 11 license.


That wasn’t the point. You’re a person who runs arch, that means most likely your requirements for a computer are VERY different than the target for this Mac. There’s always some other computer you can buy, but most people will just buy the Mac

> You’re a person who runs arch, that means most likely your requirements for a computer are VERY different than the target for this Mac

I do software development, video + image editing, writing and gaming. My requirements are it runs well, I can depend on it and I don't mind if it has a fan.

I only replied because the OP's comment made it seem like it's difficult to find a good laptop in the $600 range. If macOS is optional you can get quite decent specs.


arch :D I loled. the people who run macs don't want Linux.
