I have had the most stable experience with Asus and Ubiquiti. Asus is great for home setups, while Ubiquiti gives you more control if you are willing to spend time configuring it. Reliability mattered more to me than fancy features.
For me it is not ideas or building, it is staying focused on what actually matters. It is easy to keep adding features instead of solving one real problem well.
From what I have seen, the teams that keep things simple tend to work better with coding agents. Most collaboration still happens in GitHub issues/PRs, and the agents are just another tool in the workflow. When too many docs and tools get involved, things start to feel messy quickly.
Spreadsheets did not replace programmers, they mostly changed who could build small solutions. A lot of quick calculations and internal tools moved to spreadsheets, while developers focused more on building systems around them. In many teams it actually increased the need for proper software later on.
One thing I have been noticing is that when AI answers everything instantly, people stop digging deeper themselves. It helps with speed, but it may reduce the small bits of learning that normally accumulate over time. The long-term effect on how we build shared knowledge will be interesting to watch.
Interesting approach. I have noticed the same issue — AI tools generate a lot of code and unit tests, but real user-flow or edge-case testing often gets skipped. Having something that reads the PR context and suggests missing scenarios could actually catch problems earlier.
I agree, but I want to add that specs alone probably will not give you full testing coverage; you have to add other artifacts too, like production logs and incidents, and use some layer of ontology plus a knowledge graph to produce meaningful data connections and understanding. A vector DB alone will only give semantic search and is poorly suited to connecting data artifacts. For example, in a vector DB the word "apple" and the company Apple might be treated as the same thing unless the context is made explicit.
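The "apple" ambiguity can be shown with a toy sketch. This is not a real embedding pipeline or KG library; the hand-rolled `ENTITY_CUES` table and `link_entity` helper are hypothetical stand-ins for an entity-linking layer, just to illustrate why a bare token index conflates senses that a context-aware layer can separate:

```python
# Toy illustration: a naive token-level index collides both senses
# of "apple", while a minimal cue-word "ontology" layer separates them.

fruit_doc = "apple pie recipe with cinnamon"
company_doc = "apple quarterly earnings report"

# A naive index keyed on the bare token maps both documents together.
naive_index = {}
for doc in (fruit_doc, company_doc):
    naive_index.setdefault("apple", []).append(doc)

assert len(naive_index["apple"]) == 2  # both senses collide

# Hypothetical disambiguation layer: each entity has cue words that
# a real KG/ontology would provide from linked relations.
ENTITY_CUES = {
    "apple_fruit": {"pie", "recipe", "cinnamon", "orchard"},
    "apple_company": {"earnings", "iphone", "quarterly", "stock"},
}

def link_entity(doc: str) -> str:
    """Pick the entity whose cue words overlap the document most."""
    words = set(doc.lower().split())
    return max(ENTITY_CUES, key=lambda e: len(ENTITY_CUES[e] & words))

# With context, the two documents land under distinct entity ids.
linked = {link_entity(d): d for d in (fruit_doc, company_doc)}
```

A real system would replace the cue-word table with entity linking against a knowledge graph, but the shape of the fix is the same: resolve tokens to entities using context before indexing, rather than relying on embedding similarity alone.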
Turning on any network discovery feature by default feels wrong for a browser that positions itself around privacy. Even if the risk is small, users should clearly opt in to anything that changes the browser’s network behavior. Transparency matters more than convenience in cases like this.
I have seen a few small teams try Discord for internal chat. It works fine for quick conversations, but over time the lack of structured threads and searchable history can make work discussions harder to track. It tends to feel more like a community space than a workplace tool.
One thing that helped older communities was friction — things like slower posting, reputation built over time, and real participation history. When identity grows from consistent behavior instead of instant access, it is much harder for bots to blend in. Communities used to value that patience a lot more.
In our team we treat AI-generated code the same way we treat junior-written code — the important part is whether the author actually understands it. During review we usually ask for a short explanation of the approach and edge cases they considered. If they can not explain it clearly, the code probably needs another pass.