That’s fair, NixOS avoids the direct stuff from Docker itself, but if you’re basing on an Alpine image or something like it, the result would probably be more minimal / smaller.
I don't get JJ. Every time it's posted people gush about how JJ enables some super complicated workflow that I can't wrap my head around. I have a simple feature branch/rebase workflow in git that has served me well for decades so I guess I don't understand why I would want to complicate things with (in this case) an "octopus merge/megamerge". Wouldn't that make it more difficult to reason about the repository/history?
If you wrangle a lot of in-flight changes that are not yet merged into your team's primary git repo, it's very helpful. I have some 10-30 changes in various states at any time. Sometimes they have dependencies on each other, sometimes they don't. Placing them all into one branch can work, but it's a lot less ergonomic in many ways. jj makes my life simpler because it accommodates my workflow in a way git doesn't.
Honestly, if you don't find it appealing you don't need to use it. I think a lot of folks don't find vim appealing and stick to vscode and that's okay too.
> I have some 10-30 changes in various states at any time. Sometimes they have dependencies on each other, sometimes they don't.
This is the sort of scenario that leads me to think tools are being praised for how well they support major red flags in development flows.
Having dozens of changes in flight in feature branches that may or may not be interdependent is a major red flag. Claiming that a tool simplifies managing this sort of workflow sounds like you are mitigating a problem whose root cause is something else.
To me it reads like praising a tool for how it streamlines deployments to production by skipping all tests and deployment steps. I mean, sure. But doesn't this mask a far bigger problem? Why would anyone feel the need to skip checks and guardrails?
Some will say that it's a "red flag"; others will say that those calling it a red flag lack the experience of working on a diverse set of projects with various needs and requirements.
> Some will say that it's a "red flag"; others will say that those calling it a red flag lack the experience of working on a diverse set of projects with various needs and requirements.
What if those who call out red flags actually do so based on experience, particularly experience in understanding how and why red flags are red flags, and why it's counterproductive to create your own problems?
I mean, if after all your rich experience working on a diverse set of projects with various needs and requirements, your answer to repeatedly shooting yourself in the foot is that you need a tool to better aim around your toes... what does that say about the lessons you draw?
Take Linux: they've got a "super-long-term-support" branch, a "long-term-support" branch, a "stable" branch, and the next "stable" branch. "stable" is supported until about three months after the release of the next "stable", with (usually) 2-6 months between stable releases. "long-term-support" branches are supported for about 5 years, and "super-long-term-support" for a few years after that. So there can be up to 4 released branches actively supported at any given time, ignoring any feature branches.
Every time I hear about this megamerge and stacked-PR nonsense, it just smells to me. Like, why does your engineering organization have a culture where this sort of nonsense is required in the first place? Any time I see articles like this gushing about how great tool XYZ is for stack merging and the like, all I hear is "you don't have a culture where you can get someone looking at and mainlining your PR on the same day".
The jj lovers can go build their massive beautiful branches off in a corner, I'll be over here building an SDLC that doesn't require that.
Not all software is developed by one software organization.
Programs to manage “stacks of patches” go back decades. That might be hundreds of patches that have accumulated over years, all rebased on the upstream repository. The upstream maintainer might be someone you barely know, or someone you haven’t managed to get a response from. But you have your changes in your fork and you need to maintain them yourself until upstream accepts them (if they ever call back).
I’m pretty sure that the Git For Windows project is managed as patches on top of Git. And I’ve seen the maintainer post patches to the Git mailing list saying something like, okay we’ve been using this for months now and I think it’s time that it is incorporated in Git.[1]
I’ve seen patches posted to the Git mailing list where they talk about how this new thing (like a command) was originally developed by someone on GitHub (say) but now someone on GitLab (say) took it over and wants to upstream it. Maybe years after it was started.
Almost all changes to the Git project need to incubate for a week in an integration branch called `next` before they are merged to `master`.[1] Beyond the slow-bake testing this gives the Git project itself, it means that downstream projects can use `next` in their automated testing to catch regressions before they hit `master`.
Makes total sense! But what you described is like less than 5% of the use case here. Right tool for the right job and all that, what doesn't make sense is having this insanity in a "normal" software engineering setup where a single company owns and maintains the codebase, which is the vast majority of use cases.
It depends. We have a pretty good review culture (usually same day, rarely more than 24h), but some changes may need multiple rounds of review or might have flaky tests that only surface after a few hours. Also, some work is experimental and not ready to push out for review. Sometimes I create a very large number of commits as part of a migration and I can't get them all reviewed in parallel. It can be a lot of things. Maybe it happens more with monorepos.
All fair points, indeed I face each of the challenges you listed periodically myself. But it's never been often enough to feel like I need to seek out an entirely different toolchain and approach to manage them.
Well, fortunately Jujutsu isn’t an entirely different toolchain and/or approach. It’s one tool that’s git-compatible and is quite similar to it. But where it’s different, it’s (for me) better.
Yeah, I've never used Jujutsu, but from what I've seen so far everything it does can be done with Git itself, just perhaps in a (sometimes significantly) less convenient way.
Sure, true, I would say "often significantly" though, to the extent that you would never bother doing half the things with git that you can do with Jujutsu because it's such a pain.
Why frame this as either/or? Those aren't the only two options.
There are different types of "large" PRs. If I'm doing a 10,000 LOC refactor that's changing a method signature, that's a "large" PR, but who cares? It's the same thing being done over and over; I get the gist of the approach, do some sampling and sanity checks, check sensitive areas, and done.
If I'm doing something more complex and storied to the point it requires stacks with dependencies, then I'm questioning why I haven't split and chunked the thing into smaller PRs in the first place and had those reviewed. Ultimately the code still has to get reviewed, so often it's about reframing the mindset more than anything else. If it organizationally slows me down to the point that chunking the PR into smaller components is worse than a stacked-PR-like approach, I'm not questioning the PR structure, I'm questioning why I'm being slowed down organizationally. Are my reviews not picked up fast enough? Is the automated testing situation not good enough? The answer always seems to come back to the process and not the tooling in these scenarios.
What problem does the stacked PR solve? It's so I can continue working downstream while someone else reviews my unmainlined upstream code that it depends on. If my upstream code gets mainlined at a reasonable rate, why is this even a problem to be solved? It also implies that you're only managing 1-3 major workstreams if you're getting blocked on the feature downstream which also begs the question, why am I waterfalling all of my work like this?
Fundamentally, I still have to manage the dependency issue with upstream PRs, even when I'm using stacked PRs. Let's say that an upstream reviewer in my stacked-PR chain needs me to change something significant - a fairly normal occurrence in the course of review. I still have to walk down that chain and update my code accordingly. Having tools to make that slightly easier is nice, but the cost-benefit of being on a different opt-in toolchain that requires its own learning curve is questionable.
> If I'm doing something more complex and storied to the point it requires stacks with dependencies, then I'm questioning why I haven't split and chunked the thing into smaller PRs in the first place and had those reviewed.
It looks like you see stacked PRs as an inherently complex construct, but IMO splitting the implementation into smaller, more digestible, self-contained PRs is exactly what stacked PRs are about.
So if you agree that that is the better engineering practice, then jj is just a tool that helps you do it without having to think too much about the tool itself.
Why you would like git and not jj is beyond me; it must be something like two electric charges being the same and repelling each other. It’s the same underlying data structure with slightly different axioms (conflicts are allowed to be committed vs. not; the working tree is a commit vs. isn’t).
Turns out these two differences combined with tracking change identity over multiple snapshots (git shas) allow for ergonomic workflows which were possible in git, just very cumbersome. The workflows that git makes easy jj also keeps easy. You can stop yelling at clouds and sleep soundly knowing that there is a tool to reach for when you need it and you’ll know when you need it.
No, but there are a lot of them, and principal and staff engineers, and solo folks who would get to set the culture if they ever succeed.
A lot of people's taste-making comes from reading the online discussions of the engineering literati, so I think we need old folks yelling at clouds to keep us grounded.
I don't get git. Every time it's posted people gush about how git enables some super complicated workflow that I can't wrap my head around. I have a simple edit/undo workflow in my editor that has served me well for decades so I guess I don't understand...
There is a limit to how far one needs to abstract personally.
I don't layer my utensils for example, because a spoon is fit for purpose and reliable.
But if I needed to eat multiple different bowls at once maybe I would need to.
For my personal use case, git is fit for purpose and reliable, even for complex refactoring. I don't find myself in any circumstances where I think, gosh, if only I could have many layers of this going on at once.
Even if you're working on one single thread of development, jj is easier and more flexible than git though. That it works better for super complicated workflows is just a bonus.
jj reduces mental overhead by mapping far more cleanly and intuitively to the way people tend to work.
This is a little weird at first when you’ve been used to a decade and a half of contorting your mental model to fit git. But it genuinely is one of those tools that’s both easier and more powerful. The entire reason people are looking at these new workflows is because jj makes things so much easier and more straightforward that we can explore new workflows that remove or reduce the complexity of things that just weren’t even remotely plausible in git.
A huge one for me: successive PRs that roll out some thing to dev/staging/prod. You can do the work all at once, split it into three commits that progressively roll out, and make a PR for each. This doesn’t sound impressive until you have to fix something in the dev PR. In git, this would be a massive pain in the ass. In jj, it’s basically a no-op. You fix dev, and everything downstream is updated to include the fix automatically. It’s nearly zero effort.
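A minimal sketch of what that looks like on the command line, in a scratch repo (the repo name, file, and change descriptions here are all made up; assumes jj is installed):

```shell
# Hypothetical dev/staging/prod stack in a throwaway repo.
jj git init stack-demo && cd stack-demo
echo v1 > config.txt
jj describe -m "roll out to dev"
jj new -m "roll out to staging"
jj new -m "roll out to prod"
# Review feedback arrives on the dev change: jump back and edit it directly.
jj edit 'description("roll out to dev")'
echo v1-fixed > config.txt
# That's it. jj automatically rebases the staging and prod commits on top
# of the amended dev commit; no per-branch cherry-picks or rebases needed.
jj log
```

In git the equivalent would be an interactive rebase plus force-pushing each branch in the stack by hand.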
Another is when you are working on a feature and in doing so need to add a capability somewhere else and fix two bugs in other places. You could just do all of this in one PR, but now the whole thing has to be reviewed as a larger package. With jj, it’s trivial to pull out the three separate changes into three branches, continue your work on a merge of those three branches, and open PRs for each separate change. When two of them merge cleanly and another needs further changes, you just do it and there’s zero friction from the tool. Meanwhile just the thought of this in git gives me anxiety. It reduces my mental overhead, my effort, and gives overburdened coworkers bite-sized PRs that can be reviewed in seconds instead of a bigger one that needs time set aside. And I don’t ever end up in a situation where I need to stop working on the thing I am trying to do because my team hasn’t had the bandwidth to review and merge my PRs. I’ve been dozens of commits and several stacked branches ahead of what’s been merged and it doesn’t even slightly matter.
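A sketch of that split-and-merge flow in a scratch repo (all names here are hypothetical; `jj bookmark` is one way to turn each change into its own branch/PR):

```shell
# Hypothetical "megamerge": three independent changes off main, plus a
# working commit that merges all of them (assumes jj is installed).
jj git init megamerge-demo && cd megamerge-demo
jj describe -m "main"
jj new 'description("main")' -m "add capability"
jj new 'description("main")' -m "fix parser bug"
jj new 'description("main")' -m "fix renderer bug"
# Continue feature work on a commit whose parents are all three changes:
jj new 'description("add capability")' 'description("fix parser bug")' \
    'description("fix renderer bug")' -m "feature work"
# Each change can still become its own bookmark (branch) and PR:
jj bookmark create fix-parser -r 'description("fix parser bug")'
```

When one of the three lands upstream, moving the remaining work onto the new trunk is roughly a single `jj rebase`.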
If you think about it, git is really just a big undo/redo button and a big "merge 2 branches" button, plus some more fancy stuff on top of those primitives.
"Merge 2 branches" is already far from being a primitive. A git repository is just a graph of snapshots of some files and directories that can be manipulated in various ways, and git itself is a bunch of tools to manipulate that graph, sometimes directly (plumbing) and sometimes in an opinionated way (porcelain). Merging is nothing but creating a node (commit) that has more than one parent (not necessarily two) combined with a pluggable tool that helps you reconcile the contents of these parents (which does not actually have to be used at all as the result does not have to be related to any of the parents).
(You may know that already, but maybe someone who reads this will find it helpful for forming a good mental model, as so many people lack one despite working with git daily.)
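To make the "merge is not a primitive" point concrete, here is a sketch that uses only plumbing to create an octopus node in a throwaway repo; the tree is simply reused from an existing commit, with no merge machinery involved at all:

```shell
# A merge commit is just a node with more than one parent. Plumbing can
# create one with any number of parents and any tree.
git init -q -b main octopus-demo && cd octopus-demo
git config user.name demo && git config user.email demo@example.com
echo base > f.txt && git add f.txt && git commit -qm "base"
for br in a b c; do
  git checkout -qb "$br" main
  echo "$br" > "$br.txt" && git add "$br.txt" && git commit -qm "$br"
done
# Build a commit whose parents are a, b, and c, reusing base's tree:
tree=$(git rev-parse 'main^{tree}')
commit=$(git commit-tree "$tree" -p a -p b -p c -m "octopus node")
git update-ref refs/heads/octopus "$commit"
git rev-list --parents -n 1 octopus   # one line: the commit plus its 3 parents
```

Note that the resulting tree has no relation to any of the three parents' trees, which is exactly the point: reconciling contents is a separate, pluggable concern.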
Even with that workflow jj can help a lot. Haven't you ever been annoyed by situations like, while working on a few features at once, having unrelated changes from different feature branches piling up in the stash? Or wanting to switch to another branch mid-rebase without losing your place? jj's working-copy-as-commit model and its first-class treatment of conflicts address those pain points.
No? You work on something and finish it. At most I have 2-3 feature branches open. If none are in review, I have commits in them with current work. Maybe I use the stash 2-3 times a year when I am heavily experimenting with different implementations.
Depending on people's workflow/mindset, we often face stacked branches and lots of fixup commits, and over the years new git commands and tricks have emerged to deal with that, but not in a cohesive way, I guess. JJ seems (I only tried it for a short while, a long time ago) to address just that.
I didn't at first, and let's be honest, most of our workflow is within what you described. However, in the era of AI-assisted coding I found JJ more pragmatic. I can easily abandon or create new work without having to stash or commit, etc. Another thing I really like about JJ is its undo: did I commit something and realize I forgot to add something? `jj undo` reverts the last operation. Not only is my workflow ergonomically more flexible because of JJ, it's also more forgiving, without my having to memorize a lot of git commands. And it integrates with existing git repos for a smooth transition!
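A minimal sketch of that undo behavior in a throwaway repo (assumes jj is installed; the descriptions are made up):

```shell
# jj records every command in an operation log, and any operation can be
# reverted.
jj git init undo-demo && cd undo-demo
jj describe -m "good description"
jj describe -m "oops, clobbered it"
jj undo      # reverts the last operation; the description is "good" again
jj op log    # lists every operation; jj op restore can jump to any of them
```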
The thing is, JJ makes mega merges easy... which opens paths to simple but powerful workflows that match reality better. Having multiple converging changes, or even separate bits of history for $reasons, becomes easy without rebasing and serializing PRs.
And better conflict resolution means it often becomes viable to just have the mega merge as the next release.
> I don't get JJ. Every time it's posted people gush about how JJ enables some super complicated workflow that I can't wrap my head around.
This. Things like stacks and mega-merges are huge red flags, and seeing enthusiastic people praising how a tool is more convenient to do things that raise huge red flags is perplexing.
Let's entertain the idea of mega-merges, and assume a tool fixes all tool-related issues. What's the plan to review the changes? Because what makes mega merges hard is not the conflicts but ensuring the change makes sense.
I use jj but not mega merges. But as I understand it, you're not going to push the merge itself for review. It allows you to work locally on multiple branches at once, but when ready you push the individual branch, pre-merge, for review.
More replaceable batteries can have secondary effects that most people would probably like, though - like the ability to buy a used phone on eBay/FB Marketplace that doesn't have an abysmal battery.
Seems like a solved problem for consoles, at least. On the Nintendo Switch you can "pause" any game, regardless of whether the devs implemented it, by pressing the home button, which suspends the entire game at the OS level.
If by solved you mean it's a feature you're required to support... It can never be truly seamless when things like wall clock or device state (SD card is missing suddenly) or network connections disappear.
That is different, because you can't interact with the game anymore. An in-game pause can let you change your settings or check the map, for example. Menus and the map are still running in the game loop, so you need to make sure they get the input events but not the gameplay part.
Nintendo, like all other platform-owners (e.g. Meta/Quest, Sony, Microsoft) is VERY strict about games released on their platform and have very strict requirements before anything is allowed to be sold with the Nintendo label. I very highly doubt they let devs NOT implement the pause ability. AFAIK you can't just OS-pause a game and expect it to run fine when it resumes, there are soooo many systems at play: animation, physics, sound, input, etc. that need to be cleanly stopped/resumed that I doubt it's as easy as just OS-pausing.
This used to be true, but one trip to any modern e-store front should dispel the notion. So much slop. Even for arguably non-slop, so much just rapidly crashes and is unplayable. The extent of platform certification these days for most titles seems to be: can launch, can back out to the console top level, and maybe doesn't crash if a controller is added/removed.
The thing is, in isolation, balancing the budget looks pretty easy. It's only because you have to deal with particular interest groups and a populace who has come to believe that any tax increase means they're getting shafted. I was able to balance that budget with the following changes:
1. Top one percent effective tax rate goes from 24 to 30 percent.
2. Higher income goes from 12.26 to 14.26.
3. Upper middle income goes from 7.7 to 8.7.
4. Middle income goes from 4.8 to 5.8.
5. Lower middle income goes from 0.1 to 1.1.
6. Lower income goes from -4.1 to -3.1.
7. Social payroll taxable maximum goes to 90% of taxable income.
Those changes alone, with absolutely no spending changes, balance the budget. Now, I'm not proposing that those changes are politically viable, and you can certainly fiddle with my distribution if you think something else would be fairer. (I think it's fair because the rich have done much better than everyone else over the past 40 years, so I think they can afford to pay more; but I also think that everyone should have to contribute something more, or else you get the current problematic belief that the issue can be solved just by taxing somebody else.) But I would strongly disagree if you wanted to argue that those changes would result in any substantial change in standard of living for anyone.
I think, numerically, the problem can pretty easily be solved just by taxation alone (though I think it would make sense to add some spending cuts), just not politically.
Site looked interesting, so I was just like "what would it look like to have a top tax rate like we did in the 1950s-1980s (Before Reagan dropped it)?" [1] And the hilarious thing is the propaganda it spews without any backing of data:
>At a certain point, increases in tax rates will not raise more revenue. Once someone's tax rate becomes sufficiently high, they might work less or try harder to evade taxes. Based on existing evidence, this simulation assumes that increasing this group’s tax rate beyond your current level is unlikely to raise more revenue.
And then it pretends like the maximum amount of income you can get out of the richest 1% is $203B.
This is rich people's propaganda that we've bought into by pretending that Rich people don't need this country. But that's a lie. If it were true, they would just move, instead of fighting tooth and nail for tax cuts in every single election.
Also, the breakdown of spending categories and the way they're represented are pretty clearly politically motivated, and the numbers look a little suspicious to me. They don't even align with the CBO numbers.
Another really obvious thing missing is a "Capital Gains Tax"
Which is currently pegged at around 20%, and is how CEOs get all of their income. So if capital gains were taxed as income, I think that would at least start to make the income tax realistic.
> Those changes alone, with absolutely no spending changes, balance the budget.
I tried out the calculator and put in all your changes, and the budget wasn't balanced; there was still a 1.4T deficit (as opposed to the current 1.9T deficit). The app only claims the budget is "sustainable" now because it assumes GDP keeps growing at the same rate (which might not be true), and if so we'll hit a 3%-of-GDP "deficit target" in 25 years. Also, adjusting a negative tax rate kind of is, in fact, reducing spending (i.e. the federal government reduces the amount of tax credits it gives out). This also assumes the federal government will not introduce new programs, new spending, etc. So really all you did was reduce the deficit by 0.5T, along with a hope and a prayer that the economy will continue to grow at the same rate for the next 25 years (while the federal government does not increase spending).

I personally think it's bad to have a deficit at all and that we should work towards zero deficit and eventually surplus. (Yes, I know there are all sorts of growth hacks and such you can do with debt, but historically politicians have succumbed to slippery-slope deficit increases, and for that reason alone I think holding politicians to a zero-deficit standard is best - do it for a few generations and now there's a precedent that protects us from getting into the situation we are currently in.) To me a "balanced budget" means your spending is <= your income.
Anyway, interesting calculator app. I do see the value in raising taxes for sure, but it's not easy politically to raise taxes and it's also not easy politically to cut spending (whichever group likes the thing you cut will scream), so ultimately it might have to be a hybrid solution where democrats increase taxes without increasing spending when they are in power and republicans cut spending without decreasing taxes when they are in power. When I say that out loud though it seems like a pipe dream, sigh...
It's more likely both are true. We can afford to do more for the people, but at the same time we are over-spending. Streamlining some of these functions would be nice. One area where we are vastly over-spending is highway and roadway construction, for example. Even if we can afford it, we shouldn't pay for it. There are other more politically hot topics here, and both general sides of the debate have merit, but we should try not to be dogmatic about it and instead think in systems terms and long-term outcomes. When I see a city or state spending $400,000 each on units for housing homeless people, well, that's obviously a misuse of funds. That's not sustainable. We shouldn't do it even if we can afford it. When we spend $50 billion in a week on the Iran war (which I support, but just as an example), well, that $50 billion could have paid off a lot of mortgages - so maybe we should or could do that instead.
Maybe start with universal healthcare and rezoning laws so Airbnb can’t sit on housing. Make public college free. Reduce military spending drastically. Force billionaires to pay a 25% tax on net worth (they’d still increase their wealth).
I don't like or valorize billionaires, I guess (I mostly don't care about them), but I don't understand what's "inhumane" here. There aren't very many billionaires. Billion dollar companies are far more salient to ordinary people than billionaires are. And, obviously, you can't fund universal health care by liquidating the billionaires!
I've never really understood why people are so het up about billionaires. The distinction between them and decimillionaires seems mostly like comic book lifestyle stuff; like, OK, they fly their pets private for visitation with their ex-spouses or whatever, I guess that's offensive aesthetically?
Far, far more damaging to ordinary people is the Faustian bargain struck between the upper middle class and the (much smaller) upper class, which redistributes vast sums of money away from working class people into the bank accounts of suburban homeowners.
(Because fundamental attribution error guarantees threads like this will devolve into abstract left vs. right valence arguments, a policy stake in the ground: I broadly favor significantly higher and more progressive taxes, starting with a reconsideration of the degree to which we favor cap gains.)
I really applaud the work MacKenzie Scott is doing. A lot of billionaires play the "aw shucks, if only someone would tax me" card - nothing is stopping them from just donating to the government if they really thought that. We have a housing problem; why not play SimCity in real life and build houses for people or something? Personally I think it would be a blast.
Similarly though, there's nothing stopping you personally from taking $50, $100, whatever and walking over to a shelter or food bank and donating. You don't need to wait for the government to stand up a program. Lead by example like MacKenzie Scott does. We donate money to local organizations - again, no barriers here.
I don't care if someone is a billionaire, though of course we should tax them "appropriately". But if you're really mad about billionaires and you want these programs, you should be giving away your own money too and there's nothing stopping you. Waiting until you get just the right program or tax the right person is a bad strategy if you really care about some of these issues.
Yeah, it's always funny to see how MMT is a perfectly acceptable way to create tax cuts and enable corporate welfare but if you suddenly want universal medicare or childcare suddenly we care about budgets or MMT is suddenly impractical.
This should actually allow for a balanced budget and still affording everything. The problem is, the USA has the best government money can buy and it wasn’t bought by the people.
Pretty much all playdate games could just be enjoyed in the browser. The only thing you're missing is the physical crank and honestly it's a little gimmicky.
> You don’t get points for being clever. You win by paying more.
And yet... Wireguard was written by one guy while OpenVPN is written by a big team. One code base is orders of magnitude bigger than the other. Which should I bet LLMs will find more cybersecurity problems with? My vote is on OpenVPN despite it being the less clever and "more money thrown at" solution.
So yes, I do think you get points for being clever, assuming you are competent. If you are clever enough to build a solution that's much smaller/simpler than your competition, you can also get away with spending less on cybersecurity audits (be they LLM tokens or not).
Unfortunately as a diabetic, oatmeal is one of the most difficult foods to control. I question how healthy it is given how high and how fast my blood sugar spikes after eating some. Oats are converted to glucose very quickly it seems, and that's without all the added sugar OP recommends. I won't dispute that it's delicious though.
Use thicker oats. Do not add sugar or any sweet milk. Also, if you sprinkle ceylon cinnamon and fenugreek powders, the impact will be less. For more effect, I used to microwave it in black tea instead of water.
Heating and then cooling oatmeal should allow it to form some resistant starch of type RS3. This will spike glucose a little less, but it causes much more gas.
Other effective hacks are gymnema, berberine, thiamine, and benfotiamine supplements, all of which help with glucose regulation.
Acacia fiber powder in oatmeal could be a worthy hack too, but I have yet to try it.
I used oatmeal with water and it still always spiked - every body is different. How much less did it spike when you used fenugreek? What other blood sugar spike hacks do you use?
Others that I know of — freezing bread changes the starch, and extra virgin olive oil and almond butter are high in oleic acid, so with the right amount it won’t spike as much.
I have now updated the parent comment with more hacks. As for a quantification, I don't have one. The only related metric I quantify is HbA1c that I measure at home every few months.
Are we talking of steel-cut oats here? The glycemic index for steel-cut oats is moderate. Instant oats, on the other hand, raise your blood glucose very rapidly.
Oats are about 15% calories from protein, 85% from carbs.
High protein foods would be:
egg white (90% calories from protein), chicken breast (80%), lean fish such as cod (90%)
medium protein foods would be:
fatty beef (e.g. ribeye) 50% calories from protein, cottage cheese (60%), fatty fish like salmon (55% calories from protein), whole eggs (fatty yolk plus white, 36% calories from protein), soybeans (36-40%)
low protein foods would be:
lentils (30%), 2% milk (26% calories from protein), lima beans (22% protein), parmesan cheese (30%), summer squash/zucchini (24% protein), most mushrooms (25-30%).
very-low protein foods would be:
rice (9%), onions (9%), winter squash (10%), red bell peppers (12-13%), sweet corn (12%)
Here, by very low, I mean that if you try to get your protein from these sources, you will end up obese unless you expend extreme amounts of energy exercising or accept a serious protein deficit (muscle loss). You can get a decent amount of protein if you are downing lentils, whole milk, parmesan, soybeans, salmon, etc. - you don't need to eat high-protein foods - but this is about the bottom level for getting reasonable protein while maintaining reasonable weight, unless you are a day laborer or expending massive calories.
At only 15% calories from protein (the rest being carbs), oats would be not much better than corn in terms of protein content per calorie consumed. Nothing wrong with eating some corn on the cob, but that's not gonna be a major source of protein for anyone unless you are willing to consume huge amounts of carbs.
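The percentages above are easy to sanity-check with the usual 4/4/9 kcal-per-gram rule; the macro values below are my own ballpark figures per 100 g, not the parent's data:

```python
# Sanity-check "% of calories from protein" using 4 kcal/g for protein and
# carbs, 9 kcal/g for fat. Macro values are approximate, per 100 g.
def protein_calorie_share(protein_g, carb_g, fat_g):
    calories = 4 * protein_g + 4 * carb_g + 9 * fat_g
    return 100 * 4 * protein_g / calories

foods = {
    # name: (protein g, carbs g, fat g) -- rough figures
    "rolled oats (dry)": (13, 68, 7),
    "chicken breast": (31, 0, 3.6),
    "low-fat cottage cheese": (12, 4, 1),
    "lentils (cooked)": (9, 20, 0.4),
}
for name, macros in foods.items():
    share = protein_calorie_share(*macros)
    print(f"{name}: {share:.0f}% of calories from protein")
```

With these assumed figures, oats land around 13% of calories from protein, in the same ballpark as the ~15% above, while chicken breast comes out near 80%.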
I assume they mean non animal sources of protein. Of course it'll be hard for plant based foods to compete. Out of non animal sources, oatmeal is pretty good, especially as a cheap staple food no less
The human body does not grade on a curve. There is as much protein in oatmeal per calorie as there is in red bell peppers and obviously people don't cite red bell peppers as a high protein food, this is true even if for some reason they really prefer to eat red bell peppers.
If you want something from non-animal sources, go for mushrooms and soybeans, which have twice as much protein per calorie as oats. Mushrooms are an under-rated source of protein, as is cottage cheese.
> The ARR shows the extent by which total predicted COVID-19 deaths exceeded officially reported COVID-19 deaths during the period. A limited number of counties had ARRs < 1, which suggests that there were more officially reported COVID-19 deaths than total predicted COVID-19 deaths. One reason that a county could have an ARR < 1 is if death certifiers recorded people as dying from COVID-19 when they had COVID-19 but actually died from another unrelated cause.
Afaik that was a story that spread around but has very little connection with reality. As I recall, this was mostly down to people who don’t know how to read a death certificate misunderstanding what goes in the cause-of-death field. I.e., something along the lines of: it would list the cause of death as “organ failure”, because that was what caused death - but covid caused the organ failure.
Open to a good source that says otherwise, of course
Not to mention if you made one app in college and then didn't keep up with the SDK updates, Google perma-closes the entire Play account such that the only way to publish a new app is by creating a brand new gmail account
Forcing people to keep up with SDK updates is a bad thing in itself. Let people target the earliest possible feature set and make the app run on as many phones as possible rather than showing scary messages to people due to targeting an older API.
I think the problem is that older SDK versions allowed you to do things like scan local WiFi names to get location data, without requiring the location permission.
So bad actors would just target lower SDK versions and ignore the privacy improvements
The newer Android version could simply give empty data (for example, location is 0,0 latitude longitude, there are no visible WiFi networks), when the permission is missing and an app on the old SDK version requests it.
Of course, they don't like this because then apps can't easily refuse to work if not allowed to spy.
Phone companies are required to make sure 911 works on their phones. Random people on the internet aren't required to make sure 911 works on random apps, even if they look like phones.