It's a slightly different argument. The level of "reasonable risk" depends on the attacker in both situations.
The odds of any individual crafting a special packet to crash my system are absurdly low.
However, "absurdly low" is all it takes. One individual came up with the ping-of-death, one more wrote a script to automate it, and systems worldwide were being taken down by random teenagers in the late nineties.
As a result of these and other absurd attacks, any modern IP stack is hardened to extreme levels.
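The ping-of-death worked because a fragmented IPv4 packet could reassemble to more than the 65,535-byte maximum, overflowing fixed-size buffers. A hardened stack rejects that before it ever reassembles. A minimal sketch of the idea (the function and fragment model are illustrative, not any real stack's API):

```python
# Illustrative sketch of the bounds check a hardened IP stack performs.
# A fragment is modeled as (offset_in_bytes, payload_length); real stacks
# track offsets in 8-byte units, but the check is the same idea.

MAX_IP_PACKET = 65535  # maximum total length an IPv4 packet may claim

def fragment_is_sane(offset: int, length: int) -> bool:
    """Reject any fragment whose data would extend past the 64 KiB limit."""
    return offset >= 0 and length > 0 and offset + length <= MAX_IP_PACKET

# A ping-of-death-style fragment claims an offset near the end of the
# packet plus enough payload to push the total past 65535 bytes:
assert fragment_is_sane(0, 1480)          # ordinary fragment: fine
assert not fragment_is_sane(65120, 1480)  # 65120 + 1480 > 65535: drop it
```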
In contrast, my house lock is pretty easy to pick (much easier than crafting the ping-of-death), and I sometimes don't even remember to lock it. That's okay, since the threat profile isn't "anyone on the internet," but is rather limited (to people in my community who happen to be trying to break into my house).
I don't need to protect my home against the world's most elite criminals, since they're not likely to be in that very limited set of people. I do need to protect any software I build against them.
That applies both to system threats and to component threats. Digital systems need to be incredibly hard.
Google used to know that too. I'm not sure when they unlearned that lesson.
Do you think there’s a standard for “incredibly hard” that all applications need to follow? Or that it varies from one application to another depending on context?
It depends on context. There are many pieces here:
1) Cost of compromise.
- For example, medical data, military secrets, and highly-personal data need a high level of security.
- Something like Sudoku high scores, perhaps not so much.
2) Benefit of compromise. Some compromises net $0, and some $1M.
- Something used by 4B users (like Google) has much higher potential upside than something used by 1 user. If someone can scam-at-scale, that's a lot of money.
- Something managing $4B of bitcoin or with designs for the F35 has much higher upside than something with Sudoku high scores.
3) Exposure.
- A script I wrote which I run on my local computer doesn't need any security. It's like my bedroom door.
- A one-off home, school, or business-internal system is only exposed to those communities, and doesn't need to be excessively hardened. It's more-or-less the same as physical security.
- Something on the public internet needs a lot more.
This, again, speaks to number of potential attackers (0, dozens/hundreds, or 7B).
#1 and #2 are obvious. #3 is the one where I see people screw up with arguments. Threats which seem absurdly unlikely are exploited all the time on the public internet, and intuition from the real world doesn't translate at all.
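One back-of-the-envelope way to combine the three factors is to rate each and multiply; exposure of zero then correctly zeroes out the whole score. The ratings, scale, and examples below are invented for illustration:

```python
# Toy risk triage combining the three factors above: cost of compromise,
# benefit to the attacker, and exposure (number of potential attackers).
# All scales and values here are made up for illustration.

def risk_score(cost_of_compromise: int, attacker_benefit: int, exposure: int) -> int:
    """Each input is a 0-3 rating; a higher product means harden more."""
    return cost_of_compromise * attacker_benefit * exposure

# exposure: 0 = local script, 1 = home/school/business network, 3 = public internet
local_script   = risk_score(cost_of_compromise=1, attacker_benefit=1, exposure=0)
internal_tool  = risk_score(cost_of_compromise=2, attacker_benefit=1, exposure=1)
public_service = risk_score(cost_of_compromise=2, attacker_benefit=3, exposure=3)

assert local_script == 0               # matches "doesn't need any security"
assert internal_tool < public_service  # public exposure dominates
```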
If I’m reading you right, if a business had a non-critical internal system (internal network behind a strong VPN) with the potential for a CSRF attack, you wouldn’t call that a risk?
Having that is like having glass windows (at least at street level).
Whether it's a risk worth addressing depends on a lot of specifics.
For example, a CSRF attack on something like sharepoint.business.com could be externally exploited with automated exploits. That brings you to the 7B attacker scenario, and if the business has 100,000 employees, likely one of them will hit on an attack.
A CSRF attack on a custom application only five employees know about has decent security-by-obscurity. An attacker would need to know URLs and similar business-internal information, which only five people have access to. Those five people can just as easily walk into the CEO's office and physically compromise the machine.
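If you do decide the risk is worth addressing, the standard defense against CSRF is the same whether the app is internal or public: a per-session synchronizer token that a forged cross-site request can't know. A minimal sketch using only the standard library (the function names and dict-based session are illustrative, not any real framework's API):

```python
import hmac
import secrets

# Minimal synchronizer-token sketch. The server stores one random token
# per session and embeds it in every form it renders; a forged cross-site
# request can't read the victim's page, so it can't supply the token.

def issue_csrf_token(session: dict) -> str:
    """Generate a token once per session; embed it in each rendered form."""
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def verify_csrf_token(session: dict, submitted: str) -> bool:
    """Constant-time comparison of the submitted token against the session's."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
token = issue_csrf_token(session)
assert verify_csrf_token(session, token)        # legitimate form post
assert not verify_csrf_token(session, "guess")  # forged cross-site request
```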