Today BigDecimal (arbitrary precision) should be the easy-to-use default and float the annoying-to-invoke exception. But it's understandable why the designers wouldn't have done that in the 90s.
Most of the time it doesn't matter. Double (or float) by default is premature optimization. When you start dealing with millions or billions of values, sure, reach for the annoying, counterintuitive, footgun-laden numeric types. But that's not most code.
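To make the footgun concrete, here's a minimal sketch of the classic surprise with binary floating point versus BigDecimal (class names hypothetical, just for illustration):

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // double: binary floating point cannot represent 0.1 exactly
        double d = 0.1 + 0.2;
        System.out.println(d);        // 0.30000000000000004
        System.out.println(d == 0.3); // false

        // BigDecimal: exact decimal arithmetic. Note the String constructor;
        // new BigDecimal(0.1) would inherit the binary rounding error.
        BigDecimal b = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(b);                                        // 0.3
        System.out.println(b.compareTo(new BigDecimal("0.3")) == 0);  // true
    }
}
```

Note the use of compareTo rather than equals: BigDecimal.equals also compares scale, so 0.3 and 0.30 are equal by compareTo but not by equals.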
Well, (tens of) thousands of doubles vs. BigDecimals (think of marketing data), plus any operations over them, is a massive difference.
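A sketch of the kind of bulk computation being described, assuming a dataset of ~10k values: the double loop is one primitive add per element, while every BigDecimal add allocates a new immutable object, which is where the cost difference comes from (actual timings omitted; they depend on the JVM). The flip side, shown here, is that the double sum drifts while the BigDecimal sum stays exact:

```java
import java.math.BigDecimal;

public class BulkSum {
    public static void main(String[] args) {
        int n = 10_000; // hypothetical dataset size

        // double: fast primitive adds, but the rounding error accumulates
        double doubleSum = 0.0;
        for (int i = 0; i < n; i++) {
            doubleSum += 0.1;
        }
        System.out.println(doubleSum == 1000.0); // false: the sum has drifted

        // BigDecimal: exact result, but one object allocation per add
        BigDecimal decimalSum = BigDecimal.ZERO;
        BigDecimal tenth = new BigDecimal("0.1");
        for (int i = 0; i < n; i++) {
            decimalSum = decimalSum.add(tenth);
        }
        System.out.println(decimalSum.compareTo(new BigDecimal(1000)) == 0); // true
    }
}
```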
Calling it premature optimization is beyond uncalled for. Learning to use floating-point types is something most developers should do. No need for the weasel words, either ("annoying", "counterintuitive").