I guess in IT it's the other way around: you can start with advanced abstractions and frameworks without worrying about what kind of rabbit hole lies underneath (at least while things are going smoothly), and it only gets deeper as you move closer to the hardware. In maths, by contrast, you need to understand the basics to even know what's going on in more advanced areas. And that already applies to the fundamentals too: in school you're expected to understand the material from previous years. At least that's my impression as a non-mathematician.
And then there's the mix between the two in languages that rely heavily on powerful type systems.
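To make that concrete, here's a minimal Haskell sketch (Haskell is my own stand-in here; the comment names no specific language). You can use `traverse`, a function with fairly deep theory behind it, purely by following its type signature, never having studied the applicative or traversable laws underneath:

```haskell
import Text.Read (readMaybe)

-- The type of `traverse`,
--   traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
-- is enough to apply it correctly, much like using a theorem's
-- statement without reading its proof.

-- Parse a list of strings into Ints; Nothing if any single parse fails.
parseAll :: [String] -> Maybe [Int]
parseAll = traverse readMaybe

main :: IO ()
main = do
  print (parseAll ["1", "2", "3"])   -- Just [1,2,3]
  print (parseAll ["1", "oops"])     -- Nothing
```

The type system guides the usage, but understanding *why* it works (the laws the abstraction obeys) is the part that resembles reading the proof.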
No, you can do the same in math. It just means ignoring the proofs and using the theorems as they are. Sometimes, though, applying a theorem requires the same way of thinking as its proof, and you miss out by skipping it. But the same is occasionally true in software engineering.