But that still had nothing to do with comparing lossy vs lossless, or am I misunderstanding you?
How does a live performance that you hear with your ears at a specific place in a room help you pick out missing parts in a different recording, played by different people in a different place, recorded with multiple mics and then mixed and mastered?
It seems plausible to me. I assume that when you are doing a comparison, you are comparing a single source to a memory (does anyone do comparisons by playing two synchronized sources together, possibly into different ears?). In that case, I can well imagine that listening to multiple live performances primes one's mind to remember clearly how a given presentation sounded and to pick out small differences, precisely because live performances are all slightly different. I would further imagine that performing a piece, and particularly practicing with the rest of the orchestra or conducting a rehearsal, further enhances one's ability to notice and characterize small differences.
Of course, this might be utter nonsense, and I will bow to bayindirh's judgement on that!
Of course you can train your ears to hear more detail and learn to differentiate and identify frequencies.
That'll definitely help with hearing differences between two tracks. But I don't think you can compare live music to recorded tracks. Especially with acoustic instruments, the room is such a big factor.
I am not actually suggesting that one should compare recorded music to live performances for the purpose of comparing audio encoding technologies. Oddly, your reply to bayindirh is in complete agreement with what I wrote here.
I think he explained it a little better. I totally understand that you'll learn to identify which frequencies and sounds belong to which instrument, and in turn learn to notice when those are missing.
I guess that also teaches your ears to identify differences in other situations more clearly.
That's what mixing and mastering engineers practice their whole career and get really good at.
It has, but in a different way w.r.t. comparing different sound systems with the same recording. Let me try to explain. You might know some of the following; sorry if it's a re-explanation.
In a proper concert hall, sound is expected to be homogeneous, so you should be able to listen to the orchestra equally well, with the same sound balance (or mix), regardless of where you sit. Similarly, recordings are done with suspended or positioned (and ideally tuned) mics, so they capture the orchestra as someone sitting in the audience would hear it. At least this is how our performances were recorded.
The mastering is then done to match the recorded sound to the hall's sound, balance out any imperfections, and clean up the orchestra's internal chatter between pieces (yes, we communicate a lot :D ).
When you listen to an orchestra live, you will have a lossless blueprint of the piece in your mind (track by track, if you can separate the instruments). If you can get a recording of the same performance, you can compare it with the live performance. That's absolutely correct.
But if you listen to a recording of a different orchestra playing the same piece, the arrangement and instrumentation will be the same (you may have 8 violins instead of 12, but violins won't be replaced by violas most of the time). So the atmosphere of the piece will be the same. Assuming the recording is done by competent folks, the spectrum will be the same (roughly ~20Hz -> ~20kHz).
After some point, even if you're listening to a different orchestra, you can start to point to the things that should be there. It's very hard to describe, but every instrument has a base sound and details on top of it (you can tell they're all trumpets, but different brands or models; similarly, you can tell they're double basses, but they differ in some ways). That base sound starts to erode too when you have lossy compression, and in turn that affects the sound of the piece, regardless of the finer details (which are mostly affected by rosins, bows, styles, etc.).
It's a "these two instruments shouldn't interact like this in this piece. Something is missing!" kind of feeling. The missing part is something at the high or low end, almost a harmonic. It's not noticeable unless you're looking for it, but it's there.
That difference can be heard clearly by re-encoding a FLAC as a high-bitrate MP3 and taking the difference between the two. It's a hiss-like sound, but it contains a lot of those harmonics, and you can almost follow the piece just by listening to the residual. Someone did that and published the differences, but it was some years ago; I'm not sure I can replicate it or find the article. That article took the difference of the exact same recording, but after some time your brain can apply the same comparison across different recordings.
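The idea behind that difference test (often called a "null test") can be sketched in a few lines. This is a minimal, hypothetical simulation: instead of real codecs, it synthesizes a tone with harmonics and fakes the lossy step by simply dropping the quietest harmonic, which is only a crude stand-in for what MP3's psychoacoustic model actually does. The residual is exactly the material the "codec" threw away.

```python
import math

SR = 8000   # sample rate in Hz (arbitrary for this sketch)
N = 8000    # one second of samples

def tone(freq, amp):
    """A pure sine wave at the given frequency and amplitude."""
    return [amp * math.sin(2 * math.pi * freq * n / SR) for n in range(N)]

# "Lossless" signal: a fundamental plus two harmonics.
fundamental = tone(440, 1.0)
h2 = tone(880, 0.5)
h3 = tone(1320, 0.25)
lossless = [a + b + c for a, b, c in zip(fundamental, h2, h3)]

# Crude stand-in for a lossy codec: it discards the quiet top harmonic.
lossy = [a + b for a, b in zip(fundamental, h2)]

# Null test: subtract the lossy version from the lossless one.
residual = [x - y for x, y in zip(lossless, lossy)]

def rms(sig):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in sig) / len(sig))

# The residual carries the energy of the discarded harmonic:
# RMS of a 0.25-amplitude sine is 0.25 / sqrt(2) ~= 0.1768.
print(round(rms(residual), 4))
```

With real files you would do the same thing by decoding the MP3 back to PCM, sample-aligning it with the FLAC (MP3 encoders add a small delay), inverting one signal, and mixing the two; what remains is the hiss-like residual described above.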
Hope I've succeeded in clarifying it somewhat. It's very hard to describe in words. Please ask more questions if you want to. :) I'd be happy to try more.
Comparing two orchestras can be similar to comparing a recording in MP3 to FLAC.
I think I get the point: learning to listen to those details and recognize frequencies can enhance your ability to spot differences in encoded audio.