If I’m recording from N neurons, I’m recording from an N-dimensional system. Each neuron’s firing rate is an axis in this space. If each neuron is maximally uncorrelated with every other neuron, the system will be maximally high dimensional: its dimensionality will be N. Geometrically, you can think of the state vector of the system (where again, each element is the firing rate of one neuron) as eventually visiting every part of this N-dimensional space. Interestingly, however, neural activity actually tends to be fairly low dimensional (3, 4, 5 dimensions) in most experiments. This is because neurons tend to be highly correlated with each other. So the state vector of neural activity doesn’t actually visit every point in this high-dimensional space. It tends to stay in a low-dimensional subspace, or on a “manifold”, within the N-dimensional space.
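The idea above can be sketched in a few lines of NumPy. This is a hypothetical toy example (the neuron counts and noise level are made up): 50 simulated neurons all driven by only 3 shared latent signals, so PCA finds that ~3 dimensions capture nearly all the variance even though the state space is 50-dimensional.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate firing rates of N = 50 neurons over T = 1000 time bins,
# driven by only K = 3 shared latent signals plus a little noise.
N, T, K = 50, 1000, 3
latents = rng.standard_normal((K, T))      # the 3 underlying signals
weights = rng.standard_normal((N, K))      # how each neuron mixes them
rates = weights @ latents + 0.1 * rng.standard_normal((N, T))

# PCA via SVD of the mean-centered data: the singular values tell us
# how much variance each dimension of the state space captures.
centered = rates - rates.mean(axis=1, keepdims=True)
s = np.linalg.svd(centered, compute_uv=False)
var_explained = (s**2) / (s**2).sum()

# Because the neurons are highly correlated (they share 3 latents),
# the first ~3 components capture almost all the variance.
print(var_explained[:5].round(3))
```

Swap in real firing-rate data for `rates` and the same variance-explained curve tells you how low dimensional the recorded population is.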
Agreed, it's really cool :). A lot of this is very new -- it's only been in the past decade and a half or so that we've been able to record from large populations of neurons (on the order of hundreds and up, see [0]). But there are a lot of smart people working on figuring out how to make sense of this data, and why we see low-dimensional signals in these population recordings. Here are some good reviews on the subject: [1], [2], [3], [4], and [5].
I'm curious about how much of this apparent low dimensionality is explained by (1) the physical proximity of the neurons being recorded, and (2) the poverty of the stimuli (just 4 sequences in this paper, if I'm not mistaken).
Both good questions. It could very well be that low dimensionality is simply a byproduct of the fact that neuroscientists train animals on such simple (i.e., low-dimensional) tasks. This paper [0] argues exactly that. As for your first point, it is known that auditory cortex exhibits tonotopy, such that nearby neurons in auditory cortex respond to similar frequencies. But much of cortex doesn't really exhibit this kind of simple organization. Regardless, technological advancements are making it easier for us to record from large populations of neurons (as well as track behavior in 3D) while animals freely move in more naturalistic environments. I think these kinds of experiments will make it clearer whether low-dimensional dynamics are a byproduct of simple task designs.
Look up state space, then neural population and neural coding.
This isn't really something about neurons per se, it's about systems.
Suppose I have a system that can be fully characterized (for my purposes) by two numbers: temperature and pressure. If I take every possible temperature and every possible pressure, these form a vector space. But notice that temperature and pressure are not positions in the real world. It's a "state space" or "configuration space". At any moment in time, I could measure my system's temperature and pressure, and plot a point at (temperature(t), pressure(t)). As the system changes through time according to whatever rules govern its behaviour, I could take snapshots and plot those points (temperature(t+1), pressure(t+1)), (temperature(t+2), pressure(t+2)). This would give a curve, or "trajectory", that represents the system's evolution over time.
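Here's what that looks like in code. The dynamics below are made up (a simple relaxation toward an arbitrary equilibrium of T=300, P=100); the point is just that each snapshot is one point in a 2D state space, and the sequence of snapshots is the trajectory.

```python
import numpy as np

# Hypothetical toy system: temperature and pressure relax toward an
# equilibrium point. The rule itself doesn't matter; what matters is
# that each snapshot (temperature(t), pressure(t)) is a single point
# in a 2D state space, not a position in the real world.
equilibrium = np.array([300.0, 100.0])
state = np.array([350.0, 80.0])       # (temperature, pressure) at t=0

trajectory = [state.copy()]
for t in range(50):
    state += 0.1 * (equilibrium - state)   # simple relaxation dynamics
    trajectory.append(state.copy())

# The stacked snapshots form a curve through state space: the trajectory.
trajectory = np.array(trajectory)          # shape (51, 2)
print(trajectory[0], trajectory[-1])
```

Plotting `trajectory[:, 0]` against `trajectory[:, 1]` would draw the curve described above.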
Okay, that's a 2D state space. But imagine I had a simulation of 10 particles (maybe some planetary simulation for a game). For each particle I have maybe a 3D position (x, y, z) and a 3D velocity (vx, vy, vz). So I need 6 numbers to fully describe the state of each particle, and I have 10 particles. Therefore to fully describe the state of the whole system, I need 60 numbers. I therefore have a 60-dimensional state space. But each of these dimensions does not represent a position measurement along some axis in the world. In fact, only 30 of them (3 * 10) do; the other 30 represent velocities.
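The particle example can be made concrete by packing everything into one vector (the particle values here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# 10 particles, each described by a 3D position and a 3D velocity,
# so the full system state is one point in a 60-dimensional space.
n_particles = 10
positions = rng.uniform(-1, 1, size=(n_particles, 3))    # (x, y, z)
velocities = rng.uniform(-1, 1, size=(n_particles, 3))   # (vx, vy, vz)

# Flatten into a single state vector: 30 position components followed
# by 30 velocity components. Only the first 30 entries correspond to
# positions along axes in the world; the rest are velocities.
state = np.concatenate([positions.ravel(), velocities.ravel()])
print(state.shape)   # (60,)
```

Stepping the simulation forward just maps one 60-dimensional state vector to the next, tracing a trajectory through that space exactly as in the 2D case.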