The Limits To Memory: Balancing Inhibition and Excitation in the Parietal Cortex

Most computational models of working memory do not explicitly specify the role of the parietal cortex, despite an increasing number of observations that the parietal cortex is particularly important for working memory. A new paper in PNAS by Edin et al remedies this state of affairs by developing a spiking neural network model that accounts for a number of behavioral and physiological phenomena related to working memory.

First, the model: Edin et al simulate the intraparietal sulcus (IPS; with ~1000 excitatory units and 256 inhibitory units) and the dorsolateral prefrontal cortex (dlPFC; with the same number and types of units) as interacting via biased competition, such that the dlPFC sends global, nonspecific activation to cells in the IPS (both excitatory and inhibitory units). Excitatory units in the IPS also have lateral connections whose strength decreases as a function of distance, so each IPS unit is more strongly connected to units "nearby" - i.e., those coding for similar visual characteristics. Critically, the lateral connectivity among inhibitory units is wider, such that "distant excitatory cells effectively inhibit each other." This type of connectivity naturally leads to local "bumps" of high activity, separated by areas of relatively low activity. In other words, Edin et al simulate a kind of "stripe-like" architecture in which locally-arranged ensembles of neurons can self-sustain activity independently of the ensembles formed by other clumps of neurons. Interestingly, the dlPFC projection is less spatially specific, sending only excitatory input into the IPS. Edin et al simulate environmental input to the network by delivering excitatory activity to different IPS cells, with the exact units depending on the simulated visual stimulus itself.
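
To make that wiring concrete, here is a minimal rate-model sketch of the scheme just described - my own illustration, not the authors' spiking implementation. IPS units sit on a ring, excitatory weights fall off quickly with distance, effective inhibition is broader, and a "dlpfc_drive" term stands in for the nonspecific boost from the dlPFC. All parameter values are assumptions chosen purely for illustration.

```python
# A minimal rate-model sketch of the connectivity described above -- my own
# illustration, not the authors' spiking implementation.  IPS units sit on a
# ring; excitatory weights fall off quickly with distance, effective inhibition
# is broader, and "dlpfc_drive" stands in for the nonspecific dlPFC boost.
# All parameter values are illustrative assumptions.
import numpy as np

N = 256                                            # IPS units arranged on a ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

def circ_dist(a, b):
    """Shortest angular distance between positions a and b on the ring."""
    d = np.abs(a - b)
    return np.minimum(d, 2 * np.pi - d)

def ring_kernel(width):
    """Gaussian connection profile over angular distance, row-normalized."""
    K = np.exp(-circ_dist(theta[:, None], theta[None, :]) ** 2 / (2 * width ** 2))
    return K / K.sum(axis=1, keepdims=True)

# Narrow excitation minus broad inhibition gives a "Mexican-hat" profile:
# nearby units support each other, while distant bumps compete.
W = 8.0 * ring_kernel(0.3) - 6.0 * ring_kernel(1.5)

def rate(x):
    """Steep sigmoid rate function with a firing threshold (illustrative)."""
    return 1.0 / (1.0 + np.exp(-(x - 2.0) / 0.2))

def simulate(cue_centers, dlpfc_drive=0.0, cue_steps=500, total_steps=3000, dt=0.02):
    """Cue some locations briefly, then let the network run on its own."""
    stim = np.zeros(N)
    for c in cue_centers:
        stim += 4.0 * np.exp(-circ_dist(theta, c) ** 2 / (2 * 0.1 ** 2))
    r = np.zeros(N)
    for t in range(total_steps):
        inp = W @ r + dlpfc_drive + (stim if t < cue_steps else 0.0)
        r += dt * (-r + rate(inp))
    return r

# Cue two locations; the resulting bumps should outlast the stimulus.
rates = simulate(cue_centers=[1.0, 4.0])
active = rates > 0.5
print("active units:", int(active.sum()))
print("bumps held:", int(np.sum(active & ~np.roll(active, 1))))  # count rising edges
```

With these toy parameters, the cued locations should settle into self-sustaining bumps that persist after the stimulus is withdrawn, which is the qualitative behavior the full spiking model relies on.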

Because the connectivity described above was stochastic, the authors ended up with a number of networks whose specific architectures differed. When networks with different intrinsic capacities were compared (that is, differing in the number of distinct "bumps" or "clumps" of activity they could maintain in the IPS), mean activation among the excitatory cells increased more sharply with capacity than mean activation among the inhibitory cells. In other words, networks with higher capacity appeared to have not only more inhibitory units active - supporting those networks' ability to keep the maintained representations separate - but also more excitatory units active (at least up to a point).

Using this model, Edin et al developed a condensed mathematical formula describing how IPS firing rate arises from working memory load, based on parameters like the effective connection strength between excitatory & inhibitory neurons and the nonspecific excitatory input the IPS receives from the dlPFC. Critically, firing rate needs to stay above a threshold for memory maintenance to be possible, and as firing rate increases, so too does the widespread inhibitory bias on the rest of the population of units in the IPS. Thus, there is an energy landscape in which, as the number of maintained items (p) increases, so does the total energy in the network, up to a point at which the network can no longer stably sustain its activity. Here's the image used to convey this idea in the article:

[Figure from the article: the energy landscape over memory load]

Because excitatory activity intrinsic to the IPS also increases the amount of inhibition in the IPS, capacity is inherently limited. But if the dlPFC sends its nonspecific excitatory drive to the excitatory units, it can increase the representational capacity of the IPS (I think simply by "powering" the entire network to increase its gain - that is, by increasing the firing rate of memory-storing neurons in the IPS, with only secondary and presumably weaker effects on inhibitory units). As Edin et al note, the model makes a number of predictions (e.g., dlPFC activity should correlate with IPS activity and with behavioral performance at high memory loads, and IPS activity should predict behavioral performance at all loads), which the authors verified against their previous neuroimaging work.
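
The capacity-limiting logic can be boiled down to an even simpler toy calculation - again my own simplification, not the formula Edin et al actually derive. Each stored item sends broad inhibition to every other item, a bump survives only if its net drive stays above a maintenance threshold, and a nonspecific dlPFC boost raises the load at which that threshold is crossed. The names and numbers below are illustrative assumptions.

```python
# A toy mean-field reduction of the capacity argument -- my own simplification,
# not the formula Edin et al derive.  Each stored item sends broad inhibition to
# every other item, a bump survives only if its net drive stays above a
# maintenance threshold, and a nonspecific dlPFC boost raises the load at which
# that threshold is crossed.  All numbers are illustrative assumptions.
J_SELF = 4.0       # net recurrent drive a bump supplies to itself
J_CROSS = 0.6      # inhibition each additional stored item imposes on a bump
THRESHOLD = 1.5    # minimum drive needed to keep a bump alive

def net_drive(p, dlpfc_boost=0.0):
    """Net input to any one bump when p items are being maintained."""
    return J_SELF + dlpfc_boost - J_CROSS * (p - 1)

def capacity(dlpfc_boost=0.0):
    """Largest load p at which every bump still clears the threshold."""
    p = 1
    while net_drive(p + 1, dlpfc_boost) >= THRESHOLD:
        p += 1
    return p

for boost in (0.0, 0.6, 1.2):
    print(f"dlPFC boost {boost:.1f} -> capacity {capacity(boost)} items")
```

With these made-up numbers the printed capacity climbs from 5 to 7 items as the boost grows - the qualitative effect Edin et al attribute to the dlPFC: nonspecific excitation raises the load at which the IPS bumps collapse.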

Interestingly, this fMRI data came from the "filtering" paradigm used by McNab and Klingberg. I assume that the next step for this modeling effort is to include a gating system, presumably controlled by the basal ganglia, for determining which information gets biased by the dlPFC and thus stably represented in the parietal cortex.

On the other hand, as the authors note, many of their results rely on specific assumptions built into their model's architecture (and this is to be congratulated, since the whole point of modeling is to make these assumptions explicit!). The best example is the specific type of connectivity they endorse: local excitatory and local inhibitory connectivity, with a weaker and broader "neighborhood bias" for the latter. I'm not aware of any observations of this kind of connectivity in the IPS, but it seems reasonable to me.

More of an issue is the "global" excitatory signal from the dlPFC, which seems problematic for getting this model to account for the dlPFC's putative role in selectively biasing and manipulating information. A more local and directed form of biasing would seem to be necessary, although this likely relies on gating mechanisms in the basal ganglia (not simulated by Edin et al).

Finally, these simulations do not include learning. That is, Edin et al do not specify how these biases would emerge from an initially-random or undeveloped network. This question seems of particular interest for Edin et al, given the work this group has done on the training of working memory. One speculation is that training would selectively increase the gain on the neighborhood biases observed in the parietal cortex, such that more items could be maintained in a given region of the IPS. Alternatively, or additionally, training might exert its effects through mechanisms that were not the focus of this model, such as the strength of the activation from the dlPFC or the selectivity of that activation (via basal ganglia mechanisms).
