Recall order#

A key advantage of free recall is that it provides information not only about which items are recalled, but also about the order in which they are recalled. A number of analyses have been developed to characterize different influences on recall order, such as the temporal order in which the items were presented at study, the category of the items themselves, or the semantic similarity between pairs of items.

Each conditional response probability (CRP) analysis involves calculating the probability of some type of transition event. For the lag-CRP analysis, the transition events of interest are the different lags between the serial positions of items recalled adjacent to one another. Similar analyses focus not on the serial positions in which items were presented, but on the properties of the items themselves. A semantic-CRP analysis calculates the probability of transitions between items in different semantic relatedness bins. A special case of this analysis places item pairs into one of two bins, depending on whether or not they are in the same stimulus category. In Psifr, this is referred to as a category-CRP analysis.

Lag-CRP#

In all CRP analyses, transition probabilities are calculated conditional on a given transition being available [Kah96]. For example, in a six-item list, if the items in serial positions 6, 1, and 4 have been recalled (in that order), then the items that could be recalled next are those in positions 2, 3, or 5; therefore, the possible lags from position 4 at that point in the recall sequence are -2, -1, or +1. The number of actual transitions observed at each lag is divided by the number of times that lag was possible, to obtain the CRP for each lag.
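The availability logic in the six-item example can be sketched in a few lines of plain Python (a hypothetical helper for illustration, not part of the Psifr API):

```python
def possible_lags(list_length, prior_recalls):
    """Lags that are still possible for the next recall.

    Serial positions are 1-indexed; lags are measured relative to the
    most recently recalled position.
    """
    current = prior_recalls[-1]
    remaining = set(range(1, list_length + 1)) - set(prior_recalls)
    return sorted(pos - current for pos in remaining)

# After recalling positions 6, 1, and 4, only positions 2, 3, and 5 remain,
# so the possible lags from position 4 are -2, -1, and +1.
print(possible_lags(6, [6, 1, 4]))  # [-2, -1, 1]
```

Running this tally over every transition in every recall sequence, and dividing actual counts by possible counts per lag, yields the lag-CRP curve.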

First, load some sample data and create a merged DataFrame:

In [1]: from psifr import fr

In [2]: df = fr.sample_data('Morton2013')

In [3]: data = fr.merge_free_recall(df, study_keys=['category'])

Next, call lag_crp() to calculate conditional response probability as a function of lag.

In [4]: crp = fr.lag_crp(data)

In [5]: crp
Out[5]: 
      subject   lag      prob  actual  possible
0           1 -23.0  0.020833       1        48
1           1 -22.0  0.035714       3        84
2           1 -21.0  0.026316       3       114
3           1 -20.0  0.024000       3       125
4           1 -19.0  0.014388       2       139
...       ...   ...       ...     ...       ...
1875       47  19.0  0.061224       3        49
1876       47  20.0  0.055556       2        36
1877       47  21.0  0.045455       1        22
1878       47  22.0  0.071429       1        14
1879       47  23.0  0.000000       0         6

[1880 rows x 5 columns]

The results show the count of times a given transition actually happened in the observed recall sequences (actual) and the number of times a transition could have occurred (possible). Finally, the prob column gives the estimated probability of a given transition occurring, calculated by dividing the actual count by the possible count.

Use plot_lag_crp() to display the results:

In [6]: g = fr.plot_lag_crp(crp)
../_images/lag_crp.svg

The peaks at small lags (e.g., +1 and -1) indicate that the recall sequences show evidence of a temporal contiguity effect; that is, items presented near to one another in the list are more likely to be recalled successively than items that are distant from one another in the list.

Compound lag-CRP#

The compound lag-CRP was developed to measure how temporal clustering changes as a result of prior clustering during recall [LK14]. That study found evidence that temporal clustering is greater immediately after transitions with short lags than after transitions with long lags. The lag_crp_compound() analysis calculates conditional response probability by lag, but with the additional condition of the lag of the previous transition.

In [7]: crp = fr.lag_crp_compound(data)

In [8]: crp
Out[8]: 
       subject  previous  current  prob  actual  possible
0            1     -23.0    -23.0   NaN       0         0
1            1     -23.0    -22.0   NaN       0         0
2            1     -23.0    -21.0   NaN       0         0
3            1     -23.0    -20.0   NaN       0         0
4            1     -23.0    -19.0   NaN       0         0
...        ...       ...      ...   ...     ...       ...
88355       47      23.0     19.0   NaN       0         0
88356       47      23.0     20.0   NaN       0         0
88357       47      23.0     21.0   NaN       0         0
88358       47      23.0     22.0   NaN       0         0
88359       47      23.0     23.0   NaN       0         0

[88360 rows x 6 columns]

The results show conditional response probabilities as in the standard lag-CRP analysis, but with two lag columns: previous (the lag of the prior transition) and current (the lag of the current transition).

This is a lot of information, and the sample size for many bins is very small. Following [LK14], we can bin the lag of the previous transition to increase the sample size in each bin. We first sum the actual and possible transition counts within each bin, and then calculate the probability for each of the new bins. (Previous lags of ±2 and ±3 are left unlabeled below, so those transitions are excluded from the grouped results.)

In [9]: binned = crp.reset_index()

In [10]: binned.loc[binned['previous'].abs() > 3, 'Previous'] = '|Lag|>3'

In [11]: binned.loc[binned['previous'] == 1, 'Previous'] = 'Lag=+1'

In [12]: binned.loc[binned['previous'] == -1, 'Previous'] = 'Lag=-1'

In [13]: summed = binned.groupby(['subject', 'Previous', 'current'])[['actual', 'possible']].sum()

In [14]: summed['prob'] = summed['actual'] / summed['possible']

In [15]: summed
Out[15]: 
                          actual  possible      prob
subject Previous current                            
1       Lag=+1   -23.0         0         2  0.000000
                 -22.0         0         2  0.000000
                 -21.0         0         4  0.000000
                 -20.0         0         6  0.000000
                 -19.0         1         7  0.142857
...                          ...       ...       ...
47      |Lag|>3   19.0         1        30  0.033333
                  20.0         2        19  0.105263
                  21.0         1        14  0.071429
                  22.0         0         7  0.000000
                  23.0         0         2  0.000000

[7520 rows x 3 columns]

We can then plot the compound lag-CRP using the standard plot_lag_crp() plotting function.

In [16]: g = fr.plot_lag_crp(summed, lag_key='current', hue='Previous').add_legend()
../_images/lag_crp_compound.svg

Note that some lags are considered impossible as they would require a repeat of a previously recalled item (e.g., a +1 lag followed by a -1 lag is not possible). For both of the adjacent conditions (+1 and -1), the lag-CRP is sharper compared to the long-lag condition (\(| \mathrm{lag} | >3\)). This suggests that there is compound temporal clustering.
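The impossibility of certain lag pairs is easy to see with a toy example (plain Python, not a Psifr call):

```python
# After a +1 transition (e.g., from serial position 3 to 4), a -1 transition
# would target position 3 again, which has already been recalled.
recalls = [3, 4]             # serial positions recalled so far (a +1 lag)
target = recalls[-1] - 1     # where a -1 lag would land
print(target in recalls)     # True: this transition is impossible
```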

Lag rank#

We can summarize the tendency to group together nearby items by running a lag rank analysis [PNK09] using lag_rank(). For each recall, this takes the absolute lag of each remaining item available for recall and calculates the percentile rank of the item actually recalled, scaled to vary between 0 (the most distant item was chosen) and 1 (the nearest item was chosen). Chance clustering corresponds to 0.5; values above that are evidence of a temporal contiguity effect.
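The rank computation for a single transition can be sketched as follows (a simplified, hypothetical helper; Psifr's lag_rank() also averages ranks across all of a subject's transitions):

```python
def transition_percentile_rank(previous, actual, available):
    """Percentile rank of the chosen item's |lag| among available items.

    Returns 1 when the nearest available item was chosen and 0 when the
    most distant one was chosen; ties receive the average of their ranks.
    Assumes at least two items remain available.
    """
    lags = sorted(abs(pos - previous) for pos in available)
    actual_lag = abs(actual - previous)
    tied = [i for i, lag in enumerate(lags) if lag == actual_lag]
    mean_rank = sum(tied) / len(tied)
    return 1 - mean_rank / (len(lags) - 1)

# After recalling position 4 with positions 2, 3, and 5 still available,
# choosing position 2 (the most distant item) yields a rank of 0.
print(transition_percentile_rank(4, 2, [2, 3, 5]))  # 0.0
```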

In [17]: ranks = fr.lag_rank(data)

In [18]: ranks
Out[18]: 
    subject      rank
0         1  0.610953
1         2  0.635676
2         3  0.612607
3         4  0.667090
4         5  0.643923
..      ...       ...
35       43  0.554024
36       44  0.561005
37       45  0.598151
38       46  0.652748
39       47  0.621245

[40 rows x 2 columns]

In [19]: ranks.agg(['mean', 'sem'])
Out[19]: 
       subject      rank
mean  24.90000  0.624699
sem    2.24488  0.006732

Category CRP#

If there are multiple categories or conditions of trials in a list, we can test whether participants tend to successively recall items from the same category. The category-CRP, calculated using category_crp(), estimates the probability of successively recalling two items from the same category [PNK09].

In [20]: cat_crp = fr.category_crp(data, category_key='category')

In [21]: cat_crp
Out[21]: 
    subject      prob  actual  possible
0         1  0.801147     419       523
1         2  0.733456     399       544
2         3  0.763158     377       494
3         4  0.814882     449       551
4         5  0.877273     579       660
..      ...       ...     ...       ...
35       43  0.809187     458       566
36       44  0.744376     364       489
37       45  0.763780     388       508
38       46  0.763573     436       571
39       47  0.806907     514       637

[40 rows x 4 columns]

In [22]: cat_crp[['prob']].agg(['mean', 'sem'])
Out[22]: 
          prob
mean  0.782693
sem   0.006262

The expected probability due to chance depends on the number of categories in the list. In this case, there are three categories, so a category CRP of about 0.33 would be expected if recalls were sampled randomly from the list.
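As a rough illustration of where that chance level comes from, assume (hypothetically) a 24-item list with three categories of 8 items each. Immediately after the first recall, the chance probability of a within-category transition is:

```python
n_per_category = 8             # hypothetical: 8 items per category
list_length = 24               # hypothetical: three categories of 8
same = n_per_category - 1      # remaining items sharing the category
total = list_length - 1        # all remaining items
print(round(same / total, 3))  # 0.304, close to the nominal 1/3
```

The exact chance level drifts as items are removed from the available pool over the course of recall, but stays near 1/3 for equal-sized categories.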

Category clustering#

A number of measures have been developed to quantify category clustering relative to the level expected due to chance, under certain assumptions. Two such measures are list-based clustering (LBC) [SBW+02] and the adjusted ratio of clustering (ARC) [RTB71].

These measures can be calculated using the category_clustering() function.

In [23]: clust = fr.category_clustering(data, category_key='category')

In [24]: clust.agg(['mean', 'sem'])
Out[24]: 
       subject       lbc       arc
mean  24.90000  2.409398  0.608763
sem    2.24488  0.127651  0.016809

Both measures are defined such that positive values indicate above-chance clustering. ARC scores have a maximum of 1, while the upper bound of LBC scores depends on the number of categories and the number of items per category in the study list.
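To make the chance correction concrete, here is a minimal sketch of the ARC measure, written from the standard definition [RTB71] rather than from Psifr's implementation: ARC = (R − E[R]) / (maxR − E[R]), where R is the observed number of same-category repetitions, E[R] = Σᵢmᵢ²/N − 1 for mᵢ recalls from category i and N total recalls, and maxR = N − k for k categories recalled.

```python
from collections import Counter

def arc(recall_categories):
    """Adjusted ratio of clustering for one recall sequence.

    recall_categories: category label of each recalled item, in output order.
    """
    n = len(recall_categories)
    counts = Counter(recall_categories)
    # observed same-category repetitions in adjacent recalls
    r = sum(a == b for a, b in zip(recall_categories, recall_categories[1:]))
    expected = sum(m * m for m in counts.values()) / n - 1
    max_r = n - len(counts)
    return (r - expected) / (max_r - expected)

print(arc(['fruit', 'fruit', 'tool', 'tool']))  # 1.0 (perfect clustering)
print(arc(['fruit', 'tool', 'fruit', 'tool']))  # -1.0 (perfect alternation)
```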

Distance CRP#

While the category CRP examines clustering based on semantic similarity at a coarse level (i.e., whether two items are in the same category or not), recall may also depend on more nuanced semantic relationships.

Models of semantic knowledge allow the semantic distance between pairs of items to be quantified. If you have such a model defined for your stimulus pool, you can use the distance CRP analysis to examine how semantic distance affects recall transitions [HK02, MP16].

You must first define distances between pairs of items. Here, we use correlation distances based on the wiki2USE model.

In [25]: items, distances = fr.sample_distances('Morton2013')

We also need a column indicating the index of each item in the distances matrix. We use pool_index() to create a new column called item_index, giving each item's position in the pool, which corresponds to its row and column in the distances matrix.

In [26]: data['item_index'] = fr.pool_index(data['item'], items)

Finally, we must define the distance bins. Here, we use 10 equally spaced distance percentiles as bin edges. Note that, when calculating distance percentiles, we use the squareform() function to get only the off-diagonal entries of the distances matrix. (This assumes NumPy has been imported as np.)

In [27]: from scipy.spatial.distance import squareform

In [28]: edges = np.percentile(squareform(distances), np.linspace(1, 99, 10))

We can now calculate conditional response probability as a function of distance bin using distance_crp(), to examine how response probability varies with semantic distance.

In [29]: dist_crp = fr.distance_crp(data, 'item_index', distances, edges)

In [30]: dist_crp
Out[30]: 
     subject    center             bin      prob  actual  possible
0          1  0.467532  (0.352, 0.583]  0.085456     151      1767
1          1  0.617748  (0.583, 0.653]  0.067916      87      1281
2          1  0.673656  (0.653, 0.695]  0.062500      65      1040
3          1  0.711075  (0.695, 0.727]  0.051836      48       926
4          1  0.742069  (0.727, 0.757]  0.050633      44       869
..       ...       ...             ...       ...     ...       ...
355       47  0.742069  (0.727, 0.757]  0.062822      61       971
356       47  0.770867  (0.757, 0.785]  0.030682      27       880
357       47  0.800404  (0.785, 0.816]  0.040749      37       908
358       47  0.834473  (0.816, 0.853]  0.046651      39       836
359       47  0.897275  (0.853, 0.941]  0.028868      25       866

[360 rows x 6 columns]

Use plot_distance_crp() to display the results:

In [31]: g = fr.plot_distance_crp(dist_crp).set(ylim=(0, 0.1))
../_images/distance_crp.svg

Conditional response probability decreases with increasing semantic distance, suggesting that recall order was influenced by the semantic similarity between items. Of course, a complete analysis should address potential confounds such as the category structure of the list. See the Restricting analysis to specific items section for an example of restricting analysis based on category.

Distance rank#

As with the lag rank analysis of temporal clustering, we can summarize distance-based clustering (such as semantic clustering) with a single rank measure [PNK09]. The distance rank varies from 0 (the most distant item is always recalled) to 1 (the closest item is always recalled), with chance clustering corresponding to 0.5. Given a matrix of item distances, we can calculate the distance rank using distance_rank().

In [32]: dist_rank = fr.distance_rank(data, 'item_index', distances)

In [33]: dist_rank.agg(['mean', 'sem'])
Out[33]: 
       subject      rank
mean  24.90000  0.625932
sem    2.24488  0.003466

Distance rank shifted#

As with the compound lag-CRP, we can examine how recalls before the just-previous one predict subsequent recalls. To examine whether distances relative to earlier recalls are predictive of the next recall, we can run a shifted distance rank analysis [MP16] using distance_rank_shifted().

Here, to account for the category structure of the list, we will only include within-category transitions (see the Restricting analysis to specific items section for details).

In [34]: ranks = fr.distance_rank_shifted(
   ....:     data, 'item_index', distances, 4, test_key='category', test=lambda x, y: x == y
   ....: )
   ....: 

In [35]: ranks
Out[35]: 
     subject  shift      rank
0          1     -4  0.518617
1          1     -3  0.492103
2          1     -2  0.516063
3          1     -1  0.579198
4          2     -4  0.463931
..       ...    ...       ...
155       46     -1  0.581420
156       47     -4  0.504383
157       47     -3  0.526840
158       47     -2  0.504953
159       47     -1  0.586689

[160 rows x 3 columns]

The distance rank is returned for each shift; the -1 shift is equivalent to the standard distance rank analysis. We can visualize how distance rank changes with shift using seaborn.relplot() (assuming seaborn has been imported as sns).

In [36]: g = sns.relplot(
   ....:     data=ranks.reset_index(), x='shift', y='rank', kind='line', height=3
   ....: ).set(xlabel='Output lag', ylabel='Distance rank', xticks=[-4, -3, -2, -1])
   ....: 
../_images/distance_rank_shifted.svg

Restricting analysis to specific items#

Sometimes you may want to focus an analysis on a subset of recalls. For example, in order to exclude the period of high clustering commonly observed at the start of recall, lag-CRP analyses are sometimes restricted to transitions after the first three output positions.

You can restrict the recalls included in a transition analysis using the optional item_query argument. This is built on the Pandas query/eval system, which makes it possible to select rows of a DataFrame using a query string. This string can refer to any column in the data. Any items for which the expression evaluates to True will be included in the analysis.

For example, we can use the item_query argument to exclude any items recalled in the first three output positions from analysis. Note that, because non-recalled items have no output position, we need to include them explicitly using output > 3 or not recall.

In [37]: crp_op3 = fr.lag_crp(data, item_query='output > 3 or not recall')

In [38]: g = fr.plot_lag_crp(crp_op3)
../_images/lag_crp_op3.svg

Restricting analysis to specific transitions#

In other cases, you may want to focus an analysis on a subset of transitions based on some criteria. For example, if a list contains items from different categories, it is a good idea to take this into account when measuring temporal clustering using a lag-CRP analysis [MP17, PEK11]. One approach is to separately analyze within- and across-category transitions.

Transitions can be selected for inclusion using the optional test_key and test inputs. The test_key indicates a column of the data to use for testing transitions; for example, here we will use the category column. The test input should be a function that takes in the test value of the previous recall and the current recall and returns True or False to indicate whether that transition should be included. Here, we will use a lambda (anonymous) function to define the test.

In [39]: crp_within = fr.lag_crp(data, test_key='category', test=lambda x, y: x == y)

In [40]: crp_across = fr.lag_crp(data, test_key='category', test=lambda x, y: x != y)

In [41]: crp_combined = pd.concat([crp_within, crp_across], keys=['within', 'across'], axis=0)

In [42]: crp_combined.index.set_names('transition', level=0, inplace=True)

In [43]: g = fr.plot_lag_crp(crp_combined, hue='transition').add_legend()
../_images/lag_crp_cat.svg

The within curve shows the lag-CRP for transitions between items of the same category, while the across curve shows transitions between items of different categories.