How does semantic constraint modulate the network dynamics that underpin sound-to-meaning mapping?
In everyday speech, we rarely hear words in isolation. Instead, each word arrives within a preceding semantic and syntactic sentential context that helps us process speech faster by allowing us to anticipate the words likely to follow.
In this E/MEG experiment we asked how semantic constraint modulates sound-to-meaning mapping. We presented participants with two-word spoken phrases in which we varied the degree to which the first word semantically constrained the second: strong constraint (Strong C, e.g. cycling helmet), weak constraint (Weak C, e.g. brown cow), and a baseline condition in the form of word lists (No C, e.g. tune socks). Participants answered an occasional semantic relatedness question (school bus - children?).
To confirm the differences in semantic constraint, we ran behavioural gating pretests in which participants listened to the phrases phoneme by phoneme and guessed the noun. The pretests showed that as semantic constraint increased (cycling > plastic > shuffle), the word was identified earlier and the cohort size decreased, confirming the intended differences between conditions.
Example plots for helmet from the behavioural gating pretests (Strong C - cycling; Weak C - plastic; No C - shuffle). The plots show the change in summed confidence levels (across the participants) for cohort candidates at each phoneme of helmet.
The final plot shows the superimposed summed confidence levels of the target words, with vertical bars indicating the recognition points. Note that Strong C has the earliest recognition point and the smallest cohort size.
In the following E/MEG analysis, we aligned the trials to the uniqueness point (UP) of the second word. To define the networks modulated by semantic constraint and tease them apart from task-positive networks, we used a group-level temporal independent component analysis (tICA), concatenating the trial data across participants to identify language and task-positive networks shared across the sample.
An initial time-frequency univariate contrast revealed that Strong C elicits higher beta power (15-34 Hz) than the other conditions. To reduce the number of comparisons and the computational load, we then restricted the tICA to the beta-band signal. Group ICA yielded a maximum of 14 temporal ICs, whose spatial distributions are shown below.
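The band-limited group tICA step can be sketched as follows. This is a minimal illustration with simulated data standing in for the real E/MEG recordings; the sampling rate, subject count, and channel count are assumed, and FastICA stands in for whatever ICA implementation was actually used.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 250                                   # sampling rate in Hz (assumed)
n_subjects, n_times, n_channels = 5, 1000, 32

def beta_band(x, fs, low=15.0, high=34.0):
    """Band-pass filter each channel to the 15-34 Hz beta range."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)

# Simulated per-subject data, filtered to the beta band, then concatenated
# along the time axis so the estimated ICs are shared across the group.
data = [beta_band(rng.standard_normal((n_times, n_channels)), fs)
        for _ in range(n_subjects)]
group = np.concatenate(data, axis=0)       # (n_subjects * n_times, n_channels)

# Temporal ICA: rows are time samples, so the recovered sources are
# temporally independent component time courses.
ica = FastICA(n_components=14, random_state=0, max_iter=1000)
sources = ica.fit_transform(group)         # (n_subjects * n_times, 14)
```

The key design choice is concatenating subjects in time rather than averaging: each IC then has one group-level time course per concatenated trial, which is what the later GLM operates on.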
Spatial distributions of the ICs, computed by correlating each IC time series with the vertex time series. Only correlations above r = 0.1 are displayed for clarity.
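A hedged sketch of how such a spatial map can be computed: correlate one IC time course with every source-space vertex time course and zero out values below the display threshold. The data here are simulated, with a handful of vertices seeded to genuinely covary with the IC; only the r = 0.1 threshold comes from the figure above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_times, n_vertices = 2000, 500
ic = rng.standard_normal(n_times)               # one IC time course
vertices = rng.standard_normal((n_times, n_vertices))
vertices[:, :20] += 0.5 * ic[:, None]           # seed 20 correlated vertices

# Pearson r between the IC and each vertex time series (z-score, then average
# the product over time).
ic_z = (ic - ic.mean()) / ic.std()
v_z = (vertices - vertices.mean(axis=0)) / vertices.std(axis=0)
r = (ic_z @ v_z) / n_times                      # (n_vertices,)

# Keep only correlations above the display threshold used in the figure.
spatial_map = np.where(np.abs(r) > 0.1, r, 0.0)
```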
Note that these networks include the visual (IC7, IC9), auditory (IC14, IC12), motor (IC2, IC3), frontal (IC4, IC5), temporal (IC6, IC8, IC11), temporo-parietal networks (IC10, IC13) as well as the multiple demand network (IC1).
The aim of the following GLM analysis was to test the relationship between the activity of these networks and the processing of contextual semantics. The time series of each network were modelled with a GLM containing three dummy condition variables and confounding variables (the modifier's concreteness and log frequency). The beta time series of the condition variables were then tested for differences using one-way ANOVAs. Once the effects of the confounding variables were removed, only two of the 14 networks showed significant modulation by semantic constraint: IC10 and IC13.
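The GLM-then-ANOVA logic at a single time point can be sketched like this. Everything below is simulated for illustration: the trial counts, the subject count, the confound values, and the injected Strong C effect are all assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
n_subjects, n_trials = 16, 90
cond = np.repeat([0, 1, 2], n_trials // 3)        # Strong C / Weak C / No C

def fit_glm(y, cond, confounds):
    """Least-squares GLM: three condition dummies plus confound regressors."""
    X = np.column_stack([(cond == k).astype(float) for k in range(3)]
                        + [confounds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[:3]                                # condition betas only

betas = []
for _ in range(n_subjects):
    confounds = rng.standard_normal((n_trials, 2))  # concreteness, log freq
    # Simulated network amplitude with a Strong C boost at this time point.
    y = rng.standard_normal(n_trials) + np.where(cond == 0, 0.8, 0.0)
    betas.append(fit_glm(y, cond, confounds))
betas = np.array(betas)                            # (n_subjects, 3)

# One-way ANOVA across subjects on the condition betas; in the study this is
# repeated at each time point of the beta time series.
F, p = f_oneway(betas[:, 0], betas[:, 1], betas[:, 2])
```

Because the confounds sit in the same design matrix as the dummies, the condition betas are already adjusted for concreteness and log frequency before the ANOVA sees them.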
The beta time series showed significant differences between conditions from 300 ms before the word's recognition point, with Strong C showing higher amplitude than the remaining conditions.
These findings indicate that 1) the sentential semantic constraint facilitates speech comprehension possibly by partially preactivating the anticipated semantic representations and further restricting the cohort; and 2) that bilateral temporoparietal areas (with foci in angular gyri) are responsible for computing the semantic fit between the cohort candidates and the contextual semantics.
How do syntactic constraints modulate the connectivity of the left frontotemporal syntax network?
Like semantic constraint, the syntactic structure of the preceding speech segment provides crucial information for anticipating upcoming speech, supporting rapid and incremental speech processing. Previous studies consistently show the involvement of the left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) during syntactic processing. In this MEG experiment we investigated how effective connectivity in this left frontotemporal network is modulated by syntactic prediction when listeners encounter local syntactic ambiguities in continuous speech.
In the experiment, participants heard sentences with local syntactic ambiguities that were quickly resolved by the following verb. These ambiguities (e.g. landing planes) could temporarily be interpreted as either an adjective or a verb, depending on prior probabilities (i.e. phrase frequencies). We explored how connectivity in the network changes dynamically as listeners predict the upcoming word from prior syntactic probabilities, and how the network recovers from failed predictions.
Here we employed Dynamic Causal Modelling (DCM) for ERP with a windowing approach. We constructed two network architectures with nodes LHG, LpMTG and LIFG (see figure below). To test for top-down modulation of early sensory areas, these architectures differed with respect to the LHG-LIFG connection. All possible combinations of connections were modulated. We aligned the trials to the verb onset and computed connectivity patterns in 100 ms time windows, comparing these patterns across sentences with more expected and less expected syntactic structure.
A total of 78 models were compared through family comparisons. The first family comparison tested for differences between the two model architectures and revealed a significant increase in exceedance probability in the 0-100 ms and 200-300 ms windows, suggesting that the LHG-LIFG connection is modulated early in the word. The second family comparison tested for modulations of the forward, the backward, and both the forward and backward connections; the latter family consistently showed the highest exceedance probability.
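The family-comparison idea can be illustrated with a simplified fixed-effects version: sum each model's log evidence over subjects, convert to posterior model probabilities, and aggregate within families. Note this is a sketch only; the study used random-effects exceedance probabilities (as implemented in SPM), which require a more elaborate hierarchical estimation, and the model counts and log evidences below are made up.

```python
import numpy as np

def family_posterior(log_evidence, family):
    """Fixed-effects family comparison.

    log_evidence : (n_subjects, n_models) array of log model evidences.
    family       : list of family labels, one per model.
    Returns a dict mapping family label -> posterior family probability,
    assuming a uniform prior over models.
    """
    fe = log_evidence.sum(axis=0)            # group-level log evidence
    post = np.exp(fe - fe.max())             # subtract max for stability
    post /= post.sum()                       # posterior model probabilities
    return {lab: float(sum(p for p, f in zip(post, family) if f == lab))
            for lab in sorted(set(family))}

rng = np.random.default_rng(3)
log_ev = rng.standard_normal((10, 6))        # 10 subjects, 6 toy models
log_ev[:, 3:] += 3.0                         # favour the second family
families = ["S", "S", "S", "FC", "FC", "FC"]
probs = family_posterior(log_ev, families)
```

Summing posteriors within a family is what lets a comparison favour, say, "models with forward and backward modulation" without committing to any single model in that family.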
A. fMRI contrast comparing sentences that revealed the less expected vs expected syntactic reading.
B. DCM nodes based on the fMRI contrast.
C. Serial model architecture.
D. Fully connected model architecture.
E. Examples of modulated serial models (dark lines show modulations).
F. Examples of modulated fully connected models.
A. Results of the first family comparison, comparing the exceedance probabilities of the serial (S) and fully connected (FC) architectures.
B. Results of the second family comparison comparing the families that have modulations in the forward (F), backward (B) and both forward and backward connections (FB).
Bayesian Model Averaging (BMA) was used to average the coupling changes within the families that showed significant differences. These coupling changes were then tested across subjects using permutation-based Wilcoxon signed-rank tests.
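A minimal sketch of such a test, assuming simulated per-subject coupling gains: gains are multiplicative (1 means no change), so they are log-transformed and tested against zero by sign-flipping, with the signed-rank statistic recomputed on each permutation. The subject count and effect size are illustrative.

```python
import numpy as np

def signed_rank_stat(x):
    """Wilcoxon signed-rank statistic: sum of the ranks (by |x|) of the
    positive values."""
    ranks = np.argsort(np.argsort(np.abs(x))) + 1
    return float(ranks[x > 0].sum())

def perm_signed_rank(gains, n_perm=5000, seed=0):
    """Two-sided sign-flip permutation p-value for coupling gains vs 1."""
    rng = np.random.default_rng(seed)
    x = np.log(gains)                        # log gain; 0 means no change
    obs = signed_rank_stat(x)
    null = np.array([signed_rank_stat(x * rng.choice([-1, 1], size=x.size))
                     for _ in range(n_perm)])
    centre = null.mean()
    # How extreme is the observed statistic relative to the sign-flip null?
    return (np.sum(np.abs(null - centre) >= abs(obs - centre)) + 1) / (n_perm + 1)

rng = np.random.default_rng(4)
gains = np.exp(rng.normal(0.10, 0.05, size=16))  # e.g. ~10% median increase
p = perm_signed_rank(gains)
```

Working on log gains makes the null symmetric around zero, which is what justifies the sign-flipping.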
The figure below shows the results. Values above and below 1 indicate percent increases and decreases in coupling, respectively (e.g. 1.08 means an 8% increase). These results show that when the anticipated syntactic structure proves to be wrong, the syntactic representations are updated via an early increase in feedforward connectivity from LHG and LpMTG to LIFG between 0-200 ms after verb onset, and a later increase in recurrent frontotemporal connectivity between 100-500 ms.
These connectivity changes are in line with the predictive coding account: the mismatch between the anticipated and perceived syntactic structure (i.e. the prediction error) is initially sent from LHG and LpMTG to LIFG. This initiates a prediction update involving recurrent information flow between LIFG and LpMTG, which is completed within 500 ms after verb onset.
Results of the Wilcoxon signed rank tests. Solid black lines indicate significant changes in coupling for the less expected vs expected syntactic reading. Values indicate the median coupling gains.