1 First authorship is shared by VP and JA, and senior authorship is shared by LB and SMJ who equally contributed to this work; the specific choice of order was decided by a coin flip.
Valentina Pergher
Correspondence
Corresponding author. Department of Psychology, Harvard University, Cambridge, MA, USA.
Affiliations
Department of Psychology, Harvard University, Cambridge, MA, USA; Laboratory of Neuro and Psychophysiology, KU Leuven University, Belgium
Affiliations
School of Education, University of California, Irvine, Irvine, CA, USA
Precision Neuroscience & Neuromodulation Program, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
Susanne M. Jaeggi
Correspondence
Corresponding author. School of Education, University of California, Irvine, Irvine, CA, USA.
Affiliations
School of Education, University of California, Irvine, Irvine, CA, USA; Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
Lorella Battelli
Affiliations
Department of Psychology, Harvard University, Cambridge, MA, USA; Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Rovereto, Italy; Berenson-Allen Center for Noninvasive Brain Stimulation and Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
• Across 19 studies, transcranial direct current stimulation significantly enhances working memory training benefits.
• Effects at follow-up were even stronger than at immediate post-test.
• The most pronounced transfer occurred in untrained working memory tasks.
Abstract
Background
Transcranial direct current stimulation (tDCS) has shown potential as an effective aid to facilitate learning. A popular application of this technology has been in combination with working memory training (WMT) in order to enhance transfer effects to other cognitive measures after training.
Objective
This meta-analytic review aims to synthesize the existing literature on tDCS-enhanced WMT to quantify the extent to which tDCS can improve performance on transfer tasks after training. Furthermore, we were interested in evaluating the moderating effects of assessment time point (immediate post-test vs. follow-up) and transfer distance, i.e., the degree of similarity between transfer and training tasks.
Methods
Using robust variance estimation, we performed a systematic meta-analysis of all studies to date that compared WMT with tDCS to WMT with sham in healthy adults. All procedures conformed to PRISMA guidelines.
Results
Across 265 transfer measures in 18 studies, we found a small positive net effect of tDCS on improving overall performance on transfer measures after WMT. These effects were sustained at follow-up, which ranged from 1 week to 1 year after training, with a median of 1 month. Additionally, although there were no significant differences as a function of transfer distance, effects were most pronounced for non-trained working memory tasks.
Conclusions
This review provides evidence that tDCS can be effective in promoting learning over and above WMT alone, and can durably improve performance on trained and untrained measures for weeks to months after the initial training and stimulation period. In particular, boosting performance on dissimilar working memory tasks may present the most promising target for tDCS-augmented WMT.
Transcranial direct current stimulation (tDCS) is a non-invasive form of electrical brain stimulation that can modulate cortical excitability in a polarity-dependent fashion, where brain regions under the positive anode are typically excited, and regions under the negative cathode are typically inhibited [
]. These changes in neural sensitivity can manifest at the behavioral level, and have been shown to affect subsequent learning and memory consolidation on tasks presented during stimulation [
]. However, its effects on more generalized learning beyond the specifically stimulated task are a relatively underexplored phenomenon. Such transfer is precisely the goal of cognitive training, such as working memory training (WMT), which involves multiple sessions of intensive training on computerized tasks that tax the working memory (WM) system. Extensive research has confirmed that WMT alone, in the absence of tDCS, shows robust transfer effects onto similar WM tasks [
Working memory training does not improve performance on measures of intelligence or other measures of “far transfer” evidence from a meta-analytic review.
] however found no effect across seven studies, but used specific meta-analytic criteria when choosing stimulation sites and outcome measures that may have been sensible decisions for their purposes but are not a comprehensive representation of the field as a whole (e.g., a comparison of the same studies from their forest plot and ours shows sometimes considerably different effect sizes). Moreover, given that these previous meta-analyses consisted of ten or fewer studies, they may have been underpowered to reliably detect tDCS effects on WMT. Thus, to date, there has not been a comprehensive and high-powered meta-analysis conducted to assess general effects of tDCS on WMT across all transfer measures and stimulation sites. The goal of the present report, therefore, is to conduct such a comprehensive meta-analysis. With nineteen studies in our meta-analytic sample, not only can we provide a more updated synopsis of the field, with approximately double the sample size compared to previous meta-analyses [
] that increases our analytic power and maximizes the information from all outcome measures in each study. Additionally, a second goal was to evaluate moderators of the tDCS effect on WMT, such as the time point of assessment and the transfer distance [
Induction of long-term potentiation-like plasticity in the primary motor cortex with repeated anodal transcranial direct current stimulation – better effects with intensified protocols?.
], we hypothesized stronger meta-analytic effects at follow-up time points compared to immediate post-test. We remained agnostic with respect to transfer distance. Although the cognitive training literature in general, in the absence of tDCS, shows stronger transfer to tasks that are more similar to the training intervention [
], it is unclear what effect the addition of tDCS might have. Although it could conceivably reinforce natural learning by promoting even stronger near transfer, it could also complement natural learning by boosting the signal to noise ratio of far transfer and facilitating its detection relative to WMT alone.
2. Methods
All procedures in this work adhered to guidelines laid out in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [
We systematically searched for original research articles, conference papers, and Master's and Ph.D. theses from Google Scholar and PubMed, from 2010 until September 2022 (see Fig. 1). We searched for the following keywords, alone or in combination: ‘N-back training’, ‘updating training’, ‘N-back training game’, ‘updating training game’, ‘cognitive training’, ‘WM training’, and ‘transcranial direct current stimulation’. Additionally, we checked the references of selected papers for potentially relevant articles. To be included in the meta-analysis, a study had to involve both anodal tDCS and WMT among healthy adults, utilize a sham-tDCS control group that also engaged in WMT, investigate transfer effects to unstimulated outcome measures, and involve 3 or more sessions. We excluded studies with fewer than 3 sessions because we were primarily interested in the long-term effects of tDCS and WMT on learning and consolidation. Thus, we wanted to avoid the temporary effects of short-term changes in cortical excitability or simple repeated practice on a task, and considered 3 sessions to be the minimum acceptable threshold for a training paradigm. When both anodal and cathodal experimental groups were compared to sham, as done in one study [
Effects of transcranial direct current stimulation paired with cognitive training on functional connectivity of the working memory network in older adults.
], such as executive function or other memory tasks. These studies were included if WMT was a primary element in the training intervention (i.e., comprised at least 50% of the tasks). In the end, 18 articles met our inclusion criteria, incorporating data from 710 participants (see Table 1 for study characteristics and Supplementary Materials for a bibliography).
letter updating task, n-back, WM task, VLM task, VR
no significant differences were found in the trained letter updating and Markov decision-making tasks for both active tDCS + WM training and sham tDCS + WM training groups
significant improvements in the cumulative recall (with daily spacing) and in delayed recall trained task but not recognition for the active tDCS + WM training compared to sham tDCS + WM training group
significant improvements in WM learning for task-congruent active tDCS + WM training compared to sham tDCS + WM training group. No significant differences for task-incongruent
no significant differences in the improvements in the trained task between active tDCS + WM training and sham tDCS + WM training groups
Talsma et al. 2017
letter n-back, spatial n-back, automated OSPAN
significant improvements in the first session for active tDCS + WM training when compared to sham tDCS + WM training group. No improvements in the training task performance overall
Teixeira-Santos et al. 2022
RAPM set 1, RAPM set 2, RAPM total, digit span forward, digit span backward, dual n-back
significant improvement in the trained dual n-back task for both active tDCS + WM training and sham tDCS + WM groups
significant improvements in the trained task for active left PFC 1 mA anodal tDCS + WM training groups compared to sham tDCS + WM training group
Study — Transfer outcomes — Follow-up outcomes
Antonenko et al. 2022
Transfer: significant differences in one near-transfer task (superior n-back) in the active tDCS + WM training group compared to the sham tDCS + WM training group
Follow-up (1 month): group differences in one near-transfer task (superior n-back) in the active tDCS + WM training group; no significant differences between active and sham tDCS + WM training groups

Assecondi et al. 2022
Transfer: no significant differences between active tDCS + WM training and sham tDCS + WM training groups
Follow-up (1 month): no significant differences between active tDCS + WM training and sham tDCS + WM training groups

Au et al. 2016
Transfer: significant improvements in visual n-back and backward block-tapping after active R tDCS + WM training compared to the sham tDCS + WM training group; no significant differences between active L tDCS + WM training and sham tDCS + WM training groups
Follow-up (3–13 months): improvements maintained in the trained n-back task after active tDCS + WM training compared to the sham tDCS + WM training group

Au et al. 2022
Transfer: no transfer to other cognitive tasks
Follow-up: NA

Byrne et al. 2020
Transfer: no significant differences between active tDCS + WM training and sham tDCS + WM training groups
Follow-up: NA

Jones et al. 2015
Transfer: both active and sham tDCS + WM training groups showed equivalent improvements in the transfer tasks
Follow-up (1 month): only the active tDCS + WM training groups maintained improvements for both trained and transfer tasks compared to the sham tDCS + WM training group

Jones et al. 2020
Transfer: significant improvements in the trained task for active tDCS + WM training compared to the sham tDCS + WM training group
Follow-up: NA

Ke et al. 2019
Transfer: significant improvements in a similar untrained version of the n-back (shape n-back) for the active tDCS + WM training group compared to the sham tDCS + WM training group
Follow-up: NA

Martin et al. 2013
Transfer: no significant differences between active tDCS + WM training and sham tDCS + WM training groups
Follow-up (4–5 weeks): no significant differences between active tDCS + WM training and sham tDCS + WM training groups

Nilsson et al. 2017
Transfer: no transfer to other cognitive domains
Follow-up: NA

Nissim et al. 2019
Transfer: significant improvements in 2-back task accuracy for the active tDCS + WM training group compared to the sham tDCS + WM training group
Follow-up: NA

Richmond et al. 2014
Transfer: no statistically significant differences between active tDCS + WM training and sham tDCS + WM training groups
Follow-up: NA

Ruf et al. 2017
Transfer: significant improvements in the 3-back for the task-congruent active tDCS + WM training group compared to the sham tDCS + WM training group; no significant differences for the task-incongruent group
Follow-up (3–9 months): improvements on trained and transfer tasks persisted for the task-congruent active tDCS + WM training group compared to the sham tDCS + WM training group; no improvements for the task-incongruent group

Shires et al. 2020
Transfer: no significant differences between active tDCS + WM training and sham tDCS + WM training groups for transfer tasks
Follow-up (1 month): no significant differences between active tDCS + WM training and sham tDCS + WM training groups

Stephens & Berryhill 2016
Transfer: the 2 mA tDCS + WM training group showed significantly greater far transfer compared to the sham 2 mA tDCS + WM training group (both in standard far transfer and WCPA, OT-DORA); the 1 mA tDCS + WM training group showed significantly greater far transfer compared to the sham 1 mA tDCS + WM training group for the OT-DORA test
Follow-up (1 month): 2 mA tDCS + WM training induced significantly greater far-transfer gains compared to the sham 2 mA tDCS + WM training group; no significant differences for the 1 mA tDCS + WM training group

Talsma et al. 2017
Transfer: no transfer to other cognitive domains
Follow-up: NA

Teixeira-Santos et al. 2022
Transfer: significant improvements in RAPM only for the active tDCS + WM training group
Follow-up (15 days): significant improvements in RAPM and forward digit span only for the active tDCS + WM training group

Weller et al. 2020
Transfer: no transfer to other cognitive domains
Follow-up (3 months): significant improvements in the trained task for the active left PFC 1 mA anodal tDCS + WM training group compared to the sham tDCS + WM training group
After study selection, coding commenced independently by two study authors (VP and MAS). For each study, we extracted information relevant to effect size calculation including sample sizes, means, and standard deviations, separately for each outcome measure for both the tDCS + WMT and the sham + WMT groups. To minimize subjectivity in our meta-analysis, all reported outcomes from each study were coded.
The only exceptions were the ΔC variables in Assecondi et al. (2022), which represent shifts in decision-making bias and strategy, and are not measures of improvement, and the cumulative recall and recognition outcomes from Au et al. (2022), which are direct measures of declarative memory learning from a secondary training task, and unrelated to WMT.
In addition, we also extracted information concerning potential moderating factors such as time point of assessment (immediate post-test vs. follow-up) and transfer distance (trained WM, untrained WM, episodic memory, or non-memory). Transfer distance [
] refers to the degree of similarity between training and transfer tasks, ranging from assessment versions of the trained WM tasks to non-memory tasks which are in a completely different cognitive domain. We also coded information related to several exploratory moderators such as age, length of the intervention, anodal stimulation site, stimulation duration, and stimulation intensity. These variables were classified as exploratory in our analyses because they vary between-studies and provide purely correlational information given the myriad other factors that vary between studies that could potentially also explain differences in effect size. Our primary moderators, time point and transfer distance, on the other hand, largely vary within-studies, and thus these analyses more closely approximate a causal framework by providing a degree of control over study-specific idiosyncrasies. These issues are described in greater length elsewhere [
]. Disagreements between coders were discussed and either resolved or reviewed by a third author (JA) for arbitration.
When data were not available, we resorted to two options. First, we emailed the authors of the original articles for the missing information. Second, we attempted to estimate the missing data from relevant figures if provided using WebPlotDigitizer [
Long-term effects of transcranial direct current stimulation combined with computer-assisted cognitive training in healthy older adults. NeuroReport. 2014;25:122-126. https://doi.org/10.1097/WNR.0000000000000080
] was excluded completely because data were only available for significant outcomes, which creates a biased effect size estimate for this study. Three outcome measures from Talsma et al., 2016 [
] were excluded (operation span and reaction times for two n-back tasks) because the direction of effects could not be determined from the reported statistics. However, since there was no obvious bias or systematicity to these missing effects, the remaining outcome measures from the study were included.
2.3 Effect size calculations and statistical analyses
Effect sizes were calculated as standardized mean differences between gain scores of the active and sham tDCS groups. These gain scores were calculated as the difference between pretest and posttest, or between pretest and follow-up, standardized by the pooled standard deviation at pretest [
]. Effect sizes corresponding to measures where lower scores indicate better performance, such as reaction time, were multiplied by −1 so that higher effect sizes reflect better performance for all measures.
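The effect size computation described above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis script; the function name is ours, and the small-sample correction reflects the Hedges' g effect sizes reported in the paper.

```python
import math

def smd_gain(pre_t, post_t, pre_c, post_c, n_t, n_c, sd_pre_t, sd_pre_c,
             higher_is_better=True):
    """Standardized mean difference between gain scores of the active (t)
    and sham (c) groups, scaled by the pooled pretest SD (Hedges' g)."""
    gain_t = post_t - pre_t
    gain_c = post_c - pre_c
    # pooled pretest standard deviation
    sd_pooled = math.sqrt(((n_t - 1) * sd_pre_t**2 + (n_c - 1) * sd_pre_c**2)
                          / (n_t + n_c - 2))
    d = (gain_t - gain_c) / sd_pooled
    # small-sample correction factor to convert d to Hedges' g
    j = 1 - 3 / (4 * (n_t + n_c - 2) - 1)
    g = j * d
    # flip sign for measures where lower scores are better (e.g., reaction time)
    return g if higher_is_better else -g
```

For example, a measure where both groups start at the same pretest mean but the active group gains one pooled SD more than sham yields g just under 1 after the small-sample correction.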
Due to the multilevel structure of our meta-analysis, with effect sizes for each outcome and time point nested within studies, we used robust variance estimation [
] to aggregate all effect sizes into one net effect with a random effects model. Robust variance estimation is a technique that allows us to account for dependencies within the data while utilizing information from all outcomes (rather than averaging all outcomes within a study together). This was done using a combination of the metafor and clubSandwich packages in R. We first used the “impute_covariance_matrix” function to compute a full sampling variance-covariance matrix. Since correlations between outcomes within a single study were not reported, we assumed a default correlation of 0.5 for most studies. However, effect sizes from several studies [
] also contained dependencies from multiple experimental groups being compared to the same control group. In order to account for the stronger correlations between outcomes due to these additional dependencies, correlations within these studies were assumed to be 0.75. We also re-computed the matrix with correlations of 0.1 and 0.9 for all studies to test the sensitivity of the model to extreme values. We then fed this variance-covariance matrix into the “rma.mv” function to run a multivariate meta-analysis, followed by the “conf_int” and “coef_test” functions to calculate a robust variance estimate of the overall effect size. This multivariate meta-analysis is essentially an intercept-only meta-regression. Moderation analyses were run in the same manner simply by adding covariates to this model.
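The covariance imputation and intercept-only model can be sketched in Python as an illustrative translation of the R workflow. The function names here are our own, and the sketch only covers the fixed-effect (GLS) part of the model; it omits the random-effects variance components and the robust (sandwich) standard errors that metafor and clubSandwich compute.

```python
import numpy as np

def impute_vcov(v, study_ids, r=0.5):
    """Build a block-diagonal sampling variance-covariance matrix,
    assuming a common correlation r between effect sizes within a study
    (analogous in spirit to clubSandwich's impute_covariance_matrix)."""
    v = np.asarray(v, dtype=float)
    k = len(v)
    V = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if study_ids[i] == study_ids[j]:
                V[i, j] = v[i] if i == j else r * np.sqrt(v[i] * v[j])
    return V

def gls_pooled_effect(g, V):
    """Intercept-only generalized least squares estimate of the overall
    effect, weighting by the inverse of the imputed covariance matrix."""
    g = np.asarray(g, dtype=float)
    X = np.ones((len(g), 1))          # intercept-only design matrix
    W = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ g)
    return float(beta[0])
```

Moderation analyses correspond to adding covariate columns to the design matrix X, exactly as described in the text.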
For visualization purposes, the figures show study-level effect sizes that were averaged together, with variance calculated using formula 24.3 on p. 229 from Borenstein (2009) [
]. Thus, the forest plots only show one effect size per study. However, the overall meta-analytic effect size and associated confidence interval, both as reported in the text and displayed in the figures, was calculated using the robust variance estimation described above. A forest plot containing all effect sizes from all outcomes in each study is available in the supplementary online materials (Figs. S1–S8).
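For the study-level averages shown in the forest plots, the variance of a mean of correlated effect sizes follows the Borenstein et al. (2009) formula cited above. A minimal sketch, assuming a common within-study correlation r (the function name is ours):

```python
import math

def composite_variance(variances, r=0.5):
    """Variance of the mean of m correlated effect sizes within one study
    (Borenstein et al., 2009, eq. 24.3): (1/m^2) * [sum(v_i) +
    sum over i != j of r * sqrt(v_i * v_j)]."""
    m = len(variances)
    total = sum(variances)
    for i in range(m):
        for j in range(m):
            if i != j:
                total += r * math.sqrt(variances[i] * variances[j])
    return total / m**2
```

Note that with r > 0 the composite variance shrinks more slowly than 1/m, which is why averaging correlated outcomes yields less precision than averaging independent ones.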
2.4 Publication bias
Publication bias, or the tendency of a field to preferably publish significant effects, was evaluated within our sample of studies using Egger's regression [
], followed by a sensitivity analysis estimating a range of effect sizes based on varying degrees of potential publication bias. First, we generated a funnel plot of each effect size against its standard error to visualize possible small-study effects, which could indicate, among other things, the existence of publication bias. Under conditions of no bias, effect sizes should appear symmetric around the mean, with large studies (indexed by low standard errors) clustering tightly together near the top, but with increasing variability in effect size in smaller studies closer to the bottom. Where bias is present, it is expected to disproportionately affect smaller studies, where only those with large and significant effects will get published while those with null or negative effects will remain in the proverbial “file drawer”. In order to quantify these effects, we used Egger's regression, which is essentially a meta-regression of effect size on study precision, traditionally measured by either the standard error or sampling variance. This was done using the same meta-regression methods described above with the metafor and clubSandwich packages, but using study precision as a continuous moderator. However, since both the standard error and the variance are correlated with the standardized mean difference effect sizes used in the current meta-analysis [
], we used a modified measure of standard error to reflect study precision, √((n_T + n_C)/(n_T·n_C)), which is strictly a function of sample size, as per previous recommendations [
]. Secondarily, regardless of the statistical result of Egger's regression, we also conducted a sensitivity analysis using the PublicationBias package in R, and calculated corrected effect sizes assuming significant results were 1.5–10 times more likely to be published than nonsignificant results. These values conservatively reflect estimates of publication bias greater than those found in a meta-meta-analysis of meta-analyses across a variety of disciplines and journals [
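A minimal sketch of this modified Egger-type regression, assuming two-group sample sizes per effect. The function name is ours, and the actual analysis used the metafor/clubSandwich meta-regression machinery rather than the plain weighted least squares shown here.

```python
import numpy as np

def eggers_regression(g, n_t, n_c):
    """Egger-type asymmetry test using a sample-size-based precision
    measure, se_mod = sqrt((n_t + n_c) / (n_t * n_c)), which avoids the
    artifactual correlation between an SMD and its standard error.
    Returns (intercept, slope) of the WLS regression of g on se_mod;
    a slope reliably different from zero suggests funnel asymmetry."""
    g = np.asarray(g, dtype=float)
    n_t = np.asarray(n_t, dtype=float)
    n_c = np.asarray(n_c, dtype=float)
    se_mod = np.sqrt((n_t + n_c) / (n_t * n_c))
    X = np.column_stack([np.ones_like(g), se_mod])
    W = np.diag(1.0 / se_mod**2)      # inverse-variance-style weights
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ g)
    return float(beta[0]), float(beta[1])
```

With no small-study effect (identical effect sizes across sample sizes), the slope is zero and the intercept recovers the common effect.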
Across 265 outcome measures within 18 studies, our overall analysis found a net effect of g = 0.14 (p = 0.03, 95% CI [0.01, 0.26]; Fig. 2), indicating a small net benefit of tDCS on improving later cognitive performance in both trained and untrained outcome measures over and above WMT alone. Sensitivity analyses using within-study correlations of r = 0.1 or r = 0.9 respectively revealed nearly identical effect size estimates of g = 0.136 (p = 0.034, 95% CI [0.012, 0.260]) and g = 0.137 (p = 0.036, 95% CI [0.010, 0.264]). Thus, our effects are robust against different specifications of within-study correlations. Heterogeneity was significant and substantial (Q = 1124.11, p < 0.01, I2 = 77%), indicating the likely existence of moderating variables that influence the effect of tDCS on transfer.
Fig. 2Overall Forest Plot. We observed an overall small, but significant effect of tDCS on improving transfer outcomes after WMT. Note that the overall meta-analytic effect (g = 0.14) is a robust variance estimate that utilizes data from all outcomes, and also averages across all timepoints. See Fig. 3, Fig. 4 for a breakdown of post-test and follow-up timepoints.
Fig. 3Forest plot at post-test. We observed a small, but significant effect of tDCS on improving transfer outcomes at post-test. All assessments took place 1 business day after the end of training.
Fig. 4Forest plot of effects at follow-up. Effects remained significant at follow-up. The median follow-up time was 1 month, but ranged from 1 week to 1 year.
Neither of our hypothesized moderators was significant (Table 2). Although the effects of tDCS at both immediate post-test (g = 0.12, p = 0.04; Fig. 3) and follow-up (g = 0.19, p < 0.05; Fig. 4) were individually significant, our meta-regression model demonstrated that these effect sizes did not differ significantly from each other (b = 0.06, p = 0.30; Table 2). Similarly, there was no difference in effect sizes as a function of transfer distance (b = −0.01, p = 0.88), although untrained WM measures were the only subset of outcomes that was individually significant and had the largest effect size (g = 0.19, p = 0.01). None of the other outcome types (trained WM, episodic memory, and non-memory tasks) were significant in and of themselves. As a post-hoc analysis, we also re-ran the meta-regression with different groupings, but similarly found no significant effects when collapsing outcome types into WM and non-WM tasks (b = 0.02, p = 0.85), or untrained WM and other tasks (b = 0.03, p = 0.58).
(continuous variable): n = 265, k = 18, b = 0.09, 95% CI [−0.08, 0.26]
Separate meta-analytic estimates are presented for different data subsets, along with meta-regression statistics. For the meta-regressions, the Timepoint covariate was coded as a dummy variable (1 = Follow-up, 0 = Post-test), Transfer Distance was coded continuously (0 = Trained WM, 1 = Untrained WM, 2 = Episodic Memory, 3 = Non-Memory), Age was coded as a dummy variable (0 = Young Adults, 1 = Old Adults), Stimulation Site was coded as a dummy variable (0 = Left PFC, 1 = Right PFC), Training Length was coded continuously based on the number of intervention days, Stimulation Duration was coded continuously based on the number of minutes of stimulation, and Stimulation Intensity was coded continuously based on current intensity in milliamperes (mA).
n = sample size (# of outcomes); k = sample size (# of studies); g = Hedges' g effect size; CI = confidence interval; p = p-value; b = regression coefficient.
a Denotes exploratory moderators (significance should be interpreted with caution, see Discussion).
b Note that a few studies also targeted the right posterior parietal cortex (PPC; k = 2, n = 21) or alternated between the right PPC and right PFC (k = 3, n = 24), but were left out of the meta-regression in order to focus on the two most commonly targeted stimulation sites (right and left PFC). Incidentally, their subset effect sizes were g = 0.112 (right PPC) and g = 0.05 (alternating right PFC/PPC), and non-significant.
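To make the coding concrete, the moderator scheme described in the table note might be represented as follows. The rows and values below are hypothetical, for illustration only.

```python
# Hypothetical effect-size rows illustrating the moderator coding scheme:
#   timepoint: 0 = post-test, 1 = follow-up
#   distance: 0 = trained WM, 1 = untrained WM, 2 = episodic memory, 3 = non-memory
#   age: 0 = young adults, 1 = old adults; site: 0 = left PFC, 1 = right PFC
#   days/minutes/mA: continuous training length, stimulation duration, intensity
effects = [
    {"g": 0.10, "timepoint": 0, "distance": 1, "age": 0, "site": 1,
     "days": 5, "minutes": 20, "mA": 2.0},
    {"g": 0.25, "timepoint": 1, "distance": 2, "age": 1, "site": 0,
     "days": 10, "minutes": 25, "mA": 1.0},
]

# A meta-regression design matrix prepends an intercept column to the
# moderator(s) of interest (here: timepoint and transfer distance):
design = [[1, e["timepoint"], e["distance"]] for e in effects]
```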
Regarding our exploratory moderators, one significant effect emerged, indicating that studies stimulating the right prefrontal cortex (PFC; g = 0.26, p = 0.049) outperformed those targeting the left PFC (g = 0.08, p = 0.20). Meta-regression showed a significant difference between these two effect sizes (b = 0.22, p = 0.02). However, this exploratory moderation effect should be interpreted cautiously (see Discussion).
3.3 Study quality and risk of bias
Overall, all studies fell between low risk and some risk of bias, driven primarily by a lack of pre-registration, which introduces potential bias within the “selection of reported results” domain. See Supplementary Table S1.
3.4 Publication bias
We found no statistical evidence of publication bias or other small study effects in the funnel plot (Fig. 5). Egger's regression showed no significant relationship between precision and effect size (b = 0.40, 95% CI [−2.11, 2.91], p = 0.67), with a non-significant intercept of 0.01 (95% CI [−0.78, 0.80], p = 0.98). Furthermore, in case we were underpowered to detect publication bias if it did exist, our sensitivity analyses demonstrated that our overall effect would not be nullified even if significant studies were 1.5 times (g = 0.11, 95% CI [0.02, 0.19], p = 0.02) or 5 times (g = 0.07, 95% CI [0.01, 0.13], p = 0.04) more likely to be published than non-significant studies. Publication bias would have to be 8 times more likely for our results to begin to lose significance (g = 0.06, 95% CI [−0.01, 0.12], p = 0.052).
Fig. 5Funnel Plot. There is no statistical evidence of asymmetry based on Egger's regression, and thus no detectable small study effects or publication bias in our analysis. Similarly, sensitivity analyses also found no appreciable effects of publication bias.
The present meta-analysis aimed to investigate the extent to which tDCS facilitates transfer of WMT onto trained and untrained outcomes after intervention, even in the absence of further stimulation. Overall, across 18 studies reporting data on 265 outcomes measured at various time points after training, our results showed a small, but significant benefit of tDCS on enhancing cognitive performance (g = 0.14, 95% CI [0.01, 0.26]), over and above WMT alone. Moreover, these benefits were present not only immediately after training, but were also sustained at follow-up, which took place between 1 week and 1 year after training, with a median follow-up time of 1 month. The durability of these effects, which arise from short training regimens ranging from 3 to 20 days (median: 5.5 days), speak to the practical utility of tDCS as a tool to enhance long-term learning and skill consolidation.
Although the reported effect size is small, the implications are important, not just for the use of tDCS, but also for the WMT field, independently of tDCS. Previous work and a number of meta-analyses have already demonstrated that tDCS can enhance learning and memory consolidation of task-specific memoranda [
]. However, the current meta-analysis goes one step further and provides proof of principle that it can also be used to facilitate the generalization and transfer of this learning onto a diverse array of other tasks. These sorts of transfer effects are a controversial phenomenon, especially in the WMT literature [
], and our present results contribute to this long-standing debate. Inasmuch as tDCS facilitates pre-existing task-relevant neural activity, the presence of tDCS-enhanced transfer presupposes neural activity during training that eventually leads to this transfer. Thus, our results can serve as a proxy for better understanding the nature and extent of transfer effects that arise from WMT. We provide evidence that this transfer occurs non-specifically across a variety of both trained and untrained tasks, although future research will need to clarify the true extent of this transfer outside the WM domain. Given the controversy surrounding "far" transfer effects to broad cognitive domains or skills relevant to daily living [
], tDCS may be a useful tool moving forward to increase the signal-to-noise ratio in this literature.
Despite the presence of significant heterogeneity in the current meta-analysis, suggesting the presence of factors that may moderate the effect size, we were unable to convincingly detect what these factors may be. There were no differential effects between post-test and follow-up time points, as we hypothesized, nor were there differential effects among different types of outcomes as a function of transfer distance. Nevertheless, despite the highly overlapping confidence intervals between different task types, the most robust effects manifested among untrained WM outcomes, which were the only subset of outcomes that were statistically significant on their own (i.e., significant compared to zero, but not compared to other task types). Interestingly, these effects were even more robust than those of the trained WM measures, which were assessment versions of the training task administered post-intervention in the absence of stimulation. However, we note that many studies did not measure or report effects on the trained WM task; thus, it is possible that a larger sample would show a more robust pattern of effects. All that notwithstanding, it would not be surprising if tDCS effects truly do manifest more easily with untrained WM measures rather than trained ones, considering the well-established training-specific improvements known to arise from WMT alone [
], which may eclipse the marginal gains from tDCS. Thus, it is possible that similar, but not identical, tasks may more reliably evince tDCS effects, as they are similar enough to the trained task to induce learning but dissimilar enough to leave room for improvement in the sham groups, who also undergo WMT themselves.
Besides the hypothesized moderators described above, there was also one significant meta-regression effect among our exploratory moderators. Specifically, studies that stimulated the right PFC (anode over F4) outperformed those that stimulated the left PFC (anode over F3). Despite the relatively strong effect size associated with this meta-regression (Table 2), we caution against over-interpreting these exploratory results. First, it is critical to understand why these moderators were labeled exploratory. Unlike our hypothesized moderators, they are neither theory-driven nor do they vary within studies, a property that provides some degree of control over study-specific idiosyncrasies. Between-study moderators are purely correlational and can be driven by study-specific factors other than the analyzed moderator, as elaborated elsewhere [
]. This can be especially problematic in meta-analyses with a small total number of studies, such as ours with nineteen, which allows spurious correlations to arise. Thus, while these results demonstrate that existing studies targeting the right PFC, as typically conducted in WMT studies, have been relatively more successful at eliciting reliable transfer effects, researchers should not conclude that stimulating the right PFC is more effective than stimulating the left for eliciting transfer. Rather, they should consider all factors associated with this handful of studies, such as the types of stimulation parameters and outcomes they happened to evaluate, as well as a myriad of other variables, when planning future research.
The general and durable effects of tDCS-enhanced WMT on broad cognitive measures are in agreement with a previous meta-analysis that found the strongest effects of tDCS on WM within the context of training rather than on performance during or immediately after stimulation [
]. Accordingly, other meta-analyses that have examined the immediate effects of tDCS on WM performance within a single-session have yielded mixed results, with small and scattered effects [
]. Cumulatively, this suggests that the immediate impact of tDCS on cortical excitability may be minimal in terms of boosting cognition, but may still have downstream effects on learning and consolidation that can be detected days or even months after the initial stimulation period. The exact nature of what is being learned or consolidated is beyond the scope of this paper to address, but appears to go beyond just task-specific strategies, as even performance on untrained tasks of varying degrees of similarity is improved. However, in contrast with these results, Nilsson et al. [
] found no meta-analytic effect of tDCS-enhanced WMT on WM tasks across seven studies. Nevertheless, they did find a small positive effect size but may have been underpowered to detect its significance. Moreover, apart from their own empirical study, the only study in their meta-analysis with a negative effect estimate under their meta-analytic criteria actually showed a positive overall effect in the original report [
]. All other included studies reported small to moderate positive effect sizes consistent with the range reported here (Table 2).
Finally, we address publication bias, a concern that can affect the integrity of any meta-analysis if studies reporting significant results are more likely to be published than those reporting null results. We argue that such bias is minimal in our current analyses, on both theoretical and empirical grounds. First, given the multiple transfer and training tasks, the individual studies in our meta-analysis have many criteria by which they might be published. In other words, a study could become publishable if even just one of those tasks showed significant effects, or if there was an effect on training but not transfer, meaning that the rest of the included transfer tasks would be published alongside, regardless of their significance. This is borne out by our overall forest plot (Fig. 2), which for display purposes averages all transfer outcomes within a study into one aggregate effect. Here, only 3 out of the 19 studies we analyzed show a significant overall transfer effect, whereas the other 16 show confidence intervals that overlap with zero. Thus, there exists a mechanism in our corpus of studies that pushes many non-significant transfer results into the published record. Accordingly, we did not detect any statistical evidence of publication bias or small-study effects (Fig. 5). Moreover, a recently published meta-meta-analysis suggests that publication bias in the field of psychology is generally very low [
]. Its authors estimate that significant results are only about 1.54 times more likely to be published than non-significant results. Nevertheless, we ran a sensitivity analysis according to the procedure laid out by Mathur and VanderWeele [
], and found that significant results would have to be over 10 times more likely to be published in order to nullify our results.
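The Mathur and VanderWeele sensitivity analysis asks how severe selective publication would have to be to nullify a pooled effect. A simplified sketch of the underlying logic is shown below: "nonaffirmative" studies (non-significant or negative), assumed to be suppressed by a factor eta, are up-weighted by eta in an inverse-variance weighted mean. This illustrates the general idea only and is not the authors' exact model, which also accounts for between-study heterogeneity.

```python
import numpy as np

def corrected_estimate(effects, ses, eta):
    """Bias-corrected pooled estimate under an assumed selection ratio.

    Assumes "affirmative" studies (significantly positive at the
    two-sided .05 level, z > 1.96) are eta times more likely to be
    published than nonaffirmative ones. Nonaffirmative studies are
    therefore up-weighted by eta to undo the assumed selection.
    """
    effects = np.asarray(effects, float)
    ses = np.asarray(ses, float)
    affirmative = (effects / ses) > 1.96       # significant and positive
    w = 1.0 / ses**2                           # inverse-variance weights
    w = np.where(affirmative, w, eta * w)      # up-weight suppressed studies
    return float(np.sum(w * effects) / np.sum(w))
```

Raising eta pulls the corrected estimate toward the nonaffirmative studies; the sensitivity question is how large eta must be before the corrected estimate (or its confidence bound) crosses zero.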
5. Conclusions and future directions
The present meta-analysis found a small but significant effect of multi-session tDCS coupled with WMT on facilitating transfer broadly onto a variety of trained and untrained tasks. In particular, effects were most pronounced for untrained WM tasks, which may represent the ideal subset of outcomes to leverage the effects of tDCS during WMT, given that they are similar enough to the trained task to elicit reliable learning effects, but dissimilar enough that the gains are not masked by the robust training-specific improvements that WMT alone tends to elicit [
]. Moreover, the tDCS advantage persisted for weeks to months after training, suggesting that tDCS facilitated not only greater learning but also resistance to forgetting over time. This speaks to the practical utility of using tDCS to boost long-term learning in the real world.
However, given the small effect sizes reported herein, we would like to emphasize two important points. The first is that transfer from WMT alone is already known to be difficult to elicit, with some estimates for certain tasks indistinguishable from zero, and median effects for all tasks ranging roughly between g = 0.2 and g = 0.5 [
]. This range already sets a soft ceiling for tDCS effects, as we would not reasonably expect enhancements from tDCS to exceed what can be learned in the first place. Thus, our overall effect size of g = 0.14 seems a reasonable and realistic degree of enhancement, representing an approximately 25%–50% increase over and above WMT alone. The second point is that, given that the lower range of transfer from WMT borders on zero, transfer cannot and should not be expected to occur for all cognitive tasks, and the field lacks a clear theoretical understanding of which types of outcomes do and do not show transfer. In the absence of this understanding, effect sizes derived from extant studies are likely diluted by a number of experimental outcomes from which no learning or transfer is actually occurring, and a more targeted approach in the future may yield stronger effects.
Funding
This work was supported by the KU Leuven Postdoctoral Mandate (PDM) for V.P., the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health (NIH), through Grant TL1 TR001415 to J.A., and the National Institute on Aging through Grants R01AG049006 and K02AG054665 to S.M.J. E.S. and L.B. were supported by the NIH (R01 AG060981-01) and the Alzheimer's Drug Discovery Foundation (ADDF), and E.S. by the Association for Frontotemporal Dementia (AFTD) via GA 201902–2017902. L.B. was also supported by the Blavatnik Family Foundation.
Declarations of interest
None.
CRediT authorship contribution statement
Valentina Pergher: conceived the original idea, wrote the first draft of the manuscript, conducted the systematic search, and coded the manuscripts. Jacky Au: performed the computations, ran the meta-analysis, and wrote the first draft of the manuscript. Mahsa Alizadeh Shalchy: conducted the systematic search and coded the manuscripts. All authors discussed the results and contributed to the final draft.
Appendix A. Supplementary data
The following is the Supplementary data to this article:
Working memory training does not improve performance on measures of intelligence or other measures of “far transfer” evidence from a meta-analytic review.
Induction of long-term potentiation-like plasticity in the primary motor cortex with repeated anodal transcranial direct current stimulation – better effects with intensified protocols?.
Effects of transcranial direct current stimulation paired with cognitive training on functional connectivity of the working memory network in older adults.
Long-term effects of transcranial direct current stimulation combined with computer-assisted cognitive training in healthy older adults. NeuroReport. 2014;25:122–126. https://doi.org/10.1097/WNR.0000000000000080