
Feasibility Study of Assessing the Preclinical Alzheimer Cognitive Composite (PACC) Score via Videoconferencing


University of Groningen

Feasibility Study of Assessing the Preclinical Alzheimer Cognitive Composite (PACC) Score via Videoconferencing

Seghezzo, Giulia; Van Hoecke, Yvonne; James, Laura; Davoren, Donna; Williamson, Elizabeth; Pearce, Neil; McElvenny, Damien; Gallo, Valentina

Published in:

Journal of Neurology

IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document version below.

Document Version

Early version, also known as pre-print

Publication date: 2021

Link to publication in University of Groningen/UMCG research database

Citation for published version (APA):

Seghezzo, G., Van Hoecke, Y., James, L., Davoren, D., Williamson, E., Pearce, N., McElvenny, D., & Gallo, V. (2021). Feasibility Study of Assessing the Preclinical Alzheimer Cognitive Composite (PACC) Score via Videoconferencing. Journal of Neurology.

Copyright

Other than for strictly personal use, it is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license (like Creative Commons).

Take-down policy

If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Downloaded from the University of Groningen/UMCG research database (Pure): http://www.rug.nl/research/portal. For technical reasons the number of authors shown on this cover page is limited to 10 maximum.


Journal of Neurology

Feasibility study of assessing the Preclinical Alzheimer Cognitive Composite (PACC) Score via videoconferencing
--Manuscript Draft--

Manuscript Number: JOON-D-20-02865R1

Full Title: Feasibility study of assessing the Preclinical Alzheimer Cognitive Composite (PACC) Score via videoconferencing

Article Type: Original Communication

Corresponding Author: Valentina Gallo, University of Groningen, NETHERLANDS

Corresponding Author's Institution: University of Groningen

Corresponding Author E-Mail: v.gallo@rug.nl; v.gallo@qmul.ac.uk

First Author: Giulia Seghezzo

Order of Authors: Giulia Seghezzo; Yvonne Van Hoecke; Laura James; Donna Davoren; Elizabeth Williamson; Neil Pearce; Damien McElvenny; Valentina Gallo

Funding Information: Drake Foundation (EPMSZO61), Prof Neil Pearce

Abstract:

Background

The Preclinical Alzheimer Cognitive Composite (PACC) is a composite score which can detect the first signs of cognitive impairment, which can be of importance for research and clinical practice. It is designed to be administered in person; however, in-person assessments are costly, and are difficult during the current COVID-19 pandemic.

Objective

To assess the feasibility of performing the PACC assessment with videoconferencing, and to compare the validity of this remote PACC with the in-person PACC obtained previously.

Methods

Participants from the HEalth and Ageing Data IN the Game of football (HEADING) Study who had already undergone an in-person assessment were contacted and re-assessed remotely. The correlation between the two PACC scores was estimated. The difference between the two PACC scores was calculated and used in multiple linear regression to assess which variables were associated with a difference in PACC scores.

Findings

Of the 43 participants who were invited to this external study, 28 were re-assessed. The median duration in days between the in-person and the remote assessments was 236·5 days (7·9 months) (IQR 62·5). There was a strong positive correlation between the two assessments for the PACC score, with a Spearman correlation coefficient of 0·75 (95% CI 0·56, 0·95). The multiple linear regression found that the only predictor of the PACC difference was the time between assessments.

Interpretation

This study provides evidence on the feasibility of performing cognitive tests online, with the PACC tests being successfully administered through videoconferencing. This is relevant, especially during times when face-to-face assessments cannot be performed.

Response to Reviewers:

We thank the reviewers for their valuable comments, and we enclose in the document our responses, and the data relating to the registration of the protocol for this study.

PROTOCOL

The protocol for this study was published, as requested by the journal:

Seghezzo, G., Van Hoecke, Y., James L., Davoren, D., Williamson, E., Pearce, N., McElvenny, D., Gallo, V. “Feasibility study of assessing the Preclinical Alzheimer Cognitive Composite (PACC) Score via Videoconferencing” Protocol Exchange (2020) DOI: 10.21203/rs.3.pex-1256/v1

REVIEWER 1 COMMENTS

1. It is very unclear why a Spearman rather than a Pearson correlation was used. At the very least either the Pearson or both should be presented.

Spearman was used due to the small sample size and the non-normal distribution of the two PACC scores. Its limitation, however, is that it only assesses whether there is a (monotonic) association between the two values; it cannot characterise the form of the relationship in the way that Pearson can. The Pearson correlation has now replaced the previous Spearman correlation.
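For illustration, a minimal sketch of how the two coefficients could be computed, assuming the paired in-person and remote PACC scores are available as arrays; the variable names, placeholder values, and the Fisher-z confidence interval shown are assumptions for illustration, not the study's own code:

```python
import numpy as np
from scipy import stats

# Paired PACC scores; placeholder values for illustration only.
in_person = np.array([0.12, -0.40, 0.55, -0.10, 0.30])
remote = np.array([0.20, -0.35, 0.48, 0.05, 0.25])

r_pearson, p_pearson = stats.pearsonr(in_person, remote)      # linear association
rho_spearman, p_spearman = stats.spearmanr(in_person, remote)  # rank (monotonic) association

# Approximate 95% CI for the Pearson coefficient via Fisher's z-transform.
n = len(in_person)
z = np.arctanh(r_pearson)
half_width = 1.96 / np.sqrt(n - 3)
ci_low, ci_high = np.tanh([z - half_width, z + half_width])
```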

2. There seems no real need for the figures-they do not add anything to the evaluation.

- Figure 2: Scatterplots show the correlation between the two PACC scores and their distribution;

- Figure 3: The Bland-Altman plot shows the agreement between the two scores, and how there are no differential biases between the scores. It is difficult to convey, in words, the bias present;

- Figure 4: This is a visual representation of the possible effect of time between assessments on the difference between the PACC scores. This figure aids in the interpretation of the beta coefficient of the linear regression.

Any of these could be moved to the supplementary material if all reviewers agree, however we believe they aid the interpretation of results.

3. While the authors are aware their sample is biased to those who are more computer literate, this point needs to be expanded.

There is some evidence that our sample might be biased towards participants who are more computer literate, as it is based on participants who were willing to take part in a second, remote assessment. This makes it difficult to generalize our results to the general public, including a less computer-literate population. Therefore, while it does not affect the association found, it will hinder the external validity of our results, meaning the same association might not be found when a wider range of computer literacy is included in the sample. These considerations have been added to the discussion, together with some references to substantiate the argument.

4. The authors need to note that the education level of those who refused the study was significantly less than those who accepted. A chi-square comparison is appropriate (those who are above versus those at GCSE or below).

There is a statistically significant difference between the education level of those who participated in the remote assessment and those who did not, with those participating in the remote assessment having a higher educational qualification (Chi-square p=0.024). This has now been made explicit in the text. Again, while hampering the generalizability of the results, this has no impact on their internal validity. Such considerations have been added to the discussion.
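A minimal sketch of the kind of comparison described, assuming a 2x2 table of education (above GCSE versus GCSE or below) by participation in the remote assessment; the counts below are hypothetical placeholders, not the study data:

```python
from scipy.stats import chi2_contingency

# Rows: education above GCSE / GCSE or below; columns: re-assessed remotely yes / no.
# Hypothetical counts for illustration only.
table = [[15, 3],
         [13, 12]]

# Yates' continuity correction is applied by default for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```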

5. A discussion of the problems of evaluating those with more advanced symptoms than seen here.

This study only provides evidence on the remote administration of the PACC in a substantially cognitively intact population. It is difficult to extrapolate the same considerations to a more compromised population. However, the PACC is designed to assess subtle cognitive changes detectable before any clinical diagnosis of cognitive impairment; therefore, our population represents a typical population to be assessed with this composite score. We have now made explicit in the discussion that generalizing the present data to a cognitively impaired population requires further evidence.

Reviewer #4 Comments

This is a timely manuscript on an important topic, but unfortunately there are some major problems with the research design.

The main difficulty is the test-retest interval and the use of a self-selected, convenience sample. The sample size is small and the participation rate is only 65%. The patients that declined participation may have incurred changes in mental status during the long test-retest interval. To conclude that the remote assessment is feasible is not a strong enough conclusion for publication.

The original HEADING sample was selected randomly in batches from a list of all members in the PFA database meeting the study requirements. No bias is present in this initial invitation into the main HEADING study. The subsample of this sub-study initially comprised all HEADING participants assessed at the time the COVID-19 lockdown measures were introduced. The data already available from the participants who had previously been assessed in person were used to validate the remote assessments. Therefore, the original sample was neither a convenience nor a self-selected sample. Nonetheless, the response rate is less than ideal. Also, there is some evidence suggesting the presence of a selection bias: participants assessed remotely were more educated and younger. This hampers the generalizability of the results, which has now been discussed extensively in the discussion.

The time between assessments was long and varying, ranging from 3 months to 9 months. While a long time had passed between assessments, the original research on the PACC score found that a significant difference in scores appeared at the earliest at 12 months [7]. The participants who did not take part in the remote assessment had a median time between assessments of 266 days (p25, p75: 206, 268), which was about a month longer than the median time between assessments of 236.5 days (p25, p75: 194.5, 257) of those who did participate in the remote assessment.

We indeed found a difference in PACC among those re-assessed and those who declined. This information has been added in Table 1, and discussed in the discussion.

The correlation between the in-person and remote testing is statistically significant but far from ideal. It accounts for 56% of the variance. Should be compared with the PACC reliability as originally established.

The correlation is high, and the large confidence interval is due to the low power of this study; this is considered together with the other limitations presented in the manuscript. The original research on the PACC does not provide a reliability estimate for the composite score; however, there is research on the individual components of the PACC. The DSST has the highest reliability of 0.86 [.85, .87], followed by the LMDR with 0.65 [.63, .67] and the MMSE with 0.49 [.47, .52] [8]. This study uses the FNAME-12, while the FNAME-16 was found to have a reliability of 0.62 (p<0.001) [9]. However, the reliability of a study depends on many methodological and personal factors, including the cognitive function of a participant, along with the time between assessments, potential learning effects, measurement error, and random error, including regression to the mean [10]. It should be noted that the studies mentioned had a longer time between assessments (on average, 12 months), which introduces more variability the longer the interval, and used different tests [10]. Therefore, given these considerations, the reliability of the individual tests cannot be compared directly; however, it is broadly in line with the correlation found in this study.

If the question is whether the remote test can supplant the in-person assessment, the design would ideally be a random allocation to each testing condition at baseline and a 2-3 week follow-up. The design would be counterbalanced, ½ getting each format first.

We agree that this would be the ideal study design to test whether the remote PACC could entirely replace the in-person one. In fact, we had already prepared a protocol for such a study as an add-on to the BRAIN/HEADING studies. Unfortunately, however, with the sudden implementation of the lockdown measures, such a study was no longer feasible for some time and would have been delayed. We therefore decided to leverage the resources we had available at the time of the lockdown to start exploring this concept. For this reason, we are only sharing our considerations on the feasibility of administering the PACC remotely and on whether it produces results similar to those already obtained in person. We have noted in the conclusion that such a study will eventually be needed.

It would be helpful to know what the test-retest reliability of the PACC is in the basic psychometric validation research.

See above. We have added considerations on the PACC reliability in the manuscript, and have responded fully to the reviewer's previous question.

The platform used for telemedicine is important as each one presents images at a different size. What platform was used is only cursorily described. I wonder how the visual stimuli could possibly be presented on a smart phone.

This is a good point, and we agree that the videoconferencing device could have made a difference (e.g. in the Face-Name test, where participants are shown a face paired with a name and occupation, the level of detail can differ with the size of the device). Given that this was a pragmatic study, we preferred to include any participant, irrespective of their access to different devices. Although we collected information on the software used, unfortunately we did not collect information on the device used. Retrospectively, we should have collected this information as well, adjusting for both software and device. Stillerova et al. performed video assessments using multiple software packages and various devices and did not find any difference among modes; however, we cannot confirm the same for our study.

I am troubled by the mailing of test materials as opposed to a strictly screen sharing procedure as has been utilized in the literature - and is becoming routine in this pandemic environment. For example, the DSST would be presented on the screen as the examiner commences timing, rather than patients, some of whom are cognitively impaired, essentially timing themselves.

As described in the text, in order to ensure maximal comparability with the in-person test, the participants did not time themselves: the examiner commenced timing when telling the participant to begin, after completing the practice portion of the DSST and explaining the test to the participant. Showing the key of the DSST on the screen would mean that the participant has to look at the screen and then look at their sheet to write their response; this would slow them down and alter the results of the DSST, which assesses psychomotor speed.

There are online versions of the DSST; however, since the original in-person DSST was administered on paper, we aimed at keeping it as similar as possible to minimise variability. Furthermore, by keeping the assessments as similar as possible, minimal computer skills were required for the assessment, besides screen sharing for the FNAME. The test was in a sealed envelope, with clear instructions not to open it prior to instruction by the examiner, and, once complete, the participants held up their DSST worksheet to the camera so the examiner could mark how far they had completed the DSST. The DSST was then immediately placed in the return envelope in front of the assessor.

I note that some patients had interrupted internet signal and confusion in setting up. This should be expanded upon.

The interrupted internet signal was adjusted for in the analysis, as this was recorded during each videoconference (by recording the internet speed, in mbps). All participants were sent a simple step-by-step instruction guide on how to use Skype or Zoom if they were not familiar with any videoconferencing software. Once on the call, the participants did not have to do anything, as the research assistants facilitated the call, with screen sharing when necessary.

There are copyright protections for tests (eg MMSE, WMS) to prevent patients having access to test materials when there is no professional oversight. What is to prevent a patient from sharing this information with others? Did the authors have permission from test publishers to mail the stimuli?


We have checked with the London School of Hygiene and Tropical Medicine copyright team for permission. There are no issues with the MMSE worksheet, as all that was sent was a blank page with space for a sentence to be written and a picture of two overlapping pentagons, which hold no intellectual property. The WMS was only given orally, and the participants were watched to ensure they did not write anything, which they did not; the same applies to the FNAME. The DSST worksheet was sent to the participants in a sealed envelope with clear instructions not to open it prior to the assessment, and it was immediately returned after the completion of the remote assessment. Nonetheless, we requested a retrospective licence for using the test remotely; we appreciate the concern, and in future we will add a note warning participants not to photocopy the worksheets and to return them as they were received.

Given all of these potential confounds it is surprising that a correlation of 0.75 was found. If the sample size was sufficient and the design counterbalanced the conclusion would have greater credibility.

Of course, a larger sample size and a counterbalanced study design would provide greater credibility to our study. The small sample size does decrease the power to detect significant results in this instance, increasing the chance of an overestimation of the true effect estimate. However, Figure 2 and Figure 3 show that there is agreement between the two assessments. Given the COVID restrictions, the researchers could not perform an appropriate counterbalanced design; however, we think sharing the results found is still worthwhile for the field. See conclusions.

Author Comments:

Prof Valentina Gallo
Campus Fryslan
University of Groningen
Leeuwarden, The Netherlands
email: v.gallo@rug.nl
https://www.rug.nl/staff/v.gallo/research

Journal of Neurology
Editor-in-chief Prof Roger Barker

January 8th, 2021

Dear Roger,

Thanks for the interesting and insightful comments we have received from the Journal of Neurology reviewers.

As requested by the editorial board, we have registered the protocol for this study; moreover we have addressed all the reviewer comments in the present revised version.

We have been invited by more than one source to share our positive experience of continuing our research during the lockdown and the COVID-19 restrictions. We therefore look forward to citing this paper, which outlines our efforts in doing so, including the discussion of its strengths and limitations.

Please find enclosed a revised version of the manuscript entitled "Feasibility study of assessing the Preclinical Alzheimer Cognitive Composite (PACC) Score via videoconferencing", alongside the rest of the material.

We thank you in advance for considering our paper once more.

Yours faithfully, Prof Valentina Gallo


Feasibility study of assessing the Preclinical Alzheimer Cognitive Composite (PACC) Score via videoconferencing

Giulia Seghezzo1, Yvonne Van Hoecke2, Laura James1, Donna Davoren2, Elizabeth Williamson2, Neil Pearce2, Damien McElvenny2,3,4, Valentina Gallo1,2,5

1. Institute of Population Health Sciences, Queen Mary University of London, London, UK
2. London School of Hygiene and Tropical Medicine
3. Institute of Occupational Medicine, Edinburgh, UK
4. Centre for Occupational and Environmental Health, University of Manchester, Manchester, UK
5. Campus Fryslân, University of Groningen, Leeuwarden, the Netherlands

Corresponding author
Prof Valentina Gallo
Campus Fryslân
University of Groningen
Leeuwarden
The Netherlands
v.gallo@rug.nl

Abstract

Background

The Preclinical Alzheimer Cognitive Composite (PACC) is a composite score which can detect the first signs of cognitive impairment, which can be of importance for research and clinical practice. It is designed to be administered in person; however, in-person assessments are costly, and are difficult during the current COVID-19 pandemic.

Objective

To assess the feasibility of performing the PACC assessment with videoconferencing, and to compare the validity of this remote PACC with the in-person PACC obtained previously.

Methods

Participants from the HEalth and Ageing Data IN the Game of football (HEADING) Study who had already undergone an in-person assessment were re-contacted and re-assessed remotely. The correlation between the two PACC scores was estimated. The difference between the two PACC scores was calculated and used in multiple linear regression to assess which variables were associated with a difference in PACC scores.

Findings

Of the 43 participants who were invited to this external study, 28 were re-assessed. The median duration in days between the in-person and the remote assessments was 236·5 days (7·9 months) (IQR 62·5). There was a strong positive correlation between the two assessments for the PACC score, with a Spearman correlation coefficient of 0·75 (95% CI 0·56, 0·95). The multiple linear regression found that the only predictor of the PACC difference was the time between assessments.

Interpretation

This study provides evidence on the feasibility of performing cognitive tests online, with the PACC tests being successfully administered through videoconferencing. This is relevant, especially during times when face-to-face assessments cannot be performed.

Key Words: Telemedicine, Cognitive Testing, Cognitive Decline, Mild Cognitive Impairment

Declarations

Acknowledgements: We are grateful to all the HEADING study participants for the time, interest, and commitment they have shown in contributing to the data collection. Thank you to the Professional Footballers' Association for their continued support with recruitment. We would like to thank Prof Carol Brayne, who has provided very valuable input on each of the phases of the HEADING study by chairing the HEADING study Independent Oversight Committee (IOC). Thanks also to the IOC Members for their invaluable advice and guidance throughout: Bill Treadwell, Simon Jones, Dr. Collette Griffin, Professor Sinead Langan, Tim Lindsay, Lauren Pulling, Tim Stevens, John Bramhall, Richard Jobson and Charlotte Cowie. We would like to thank Kirsty Lu and Sebastian Crutch at UCL, who assisted in the facilitation and interpretation of the neuropsychological tests. We are grateful to Ms Saba Mian, who trained G Seghezzo, L James and Y van Hoecke in administering the PACC.

Availability of data and material: Dr Gallo had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. She declares that this manuscript is an honest, accurate, and transparent account of the study being reported and that no important aspects of the study have been omitted. All co-authors had full access to the data and can take responsibility for the integrity of the data and the accuracy of the data analysis.

Funding: This study was funded by the Drake Foundation as part of the BRAIN study funded to the London School of Hygiene and Tropical Medicine (EPMSZO61) in collaboration with Queen Mary University of London and the Institute of Occupational Health. The funder had no role in the preparation of the manuscript, which has been written by the co-authors completely independently.

Ethics Approval: The HEADING Study was approved by the London School of Hygiene & Tropical Medicine's Ethical Committee (16282). Written informed consent was obtained from the participants, with further verbal consent to be re-assessed remotely.

Authors' contributions:
Study concept and design: V Gallo, D McElvenny
Analysis and interpretation of data: G Seghezzo, D McElvenny, E Williamson
Drafting of the manuscript: G Seghezzo, Y van Hoecke
Data collection: G Seghezzo, Y van Hoecke, L James, D Davoren
Critical revision of the manuscript for important intellectual content: V Gallo, N Pearce

Conflict of interest statement: We declare that we have no conflict of interests.

Introduction

Dementia is a growing public health challenge, with an estimated 40-50 million people living with this condition globally [1]. Worldwide, the prevalence of dementia more than doubled from 1990 to 2016, mainly due to the ageing population; it is now the 5th leading cause of death globally [1,2]. Dementia onset is usually preceded by Mild Cognitive Impairment (MCI), with population-based studies finding up to 22% of people with MCI developing dementia [1]. Currently, there is an increasing interest in the early diagnosis of dementia, to allow potential screening programs, as well as clinical trials testing disease-modifying drugs early on in the neuropathological process [3]. In this context, assessing patients at very early stages of MCI is important.

The Preclinical Alzheimer Cognitive Composite (PACC) is a composite score which combines tests that assess episodic memory, timed executive function, and global cognition, and it has been shown to be able to detect the first signs of cognitive decline, before clinical signs of MCI manifest [4]. The PACC score is increasingly used in epidemiological studies to assess an association between exposures and early changes in cognitive function [5,6]. The PACC is designed to be administered in person, by a trained research psychologist or nurse. However, in epidemiological studies, in-person assessments are costly, often require extensive travelling, and are difficult in the current pandemic situation. Assessing cognitive function in older adults may be possible via videoconferencing, but there have been calls for further validation studies [7]. Some early studies have shown that remote video assessments are feasible on cognitively normal participants, as well as those with Alzheimer's disease, dementia and Parkinson's disease [8-12]. However, no study has assessed the feasibility of performing videoconference assessments in the participant's home with their own equipment, with the majority of studies assessing feasibility by performing video assessments in clinics [7,8,11-13].

The HEalth and Ageing Data IN the Game of football (HEADING) Study is an ongoing study assessing the relationship between concussions and repetitive sub-concussive head injuries in retired football players, and cognitive function as measured with the PACC score. In March 2020, due to the COVID-19 pandemic and the lockdown imposed by the UK government, the HEADING Study could no longer assess its participants in person, prompting a need to find other modes of assessment. The aim of this study is to assess the feasibility of performing the PACC score via videoconferencing and to compare the validity of the remote PACC score with the in-person PACC score obtained previously, by recalling participants of the HEADING study who had already been assessed for a new remote assessment.

Methods

Source population

Participants in the HEADING Study were selected from the Professional Footballer's Association (PFA) member database, a union for current and former professional football players of the English Premier League. Any male member over the age of 50 with an address in England was sent an invitation in the mail regarding the study, and a request to contact the study team to schedule an appointment. Appointments were held in clinics in London or Manchester, or at the participant's home. The in-person assessment included a lifestyle questionnaire, an exposure assessment questionnaire, and cognitive tests, in addition to some physical measures; the assessment protocol is similar to that of the BRAIN Study [5], apart from the addition of repetitive sub-concussive head injuries to the exposure assessment.

The HEADING Study recruitment was ongoing when, on 23/03/2020, due to the COVID-19 pandemic, a lockdown in the UK was announced and in-person assessments were no longer possible.

All participants who had already completed the in-person assessment for the HEADING Study between July 2019 and March 2020 were contacted by telephone and/or by email, requesting their voluntary participation in an additional remote assessment. Participants were asked if they had the capability to perform video calls, by having access to either a computer, tablet or smartphone with a camera. Step-by-step instructions and over-the-phone support were offered to participants if they were not familiar with downloading or using any videoconferencing software (instructions provided for Skype and Zoom). If the participant agreed and met the requirements of joining a video call, an appointment was scheduled with the video-conferencing software the participant was most familiar with (Zoom®, Skype®, Microsoft Teams®, Facebook Portal® or FaceTime®).

The HEADING Study was approved by the London School of Hygiene & Tropical Medicine's Ethical Committee (16282). Participants were not involved in the design of the study nor of the present sub-study.

In-person assessment

The PACC score used in the HEADING Study is based on that used in the British 1946 Birth Cohort [6,14] and in the BRAIN Study [5], and consists of the following:
- The Mini Mental State Examination (MMSE) total score (0-30 points): used to assess multiple cognitive domains including orientation to time and place, attention and calculation, recall, language, writing, visuospatial function, and executive function.
- The total score of the 12-item Face-Name Associative Memory Test (F-NAME 12A) (0-96 points): used to assess the ability of the participant to recall the names and occupations of a number of people shown in pictures.
- The delayed recall score on the Logical Memory IIa subtest from the Wechsler Memory Scale (0-25 story units): used to assess the ability to freely recall a short story.
- The Digit Symbol Substitution Test (DSST) score from the Wechsler Adult Intelligence Scale Revised (0-93 symbols): used to assess attention and psychomotor speed.

Each of the four component scores was divided by the standard deviation (SD) of that component to form standardised z-scores. The mean of these z-scores was then calculated to form the composite score [4]. A complete PACC score for this study was defined as having the MMSE and at least two other tests completed [5].
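As a minimal sketch of the scoring rule just described (each component divided by its sample SD, then averaged), assuming the four component scores sit in a pandas DataFrame; the column names and values are illustrative, and, since the text above does not say whether a mean is subtracted before scaling, none is subtracted here:

```python
import pandas as pd

def pacc_score(components: pd.DataFrame) -> pd.Series:
    """Divide each component by its sample SD and average the resulting scaled scores."""
    z = components / components.std(ddof=1)  # column-wise scaling by the sample SD
    return z.mean(axis=1)                    # composite = mean of the scaled components

# Hypothetical component scores for three participants (column names are illustrative).
scores = pd.DataFrame({
    "mmse": [29, 27, 30],        # 0-30 points
    "fname12": [70, 55, 82],     # 0-96 points
    "lm_delayed": [14, 10, 18],  # 0-25 story units
    "dsst": [52, 45, 61],        # 0-93 symbols
})
scores["pacc"] = pacc_score(scores[["mmse", "fname12", "lm_delayed", "dsst"]])
```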

Remote Video Assessments

Prior to the assessment, participant packs were posted to the participant's address; these contained all materials necessary for their assessments, namely: (i) cover letter; (ii) blank paper (for MMSE commands involving grabbing the paper with the right hand, folding the paper and placing it on the lap); (iii) MMSE worksheets (draw pentagons and write a sentence); (iv) DSST worksheet; (v) post-assessment interview; (vi) stamped return envelope.

As the DSST was a timed task, the worksheet was enclosed in a sealed envelope within the participant pack, with the following sentence: 'Please do not open until you are told to.' Participants then opened the sealed envelope when instructed to do so by the research assistant during the remote assessment. Additional material available to each of the remote assessors included: (i) timer for the MMSE, F-NAME, Logical Memory test, and DSST; (ii) wristwatch for the MMSE; (iii) stimulus card for the MMSE; (iv) PowerPoint file for the F-NAME; (v) hard copy of the narrative of the Logical Memory test; (vi) hard copy of the worksheet for the DSST; and (vii) pen for scoring the tests and taking notes.

The order of the tests was changed slightly from the order of the in-person assessment to fit with the time restrictions required for the tests, as well as to ensure that the remote assessment was short and did not include too many gaps between tests (Box 1): there was a 20-minute delay between the Immediate Recall Logical Memory Test and the Delayed Recall Logical Memory Test; similarly, there was a 30-minute delay between the Cued Face Name Associative Memory Test and the Delayed Face Name Associative Memory Test. The tests were scheduled to take 60 minutes in total.

In addition, during the remote assessment, the participants were asked to check and report their internet speed, using a website (www.fast.com). At the end of the call they were asked to complete a post-assessment interview to record how they felt about the two assessments compared (in-person and remote). The post-assessment interview comprised three questions; the first two were ratings on a scale of 1-5 assessing how comfortable the participants felt with the in-person and remote assessments (1 being very uncomfortable and 5 being very comfortable). The third question was open-ended, for the participants to give their opinion on the two assessments and whether they believed they were comparable. At the end of the assessment, the participant was asked to place in the stamped return envelope the MMSE writing and drawing sheet, the DSST worksheet and the post-assessment interview, including a unique ID given to them so they could be identified by the researchers, before these were returned.

A database was created in Excel which included the participant ID, the assessor from the in-person assessment and the assessor for the remote assessment (different assessors were used for the in-person and remote assessments), the internet speed of the participant, the video software used, the dates of the in-person and remote assessments, the completed HEADING lifestyle questionnaire from the in-person assessment, the in-person PACC test scores and remote PACC test scores, and the post-assessment interview.
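To make the record structure concrete, a sketch of one way such a tracking table could be laid out in pandas; the column names are assumptions for illustration and do not reproduce the study's actual Excel file:

```python
import pandas as pd

# One row per participant; columns mirror the fields listed above (names are illustrative).
tracking = pd.DataFrame(columns=[
    "participant_id",
    "assessor_inperson", "assessor_remote",   # different assessors for the two modes
    "date_inperson", "date_remote",
    "internet_speed_mbps", "video_software",
    "pacc_inperson", "pacc_remote",           # composite scores for each mode
    "lifestyle_questionnaire_complete",
    "post_assessment_interview",
])
```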

Data Analysis

Descriptive statistics were produced, including means and medians and graphical displays of distributions using histograms and scatterplots. Participants were included in the analysis if they had a complete PACC score for both the remote and in-person assessments. Those who participated in both assessments were compared to those who only performed the in-person assessment using descriptive statistics, such as means and medians, Chi-square and t-tests, and scatterplots. Since the PACC score is based on the standardized test results of a sample, the PACC score for the in-person assessment was calculated twice, first with all the participants who completed only the in-person assessments, then again for the sample who completed both assessments, the second to be used for the difference measure. The correlation between the two PACC scores was estimated, and the difference between the two PACC scores was calculated. A positive difference implies the remote PACC score is higher than the in-person score, and a negative difference represents a higher in-person score. This difference measure was then used in a multiple linear regression to assess the role of variables potentially associated with a difference in PACC scores. The time elapsed between the two tests was modelled as both a categorical and a continuous variable in order to explore a possible effect of time. Continuous variables (age, time and internet speed) were centered on the mean for the regression analysis. To better interpret the results of the regression, a marginal effect plot was explored on the mean PACC difference in the sample by varying the time between assessments. Differences in scores of the individual tests comprising the PACC were also analyzed separately with the same approach. Agreement of the two measures was further assessed with a Bland-Altman plot. All analyses were further run without an identified outlier.
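A minimal sketch of the analysis described in this section, under the assumption of a DataFrame with one row per participant: the PACC difference is defined as remote minus in-person, continuous predictors are mean-centred, and the difference is regressed on the covariates listed above. All variable names and values below are illustrative placeholders, not the study's data or code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data, one row per participant (all values and names are illustrative).
df = pd.DataFrame({
    "pacc_inperson":     [0.1, -0.3, 0.5, 0.0, -0.2, 0.4, 0.2, -0.5],
    "pacc_remote":       [0.2, -0.2, 0.4, 0.1, -0.4, 0.3, 0.1, -0.3],
    "age":               [55, 62, 58, 70, 66, 59, 73, 61],
    "days_between":      [120, 200, 240, 260, 180, 230, 290, 210],
    "internet_speed":    [20, 35, 50, 15, 40, 30, 25, 45],
    "education":         ["GCSE", "above", "GCSE", "above", "GCSE", "above", "GCSE", "above"],
    "assessor_inperson": ["A", "B", "A", "B", "A", "B", "A", "B"],
    "assessor_remote":   ["C", "D", "C", "D", "D", "C", "D", "C"],
})

df["pacc_diff"] = df["pacc_remote"] - df["pacc_inperson"]  # positive = remote score higher
for col in ["age", "days_between", "internet_speed"]:      # centre continuous predictors on the mean
    df[col + "_c"] = df[col] - df[col].mean()

model = smf.ols(
    "pacc_diff ~ age_c + days_between_c + internet_speed_c"
    " + C(education) + C(assessor_inperson) + C(assessor_remote)",
    data=df,
).fit()
print(model.params)
```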

Results

As of March 13th, 2020, 45 participants had been assessed for the HEADING Study. These 45 participants were invited to take part in the remote assessment; 31 (69%) agreed to participate in the remote assessment. At the time of this analysis, there were 30 participants who had completed a virtual PACC assessment and had data available for the in-person assessment (67% of the total). Of the 30 participants in the feasibility study, two participants completed only two of the four tests, with one participant suffering from aphasia and another having recently undergone hand surgery inhibiting their ability to do the DSST and half the MMSE, therefore leaving 28 participants (Figure 1). The median age was 60 years old (IQR 16) (Table 1), with 57% of the participants being educated up to GCSE standard. Only 10% of the participants had ever smoked, and 75% of the participants drank alcohol. A comparison between the original sample and the participants included in the remote assessment is reported in Table 1. Participants who accepted to be re-assessed were on average younger (p=0.03), more educated (p=0.02) and had a higher PACC score (p=0.05).

The shortest time between assessments spanned 103 days, while the longest time between assessments was 293 days (3·4 and 9·6 months, respectively). The median duration in days between the in-person and the subsequent remote assessments was 236·5 days (or 7·9 months) (IQR 62·5). When the time between assessments variable was categorized, 5 (17%) participants had less than 149 days (4·9 months) between assessments, three (10%) between 150-199 days (4·9 – 6·5 months), 12 (40%) ranged between 200-249 days (6·6 – 8·2 months) and ten (13%) over 250 days (8·2 months) between assessments. Most of the remote assessments (80%) were performed using the applications Skype and Zoom.

The PACC scores for the two assessments are plotted in Figure 2. There was a strong positive correlation between the two assessments for the PACC score, with a Pearson correlation coefficient of 0·82 (95% CI 0·66, 0·98). Summary statistics for the PACC scores and the PACC difference are shown in Table 2. A Bland-Altman plot was further used to assess agreement between the two PACC scores (Figure 3). This suggests that the differences between in-person and remote assessments are not detected differentially in those with higher or lower scores.
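A minimal sketch of how a Bland-Altman plot of this kind can be produced, assuming paired arrays of in-person and remote PACC scores; the names and values below are illustrative placeholders, not the study data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Paired composite scores; placeholder values for illustration only.
in_person = np.array([0.12, -0.40, 0.55, -0.10, 0.30, -0.25])
remote = np.array([0.20, -0.35, 0.48, 0.05, 0.25, -0.45])

mean_scores = (in_person + remote) / 2  # x-axis: average of the two measurements
diff = remote - in_person               # y-axis: remote minus in-person
bias = diff.mean()                      # mean difference (systematic bias)
loa = 1.96 * diff.std(ddof=1)           # half-width of the 95% limits of agreement

plt.scatter(mean_scores, diff)
plt.axhline(bias, linestyle="--", label=f"mean difference = {bias:.2f}")
plt.axhline(bias + loa, color="grey", linestyle=":")
plt.axhline(bias - loa, color="grey", linestyle=":")
plt.xlabel("Mean of in-person and remote PACC")
plt.ylabel("Remote minus in-person PACC")
plt.legend()
plt.show()
```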

264

The multiple linear regression included the age of participants, highest educational qualification, internet speed, in-person and remote assessors, and time between assessments, and was based on 27 participants because of one missing value for internet speed (Table 3). The regression produced a constant of -0·17 (95% CI -0·54, 0·19), meaning that, for an 'average' person in the sample, the in-person PACC was expected to be 0·17 points higher than the remote PACC (specifically, a person aged 61 years, with GCSE-level education, assessed by video assessor 1 and in-person assessor 4, with 218 days between assessments and an internet speed of 31 Mbps). The time between assessments, as a continuous variable, was associated with the PACC difference: each additional day between assessments predicted a 0·004-point decrease in the PACC difference (β = -0·004, 95% CI: -0·007, -0·00008). The analysis of marginal effects showed that when the two tests were administered relatively close in time, the mean difference was positive (the subsequent remote test performance was better), but the difference became negative (in-person performance better) with increasing time between the assessments (Figure 4). When the individual tests were analysed, time between assessments was also associated with the Logical Memory test score (data not shown). Re-running the analysis after removing the outlier did not change the results, as shown in Table 3. The responses to the post-assessment questionnaire are displayed in Supplemental Table 1.
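A minimal sketch of how such a model could be specified is given below, assuming the PACC difference is defined as the remote score minus the in-person score (as implied by the marginal-effects description) and that continuous covariates are centred so that the intercept corresponds to the 'average' participant described above. Python with pandas and statsmodels is assumed for illustration; the data frame, variable names and values are hypothetical and are not the study's data or code.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis dataset: one row per participant,
    # pacc_diff = remote PACC minus in-person PACC (assumed direction).
    df = pd.DataFrame({
        "pacc_diff": [0.3, -0.1, 0.2, -0.4, 0.1, -0.2, 0.0, -0.3, 0.4, -0.5],
        "age": [55, 62, 58, 70, 61, 66, 59, 64, 52, 68],
        "education": ["GCSE", "Degree", "GCSE", "GCSE", "Degree",
                      "GCSE", "Degree", "GCSE", "Degree", "GCSE"],
        "internet_mbps": [35, 20, 50, 28, 31, 40, 25, 33, 45, 22],
        "assessor_remote": ["1", "1", "2", "1", "2", "1", "2", "1", "2", "1"],
        "assessor_in_person": ["4", "3", "3", "4", "4", "4", "3", "4", "3", "4"],
        "days_between": [120, 250, 200, 280, 218, 160, 230, 270, 140, 290],
    })

    # Centre continuous covariates so the intercept corresponds to an
    # 'average' participant (age 61, 218 days between assessments, 31 Mbps).
    for col, centre in [("age", 61), ("days_between", 218), ("internet_mbps", 31)]:
        df[col + "_c"] = df[col] - centre

    model = smf.ols(
        "pacc_diff ~ age_c + C(education) + internet_mbps_c"
        " + C(assessor_remote) + C(assessor_in_person) + days_between_c",
        data=df,
    ).fit()
    print(model.summary())

    # Predicted PACC difference across the observed range of time between
    # assessments, holding the other covariates at their reference or
    # centred values -- the analogue of the marginal-effects plot (Figure 4).
    grid = pd.DataFrame({
        "age_c": 0,
        "internet_mbps_c": 0,
        "education": "GCSE",
        "assessor_remote": "1",
        "assessor_in_person": "4",
        "days_between_c": pd.Series(range(100, 301, 25)) - 218,
    })
    print(grid.assign(predicted_diff=model.predict(grid)))

With the difference defined this way, a negative coefficient on the centred days-between-assessments term reproduces the pattern described above: a positive predicted difference (remote better) for short intervals that turns negative (in-person better) as the interval grows.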


Overall, the participants reacted well to the remote assessment: 22 (78%) responded in the post-assessment interview that they felt extremely comfortable performing the remote assessment, and three participants rated the remote assessment worse than the in-person assessment (Supplemental Table 1). Participants mentioned that, although the remote assessment involved more problems (interrupted internet signal, confusion in setting up), the testing process felt equal to the one already performed in person.

Discussion


This study provides evidence of the feasibility of administering the tests comprising the PACC score via videoconferencing: administering the tests after sending participants some material by post in advance was shown to be feasible. The differences between the in-person and the remotely administered PACC were overall very small. There were no systematic differences between the two PACC scores, arguing against a potential bias introduced by the remote assessment: those who performed worse showed similar in-person/remote differences compared with those who performed best.


The only variable which predicted the difference in PACC scores was the time between assessments, and its association with the PACC difference was very small. The marginal effect plot showed a tendency towards better performance at the remote assessment, on average, when the time between the two assessments was shorter, but this association reversed as the time between assessments increased. This could be interpreted as a potential learning effect - given that all the participants had already undergone the face-to-face assessment prior to the remote assessment - which wears off over time [15,16]. A learning effect involves increases in repeated test scores due to factors such as memory for specific test items, learned strategies for problem solving, or general experience and comfort with testing. This learning effect was observed even though the average time between tests in this study was 7·2 months, which is longer than the two to three months other studies have used when considering learning effects [7,10], suggesting that the learning effect in substantially cognitively intact people could last longer than previously recognized. The difference between the two measures could also be interpreted as participants performing better in the more comfortable setting of their own home [17].


That the learning effect wears off over time is predictable; less easy to interpret is the tendency toward a reverse effect (remote assessment worse than in-person). One possible interpretation is that this reflects the detection of very subtle cognitive decline over time among the participants. The BRAIN study has shown that PACC scores decrease with age among retired rugby players (manuscript under review) [5]. Given the long test-retest interval between the two assessments, this is a possibility. The PACC was originally established to measure cognitive decline over time, being administered every 6 months over the course of 36 months. However, an average of seven months between the assessments may not be long enough to detect a change in cognitive function: Donohue et al. found that the earliest detectable change in PACC scores was at 12 months [4]. Finally, it is not possible to rule out completely that having different assessors for the in-person and remote assessments might have affected the scoring, although this was adjusted for in the analysis.


Importantly, the effect of time on the scoring difference is not relevant for HEADING and the many other epidemiological studies which administer the PACC either solely in person or solely remotely. The high correlation between the two sets of tests, and the absence of a clear bias disproportionally affecting people who performed less well, suggest that this is a valid method for epidemiological studies in populations with high computer literacy and relatively intact cognition.


Participants accepted the remote cognitive testing well, finding it comfortable to be assessed remotely; nonetheless, all of them had already undergone the in-person assessment, so they knew what to expect. It remains to be explored whether participants assessed only remotely would also rate the assessment as comfortable.


Limitations


The response rate to reassessment was not ideal (65%), introducing a potential for selection bias, as those who did not participate differed from those who did. For example, the sample who agreed to participate reported being more adept at using technology and having videoconferencing devices and an internet connection available to them. Of those who declined to participate, two mentioned that they either did not have a device for videoconferencing or had no internet connection. Studies have found that participants who are less computer literate have increased computer anxiety, which could affect scores on computerized tests [9, 11]. Our results are therefore not immediately generalizable to a less technologically confident population, in which the same correlation may not be found. Likewise, participants who did not agree to participate had a longer interval of time between assessments, with a median of 266 days (q25, q75: 206, 268). It is unlikely, although not impossible, that cognitive status changed between the in-person assessment and the retest. The median in-person PACC was lower in those who did not agree to participate in the remote assessment than in those who did (p=0.05). This would potentially bias the results towards a more cognitively able population.


Contextualization of results


The present results are in line with previous studies comparing face-to-face and virtual assessments [7-13, 18-20]. Telemedicine is a growing field that is becoming increasingly relevant, particularly with regard to assessing cognitive function, as it can reach more participants and reduce the burden of lengthy travelling, cutting time and cost and making participants feel more comfortable in their own home [7,11,19,20]. Moreover, telemedicine can be used to reduce in-person contact in line with recent government guidelines, increasing the possibility of social distancing as well as the safety of reaching at-risk participants during these times. In this study, videoconferencing was chosen over telephone assessment because the PACC assessment requires seeing the participant performing tasks, as well as sharing the screen for the Face-Name Association Task. Moreover, videoconferencing can be seen as more insightful than telephone interviews because it captures non-verbal cues, such as facial expressions and attentiveness, that cannot be observed by telephone [7].


Conversely, compared with face-to-face assessments, remote assessments have some disadvantages, such as loss of attention due to the participant's surroundings, as well as the potential for participants to write down answers or look at a calendar. For instance, in this study one participant was distracted during the video call by their surroundings at home, while another participant received a phone call during the digit symbol substitution test. The analysis accounted for internet speed to adjust for potential connection problems that could have interfered with the assessment. Furthermore, the analysis took into account the software used for the videoconferencing; however, it did not take into account the device used, as the visual cues, particularly with the FNAME, could be altered on a smartphone compared with a computer, because the stimuli would be smaller. Stillerova et al. assessed remote testing with different software and different devices and found no difference among the modes used [9]. The marking of the overall scores can also be adjusted for video assessment, as done by Timpano et al., who lowered the cut-off for the virtual MMSE to account for poor internet speed and other factors that could influence the assessment [8]. Other potential problems, such as writing down questions and changing answers, were addressed by ensuring that participants showed their responses for the MMSE writing and drawing tasks and for the digit symbol substitution test. Beyond the disadvantages that may arise from remote assessments, this study was also limited by a small sample size, which introduces variability in the results, reduces statistical power and limits the ability to interpret the results clearly. This reduction in power is also reflected in the wide confidence interval around the correlation coefficient, preventing the results from being reliably generalized to a broader population. Non-linear trends in the effect of time could likewise not be explored given the small sample size. A sensitivity analysis removing the outlier did not affect the results.
