Letter | Volume 16, Issue 1, P79–81, January 2023

Transcranial magnetic stimulation (TMS) localization by co-registration of facial point clouds

Open Access | Published: January 19, 2023 | DOI: https://doi.org/10.1016/j.brs.2023.01.837
      Dear Editor,
The effectiveness of Transcranial Magnetic Stimulation (TMS) depends crucially on accurate localization to ensure that the correct brain region is treated. The development of TMS navigation systems represented a major advance over the use of caps to position the TMS coil subjectively on the scalp. In particular, after computer-based co-registration of anatomical landmarks pinpointed on the patient's head with corresponding landmarks pinpointed on a segmentation produced from a 3D Magnetic Resonance Imaging (MRI) scan, the distance and angle between the stimulation coil and the relevant brain region in the MRI scan can be precisely controlled [1]. However, the procedure is manually intensive, requiring the use of a so-called indicator to mark the landmarks on the patient's head together with manual marking of the scalp on the segmented MR image. Furthermore, the patient has to wear a headband to assist in the former of these processes. We have developed a fully automatic procedure based on facial point cloud registration that removes all manual steps from the localization procedure and does not require the patient to wear an inconvenient and potentially cumbersome headband.

1. TMS localization method based on co-registration of facial point clouds recorded by a cloud camera and 3D Magnetic Resonance Imaging (MRI)

The constituent images for the new localization procedure are a point cloud image of the patient's face recorded by a time-sharing point cloud camera (Humanplus Intelligent Robotics Co., Ltd, Beijing, China) and a 3D T1-weighted anatomical image encompassing the patient's head and brain acquired on a Magnetic Resonance Imaging (MRI) system (United Imaging Healthcare Co., Ltd, Shanghai, China). The resolution of the point cloud image is 1280 × 800; the MR image, acquired with TR 6.9 ms, TE 2.9 ms and flip angle 8°, has a Field of View (FOV) of 256 mm × 256 mm × 220 mm and 1 mm isotropic resolution. A U-net was used to obtain a segmentation of the cerebral cortex in the 3D MR image [2], which the physician uses to specify the target area for the TMS treatment.
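As a hedged illustration of this segmentation step, the sketch below shows how a trained network might be applied to the T1 volume. The letter does not report an implementation, so the PyTorch/nibabel stack, the TorchScript model file unet_cortex.pt, and the intensity normalization are all assumptions for illustration.

    import nibabel as nib
    import numpy as np
    import torch

    # Load the 3D T1-weighted volume (1 mm isotropic, FOV 256 x 256 x 220 mm).
    t1 = nib.load("t1_head.nii.gz")
    vol = t1.get_fdata().astype(np.float32)

    # Simple intensity normalization before inference (assumed preprocessing).
    vol = (vol - vol.mean()) / (vol.std() + 1e-6)

    # Hypothetical trained 3D U-net exported with TorchScript.
    model = torch.jit.load("unet_cortex.pt").eval()

    with torch.no_grad():
        logits = model(torch.from_numpy(vol)[None, None])  # (1, 1, D, H, W)
        cortex_mask = (torch.sigmoid(logits)[0, 0] > 0.5).numpy()

    # The physician picks the TMS target area inside this cortical mask.
    nib.save(nib.Nifti1Image(cortex_mask.astype(np.uint8), t1.affine),
             "cortex_mask.nii.gz")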
The new method is illustrated in Fig. 1. First, the face region of the patient's head is extracted from the depth map corresponding to the first time frame of the camera point cloud recording, and a Multi-Task Cascaded Convolutional Network (MTCNN) algorithm is used to extract five salient features, namely the lateral corners of the left and right eyes, the tip of the nose, and the left and right corners of the mouth [3]. The 3D coordinates of the pixels corresponding to these features are used to generate an Oriented Bounding Box (OBB) [4], and the enclosed points are referred to as the source facial point cloud. Next, the classical Marching Cubes surface rendering algorithm [5], as available in the Visualization Toolkit (VTK) 8.2.0, is used to obtain a corresponding 3D reconstruction of the patient's face from the 3D MR image, referred to as the target point cloud, and the same MTCNN-based procedure is used to extract five feature points from it. An initial, relatively coarse co-registration of the source and target facial point clouds is obtained by an affine transform computed from the five pairs of feature points (see Fig. 1(a)). The Iterative Closest Point (ICP) registration algorithm [6], which is fast to perform, then provides a second, finer co-registration (see Fig. 1(b)). The OBB is updated with information obtained from the co-registration. The second camera frame is first multiplied by the rotation matrix obtained from the first-frame registration; the updated OBB is then used to filter the second-frame point cloud before ICP registration with the target point cloud. The point cloud co-registration error is defined as the average distance between all corresponding points in the source and target point clouds. If the error is greater than 2 mm, the step is repeated; otherwise the registration is accepted. Subsequent frames are all processed in the same way. See the supplementary item for a flow chart of the method, and the code sketch below for an illustration of the registration steps.
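To make the pipeline concrete, the following sketch reproduces the coarse-then-fine registration logic using Open3D and scikit-image. These libraries, the use of a similarity (rather than full affine) transform for the five-landmark step, and all names and thresholds other than the 2 mm tolerance are our assumptions for illustration, not the authors' implementation.

    import numpy as np
    import open3d as o3d
    from skimage import measure

    def face_surface_from_mri(head_mask, spacing=(1.0, 1.0, 1.0)):
        # Target point cloud: face surface reconstructed from the binary MR
        # head mask (the letter uses the VTK 8.2.0 Marching Cubes filter;
        # the scikit-image routine used here is an equivalent stand-in).
        verts, _, _, _ = measure.marching_cubes(head_mask.astype(np.float32),
                                                level=0.5, spacing=spacing)
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(verts)
        return pcd

    def register_frame(source, target, src_landmarks, tgt_landmarks):
        # Coarse step: transform estimated from the five MTCNN landmark
        # pairs (with_scaling=True gives a similarity transform, a
        # simplification of the affine transform described in the text).
        est = o3d.pipelines.registration.TransformationEstimationPointToPoint(
            with_scaling=True)
        src_lm = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_landmarks))
        tgt_lm = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_landmarks))
        corres = o3d.utility.Vector2iVector(np.column_stack([np.arange(5)] * 2))
        T_coarse = est.compute_transformation(src_lm, tgt_lm, corres)

        # Fine step: point-to-point ICP seeded with the coarse transform.
        icp = o3d.pipelines.registration.registration_icp(
            source, target, max_correspondence_distance=10.0, init=T_coarse,
            estimation_method=
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        aligned = o3d.geometry.PointCloud(source)  # copy, then transform
        aligned.transform(icp.transformation)

        # Error: mean distance from aligned source points to their nearest
        # target points; the caller repeats the step if it exceeds 2 mm.
        err = np.mean(np.asarray(aligned.compute_point_cloud_distance(target)))
        return icp.transformation, err

In the per-frame loop, the previous frame's transformation would be applied to the newly captured camera cloud and the updated OBB used to crop it before calling register_frame again, mirroring the procedure described above.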
Fig. 1. Illustration of the new facial point cloud co-registration method. The red points (8426 points) in panels (a) and (b) refer to the source point cloud extracted from the cloud camera image, and the green points (34,135 points) refer to the target point cloud extracted from the 3D MR image. The relationship between the two sets of point clouds after the initial coarse five-point affine co-registration is shown in (a). The relationship between the two sets of point clouds after the subsequent fine ICP co-registration is shown in (b). The result of applying the transformation produced by the co-registration in (b) to the face image recorded by the cloud camera and the head segmentation produced from the 3D MR image is shown in (c). A schematic diagram of the relationship between the TMS coil and the brain target region, shown in yellow, is presented in (d). The red conical region corresponds to the local distribution of the magnetic field produced by the TMS coil. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Precise information regarding the location and orientation of the TMS coil is provided by a so-called positioning reflection ball attached to the coil, which is also captured by the time-sharing point cloud camera. By multiplying the real-space coordinates of the TMS coil by the rotation matrices obtained in the registration process described above, the position of the TMS coil relative to the target in the 3D T1 structural image is determined, mediated by the facial point cloud recorded by the camera.
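This coil mapping amounts to applying the chain of transforms from the facial co-registration to the coil marker coordinates. A minimal sketch using 4 × 4 homogeneous coordinates follows; the convention and the function name are our assumptions:

    import numpy as np

    def coil_pose_in_mri(coil_pts_cam, T_cam_to_mri):
        # Map reflection-ball marker points from camera space to MRI space
        # using the homogeneous transform from the facial co-registration.
        # coil_pts_cam: (N, 3) marker positions; T_cam_to_mri: (4, 4) matrix.
        homo = np.hstack([coil_pts_cam, np.ones((len(coil_pts_cam), 1))])
        return (homo @ T_cam_to_mri.T)[:, :3]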

2. Agreement between localization obtained by using the navigation system and the new co-registration method

This investigation, which was performed on human subjects, was approved by the Biomedical Ethics Review Committee of West China Hospital of Sichuan University, China, and fully informed consent was obtained from all 25 participants. Reference data were obtained by using an NDI navigation device (NetBrain Neuronavigation 9000) [7], which required the subject to wear a headband and involved significant manual intervention, and were compared with corresponding values obtained by the new facial point cloud co-registration method. The distance between the coordinates measured by the two approaches, which represents the positioning difference between the two methods, was 4.7 ± 2.4 mm.
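The reported agreement figure is the per-measurement Euclidean distance between the paired target coordinates, summarized as mean ± standard deviation. A sketch with placeholder arrays (real values would come from the two systems):

    import numpy as np

    rng = np.random.default_rng(0)
    # Placeholder paired (N, 3) target coordinates; in practice these come
    # from the navigation device and the point cloud method, respectively.
    p_nav = rng.normal(scale=50.0, size=(25, 3))
    p_cloud = p_nav + rng.normal(scale=2.0, size=(25, 3))

    d = np.linalg.norm(p_nav - p_cloud, axis=1)  # per-measurement distance (mm)
    print(f"positioning difference: {d.mean():.1f} +/- {d.std():.1f} mm")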

3. Point cloud co-registration performance

On an Intel i7-9700K CPU, the coarse five-point co-registration took 15 ms with an error of 1.7 ± 0.5 mm, and the subsequent fine ICP co-registration took 35 ± 13 ms with an error of 1.2 ± 0.2 mm. Subsequent inter-frame registration required about 30 ms per frame, with an error of less than 1 mm.

      Funding

      This work was supported by the National Natural Science Foundation of China (Grant No. 82027808), Sichuan Science and Technology Program (2022YFS0069) and the Science Specialty Program of Sichuan University (Grant No. 2020SCUNL210).

      Declaration of competing interest

      The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

      Appendix A. Supplementary data

      The following is the Supplementary data to this article.

      References

1. Caulfield K.A., Fleischmann H.H., Cox C.E., Wolf J.P., George M.S., McTeague L.M. Neuronavigation maximizes accuracy and precision in TMS positioning: evidence from 11,230 distance, angle, and electric field modeling measurements. Brain Stimul. 2022;15:1192-1205. https://doi.org/10.1016/j.brs.2022.08.013
2. Ronneberger O., Fischer P., Brox T. U-net: convolutional networks for biomedical image segmentation. In: Navab N., Hornegger J., Wells W.M., Frangi A.F., eds. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Cham: Springer International Publishing; 2015:234-241.
3. Zhang K., Zhang Z., Li Z., Qiao Y. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process Lett. 2016;23:1499-1503. https://doi.org/10.1109/lsp.2016.2603342
4. O'Rourke J. Finding minimal enclosing boxes. Int J Comput Inf Sci. 1985;14:183-199. https://doi.org/10.1007/BF00991005
5. Lorensen W.E., Cline H.E. Marching cubes: a high resolution 3D surface construction algorithm. In: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '87). 1987. https://doi.org/10.1145/37401.37422
6. Besl P.J., McKay N.D. A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell. 1992;14:239-256.
7. Zhou J., Fogarty A., Pfeifer K., Seliger J., Fisher R.S. EEG evoked potentials to repetitive transcranial magnetic stimulation in normal volunteers: inhibitory TMS EEG evoked potentials. Sensors. 2022;22(5). https://doi.org/10.3390/s22051762