Victor Staartjes

Affiliated to Research
Visiting address: Eugeniavägen 6, Elite Hotel Carolina Tower, plan 4, ME Neurokirurg, 17164 Stockholm
Postal address: K8 Klinisk neurovetenskap, K8 Neuro Edström/Elmi Terander, 171 77 Stockholm

Research

    • Machine Intelligence incl. Neuroimaging
    • Neurosurgery (Skull Base, Spine)
    • Epidemiology and Biostatistics
    • Clinical Study Design

Grants

  • Swiss National Science Foundation
    1 February 2024 - 31 January 2027
Background and rationale: Successful and safe neurosurgery depends strictly on surgical orientation. The surgeon must know exactly where he or she is at any given moment, and in which direction to move during the following surgical steps. Appropriate surgical orientation improves surgical efficacy and, most importantly, reduces surgical complications by decreasing unnecessary manipulation of otherwise healthy neural structures.

Surgical orientation implies identification of the anatomical structures within the surgeon's visual field, as well as anticipation of the position of critical anatomical structures yet to be encountered in the next steps. This is especially true in neurosurgery, and in particular in intracranial tumor surgery, where normal neuroanatomy is often heavily distorted and displaced, and even the mere differentiation of healthy brain tissue from tumor tissue becomes a challenge. During surgery in or near critical "eloquent" areas of the brain, care must be taken to protect healthy neural tissue, along with the cerebral vasculature, while still achieving the surgical goal.

Several methods have been developed over the years to improve surgical orientation. Identification can be enhanced by methods exploiting physical characteristics other than light reflection, such as intraoperative ultrasound [1] or fluorescence-based methods [2]. Each of these methods requires, however, that the surgeon learn to interpret a new imaging modality, as well as additional expensive infrastructure. Moreover, these methods at times deliver only marginally increased information, because of their user-dependence (ultrasound) and issues with sensitivity and specificity (fluorescence).
Neuronavigation, i.e. orientation within the three-dimensional operating room by means of cameras and fiducial arrays overlaid on preoperative neuroimaging, has also been applied to enable identification of visible structures and to allow more accurate assessment of structures not yet visible in the current field of view. However, neuronavigation relies on preoperative imaging only, meaning it cannot account for unexpected events occurring intraoperatively. In some sense, this is like a self-driving car that navigates based only on a map of the Swiss or Korean road network, without any sensors or cameras to identify dangers such as pedestrians or other vehicles. Reality is always more complex than its model, and this applies to neurosurgical operative anatomy as well.

Any system aimed at significantly improving surgical orientation should therefore deliver reliable additional information in real time during the operation, based mainly on intraoperative measurements and using preoperative measurements as a guide, and should do so in a user-friendly manner.

Machine vision, and machine learning techniques in general, have already been well integrated into several domains that require navigation, including self-driving technology, where machine vision is employed for automated recognition of the environment in real time. The same principles can be applied in microsurgery for automated identification of visible anatomical structures and anticipation of anatomical structures not yet within the current surgical field of view. The goal of this Korean-Swiss collaboration is to jumpstart a collaborative project to develop an innovative intraoperative tool for real-time anatomical guidance based on streamed endoscopic or microscopic images.
As outlined below, this project is an excellent opportunity for international collaboration: expertise in the different but equally crucial subprojects is optimally provided at the USZ, UZH, and ETH as well as at Yonsei University, Ilsan Hospital, and Eulji Medical Center, all leaders in digital health, which enables the project in the first place and gives it the best chance of successful completion. We propose to combine unique datasets, world-class anatomical and surgical expertise, and machine vision expertise to develop state-of-the-art neurosurgical orientation software aimed at reducing complications by providing real-time identification, localization, and, more generally, understanding of microsurgical anatomy during spinal and cranial neurosurgery.

Overall objective: To improve orientation during intracranial and spinal neurosurgery, and thus patient safety.

Specific aim: To develop machine vision-based software compatible with readily available intraoperative visualization techniques (operating microscope, endoscope) and capable of providing real-time anatomical orientation to the neurosurgeon during surgery.

Methods: Videos of surgical procedures extracted from multiple databases of over 10,000 videos (together possibly constituting the largest high-quality collection of neurosurgical videos worldwide) will be reviewed by neuroanatomical experts, and anatomical structures will be labeled. Using the resulting unique dataset of labeled videos, state-of-the-art deep learning-based computer vision algorithms will be developed to recognize and label, as well as predict, critical anatomical structures live during surgery.
Particular focus will be placed on common neurosurgical operative approaches: the pterional, retrosigmoid, and transsphenoidal approaches, as well as interlaminar and transforaminal spinal endoscopic approaches.

Expected results: Within the given time frame of 36 months, we expect to develop surgical orientation software (AENEAS) that recognizes and locates structures of interest visible in the scene and provides directions to those not yet within the current field of view. We will then perform extensive validation of the software with new surgical data that was not included in the training data.

Impact: By significantly improving anatomical orientation during neurosurgery, the proposed project could contribute to a significant decrease in surgical complications, and thus a corresponding improvement in patient safety, while still allowing for safe resection of intracranial tumors, clipping of aneurysms, and removal of herniated discs, among other examples. Moreover, by clarifying the steps of complex surgical procedures, the technique may also shorten operations, leading to fewer anesthesia-related complications, improved hospital logistics, and lower surgical costs. The software would also help in surgical training. After thorough prospective clinical validation and integration into existing surgical microscopy and endoscopy suites, AENEAS could supplement or even partially replace other currently adopted tools for surgical orientation (neuronavigation, ultrasound, intraoperative magnetic resonance imaging, intraoperative angiography, et cetera), which would in turn reduce the costs of these expensive but crucial surgical procedures. Lastly, the field of application of AENEAS could be extended to other surgical domains, significantly enlarging the potential audience and market.
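The per-frame recognition workflow described above (streamed surgical video in, labeled structures and their locations out) can be sketched as a simple real-time inference loop. The snippet below is a minimal illustration, not the AENEAS implementation: `segment_frame` is a trivial intensity-threshold stand-in for a trained deep learning segmentation network, and the class names are hypothetical placeholders.

```python
import numpy as np

# Hypothetical label set for illustration; the actual AENEAS classes
# would cover the anatomical structures labeled by the expert reviewers.
CLASSES = ["background", "vessel", "tumor"]

def segment_frame(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a trained segmentation network.

    A real system would run a deep CNN on the RGB frame; this placeholder
    simply thresholds mean pixel intensity into the hypothetical classes.
    """
    gray = frame.mean(axis=2)
    labels = np.zeros(frame.shape[:2], dtype=np.int64)
    labels[gray > 85] = 1    # "vessel"
    labels[gray > 170] = 2   # "tumor"
    return labels

def summarize(labels: np.ndarray) -> dict:
    """Report which structures are visible and their pixel centroids,
    the kind of per-frame output an orientation overlay would render."""
    out = {}
    for idx, name in enumerate(CLASSES[1:], start=1):
        ys, xs = np.nonzero(labels == idx)
        if ys.size:
            out[name] = (float(ys.mean()), float(xs.mean()))
    return out

def run_stream(frames):
    """Process a stream of frames, yielding one summary per frame."""
    for frame in frames:
        yield summarize(segment_frame(frame))

# Tiny synthetic "stream": one dark frame, one uniformly bright frame.
dark = np.zeros((4, 4, 3), dtype=np.uint8)
bright = np.full((4, 4, 3), 200, dtype=np.uint8)
results = list(run_stream([dark, bright]))
print(results[0])  # {} -> nothing detected in the dark frame
print(results[1])  # {'tumor': (1.5, 1.5)} -> whole frame above threshold
```

In a deployed system the loop would consume frames from the microscope or endoscope feed, and the summary step would drive an on-screen overlay rather than printed output.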
