2021-22

Fourteen research projects were selected for the academic year 2021/22 in the "training through research" program dedicated to second-year Master students.

Automating OCT scanning for colorectal applications

In recent years, there has been progress in the diagnosis and staging of tumors, as well as in the planning of subsequent treatments. In situ biopsies can be performed using Optical Coherence Tomography (OCT) imaging. By using a rotational OCT probe inside a catheter, it is possible to acquire volumetric images. This can be applied to the detection of colon polyps in the digestive tract if the catheter is placed into the channel of a flexible endoscope.

Colon cancer is one of the most common cancer types worldwide, and it is crucial to develop minimally invasive techniques that allow its early diagnosis, in order to improve patient prognosis. Compared with standard colonoscopy, OCT would allow real-time diagnosis, providing not only superficial but also structural information about the tissues.

To obtain reliable information, the suspicious region must be scanned systematically. This is a difficult task in the colon, since this organ does not have a constant diameter and its tissues are highly deformable. Moreover, manual scanning is difficult and requires considerable expertise.

The objective of this research project is to find a way of combining the 7 DOF of a robotic system (a telemanipulated robot for gastroenterology together with a steerable OCT catheter) to ensure full coverage of a colon patch during scanning. To this end, the robotic system, the tissue surface and virtual OCT images are simulated in order to plan and compare possible scanning methods. Furthermore, OCT images can provide information on the distance and position of the catheter with respect to the tissue, which can be used to correct the trajectory during scanning. The final goal is to develop a strategy for automatic scanning of the colon.
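
As a toy illustration of the planning problem, the Python sketch below (with purely illustrative names and parameters, not the project's simulator) quantifies how much of a flat tissue patch a given probe trajectory would cover:

```python
# Minimal coverage check: discretize the patch into a grid and mark the
# cells that fall inside the probe footprint at each scan position.
import numpy as np

def coverage_ratio(patch_size, cell_mm, trajectory, footprint_mm):
    """Fraction of the patch covered by a sequence of probe positions.

    patch_size   -- (width_mm, height_mm) of the colon patch
    trajectory   -- iterable of (x_mm, y_mm) probe positions
    footprint_mm -- radius of the OCT beam footprint on the tissue
    """
    nx = int(patch_size[0] / cell_mm)
    ny = int(patch_size[1] / cell_mm)
    covered = np.zeros((ny, nx), dtype=bool)
    xs = (np.arange(nx) + 0.5) * cell_mm
    ys = (np.arange(ny) + 0.5) * cell_mm
    gx, gy = np.meshgrid(xs, ys)
    for px, py in trajectory:
        covered |= (gx - px) ** 2 + (gy - py) ** 2 <= footprint_mm ** 2
    return covered.mean()

# Example: raster scan of a 20 x 20 mm patch with a 1 mm footprint.
raster = [(x, y) for y in np.arange(0, 20, 2.0) for x in np.arange(0, 20, 0.5)]
print(f"coverage: {coverage_ratio((20, 20), 0.25, raster, 1.0):.2%}")
```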

  • Abstract by HealthTech Master student Tania Olmo Fajardo
  • Master project supervised by F. Nageotte, B. Rosa & M. Gora

Biomechanical modeling and simulation of the human balance system

The inner ear is a very complex structure: it hosts the vestibular system, a fundamental component that regulates balance and maintains gaze. The vestibular system detects motion thanks to the otolith organs, which sense linear accelerations, and three semicircular canals (SCC), which act as sensors of angular acceleration of the head, using deflections of the cupular diaphragm to generate neural information. Many studies have analyzed this system using the Finite Element Method (FEM). Obtaining an accurate geometry of the system has become easier thanks to advanced medical imaging techniques, but the challenge remains to find the correct mechanical parameters of the different structures of the system, namely the cupula, the crista, the membranes, the endolymph, and the perilymph. To the best of our knowledge, most of these works did not consider the perilymph, the fluid that lies inside the bony labyrinth and surrounds the membranes, in their models. We believe that this fluid may have an important effect on the behavior of the vestibular system, and we therefore aim to develop a complete FE model of the vestibular system to better understand it.

The work in the lab started with a model of the lateral canal and then continued with a geometrically accurate model of the human membranous labyrinth. This model will be the basis for developing the complete model and tuning the parameters and mechanical properties of the different components of the system, which will help analyze the system both under normal conditions and in case of dysfunction.

  • Abstract by HealthTech Master student Abdulmassih Saba
  • Master project supervised by D. Baumgartner & A. Charpiot

Biopsy simulation of musculoskeletal tumors

A biopsy is a widely performed procedure for cancer diagnosis. Percutaneous biopsies are the least invasive for the patient, but they are not easy to perform: even under imaging guidance, no direct vision of the tumor is available, making needle placement and manipulation difficult. Training methods are being developed, particularly real-time simulations allowing user interaction through a haptic device, as studied in the context of this internship. In this work, simulations using the Finite Element Method are performed with SOFA. Beam elements are used to model needles, while tissues are modeled with linear elastic models coupled with a co-rotational method for more accurate computation of tissue deformations. Needle-tissue interactions are modeled using a constraint-based approach and Lagrange multipliers. The aim of this project is to improve the haptic feedback of the simulation by integrating the forces resulting from the discrete interaction model along the shaft of the needle. To do so, the intersection of elements must be computed in order to distribute the forces along the intersected elements between the needle and the tissues.

A comparative study of needle models was first performed between SOFA and a simulation software without real-time constraints. This study provided similar results for needle bending under loads, with a maximum difference of 2 mm in the displacement of the needle tip, for a 12 cm needle under a load of 24 g at its tip. A SOFA plugin related to the insertion of needles into soft tissues was also improved in order to handle different cases, such as crossing a thin tissue layer or inserting the needle into a volume. Finally, another SOFA plugin related to contact detection was modified so that elements computed from intersections can be handled in the same way as base elements from the mesh. Future work consists of validating the needle model against an analytical beam-theory study and implementing the algorithm that computes the intersections between elements. The new model for biopsy simulation, especially the haptic rendering, will then have to be validated.
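
The following minimal Python sketch illustrates the force-distribution idea mentioned above, using linear beam shape functions on a simplified 1D parameterization of the needle; all names are illustrative simplifications, not the actual SOFA implementation:

```python
# A contact force acting at an arbitrary point along the needle shaft is
# split between the two nodes of the beam element containing that point.
import numpy as np

def distribute_force(node_abscissae, s, force):
    """Spread `force` (3-vector) applied at curvilinear abscissa `s`
    onto the nodal force vector of a beam discretization."""
    nodal_forces = np.zeros((len(node_abscissae), 3))
    # Find the element [s_i, s_{i+1}] containing the application point.
    i = np.searchsorted(node_abscissae, s) - 1
    i = int(np.clip(i, 0, len(node_abscissae) - 2))
    s0, s1 = node_abscissae[i], node_abscissae[i + 1]
    t = (s - s0) / (s1 - s0)                            # local coord in [0, 1]
    nodal_forces[i] += (1.0 - t) * np.asarray(force)    # shape function N0
    nodal_forces[i + 1] += t * np.asarray(force)        # shape function N1
    return nodal_forces

# 12 cm needle discretized into 6 beam elements; 0.1 N lateral force at 5 cm.
nodes = np.linspace(0.0, 0.12, 7)
f = distribute_force(nodes, 0.05, [0.0, 0.1, 0.0])
```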

  • Abstract by HealthTech Master student Claire Martin
  • Master project supervised by H. Courtecuisse, D. Baumgartner, N. Bahlouli & M. Ehlinger

Wrist Fracture Surgical Video Analysis for AI-Based Training

Currently, the gold standard for learning surgical techniques is to be coached in the operating room by an expert surgeon. Nevertheless, it has been shown that modeling laparoscopic cholecystectomy, laparoscopic sleeve gastrectomy and colorectal surgical procedures with artificial intelligence-based tools can provide intra-operative assistance and optimise surgical care in these procedures. However, no such technology has been dedicated to the minimally invasive plate osteosynthesis procedure for distal radius fractures. The objective of this project is therefore to develop an artificial intelligence system focused on this surgery.

For this purpose, an annotated dataset of videos of the procedure must be created. This dataset can then be used to train and test an algorithm whose objective will be to automatically recognize the surgical phases in videos. The annotation of the videos collected in the operating room is an opportunity to carry out several studies: one studying the inter-individual and intra-individual variability of the annotation tasks, and another evaluating how the perception of the surgical phases of the studied procedure differs between surgeons.
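
As a hedged illustration of how the inter-annotator variability could be quantified (the actual study protocol may differ), Cohen's kappa between two annotators' frame-wise phase labels can be computed as follows:

```python
# Agreement between two annotators who labeled the same video frame by
# frame with phase indices; labels here are illustrative dummy data.
from sklearn.metrics import cohen_kappa_score

annotator_a = [0, 0, 1, 1, 1, 2, 2, 3, 3, 3]   # phase index per frame
annotator_b = [0, 1, 1, 1, 2, 2, 2, 3, 3, 3]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa between annotators: {kappa:.3f}")
```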

In the long term, such an algorithm could be used during interventions to support the personal and professional confidence of surgeons by supervising the tasks to be performed and issuing warnings if necessary (for example, in case of deviation from the standard procedure), thus acting as psychological support. After the surgery, such technology would be very useful for reviewing surgical videos if these videos could be automatically annotated. In addition, this system could provide an automated analysis of the performed procedure to highlight deviations from the protocol and give surgeons feedback to help them improve their surgical practice.

  • Abstract by HealthTech Master student Camille Graëff
  • Master project supervised by T. Lampert, P. Liverneaux & N. Padoy

CRYO-Track: Guiding multiple needles on planned trajectories

Background: Needle insertion for cryoablation of liver tumors is a complex problem: several constraints and behaviors must be taken into account, such as the natural breathing motion of the patient.

Objective: In this study, a system was developed to guide the insertion of the needle, using external markers to predict the position of internal structures and evaluating the trajectory and its feasibility in real time.

Methods: A gelatin phantom was built to simulate the patient's body, generate the data to fit the model, and evaluate the system. An electromagnetic tracking system was used to obtain the position of the needle, and two cameras formed a stereo vision system to obtain the positions of the external markers. The model's performance in predicting the target position was evaluated in terms of the root mean square errors (RMSEs) between real and predicted target and structure positions, and the camera calibration was evaluated using the mean reprojection error.
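
For illustration, the two evaluation metrics can be written compactly as follows (array names are assumptions, not the project's code):

```python
import numpy as np

def rmse(real_positions, predicted_positions):
    """Root mean square error between matched 3D point sets (N x 3)."""
    d = np.asarray(real_positions) - np.asarray(predicted_positions)
    return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))

def mean_reprojection_error(observed_px, reprojected_px):
    """Mean 2D distance (pixels) between detected and reprojected points."""
    d = np.asarray(observed_px) - np.asarray(reprojected_px)
    return np.mean(np.linalg.norm(d, axis=1))
```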

Partial Results: The average reprojection errors found over all points were 0.01533 pixels for the right camera and 0.01461 pixels for the left camera. The final root mean square error of the model on the test set was 0.70504 mm.

  • Abstract by HealthTech Master student João Victor Galvão da Mata
  • Master project supervised by C. Essert & A. Mukhopadhyay

Finite Element and Deep Learning Joint Approach for Developing an Augmented Reality Tool Dedicated to Liver Surgery

Radiofrequency ablation (RFA) therapy is a widely used minimally invasive technology to treat liver tumors. It consists of inserting a radiofrequency electrode that raises the temperature in the patient's tissue, producing degeneration and coagulation necrosis in the local tumoral area. To perform a correct percutaneous insertion and reach the target, the physician needs guidance from computed tomography (CT), ultrasound, or magnetic resonance (MR) imaging. However, the accuracy of the procedure is a crucial aspect of the surgery's success, and it depends on the surgeon's experience in interpreting the limited information given by these 2D imaging modalities. We propose an augmented reality tool based on the HoloLens 2 device that allows surgeons to project the 3D rendering of the relevant organ directly onto the patient in the exact anatomical position, and gives them visual feedback to perform the insertion at the correct angle, direction and depth. Our work relies on a multimodal marker and an external electromagnetic navigation system to align the virtual scene directly with the patient and provide the surgeon with correct 3D perception.
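
As an illustration of the alignment step, a standard least-squares rigid registration (the Kabsch algorithm) between tracked marker points and their known coordinates in the 3D model is sketched below; the project's actual registration pipeline may differ, and all names are illustrative:

```python
# Least-squares rigid transform between two matched 3D point sets.
import numpy as np

def rigid_transform(src, dst):
    """Return R (3x3), t (3,) minimizing ||R @ src_i + t - dst_i||."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```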

  • Abstract by HealthTech Master student Alessandro Albanesi
  • Master project supervised by J.-P. Mazellier & H. Courtecuisse

deep-LIOUS: an AI-enabled educational tool for laparoscopic intraoperative ultrasound

Internship Description: To develop a GUI-enabled educational tool for Intraoperative Ultrasound (IOUS) with automatic anatomical landmark detection, providing training capabilities to end users (i.e. surgeons).

A common treatment for liver cancer is liver resection. Traditionally, liver resection has been performed as open surgery. However, as laparoscopy has proven to cause fewer complications, Laparoscopic Liver Resection (LLR) is being adopted more widely. To help surgeons orientate themselves in the liver, Laparoscopic Intraoperative Ultrasound (LIOUS) is used as the imaging modality within LLR because it is harmless to patients (i.e. radiation-free), compact and inexpensive compared to other imaging modalities. Although LIOUS provides excellent anatomical information compared to e.g. CT, interpreting the internal liver anatomical structures in its images is very challenging. LLR therefore requires skill and experience to simultaneously understand complex liver anatomy through LIOUS and perform the resection. As a result, LIOUS further increases the complexity of LLR, hindering its adoption and dissemination. Hence, there is a clear need to train surgeons to improve their skills in recognizing internal liver anatomy via LIOUS.

In this project, an advanced software system was researched and developed to build an educational tool for surgeons. To build this tool, an acquisition pipeline was developed and tested for collecting LIOUS and CT ex-vivo liver image data along with tracking information of the LIOUS transducer. Volume reconstruction is performed on the acquired tracked LIOUS sequence to enable visualization and processing within 3D Slicer. Furthermore, automatic registration between the LIOUS and CT volumes was performed to enable faster and better cross-modality anatomy annotation (i.e. semantic segmentation). These annotated data were eventually used to train and validate various segmentation models for automatic anatomy recognition in LIOUS. With such automatic cross-modality anatomy recognition, our tool allows surgeons to train themselves to correctly identify liver anatomical structures on LIOUS and thereby improve their surgical skills.
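
As a small illustration of the validation step, the Dice score, a common overlap metric for evaluating segmentation models, can be computed as follows (illustrative code, not the project's):

```python
import numpy as np

def dice_score(pred_mask, gt_mask, eps=1e-7):
    """Dice overlap between a predicted and a ground-truth binary mask."""
    pred = np.asarray(pred_mask, bool)
    gt = np.asarray(gt_mask, bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```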

  • Abstract by Master student Karl-Philippe Beaudet
  • Master project supervised by A. Karargyris & J. Verde

Solution for magnetic resonance elastography using artificial intelligence

Magnetic resonance elastography (MRE) is a phase-contrast magnetic resonance imaging (MRI) technique that non-invasively maps the mechanical properties (mainly stiffness) of an organ from the propagation of shear waves within it. MRE provides in vivo mechanical values that inform on the pathological state of many biological soft tissues. MRE requires substantial data processing, and the reconstruction of stiffness maps from imaged shear displacement fields remains an essential step in MRE protocols, but significant progress is still needed to expand its application in clinical practice. Currently, being based on MRI phase images to encode the shear wave displacement field, the reconstruction methods have either limited reliability and quantitative accuracy or a long reconstruction time.

The objective of this study is to propose a new MRE method that could provide fast quantitative MRE measurements by using artificial intelligence (AI) to perform mechanical reconstructions from raw MRI images. By combining AI with reconstruction based on raw MRI data, this original approach will make it possible to bypass certain steps that are sources of error and noise, such as phase unwrapping, and thus to combine robustness of the measured data with speed of execution, two fundamental points in clinical MRE practice.
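
For context, a classical baseline that such an AI approach aims to improve upon is algebraic Helmholtz inversion, sketched below under simplifying assumptions (time-harmonic field, locally homogeneous isotropic tissue); this is illustrative and not the project's reconstruction code:

```python
# Recover the shear modulus mu from a complex time-harmonic displacement
# field u at excitation frequency f via mu = -rho * (2*pi*f)^2 * u / lap(u).
import numpy as np
from scipy.ndimage import laplace

def helmholtz_inversion(u, freq_hz, voxel_mm, rho=1000.0):
    """Pointwise shear-modulus estimate (Pa) from a complex displacement
    field `u` sampled on an isotropic grid with spacing `voxel_mm`."""
    h = voxel_mm * 1e-3                              # grid spacing in meters
    lap = (laplace(u.real) + 1j * laplace(u.imag)) / h ** 2
    omega = 2.0 * np.pi * freq_hz
    return np.real(-rho * omega ** 2 * u / (lap + 1e-12))
```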

  • Abstract by Master student Claudia Boily
  • Master project supervised by S. Chatelin & G. Exarchakis

Multiscale model based on medical imaging and influence of the KLF10 gene on the biomechanical behavior of skeletal muscle

Elastography is an imaging method that aims at mapping the mechanical properties of a tissue in vivo, mostly non-invasively. It is based on three successive steps: (1) the mechanical excitation of the sample or organ, (2) the encoding of the sample response using medical imaging (MRI, ultrasound or optical imaging), and (3) the reconstruction of the mechanical properties.

Magnetic resonance elastography (MRE) is a specific dynamic elastography method using shear wave propagation as the mechanical excitation and MRI facilities for displacement encoding. More marginally, but no less effectively, MRE has been developed for preclinical studies, such as the investigation of skeletal muscle pathologies in mice. In the literature, the few existing MRE setups all investigate mouse muscles using an electromechanical shaker actuator coupled to a carbon fiber rod and a piston to generate shear waves, a source of many experimental limitations.

In this study, we propose to develop an innovative and non-invasive MRE setup for the in vivo investigation of skeletal muscles in mouse legs in a preclinical 7T MRI scanner. We aim to overcome these technological barriers by developing a compact and adjustable setup with piezoelectric actuation used directly within the tunnel of the 7T preclinical MRI scanner. For this purpose, a specific actuator has been designed and calibrated using 3D-printed parts before future integration and validation on the Bruker 7T MRI scanner (IRIS platform, ICube laboratory, Strasbourg, France).

In the longer term, the aim of the project is to develop this mechanical imaging process to study in vivo the influence of the expression of a specific gene (KLF10) on the mechanical properties of slow-twitch (Soleus) and fast-twitch (Extensor Digitorum Longus, or EDL) muscles using a genetically modified mouse model.

  • Abstract by HealthTech Master student Aude Loumeaud
  • Master project supervised by S. Chatelin & S. Bensamoun

Robotic assistance for Blood-Brain-Barrier opening using focused ultrasound

Advances in physics and medical imaging technologies have made it possible to tackle one of the main challenges in brain disease medicine: reversible and targeted opening of the blood-brain barrier (BBB). A recent approach induces mechanical pressure on the tight junctions of the BBB using focused ultrasound (FUS). The acoustic pressure cavitates microbubbles injected into the bloodstream to mechanically stress the tight junctions of the BBB in a targeted manner, resulting in a temporary barrier disruption.

Promising results regarding the method's safety and efficacy are being reported all around the world.

In this project, a robotic framework is proposed for moving the FUS transducer in order to sonicate a pre-defined target area in the brain. Along with an adequate experimental setup and a 3D registration framework, an optimized sonication route generator is detailed and tested for optimal cavitation of the target area under the strict time constraints imposed by the nature of the problem.
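
As a hedged illustration of the route-generation problem, a greedy nearest-neighbor ordering of sonication targets is sketched below as a simple baseline; the project's optimized generator is presumably more sophisticated, and all names are illustrative:

```python
# Order 3D sonication targets by repeatedly visiting the nearest
# unvisited point, to keep the transducer travel path short.
import numpy as np

def greedy_route(points):
    """Return a visiting order (list of indices) over 3D target points."""
    pts = np.asarray(points, float)
    remaining = list(range(1, len(pts)))
    route = [0]                      # start from the first target
    while remaining:
        last = pts[route[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(pts[i] - last))
        route.append(nxt)
        remaining.remove(nxt)
    return route
```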

The robotic framework is described and then tested on the ROS2 (Robot Operating System 2) platform, and a thorough system performance analysis is carried out to determine its safety and applicability.

  • Abstract by HealthTech Master student Adnan Saood
  • Master project supervised by J. Vappou, F. Nageotte & L. Barbé

Camera network for skin lesion analysis by optical biopsy

Given the serious shortage of dermatologists in France, it is necessary to save them time by assisting their diagnosis so that they can take care of more patients. The purpose of this project is therefore to design an optical instrument for analyzing skin lesions in a fast and safe way. Based on a modular set of networked cameras, our instrument will exploit the principle of Mueller polarimetry and artificial intelligence algorithms to diagnose these lesions. In particular, this instrument will provide dermatologists with additional information by capturing physical properties of the skin invisible to the naked eye, enabling more advanced expertise.

The main objective therefore consists of transferring, over Wi-Fi, the skin lesion images acquired by five cameras to the dermatologist's computer. Each camera will be controlled independently by a dedicated Raspberry Pi single-board computer, which will also control several LEDs in order to vary the polarization states and angles. A sixth Raspberry Pi will be used as a router acting as a gateway between the Wi-Fi camera network, for which it will constitute the access point, and the dermatologist's Wi-Fi network, for which it will be a client. After configuring and deploying these wireless local area networks, we will develop two REST APIs (Representational State Transfer Application Programming Interfaces) in Python to allow communication between the different hosts. Sending requests from the dermatologist's computer via our APIs will thus make it possible to configure and control the camera network, which will return, at the end of the capture process, the acquired medical images in TIFF format. These images will then be analyzed by the artificial intelligence software, which will finally return to the dermatologist the diagnosis of the captured skin lesions.
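
As a minimal illustration of the planned REST interface, the sketch below shows what a capture endpoint on one camera's Raspberry Pi could look like using Flask; endpoint names, file paths and the acquisition helper are assumptions, not the project's actual API:

```python
from flask import Flask, jsonify, send_file

app = Flask(__name__)

@app.route("/capture", methods=["POST"])
def capture():
    # Trigger an acquisition with the current LED/polarization settings.
    image_path = acquire_image_to_tiff()
    return jsonify({"status": "ok", "image": image_path})

@app.route("/image/<name>", methods=["GET"])
def image(name):
    # Serve an acquired TIFF back to the dermatologist's computer.
    return send_file(f"/data/captures/{name}", mimetype="image/tiff")

def acquire_image_to_tiff():
    # Placeholder: drive the camera and LEDs here, save the frame as TIFF
    # and return its path (hypothetical helper, not the project's code).
    return "/data/captures/lesion_0001.tif"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```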

  • Abstract by HealthTech Master student Florian Déchaux
  • Master project supervised by T. Noël & J. Zallat

System for sensorimotor function recovery for hemiparetic patients

Stroke is among the leading causes of long-term physical disability in adults, and the population needing upper-limb rehabilitation is constantly increasing. Robotic devices for rehabilitation have been studied extensively, yet their application in clinical practice is still limited. Most of these devices are viable engineering products, but the neurophysiological issues underlying impaired sensorimotor functions are often overlooked in the design process.

The goal of this project is to use an end-effector robot with two degrees of freedom (DoF) for upper-limb rehabilitation of hemiparetic patients.

Surface electromyogram (sEMG) signals are measured as biofeedback to evaluate muscle activity. This biofeedback is used as input to adapt the assistance provided by the robot by tuning the parameters of an impedance controller. The main research problem is to discover how these parameters can be tuned according to certain features of sEMG signals recorded from multiple muscles.
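
As a hedged sketch of this adaptation idea, the snippet below implements a 2-DoF Cartesian impedance law whose stiffness is scaled down as the measured muscle activation increases (an assist-as-needed strategy); the mapping and all constants are illustrative assumptions, not the project's controller:

```python
import numpy as np

def impedance_force(x, xd, x_des, xd_des, activation, k_max=200.0, b=15.0):
    """Cartesian impedance law F = K (x_des - x) + B (xd_des - xd).

    x, xd      -- current position (m) and velocity (m/s), length-2 arrays
    activation -- normalized sEMG envelope in [0, 1] (e.g. windowed RMS of
                  the rectified signal divided by the MVC value)
    """
    k = k_max * (1.0 - np.clip(activation, 0.0, 1.0))   # assist-as-needed
    return k * (np.asarray(x_des) - np.asarray(x)) \
         + b * (np.asarray(xd_des) - np.asarray(xd))

# Example: high muscle activity -> low stiffness -> less robot assistance.
f = impedance_force([0.10, 0.05], [0.0, 0.0], [0.12, 0.05], [0.0, 0.0], 0.8)
```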

  • Abstract by HealthTech Master student Nada Salman
  • Master project supervised by B. Bayle & M. Pradines

Prediction of endoscope movements in robotic endoluminal interventions

Colorectal cancers raise major health concerns. Endoscopic submucosal dissection (ESD), a type of endoluminal surgery, is one of the main curative treatments for early-stage lesions, but it demands a high technical level from the surgeon, partly because of the difficulty of controlling flexible endoscopes. Teleoperated robotic platforms, including the STRAS robot designed at ICube, have therefore been developed to address these problems. However, STRAS comes with new control issues that affect the final position of the camera and require the surgeon to make regular corrections. In the end, the operator is distracted from the dissection task.

This project therefore aims at predicting the probability of a camera movement occurring in a chosen time interval, given the endoscopic videos and kinematic information. This information is provided by a dataset acquired with the STRAS robot. Eventually, this project could make it possible to automate the motions of the endoscopic camera, so that the surgeon can focus on the dissection.

Work conducted so far has focused on assessing the feasibility of the prediction task. We first used kinematic data to compute the trajectories of the camera and the instruments. We also collected key features in the video frames to determine whether visual information could be linked to kinematic parameters or trigger a camera motion. Early results show that camera and instrument trajectories can only be approximated, since hysteresis and tissue interactions introduce non-linearities that are complex to model. In addition, it seems that the procedures in the dataset do not all follow the same trajectory pattern, and that a given configuration of the instruments gives no clue as to whether the camera is about to move. This suggests that prediction may not be achievable from this information alone.

Upcoming work will focus on methods exploiting time dependency, such as long short-term memory (LSTM) networks, to try to solve these problems.
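
As an illustration of this planned approach, a minimal PyTorch sketch of an LSTM-based predictor is given below; feature dimensions and hyperparameters are assumptions, not the project's model:

```python
# Given a window of kinematic (and optionally visual) features, output the
# probability of a camera movement in the next time interval.
import torch
import torch.nn as nn

class CameraMovePredictor(nn.Module):
    def __init__(self, n_features=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)        # last hidden state
        return torch.sigmoid(self.head(h_n[-1]))   # P(camera moves)

model = CameraMovePredictor()
window = torch.randn(8, 50, 12)           # 8 sequences of 50 timesteps
prob = model(window)                       # shape (8, 1)
```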

  • Abstract by HealthTech Master student Edgard Weissrock
  • Master project supervised by F. Nageotte & B. Rosa

Robotized additive manufacturing of soft robots for medical devices

Focused ultrasound therapy emerged in the 2000s as a new, totally non-invasive, non-ionising therapy based on the physical interaction between a high-intensity ultrasound wave and the tissue to be treated, for example to locally burn tumours. Despite their undeniable therapeutic potential, these approaches are often complex and difficult to implement. Recent work carried out at the ICube laboratory in collaboration with the company Image Guided Therapy, as part of the UFOGuide project, has enabled the first clinical trial of thermoablation of bone tumors on a patient's forearm to be completed successfully.

The system is based on a high-intensity focused ultrasound probe, a passive mechanical system for positioning and orienting the probe (whose end organ is held in position by stiffening the flexible structure using granular jamming technology), and software for real-time registration and thermometry using MRI. The radiologist locates the anatomical target to be destroyed and manually positions and orients the probe using the locating system and mechanical support; once the patient is placed in the MRI, the ablation is monitored in real time using MRI thermometry. Although this approach is rather imprecise, it offers flexibility in positioning and orientation, making it particularly suitable for medical practice. If the positioning of the ultrasound focal point is not entirely satisfactory, it can be corrected slightly by shifting the ultrasound focus, which is achieved by electronically shifting the phase of the sources. However, to correct a larger error or to cover a larger area, the radiologist must move the structure supporting the probe, a step that can become time-consuming and error-prone.

The ICube laboratory has been working for several years on MRI-compatible actuation solutions, in particular with the development of a pneumatic linear stepper actuator made entirely of polymers. This technology is based on inchworm-type kinematics, integrating two grippers and an extension chamber. The extension chamber has the particularity of being a flexible elastomer structure with a reinforcement structure to carry out the translation. Initially made by multi-material additive manufacturing (MMAM), the extension chamber proved to be unreliable, with a limited life span. A second version made by silicone overmolding with a machined reinforcement structure solved these problems, at the expense of the design flexibility that MMAM can offer.

In this Master project, we are interested in a new manufacturing method for producing this extension chamber, combining the flexibility of additive manufacturing with the mechanical characteristics of a silicone overmolded chamber. We propose an additive manufacturing technique using an anthropomorphic robotic arm to deposit a silicone layer around a previously machined reinforcement structure.

  • Abstract by HealthTech Master student Jérémy Sand
  • Master project supervised by L. Barbé, B. Wach, M. Bednarczyk & F. Geiskopf