SenoCAD Research Projects

NIDAM-BC: Decision-making methodology and novel biomarkers for Non-Invasive Diagnosis And Monitoring in Breast Cancer patients

Breast cancer remains the second leading cause of death in Western women. Despite improvements in disease-free intervals and overall survival, response to breast cancer therapy is still largely unpredictable, and the tools available to predict who will respond optimally to which treatment are still relatively crude. Furthermore, conventional diagnosis and monitoring methods suffer from several limitations: poor specificity of breast MRI, late evaluation of response to treatment, and the lack of accurate imaging methods to assess complete cure. There is a clear need to diagnose breast cancer at an earlier stage and to overcome the limitations of treatment monitoring.

Complementary innovative technologies for analyzing a panel of biomarkers in non-invasive samples will be used jointly in NIDAM-BC to improve the specificity of early diagnosis and to provide the means to monitor breast cancer patients under treatment. Non-invasive methods such as metabolomic profile analysis, characterization of circulating tumor cells (CTCs), and functional MRI with microcirculation and diffusion analysis will be considered as potential information sources. The focus will be on primary breast cancer patients, to improve early clinical diagnosis, and on locally advanced and metastatic breast cancers, to demonstrate the benefit of incorporating novel biomarkers into treatment follow-up.

Our major goal is to provide a multimodal biomarker analysis workflow for clinical application that overcomes the limitations of conventional methods. The overall objectives of the project are: (1) to develop methods and devices for multimodal analysis of imaging and metabolic biomarkers that will improve the accuracy of clinical diagnostics; (2) to provide prediction models based on multiple quantitative parameters to predict therapeutic failure earlier in locally advanced and metastatic breast cancer.

 


 

WHBUS: Computer-aided methods for early breast cancer detection by fusion of 3D whole breast ultrasound and elastography

 

About 20% to 30% of breast cancers are missed when only X-ray mammography is used. Furthermore, conventional Computer-Aided Detection (CAD) methods generate too many false positives, which results in a large number of unnecessary biopsies. Significant improvements in both detection sensitivity and specificity are therefore required to design an accurate detection system for breast cancer screening. In addition, descriptors such as the volume of detected cancers and ultrasound BI-RADS descriptors will be useful for diagnostic purposes.
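The sensitivity/specificity trade-off discussed above can be made concrete with a short, hypothetical calculation. The cohort counts below are illustrative only, not data from the project:

```python
def sensitivity(tp, fn):
    """Fraction of actual cancers that the system detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of healthy cases correctly left alone (no biopsy)."""
    return tn / (tn + fp)

# Illustrative screening cohort: 1000 women, 10 of whom have cancer.
# Mammography alone at ~80% sensitivity misses 2 of the 10 cancers.
tp, fn = 8, 2
# A CAD system with many false positives triggers 50 needless work-ups.
tn, fp = 940, 50

print(f"sensitivity = {sensitivity(tp, fn):.2f}")   # 0.80
print(f"specificity = {specificity(tn, fp):.3f}")   # 0.949
```

Raising sensitivity (fewer missed cancers) while also raising specificity (fewer needless biopsies) is exactly the twin goal stated above.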

In this industrial research project, the first technological improvement is the joint use of 3D whole breast ultrasound imaging, elastography and full-field digital mammography. Automated 3D breast ultrasound systems, recently improved by Siemens Medical Solutions U.S.A., will allow better detection of cancers in dense breasts, for which X-ray mammography is not well suited. Furthermore, elastography will allow much better characterization of candidate abnormalities than X-ray imaging. The second technological improvement is to combine unsupervised CAD methods with cascaded supervised CAD methods to design an innovative and efficient hybrid CAD method. The third technological improvement is to implement these advanced CAD methods on novel highly parallel architectures, in order to process large volumes of 3D imaging data in near real time.
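The hybrid CAD idea, an unsupervised detector proposing candidates at high sensitivity, followed by a cascade of supervised stages that reject false positives, can be sketched as follows. All names, thresholds and the toy image are hypothetical illustrations, not the project's actual pipeline:

```python
def unsupervised_candidates(image, bg_level):
    """Unsupervised stage: propose every pixel brighter than the
    background as a candidate (high sensitivity, low specificity)."""
    return [(r, c, v)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v > bg_level]

def cascade(candidates, stages):
    """Supervised cascade: apply stages in order; a candidate must pass
    them all. Each stage is a cheap classifier: candidate -> bool."""
    for stage in stages:
        candidates = [cand for cand in candidates if stage(cand)]
    return candidates

# Toy 2D "image" with one strong spot and several faint distractors.
image = [
    [0, 0, 1, 0],
    [0, 9, 1, 0],
    [0, 1, 0, 6],
    [0, 0, 0, 0],
]
cands = unsupervised_candidates(image, bg_level=0)
stages = [
    lambda cand: cand[2] >= 5,   # stage 1: reject faint candidates
    lambda cand: cand[2] >= 8,   # stage 2: stricter intensity model
]
print(cascade(cands, stages))    # only the strongest candidate survives
```

The cascade structure keeps the expensive stages cheap in practice: most false positives are discarded by the early, inexpensive stages, which also makes the method a good fit for the highly parallel architectures mentioned above.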

Expected impact: (1) a significant increase of the cancer detection rate, from about 80% using X-ray breast imaging alone to more than 97% using 3D breast ultrasound as an adjunct to X-ray mammography; (2) a significant reduction of the number of unnecessary biopsies, so that CAD methods reach the average performance level of screening radiologists. These achievements will result in better acceptance of CAD methods by European radiologists, allow an efficient use of automated 3D ultrasound imaging for early breast cancer screening, and lead to significant healthcare cost reductions.

 


HN-SEGM: Automatic segmentation of head-neck lesions in 3D PET images

 

There is a need for improved methods for accurate and fast segmentation of lesions in 3D PET images. The automatic segmentation method developed at SenoCAD Research GmbH integrates several segmentation algorithms, with the objective of segmenting 3D PET images with good accuracy. The proposed method takes as input the raw data and the location of the lesion center in the annotated slice. The lesion is first segmented in the annotated slice using our poly-segmentation algorithm, and is then progressively segmented in neighboring slices using recursive backtracking. Bifurcations and end points are detected automatically. The proposed algorithm analyzes PET imaging data only; CT data is not used.
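The slice-propagation idea, segment the annotated slice from a seed point, then extend the segmentation outwards slice by slice, stopping where the lesion vanishes, can be sketched in a few lines. The flood-fill threshold, the list-of-lists data layout and the centroid re-seeding are simplifying assumptions; the project's actual poly-segmentation algorithm described below is considerably more sophisticated:

```python
from collections import deque

def segment_slice(sl, seed, thr):
    """Flood-fill the connected region around `seed` with intensity > thr."""
    rows, cols = len(sl), len(sl[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if sl[r][c] <= thr:
            continue
        region.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

def propagate(volume, z0, seed, thr):
    """Segment the annotated slice z0, then walk outwards slice by slice,
    reusing the previous region's centroid as the next seed; stop when
    the lesion vanishes."""
    result = {z0: segment_slice(volume[z0], seed, thr)}
    for step in (+1, -1):
        prev = result[z0]
        z = z0 + step
        while 0 <= z < len(volume) and prev:
            rs = [r for r, _ in prev]
            cs = [c for _, c in prev]
            centroid = (sum(rs) // len(rs), sum(cs) // len(cs))
            region = segment_slice(volume[z], centroid, thr)
            if not region:
                break
            result[z] = region
            prev, z = region, z + step
    return result

# Toy 3-slice volume: the lesion is largest in the annotated slice z=1.
volume = [
    [[0, 0, 0], [0, 5, 0], [0, 0, 0]],
    [[0, 5, 0], [5, 5, 5], [0, 5, 0]],
    [[0, 0, 0], [0, 5, 0], [0, 0, 0]],
]
regions = propagate(volume, z0=1, seed=(1, 1), thr=1)
print(sorted(regions))   # [0, 1, 2]: lesion found in all three slices
```

The real method replaces the naive flood fill with the poly-segmentation algorithm and handles bifurcations and end points explicitly, but the propagation skeleton is the same.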

Our poly-segmentation algorithm is based on a multi-resolution approach: it segments small lesions using recursive thresholding after unsharp masking, and combines three segmentation algorithms for the delineation of larger lesions. First, morphological segmentation by watersheds is used for initialization. Internal markers are generated by extracting the regional maxima of the morphologically filtered image. The external marker of the lesion is generated using two different approaches: (i) in the annotated slice, it is obtained by thresholding the background intensity, after statistical analysis of the granular structure of the background; (ii) in neighboring slices, it is obtained by dilating the lesion segmented in the previous slice. These two markers and the morphological gradient are then used to segment the lesion by the watershed method. Second, segmentation by iterative intensity thresholding is used to improve the initial segmentation. The threshold is estimated adaptively from image statistics in order to overcome the lack of robustness of thresholding methods. The rationale of this approach is as follows: contours of lesions in PET images can generally be based on visual perception and on the detection of closed gradient crest lines; however, this is not the case for large lesions with a high signal-to-noise ratio, where the precise extent of the segmented lesion must instead be based on the underlying physics and on statistical detection. Third, a region growing method, which combines morphological operations and statistical analysis, is used to further refine the final segmentation of lesions.
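To illustrate the second stage, a threshold estimated adaptively from image statistics, here is one standard scheme of that kind (the classic Ridler-Calvard iteration). The project's actual estimator may differ; this sketch and its toy intensities are for illustration only:

```python
def iterative_threshold(values, eps=1e-6):
    """Estimate a threshold from image statistics: start at the global
    mean, then repeatedly set the threshold to the midpoint of the mean
    foreground and mean background intensities until it stabilizes
    (the classic Ridler-Calvard scheme)."""
    t = sum(values) / len(values)
    while True:
        fg = [v for v in values if v > t]
        bg = [v for v in values if v <= t]
        if not fg or not bg:
            return t
        t_new = (sum(fg) / len(fg) + sum(bg) / len(bg)) / 2
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Toy intensity sample: background around 1-2, lesion around 9-11.
pixels = [1, 1, 2, 1, 2, 10, 11, 9, 10]
t = iterative_threshold(pixels)
print(round(t, 2))   # 5.7, midway between the two intensity modes
```

Because the threshold is derived from the data itself rather than fixed in advance, it adapts to the varying signal-to-noise ratios discussed above.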

With the proposed poly-segmentation algorithm, the user is not involved in the contouring of lesions at all.

The designed system is a prototype of the final PET segmentation software. The final application will have a user-friendly interface for easy and fast interaction; the interface is currently under development using the Qt software framework. Furthermore, the segmentation algorithms will be accelerated using Graphics Processing Units (GPUs) for time-consuming operations. A clinical validation should later be performed on several hundred cases. After this validation, our segmentation software will be offered as a full-fledged product for use in clinical practice.

 

Fast interactive 3D segmentation of head-neck lesions in PET images

 

The accurate segmentation of lesions in PET images is generally time consuming. The objective of the interactive segmentation method developed at SenoCAD Research GmbH is to allow rapid and accurate segmentation of 3D PET images. Our method integrates an automatic segmentation method and an interactive tool that allows the user to improve delineations. First, automatic 3D segmentation of lesions is performed using the poly-segmentation algorithm that we designed for automatic segmentation of lesions in 3D PET images: the lesion is first segmented automatically in the annotated slice, and is then progressively segmented in neighboring slices using recursive backtracking. Bifurcations and end points are detected automatically. The proposed algorithm analyzes PET imaging data only; CT data is not used to segment lesions.

After the user has reviewed the computer segmentations, lesion delineations are further improved using our interactive segmentation algorithm. The user is only involved in interactive contouring of lesions in a few selected slices, which reduces the amount of user interaction required for contouring. In fact, our segmentation tool does not involve any manual settings to initialize each contour: it only requires drawing small internal segments, which are grown over the computer-segmented lesions to obtain a larger lesion marker. Such segments can be drawn quickly and provide robust segmentation results, since the lesion delineation is not sensitive to the exact location of the internal lesion marker, as shown by our experimental results.
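The key property, that a short internal stroke grows into the same lesion marker regardless of where inside the lesion it is drawn, can be sketched as follows. This is a hypothetical simplification of the interactive tool: the mask, strokes and flood-fill growth are illustrative assumptions:

```python
from collections import deque

def grow_marker(mask, segment):
    """Grow a short user-drawn internal segment into a full lesion marker
    by flooding over the computer-segmented lesion mask. The exact
    placement of the segment inside the lesion does not matter, which is
    what makes the interaction robust."""
    rows, cols = len(mask), len(mask[0])
    marker, queue = set(), deque(segment)
    while queue:
        r, c = queue.popleft()
        if (r, c) in marker or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not mask[r][c]:
            continue
        marker.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return marker

# Computer-segmented lesion (1s). Two different short user strokes inside
# it grow to the same marker, illustrating insensitivity to placement.
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
]
a = grow_marker(mask, [(0, 1)])
b = grow_marker(mask, [(1, 3)])
print(a == b)   # True: same marker regardless of stroke location
```

This is why the tool needs no per-contour manual settings: any quick stroke inside the lesion yields the same internal marker.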

It took the user about 7 minutes to complete the experimental protocol, without taking any break. This interaction time breaks down as follows: (i) an average of 40 seconds per VOI for displaying the segmented images and selecting the segmentations to be improved; (ii) an average of 20 seconds per VOI for interactive segmentation of lesions in 10 selected slices; (iii) an average total of 60 seconds per VOI to complete the experimental protocol.

The designed system is a prototype of the final PET segmentation software. The final application will have a user-friendly interface for easy and fast interaction; the interface is currently under development using the Qt software framework. Since Qt is a cross-platform framework, the application could also be ported to mobile devices such as phones and tablets, so that radiologists could use it even on the move. The DICOM data will be stored in an image database in a data center for better accessibility. Furthermore, the algorithms will be accelerated using Graphics Processing Units (GPUs) for time-consuming operations. A clinical validation should later be performed on several hundred cases. After this validation, our segmentation software will be offered as a full-fledged product for use in clinical practice.