
1st Workshop on Medical Image Learning with Less Labels and Imperfect Data

Program for Medical Image Learning with Less Labels and Imperfect Data (October 17, Room Madrid 5)

8:00-8:05      Opening remarks

8:05-8:45      Keynote Speaker: Kevin Zhou, Chinese Academy of Sciences

8:45-9:25      Keynote Speaker: Pallavi Tiwari, Case Western Reserve University

9:25-10:20     Oral Presentations (6 minutes for each paper)

1) Self-supervised learning of inverse problem solvers in medical imaging [slides]

2) Weakly Supervised Segmentation of Vertebral Bodies with Iterative Slice-propagation [slides]

3) A Cascade Attention Network for Liver Lesion Classification in Weakly-labeled Multi-phase CT Images [slides]

4) CT Data Curation for Liver Patients: Phase Recognition in Dynamic Contrast-Enhanced CT [slides]

5) Active Learning Technique for Multimodal Brain Tumor Segmentation using Limited Labeled Images [slides]
6) Semi-supervised Learning of Fetal Anatomy from Ultrasound [slides]
7) Multi-modal segmentation with missing MR sequences using pre-trained fusion networks [slides]
8) More unlabelled data or label more data? A study on semi-supervised laparoscopic image segmentation [slides]
9) Few-shot Learning with Deep Triplet Networks for Brain Imaging Modality Recognition [slides]

10:20-10:30    Coffee Break

10:30-11:15    Keynote Speaker: Bennett Landman, Vanderbilt University [slides]

11:15-11:55    Oral Presentations (6 minutes for each paper)

10) A Convolutional Neural Network Method for Boundary Optimization Enables Few-Shot Learning for Biomedical Image Segmentation [slides]

11) Transfer Learning from Partial Annotations for Whole Brain Segmentation [slides]

12) Learning to Segment Skin Lesions from Noisy Annotations [slides]
13) A Weakly Supervised Method for Instance Segmentation of Biological Cells [slides]
14) Towards Practical Unsupervised Anomaly Detection on Retinal Images [slides]
15) Fine-tuning U-Net for ultrasound image segmentation: which layers? [slides]
16) Multi-task Learning for Neonatal Brain Segmentation Using 3D Dense-Unet with Dense Attention Guided by Geodesic Distance [slides]

11:55-12:00    Closing remarks
Why we organize this workshop:

In the last few years, deep neural networks have emerged as the state-of-the-art approach for various medical image analysis tasks, including detection, segmentation, and classification of pathological regions. However, several fundamental challenges prevent deep neural networks from achieving their full potential in medical applications.
 
First, deep neural networks often require a large number of labeled training examples to achieve superior accuracy over traditional machine learning algorithms or to match human performance. For example, it took more than 100,000 clinically labeled images from multiple medical institutions for deep networks to match the diagnostic accuracy of human dermatologists. While crowdsourcing services provide an efficient way to create labels for natural images or text, they are usually not appropriate for medical data, which are subject to strict privacy standards. In addition, annotating medical data requires significant medical or biological knowledge that most crowdsourcing workers do not possess. For these reasons, machine learning researchers often rely on domain experts to label the data. This process is expensive and inefficient, and it is therefore often unable to produce a sufficient number of labels for deep networks to flourish.
 
Second, to make matters worse, medical data are often noisy and imperfect, whether due to missing information in medical records or to heterogeneity in sensing technologies and imaging protocols. These effects pose a great challenge for conventional learning frameworks. They can also create a discrepancy between training and test data, decreasing the overall system's accuracy to an extent that it is no longer safe to use. Finally, human annotation is another major source of errors: high intra- and inter-physician variability is well known in medical diagnostic tasks such as the classification of small lung nodules or histopathological images, leading to erroneous labels that can derail learning algorithms. How to deal effectively with imperfection in medical data and labels remains an open research question.
 
This workshop aims to create a forum for discussing best practices in medical image learning with label scarcity and data imperfection. It can potentially help answer many important questions. For example, several recent studies found that deep networks are robust to massive random label noise but more sensitive to structured label noise. What implications do these findings have for dealing with noisy medical data? Recent work on Bayesian neural networks demonstrates the feasibility of estimating the uncertainty caused by a lack of training data; in other words, it enables classifiers to be aware of what they do not know. Such a framework is important for medical applications, where safety is critical. How can researchers in the MICCAI community leverage this approach to improve their systems' robustness in the case of data scarcity? Our prior work shows that a variant of capsule networks generalizes better than convolutional neural networks with an order of magnitude less training data. This raises an interesting question: are there classes of networks that intrinsically require less labeled data for learning? Humans have long held an edge over deep networks when it comes to learning from small amounts of data, yet recent work on one-shot deep learning has surpassed humans on an image recognition task using only a few training samples for each task. Do these results still hold for medical image analysis tasks?
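
As a concrete illustration of the uncertainty-estimation idea mentioned above, here is a minimal Monte Carlo dropout sketch in PyTorch. It assumes a hypothetical classifier that contains dropout layers; dropout is kept stochastic at test time, and the spread of repeated predictions serves as a rough uncertainty proxy. This is a generic sketch, not the method of any particular workshop paper.

    import torch
    import torch.nn.functional as F

    def mc_dropout_predict(model, x, n_samples=30):
        """Monte Carlo dropout: keep dropout stochastic at test time and
        use the spread of repeated predictions as an uncertainty proxy."""
        model.train()  # enables dropout (note: this also switches BatchNorm to
                       # batch statistics; a careful implementation re-enables
                       # only the dropout layers)
        with torch.no_grad():
            probs = torch.stack(
                [F.softmax(model(x), dim=1) for _ in range(n_samples)]
            )                                  # (n_samples, batch, num_classes)
        mean_probs = probs.mean(dim=0)         # averaged predictive distribution
        # predictive entropy is high when the sampled predictions disagree
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
        return mean_probs, entropy

    # Hypothetical usage: flag low-confidence cases for expert review
    # probs, uncertainty = mc_dropout_predict(classifier, image_batch)
    # needs_review = uncertainty > threshold_chosen_on_validation_data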
 
This forum is urgently needed because the issues of label noise and data scarcity are highly practical yet largely under-investigated in the medical image analysis community. Traditional approaches to these challenges include transfer learning, active learning, denoising, and sparse representation. Most of these algorithms were developed before the recent advances in deep learning and may not benefit from the power of deep networks. Revisiting and improving these techniques in light of deep learning is long overdue.

 

What will be covered:

Our workshop will cover the following research topics related to medical image learning:

  • Designs of new network architectures that generalize well with less training data

  • Research analyzing the behavior of deep networks (and other learning models) in the face of label and data noise (a toy noise-injection setup is sketched after this list)

  • Methods such as one-shot learning or transfer learning that leverage large imperfect datasets and a modest number of labels to achieve good performance

  • Methods for removing or rectifying noisy data or labels

  • Techniques for estimating uncertainty due to the lack of data or noisy input such as Bayesian deep networks

  • Other sensible approaches for dealing with label scarcity and imperfect data
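
As a concrete companion to the noise-analysis topic above, the following sketch shows how such robustness studies are typically set up: a label vector is corrupted with either symmetric (uniformly random) noise or structured (class-confusion) noise before training. The function name and the NumPy-based setup are illustrative assumptions, not code from any workshop submission.

    import numpy as np

    def corrupt_labels(labels, noise_rate, num_classes, confusion=None, seed=0):
        """Return a copy of `labels` with roughly `noise_rate` of them flipped.

        confusion=None  -> symmetric noise: flipped labels are drawn uniformly
                           from the remaining classes.
        confusion=dict  -> structured noise: each class is flipped to a fixed
                           "confusable" class, mimicking systematic annotator errors.
        """
        rng = np.random.default_rng(seed)
        noisy = labels.copy()
        flip = rng.random(len(labels)) < noise_rate
        for i in np.where(flip)[0]:
            if confusion is None:
                choices = [c for c in range(num_classes) if c != noisy[i]]
                noisy[i] = rng.choice(choices)
            else:
                noisy[i] = confusion[noisy[i]]
        return noisy

    # Example: 40% symmetric vs. 40% structured noise on 3-class labels
    y = np.array([0, 1, 2, 1, 0, 2, 1, 0])
    y_symmetric = corrupt_labels(y, 0.4, num_classes=3)
    y_structured = corrupt_labels(y, 0.4, num_classes=3, confusion={0: 1, 1: 0, 2: 0})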

Paper Submission:

  • The MIL3ID 2019 proceedings will be published as a volume in the Springer Lecture Notes in Computer Science (LNCS) series.

  • Papers are limited to 8 pages. Papers should be formatted in Lecture Notes in Computer Science style. Style files can be found on the Springer website. The file format for submissions is Adobe Portable Document Format (PDF). Other formats will not be accepted.

  • Authors should consult Springer’s authors’ guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Springer encourages authors to include their ORCIDs in their papers. 

  • MIL3ID reviewing is strictly double blind: authors do not know the names of the reviewers of their papers, and reviewers do not know the names of the authors. Please see the Anonymity guidelines of MICCAI 2019 for detailed explanations of how to ensure this.

  • Supplemental material submission is optional. This material may include: videos of results that cannot be included in the main paper, anonymized related submissions to other conferences and journals, and appendices or technical reports containing extended proofs and mathematical derivations that are not essential for understanding the paper. Contents of the supplemental material should be referred to appropriately in the paper; note that reviewers are not obliged to look at it.

  • Our policy is that in submitting a paper, authors implicitly acknowledge that NO paper of substantially similar content has been or will be submitted to another conference or workshop until MIL3ID decisions are made.

  • MIL3ID uses the same online submission system for the final submission. Please follow the instructions on the Springer website to prepare your final submission. Authors of each accepted paper need to upload a single zip file including the following materials: 1) a completed copyright form, 2) a PDF of the camera-ready paper, and 3) the source files of the camera-ready paper, namely a Word file or a .tex file plus all figures, style files, special fonts, .eps files, .bib files, etc.

Submission: https://cmt3.research.microsoft.com/MIL3ID2019



Important Dates: 

  • Full Paper Submission Deadline: Midnight, Pacific Time (23:59 PDT), August 5, 2019

  • Notification of Acceptance: August 15, 2019

  • Deadline for Camera Ready Submission: August 20, 2019

  • Workshop: October 17, AM

 

About Organizers:

  • Dr. Hien V Nguyen is an Assistant Professor in the Department of Electrical and Computer Engineering, University of Houston. 

  • Dr. Badri Roysam (Fellow IEEE, AIMBE) is the Hugh Roy and Lillie Cranz Cullen University Professor, and Chairman of the Electrical and Computer Engineering Department at the University of Houston. 

  • Dr. Steve Jiang (Fellow FInstP, AAPM) is the Barbara Crittenden Professor in Cancer Research, Vice Chair of the Department of Radiation Oncology, and the founding director of the Medical Artificial Intelligence and Automation Laboratory at UT Southwestern Medical School.

  • Dr. S. Kevin Zhou (Fellow AIMBE) is a Professor at the Institute of Computing Technology, Chinese Academy of Sciences. He was a Principal Expert of Image Analysis and a Senior R&D director at Siemens Healthcare Technology. 

  • Dr. Vishal M. Patel is an Assistant Professor in the Department of Electrical and Computer Engineering at Johns Hopkins University. 

  • Dr. Khoa Luu is an Assistant Professor in the Department of Computer Science and Computer Engineering and the Director of the Computer Vision and Image Understanding Lab at the University of Arkansas.

  • Dr. Ngan Le is an Assistant Professor in the Department of Computer Science and Computer Engineering at the University of Arkansas.

 

Event snapshots:

 

 
