ers as negative. During classification, each new example is classified by all three classifiers. If more than
one classifies the point as positive, it is assigned the
label of the classifier with the maximum margin.
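This tie-breaking rule can be made concrete with a short sketch. The following is a minimal illustration (not the authors' implementation), assuming three binary RBF-SVMs trained one-vs-rest in scikit-learn; the label names and hyperparameter values are placeholders. Here decision_function returns the signed distance to each classifier's separating hyperplane, so the classifier with the largest value wins.

```python
import numpy as np
from sklearn.svm import SVC

PLAQUE_TYPES = ["calcium", "lipid", "fibrous"]  # hypothetical label names

def train_ovr(X, y):
    """Train one binary RBF-SVM per plaque type (one-vs-rest)."""
    classifiers = []
    for label in PLAQUE_TYPES:
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # C, gamma tuned separately
        clf.fit(X, (y == label).astype(int))  # this type positive, rest negative
        classifiers.append(clf)
    return classifiers

def classify(classifiers, x):
    """Assign the label of the classifier with the maximum signed margin."""
    margins = [clf.decision_function(x.reshape(1, -1))[0] for clf in classifiers]
    return PLAQUE_TYPES[int(np.argmax(margins))]
```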
Two parameters must be supplied to the SVM: C, the
regularization parameter that trades off margin size
against training error, and γ, the bandwidth of the
RBF kernel. In our experiments, we select these
parameters using an internal fivefold stratified cross-validation loop and a two-dimensional grid search.
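As a concrete sketch of this selection step, the inner loop corresponds directly to a grid search with stratified five-fold cross-validation in scikit-learn. The grids and scoring metric below are illustrative assumptions, not the values used in the paper.

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

# Illustrative logarithmic grids; the actual ranges are not specified here.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}

search = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="roc_auc",  # an assumption; any appropriate metric works
)
# search.fit(X_train, y_train) selects (C, gamma) using the inner folds only,
# so the outer test data never influences the chosen hyperparameters.
```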
Image Data Sets
The clinical images (in vivo) that we used for evaluating our approach were selected from a large database of manually analyzed IVOCT images obtained in a clinical setting. Images were collected on the C7-XR system from St. Jude Medical Inc., Westford, MA.
It has an OCT swept source with a 1310 nm center wavelength, 110 nm wavelength range, 50 kHz sweep rate, and 12 mm coherence length. The pullback speed was 20 mm/s and the pullback length was 54 mm. The data set consists of 35 IVOCT pullbacks of the left anterior descending (LAD) and left circumflex (LCX) coronary arteries, acquired from patients prior to stent implantation, for a total of 287 images across 35 patients. An expert cardiologist on our team then labeled volumes of interest (VOIs) in the images as belonging to one of the three plaque types, marking the VOIs of a particular plaque type with freehand brush strokes. In the clinical images the expert annotated 311 VOIs (roughly equal numbers from each plaque type). VOIs were of various sizes and shapes; most consisted of 2–5 image frames, 50–200 A-lines, and 20–50 sample points in each A-line.
A concern with these images is that the annotations we train on are provided by an expert and so could contain errors. To evaluate the performance of the trained classifier against ground truth, we created a second data set using cryo-imaging of cadaver samples (Salvado et al. 2006). The system serially sections the tissue and acquires micron-scale color images under different lighting wavelengths (the left column, bottom row of figure 7, which appears later in this article, shows an example of lipid plaque obtained this way) and autofluorescence microscopy images along the vessel (the left column, top row of figure 7 shows a calcified lesion obtained this way).
Visualization software is then used on the cryo-images to generate microscopic-resolution color/fluorescence volume renderings of vessels, in which plaque architecture and components are fully preserved (Nguyen et al. 2008, Prabhu et al. 2016). This provides an accurate depiction of the vessel without the limitations of standard histological fixation and processing (shrinkage, spatial distortion, missing calcifications, missing lipid pools, tears, and so on). Most importantly, it provides 3D validation for volumetric IVOCT pullbacks. Furthermore, in cases where the plaque type may be ambiguous, the system enables acquisition of standard cryo-histology.
We acquired a set of 106 such cryo-images. Note that, because these are ex vivo images, we do not use them for training our classifiers, only for validating the results. We refer to them as “cryo-images” below to distinguish them from the clinical set.
Empirical Evaluation
We now describe experiments to test our hypothesis that the system described above can accurately and efficiently classify different plaque types from IVOCT images.
We preprocess all images for speckle noise reduction, baseline subtraction, catheter optical system correction, and catheter eccentricity correction. We then segment the lumen and the back border using dynamic programming, with a cost function taken from prior work (Wang 2012). An example of the back-border segmentation results is shown in figure 5 in both the r-θ view and the x-y view. Segmenting the image in this way is important because (1) the regions of interest are contained between these borders and the remaining pixels carry no relevant information, and (2) it enables us to properly compute the distance to the lumen and the beam penetration depth discussed previously, which are important signals for distinguishing plaque types.
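The cost function itself comes from Wang (2012) and is not reproduced here. As a rough, generic sketch of the dynamic-programming step (array layout, the smoothness constraint, and all names are assumptions), a border can be traced as a minimum-cost path across A-lines in the r-θ image, one row per column:

```python
import numpy as np

def dp_border(cost, max_jump=2):
    """Trace one border through an r-theta image by dynamic programming.

    cost: 2D array (rows = depth samples, cols = A-lines); lower cost means
    the pixel looks more border-like. A smoothness constraint limits the
    border to moving at most `max_jump` rows between neighboring A-lines.
    """
    n_rows, n_cols = cost.shape
    acc = np.full((n_rows, n_cols), np.inf)   # accumulated path cost
    back = np.zeros((n_rows, n_cols), dtype=int)  # backpointers
    acc[:, 0] = cost[:, 0]
    for j in range(1, n_cols):
        for i in range(n_rows):
            lo, hi = max(0, i - max_jump), min(n_rows, i + max_jump + 1)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    # Backtrack from the cheapest endpoint in the last A-line.
    path = [int(np.argmin(acc[:, -1]))]
    for j in range(n_cols - 1, 0, -1):
        path.append(back[path[-1], j])
    return path[::-1]  # row index of the border at each A-line
```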
Next, we generate features by scanning the annotated VOIs in the image pixel by pixel. For each pixel, we construct a 7 × 11 × 3 neighborhood (0.035 mm × 0.055 mm × 0.6 mm) around it. As long as the neighborhood lies within the VOI, the features of the box are computed as explained above and the resulting values are assigned to the pixel. In the cryo-images, we generated features for all pixels between the border regions in a similar way.
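A minimal sketch of this scanning step follows; the feature computation itself is as described earlier and is abstracted away here, and the array layout and function names are assumptions for illustration.

```python
import numpy as np

def voi_neighborhoods(volume, voi_mask, half=(3, 5, 1)):
    """Yield (pixel index, neighborhood box) pairs for feature computation.

    volume and voi_mask are 3D arrays indexed (sample, a_line, frame);
    voi_mask is True inside the annotated VOI. Half-widths (3, 5, 1)
    around the center pixel give the 7 x 11 x 3 box from the text.
    """
    hs, ha, hf = half
    for s, a, f in zip(*np.nonzero(voi_mask)):
        if s < hs or a < ha or f < hf:
            continue  # box would fall off the edge of the volume
        box = voi_mask[s - hs:s + hs + 1, a - ha:a + ha + 1, f - hf:f + hf + 1]
        if box.shape != (2 * hs + 1, 2 * ha + 1, 2 * hf + 1) or not box.all():
            continue  # keep only neighborhoods fully inside the VOI
        yield (s, a, f), volume[s - hs:s + hs + 1,
                                a - ha:a + ha + 1,
                                f - hf:f + hf + 1]
```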
For cross-validation we use the processed images with a leave-one-pullback-out strategy: in each iteration, we hold out all the data from one pullback as the test set and use the remaining 34 pullbacks as the training set. This mimics practical usage, where the system operates on novel pullbacks, and is more stringent than using random folds. In a second experiment, we ran the trained classifiers on the cryo-images (these were not used at all during training or cross-validation). We ran our experiments on a 64-bit Windows 7 machine with a third-generation Intel Core i7 and 16 GB of RAM.
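The leave-one-pullback-out protocol corresponds to grouped cross-validation with the pullback ID as the group. A minimal sketch with scikit-learn follows; the synthetic data, feature dimensionality, and fixed hyperparameters are placeholders, not the paper's values.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

# Synthetic stand-ins: X would be the per-pixel feature matrix, y the plaque
# labels, and pullback_ids the pullback each pixel came from (35 pullbacks).
rng = np.random.default_rng(0)
X = rng.normal(size=(700, 16))
y = rng.integers(0, 3, size=700)
pullback_ids = np.repeat(np.arange(35), 20)

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=pullback_ids):
    clf = SVC(kernel="rbf")  # hyperparameters would be tuned on an inner loop
    clf.fit(X[train_idx], y[train_idx])          # train on 34 pullbacks
    acc = clf.score(X[test_idx], y[test_idx])    # test on the held-out pullback
```

No pixel from the held-out pullback ever enters training, which is what makes this protocol more stringent than random folds: pixels from the same pullback are highly correlated, and random folds would leak that correlation across the train/test split.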
Results and Discussion
The receiver operating characteristic (ROC) curves for each OVR classifier from the cross-validation experiment are shown in figure 6. The summary statistics are shown in table 1, where the accuracy, sensitivity, and specificity are noted at the optimal operating point along