Original papers

Combination of B-mode and color Doppler mode using mutual information including canonical correlation analysis for breast cancer diagnosis

Tongjai Yampaka, Prabhas Chongstitvatana

Department of Computer Engineering, Chulalongkorn University, Bangkok, Thailand

Received 12.10.2019; Accepted 15.12.2019

Med Ultrason 2020, Vol. 22, No 1, 49-57

Corresponding author: Tongjai Yampaka

Department of Computer Engineering Chulalongkorn University

Phayathai Rd., Pathumwan, 10330 Bangkok, Thailand
Phone: +660849419437

E-mail: [email protected]

Abstract

Aim: This study proposes the combination of B-mode and color Doppler mode using Mutual Information including Canonical Correlation Analysis (MI-CCA) to improve breast cancer diagnosis. Materials and methods: The dataset consisted of 53 benign lesions and 202 malignant lesions, including B-mode and color Doppler mode. Convolutional Neural Networks (CNNs) were applied to automatically extract the features from breast ultrasound images. Then, MI-CCA was performed to fuse the features with maximized correlation. Finally, the classification model was built via the support vector machine technique to distinguish breast tumors. Diagnostic performances of single modes, combination modes, and other fusion strategies were compared. Results: The single B-mode obtained 90.92% accuracy, while the color Doppler mode obtained 97.16% accuracy. The MI-CCA fusion revealed a significant improvement, with 98.80% accuracy. The results indicated that the fusion of two modes tended to offer a more accurate diagnosis than a single mode. In addition, the unsupervised-PCA was high (AUC 0.91, 95% CI [0.90, 0.91]) and no significant difference was observed with the unsupervised-CCA (AUC 0.90, 95% CI [0.84, 0.90]). The supervised-PCA was the lowest (AUC 0.93, 95% CI [0.91, 0.93]) and no significant difference was observed with the supervised-CCA (AUC 0.95, 95% CI [0.91, 0.94]). The proposed MI-CCA had the highest performance (AUC 0.99, 95% CI [0.93, 0.99]). These results indicated that the supervised strategies tended to give a more accurate diagnosis than the unsupervised strategies. Conclusion: By using the combination of ultrasound modes, this approach achieves high performance compared with the single mode and other fusion strategies. Our methodology may be a beneficial tool for the early detection and diagnosis of breast cancer.

Keywords: breast cancer diagnosis; canonical correlation analysis; mutual information

DOI: 10.11152/mu-2270

Introduction

Breast cancer is the leading cause of death for women. Early screening is key to reducing the death rate; however, early screening requires accurate and reliable tools. Computer-Aided Diagnosis (CAD) has been developed to help radiologists in the detection and diagnosis of breast cancer. In recent years, several studies have suggested that CAD systems can increase early cancer detection rates [1]. Ultrasound (US) has been used in screening as a supplementary tool, especially in women with dense breast tissue [2]. Most abnormal breast lesions are easy to find with conventional US, while some lesions remain hidden. Therefore, multiple US modes have been used to extract different information from lesions. For example, B-mode (Brightness) displays the acoustic impedance of a two-dimensional cross-section of tissue, while color Doppler mode displays blood flow, the motion of tissue over time, the location of blood, the presence of specific molecules, the stiffness of tissue, or the anatomy of a three-dimensional region.

In previous studies, single US modes have been improved. According to Ko et al [3], non-mass lesions (NMLs) were defined in four types; positive predictive values could be improved, but the differentiation of NMLs by B-mode remained ambiguous and required further exploration. After intensive research, the elastography mode became well-established for breast masses [4,5]. Guo et al [6] used contrast-enhanced ultrasound (CEUS) to depict the microcirculation of breast masses and provide qualitative and quantitative analysis for classifying breast lesions. These studies showed that the elastography mode could be helpful, but they noted that its interpretation remained imprecise. Color Doppler mode, used to supplement conventional US, showed high sensitivity, low angle dependency, and no aliasing [7]. Nevertheless, recent clinical research [8]

reported that the Doppler image alone was not able to significantly distinguish solid masses. Consequently, combining B-mode and color Doppler mode has been widely considered for improving diagnostic performance. Because B-mode is routinely examined together with color Doppler mode, the fusion of the two modes has been performed [9,10]. These studies reported that combining B-mode and color Doppler mode showed high accuracy and specificity in guiding the decision for biopsy of non-mass breast lesions. Rocher et al [11] evaluated the performance of fused B-mode, color Doppler, and shear-wave elastography (SWE) measurements; the result significantly (p <0.001) improved the characterization of testicular masses and could therefore avoid inappropriate total orchiectomy. Although previous studies demonstrated that the combination of US modes could improve overall accuracy, these investigations were not performed with a CAD system.

In addition, when the US examination is interpreted by an inexperienced radiologist, some pitfalls may appear due to human error. Lee et al [12] investigated the effect of CAD (S-Detect) on breast US when inexperienced radiologists described and categorized breast lesions, especially in comparison with experienced radiologists. They concluded that a CAD system can be more beneficial and educational for less experienced radiologists than for experienced radiologists, not only when describing lesions but also when determining whether a lesion is malignant. Thus, automatic breast lesion detection established with CAD could help radiologists in breast US. To our knowledge, few studies have automatically combined multiple US modes. Four state-of-the-art CAD methodologies for breast lesion detection have been introduced to improve diagnostic performance: Radial Gradient Index (RGI) Filtering [13] and Multifractal Filtering [14] are widely cited works in this area, while Rule-based Region Ranking [15] and Deformable Part Models [16] are two recent approaches. Although these state-of-the-art methods have advantages, they were not designed for fusing multiple input images.

When the fusion of datasets is required, different sources of information may be correlated or uncorrelated, so the fusion algorithm should account for the correlation and ensure a compatible model between the two datasets. In recent years, Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA), which linearly project two sets of random variables to a low-dimensional subspace and maximize correlation, have been developed to fuse heterogeneous datasets. However, these methods do not consider class separation compared to the individual modalities. To address this issue, supervised dimension reduction methods, supervised-PCA and supervised-CCA [17,18], have been introduced.

These studies reported that the supervised methods are able to fuse data from any number of modalities into a joint subspace that is robust to modality-specific noise. Motivated by the recent success of supervised-CCA, this study proposes the Mutual Information Canonical Correlation Analysis (MI-CCA) strategy, which extends supervised-CCA for fusing US modes. This strategy is expected to achieve higher predictive performance than a single US mode and other fusion strategies, both unsupervised and supervised, including unsupervised-PCA, unsupervised-CCA, supervised-PCA, and supervised-CCA.

Material and methods

Overview

This study aims to fuse B-mode and color Doppler modes for breast cancer diagnosis through (a) feature extraction from the two breast US modes (B-mode and color Doppler mode) using CNNs, (b) extension of CCA via Mutual Information (MI-CCA) for data fusion, and (c) building a classifier model to distinguish benign from malignant breast tumors. Figure 1 shows the overview of the method.

Data Acquisition and Data Description

The experimental dataset was provided by the Department of Radiology of Thammasat University and the Queen Sirikit Center of Breast Cancer of Thailand. The lesion images consist of 53 benign lesions and 202 malignant lesions (255 B-mode images and 255 color Doppler mode images). Figure 2 shows the STARD diagram of this approach. The patients' information has been removed from the images. All lesions were confirmed by biopsy, so it is absolutely clear whether each lesion was malignant or benign. In addition, the lesions were classified by three leading experts as malignant or benign; the consensus decision was obtained by the majority voting rule (two out of three). The images were obtained with a Philips iU22 US machine at resolutions ranging from 200×200 to 300×400 pixels, based on the criteria of the provider.

Feature extraction using Deep Convolutional Neural Networks (DCNNs)

The features were automatically extracted using computer vision methods. Following modern practice, a set of image features was extracted with Deep Convolutional Neural Networks (DCNNs), powerful models that achieve impressive results in image classification while avoiding the cost of hand-crafted feature extraction [19]. DCNNs have been successfully applied to large-scale image and video recognition in many studies [20-23]. Inspired by their success, this approach was used to extract the features from US images. During training, the image was passed through a stack of convolutional (conv.) layers. These stacked layers were composed of 5×5 filters to capture the notion of position, followed by a 1×1 convolution filter. Then, spatial pooling was carried out by max-pooling layers. The rectified linear unit (ReLU) activation function, a popular choice especially for deep networks, was used. Finally, the last layer is a softmax layer that classifies the target class. The results of the DCNN feature extraction are shown in Table I.
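As an illustration of the first convolution + max-pooling stage reported in Table I, the following numpy-only sketch reproduces the (32, 12, 12) first-layer output shape. This is not the authors' implementation: the 28×28 input patch size and the random filter weights are assumptions (trained weights and the exact input size are not given in the text).

```python
import numpy as np

def conv2d(img, kernels):
    """Valid 2-D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1], kernels.shape[2]
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((kernels.shape[0], oh, ow))
    for k, ker in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def relu(x):
    """Rectified linear unit activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2, stride=2):
    """2x2 max pooling with stride 2 over each feature map."""
    c, h, w = x.shape
    oh, ow = h // stride, w // stride
    out = np.zeros((c, oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[:, i, j] = x[:, i * stride:i * stride + size,
                             j * stride:j * stride + size].max(axis=(1, 2))
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))       # stand-in for a cropped US lesion patch
kernels = rng.standard_normal((32, 5, 5))   # 32 filters of 5x5, as in the first conv layer
features = max_pool(relu(conv2d(image, kernels)))
print(features.shape)  # (32, 12, 12), matching the first-layer output in Table I
```

A 28×28 input shrinks to 24×24 after a valid 5×5 convolution and to 12×12 after 2×2 pooling, which is how the (32,12,12) shape in Table I arises under this assumed input size.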

Canonical Correlation Analysis (CCA)

Features extracted from the feature selection were defined as $f(x_1), f(x_2), \dots, f(x_i);\ i \in \{1, 2, \dots, n\}$.

Fig 1. The overview of MI-CCA fusion for multiple ultrasound modes used for screening or diagnosis. (A) Features were extracted from breast ultrasound images. (B) The fusion was performed by MI-CCA. (C) From the fusion result, each lesion was finally categorized as benign or malignant.

Fig 2. The STARD diagram of combination of B-mode and color Doppler mode.

Table I. The results of the DCNNs feature extraction

Layer  Layer Type              Size              Output Features
1      Convolution + ReLU      32 5×5 filters
1      Max Pooling             2×2, stride 2     (32,12,12)
2      Convolution + ReLU      48 5×5 filters
2      Max Pooling             2×2, stride 2     (48,4,4)
3      Convolution + ReLU      48 5×5 filters
3      Max Pooling             2×2, stride 2     (64,1,1)
4      Fully Connected + ReLU  121 hidden units  121
4      Softmax                 121 ways          121


Given the first and the second datasets defined as $\{x_i^1;\ i \in \{1,2,\dots,n\}\}$ and $\{x_i^2;\ i \in \{1,2,\dots,n\}\}$, the feature matrices were defined as:

$$X_1 = [x_1^1, x_2^1, \dots, x_n^1], \qquad X_2 = [x_1^2, x_2^2, \dots, x_n^2].$$

CCA can fuse more than two datasets based on cross-correlation. Although the correlation of more than two datasets is not easy to examine directly, the subspace that maximizes the correlations of each pair in sequence has been approximated instead [24]. Given n data samples comprising $n = n_1 + n_2 + \dots + n_M$ features $\{X_1, X_2, \dots, X_M\}$ from M datasets, this implementation of pairwise CCA seeks a set of linear transformations $w_1 \in R^{n_1 \times 1}, w_2 \in R^{n_2 \times 1}, \dots, w_M \in R^{n_M \times 1}$ such that the sum of the correlations across all pairs of modalities is maximized:

$$\rho = \arg\max_{w_1, w_2, \dots, w_M} \sum_{m<t} \frac{w_m^T C_{mt} w_t}{\sqrt{(w_m^T C_{mm} w_m)(w_t^T C_{tt} w_t)}} \quad (1)$$

where $C_{mm} \in R^{n_m \times n_m}$ and $C_{tt} \in R^{n_t \times n_t}$ are the covariance matrices of $X_m$ and $X_t$, respectively, and $C_{mt} \in R^{n_m \times n_t}$ is their cross-covariance matrix.

The CCA weights $W_i$ for all modalities are maximized based on $C_{mt}$:

$$\arg\max_{W_{x_1}, \dots, W_{x_M}} \mathrm{trace}\left(W_x^T C_{mt} W_x\right) \quad \text{s.t. } W_{x_1}^T C_{x_1 x_1} W_{x_1} = \dots = W_{x_M}^T C_{x_M x_M} W_{x_M} = I$$

where $I$ is an $n \times n$ identity matrix and the weight matrix is defined as $W = [w_1, w_2, \dots, w_M] \in R^{(n_1 + n_2 + \dots + n_M) \times p}$.

Extension of CCA via Mutual Information

While CCA methods are able to account for 2 views, when used for classification, these representations do not consider class separation compared to its individual modalities. Therefore, supervised dimension reduction methods sPCA and sCCA were introduced. This study presents MI-CCA, which considers the labeled data to improve classification performance compared to super- vised dimension reduction methods. The MI between random variables X and Y can estimate the under prob- ability distribution from the posterior knowledge of the pointwise mutual information H(X, Y). If X given Y are the evens, then the true frequencies of all combinations of (X;Y) pairs can be estimated by counting the number of times each pair occurs in the data. The mutual infor- mation scores were computed using the equation, shown as:

$$H(X;Y) = \sum_{y \in Y} \sum_{x \in X} p(x, y) \log \frac{p(x, y)}{p(x)\,p(y)} \quad (2)$$

where $p(x, y)$ is the joint probability density function of X and Y, and $p(x)$ and $p(y)$ are the marginal probability density functions of X and Y, respectively. If X and Y are independent, then knowing X does not give any information about Y, and their mutual information is zero. Following this concept, with parametric distributions over the features and the target class, it is convenient to revise equation (2) as:

$$H(f(X);Y) = \sum_{Y} \sum_{f(X)} p(f(X), Y) \log \frac{p(f(X), Y)}{p(f(X))\,p(Y)} \quad (3)$$

where $f(\cdot)$ is the final output of the DCNN, and Y is the possible target class. The mutual information scores $H_1(f(x_1);Y), H_2(f(x_2);Y), \dots, H_i(f(x_i);Y)$ were computed from equation (3). Then, the features $\hat f$ whose scores exceed the mean score are selected for the CCA task:

$$\hat f(X^1) = \left\{ f(x_i^1) \mid H_i\left(f(x_i^1);Y\right) > \mathrm{mean}\,H^1 \right\} \quad \text{and} \quad \hat f(X^2) = \left\{ f(x_i^2) \mid H_i\left(f(x_i^2);Y\right) > \mathrm{mean}\,H^2 \right\}$$

where superscripts 1 and 2 denote the first and the second dataset, respectively. The objective function can be formulated by modifying equation (1) from standard CCA to MI-CCA:

$$\hat\rho = \arg\max_{w_1, w_2} \frac{w_1^T \hat C_{12} w_2}{\sqrt{(w_1^T \hat C_{11} w_1)(w_2^T \hat C_{22} w_2)}} \quad (4)$$

where the covariance matrices $C_{ij}$ were replaced by $\hat C_{11}$, $\hat C_{22}$, and $\hat C_{12}$, calculated from $\hat f(X^1)$ and $\hat f(X^2)$.

Data fusion in the context of MI-CCA

Two modalities can be represented in the fusion space. Given n embedding components $U_i^1,\ i \in \{1,2,\dots,n\}$, expressed via $U_i^1 = W_i^T X_1$, and $V_i^1,\ i \in \{1,2,\dots,n\}$, expressed via $V_i^1 = W_i^T X_2$, the embedding components $U_i^1$ and $V_i^1$ are included in the fusion space based on the k largest values of the variance ratio. The fusion space was written as:

$$S = \mathrm{concatenate}\left(U_i^1, V_i^1\right) \quad (5)$$

where i indexes the embedding components corresponding to the top k largest explained-variance scores. The coordinates in the fusion space are mostly used for visualizing the prediction model instead of the original variables. After the fusion step, the classification task was performed to classify the breast lesions. Table II shows the proposed algorithm of MI-CCA.
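Putting the steps together, the following numpy sketch illustrates the MI-CCA pipeline on synthetic data: MI-based feature selection against the class label, CCA projection of the two selected views, concatenation into the fusion space of equation (5), and classification. It is a sketch under stated assumptions, not the authors' implementation: the synthetic features, the median-split MI estimator, and the nearest-centroid classifier (standing in for the paper's SVM) are all illustrative substitutions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
y = rng.integers(0, 2, n)                       # synthetic benign/malignant labels
# Stand-ins for DCNN features of the two modes: informative columns plus pure noise.
F1 = np.column_stack([y + 0.3 * rng.standard_normal(n) for _ in range(3)]
                     + [rng.standard_normal(n) for _ in range(3)])
F2 = np.column_stack([y + 0.3 * rng.standard_normal(n) for _ in range(2)]
                     + [rng.standard_normal(n) for _ in range(3)])

def mi_score(col, y):
    """MI between a median-binarized feature and the binary class label (eq. 2)."""
    x = (col > np.median(col)).astype(int)
    mi = 0.0
    for xv in (0, 1):
        for yv in (0, 1):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def mi_select(F, y):
    """Keep only the features whose MI score exceeds the mean score."""
    s = np.array([mi_score(F[:, j], y) for j in range(F.shape[1])])
    return F[:, s > s.mean()]

def cca_project(X1, X2, reg=1e-6):
    """Project both views onto their top canonical pair."""
    X1, X2 = X1 - X1.mean(0), X2 - X2.mean(0)
    m = X1.shape[0]
    C11 = X1.T @ X1 / m + reg * np.eye(X1.shape[1])
    C22 = X2.T @ X2 / m + reg * np.eye(X2.shape[1])
    C12 = X1.T @ X2 / m
    M = np.linalg.solve(C11, C12) @ np.linalg.solve(C22, C12.T)
    vals, vecs = np.linalg.eig(M)
    w1 = vecs[:, np.argmax(vals.real)].real
    w2 = np.linalg.solve(C22, C12.T @ w1)
    w2 /= np.linalg.norm(w2)
    return X1 @ w1, X2 @ w2

U, V = cca_project(mi_select(F1, y), mi_select(F2, y))
S = np.column_stack([U, V])                     # fusion space, eq. (5)
# Nearest-centroid stand-in for the paper's SVM classifier:
c0, c1 = S[y == 0].mean(0), S[y == 1].mean(0)
pred = (np.linalg.norm(S - c1, axis=1) < np.linalg.norm(S - c0, axis=1)).astype(int)
print(np.mean(pred == y))                       # high training accuracy on this toy data
```

Because the MI step discards the noise columns before CCA, the canonical directions are estimated from label-relevant features only, which is the design intent behind extending supervised-CCA with mutual information.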

Comparative Data Fusion Strategies

While dimension reduction methods such as CCA or PCA are able to fuse two views, when used for classification these representations do not consider class separation compared to the individual modalities. Therefore, supervised dimension reduction methods, sPCA and sCCA, are introduced. This study aims to develop a fusion algorithm with higher predictive performance compared to other unsupervised and supervised strategies, including unsupervised PCA, unsupervised CCA, supervised PCA, and supervised CCA.

Statistical analysis

Experiment 1: Exploration of correlation analysis via Pearson correlation

Because the objective of the data fusion method is the strongest correlation between the two datasets, the Pearson correlation was used to measure the strength of the linear relationships between variables, both to confirm our contribution and to compare the strategies. When the correlation coefficient is close to 1 or −1, the correlation is strongest; when it is close to 0, the correlation is weak. The calculation formula is as follows:

$$\rho(X, Y) = \frac{\mathrm{cov}(X, Y)}{\sigma_X \sigma_Y} \quad (6)$$

where cov is the covariance, σX is the standard deviation of X, and σY is the standard deviation of Y.
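Equation (6) is straightforward to verify numerically; the following minimal sketch (illustrative, not from the paper) checks the two extreme cases described above.

```python
import numpy as np

def pearson(x, y):
    """Equation (6): cov(X, Y) / (sigma_X * sigma_Y), via centered dot products."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (np.sqrt(xc @ xc) * np.sqrt(yc @ yc))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(pearson(x, 2 * x + 1))   # 1.0: perfect positive linear relationship
print(pearson(x, -x))          # -1.0: perfect inverse relationship
```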

Experiment 2: Comparing the fusion of B-mode and color Doppler modes vs. single US mode

Confusion matrices were used to evaluate the performance compared with the single B-mode and color Doppler mode. These matrices yielded the sensitivity (true positive rate), specificity (true negative rate), and accuracy of the models. The predictive formulas were defined as:

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN} \quad (7), \qquad \mathrm{Specificity} = \frac{TN}{TN + FP} \quad (8), \qquad \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (9)$$
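Equations (7)-(9) read directly off the confusion matrix. A small sketch with hypothetical counts follows; the paper does not report its confusion matrix, so the counts below are made-up numbers for an illustrative 255-lesion set.

```python
def confusion_metrics(tp, tn, fp, fn):
    """Sensitivity (7), specificity (8), and accuracy (9) from confusion counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts (not the paper's actual matrix): 202 malignant, 53 benign.
se, sp, ac = confusion_metrics(tp=198, tn=50, fp=3, fn=4)
print(round(se, 3), round(sp, 3), round(ac, 3))
```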

Experiment 3: Comparing MI-CCA fusion vs. other fusion methods

The comparison used the area under the receiver operating characteristic (ROC) curve (AUC). The ROC is a probability curve, and the AUC represents the degree of separation: it shows how capable the model is of distinguishing between classes. A higher AUC indicates a better model for distinguishing between patients with benign and malignant lesions. The ROC curve plots the TPR against the FPR, with the TPR on the y-axis and the FPR on the x-axis:

$$TPR = \frac{TP}{TP + FN} \quad (10), \qquad FPR = 1 - \mathrm{Specificity} \quad (11)$$

where TP is true positive, TN is true negative, FP is false positive, and FN is false negative.
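The AUC can be computed without plotting the full ROC curve, via the rank-sum identity AUC = P(score of a random malignant case > score of a random benign case). A small numpy sketch follows (illustrative only; the scores and labels are made up).

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum identity: P(score_pos > score_neg), ties counted half."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9])   # toy classifier outputs
print(roc_auc(scores, labels))   # 1.0: every malignant score exceeds every benign score
```

Equivalently, this is the area under the curve traced by the (FPR, TPR) pairs of equations (10) and (11) as the decision threshold sweeps the score range.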

Results

Experiment 1: Exploration of correlation analysis via Pearson correlation

Fig 3a shows the comparisons of the Pearson correlation of the unsupervised and supervised strategies. First, among the unsupervised strategies (fig 3b, 3c), unsupervised PCA had a lower correlation (0.30) than unsupervised CCA (0.90). Second, among the supervised strategies, supervised PCA (fig 3d) had the lowest correlation (0.08). Supervised CCA (fig 3e) had an inverse correlation (−0.80), while the proposed MI-CCA (fig 3f) had a lower correlation than supervised CCA. These results indicate that the unsupervised strategies seem to preserve the information shared between the two datasets better than the supervised strategies: the unsupervised strategies are performed only to maximize variable correlation, while the supervised strategies maximize not only variable correlation but also class-label separation.

Experiment 2: Comparing the fusion of B-mode and color Doppler modes vs. single US mode

Table III shows the model performance in terms of sensitivity, specificity, and accuracy. The results indicated that the fusion of the two modes tended to achieve a high diagnostic accuracy.

Table II. The proposed algorithm of MI-CCA

MI-CCA:
  Input: $f(x^1)$ = input feature 1, $f(x^2)$ = input feature 2
  Compute MI scores: $H(f(X);Y) = \sum_Y \sum_{f(X)} p(f(X), Y) \log \frac{p(f(X), Y)}{p(f(X))\,p(Y)}$
  Select features: $\hat f(X^1) = \{ f(x_i^1) \mid H_i(f(x_i^1);Y) > \mathrm{mean}\,H^1 \}$
                   $\hat f(X^2) = \{ f(x_i^2) \mid H_i(f(x_i^2);Y) > \mathrm{mean}\,H^2 \}$
  CCA: $\hat\rho = \arg\max_{w_1, w_2} \frac{w_1^T \hat C_{12} w_2}{\sqrt{(w_1^T \hat C_{11} w_1)(w_2^T \hat C_{22} w_2)}}$
Canonical space:
  $U_i^1 = W_i^T X_1$, $V_i^1 = W_i^T X_2$, $i \in \{1, 2, \dots, n\}$
  $S = \mathrm{concatenate}(U_i^1, V_i^1)$
Classification:
  Output = SVM(S)

Table III. Comparing the fusion of multiple ultrasound modes vs. single ultrasound mode

Mode                 Se%     Sp%     Ac%
B-mode               92.11   86.92   90.92
color Doppler mode   98.12   94.23   97.61
MI-CCA Fusion        98.66   96.15   98.80

Se, sensitivity; Sp, specificity; Ac, accuracy


Experiment 3: Comparing MI-CCA fusion vs. other fusion methods

Fig 4 shows the comparisons of breast tumor classification accuracy of the unsupervised and supervised strategies. First (fig 4a), unsupervised PCA was high (AUC 0.91, 95% CI [0.90, 0.91]) and no significant difference was observed with unsupervised CCA (AUC 0.90, 95% CI [0.84, 0.90]). Second (fig 4b), supervised PCA was the lowest (AUC 0.93, 95% CI [0.91, 0.93]) and no significant difference was observed with supervised CCA (AUC 0.95, 95% CI [0.91, 0.94]). The mutual information scores above the mean were selected (fig 5) for MI-CCA. The proposed MI-CCA had the highest performance (AUC 0.99, 95% CI [0.93, 0.99]). MI-CCA uses high mutual information between the variables and the class labels; therefore, the variables tend to be more compatible with the class labels than in other supervised strategies. These results indicated that the supervised strategies tended to give a more accurate diagnosis than the unsupervised strategies. Figure 2 shows the final diagnosis in the STARD diagram of this approach.

Fig 3. (A) Pearson correlation was used to evaluate the correlation of the unsupervised and supervised strategies. (B) Unsupervised-PCA had a lower correlation (correlation = 0.30). (C) Unsupervised-CCA had a higher correlation (0.90) than unsupervised-PCA. (D) Supervised-PCA had the lowest correlation (0.08). (E) Supervised-CCA had an inverse correlation (−0.80), while the proposed MI-CCA (F) had a lower correlation than supervised-CCA.

Fig 4. The AUC was evaluated and compared between the unsupervised and supervised strategies. (A) Unsupervised-PCA had a higher AUC than unsupervised-CCA. (B) Supervised-PCA had the lowest AUC and no significant difference was observed with supervised-CCA, while the proposed MI-CCA had the highest performance.

Fig 5. The mutual information scores were calculated and selected.

Discussion

The popular and effective technique for breast cancer screening is digital mammography. However, it has some limitations; for instance, lesions in dense breasts are hidden by the surrounding tissue. Therefore, breast US has been used as an alternative tool to complement breast cancer detection owing to its availability, lack of radiation, and high sensitivity [25]. Most abnormal breast lesions are easy to find with conventional US, while some cases remain hidden. Therefore, multiple US modes have been performed to extract different information from the lesions.

Additional US techniques have recently been suggested and applied in practice to improve diagnostic accuracy. Accordingly, CAD has been developed to provide efficient interpretation or a second opinion for a lesion detected on breast US [12]. For example, the US CAD system developed by Samsung Medison, Co, Ltd, Seoul, Korea provides additional morphologic analysis of breast masses according to the BI-RADS lexicon and assists in the final assessment of the detected masses [26]. According to previous studies using CAD systems, specificity can be improved for the diagnosis of malignant breast masses, assisting the radiologists [27-29]. However, previous studies considered only a single mode. To extend them, the multiple US modes were fused and a computerized algorithm was applied to improve diagnostic performance through feature extraction, data fusion, and classification. The result showed an increase in sensitivity, specificity, and accuracy.

Regarding the classification performance, the single color Doppler mode was more accurate than the single B-mode because color Doppler mode can display blood flow, the motion of tissue over time, the location of blood, the presence of specific molecules, the stiffness of tissue, or the anatomy of a three-dimensional region. This information can be beneficial for improving diagnostic performance, whereas the fusion of B-mode and color Doppler mode achieves the highest performance, the two modes complementing each other's information. In addition, our approach agrees with previous work suggesting that missed lesions may be reduced by methods such as exploring image correlation or integrating double reading. Therefore, not only high accuracy but also maximal correlation between the fused datasets is important. Foster et al [30] noted that datasets X and Y will contain similar information when there is maximal correlation. CCA aims to explore the relationship between different views or a variety of datasets, and many learning problems have applied this technique with great performance.

In practice, data fusion should meet two requirements to be advantageous [31]. First, the final layer should be accurate. Second, the fused layers should have a strong relationship among views. Our results differ from Andrew et al [32], who reported maximizing the correlation of the datasets: although the proposed MI-CCA had a lower correlation, the mutual information was helpful for maximizing the accuracy. In addition, early detection and diagnosis of breast cancer are critical for survival [33-35]. Early diagnosis requires an accurate and reliable tool to distinguish benign from malignant tumors. The major cancer screening problems are false negatives, which cause patients to lose the chance of early treatment, and false positives, which lead to unnecessary procedures such as biopsy. Our experiments reduce both the false positives and the false negatives; furthermore, the overall accuracy is better than that of previous works.

Some limitations of our study should be considered. First, other information, such as patient demographics and health history, was not included. Second, other significant tumor characteristics, such as dense or fatty breasts, were not considered. Finally, larger datasets should be included in future work. In addition, as in clinicians' decisions, other medical evidence should be combined for diagnosis.

Conclusion

This study presents the combination of B-mode and color Doppler modes for improving diagnostic performance using MI-CCA. Our methodology achieves high performance compared with single modes and other fusion strategies when applied to breast US to classify breast tumors.

Acknowledgement: The dataset was contributed by the Department of Radiology of Thammasat University and the Queen Sirikit Center of Breast Cancer of Thailand [36] and is available for download at [37].

Conflict of interest: none


References

1. Freer TW, Ulissey MJ. Screening mammography with computer-aided detection: prospective study of 12,860 patients in a community breast center. Radiology 2001;220:781-786.

2. Berg WA, Blume JD, Cormack JB, et al. Combined screening with ultrasound and mammography vs mammography alone in women at elevated risk of breast cancer. JAMA 2008;299:2151-2163.

3. Ko KH, Hsu HH, Yu JC, et al. Non-mass-like breast lesions at ultrasonography: Feature analysis and BI-RADS assessment. Eur J Radiol 2015;84:77-85.

4. Zhi H, Ou B, Xiao XY, et al. Ultrasound Elastography of Breast Lesions in Chinese Women: A Multicenter Study in China. Clin Breast Cancer 2015;13:392-400.

5. Parajuly S, Lan P, Yan L, Gang YZ, Lin L. Breast elastography: a hospital-based preliminary study in China. Asian Pac J Cancer Prev 2010;11:809-814.

6. Guo R, Lu G, Qin B, Fei B. Ultrasound Imaging Technologies for Breast Cancer Detection and Management: A Review. Ultrasound Med Biol 2018;44:37-70.

7. Yamakoshi Y, Nakajima T, Kasahara T, Yamazaki M, Koda R, Sunaguchi N. Shear Wave Imaging of Breast Tissue by Color Doppler Shear Wave Elastography. IEEE Trans Ultrason Ferroelectr Freq Control 2017;64:340-348.

8. Davoudi Y, Borhani B, Rad MP, Matin N. The Role of Doppler Sonography in Distinguishing Malignant from Benign Breast Lesions. J Med Ultrasound 2014;22:92-95.

9. Cho N, Jang M, Lyou CY, Park JS, Choi HY, Moon WK. Distinguishing Benign from Malignant Masses at Breast US: Combined US Elastography and Color Doppler US-Influence on Radiologist Accuracy. Radiology 2012;262:80-90.

10. Özdemir A, Özdemir H, Maral I, Konuş O, Yücel S, Işik S. Differential diagnosis of solid breast lesions: contribution of Doppler studies to mammography and gray scale imaging. J Ultrasound Med 2001;20:1091-1101.

11. Rocher L, Criton A, Gennisson JL, et al. Characterization of Testicular Masses in Adults: Performance of Combined Quantitative Shear Wave Elastography and Conventional Ultrasound. Ultrasound Med Biol 2019;45:720-731.

12. Lee J, Kim S, Kang BJ, Kim SH, Park GE. Evaluation of the effect of computer aided diagnosis system on breast ultrasound for inexperienced radiologists in describing and determining breast lesions. Med Ultrason 2019;21:239-245.

13. Drukker K, Giger ML, Mendelson EB. Computerized analysis of shadowing on breast ultrasound for improved lesion detection. Med Phys 2003;30:1833-1842.

14. Yap MH, Edirisinghe EA, Bez HE. A novel algorithm for initial lesion detection in ultrasound breast images. J Appl Clin Med Phys 2008;9:2741.

15. Shan J, Cheng H, Wang Y. Completely automated segmentation approach for breast ultrasound images using multiple-domain features. Ultrasound Med Biol 2012;38:262-275.

16. Pons G, Martí R, Ganau S, Sentis M, Marti J. Feasibility Study of Lesion Detection Using Deformable Part Models in Breast Ultrasound Images. In: Sanches JM, Micó L, Cardoso JS (eds). Pattern Recognition and Image Analysis. Lecture Notes in Computer Science, vol. 7887. Springer, Berlin, Heidelberg 2013:269-276.

17. Lee G, Singanamalli A, Wang H, et al. Supervised Multi-View Canonical Correlation Analysis (sMVCCA): Integrating Histologic and Proteomic Features for Predicting Recurrent Prostate Cancer. IEEE Trans Med Imaging 2015;34:284-297.

18. Singanamalli A, Wang H, Lee G, et al. Supervised multi-view canonical correlation analysis: fused multimodal prediction of disease diagnosis and prognosis. Biomedical Applications in Molecular, Structural, and Functional Imaging. Proceedings of SPIE 9038, 2014. doi:10.1117/12.2043762.

19. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521:436-444.

20. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60-88.

21. de la Cruz GV Jr, Du Y, Taylor ME. Pre-training Neural Networks with Human Demonstrations for Deep Reinforcement Learning. 2017; arXiv e-print (arXiv:1709.04083).

22. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Communications of the ACM 2017;60:84-90.

23. Sermanet P, Eigen D, Zhang X, et al. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks. Computer Vision and Pattern Recognition, 2014. arXiv:1312.6229v4.

24. Kettenring JR. Canonical analysis of several sets of variables. Biometrika 1971;58:433-451.

25. Stavros AT, Thickman D, Rapp CL, Dennis MA, Parker SH, Sisney GA. Solid breast nodules: use of sonography to distinguish between benign and malignant lesions. Radiology 1995;196:123-134.

26. Kim K, Song MK, Kim EK, Yoon JH. Clinical application of S-Detect to breast masses on ultrasonography: a study evaluating the diagnostic performance and agreement with a dedicated breast radiologist. Ultrasonography 2017;36:3-9.

27. Choi JH, Kang BJ, Baek JE, Lee HS, Kim SH. Application of computer-aided diagnosis in breast ultrasound interpretation: improvements in diagnostic performance according to reader experience. Ultrasonography 2018;37:217-225.

28. Cho E, Kim EK, Song MK, Yoon JH. Application of Computer-Aided Diagnosis on Breast Ultrasonography: Evaluation of Diagnostic Performances and Agreement of Radiologists According to Different Levels of Experience. J Ultrasound Med 2018;37:209-216.

29. Di Segni M, de Soccio V, Cantisani V, et al. Automated classification of focal breast lesions according to S-detect: validation and role as a clinical and teaching tool. J Ultrasound 2018;21:105-118.

30. Foster DP, Kakade SM, Zhang T. Multi-view dimensionality reduction via canonical correlation analysis. Technical Report TR-2008-4. TTI-Chicago 2008.


31. Zhao J, Xie X, Xu X, Sun S. Multi-view learning overview: Recent progress and new challenges. Information Fusion 2017;38:43-54.

32. Andrew G, Arora R, Bilmes J, Livescu K. Deep Canonical Correlation Analysis. Proceedings of the 30th International Conference on Machine Learning. PMLR 2013;28:1247-1255.

33. Weedon-Fekjaer H, Romundstad PR, Vatten LJ. Modern mammography screening and breast cancer mortality: population study. BMJ 2014;348:g3701.

34. Jacobs MA, Wolff AC, Macura KJ, et al. Multiparametric and Multimodality Functional Radiological Imaging for Breast Cancer Diagnosis and Early Treatment Response Assessment. J Natl Cancer Inst Monogr 2015;2015:40-46.

35. Massat NJ, Dibden A, Parmar D, Cuzick J, Sasieni PD, Duffy SW. Impact of Screening on Breast Cancer Mortality: The UK Program 20 Years On. Cancer Epidemiol Biomarkers Prev 2016;25:455-462.

36. Rodtook A, Kirimasthong K, Lohitvisate W, Makhanov SS. Automatic initialization of active contours and level set method in ultrasound images of breast abnormalities. Pattern Recognition 2018;79:172-182.

37. Medical Images Home. 2019. Accessed 10 December 2019. Available at: http://www.onlinemedicalimages.com/
