The missing value estimation by BPCM

    2019-09-23


    2. The missing value estimation by BPCM always outperforms the other two algorithms investigated, which demonstrates the effectiveness of the proposed approach;
    3. The proposed approach can achieve a relatively high accuracy and sensitivity in cervical cancer screening.
    6. Conclusion
    Cervical cancer is a high-death-rate disease threatening women's health. Computer-aided diagnosis systems that can detect this disease early without the help of experienced doctors are in urgent demand, especially in developing countries. However, due to privacy concerns and noise in data collection, the risk factors obtained from questionnaires usually contain a high level of missing entries and uncertainty, which makes accurate and robust diagnosis difficult. To solve this problem, an algorithm based on fuzzy clustering is proposed in this paper, which can handle the severe uncertainty and missing entries in the collected data. A new kind of fuzzy clustering algorithm, the BPCM, is proposed to extract representative patterns from the limited complete data for missing-attribute imputation. Then, a fuzzy ensemble learning scheme is designed to learn the inherent rules between the imputed risk factors and the class label (positive or negative). Experimental results on a dataset of 858 patients have shown the effectiveness of the proposed solution: it achieves an accuracy of 76% and a sensitivity of 79% in cervical cancer screening, outperforming many existing algorithms.
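    To make the described two-stage pipeline concrete, here is a minimal sketch with off-the-shelf stand-ins (not the authors' implementation): scikit-learn's KNNImputer takes the place of the BPCM prototype-based imputation, a RandomForestClassifier takes the place of the fuzzy ensemble learner, and the data is synthetic.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((858, 30))              # 858 patients, 30 risk factors (synthetic)
X[rng.random(X.shape) < 0.2] = np.nan  # questionnaire-style missing entries
y = rng.integers(0, 2, size=858)       # positive/negative screening label (synthetic)

screening = make_pipeline(
    KNNImputer(n_neighbors=5),                 # stand-in for BPCM-based imputation
    RandomForestClassifier(n_estimators=100),  # stand-in for the fuzzy ensemble
)
print(cross_val_score(screening, X, y, cv=5, scoring="accuracy").mean())
```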
    Declaration of Competing Interest
    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
    Acknowledgments
    Appendix A. A Review of Classic Clustering Algorithms
    The ordinary model-based clustering algorithms minimize a loss function of the form:

$$ \mathcal{L}(U, C) = \sum_{i=1}^{N_S} \sum_{j=1}^{N_C} u_{i,j}\, d(x_i, c_j) \tag{A.1} $$

    where $U = [u_{i,j}]$ is the membership matrix, $C = \{c_j\}_{j=1}^{N_C}$ is the set of cluster centroids, $N_S$ and $N_C$ are the numbers of samples and clusters, and $d(x_i, c_j)$ is the distance between sample $x_i$ and centroid $c_j$.
    The joint optimization of U and C is non-convex, so a global optimum is unattainable in general; thus a two-step iterative optimization is used, in which U and C are optimized alternately.
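    To make this two-step scheme concrete, below is a minimal NumPy sketch of the alternating updates, using the standard FCM parameterization (fuzzifier m > 1) rather than the paper's BPCM; all names here are illustrative.

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Two-step alternating optimization, illustrated with standard FCM updates."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling distinct data points.
    C = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Step 1: with C fixed, update memberships U in closed form:
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2 / (m - 1)).
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        # Step 2: with U fixed, update centroids as membership-weighted means:
        # c_j = sum_i u_ij^m x_i / sum_i u_ij^m.
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, C

# Example: two well-separated Gaussian blobs.
X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5.0])
U, C = fcm(X, n_clusters=2)
```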
    The essential difference among the mainstream model-based clustering algorithms lies in the constraint exerted upon U, which can be summarized in Table A.6:
    Table A.6: Constraints on U exerted by some classic clustering algorithms.

    Algorithm    Constraint
    K-means      $u_{i,j} \in \{0, 1\}$,  $\sum_{j=1}^{N_C} u_{i,j} = 1$
    FCM          $u_{i,j} \in [0, 1]$,  $\sum_{j=1}^{N_C} u_{i,j}^{p} = 1$
    PCM          $u_{i,j} \in [0, 1]$,  $\sum_{i=1}^{N_S} (1 - u_{i,j})^{p} = \text{const}$
    Though these forms of constraint were not explicitly stated in the original proposals of the corresponding algorithms, they can be obtained straightforwardly by renaming the parameters or by treating the regularizer as a Lagrange-multiplier term that reflects the extra conditions.
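    As a concrete instance of such renaming (a short derivation added here for illustration; it is not spelled out in the paper), substituting $\tilde{u}_{i,j} = u_{i,j}^{m}$ in the standard FCM program moves the fuzzifier from the objective of (A.1) into the constraint, with $p = 1/m \in (0, 1)$:

```latex
% Standard FCM (fuzzifier m > 1):
\min_{U, C} \sum_{i=1}^{N_S} \sum_{j=1}^{N_C} u_{i,j}^{m}\, d(x_i, c_j)
\quad \text{s.t.} \quad \sum_{j=1}^{N_C} u_{i,j} = 1
% Renamed form (substitute \tilde{u}_{i,j} = u_{i,j}^{m}, i.e. p = 1/m):
\;\;\Longleftrightarrow\;\;
\min_{\tilde{U}, C} \sum_{i=1}^{N_S} \sum_{j=1}^{N_C} \tilde{u}_{i,j}\, d(x_i, c_j)
\quad \text{s.t.} \quad \sum_{j=1}^{N_C} \tilde{u}_{i,j}^{\,p} = 1 .
```

    This renaming also shows why $p \le 1$ in the FCM row of Table A.6, which is what makes the bound below hold.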
    The robustness of FCM against noise derives from the constraint $\sum_{j=1}^{N_C} u_{i,j}^{p} = 1$, which reduces the attraction of ambiguous points to the centroids, as illustrated in Fig. A.11. For an outlier sample $i$ whose distances towards two centroids are similar, the total impact of this sample, $u_{i,1} + u_{i,2}$, equals $2^{1 - 1/p} \le 1$ (since $0 < p \le 1$). Hence, such a sample contributes little to the update of the centroids.
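    A quick numerical check of this bound (added here for illustration; the two-centroid setup and the chosen values of p are arbitrary):

```python
import numpy as np

# For a sample equidistant from two centroids, the constraint
# u1**p + u2**p == 1 forces u1 == u2 == 2**(-1/p), so the total
# impact u1 + u2 == 2**(1 - 1/p), which stays <= 1 whenever p <= 1.
for p in (0.25, 0.5, 1.0):
    u = 2.0 ** (-1.0 / p)                 # symmetric optimum under the constraint
    assert abs(2 * u ** p - 1.0) < 1e-12  # the constraint is satisfied
    print(f"p = {p}: u = {u:.4f}, total impact = {2 * u:.4f}")
```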
    But this setting makes FCM perform poorly when the clusters overlap severely. Relaxing the constraint to $\forall i, j: u_{i,j} \in [0, 1]$ would provide the greatest expressive power; however, since every membership multiplies a non-negative distance in (A.1), this yields the trivial optimum $U = 0$. This reasoning gave birth to the constraint exerted by PCM.
    However, PCM suffers from the arbitrariness of its constraint form and from over-sensitivity to initialization, and there have been various studies on this topic [1,19,29]. From the perspective of statistical learning, this pathology is a form of overfitting, which can be compensated for by a Bayesian approach. Model-based clustering algorithms are adopted here instead of density-based [23] or kernel-based [34] ones because only the model-based approaches can yield a fuzzy result, which is better suited to handling uncertainty.
    Appendix B. Derivation of the Gradient of (5)