Ross Fadely, David W. Hogg, Beth Willman
Ground-based optical surveys such as Pan-STARRS, DES, and LSST will produce large catalogs to limiting magnitudes of r > 24. Star-galaxy separation will pose a major challenge to such surveys because galaxies---even very compact galaxies---outnumber halo stars at these depths. Here we investigate photometric classification techniques for stars and galaxies with intrinsic FWHM < 0.2 arcsec. We consider unsupervised SED template fitting and supervised, data-driven Support Vector Machines (SVM). For template fitting, we use a Maximum Likelihood (ML) method and a new Hierarchical Bayesian (HB) method, in which we learn the prior distribution of template probabilities by optimizing the likelihood for the entire dataset. SVM requires training data to classify unknown sources; ML and HB do not. We consider both (i) a best-case scenario (SVM_best) in which the training data are (unrealistically) a random sampling of the data in both signal-to-noise and demographics, and (ii) a more realistic scenario in which the SVM is trained on higher signal-to-noise data (SVM_real) at brighter apparent magnitudes. Testing with COSMOS ugriz data, we find that HB outperforms ML, delivering ~80% completeness in both star and galaxy samples, with purity of ~40-90% and ~70-90% for stars and galaxies, respectively. We find that no algorithm delivers perfect performance, and that studies of M-giant and metal-poor main-sequence turnoff stars may be most affected by poor star-galaxy separation. We measure the area under the ROC curve to assess the relative performance of the approaches and find a best-to-worst ranking of SVM_best, HB, ML, and SVM_real. We conclude, therefore, that a well trained SVM will outperform template-fitting methods. However, a realistically trained SVM performs worse than either template-fitting method. Thus, Hierarchical Bayesian template fitting may prove to be the optimal method for source classification in future surveys.
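The Hierarchical Bayesian idea described in the abstract, learning a prior over template probabilities by maximizing the likelihood of the whole dataset, can be illustrated with a toy mixture-weight fit. Below is a minimal sketch in pure Python, not the paper's actual implementation: it assumes a precomputed per-source, per-template likelihood matrix `like` (hypothetical data; in the paper these would come from fitting star and galaxy SED templates to ugriz photometry) and uses expectation-maximization, one standard way to optimize such mixture weights.

```python
def learn_template_prior(like, n_iter=200):
    """Learn prior weights w over templates by maximizing the total
    data likelihood sum_i log(sum_t w[t] * like[i][t]) via EM.

    like: list of rows, one per source; like[i][t] is the likelihood
    of source i under template t (a hypothetical, precomputed matrix).
    Returns a list of weights summing to 1.
    """
    n, t = len(like), len(like[0])
    w = [1.0 / t] * t  # start from a flat prior over templates
    for _ in range(n_iter):
        new = [0.0] * t
        for row in like:
            # E-step: responsibility of each template for this source
            z = sum(w[k] * row[k] for k in range(t))
            for k in range(t):
                new[k] += w[k] * row[k] / z
        # M-step: new prior = average responsibility across sources
        w = [v / n for v in new]
    return w

# Toy example: three sources whose likelihoods all favor template 0,
# so the learned prior concentrates on it.
weights = learn_template_prior([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]])
```

In the paper's setting the templates span both stellar and galactic SEDs, so a source's classification follows from its posterior probability summed over the star versus galaxy templates; this sketch only shows the prior-learning step.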
View original:
http://arxiv.org/abs/1206.4306