Supervised machine learning techniques generally require a training set that contains both sufficient examples representative of the target concept and known counter-examples of that concept. In many application domains, however, it is not possible to supply a set of labeled counter-examples. This paper presents a technique that combines supervised and unsupervised learning to discover symbolic concept descriptions from a training set in which only positive instances carry class labels. Experimental results from applying the technique to several real-world datasets are provided. These results suggest that in some problem domains, learning without labeled counter-examples can achieve classification performance comparable to that of conventional learning algorithms, even though the latter use additional class information. The technique copes with noise in the training set and is applicable to a broad range of classification and pattern recognition problems.
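The abstract does not specify the algorithm, but the general idea of combining unsupervised and supervised learning under positive-only labels can be illustrated with a minimal sketch: cluster all training points without using labels (unsupervised step), then mark as the target concept those clusters that contain labeled positive instances (supervised step). The k-means routine, the farthest-point initialization, and the cluster-labeling rule below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch (assumed, not the paper's algorithm):
# 1) cluster all points, ignoring labels (unsupervised step);
# 2) flag clusters containing labeled positives as the concept (supervised step).

def dist2(a, b):
    """Squared Euclidean distance between two equal-length tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a non-empty list of tuples."""
    n = len(pts)
    return tuple(sum(xs) / n for xs in zip(*pts))

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic farthest-point initialization."""
    centroids = [points[0]]
    while len(centroids) < k:
        # next seed: the point farthest from all current centroids
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        # keep the old centroid if a cluster happens to be empty
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids

def fit_positive_only(points, positive_idx, k=2):
    """Train from positive labels only: clusters holding a labeled
    positive are treated as the concept; all others as counter-examples."""
    centroids = kmeans(points, k)
    assign = [min(range(k), key=lambda c: dist2(p, centroids[c])) for p in points]
    pos_clusters = {assign[i] for i in positive_idx}
    def predict(p):
        return min(range(k), key=lambda c: dist2(p, centroids[c])) in pos_clusters
    return predict

# Toy data: two well-separated groups; only two points of the first
# group are labeled positive, no negatives are labeled at all.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.1)]
predict = fit_positive_only(points, positive_idx=[0, 1], k=2)
```

On this toy data, `predict((0.05, 0.05))` falls in the positively labeled cluster while `predict((6, 6))` does not, so unlabeled points effectively stand in for the missing counter-examples.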