Background: Patients treated for prostate cancer may present to general practitioners (GPs) for treatment follow-up, but may be reluctant to have their consultations recorded. The use of simulated patients therefore allows practitioner consultations to be rated. The aim of this study was to determine whether the speciality of the assessor has an impact on how GP consultation performance is rated.

Methods: Six pairs of scenarios were developed for professional actors in two series of consultations with GPs. The scenarios included chronic radiation proctitis, Prostate Specific Antigen (PSA) 'bounce', recurrence of cancer, urethral stricture, erectile dysfunction, and depression or anxiety. Participating GPs were furnished with each patient's past medical history, current medication, prostate cancer details and treatment, and findings of physical examinations. Consultations were video recorded and assessed for quality by two sets of assessors, a team of two GPs and a team of two Radiation Oncologists, each deploying the Leicester Assessment Package (LAP). LAP scores from the GP and Radiation Oncologist assessors were compared.

Results: Eight GPs participated. In Series 1, LAP scores ranged from 61% to 80% for GP assessors and from 67% to 86% for Radiation Oncologist assessors. In Series 2, the corresponding ranges were 51% to 82% and 56% to 89%. Correlations between the two GP assessors' LAP scores were 0.31 in Series 1 and 0.87 in Series 2; correlations between the two Radiation Oncologist assessors were 0.50 and 0.72 respectively. Radiation Oncologist and GP assessor scores differed significantly for four doctors and for some scenarios. Anticipatory care was the only domain in which GP assessors scored participants higher than Radiation Oncologist assessors did.

Conclusion: The assessment of GP consultation performance is not consistent across assessors from different disciplines, even when they deploy the same assessment tool.