We propose that a mismatch between problem presentation and question structure may promote errors on Bayesian reasoning problems. In this task, people determine the likelihood that a positive test result actually indicates the presence of a condition. Research has shown that people routinely fail to correctly identify this positive predictive value (PPV). We point out that the typical problem structure is likely to confuse reasoners by focusing on the incorrect reference class for answering this diagnostic question; instead, it provides the anchor needed to address a different diagnostic question, about sensitivity (SEN). We describe two experiments in which participants answered diagnostic questions using problems presented with congruent or incongruent reference classes. Aligning reference classes eased both representational and computational difficulties, increasing the proportion of participants who were consistently accurate to an unprecedented 93% on PPV questions and 69% on SEN questions. Analysis of response components from incongruent problems indicated that many errors reflect difficulties in identifying and applying appropriate values from the problem, prerequisite processes that contribute to computational errors. We conclude with a discussion of the need, especially in applied settings and on initial exposure, to adopt problem presentations that guide, rather than confuse, the organization and use of diagnostic information.
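For readers less familiar with these quantities: PPV and SEN are inverse conditional probabilities anchored in different reference classes (everyone who tests positive versus everyone who has the condition). A standard formulation, using illustrative counts TP, FP, and FN for true positives, false positives, and false negatives (these symbols are not values from the experiments reported here), is

\[
\mathrm{PPV} = P(\text{condition} \mid \text{positive test}) = \frac{TP}{TP + FP},
\qquad
\mathrm{SEN} = P(\text{positive test} \mid \text{condition}) = \frac{TP}{TP + FN}.
\]

Equivalently, by Bayes' theorem, with base rate $p$ and specificity $\mathrm{SPC}$,

\[
\mathrm{PPV} = \frac{\mathrm{SEN} \cdot p}{\mathrm{SEN} \cdot p + (1 - \mathrm{SPC})(1 - p)}.
\]

Typical Bayesian reasoning problems report frequencies partitioned by condition status, the reference class that anchors SEN; the PPV question instead requires partitioning by test result, which is the mismatch that the congruent presentations described above remove.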