Meta-reasoning is an integral part of high-level cognition. Among other tasks, an intelligent agent may use meta-reasoning to learn by correcting mistakes in its domain knowledge. For example, given examples on which its classification knowledge fails, a classification agent may use meta-reasoning to self-diagnose and self-repair the erroneous knowledge. Prior work has shown that such self-diagnosis and self-repair are facilitated when a hierarchical structure can be imposed on the agent's classification knowledge using available background knowledge. In this paper, we examine the effect on classification learning of imposing fixed semantics on the intermediate nodes of the hierarchy, as distinct from the advantage offered by the structuring of knowledge alone. We present empirical results from ablation experiments demonstrating that verification procedures at the intermediate concepts provide generalization power well beyond that afforded by the structure of the hierarchy.
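As a minimal illustrative sketch (not the paper's actual system), the idea of attaching fixed-semantics verification procedures to intermediate concepts can be rendered as follows: each node in the hierarchy carries a learned membership rule and an independent verification procedure, and self-diagnosis localizes a fault to the first concept on the classification path where the two disagree. All class names, rules, and the toy domain below are hypothetical.

```python
# Hypothetical sketch: self-diagnosis over a classification hierarchy
# whose intermediate concepts have fixed-semantics verification procedures.
class Concept:
    def __init__(self, name, rule, verify, children=()):
        self.name = name          # concept label
        self.rule = rule          # learned (possibly faulty) membership rule
        self.verify = verify      # fixed-semantics verification procedure
        self.children = list(children)

    def classify(self, example):
        """Descend the hierarchy using the learned rules; return a label."""
        for child in self.children:
            if child.rule(example):
                return child.classify(example)
        return self.name

    def diagnose(self, example):
        """Return the first concept on the classification path whose
        learned rule disagrees with its verification procedure."""
        for child in self.children:
            if child.rule(example) != child.verify(example):
                return child.name            # fault localized here
            if child.rule(example):
                return child.diagnose(example)
        return None                          # no fault found on this path

# Toy domain: classify integers. The 'even' concept has a buggy learned rule.
even = Concept('even', rule=lambda x: x % 4 == 0,    # buggy: too narrow
                       verify=lambda x: x % 2 == 0)  # correct fixed semantics
odd  = Concept('odd',  rule=lambda x: x % 2 == 1,
                       verify=lambda x: x % 2 == 1)
root = Concept('number', rule=lambda x: True, verify=lambda x: True,
               children=[even, odd])

print(root.classify(6))   # → number   (misclassified: learned rule misses 6)
print(root.diagnose(6))   # → even     (verification localizes the fault)
```

Without the verification procedures, the hierarchy's structure alone could only tell the agent *that* classification failed, not *which* intermediate concept's knowledge is erroneous; the fixed semantics are what make the fault localizable and repairable.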