Existing multi-view 3D object reconstruction methods rely heavily on sufficient overlap between input images and are prone to producing holes or artifacts, limiting the geometric precision and completeness of the reconstructed models. Recent advances in diffusion-based 3D generation offer the potential to address these limitations by leveraging learned generative priors to "hallucinate" the unseen parts of objects, thereby producing plausible 3D structures. However, the stochastic nature of the inference process limits the accuracy and reliability of the generated results, preventing existing reconstruction frameworks from integrating such 3D generative priors. In this work, we comprehensively analyze why diffusion-based 3D generative methods fail to achieve high consistency with the inputs, identifying (a) insufficient cross-view connections when extracting multi-view image features as conditions, (b) the susceptibility of global coarse-structure generation to the initial noise, and (c) the poor controllability of iterative denoising during local detail generation, all of which easily lead to global structures and local details that are plausible yet inconsistent with the inputs. Accordingly, we propose ReconViaGen, which integrates reconstruction priors into the generative framework, and devise several strategies that effectively address these issues. Extensive experiments demonstrate that ReconViaGen reconstructs complete and accurate 3D models that are consistent with the input views in both global structure and local details.
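To make failure mode (a) concrete, the sketch below shows one generic way to build cross-view connections when extracting multi-view features as diffusion conditions: per-view tokens attend jointly across all views instead of being encoded independently. This is a minimal PyTorch-style illustration under assumed shapes and module names (`CrossViewFusion`, 256-dim tokens), not the paper's actual architecture.

```python
# Hypothetical sketch (not the paper's code): fusing per-view image
# features with cross-view attention so that the diffusion condition
# carries explicit correspondences between the input views.
import torch
import torch.nn as nn


class CrossViewFusion(nn.Module):
    """Let each view's tokens attend to the tokens of all views."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, views, tokens, dim), e.g. per-view features from
        # an image encoder applied independently to each input view.
        b, v, t, d = feats.shape
        flat = feats.reshape(b, v * t, d)       # pool all views' tokens
        fused, _ = self.attn(flat, flat, flat)  # cross-view self-attention
        fused = self.norm(flat + fused)         # residual + norm
        return fused.reshape(b, v, t, d)        # per-view, cross-view-aware


# Usage: 2 views, 196 tokens per view, 256-dim features.
feats = torch.randn(1, 2, 196, 256)
cond = CrossViewFusion()(feats)  # condition tokens for a diffusion model
print(cond.shape)  # torch.Size([1, 2, 196, 256])
```

Without such a fusion step, each view is encoded in isolation and the generative model receives no signal about which image regions correspond across views, which is one plausible source of the cross-view inconsistency the analysis above identifies.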