This paper provides a comprehensive survey of the rapidly evolving field of knowledge graph (KG) construction using Large Language Models (LLMs). The authors analyze how LLMs are reshaping the traditional three-layered pipeline of ontology engineering, knowledge extraction, and knowledge fusion. They first revisit traditional KG methodologies to establish a conceptual foundation, then examine emerging LLM-driven approaches, categorizing them into schema-based and schema-free paradigms.

The survey highlights the transformative impact of LLMs, noting a shift from rule-based systems to more adaptive, generative frameworks. Key trends identified include the move from static schemas to dynamic schema induction, from modular pipelines to generative unification, and from symbolic rigidity to semantic adaptability. The paper concludes by redefining KGs as living, cognitive infrastructures that blend language understanding with structured reasoning, while acknowledging open challenges in scalability, reliability, and continual adaptation.

By synthesizing representative frameworks, analyzing their technical mechanisms, and identifying their limitations, the survey offers a clear, well-structured overview of the current landscape and future directions, making it a useful resource for researchers and practitioners working at the intersection of LLMs and KG construction. Its systematic approach and clear categorization of methodologies are notable strengths. However, the paper focuses primarily on summarizing existing work and does not examine in depth the practical challenges and ethical implications of using LLMs for KG construction; both areas warrant further exploration.