This paper introduces an AI-driven framework designed to optimize energy efficiency and network resilience in data centers, addressing a critical challenge in modern computing infrastructure. The core contribution lies in the integration of multi-agent reinforcement learning (MARL) with workload prediction, enabling dynamic resource allocation while maintaining network reliability. The authors propose a system where multiple agents, each responsible for a subset of resources, learn to coordinate their actions to minimize energy consumption and maximize fault tolerance.

The framework employs a Long Short-Term Memory (LSTM) network to predict future workloads, providing the MARL agents with foresight to make informed decisions. The agents use the Proximal Policy Optimization (PPO) algorithm to learn optimal resource allocation policies, and the proposed dynamic resource allocator acts on those policies through mechanisms such as workload migration, server standby, and redundancy management.

Empirically, the paper demonstrates significant improvements over traditional methods: a 27.2% reduction in energy consumption, measured by Power Usage Effectiveness (PUE), and a 58.4% improvement in Mean Time To Repair (MTTR), a key metric for network resilience. The experiments use realistic workload traces and established network configurations, which strengthens the credibility of the results, and ablation studies analyze the contribution of each component of the framework.

The paper is well-structured, with clear explanations of the methodology, experimental setup, and results, making it accessible to a broad audience. The significance of this work lies in its potential to address the growing energy demands of data centers while ensuring their reliability, a crucial aspect for the continued growth of AI and other compute-intensive applications.
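The predict-then-allocate loop at the heart of the framework can be sketched in simplified form. The snippet below is not the authors' implementation: the LSTM predictor is replaced by a moving-average stand-in, the learned PPO policy by a greedy activation rule, and all class and variable names are hypothetical. It only illustrates how a forecast can drive how many servers stay active versus in standby.

```python
from collections import deque

class WorkloadPredictor:
    """Stand-in for the paper's LSTM: a moving average over the
    last `window` workload observations (hypothetical simplification)."""
    def __init__(self, window=4):
        self.history = deque(maxlen=window)

    def observe(self, load):
        self.history.append(load)

    def predict(self):
        # Forecast the next-step load as the mean of recent observations.
        return sum(self.history) / len(self.history) if self.history else 0.0

class ResourceAgent:
    """One agent managing a pool of servers; it activates just enough
    capacity for the predicted load plus a redundancy margin."""
    def __init__(self, name, total_servers, capacity_per_server=100.0):
        self.name = name
        self.total = total_servers
        self.capacity = capacity_per_server
        self.active = total_servers  # start fully powered on

    def act(self, predicted_load, redundancy=1):
        # Keep enough servers for the forecast plus `redundancy` spares;
        # the rest go to low-power standby. In the paper, a learned PPO
        # policy would replace this greedy rule.
        needed = int(-(-predicted_load // self.capacity))  # ceiling division
        self.active = min(self.total, max(1, needed + redundancy))
        return self.active

# Simulated control loop: a rising-then-falling synthetic load trace.
predictor = WorkloadPredictor()
agent = ResourceAgent("pool-0", total_servers=10)
trace = [120, 250, 400, 380, 200, 90]  # synthetic workload (hypothetical)
active_history = []
for load in trace:
    predictor.observe(load)
    active_history.append(agent.act(predictor.predict()))
print(active_history)
```

Under this greedy rule the active server count tracks the smoothed forecast rather than the raw load, which is the basic mechanism by which prediction-driven allocation avoids both over-provisioning (wasted energy) and reactive under-provisioning (resilience risk).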
The framework's ability to dynamically adapt to changing workloads and network conditions represents a significant step forward in the field of green computing. However, the paper also acknowledges certain limitations, such as the computational overhead of training the AI models and the simulation-to-reality gap, which need to be addressed in future research.