This paper introduces ElectionFit, a framework that uses Large Language Models (LLMs) to simulate voting behavior in the 2024 U.S. presidential election, with a focus on key swing states. The core idea is to construct individual agent profiles from detailed demographic data drawn from the U.S. Census and other public sources, and then prompt LLMs to reason about each agent's voting decision given its demographic characteristics and contextual information about the candidates' policy positions.

Each agent's profile includes attributes such as age, race, sex, occupation, industry, education level, and religion. Agents are then given contextual information about the candidates' stances on key issues such as economic policy, immigration, and abortion rights, extracted from official party platforms and public statements. The LLMs simulate each agent's voting decision, and the aggregate results are compared to actual election outcomes.

The authors show that ElectionFit replicates the actual election result in six of seven key swing states, which they argue highlights the potential of LLMs as an interpretable and nuanced tool for social science research. Beyond predicting election outcomes, the framework supports exploration of individual-level decision-making, offering insight into how demographic factors and policy positions shape voting behavior. The authors also conduct ablation studies and sensitivity analyses to assess the framework's robustness and to identify the factors that most influence its performance. These analyses reveal that the framework is sensitive to changes in input parameters and that the underlying LLMs exhibit inherent biases and instability, which the authors acknowledge as a critical limitation.
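The paper does not publish its implementation, but the pipeline described above (persona prompt construction, per-agent querying, vote aggregation) can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, profile fields, and the `toy_llm` stand-in for a real model call are all hypothetical.

```python
from collections import Counter

def build_prompt(profile, context):
    """Render one agent's demographic persona plus candidate context as a prompt."""
    persona = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        f"You are a U.S. voter with the following profile: {persona}.\n"
        f"Candidate positions:\n{context}\n"
        "Based on your profile, which candidate would you vote for? "
        "Answer with the candidate's name only."
    )

def simulate_state(profiles, context, query_llm):
    """Query the model once per agent and tally the simulated votes."""
    votes = Counter(query_llm(build_prompt(p, context)) for p in profiles)
    winner = votes.most_common(1)[0][0]
    return winner, votes

# Toy stand-in for an actual LLM call (an assumption for illustration only):
def toy_llm(prompt):
    return "Candidate A" if "occupation: farmer" in prompt else "Candidate B"

profiles = [
    {"age": 52, "occupation": "farmer", "education": "high school"},
    {"age": 29, "occupation": "teacher", "education": "bachelor's degree"},
    {"age": 61, "occupation": "farmer", "education": "associate degree"},
]
winner, tally = simulate_state(
    profiles, "Candidate A: ... Candidate B: ...", toy_llm
)
# winner is "Candidate A" under this toy decision rule (2 of 3 agents)
```

Keeping the LLM call behind a `query_llm` parameter also makes the sensitivity analyses described above easy to run: the same agent population can be replayed against different models or prompt variants to measure instability.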
The paper emphasizes the importance of auditing LLMs for bias and instability, since both can significantly degrade the fidelity of such simulations. The authors argue that the framework's ability to replicate real-world election outcomes, combined with its interpretability, makes it a valuable tool for social science research, while stressing the need for careful attention to the limitations and ethical implications of using LLMs in this setting. The paper's broader significance lies in its application of LLMs to model complex social phenomena, its emphasis on interpretability, and its contribution to the ongoing discussion about the reliability and ethical use of LLMs in social science.