ICAIS 2025
Full name: The 1st International Conference on AI Scientist
-
2511.0023
ReasoningV: Efficient Verilog Code Generation with Adaptive Hybrid Reasoning
Large Language Models (LLMs) have advanced Verilog code generation but still suffer from low data quality, limited reasoning, and inefficiency. We introduce ReasoningV, which couples intrinsic reasoning with adaptive routing. Our contributions: (1) ReasoningV-5K, a dataset of 5,322 functionally verified samples with distilled reasoning paths; (2) a two-stage training scheme (LoRA for foundations plus full-parameter reasoning enhancement); and (3) difficulty-aware routing that saves 85-93% of tokens vs. a strong commercial model and 32-75% vs. fixed-depth variants. On VerilogEval-human, RV-14B attains 73.9% pass@1; RV-7B reaches 57.8% with superior efficiency. Models, data, and code: https://github.com/BUAA-CLab/ReasoningV.
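The difficulty-aware routing described above can be sketched in miniature: a cheap scorer estimates how hard a prompt is and picks a reasoning depth, so easy prompts skip expensive chain-of-thought generation. The scorer heuristic, thresholds, and depth labels below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of difficulty-aware routing (not ReasoningV's actual code).
# A lightweight scorer maps a Verilog spec to a difficulty in [0, 1];
# the router then selects how much reasoning the model should spend.

def estimate_difficulty(prompt: str) -> float:
    """Toy proxy: specs with more control-flow keywords count as harder."""
    keywords = ("always", "if", "case", "generate", "fsm")
    hits = sum(prompt.lower().count(k) for k in keywords)
    return min(1.0, 0.1 * hits + len(prompt) / 4000)

def route(prompt: str) -> str:
    """Pick a reasoning depth based on estimated difficulty."""
    d = estimate_difficulty(prompt)
    if d < 0.3:
        return "direct"     # answer without an explicit reasoning trace
    elif d < 0.7:
        return "short-cot"  # brief reasoning trace
    return "full-cot"       # full reasoning-enhanced generation

print(route("module adder(input a, b, output s); assign s = a ^ b; endmodule"))
# -> direct
```

A real router would replace the keyword heuristic with a learned difficulty classifier; the token savings come from most prompts taking the cheap branch.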
-
2511.0030
ElectionFit: A Computational Laboratory of LLM Agents for Simulating U.S. Presidential Elections
Modeling complex human behavior, such as voter decisions in national elections, is a long-standing challenge for computational social science. Traditional agent-based models (ABMs) are limited by oversimplified rules, while large-scale statistical models often lack interpretability. We introduce ElectionFit, a novel framework that uses Large Language Models (LLMs) to build a "computational laboratory" of LLM agents for political simulation. Each agent is instantiated with a high-fidelity demographic profile and dynamic contextual information (e.g., candidate policies), enabling it to perform nuanced, generative reasoning to simulate a voting decision. We deployed this framework as a testbed on the 2024 U.S. Presidential Election, focusing on seven key swing states. Our simulation's macro-level results successfully replicated the real-world outcome, demonstrating the high fidelity of our "virtual society". The primary contribution is not only the prediction but also the framework's utility as an interpretable research tool. ElectionFit moves beyond black-box outputs, allowing researchers to probe agent-level rationales and analyze the stability and sensitivity of LLM-driven social simulations.
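The agent-instantiation step above (demographic profile plus dynamic context driving a generative voting decision) can be illustrated with a toy sketch. The `VoterAgent` fields, prompt template, and the stubbed model call are hypothetical assumptions for illustration, not ElectionFit's actual interface.

```python
# Hypothetical sketch of instantiating an LLM voter agent (not ElectionFit's code).
# A demographic profile and current context are composed into a persona prompt;
# a real system would send this prompt to an LLM and parse the voting decision.
from dataclasses import dataclass

@dataclass
class VoterAgent:
    state: str
    age: int
    education: str
    party_lean: str

    def build_prompt(self, context: str) -> str:
        """Compose the persona prompt an LLM would answer in character."""
        return (
            f"You are a {self.age}-year-old {self.education} voter in "
            f"{self.state}, leaning {self.party_lean}.\n"
            f"Context: {context}\n"
            "Who do you vote for, and briefly why?"
        )

agent = VoterAgent(state="Pennsylvania", age=45,
                   education="college-educated", party_lean="independent")
prompt = agent.build_prompt("Candidate policies on the economy and healthcare.")
print(prompt)
```

Because the rationale is generated in natural language, researchers can inspect each agent's stated reasoning rather than only aggregate vote counts, which is the interpretability property the abstract emphasizes.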