Papers
-
2511.0026: Estimating Rural Rooftop Solar Potential Using Semantic Segmentation and Multi-Source Data
Solar energy is a clean and renewable resource, and the low-rise, unobstructed rural buildings of northern China provide ideal conditions for photovoltaic (PV) installation compared to shaded, high-density urban areas. Yet, progress in assessing rural solar potential is limited by the absence of accurate 3D building data. This study proposes a rapid estimation approach integrating deep learning, parametric modeling, and GPU-accelerated simulation. Convolutional neural networks (CNNs) extract building footprints from satellite imagery, which are then processed in Grasshopper to generate refined vector outlines. Combined with digital surface model (DSM) data, these outlines produce precise 3D village models. Using Vitality 2.0 for GPU-based solar simulation, the method was applied to 31 villages in Tianjin, generating parametric 3D models and estimating their solar potential. Results show that low building heights and minimal mutual shading make photovoltaic capacity scale with roof area—larger villages have greater generation potential. Moreover, villages with metal roofs exhibit higher conversion efficiency and shorter cost-recovery periods than those with concrete or ceramic-tile roofs, due to better heat dissipation. Overall, the workflow offers a practical and efficient solution for estimating rural solar potential in data-scarce regions to guide renewable energy planning and investment.
-
2511.0024: Touch Beyond Vision: A Survey of Vision-Tactile-Language Models in Embodied Intelligence
Embodied intelligence increasingly leverages multimodal perception—particularly vision and language—to support rich interaction with the physical world. Yet the tactile modality remains under-explored, despite its essential role in human perception and manipulation. In this survey, we systematically review research at the intersection of vision, tactile sensing, and language, which we refer to as Vision-Tactile-Language (VTL) models. We provide (i) a historical context tracing the shift from vision-centric embodied systems to multisensory agents, (ii) foundational aspects of tactile sensing and representation, (iii) methods for integrating vision and touch, (iv) emerging architectures that incorporate language alongside vision and touch, (v) applications in embodied robotics, (vi) current challenges and open problems, and (vii) a forward-looking outlook toward tactile foundation models. We conclude by arguing that touch closes a key gap in embodied AI, enabling truly grounded perception, reasoning, and action.
-
2511.0023: ReasoningV: Efficient Verilog Code Generation with Adaptive Hybrid Reasoning
Large Language Models (LLMs) have advanced Verilog code generation but still suffer from poor data quality, limited reasoning capability, and inefficiency. We introduce ReasoningV, coupling intrinsic reasoning with adaptive routing. Our contributions: (1) ReasoningV-5K, 5,322 functionally verified samples with distilled reasoning paths; (2) a two-stage training scheme (LoRA for foundations + full-parameter reasoning enhancement); and (3) difficulty-aware routing that saves 85-93% of tokens vs. a strong commercial model and 32-75% vs. fixed-depth variants. On VerilogEval-human, RV-14B attains 73.9% pass@1; RV-7B reaches 57.8% with superior efficiency. Models, data, and code: https://github.com/BUAA-CLab/ReasoningV.
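The difficulty-aware routing described in this abstract can be sketched as a simple threshold policy: easy prompts skip chain-of-thought entirely, which is where the token savings come from. The difficulty score source, thresholds, and depth labels below are illustrative assumptions, not details from the paper:

```python
def route_reasoning_depth(difficulty: float, low: float = 0.3, high: float = 0.7) -> str:
    """Map an estimated problem-difficulty score in [0, 1] to a reasoning depth.

    Only hard prompts pay for a full distilled reasoning path; easy ones
    are answered directly, saving the tokens a fixed-depth model would spend.
    """
    if difficulty < low:
        return "direct"           # answer without intermediate reasoning
    if difficulty < high:
        return "short-reasoning"  # brief reasoning trace
    return "full-reasoning"       # complete reasoning path

# A small batch of problems routed by estimated difficulty:
depths = [route_reasoning_depth(d) for d in (0.1, 0.5, 0.9)]
print(depths)  # ['direct', 'short-reasoning', 'full-reasoning']
```

In practice the difficulty estimate would come from a lightweight classifier or the model's own uncertainty; the routing itself stays this simple.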
-
2511.0022: Stealing 3D Medical Segmentation Models via Collaborative Dual-Model Architecture
Machine Learning as a Service (MLaaS) facilitates the deployment and accessibility of medical models, yet concurrently exposes proprietary models to potential adversaries. Attackers may exploit model stealing attacks (MSAs) to replicate these models illicitly, leading to loss of training investment and privacy vulnerabilities. While existing research has mainly focused on MSAs in the context of 2D natural image classification, this work presents the first investigation into stealing 3D medical segmentation models. We introduce collaborative dual-model 3D medical segmentation stealing (CDMSS-3D), which decomposes the model stealing objective into two complementary aspects: stealing accuracy and stealing robustness. With our adversarial proxy training, CDMSS-3D achieves superior model stealing performance. Furthermore, we incorporate a dual-model discrepancy sampling strategy, which enhances the fidelity of the substitute model by prioritizing uncertain samples. Extensive experiments on four 3D medical segmentation datasets demonstrate that CDMSS-3D consistently outperforms adapted baselines.
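The dual-model discrepancy sampling idea above can be illustrated with a small sketch: query samples are ranked by how much the two substitute models disagree, and the most uncertain ones are prioritized for querying the victim. The L1 disagreement score below is a generic choice, an assumption rather than the paper's exact criterion:

```python
def discrepancy_scores(preds_a, preds_b):
    """Per-sample disagreement between two substitute models' soft predictions."""
    return [sum(abs(a - b) for a, b in zip(pa, pb))
            for pa, pb in zip(preds_a, preds_b)]

def select_uncertain(preds_a, preds_b, k):
    """Indices of the k samples on which the two models disagree most.

    These are the samples whose victim-model labels are most informative
    for improving the fidelity of the substitutes.
    """
    scores = discrepancy_scores(preds_a, preds_b)
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

# Two substitute models' class probabilities for three candidate samples:
pa = [[0.9, 0.1], [0.6, 0.4], [0.5, 0.5]]
pb = [[0.9, 0.1], [0.4, 0.6], [0.9, 0.1]]
print(select_uncertain(pa, pb, 2))  # [2, 1]: the models disagree most on these
```

For 3D segmentation the same scoring would be applied voxel-wise and aggregated per volume, but the selection logic is unchanged.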
-
2511.0021: A scalable deep learning framework for gene expression prediction by integrating promoter-enhancer sequences with multimodal epigenomic data
Transcriptional regulation, critical for cellular differentiation and adaptation to environmental changes, involves coordinated interactions among DNA sequences, regulatory proteins, and chromatin architecture. Despite extensive data from consortia like ENCODE, understanding the dynamics of cis-regulatory elements (CREs) in gene expression remains challenging. Deep learning is a powerful tool for learning gene expression and epigenomic signals from DNA sequences, exhibiting superior performance compared to conventional machine learning approaches. However, even the most advanced deep learning-based methods may fall short in capturing the regulatory effects of distal elements such as enhancers, limiting their predictive accuracy. In addition, these methods may require significant resources to train or to adapt to newly generated data. To address these challenges, we present EPInformer, a scalable deep-learning framework for predicting gene expression by integrating promoter-enhancer interactions with their sequences, epigenomic signals, and chromatin contacts. Our model outperforms existing gene expression prediction models in rigorous cross-chromosome validation, accurately recapitulates enhancer-gene interactions validated by CRISPR perturbation experiments, and identifies crucial transcription factor motifs within regulatory sequences.
-
2511.0020: AI-Powered Rainfall Forecasting: Progress, Challenges, Future Directions
Rainfall forecasting holds significant importance across a wide range of sectors, including disaster prevention, energy planning, and agriculture. In the past decade, artificial intelligence (AI) has emerged as a revolutionary approach, aiming to overcome the long-standing limitations of traditional numerical weather prediction (NWP) models and statistical downscaling models (SDMs) for rainfall forecasting. This chapter briefly introduces the remarkable progress made in AI-based rainfall forecasting, focusing on three major aspects: physics-constrained machine learning (ML), multi-modal data fusion, and extreme event prediction. AI-based models can resolve the subgrid-scale parameterization problems (e.g., convective parameterization) that have long troubled NWP models. For instance, DeepMind's GraphCast employs dynamic graph neural networks to generate high-resolution global forecasts; making a 10-day forecast with GraphCast takes less than a minute on a single Google TPU v4 machine. Regarding multi-modal data fusion, systems such as the National Oceanic and Atmospheric Administration (NOAA) Multi-Radar Multi-Sensor (MRMS) combine various data sources and significantly improve forecast accuracy. For extreme rainfall prediction, the application of adversarial training and attention mechanisms has also led to improvements. The review concludes by suggesting future research directions, emphasizing how AI is transforming rainfall forecasting so that it can better meet the challenges posed by a changing climate.
-
2511.0019: From Virtual Cells to Programmable Humans: Advancing Digital Biology Through Hybrid AI Systems
Recent advances in artificial intelligence (AI), high-performance computing, and systems biology have accelerated the development of AI-powered virtual biological systems, from virtual cells to multiscale organ models and programmable virtual humans. These systems promise transformative applications in drug discovery, precision medicine, and in silico clinical trials. This review provides a critical synthesis of current progress, key technologies, and future directions across this spectrum. We explore hybrid modeling strategies that combine mechanistic models—such as ordinary and partial differential equations—with deep learning methods including convolutional, recurrent, and graph neural networks. We emphasize the importance of robust uncertainty quantification, simulation validation, and multiscale integration across molecular, cellular, organ-level, and systemic processes. A core contribution is the introduction of the SIM-CARD framework, a standardized simulation accountability protocol to document data provenance, modeling assumptions, performance metrics, and regulatory alignment. We propose a three-phase translational roadmap: (1) validated AI-augmented virtual cells and organs (by 2030), (2) interoperable multi-organ physiological systems (by 2040), and (3) programmable full-body virtual humans supporting personalized simulations and regulatory use cases (by 2055). We identify key enablers—including high-fidelity multiscale data, computational scalability, and simulation governance—as well as bottlenecks such as algorithmic bias, explainability, and regulatory uncertainty. Finally, we call for collaborative efforts to establish minimal benchmarking suites, FAIR-compliant simulation metadata, and cross-institutional federated learning infrastructure. This review aims to guide the scientific, regulatory, and clinical communities in navigating the complex yet promising trajectory toward clinically actionable programmable human simulations.
-
2511.0018: From Virtual Cells to Programmable Humans: Advancing Digital Biology Through Hybrid AI Systems
The convergence of artificial intelligence and systems biology is giving rise to a new paradigm in biomedical research—AI-powered virtual biological systems. From single-cell simulations to organ-level models and ultimately programmable virtual humans, this digital continuum holds transformative potential for disease modeling, personalized medicine, and therapeutic discovery. In this review, we critically examine the state of the art in AI-driven simulations, including the numerical foundations, multiscale integration strategies, and the emerging class of hybrid models that bridge mechanistic and data-driven approaches. We explore the challenges of validation, uncertainty quantification, and regulatory alignment across simulation scales, with particular focus on the development of simulation accountability frameworks such as SIM-CARDs. Ethical and privacy concerns, including algorithmic bias and data sovereignty in patient-specific models, are also addressed, alongside concrete proposals for governance and federated simulation workflows. Special attention is given to the technical complexity of multiscale modeling, including the integration of mechanistic solvers with neural architectures and the computational resources required for real-time, clinically actionable simulations. We conclude with a translational roadmap for virtual biology that projects validated virtual cells for drug screening by 2030, multi-organ simulations by 2040, and the emergence of programmable virtual humans by 2055. By unifying high-fidelity numerical models with explainable AI, and aligning simulation design with ethical, regulatory, and clinical needs, the field of digital biology is positioned to unlock scalable and trustworthy biomedical innovation.
-
2511.0016: Graphics Capsule: Learning Hierarchical 3D Face Representations from 2D Images
Constructing hierarchies of objects is an important function of human visual processing. Previous studies have successfully adopted capsule networks to decompose digits and faces into parts in an unsupervised manner, probing whether neural networks exhibit a similar perception mechanism. However, their descriptions are restricted to 2D space, limiting their capacity to imitate humans' intrinsic 3D perception ability. In this paper, we propose an Inverse Graphics Capsule Network (IGC-Net) to learn hierarchical 3D face representations from large-scale unlabeled images. The core of IGC-Net is a new type of capsule, named the graphics capsule, which represents 3D primitives with interpretable computer graphics (CG) parameters, including depth, albedo, and 3D pose. Specifically, IGC-Net first decomposes objects into a set of semantically consistent part-level descriptions and then assembles them into object-level descriptions to build the hierarchy. The learned graphics capsules reveal how neural networks, oriented at visual perception, understand faces as a hierarchy of 3D models.
-
2511.0015: Engineering Collective Attention in the Age of Artificial Intelligence
This article explores how collective attention can be both disrupted and enhanced by artificial intelligence. It examines how the rise of algorithmic recommendation systems, generative media, and large-scale language models has transformed public communication and redefined what captures human attention. The analysis identifies the dual nature of artificial intelligence: while it can distort information ecosystems through deepfakes, social bots, and engagement-driven algorithms, it also holds the potential to strengthen collective reasoning by improving access to reliable knowledge and facilitating the clarification of complex information. Drawing on interdisciplinary research, the article develops a multilevel framework for understanding and improving collective attention. At the individual level, it emphasizes education, digital literacy, and critical awareness to build cognitive resilience. At the governmental level, it assesses regulatory and ethical strategies for ensuring transparency, accountability, and fairness in the design and deployment of AI systems. At the societal level, it highlights the promise of human–AI collaboration to guide attention toward truth, empathy, and shared problem-solving. The article concludes that collective attention can indeed be engineered in beneficial ways when artificial intelligence is governed transparently, used ethically, and integrated with public oversight to reinforce informed, cohesive, and resilient democracies.
-
2511.0014: Artificial Intelligence in Biomedical Research: From Data Integration to Precision Medicine
This comprehensive review examines the transformative role of artificial intelligence in biomedical research, from foundational data integration to clinical applications. The paper explores how AI techniques facilitate multimodal data fusion across diverse biological data types, employing both traditional statistical methods and advanced deep learning architectures including variational autoencoders, graph neural networks, and transformer models. It evaluates AI applications in medical imaging, where convolutional neural networks have achieved remarkable diagnostic accuracy (up to 94% in COVID-19 detection) while enhancing segmentation and classification tasks across multiple imaging modalities. The review further investigates generative AI's impact on molecular design and drug discovery, highlighting transformer-based architectures like TransAntivirus that navigate vast chemical spaces to optimize therapeutic candidates. Finally, it examines AI-enabled precision medicine applications, including Clinical Decision Support Systems and federated learning approaches that balance analytical power with privacy preservation. Despite significant progress, implementation challenges persist, including data heterogeneity, model explainability, and ethical concerns regarding bias and privacy. The paper underscores the importance of developing interpretable AI systems that integrate seamlessly into clinical workflows while addressing regulatory, ethical, and economic considerations to realize the full potential of AI in advancing biomedical research and healthcare delivery.
-
2511.0013: Revolutionizing AI Conference Peer Review: A Bi-Directional Feedback and Rewards Framework
The rapid increase in submissions to AI conferences has led to a crisis in the peer review process, characterized by declining review quality and accountability. This position paper proposes a novel bi-directional feedback mechanism where authors can evaluate the quality of reviews while safeguarding against retaliation. Coupled with a blockchain-enabled reviewer rewards system, this framework aims to incentivize high-quality reviewing and create an accountability structure that benefits all stakeholders. By allowing authors to provide feedback on reviews and rewarding reviewers with transparent digital credentials, this system fosters a culture of quality and responsibility in the peer review process. We call upon the AI community to engage in this vital conversation and explore these transformative reforms for sustainable peer review practices.
-
2511.0012: Physics-Informed Neural Networks and Neural Operators for Parametric PDEs: Methods, Applications and Future Directions
PDEs arise ubiquitously in science and engineering, where solutions depend on parameters representing physical properties, boundary conditions, or geometric configurations. Traditional numerical methods require solving the PDE anew for each parameter value, making parameter space exploration prohibitively expensive for high-dimensional problems. Recent advances in machine learning, particularly physics-informed neural networks (PINNs) and neural operators, have revolutionized parametric PDE solving by learning solution operators that generalize across parameter spaces. We critically analyze two main paradigms: (1) PINNs, which embed physical laws as soft constraints and excel at inverse problems with sparse data, and (2) neural operators (including DeepONet, Fourier Neural Operator, and their variants), which learn mappings between infinite-dimensional function spaces and achieve unprecedented parameter space generalization. Through detailed comparisons across fluid dynamics, solid mechanics, heat transfer, and electromagnetics, we show that neural operators can achieve computational speedups of 10^3 to 10^5 over traditional solvers for multi-query scenarios, while maintaining comparable accuracy. We provide practical guidance for method selection, discuss theoretical foundations including universal approximation and convergence guarantees, and identify critical open challenges including high-dimensional parameter spaces, complex geometries, and out-of-distribution generalization. This work establishes a unified framework for understanding parametric PDE solvers through the lens of operator learning, offering a comprehensive resource—which we intend to incrementally update—for this rapidly evolving field.
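The soft-constraint idea behind PINNs can be reduced to a toy example: fit a one-parameter ansatz u(x) = a·x to data while penalizing the residual of the ODE u'(x) = 1, whose exact solution has a = 1. The ansatz, the data values, and the penalty weight are illustrative assumptions, not content from the paper:

```python
# Toy PINN: the physical law u'(x) = 1 enters the loss as a soft penalty
# alongside the usual data-fit term, so the fit is pulled toward solutions
# that also satisfy the governing equation.
xs = [0.0, 0.5, 1.0]
data = [0.0, 0.55, 1.05]        # noisy samples of the exact solution u(x) = x

def pinn_loss(a, lam=1.0):
    data_loss = sum((a * x - y) ** 2 for x, y in zip(xs, data)) / len(xs)
    physics_loss = (a - 1.0) ** 2   # for u = a*x, u'(x) = a, so residual = a - 1
    return data_loss + lam * physics_loss

# Gradient descent on the single parameter a.
a, lr = 0.0, 0.1
for _ in range(500):
    grad = sum(2 * (a * x - y) * x for x, y in zip(xs, data)) / len(xs) \
         + 2 * (a - 1.0)
    a -= lr * grad
print(round(a, 2))  # recovers a ≈ 1, consistent with both the data and the physics
```

In a real PINN the ansatz is a neural network and the residual is evaluated by automatic differentiation at collocation points, but the loss has exactly this two-term structure.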
-
2511.0010: From AI for Science to Agentic Science: A Survey on Autonomous Scientific Discovery and AI Scientists
Artificial intelligence (AI) is reshaping scientific discovery, evolving from specialized computational tools into autonomous research partners. We position Agentic Science as a pivotal stage within the broader AI for Science paradigm, where AI systems progress from partial assistance to full scientific agency. Enabled by large language models (LLMs), multimodal systems, and integrated research platforms, agentic AI exhibits capabilities in hypothesis generation, experimental design, execution, analysis, and iterative refinement, behaviors once regarded as uniquely human. This survey offers a domain-oriented review of autonomous scientific discovery across life sciences, chemistry, materials, and physics, synthesizing research progress and advances within each discipline. We unify three previously fragmented perspectives (process-oriented, autonomy-oriented, and mechanism-oriented) through a comprehensive framework that connects foundational capabilities, core processes, and domain-specific realizations. Building on this framework, we (i) trace the evolution of AI for Science, (ii) identify five core capabilities underpinning scientific agency, (iii) model discovery as a dynamic four-stage workflow, (iv) review applications across life sciences, chemistry, materials science, and physics, and (v) synthesize key challenges and future opportunities. This work establishes a domain-oriented synthesis of autonomous scientific discovery and positions Agentic Science as a structured paradigm for advancing AI-driven research.
-
2511.0009: A Pilot Study Evaluating Large Language Models as Reviewers at Academic Conferences
This paper presents a new system for academic peer review that is more objective, efficient, and community-guided. Our system incorporates author-assisted evaluation (Author-AAE) and community-guided review (CGR) into the peer review of AI conferences. This is in contrast to existing approaches that prioritize alternative systems that only address some of these challenges. Our evaluation uses data from three major AI conferences that used our system and from a survey of reviewers. Their feedback indicates that our system's reviews are superior to single-LLM-based reviews due to their reduced subjectivity and enhanced quality. The reviewers' scores for our system's reviews were significantly higher than for single-LLM-based reviews across multiple metrics: "Reproducibility and Quality" (by 0.427 ± 0.007), "Review Quality" (by 0.265 ± 0.09), and "Alignment between opinion and paper score" (by 0.503 ± 0.090). In addition, we discovered that single-LLM-based reviews are more likely to be rejected by the program committee after author major revisions (on average by 0.182 ± 0.103) and are much more likely to be rejected overall (on average by 0.300 ± 0.124), compared to our system's reviews. These results suggest that our system performs better in reducing the arbitrary nature of the current peer review system and can serve as an inspiration for the scientific community to explore new review systems.
-
2511.0008: A Self-Driving Laboratory for Materials Science: An Autonomous Research Agent for Deep Data Analysis and Interpretation
As artificial intelligence increasingly permeates scientific research, the "AI for Science" paradigm is evolving to enable more autonomous scientific workflows. Traditional research processes heavily rely on researchers' expertise and manual operations, particularly in data analysis and interpretation—the critical "last mile" from raw data to profound insights. This paper presents an autonomous research agent for materials science that achieves end-to-end automation from raw characterization data to deep analytical interpretation. The system integrates four core innovations: (1) AI-driven automatic data understanding with unified ingestion of heterogeneous instrument data, (2) automated data analysis through an extensible algorithm library, (3) a one-click automated reporting system, and (4) interactive AI-powered data interpretation via natural language dialogue. We demonstrate the agent's capabilities through real-world case studies across multiple characterization techniques (Raman, UPS, UV-Vis, TG), achieving remarkable performance: UV-Vis bandgap analysis is accelerated by 600× compared to manual processing, while maintaining exceptional accuracy with fitting precision R² ≥ 0.999. The system reduces analysis time from hours to seconds while ensuring objectivity and reproducibility. By automating the data analysis pipeline while preserving human oversight and interpretability, this work contributes a practical component toward building more integrated and efficient scientific discovery systems in materials research.
-
2511.0007: Enhancing Small Language Models with Gradient Noise Injection
Training small language models is challenging due to their limited capacity to capture complex patterns and their susceptibility to overfitting. To address these issues, we investigate gradient noise injection as a regularization strategy, building on prior work while introducing a noise schedule that decays exponentially over training. Unlike existing techniques, our method explicitly controls the trade-off between exploration and stability during optimization. We compare the exponential decay schedule with linear and adaptive variants, demonstrating empirically that the exponential schedule yields superior convergence and generalization. Extensive experiments on diverse text corpora, including shakespeare_char, enwik8, text8, and larger benchmark datasets, show consistent improvements in training dynamics, validation loss, and final performance. We report error bars and statistical significance tests to ensure robustness of the results. Detailed implementation information, including model architectures, hyperparameter settings, dataset sizes, and optimization strategies, is provided to support reproducibility, and we release our code and trained models publicly. Furthermore, we compare gradient noise injection with other regularization methods such as dropout, weight decay, and data augmentation, both in isolation and in combination, revealing complementary effects on training stability and generalization. Finally, we analyze the computational cost of gradient noise injection relative to these baselines, highlighting its practical efficiency in resource-constrained environments. Together, these contributions position gradient noise injection as a theoretically grounded, empirically validated, and computationally practical method for improving the robustness of small language models.
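The exponentially decaying noise schedule described in this abstract can be sketched in a few lines; the base scale sigma0 and decay rate below are illustrative assumptions, not the paper's hyperparameters:

```python
import math
import random

def noise_scale(step, sigma0=0.01, decay=1e-3):
    """Exponentially decaying noise level: sigma_t = sigma0 * exp(-decay * t)."""
    return sigma0 * math.exp(-decay * step)

def inject_gradient_noise(grads, step, sigma0=0.01, decay=1e-3):
    """Add zero-mean Gaussian noise to each gradient component before the
    optimizer update: large noise early encourages exploration, and the
    exponential decay restores stable convergence late in training."""
    sigma = noise_scale(step, sigma0, decay)
    return [g + random.gauss(0.0, sigma) for g in grads]

grads = [0.5, -1.2, 0.3]
noisy_early = inject_gradient_noise(grads, step=0)
noisy_late = inject_gradient_noise(grads, step=100_000)
print(noise_scale(0) > noise_scale(100_000))  # True: noise shrinks over training
```

The same helper drops into any training loop between the backward pass and the optimizer step; linear or adaptive variants, which the paper compares against, only change `noise_scale`.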
-
2511.0006: Multi-Agent Adaptive Variance Reduction Technique for Decentralized Nonsmooth Nonconvex Stochastic Optimization
Decentralized stochastic optimization with nonsmooth objectives and only zeroth-order oracle access arises in federated learning and privacy-sensitive applications, yet existing methods suffer from high variance and dimension-dependent complexity. We propose MAAVRT (Multi-Agent Adaptive Variance Reduction Technique), a decentralized zeroth-order algorithm that integrates randomized smoothing, adaptive variance reduction, and topology-aware consensus. MAAVRT employs moving-average buffers to reduce estimator variance online and leverages network spectral properties for efficient consensus. Our theoretical analysis decomposes the convergence error into four components, yielding a sample complexity of O(dδ⁻¹ε⁻³) that matches known lower bounds. Empirically, on standard benchmarks (IJCNN, COVTYPE, A9A), MAAVRT achieves substantially lower gradient norms and higher test accuracy than baseline methods, demonstrating the effectiveness of adaptive variance reduction in the decentralized nonsmooth setting.
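The single-agent core of this approach (a zeroth-order gradient estimate via randomized smoothing, smoothed by a moving-average buffer) can be sketched as below; the two-point Gaussian estimator, step sizes, and averaging weight are illustrative assumptions rather than the paper's exact formulation, and the consensus step across agents is omitted:

```python
import random
random.seed(0)  # for a reproducible demo

def zo_gradient(f, x, delta=1e-2):
    """Two-point zeroth-order gradient estimate along a random Gaussian
    direction: only function values of f are queried, never its gradient."""
    u = [random.gauss(0.0, 1.0) for _ in x]
    fp = f([xi + delta * ui for xi, ui in zip(x, u)])
    fm = f([xi - delta * ui for xi, ui in zip(x, u)])
    scale = (fp - fm) / (2.0 * delta)
    return [scale * ui for ui in u]

def step(f, x, buf, beta=0.9, lr=0.1):
    """One update using a moving-average buffer to damp estimator variance."""
    g = zo_gradient(f, x)
    buf = [beta * b + (1.0 - beta) * gi for b, gi in zip(buf, g)]
    x = [xi - lr * bi for xi, bi in zip(x, buf)]
    return x, buf

# Minimize f(x) = ||x||^2 without ever evaluating its gradient.
f = lambda x: sum(xi * xi for xi in x)
x, buf = [1.0, 1.0], [0.0, 0.0]
for _ in range(1000):
    x, buf = step(f, x, buf)
print(f(x) < 1e-2)  # True: the smoothed iterates approach the minimum at the origin
```

In the decentralized setting each agent would run this locally and then mix its iterate with its neighbors' according to the network topology.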