Papers
-
2603.0010: Advances in CRISPR-Cas12/13 Systems for Rapid Nucleic Acid Detection
Nucleic acid detection plays an irreplaceable role in infectious disease diagnosis, food safety monitoring, and environmental microbial surveillance. Traditional methods such as the polymerase chain reaction (PCR) offer high sensitivity, but their dependence on precision instruments and trained operators severely limits their deployment in primary-care and field settings. In recent years, a new generation of nucleic acid detection technologies based on CRISPR-Cas systems has developed rapidly; in particular, the Cas12 and Cas13 proteins, by virtue of their distinctive trans-cleavage activity, have given rise to representative detection platforms such as DETECTR, HOLMES, and SHERLOCK. Centered on the molecular mechanisms of the CRISPR-Cas12/13 systems, this article systematically reviews recent progress in rapid infectious disease diagnosis, food safety testing, and the integration of signal-readout technologies, and offers a forward-looking discussion of clinical translation pathways, multiplexed detection platforms, and trends toward intelligent integration.
-
2511.0028: AI as an Anti-Entropy Engine: Actively Designing Intelligent Matter from Dynamic States to Proto-Life
Abstract: The trial-and-error paradigm of traditional materials discovery, fundamentally constrained by its inherent high entropy, is proving inadequate for designing complex intelligent matter. Here, we propose a new scientific paradigm: Artificial Intelligence as an 'Anti-Entropy' Engine, transforming research from passive understanding to active design. By systematically injecting informational negative entropy across perception, planning, and execution loops, AI guides material systems from disorder to pre-defined functional order. We demonstrate this through empirical advances, such as the GNoME model discovering 2.2 million stable crystals, and construct a unified 'Perception-Planning-Execution' framework enabling inverse design across scales. This paradigm extends beyond static structures to dynamic non-equilibrium systems and life-like chemical networks. We prospectively map future frontiers using a 'Ladder of Intelligence' and address ethical governance, systemic risk, and sustainability. Ultimately, this marks a fundamental transition for humanity, from being passive observers of nature to becoming active 'anti-entropy' designers in the evolution of matter. This review not only synthesizes these advances but also provides a unifying conceptual framework and a clear roadmap for the field, aiming to catalyze the transition towards this fifth paradigm of scientific discovery.
Keywords: Anti-entropy; AI-Driven Design; Intelligent Matter; Inverse Design; Autonomous Laboratory; Life-like Systems; Interdisciplinary Paradigm
-
2511.0027: AI as an Anti-Entropy Engine: Actively Designing Intelligent Matter from Dynamic States to Proto-Life
Abstract: The trial-and-error paradigm of traditional materials discovery, fundamentally constrained by its inherent high entropy, is proving inadequate for designing complex intelligent matter. Here, we propose a new scientific paradigm: Artificial Intelligence as an 'Anti-Entropy' Engine, transforming research from passive understanding to active design. By systematically injecting informational negative entropy across perception, planning, and execution loops, AI guides material systems from disorder to pre-defined functional order. We demonstrate this through empirical advances, such as the GNoME model discovering 2.2 million stable crystals, and construct a unified 'Perception-Planning-Execution' framework enabling inverse design across scales. This paradigm extends beyond static structures to dynamic non-equilibrium systems and life-like chemical networks. We prospectively map future frontiers using a 'Ladder of Intelligence' and address ethical governance, systemic risk, and sustainability. Ultimately, this marks a fundamental transition for humanity, from being passive observers of nature to becoming active 'anti-entropy' designers in the evolution of matter. This review not only synthesizes these advances but also provides a unifying conceptual framework and a clear roadmap for the field, aiming to catalyze the transition towards this fifth paradigm of scientific discovery.
Keywords: Anti-entropy; AI-Driven Design; Intelligent Matter; Inverse Design; Autonomous Laboratory; Life-like Systems; Interdisciplinary Paradigm
-
2511.0024: Touch Beyond Vision: A Survey of Vision-Tactile-Language Models in Embodied Intelligence
Embodied intelligence increasingly leverages multimodal perception, particularly vision and language, to support rich interaction with the physical world. Yet the tactile modality remains under-explored, despite its essential role in human perception and manipulation. In this survey, we systematically review research at the intersection of vision, tactile sensing, and language, which we refer to as Vision-Tactile-Language (VTL) models. We provide (i) a historical context tracing the shift from vision-centric embodied systems to multisensory agents, (ii) foundational aspects of tactile sensing and representation, (iii) methods for integrating vision and touch, (iv) emerging architectures that incorporate language alongside vision and touch, (v) applications in embodied robotics, (vi) current challenges and open problems, and (vii) a forward-looking outlook toward tactile foundation models. We conclude by arguing that touch closes a key gap in embodied AI, enabling truly grounded perception, reasoning and action.
-
2511.0020: AI-Powered Rainfall Forecasting: Progress, Challenges, Future Directions
Rainfall forecasting holds significant importance across a wide range of sectors, including disaster prevention, energy planning, and agriculture. In the past decade, artificial intelligence (AI) has emerged as a revolutionary approach, aiming to overcome the long-standing limitations of traditional numerical weather prediction (NWP) models and statistical downscaling models (SDMs) for rainfall forecasting. This chapter briefly introduces the remarkable progress made in AI-based rainfall forecasting. It mainly focuses on three major aspects: physics-constrained machine learning (ML), multi-modal data fusion, and extreme event prediction. AI-based models can be used to resolve the subgrid-scale parameterization problems (e.g., convective parameterization) that have long troubled NWP models. For instance, DeepMind's GraphCast employs dynamic graph neural networks to generate a high-resolution global forecast; making 10-day forecasts with GraphCast takes less than a minute on a single Google TPU v4 machine. Regarding multi-modal data fusion, systems such as the National Oceanic and Atmospheric Administration (NOAA) Multi-Radar Multi-Sensor (MRMS) combine various data sources and significantly improve forecast accuracy. For extreme rainfall prediction, the application of adversarial training and attention mechanisms has also led to improvements. The review finally suggests future research directions, emphasizing how AI is advancing rainfall forecasting technology so that it can better meet the challenges posed by a changing climate.
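As a toy illustration of the graph-neural-network aggregation that systems like GraphCast build on, the sketch below performs one message-passing step on a hypothetical four-node mesh. The node features, adjacency list, and mixing weights are made up for illustration and are not taken from GraphCast.

```python
import numpy as np

def message_passing_step(features, neighbors, w_self=0.5, w_msg=0.5):
    """Update each node by mixing its own state with the mean of its
    neighbors' states, the basic aggregation used in many GNNs."""
    updated = np.empty_like(features)
    for i, nbrs in enumerate(neighbors):
        msg = features[nbrs].mean(axis=0)          # aggregate neighbor states
        updated[i] = w_self * features[i] + w_msg * msg
    return updated

# Hypothetical 4-station chain: node i's feature might encode a local
# atmospheric variable; the values below are arbitrary.
features = np.array([[1.0], [3.0], [5.0], [7.0]])
neighbors = [[1], [0, 2], [1, 3], [2]]
out = message_passing_step(features, neighbors)
```

Real systems learn the self/message weights and stack many such steps; this only shows the information flow along graph edges.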
-
2511.0019: From Virtual Cells to Programmable Humans: Advancing Digital Biology Through Hybrid AI Systems
Recent advances in artificial intelligence (AI), high-performance computing, and systems biology have accelerated the development of AI-powered virtual biological systems, from virtual cells to multiscale organ models and programmable virtual humans. These systems promise transformative applications in drug discovery, precision medicine, and in silico clinical trials. This review provides a critical synthesis of current progress, key technologies, and future directions across this spectrum. We explore hybrid modeling strategies that combine mechanistic models, such as ordinary and partial differential equations, with deep learning methods including convolutional, recurrent, and graph neural networks. We emphasize the importance of robust uncertainty quantification, simulation validation, and multiscale integration across molecular, cellular, organ-level, and systemic processes. A core contribution is the introduction of the SIM-CARD framework, a standardized simulation accountability protocol to document data provenance, modeling assumptions, performance metrics, and regulatory alignment. We propose a three-phase translational roadmap: (1) validated AI-augmented virtual cells and organs (by 2030), (2) interoperable multi-organ physiological systems (by 2040), and (3) programmable full-body virtual humans supporting personalized simulations and regulatory use cases (by 2055). We identify key enablers, including high-fidelity multiscale data, computational scalability, and simulation governance, as well as bottlenecks such as algorithmic bias, explainability, and regulatory uncertainty. Finally, we call for collaborative efforts to establish minimal benchmarking suites, FAIR-compliant simulation metadata, and cross-institutional federated learning infrastructure. This review aims to guide the scientific, regulatory, and clinical communities in navigating the complex yet promising trajectory toward clinically actionable programmable human simulations.
-
2511.0018: From Virtual Cells to Programmable Humans: Advancing Digital Biology Through Hybrid AI Systems
The convergence of artificial intelligence and systems biology is giving rise to a new paradigm in biomedical research: AI-powered virtual biological systems. From single-cell simulations to organ-level models and ultimately programmable virtual humans, this digital continuum holds transformative potential for disease modeling, personalized medicine, and therapeutic discovery. In this review, we critically examine the state of the art in AI-driven simulations, including the numerical foundations, multiscale integration strategies, and the emerging class of hybrid models that bridge mechanistic and data-driven approaches. We explore the challenges of validation, uncertainty quantification, and regulatory alignment across simulation scales, with particular focus on the development of simulation accountability frameworks such as SIM-CARDs. Ethical and privacy concerns, including algorithmic bias and data sovereignty in patient-specific models, are also addressed, alongside concrete proposals for governance and federated simulation workflows. Special attention is given to the technical complexity of multiscale modeling, including the integration of mechanistic solvers with neural architectures and the computational resources required for real-time, clinically actionable simulations. We conclude with a translational roadmap for virtual biology that projects validated virtual cells for drug screening by 2030, multi-organ simulations by 2040, and the emergence of programmable virtual humans by 2055. By unifying high-fidelity numerical models with explainable AI, and aligning simulation design with ethical, regulatory, and clinical needs, the field of digital biology is positioned to unlock scalable and trustworthy biomedical innovation.
-
2511.0014: Artificial Intelligence in Biomedical Research: From Data Integration to Precision Medicine
This comprehensive review examines the transformative role of artificial intelligence in biomedical research, from foundational data integration to clinical applications. The paper explores how AI techniques facilitate multimodal data fusion across diverse biological data types, employing both traditional statistical methods and advanced deep learning architectures including variational autoencoders, graph neural networks, and transformer models. It evaluates AI applications in medical imaging, where convolutional neural networks have achieved remarkable diagnostic accuracy (up to 94% in COVID-19 detection) while enhancing segmentation and classification tasks across multiple imaging modalities. The review further investigates generative AI's impact on molecular design and drug discovery, highlighting transformer-based architectures like TransAntivirus that navigate vast chemical spaces to optimize therapeutic candidates. Finally, it examines AI-enabled precision medicine applications, including Clinical Decision Support Systems and federated learning approaches that balance analytical power with privacy preservation. Despite significant progress, implementation challenges persist, including data heterogeneity, model explainability, and ethical concerns regarding bias and privacy. The paper underscores the importance of developing interpretable AI systems that integrate seamlessly into clinical workflows while addressing regulatory, ethical, and economic considerations to realize the full potential of AI in advancing biomedical research and healthcare delivery.
-
2511.0012: Physics-Informed Neural Networks and Neural Operators for Parametric PDEs: Methods, Applications and Future Directions
Partial differential equations (PDEs) arise ubiquitously in science and engineering, where solutions depend on parameters representing physical properties, boundary conditions, or geometric configurations. Traditional numerical methods require solving the PDE anew for each parameter value, making parameter space exploration prohibitively expensive for high-dimensional problems. Recent advances in machine learning, particularly physics-informed neural networks (PINNs) and neural operators, have revolutionized parametric PDE solving by learning solution operators that generalize across parameter spaces. We critically analyze two main paradigms: (1) PINNs, which embed physical laws as soft constraints and excel at inverse problems with sparse data, and (2) neural operators (including DeepONet, the Fourier Neural Operator, and their variants), which learn mappings between infinite-dimensional function spaces and achieve unprecedented parameter space generalization. Through detailed comparisons across fluid dynamics, solid mechanics, heat transfer, and electromagnetics, we show that neural operators can achieve computational speedups of 10^3 to 10^5 over traditional solvers for multi-query scenarios, while maintaining comparable accuracy. We provide practical guidance for method selection, discuss theoretical foundations including universal approximation and convergence guarantees, and identify critical open challenges including high-dimensional parameter spaces, complex geometries, and out-of-distribution generalization. This work establishes a unified framework for understanding parametric PDE solvers through the lens of operator learning, offering a comprehensive resource, which we intend to incrementally update, for this rapidly evolving field.
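The "physical laws as soft constraints" idea behind PINNs can be sketched numerically. The snippet below assumes a toy decay ODE u'(t) = -k u(t) and uses finite differences in place of the neural network and automatic differentiation a real PINN would use; it only shows how a physics residual enters the loss alongside a data-fitting term.

```python
import numpy as np

def pinn_style_loss(u, t, u_obs, k=1.0, lam=1.0):
    """Composite loss: mean-squared data misfit plus a penalty on the
    residual of the governing equation u'(t) + k*u(t) = 0."""
    data_loss = np.mean((u - u_obs) ** 2)      # fit to observations
    du_dt = np.gradient(u, t)                  # finite-difference u'(t)
    residual = du_dt + k * u                   # should vanish if physics holds
    physics_loss = np.mean(residual ** 2)      # soft constraint
    return data_loss + lam * physics_loss

t = np.linspace(0.0, 1.0, 50)
u_exact = np.exp(-t)                # true solution of u' = -u, u(0) = 1
loss_good = pinn_style_loss(u_exact, t, u_exact)
# A constant function matches no physics: the residual term penalizes it.
loss_bad = pinn_style_loss(np.ones_like(t), t, u_exact)
```

In a real PINN, u would be a network u_theta(t) and both terms would be minimized over theta; the key point is that physics violations raise the loss even where data are sparse.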
-
2511.0011: From Virtual Cells to Programmable Humans: Advancing Digital Biology Through Hybrid AI Systems
Recent advances in artificial intelligence (AI), high-performance computing, and systems biology have accelerated the development of AI-powered virtual biological systems, from virtual cells to multiscale organ models and programmable virtual humans. These systems promise transformative applications in drug discovery, precision medicine, and in silico clinical trials. This review provides a critical synthesis of current progress, key technologies, and future directions across this spectrum. We explore hybrid modeling strategies that combine mechanistic models, such as ordinary and partial differential equations, with deep learning methods including convolutional, recurrent, and graph neural networks. We emphasize the importance of robust uncertainty quantification, simulation validation, and multiscale integration across molecular, cellular, organ-level, and systemic processes. A core contribution is the introduction of the SIM-CARD framework, a standardized simulation accountability protocol to document data provenance, modeling assumptions, performance metrics, and regulatory alignment. We propose a three-phase translational roadmap: (1) validated AI-augmented virtual cells and organs (by 2030), (2) interoperable multi-organ physiological systems (by 2040), and (3) programmable full-body virtual humans supporting personalized simulations and regulatory use cases (by 2055). We identify key enablers, including high-fidelity multiscale data, computational scalability, and simulation governance, as well as bottlenecks such as algorithmic bias, explainability, and regulatory uncertainty. Finally, we call for collaborative efforts to establish minimal benchmarking suites, FAIR-compliant simulation metadata, and cross-institutional federated learning infrastructure. This review aims to guide the scientific, regulatory, and clinical communities in navigating the complex yet promising trajectory toward clinically actionable programmable human simulations.
-
2511.0010: From AI for Science to Agentic Science: A Survey on Autonomous Scientific Discovery and AI Scientists
Artificial intelligence (AI) is reshaping scientific discovery, evolving from specialized computational tools into autonomous research partners. We position Agentic Science as a pivotal stage within the broader AI for Science paradigm, where AI systems progress from partial assistance to full scientific agency. Enabled by large language models (LLMs), multimodal systems, and integrated research platforms, agentic AI exhibits capabilities in hypothesis generation, experimental design, execution, analysis, and iterative refinement, behaviors once regarded as uniquely human. This survey offers a domain-oriented review of autonomous scientific discovery across life sciences, chemistry, materials, and physics, synthesizing research progress and advances within each discipline. We unify three previously fragmented perspectives (process-oriented, autonomy-oriented, and mechanism-oriented) through a comprehensive framework that connects foundational capabilities, core processes, and domain-specific realizations. Building on this framework, we (i) trace the evolution of AI for Science, (ii) identify five core capabilities underpinning scientific agency, (iii) model discovery as a dynamic four-stage workflow, (iv) review applications across life sciences, chemistry, materials science, and physics, and (v) synthesize key challenges and future opportunities. This work establishes a domain-oriented synthesis of autonomous scientific discovery and positions Agentic Science as a structured paradigm for advancing AI-driven research.
-
2511.0004: Vision Transformers for Semiconductor Defect Detection: A Comprehensive Survey of AI-Driven Image Segmentation from CNNs to Foundation Models (2015-2025)
-
2510.0044: A Comprehensive Survey on Deep Learning
Machine learning and deep learning methodologies have revolutionized computational approaches to complex problem-solving across numerous domains, emerging as transformative technologies in artificial intelligence research [1,7]. This comprehensive review synthesizes current literature to examine the theoretical foundations, methodological advancements, and practical implementations of these techniques, highlighting their evolution from basic machine learning concepts to sophisticated deep neural architectures [2,9]. The analysis demonstrates remarkable success in applications ranging from computer vision and natural language processing to healthcare diagnostics and autonomous systems, with deep learning models achieving unprecedented performance in pattern recognition tasks [3,4,8]. However, significant challenges persist, including the need for massive labeled datasets, computational resource requirements, model interpretability issues, and inherent parameter redundancy in deep architectures [5,6]. The review identifies emerging opportunities in transfer learning, few-shot learning, and explainable AI as promising research directions [10]. By critically evaluating both current limitations and future potential, this analysis provides a structured framework for researchers to advance the field while addressing practical implementation barriers across diverse application domains.
-
2510.0033: AI Transformation in Biomedical Research: From Data-Driven to Insight-Driven Approaches
This review examines the ongoing transformation of artificial intelligence applications in biomedical research, tracing the evolution from data-driven to insight-driven approaches. It synthesizes advances in AI-powered multimodal data integration techniques, including early, intermediate, late, and hybrid fusion strategies that effectively combine heterogeneous biomedical data sources. The review explores how network-based computational frameworks and single-cell technologies are revolutionizing disease mechanism analysis through multi-omics integration, enabling the identification of dysregulated pathways and potential therapeutic targets. It further evaluates AI's role in enabling precision medicine through personalized diagnostics, treatment selection, and radiomics-based healthcare. The integration of AI with various omics disciplines has enhanced understanding of disease mechanisms at molecular, cellular, and tissue levels, creating unprecedented opportunities for early diagnosis and targeted therapeutics. The review concludes by addressing critical challenges including model explainability and data privacy considerations, while highlighting the emergence of closed-loop AI systems that actively participate in scientific discovery through continuous learning and adaptation. These developments collectively signal a paradigm shift toward AI systems that not only analyze biomedical data but generate actionable insights that advance clinical practice and scientific understanding.
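The early versus late fusion strategies named in this abstract can be contrasted in a few lines. The modality names, feature dimensions, and linear scorers below are synthetic placeholders for illustration, not methods from the review.

```python
import numpy as np

rng = np.random.default_rng(0)
omics = rng.normal(size=(8, 4))     # hypothetical transcriptomic features, 8 samples
imaging = rng.normal(size=(8, 3))   # hypothetical radiomic features, same samples

# Early fusion: concatenate raw features, then fit one model on the joint vector.
early_input = np.concatenate([omics, imaging], axis=1)   # shape (8, 7)

# Late fusion: score each modality separately, then combine the predictions.
w_omics = rng.normal(size=4)        # stand-in for a per-modality model
w_imaging = rng.normal(size=3)
pred_omics = omics @ w_omics
pred_imaging = imaging @ w_imaging
late_pred = 0.5 * pred_omics + 0.5 * pred_imaging        # averaged scores
```

Intermediate and hybrid fusion sit between these extremes, merging learned representations rather than raw features or final scores.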
-
2510.0032: Artificial Intelligence in Biomedical Research: From Data Integration to Precision Medicine
This comprehensive review examines the transformative role of artificial intelligence in biomedical research, from foundational data integration to clinical applications. The paper explores how AI techniques facilitate multimodal data fusion across diverse biological data types, employing both traditional statistical methods and advanced deep learning architectures including variational autoencoders, graph neural networks, and transformer models. It evaluates AI applications in medical imaging, where convolutional neural networks have achieved remarkable diagnostic accuracy (up to 94% in COVID-19 detection) while enhancing segmentation and classification tasks across multiple imaging modalities. The review further investigates generative AI's impact on molecular design and drug discovery, highlighting transformer-based architectures like TransAntivirus that navigate vast chemical spaces to optimize therapeutic candidates. Finally, it examines AI-enabled precision medicine applications, including Clinical Decision Support Systems and federated learning approaches that balance analytical power with privacy preservation. Despite significant progress, implementation challenges persist, including data heterogeneity, model explainability, and ethical concerns regarding bias and privacy. The paper underscores the importance of developing interpretable AI systems that integrate seamlessly into clinical workflows while addressing regulatory, ethical, and economic considerations to realize the full potential of AI in advancing biomedical research and healthcare delivery.
-
2510.0014: LLM-empowered knowledge graph construction: A survey
Knowledge Graphs (KGs) have long served as a fundamental infrastructure for structured knowledge representation and reasoning. With the advent of Large Language Models (LLMs), the construction of KGs has entered a new paradigm, shifting from rule-based and statistical pipelines to language-driven and generative frameworks. This survey provides a comprehensive overview of recent progress in LLM-empowered knowledge graph construction, systematically analyzing how LLMs reshape the classical three-layered pipeline of ontology engineering, knowledge extraction, and knowledge fusion. We first revisit traditional KG methodologies to establish conceptual foundations, and then review emerging LLM-driven approaches from two complementary perspectives: schema-based paradigms, which emphasize structure, normalization, and consistency; and schema-free paradigms, which highlight flexibility, adaptability, and open discovery. Across each stage, we synthesize representative frameworks, analyze their technical mechanisms, and identify their limitations. Finally, the survey outlines key trends and future research directions, including KG-based reasoning for LLMs, dynamic knowledge memory for agentic systems, and multimodal KG construction. Through this systematic review, we aim to clarify the evolving interplay between LLMs and knowledge graphs, bridging symbolic knowledge engineering and neural semantic understanding toward the development of adaptive, explainable, and intelligent knowledge systems.
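A minimal sketch of the schema-free extraction step described above: prompt an LLM to emit one "subject | relation | object" line per fact, then parse the result into triples. The "LLM output" below is a hand-written mock, and the pipe-delimited format is an assumption made for illustration.

```python
def parse_triples(llm_output):
    """Parse lines of the form 'subject | relation | object' into
    (subject, relation, object) tuples, skipping malformed lines."""
    triples = []
    for line in llm_output.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triples.append(tuple(parts))
    return triples

# Hand-written mock standing in for a model response.
mock_output = """
aspirin | treats | headache
aspirin | is_a | NSAID
malformed line without separators
"""
triples = parse_triples(mock_output)
```

A schema-based pipeline would additionally validate each triple against a predefined ontology (allowed entity types and relations) before adding it to the graph.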
-
2510.0013: A Review of Intelligent Rock Mechanics: From Methods to Applications
Artificial Intelligence (AI) has great potential to transform rock mechanics by tackling its inherent complexities, such as anisotropy, nonlinearity, discontinuity, and multiphase behavior. This review explores the evolution of AI, from basic neural networks like the BP model to advanced architectures such as Transformers, and their applications in areas like microstructure reconstruction, prediction of mechanical parameters, and engineering challenges such as rockburst prediction and tunnel deformation. Machine learning techniques, particularly Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), have been crucial in automating tasks like fracture detection and efficiently generating 3D digital rock models. However, the effectiveness of AI in rock mechanics is limited by data scarcity and the need for high-quality datasets. Hybrid approaches, such as combining physics-informed neural networks (PINNs) with traditional numerical methods, offer promising solutions for solving governing equations. Additionally, Large Language Models (LLMs) are emerging as valuable tools for code generation and decision-making support. Despite these advancements, challenges remain, including issues with reproducibility, model interpretability, and adapting AI models to specific domains. Future progress will hinge on the availability of improved datasets, greater interdisciplinary collaboration, and the integration of spatial intelligence frameworks to bridge the gap between AI's theoretical potential and its practical application in rock engineering.
-
2510.0012: A Review of Intelligent Rock Mechanics: From Methods to Applications
Intelligent rock mechanics represents the convergence of artificial intelligence (AI) and classical rock mechanics, providing new paradigms to understand, model, and predict the complex behaviors of geological materials. This review synthesizes recent progress from foundational AI methodologies to their practical applications in rock engineering. Traditional challenges, such as anisotropy, discontinuities, and multiphysics coupling, have been re-examined through data-driven and hybrid approaches that integrate learning algorithms with physical principles. The study traces the evolution of AI in this field, from early backpropagation and support vector machines to modern deep learning frameworks such as convolutional and transformer architectures, highlighting their roles in microstructure reconstruction, mechanical parameter estimation, constitutive modeling, and real-time hazard prediction. Emerging techniques, including physics-informed neural networks and graph-based learning, bridge data-driven inference with physical interpretability, while large language models are beginning to facilitate automated code generation and decision support in geotechnical analysis. Despite remarkable progress, key challenges remain in data quality, model generalization, and interpretability. Addressing these issues requires standardized datasets, interdisciplinary collaboration, and the establishment of transparent, reproducible AI workflows. The paper concludes by outlining a forward-looking perspective on developing next-generation intelligent frameworks capable of coupling physical knowledge, spatial reasoning, and adaptive learning, thereby advancing rock mechanics from empirical modeling toward fully intelligent, autonomous systems.