Papers
Event:
-
2510.0031 Simulation, Influence, and Domestication: Ethical Risks and Regulatory Pathways of Audience Agents in News Communication
With the rapid development of generative artificial intelligence and agent technology, the field of news communication is undergoing a paradigm shift from "content digitization" to "cognitive intelligence." Audience agents, a new class of digital entities capable of simulating, predicting, and even partially substituting for the cognition and behavior of human audiences, are becoming deeply embedded in every stage of news production, distribution, and feedback; while they improve communication efficiency, they also raise complex ethical challenges. Drawing on recent empirical data, including 2025 Stanford University research on AI behavior and Chinese AI large-model evaluation reports, this paper systematically examines the ethical risks arising from the use of audience agents in news communication and constructs corresponding regulatory pathways. The study finds that these risks concentrate at three levels: at the simulation level, "digital twin" distortion, attribution paradoxes, and trust deficits; at the influence level, commercial value eroding public purpose and value drift caused by flawed human-machine collaboration; at the domestication level, the dissolution of human agency through technological dependence and a governance vacuum created by lagging rules. To address these risks, the paper draws on dynamic capability theory to propose a multidimensional governance framework centered on "sensing, seizing, and reconfiguring," offering a solution of both theoretical and practical value for the steady transformation of new mainstream media in the intelligent era.
-
2510.0030 Latent-Diffusion Guided Cross-View Alignment for Heterogeneous Graph Recommendation
Recommender systems operating on heterogeneous, multi-relational graphs contend with noise and incompleteness in auxiliary signals, which can destabilize learning and degrade ranking performance when targeting robust representations. Naive cross-view training risks propagating noise across views, and existing contrastive or augmentation-based schemes often hinge on design choices and can struggle to scale to large, complex graphs. We propose a latent-diffusion guided cross-view alignment framework for heterogeneous graph recommendation that jointly learns a relation-aware heterogeneous GNN encoder, producing paired target and auxiliary embeddings, and a compact, time-conditioned latent-space denoiser that maps noisy auxiliary latents toward target-view semantics. The denoiser provides principled supervision to disentangle structured noise, with its residual outputs fused into target embeddings to refine ranking-relevant representations. Training optimizes a joint denoising objective and a ranking objective, enabling scalable, robust cross-view alignment without ad-hoc augmentations. Empirical results on implicit-feedback data demonstrate improved robustness and ranking accuracy under noisy auxiliary signals, with flexible gradient-flow and fusion strategies supporting stable end-to-end training on large graphs. Ablations highlight the benefits of explicit noise modeling in auxiliary views, diffusion-based supervision for stability, and scalable, relation-aware encoding of practical significance for recommender systems.
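A rough numerical illustration of the residual-fusion step described above (this is not the authors' implementation; the `fuse` function, the gate value, and the toy denoiser are invented for the sketch):

```python
import numpy as np

def fuse(target_emb, aux_emb_noisy, denoise_fn, t, gate=0.5):
    # The denoiser maps a noisy auxiliary latent toward target-view
    # semantics; its residual output is fused into the target embedding.
    residual = denoise_fn(aux_emb_noisy, t) - aux_emb_noisy
    return target_emb + gate * residual

# Toy time-conditioned "denoiser": shrinks the latent as t grows
# (a placeholder for the learned, time-conditioned network).
denoise = lambda z, t: (1.0 - t) * z

z_tgt = np.ones(4)                         # target-view embedding
z_aux = np.array([1.0, -2.0, 0.5, 3.0])    # noisy auxiliary latent
refined = fuse(z_tgt, z_aux, denoise, t=0.8)
```

The gate scalar stands in for the paper's flexible fusion strategies; in the real system both the denoiser and the fusion weights are learned end-to-end with the ranking objective.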
-
2510.0029 Is AI Conscious? A Multi-Level Assessment Framework for AI Consciousness
This paper examines the frontier question of whether AI possesses consciousness. It establishes an evaluation framework, collects and organizes the latest research results, and scores the consciousness level of current AI systems. Based on a combined analysis across three dimensions, philosophy, neuroscience, and psychology, the results indicate an overall support level for current AI consciousness of about 43.84%. Visual summaries of the results are available at acw.gixia.org.
-
2510.0028 Estimating Rural Rooftop Solar Potential Using Semantic Segmentation and Multi-Source Data
Solar energy, as a clean and renewable resource, has gained significant global attention. In contrast to urban areas, where buildings vary in height and are often obstructed, the relatively flat rural buildings in northern China provide optimal conditions for solar panel installation. Consequently, the solar energy potential of northern rural areas has attracted significant attention from researchers. Traditional studies typically rely on solar radiation simulation software and 3D models to estimate solar radiation and the solar energy potential of buildings. However, the lack of comprehensive and accurate 3D building model data for rural areas in China has significantly hindered progress in this field. To address this limitation, this study proposes a novel method for rapidly estimating the solar energy potential of rural buildings by integrating deep learning algorithms with parametric modeling platforms. Using convolutional neural networks (CNNs), the proposed method efficiently and accurately extracts building footprints from complex satellite imagery. These footprints are then imported into the Grasshopper parametric platform to generate and optimize vector outlines of buildings. By combining these outlines with digital surface model (DSM) data containing building height information, the study constructs precise 3D building models. Furthermore, GPU-accelerated solar simulation software, Vitality 2.0, is used for rapid solar energy potential estimation. The study extracted building roofs from satellite imagery for 31 villages in Tianjin and generated parametric three-dimensional village models. Through simulation, the research found that because village buildings are relatively low and do not shade one another, larger villages have greater roof area and consequently higher photovoltaic power generation capacity. The study also revealed that metal roofs, which dissipate heat better, yield higher photovoltaic panel conversion efficiency. Therefore, compared to villages whose roofs are primarily concrete and ceramic tile, villages dominated by metal roofs can recoup the full cost of photovoltaic panels in a shorter period.
-
2510.0026 Geometry-Aware Optimal Flow Matching via Convex Potentials
Generative modeling under quadratic optimal transport (OT) aims to learn deterministic maps that push mass from a simple source distribution \(p_0\) to a target distribution \(p_1\) along the Wasserstein-2 (W2) geodesics. While flow-based models and neural differential equations offer flexible transports, existing approaches typically rely on multi-step integration and yield trajectories whose curvature deviates from W2 geodesics, reducing efficiency, interpretability, and stability. We propose a geometry-aware framework that parameterizes time-dependent velocity fields as gradients of convex potentials modeled by Input Convex Neural Networks (ICNNs). This convex-potential representation guarantees transport along straight lines, exactly matching the W2 map under quadratic cost. Training uses a Flow Matching objective tailored to the convex setting, with explicit gradient computations and a dedicated inversion subproblem to recover preimages under the convex-potential flow; an optional amortization network provides favorable initializations for the inversion and accelerates optimization. The method is agnostic to the specific transport plan and can condition on arbitrary couplings between \(p_0\) and \(p_1\). Empirically, the approach yields geometry-faithful transports along W2 geodesics, enabling fast sampling with one-step or few-step updates and controlled curvature. Diagnostics on representative datasets confirm geometric fidelity and trainability, and we discuss initialization and transport-plan considerations for scalable, stable generative modeling under quadratic OT.
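A minimal sketch of the straight-line Flow Matching target this abstract is built around: along a W2 geodesic the interpolant is linear in t and the target velocity is constant (the paper's ICNN parameterization of the velocity field is not reproduced here; `fm_pair` is an invented helper name):

```python
import numpy as np

def fm_pair(x0, x1, t):
    # Straight-line (W2-geodesic) interpolant between a source sample x0
    # and a target sample x1, plus the velocity a flow-matching model is
    # regressed onto. Under quadratic cost the optimal path is this line,
    # so the target velocity is simply the constant displacement x1 - x0.
    xt = (1.0 - t) * x0 + t * x1
    vt = x1 - x0
    return xt, vt

x0 = np.zeros(2)
x1 = np.array([2.0, -4.0])
xt, vt = fm_pair(x0, x1, 0.25)
```

In training, a velocity network v_theta(xt, t) would be fit by minimizing ||v_theta(xt, t) - vt||^2 over sampled pairs and times; the convex-potential construction in the paper additionally guarantees that the learned field stays gradient-of-convex, so sampled trajectories remain straight.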
-
2510.0025 Beyond Essence: HUMN-DEF's Seven-Axis Map of Scholarly Definitions of "the Human"
Definitions of the human span biology, psychology, anthropology, law, and philosophy, resisting reduction to a single trait. This study introduces HUMN-DEF, a multiaxial framework that models seven definitional axes—Taxonomic/Evolutionary (A1), Genetic/Developmental (A2), Cognitive/Linguistic (A3), Physiological/Regulatory (A4), Sociocultural/Anthropological (A5), Legal/Normative (A6), and Phenomenological/Subjective (A7)—and represents texts as Definition Profile Vectors (DPVs). A purposive cross-disciplinary corpus (n = 31) was coded by two independent automated procedures (Krippendorff's α = .84), analyzed with post-stratification weights (field × decade × language), and evaluated via percentile bootstraps. Results converge on Sociocultural (A5) and Cognitive/Linguistic (A3) as predominant emphases; Taxonomy/Genetics (A1/A2) anchor but are not sufficient; Legal/Normative (A6) rises under balanced representation; Phenomenology (A7) is mid-level; Physiology (A4) is specialized. Cross-field disagreement, measured with a Definitional Diversity Index (Jensen–Shannon divergence), is moderate (0.394; 95% CIs ≈ [0.345, 0.475]). We argue that "human" is best treated as a transparent, context-weighted mixture over A1–A7.
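The Definitional Diversity Index above is a Jensen–Shannon divergence between Definition Profile Vectors. A sketch of that computation, with made-up example profiles (the real DPVs come from the coded corpus, not these numbers):

```python
import numpy as np

def js_divergence(p, q):
    # Jensen-Shannon divergence (base-2, so bounded in [0, 1]) between
    # two Definition Profile Vectors, i.e. probability distributions
    # over the seven axes A1-A7.
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical profiles: a biology-leaning text vs. a law-leaning text.
dpv_bio = [0.30, 0.25, 0.10, 0.15, 0.10, 0.05, 0.05]
dpv_law = [0.05, 0.05, 0.15, 0.05, 0.20, 0.40, 0.10]
d = js_divergence(dpv_bio, dpv_law)
```

Identical profiles give a divergence of 0; maximally disjoint ones give 1, which makes the reported 0.394 readable as moderate disagreement.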
-
2510.0024 LECTOR: LLM-Enhanced Concept-based Test-Oriented Repetition
Spaced repetition systems are fundamental to efficient learning and memory retention, but existing algorithms often struggle with semantic interference and personalized adaptation. We present LECTOR (LLM-Enhanced Concept-based Test-Oriented Repetition), a novel adaptive scheduling algorithm designed for test-oriented learning scenarios, particularly language examinations where success rate is paramount. LECTOR leverages large language models for semantic analysis while incorporating personalized learning profiles, addressing the critical challenge of semantic confusion in vocabulary learning by combining LLM-powered semantic similarity assessment with established spaced repetition principles. Our comprehensive evaluation against six baseline algorithms (SSP-MMC, SM2, HLR, FSRS, ANKI, THRESHOLD) across 100 simulated learners over 100 days demonstrates significant improvements: LECTOR achieves a 90.2% success rate compared to 88.4% for the best baseline (SSP-MMC), a 2.0% relative improvement. The algorithm shows particular strength in handling semantically similar concepts, reducing confusion-induced errors while maintaining computational efficiency. Our results establish LECTOR as a promising direction for intelligent tutoring systems and adaptive learning platforms.
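The core intuition, semantically confusable items should be reviewed sooner, can be sketched as a similarity penalty on an SM2-style interval update. This is an illustrative heuristic, not LECTOR's actual scheduler (which scores similarity with an LLM and uses personalized profiles):

```python
def next_interval(prev_interval, ease, grade, max_similarity):
    """Next review interval in days.

    grade: recall quality 0-5 (SM2 convention; < 3 means failed recall).
    max_similarity: similarity in [0, 1] to the most confusable other
        scheduled item. Higher similarity shortens the interval, so the
        learner revisits confusable vocabulary before interference sets in.
    """
    if grade < 3:
        return 1.0                         # failed recall: restart at 1 day
    penalty = 1.0 - 0.5 * max_similarity   # up to 50% shorter interval
    return prev_interval * ease * penalty
```

For example, an item with no confusable neighbors grows its interval normally, while an item with a near-duplicate neighbor is scheduled at roughly half the spacing.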
-
2510.0023 Robust Zero-Shot NER for Crises via Iterative Knowledge Distillation and Confidence-Gated Induction
This research presents a comprehensive diagnostic study of confidence-gated iterative induction for zero-shot Named Entity Recognition (NER) in crisis scenarios. While existing approaches struggle to adapt to novel disaster lexicons without manually curated resources, we investigate whether iterative knowledge distillation can overcome these limitations. Our framework leverages a pretrained language model to extract high-recall entity candidates, then iteratively distills domain knowledge through a self-correcting loop that uses high-confidence seeds to induce micro-gazetteers and syntactic rules. Comprehensive evaluations on synthetic crisis data reveal that the framework maintains a constant zero-shot F1-score of approximately 0.295 across all experimental configurations, demonstrating that the iterative mechanism provides no measurable improvement over baseline approaches. This negative result offers valuable diagnostic insights into the fundamental challenges of adaptive NER in dynamic crisis domains, including confidence threshold calibration difficulties, clustering algorithm limitations, and error propagation risks. The findings provide a cautionary tale for researchers working on adaptive NER systems and establish a foundation for future research on more robust zero-shot approaches in crisis scenarios.
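The shape of the confidence-gated loop can be sketched as follows (an invented simplification: in the paper the seeds induce rules that rescore the remaining candidates, whereas here rescoring is omitted, so the loop converges immediately, which incidentally mirrors the flat-F1 negative result):

```python
def induce_gazetteer(candidates, threshold=0.9, max_iters=3):
    """One confidence-gated induction loop.

    candidates: dict mapping entity string -> model confidence in [0, 1].
    Only candidates at or above the threshold are admitted as seeds for
    the micro-gazetteer; the loop stops once no new seeds appear.
    """
    gazetteer = set()
    for _ in range(max_iters):
        seeds = {e for e, conf in candidates.items() if conf >= threshold}
        if seeds <= gazetteer:   # no new high-confidence entities: converged
            break
        gazetteer |= seeds
        # In the full system, seeds would induce syntactic rules here,
        # rescoring low-confidence candidates before the next pass.
    return gazetteer
```

The paper's diagnosis, that threshold calibration and error propagation dominate, corresponds to the two knobs visible even in this stub: the threshold value and what the loop does with newly admitted seeds.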
-
2510.0022 Adaptive Log Anomaly Detection through Data-Centric Drift Characterization and Policy-Driven Lifelong Learning
Log-based anomaly detectors degrade over time due to concept drift arising from software updates or workload changes. Existing systems typically react by retraining entire models, leading to catastrophic forgetting and inefficiencies. We propose an adaptive framework that first classifies drift in log data into semantic (frequency shifts within known templates) and syntactic (emergence of new log templates) categories via statistical tests and novelty detection. Based on the identified drift type, a policy-driven lifelong learning manager applies targeted updates: experience replay to mitigate forgetting under semantic drift, and dynamic model expansion to accommodate syntactic drift. This approach is validated on semi-synthetic logs and real-world longitudinal datasets (HDFS, Apache, and BGL), maintaining high F1-scores, reducing computational overhead, and preserving historical knowledge compared to monolithic retraining.
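The semantic/syntactic split can be sketched with template counts. As an assumption for the sketch, a total-variation distance stands in for the paper's statistical tests; the function name and threshold are invented:

```python
def classify_drift(baseline_counts, window_counts, tv_threshold=0.2):
    """Classify drift between a baseline and a recent window of logs.

    baseline_counts / window_counts: dict mapping log template -> count.
    Syntactic drift: templates appear that the baseline never saw.
    Semantic drift: the frequency distribution over known templates shifts.
    """
    new_templates = set(window_counts) - set(baseline_counts)
    if new_templates:
        return "syntactic", new_templates

    keys = set(baseline_counts) | set(window_counts)
    nb = sum(baseline_counts.values())
    nw = sum(window_counts.values())
    # Total-variation distance between the two template distributions.
    tv = 0.5 * sum(abs(baseline_counts.get(k, 0) / nb
                       - window_counts.get(k, 0) / nw) for k in keys)
    return ("semantic", tv) if tv > tv_threshold else ("stable", tv)
```

The policy manager then dispatches on the label: "semantic" triggers experience replay over old templates, "syntactic" triggers model expansion for the new ones.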
-
2510.0021 ConFIT: A Robust Knowledge-Guided Contrastive Framework for Financial Extraction
Financial text extraction faces serious challenges in multi-entity sentiment attribution and numerical sensitivity, often leading to pitfalls in real-world deployment. In this work, we propose ConFIT (Contrastive Financial Information Tuning), a knowledge-guided contrastive learning framework that employs a Semantic-Preserving Perturbation (SPP) engine to generate high-quality, programmatically synthesized hard negatives. By integrating domain knowledge sources such as the Loughran-McDonald lexicon and Wikidata, and applying rigorous perplexity and Natural Language Inference (NLI) filtering, ConFIT trains language models to differentiate subtle perturbations in financial statements. Evaluations on FiQA and SENTiVENT using FinBERT and Llama-3 8B reveal both promising improvements and unexpected pitfalls, highlighting challenges that warrant further research.
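One flavor of programmatically synthesized hard negative, numerical perturbation, can be sketched in a few lines. This is an illustration of the general idea, not ConFIT's SPP engine (which also uses lexicon substitutions and NLI/perplexity filters); the helper name is invented:

```python
import re

def numeric_hard_negative(sentence, factor=2.0):
    # Scale the first number in the sentence so the financial meaning
    # changes while the wording stays almost identical, yielding a hard
    # negative that stresses a model's numerical sensitivity.
    m = re.search(r"\d+(?:\.\d+)?", sentence)
    if not m:
        return None  # nothing to perturb
    new_val = float(m.group()) * factor
    return sentence[:m.start()] + f"{new_val:g}" + sentence[m.end():]
```

A contrastive objective would then push the original and perturbed sentences apart in embedding space, teaching the model that "rose 5%" and "rose 10%" are not interchangeable.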
-
2510.0020 Hierarchical Change Signature Analysis: A Framework for Online Discrimination of Incipient Faults and Benign Drifts in Industrial Time Series
Industrial fault detection systems often struggle to distinguish benign operational drifts (e.g., tool wear, recipe changes) from incipient faults, frequently adapting to faults as new "normal" states and risking catastrophic failures. This work proposes a hierarchical framework that decouples change detection from change characterization. When a drift is detected, the system generates a Multi-Scale Change Signature (MSCS) that quantifies geometric and statistical transformations in the primary detector's latent space. An unsupervised Drift Characterization Module (DCM), trained on an Online Normality Baseline (ONB), classifies each signature as benign or potentially faulty. Benign drifts are ignored, while potential faults are flagged for review; confirmed benign drifts are incorporated into the ONB for future adaptation. The framework is model-agnostic, computationally efficient, and scalable through a tiered human-in-the-loop mechanism. Experiments on the Tennessee Eastman Process dataset with injected drifts and faults demonstrate high fault detection rates, fewer false alarms, and efficient adaptation to benign changes.
-
2510.0019 Hierarchical Adaptive Normalization: A Placement-Conditioned Cascade for Robust Wearable Activity Recognition
Wearable Human Activity Recognition (HAR) systems face significant performance degradation when sensors are placed at different body locations or orientations. We introduce a hierarchical adaptive normalization method that addresses these challenges through a two-stage cascade. The first stage combines gravity-based orientation correction with placement context inference using signal variance analysis, while a novel stability gate prevents harmful adaptation during unstable periods. The second stage employs placement-conditioned adaptive Batch Normalization to refine feature representations in real-time. Comprehensive evaluations on public and custom datasets show that our method achieves 0.847±0.023 macro F1-score, outperforming static baselines by 36% and state-of-the-art unsupervised domain adaptation methods by 13.7%. The approach maintains real-time performance with only 2.3ms inference time and 45.2MB memory usage, demonstrating practical viability for on-device deployment in dynamic real-world scenarios.
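A minimal sketch of the stability-gate idea: only update normalization statistics when the accelerometer window looks quiescent (gravity magnitude near 1 g, low variance). The function name and tolerance values are invented for the sketch, not taken from the paper:

```python
import numpy as np

def stability_gate(accel_window, grav_tol=0.5, var_tol=0.1):
    """Return True if the window is stable enough to adapt on.

    accel_window: (N, 3) array of accelerometer samples in units of g.
    Stability requires the mean acceleration magnitude to sit near 1 g
    (device roughly at rest in the gravity field) and low variance
    (no vigorous motion that would corrupt the placement estimate).
    """
    mags = np.linalg.norm(accel_window, axis=1)
    return (abs(mags.mean() - 1.0) < grav_tol) and (mags.var() < var_tol)
```

In the cascade described above, only windows passing this gate would feed the placement inference and the adaptive Batch Normalization statistics; all other windows are classified with the last trusted normalization state.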
-
2510.0018 Adaptive Evidential Meta-Learning with Hyper-Conditioned Priors for Calibrated ECG Personalisation
This research addresses a fundamental gap in uncertainty calibration during electrocardiogram (ECG) model personalisation. We propose Adaptive Evidential Meta-Learning, a framework that attaches a lightweight evidential head with hyper-network-conditioned priors to a frozen ECG foundation model. The hyper-network dynamically sets the evidential prior using robust, class-conditional statistics computed from a few patient-specific ECG samples. Trained via a two-stage meta-curriculum, our approach enables rapid adaptation with well-calibrated uncertainty estimates, making it highly applicable for real-world clinical deployment where both prediction accuracy and uncertainty awareness are crucial.
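In the standard evidential (Dirichlet) formulation that frameworks like this build on, the head outputs per-class evidence, a prior is added, and the leftover mass K / sum(alpha) is the uncertainty. A generic sketch (the function and example numbers are illustrative; the paper's hyper-network sets the prior from patient statistics rather than the flat prior used here):

```python
import numpy as np

def evidential_predict(evidence, prior):
    # Dirichlet parameters: nonnegative evidence per class plus a prior
    # (in the paper, the prior is produced by a hyper-network from a few
    # patient-specific ECG samples; here it is just passed in).
    alpha = np.asarray(evidence, float) + np.asarray(prior, float)
    probs = alpha / alpha.sum()            # expected class probabilities
    uncertainty = len(alpha) / alpha.sum() # vacuity: high when evidence is low
    return probs, uncertainty

# Strong evidence for class 0 with a flat prior over 3 classes:
probs, unc = evidential_predict([9.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

With zero evidence the uncertainty is 1 (the model admits it knows nothing), and it shrinks toward 0 as evidence accumulates, which is the calibration behavior the abstract emphasizes.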
-
2510.0016 A Data-Driven Energy Consumption Prediction Model for 5G Base Stations: Addressing Static and Dynamic Power Components
The rapid deployment of 5G networks has intensified concerns about energy consumption in mobile communication systems. Unlike previous generations, 5G base stations (BSs) exhibit significant power draw even under zero traffic conditions, with static power accounting for 30-40% of total energy consumption. This paper proposes a novel data-driven framework that decouples total base station energy consumption into static and dynamic components, enabling more precise energy optimization. For static consumption modeling, we introduce a hybrid ResNet-XGBoost architecture that processes configuration parameters including bandwidth, antenna elements, transmit power, carrier count, and tilt angle. For dynamic consumption, we implement a Tabular Probabilistic Function Network (TabPFN) to capture the nonlinear relationship between resource utilization and energy demand. Experimental results using real-world data from a provincial Chinese telecom operator demonstrate that our model achieves a 15.5% reduction in Mean Absolute Error (MAE) and an R^2 of 0.91 compared to conventional approaches.
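The decomposition itself is simple to state in code: total energy is a configuration-driven static term plus a load-driven dynamic term. The toy stand-in models below are invented placeholders for the paper's ResNet-XGBoost and TabPFN components:

```python
def predict_energy(static_model, dynamic_model, config, utilization):
    # Total BS power = static part (set by configuration, drawn even at
    # zero traffic) + dynamic part (scales with resource utilization).
    return static_model(config) + dynamic_model(utilization)

# Toy stand-ins, purely illustrative numbers:
static_toy = lambda cfg: 200.0 + 5.0 * cfg["carriers"]   # watts
dynamic_toy = lambda u: 300.0 * u                        # watts, u in [0, 1]

# Zero-traffic case: only the static component remains.
e = predict_energy(static_toy, dynamic_toy, {"carriers": 4}, 0.0)
```

The zero-traffic case makes the paper's motivating observation concrete: even at utilization 0, the predicted draw is nonzero, which is why modeling the static term separately pays off.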
-
2510.0014 LLM-empowered knowledge graph construction: A survey
Knowledge Graphs (KGs) have long served as a fundamental infrastructure for structured knowledge representation and reasoning. With the advent of Large Language Models (LLMs), the construction of KGs has entered a new paradigm, shifting from rule-based and statistical pipelines to language-driven and generative frameworks. This survey provides a comprehensive overview of recent progress in LLM-empowered knowledge graph construction, systematically analyzing how LLMs reshape the classical three-layered pipeline of ontology engineering, knowledge extraction, and knowledge fusion. We first revisit traditional KG methodologies to establish conceptual foundations, and then review emerging LLM-driven approaches from two complementary perspectives: schema-based paradigms, which emphasize structure, normalization, and consistency; and schema-free paradigms, which highlight flexibility, adaptability, and open discovery. Across each stage, we synthesize representative frameworks, analyze their technical mechanisms, and identify their limitations. Finally, the survey outlines key trends and future research directions, including KG-based reasoning for LLMs, dynamic knowledge memory for agentic systems, and multimodal KG construction. Through this systematic review, we aim to clarify the evolving interplay between LLMs and knowledge graphs, bridging symbolic knowledge engineering and neural semantic understanding toward the development of adaptive, explainable, and intelligent knowledge systems.
-
2510.0013 A Review of Intelligent Rock Mechanics: From Methods to Applications
Artificial Intelligence (AI) has great potential to transform rock mechanics by tackling its inherent complexities, such as anisotropy, nonlinearity, discontinuity, and multiphase behavior. This review explores the evolution of AI, from basic neural networks like the BP model to advanced architectures such as Transformers, and their applications in areas like microstructure reconstruction, prediction of mechanical parameters, and engineering challenges such as rockburst prediction and tunnel deformation. Machine learning techniques, particularly Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), have been crucial in automating tasks like fracture detection and efficiently generating 3D digital rock models. However, the effectiveness of AI in rock mechanics is limited by data scarcity and the need for high-quality datasets. Hybrid approaches, such as combining physics-informed neural networks (PINNs) with traditional numerical methods, offer promising solutions for solving governing equations. Additionally, Large Language Models (LLMs) are emerging as valuable tools for code generation and decision-making support. Despite these advancements, challenges remain, including issues with reproducibility, model interpretability, and adapting AI models to specific domains. Future progress will hinge on the availability of improved datasets, greater interdisciplinary collaboration, and the integration of spatial intelligence frameworks to bridge the gap between AI's theoretical potential and its practical application in rock engineering.
-
2510.0012 A Review of Intelligent Rock Mechanics: From Methods to Applications
Intelligent rock mechanics represents the convergence of artificial intelligence (AI) and classical rock mechanics, providing new paradigms to understand, model, and predict the complex behaviors of geological materials. This review synthesizes recent progress from foundational AI methodologies to their practical applications in rock engineering. Traditional challenges, such as anisotropy, discontinuities, and multiphysics coupling, have been re-examined through data-driven and hybrid approaches that integrate learning algorithms with physical principles. The study traces the evolution of AI in this field, from early backpropagation and support vector machines to modern deep learning frameworks such as convolutional and transformer architectures, highlighting their roles in microstructure reconstruction, mechanical parameter estimation, constitutive modeling, and real-time hazard prediction. Emerging techniques, including physics-informed neural networks and graph-based learning, bridge data-driven inference with physical interpretability, while large language models are beginning to facilitate automated code generation and decision support in geotechnical analysis. Despite remarkable progress, key challenges remain in data quality, model generalization, and interpretability. Addressing these issues requires standardized datasets, interdisciplinary collaboration, and the establishment of transparent, reproducible AI workflows. The paper concludes by outlining a forward-looking perspective on developing next-generation intelligent frameworks capable of coupling physical knowledge, spatial reasoning, and adaptive learning, thereby advancing rock mechanics from empirical modeling toward fully intelligent, autonomous systems.
-
2510.0008 Toward a Federated Model of AI Scientists: Architecture, Pipeline, and Roadmap
This paper proposes a federated model of AI Scientists, integrating a layered stack architecture, an iterative discovery pipeline, and a governance-aligned roadmap. We argue that AI Scientists should not only accelerate discovery but also serve as custodians of epistemic integrity. Through case studies in drug discovery, climate modeling, and materials science, we demonstrate how federation enables cross-domain synthesis while embedding reproducibility, incentive alignment, and participatory governance. We conclude with a research roadmap toward Trusted AI Scientists, highlighting technical, incentive, and governance challenges.
-
2510.0006 HEAL: Learning-Free Source Free Unsupervised Domain Adaptation for Cross-Modality Medical Image Segmentation
Growing demands for clinical data privacy and storage constraints have spurred advances in Source Free Unsupervised Domain Adaptation (SFUDA). SFUDA addresses domain shift by adapting models from the source domain to an unseen target domain without accessing source data, even when target-domain labels are unavailable. However, SFUDA faces significant challenges: the absence of source domain data and the lack of label supervision in the target domain. To address these issues, we propose HEAL, a novel SFUDA framework that integrates Hierarchical denoising, Edge-guided selection, size-Aware fusion, and a Learning-free design. Large-scale cross-modality experiments demonstrate that our method outperforms existing SFUDA approaches, achieving state-of-the-art (SOTA) performance. The source code is publicly available at: https://anonymous.4open.science/r/HEAL-10C5.