Papers
- 2510.0023: Robust Zero-Shot NER for Crises via Iterative Knowledge Distillation and Confidence-Gated Induction
  This research presents a comprehensive diagnostic study of confidence-gated iterative induction for zero-shot Named Entity Recognition (NER) in crisis scenarios. While existing approaches struggle to adapt to novel disaster lexicons without manually curated resources, we investigate whether iterative knowledge distillation can overcome these limitations. Our framework leverages a pretrained language model to extract high-recall entity candidates, then iteratively distills domain knowledge through a self-correcting loop that uses high-confidence seeds to induce micro-gazetteers and syntactic rules. Comprehensive evaluations on synthetic crisis data reveal that the framework maintains a constant zero-shot F1-score of approximately 0.295 across all experimental configurations, demonstrating that the iterative mechanism provides no measurable improvement over baseline approaches. This negative result offers valuable diagnostic insights into the fundamental challenges of adaptive NER in dynamic crisis domains, including confidence-threshold calibration difficulties, clustering-algorithm limitations, and error-propagation risks. The findings provide a cautionary tale for researchers working on adaptive NER systems and establish a foundation for future research on more robust zero-shot approaches in crisis scenarios.
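The confidence-gated induction loop described in this abstract can be sketched compactly. Everything below is an illustrative assumption, not the paper's implementation: the 0.9 confidence threshold, the three-round cap, the token-overlap re-scoring, and the `iterate_induction` name are all invented for the sketch, and the real framework induces syntactic rules as well as gazetteer entries.

```python
def iterate_induction(candidates, threshold=0.9, rounds=3):
    """Confidence-gated induction sketch (assumed details, not the paper's code).

    candidates: list of (span, confidence) pairs from a high-recall extractor.
    Returns the micro-gazetteer induced from high-confidence seeds.
    """
    gazetteer = set()
    for _ in range(rounds):
        # Gate: only spans above the confidence threshold become seeds.
        seeds = {span for span, conf in candidates if conf >= threshold}
        new = seeds - gazetteer
        if not new:  # converged: no fresh high-confidence seeds this round
            break
        gazetteer |= new
        # Stand-in for rule induction: boost candidates that share a token
        # with any gazetteer entry, so they may pass the gate next round.
        boosted = []
        for span, conf in candidates:
            if any(tok in entry for entry in gazetteer for tok in span.split()):
                conf = min(1.0, conf + 0.1)
            boosted.append((span, conf))
        candidates = boosted
    return gazetteer
```

The sketch also illustrates the error-propagation risk the abstract diagnoses: a single wrong seed inflates the confidence of every overlapping candidate on later rounds.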
- 2510.0022: Adaptive Log Anomaly Detection through Data-Centric Drift Characterization and Policy-Driven Lifelong Learning
  Log-based anomaly detectors degrade over time due to concept drift arising from software updates or workload changes. Existing systems typically react by retraining entire models, leading to catastrophic forgetting and inefficiencies. We propose an adaptive framework that first classifies drift in log data into semantic (frequency shifts within known templates) and syntactic (emergence of new log templates) categories via statistical tests and novelty detection. Based on the identified drift type, a policy-driven lifelong learning manager applies targeted updates: experience replay to mitigate forgetting under semantic drift, and dynamic model expansion to accommodate syntactic drift. This approach is validated on semi-synthetic logs and real-world longitudinal datasets (HDFS, Apache, and BGL), maintaining high F1-scores, reducing computational overhead, and preserving historical knowledge compared to monolithic retraining.
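The semantic/syntactic split described in this abstract can be sketched as a small classifier over log-template streams. This is a minimal stand-in under stated assumptions: the Pearson chi-square-style statistic, the fixed 6.0 critical value, and template-id inputs are all invented here; the paper's statistical tests and novelty detector are presumably richer.

```python
from collections import Counter

def classify_drift(reference, window):
    """Classify drift between two log-template streams (lists of template ids).

    'syntactic'  -> the new window contains templates never seen before
    'semantic'   -> known-template frequencies shift beyond a chi-square-style
                    threshold (6.0 is an assumed critical value)
    'none'       -> otherwise
    """
    ref_counts, win_counts = Counter(reference), Counter(window)
    # Novelty detection: any unseen template id signals syntactic drift.
    if set(win_counts) - set(ref_counts):
        return "syntactic"
    # Frequency-shift test over known templates (Pearson chi-square statistic).
    n_ref, n_win = len(reference), len(window)
    stat = 0.0
    for template, ref_count in ref_counts.items():
        expected = ref_count / n_ref * n_win
        stat += (win_counts.get(template, 0) - expected) ** 2 / expected
    return "semantic" if stat > 6.0 else "none"
```

A lifelong-learning policy manager along the abstract's lines would then dispatch on the returned label: replay for "semantic", model expansion for "syntactic".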
- 2510.0021: ConFIT: A Robust Knowledge-Guided Contrastive Framework for Financial Extraction
  Financial text extraction faces serious challenges in multi-entity sentiment attribution and numerical sensitivity, often leading to pitfalls in real-world deployment. In this work, we propose ConFIT (Contrastive Financial Information Tuning), a knowledge-guided contrastive learning framework that employs a Semantic-Preserving Perturbation (SPP) engine to generate high-quality, programmatically synthesized hard negatives. By integrating domain knowledge sources such as the Loughran-McDonald lexicon and Wikidata, and applying rigorous perplexity and Natural Language Inference (NLI) filtering, ConFIT trains language models to differentiate subtle perturbations in financial statements. Evaluations on FiQA and SENTiVENT using FinBERT and Llama-3 8B show both promising improvements and unexpected pitfalls, highlighting challenges that warrant further research.
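The hard-negative generation step can be sketched as simple programmatic perturbations. This is a toy stand-in for the SPP engine under stated assumptions: the three-entry antonym table is a placeholder for a Loughran-McDonald-style lexicon, the 1.5x numeral perturbation is invented, and the perplexity/NLI filtering stage is omitted entirely.

```python
import re

# Tiny stand-in for a Loughran-McDonald-style polarity lexicon (assumed).
ANTONYMS = {"rose": "fell", "gain": "loss", "beat": "missed"}

def spp_hard_negatives(sentence):
    """Generate semantic-perturbation hard negatives for a financial sentence.

    Produces one negative per flipped sentiment word (multi-entity sentiment
    attribution) and one per perturbed numeral (numerical sensitivity).
    """
    negatives = []
    # Sentiment flips: replace each lexicon word with its opposite.
    for word, flipped in ANTONYMS.items():
        if word in sentence:
            negatives.append(sentence.replace(word, flipped))
    # Numeral perturbations: scale the first occurrence of each number.
    for num in re.findall(r"\d+(?:\.\d+)?", sentence):
        perturbed = str(round(float(num) * 1.5, 2))
        negatives.append(sentence.replace(num, perturbed, 1))
    return negatives
```

A contrastive objective would then push the anchor sentence's representation away from each of these negatives while keeping it close to benign paraphrases.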
- 2510.0020: Hierarchical Change Signature Analysis: A Framework for Online Discrimination of Incipient Faults and Benign Drifts in Industrial Time Series
  Industrial fault detection systems often struggle to distinguish benign operational drifts (e.g., tool wear, recipe changes) from incipient faults, frequently adapting to faults as new "normal" states and risking catastrophic failures. This work proposes a hierarchical framework that decouples change detection from change characterization. When a drift is detected, the system generates a Multi-Scale Change Signature (MSCS) that quantifies geometric and statistical transformations in the primary detector's latent space. An unsupervised Drift Characterization Module (DCM), trained on an Online Normality Baseline (ONB), classifies each signature as benign or potentially faulty. Benign drifts are ignored, while potential faults are flagged for review; confirmed benign drifts are incorporated into the ONB for future adaptation. The framework is model-agnostic, computationally efficient, and scalable through a tiered human-in-the-loop mechanism. Experiments on the Tennessee Eastman Process dataset with injected drifts and faults demonstrate high fault detection rates, fewer false alarms, and efficient adaptation to benign changes.
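The signature-then-characterize pipeline can be sketched on a 1-D latent series. Everything here is an assumed stand-in for the paper's MSCS and DCM: the mean-shift and variance-ratio components, the two window scales, and the 3-sigma deviation rule against the baseline are all invented for illustration.

```python
import statistics

def change_signature(before, after, scales=(8, 16)):
    """Multi-scale change signature (illustrative): at each window scale,
    record the mean shift and variance ratio between the series tails."""
    sig = []
    for s in scales:
        b, a = before[-s:], after[-s:]
        mean_shift = statistics.fmean(a) - statistics.fmean(b)
        var_ratio = (statistics.pvariance(a) + 1e-9) / (statistics.pvariance(b) + 1e-9)
        sig.extend([mean_shift, var_ratio])
    return sig

def characterize(sig, baseline_sigs, tol=3.0):
    """DCM stand-in: flag a signature as 'fault' if any component deviates
    from the baseline-signature mean by more than tol times its spread."""
    for i, v in enumerate(sig):
        vals = [s[i] for s in baseline_sigs]
        mu = statistics.fmean(vals)
        sd = statistics.pstdev(vals) + 1e-9
        if abs(v - mu) > tol * sd:
            return "fault"
    return "benign"
```

In the framework's terms, `baseline_sigs` plays the role of the Online Normality Baseline: signatures confirmed benign would be appended to it, so the tolerance band adapts over time.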
- 2510.0019: Hierarchical Adaptive Normalization: A Placement-Conditioned Cascade for Robust Wearable Activity Recognition
  Wearable Human Activity Recognition (HAR) systems face significant performance degradation when sensors are placed at different body locations or orientations. We introduce a hierarchical adaptive normalization method that addresses these challenges through a two-stage cascade. The first stage combines gravity-based orientation correction with placement-context inference using signal-variance analysis, while a novel stability gate prevents harmful adaptation during unstable periods. The second stage employs placement-conditioned adaptive Batch Normalization to refine feature representations in real time. Comprehensive evaluations on public and custom datasets show that our method achieves a 0.847 ± 0.023 macro F1-score, outperforming static baselines by 36% and state-of-the-art unsupervised domain adaptation methods by 13.7%. The approach maintains real-time performance with only 2.3 ms inference time and 45.2 MB memory usage, demonstrating practical viability for on-device deployment in dynamic real-world scenarios.
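The two-stage cascade can be sketched on a 1-D accelerometer-magnitude stream. These are assumed simplifications, not the paper's method: the variance threshold for the stability gate, the gravity-magnitude placement heuristic, and the "wrist"/"waist" labels are invented, and the gravity-based orientation correction on raw 3-axis data is omitted.

```python
import statistics

def infer_placement(window, stable_threshold=0.05):
    """Stage 1 sketch: infer sensor placement from accelerometer-magnitude
    variance, gated so inference only runs on stable (low-variance) windows.
    Returns None when the stability gate blocks adaptation."""
    if statistics.pvariance(window) > stable_threshold:
        return None  # stability gate: skip adaptation while the user moves
    # Crude placement heuristic on the gravity-magnitude offset (assumed).
    return "wrist" if statistics.fmean(window) > 9.9 else "waist"

def adaptive_batchnorm(x, running_mean, running_var, momentum=0.1):
    """Stage 2 sketch: BatchNorm-style statistics updated online for a 1-D
    feature stream; one (running_mean, running_var) pair would be kept per
    inferred placement context."""
    batch_mean = statistics.fmean(x)
    batch_var = statistics.pvariance(x)
    running_mean = (1 - momentum) * running_mean + momentum * batch_mean
    running_var = (1 - momentum) * running_var + momentum * batch_var
    normalized = [(v - running_mean) / (running_var + 1e-5) ** 0.5 for v in x]
    return normalized, running_mean, running_var
```

Conditioning on the stage-1 label means each placement keeps its own normalization statistics, which is what makes the second stage "placement-conditioned".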
- 2510.0018: Adaptive Evidential Meta-Learning with Hyper-Conditioned Priors for Calibrated ECG Personalisation
  This research addresses a fundamental gap in uncertainty calibration during electrocardiogram (ECG) model personalisation. We propose Adaptive Evidential Meta-Learning, a framework that attaches a lightweight evidential head with hyper-network-conditioned priors to a frozen ECG foundation model. The hyper-network dynamically sets the evidential prior using robust, class-conditional statistics computed from a few patient-specific ECG samples. Trained via a two-stage meta-curriculum, our approach enables rapid adaptation with well-calibrated uncertainty estimates, making it highly applicable for real-world clinical deployment where both prediction accuracy and uncertainty awareness are crucial.
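A generic Dirichlet-based evidential head illustrates the mechanism. This is a sketch under stated assumptions: the vacuity-style uncertainty measure is the standard evidential one, but the support-set-size prior in `hyper_prior` is a crude stand-in for the paper's hyper-network, which conditions on class-conditional feature statistics rather than counts.

```python
def evidential_prediction(evidence, prior):
    """Dirichlet evidential head: combine non-negative per-class evidence
    with a prior concentration; return class probabilities and a
    vacuity-style uncertainty (K / total concentration)."""
    alpha = [e + p for e, p in zip(evidence, prior)]
    total = sum(alpha)
    probs = [a / total for a in alpha]
    uncertainty = len(alpha) / total  # high when evidence is scarce
    return probs, uncertainty

def hyper_prior(support_samples, n_classes=2, base=1.0):
    """Hyper-network stand-in: set class-conditional prior concentrations
    from the size of each class's patient-specific support set (more
    samples -> stronger prior). support_samples: dict class_id -> list."""
    return [base + len(support_samples.get(c, [])) for c in range(n_classes)]
```

The calibration property the abstract targets shows up directly: with little evidence the uncertainty stays near 1, and a frozen foundation model only needs this small head adapted per patient.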