This paper introduces a hierarchical adaptive normalization method that improves the robustness of wearable Human Activity Recognition (HAR) systems to variations in sensor placement and orientation. The core contribution is a two-stage cascade. The first stage combines gravity-based orientation correction with placement-context inference: a lightweight classifier, trained on labeled data from multiple sensor placements, infers the sensor's location (wrist, waist, or ankle) from the variance of the normalized signals. The second stage refines the feature representations with a placement-conditioned adaptive Batch Normalization (BN) layer whose running statistics are updated online, conditioned on the inferred placement and guarded by a stability gate. The gate, computed from the L2 norm of the normalized input, suppresses adaptation whenever the signal norm falls below a threshold, preventing harmful updates during unstable periods.

The method's efficacy is demonstrated on both a public dataset (Opportunity) and a custom dataset, showing significant gains in accuracy and robustness over baseline methods and state-of-the-art unsupervised domain adaptation techniques; the authors report a macro F1-score of 0.847. A computational efficiency analysis further demonstrates the method's suitability for real-time, on-device applications.

Overall, the paper presents a practical and effective approach to sensor variability in wearable HAR, with a clear focus on real-world applicability and computational efficiency.
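To make the cascade concrete, the following is a minimal NumPy sketch of the mechanism as described above. The nearest-centroid placement inference, the gate threshold, the momentum value, and all identifiers are illustrative assumptions of this review, not the paper's actual implementation.

```python
import numpy as np


def infer_placement(x, centroids):
    """Hypothetical stand-in for the paper's lightweight placement
    classifier: nearest-centroid matching on per-channel signal variance."""
    v = x.var(axis=0)
    return min(centroids, key=lambda p: np.linalg.norm(v - centroids[p]))


class PlacementConditionedAdaptiveBN:
    """Sketch of a BN layer keeping separate running statistics per
    inferred placement, updated online behind a stability gate."""

    def __init__(self, num_features, placements=("wrist", "waist", "ankle"),
                 momentum=0.1, norm_threshold=0.5, eps=1e-5):
        self.momentum = momentum            # assumed update rate
        self.norm_threshold = norm_threshold  # assumed gate threshold
        self.eps = eps
        # One set of running statistics per placement context.
        self.stats = {p: {"mean": np.zeros(num_features),
                          "var": np.ones(num_features)} for p in placements}

    def __call__(self, x, placement):
        s = self.stats[placement]
        x_hat = (x - s["mean"]) / np.sqrt(s["var"] + self.eps)
        # Stability gate: adapt only when the (size-normalized) L2 norm of
        # the normalized window clears the threshold; otherwise the running
        # statistics are left untouched, avoiding harmful updates.
        if np.linalg.norm(x_hat) / np.sqrt(x_hat.size) >= self.norm_threshold:
            s["mean"] = (1 - self.momentum) * s["mean"] \
                + self.momentum * x.mean(axis=0)
            s["var"] = (1 - self.momentum) * s["var"] \
                + self.momentum * x.var(axis=0)
        return x_hat
```

In use, each incoming window would first be orientation-corrected, then routed through `infer_placement` to select the statistics branch, and finally normalized by the gated BN layer.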