(1) MeLeMaD: Adaptive Malware Detection via Chunk-wise Feature Selection and Meta-Learning (https://researchtrend.ai/papers/2512.23987)
🔍 More at researchtrend.ai/communities/AAML
(1) AdvPrefix: An Objective for Nuanced LLM Jailbreaks
(2) Improving Large Language Model Safety with Contrastive Representation Learning
🔍 More at researchtrend.ai/communities/AAML
(1) Certainly Bot Or Not? Trustworthy Social Bot Detection via Robust Multi-Modal Neural Processes
(2) BeDKD: Backdoor Defense based on Dynamic Knowledge Distillation and Directional Mapping Modulator
🔍 More at researchtrend.ai/communities/AAML
(1) Evolving Security in LLMs: A Study of Jailbreak Attacks and Defenses
(2) WGLE: Backdoor-free and Multi-bit Black-box Watermarking for Graph Neural Networks
🔍 More at researchtrend.ai/communities/AAML
(1) Towards Benchmarking Privacy Vulnerabilities in Selective Forgetting with Large Language Models
🔍 More at researchtrend.ai/communities/AAML
(1) Holmes: Towards Effective and Harmless Model Ownership Verification to Personalized Large Vision Models via Decoupling Common Features
🔍 More at researchtrend.ai/communities/AAML
(1) MoAPT: Mixture of Adversarial Prompt Tuning for Vision-Language Models
(2) Optimization with Access to Auxiliary Information
🔍 More at researchtrend.ai/communities/AAML
(1) Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis (https://researchtrend.ai/papers/2406.10090)
🔍 More at researchtrend.ai/communities/AAML
(1) Evaluating Adversarial Attacks on Federated Learning for Temperature Forecasting
(2) Safe2Harm: Semantic Isomorphism Attacks for Jailbreaking Large Language Models
🔍 More at researchtrend.ai/communities/AAML
(1) We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature
(2) Rethinking Jailbreak Detection of Large Vision Language Models with Representational Contrastive Scoring
🔍 More at researchtrend.ai/communities/AAML
(1) CachePrune: Neural-Based Attribution Defense Against Indirect Prompt Injection Attacks
(2) Adversarial Signed Graph Learning with Differential Privacy
🔍 More at researchtrend.ai/communities/AAML
(1) The Coherence Trap: When MLLM-Crafted Narratives Exploit Manipulated Visual Contexts
(2) Robust Satisficing Gaussian Process Bandits Under Adversarial Attacks
🔍 More at researchtrend.ai/communities/AAML
(1) Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search
(2) Adversarially Pretrained Transformers May Be Universally Robust In-Context Learners
🔍 More at researchtrend.ai/communities/AAML
(1) OMNIGUARD: An Efficient Approach for AI Safety Moderation Across Languages and Modalities
(2) 3S-Attack: Spatial, Spectral and Semantic Invisible Backdoor Attack Against DNN Models
🔍 More at researchtrend.ai/communities/AAML
(1) GeoShield: Safeguarding Geolocation Privacy from Vision-Language Models via Adversarial Perturbations
(2) Toward Reliable Machine Unlearning: Theory, Algorithms, and Evaluation
🔍 More at researchtrend.ai/communities/AAML
(1) Edge-Only Universal Adversarial Attacks in Distributed Learning
(2) Analyzing PDFs like Binaries: Adversarially Robust PDF Malware Analysis via Intermediate Representation and Language Model
🔍 More at researchtrend.ai/communities/AAML
(1) UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks (https://researchtrend.ai/papers/2312.10657)
🔍 More at researchtrend.ai/communities/AAML
(1) SafeGenes: Evaluating the Adversarial Robustness of Genomic Foundation Models
(2) SafePTR: Token-Level Jailbreak Defense in Multimodal LLMs via Prune-then-Restore Mechanism
🔍 More at researchtrend.ai/communities/AAML
(1) Superpixel Attack: Enhancing Black-box Adversarial Attack with Image-driven Division Areas (https://researchtrend.ai/papers/2512.02062)
🔍 More at researchtrend.ai/communities/AAML
(1) Assimilation Matters: Model-level Backdoor Detection in Vision-Language Pretrained Models (https://researchtrend.ai/papers/2512.00343)
🔍 More at researchtrend.ai/communities/AAML
(1) A Flat Minima Perspective on Understanding Augmentations and Model Robustness
(2) Medical Malice: A Dataset for Context-Aware Safety in Healthcare LLMs
🔍 More at researchtrend.ai/communities/AAML
(1) LTD: Low Temperature Distillation for Gradient Masking-free Adversarial Training
(2) Probabilistic Robustness for Free? Revisiting Training via a Benchmark
🔍 More at researchtrend.ai/communities/AAML
(1) Learning to Compress Graphs via Dual Agents for Consistent Topological Robustness Evaluation
(2) Critical Evaluation of Quantum Machine Learning for Adversarial Robustness
🔍 More at researchtrend.ai/communities/AAML
(1) Exploring Potential Prompt Injection Attacks in Federated Military LLMs and Their Mitigation
(2) DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs
🔍 More at researchtrend.ai/communities/AAML