Research

research topics at Trust AI & Security Lab

TAIS Lab studies trustworthy AI systems from both sides: using AI to improve security, and securing AI against emerging threats. Our work builds on published research in federated learning robustness, privacy leakage, model extraction, adversarial malware, IoT intrusion detection, and hardware-assisted system security.

LLM agent Security evidence use AI for security, then secure the AI

LLM4SEC & SEC4LLM

We use LLMs as security agents that can connect code, execution evidence, retrieval context, and attack traces into actionable analysis. At the same time, we study how LLM systems fail under adversarial prompts, tool misuse, insecure retrieval, and unsafe generated code, then build defenses that make those systems trustworthy in security-critical workflows.

Code context Detect weakness Repair safely Red team

Automated Vulnerability Repair, Secure Code Generation, RAG Security, Red Teaming

We develop automated workflows that find weaknesses, generate repairs, and verify whether the patched system is actually safer. This direction extends lessons from kernel integrity monitoring, branch-trace behavior modeling, IoT intrusion detection, and adversarial malware generation into secure code generation, RAG security, and practical red teaming.

Model SCA FL updates model theft property leak

Security and Privacy of AI Models

We analyze how AI models leak sensitive information through side channels, model queries, local updates, and clustered training behavior. This line is grounded in work showing that side-channel information can amplify model extraction and that clustered federated learning can leak unintended properties, motivating defenses such as obfuscation, robust aggregation, and client-level differential privacy.

VFL data identify purify robust AI remove triggers, reduce leakage

Backdoor Attacks, Differential Privacy

We investigate attacks that implant hidden behavior into collaborative AI and defenses that identify, remove, or neutralize malicious information before it controls the prediction. This direction connects VFLIP’s inference-time identification and purification for vertical federated learning, FLGuard’s contrastive filtering of malicious clients, and differential privacy as a practical tool for reducing property leakage.