A comprehensive portfolio of offensive and defensive cybersecurity services built for organizations facing sophisticated, persistent adversaries. Every engagement is designed to deliver actionable intelligence — not checkbox outputs. I operate across sectors including finance, healthcare, critical infrastructure, technology and defence, adapting methodology to the specific threat landscape and operational constraints of each client.
Red Team operations represent the most rigorous form of security validation available. Rather than testing individual components in isolation, a full-scope Red Team engagement simulates the complete kill chain of a sophisticated adversary — from initial reconnaissance through to persistence, lateral movement and objectives on target. I lead multi-disciplinary teams executing these engagements, bringing together expertise in network intrusion, social engineering, physical security and custom tooling development. The intelligence produced informs strategic security investment decisions in ways that no other assessment methodology can replicate. Every engagement is scoped precisely to the client's threat model, their critical assets and the specific adversary personas most likely to target them.
Penetration testing conducted with genuine adversarial intent — not compliance box-ticking. The difference matters enormously in practice: a compliance-oriented test produces a list of findings; an adversarially-driven test produces a narrative of exploitation that demonstrates real business impact. I test every attack surface the organization exposes, chain vulnerabilities across systems, and deliver working proof-of-concept exploits that remove any ambiguity about severity and exploitability. The reporting is structured to serve both the technical teams who need remediation guidance and the executive stakeholders who need to understand business risk. Findings are mapped to industry-standard severity frameworks and prioritized by exploitability in your specific environment.
Artificial intelligence systems are now embedded in business-critical infrastructure across every sector — and they introduce attack vectors that conventional security frameworks were never designed to address. I specialize in the precise intersection of offensive security methodology and machine learning systems research. This allows me to evaluate AI deployments with the rigor of a penetration tester and the depth of an ML researcher simultaneously. The scope spans the full AI pipeline: from the security of training data and model supply chains through to inference-time attacks against deployed systems. I assess large language models for prompt injection and jailbreak vectors, evaluate retrieval-augmented generation architectures for data leakage through retrieval, test classification and detection models against adversarial examples, and audit AI governance frameworks against emerging regulatory requirements including the EU AI Act.
When a security incident occurs, the quality of the forensic response determines whether the organization recovers cleanly or compounds the damage through incomplete containment and missing evidence. I provide rapid, methodical digital forensic investigation that combines traditional forensic discipline with modern AI-assisted triage to compress analysis timelines on large, complex datasets. My investigations span the full spectrum of incident types — ransomware, targeted intrusion, insider threat, business email compromise and financial fraud — across environments ranging from corporate endpoints to cloud-native architectures and blockchain-based systems. Every investigation maintains strict chain of custody protocols to ensure evidence is preserved in a legally defensible manner suitable for regulatory reporting, civil proceedings or criminal referral.
Effective security architecture is not about deploying more products — it is about designing a layered system of controls that detects, contains and ejects adversaries who have already bypassed the perimeter. My architectural designs are grounded in threat modeling derived from direct experience as an attacker: I understand how lateral movement works in practice, how persistence mechanisms evade detection, and which detection logic actually stops sophisticated actors versus which creates alert noise. I design zero trust environments from first principles, deploy and tune SIEM and SOAR platforms with detection engineering that achieves sub-five-minute mean time to detect, and integrate AI-powered analytics that multiply analyst effectiveness rather than generating additional noise for overwhelmed security operations teams.
Regulatory compliance frameworks exist because they encode hard-won lessons about what security controls matter at scale. The problem is that most compliance implementations treat them as bureaucratic exercises rather than genuine risk management tools — producing documentation that satisfies auditors while leaving the organization exposed to the exact threats the framework was designed to address. My approach translates regulatory requirements into technical controls with real defensive value, designed to be implemented by engineering teams rather than paper-shuffled by compliance officers. Having operated in highly regulated sectors including financial services, healthcare and critical infrastructure, I understand the operational constraints and cultural dynamics that determine whether compliance programmes succeed or fail, and I design pragmatic solutions that achieve certification while materially improving the security posture.
Cloud environments are systematically over-trusted — not because organizations are careless, but because the complexity and velocity of cloud infrastructure makes comprehensive security validation genuinely difficult. I assess cloud environments through the lens of an attacker who starts with minimal credentials and progressively escalates by chaining IAM misconfigurations, exposed metadata services, overly permissive storage policies and insecure workload configurations. The blast radius of what looks like a minor misconfiguration often extends far beyond the immediate resource. My cloud security engagements cover all major platforms and span the complete DevOps stack, from source code repositories and CI/CD pipelines through container orchestration to serverless compute and data storage layers.
Critical infrastructure is the highest-consequence target in the threat landscape and, historically, among the most chronically under-secured. Operational technology environments present unique security challenges: they were engineered for availability and deterministic behaviour rather than confidentiality, and many run protocols and hardware that predate modern security concepts by decades. I assess industrial control systems using methodologies purpose-built for OT constraints — passive network monitoring and protocol analysis that deliver intelligence without disrupting production processes, complemented by carefully scoped active testing where safety margins permit. My sector experience spans energy, water and wastewater, manufacturing, transportation and building management systems. I understand the difference between an acceptable residual risk in an OT context and one that demands immediate remediation.
End-to-end artificial intelligence and machine learning competence — from foundational academic research to adversarial testing of production systems. I design intelligent systems, publish on their security implications, and break them when clients need to understand their real risk exposure. The convergence of offensive security and AI research is where my most consequential work happens.
Modern conversational AI systems have evolved far beyond intent classification and slot filling. The state of the art demands architectures capable of maintaining coherent multi-turn context across extended interactions, adapting dynamically to user intent and affect, operating reliably across multiple languages and modalities, and degrading gracefully when presented with edge cases or adversarial inputs. I design systems to these standards and audit deployed systems against the same criteria. My research background in conversational AI security means I understand the full landscape of ways these systems fail — from benign edge cases that frustrate users to deliberate adversarial manipulation that causes systems to behave contrary to their operators' intentions. Conversational AI safety is not a bolt-on — it must be engineered from the ground up.
The deployment of generative AI at enterprise scale introduces a new class of security and reliability challenges that organizations are only beginning to grapple with. I help enterprises harness the capabilities of large language models, diffusion systems and multi-modal generative architectures while systematically addressing the risks that accompany them. This includes the design of retrieval-augmented generation systems that do not leak sensitive information through the retrieval layer, fine-tuning pipelines that avoid embedding private training data in ways that can be extracted through model inversion, and output monitoring systems that detect when generative systems deviate from intended behaviour. My adversarial testing experience is directly integrated into the design process — the best time to identify how a generative system can be manipulated is before deployment, not after.
Production machine learning systems operate in environments that differ fundamentally from the controlled conditions of research and development. Distribution shifts, adversarial inputs, data poisoning through upstream supply chains, and deliberate model manipulation by malicious actors are operational realities that must be designed for from the outset. I engineer ML systems with these constraints fully integrated into the architecture — not addressed as afterthoughts when problems emerge in production. My applied ML work spans cybersecurity applications including real-time intrusion detection, malware family classification, user and entity behavior analytics, and automated threat intelligence processing and correlation. Each of these applications demands ML models that are not merely accurate in test conditions but genuinely robust under adversarial pressure.
Original research at the intersection of cybersecurity and artificial intelligence informs every engagement I conduct. The academic process — rigorous hypothesis formation, controlled experimentation, peer review and independent replication — produces security insights that purely commercial practice cannot. My research output spans adversarial machine learning, conversational AI safety, privacy-preserving computation and the governance of autonomous systems. I collaborate with research institutions and academic groups internationally, contributing to the scientific foundations that the industry requires to deploy AI responsibly. I also bridge the other direction — translating theoretical findings from the research literature into practical security assessments and architectural recommendations that organizations can act on. I supervise researchers working on open problems in AI security and mentor the next generation of practitioners who will need to work fluently across both disciplines.
The convergence of offensive security methodology with machine learning systems research is where my most distinctive expertise lies. Adversarial examples that cause ML models to misclassify inputs with high confidence, training data poisoning that embeds backdoor triggers into model weights, model inversion attacks that reconstruct sensitive training inputs from model outputs, and membership inference attacks that determine whether specific records were used in training — these are not theoretical vulnerabilities documented in research papers. I have replicated all of them against production systems in controlled assessment contexts, characterized the conditions under which they succeed and fail, and developed the defensive countermeasures that demonstrably reduce exposure. This hands-on offensive experience is the foundation of the defensive guidance I deliver, and it produces substantially more actionable recommendations than purely theoretical security analysis.
Security operations generate more data than virtually any other organizational function — log telemetry, network flows, endpoint events, threat intelligence feeds, user activity records, cloud audit trails — and the vast majority of it is never analyzed in a way that produces security value. I build data science pipelines that change this equation: machine learning models that surface genuine anomalies from the noise, statistical correlation engines that connect signals across disparate data sources, threat intelligence enrichment systems that automate the triage of indicators, and visualization layers that give analysts the situational awareness to act decisively. Every pipeline is designed with security-first principles — a data platform that leaks sensitive telemetry or can be manipulated to suppress detections is a liability, not an asset.
Systematic investigation of vulnerabilities and defensive mechanisms in production AI deployments — from prompt injection in deployed LLMs to adversarial example attacks against computer vision systems used in safety-critical applications.
Development of safety protocols, alignment techniques and failure mode taxonomies for automated conversational systems operating at scale in high-stakes domains including healthcare, finance and legal services.
Research into novel attack vectors against deployed ML systems and the development of hardening techniques that maintain practical performance while materially reducing vulnerability to adversarial manipulation and distribution shift.
Methodologies for the security evaluation, alignment verification and reliability assessment of large-scale generative models and autonomous AI agent systems in enterprise deployment contexts.
Active collaboration with international research institutions and technology organizations to develop scientific foundations for trustworthy AI. Research output is designed to bridge the gap between theoretical advances and operational security practice — translating findings into actionable guidance that practitioners can apply without requiring deep ML expertise.
I am a cybersecurity expert and artificial intelligence researcher with over a decade of operational experience at the frontier where these two disciplines converge. My career has been built on a single core conviction: the most effective security practitioners are those who think and operate like the adversaries they defend against.
As an active operator across multiple international Red Team engagements, I plan and execute sophisticated adversarial campaigns that replicate the complete tradecraft of real-world threat actors — from initial access through to objectives on target. The intelligence these operations produce is qualitatively different from any other form of security assessment: it demonstrates exactly what a determined adversary would find, the path they would take, and the impact they would achieve. Across engagements spanning financial services, healthcare, critical infrastructure, defence and technology sectors, the objective has never been missed.
In parallel, my research advances the science of AI security and adversarial machine learning. The proliferation of AI systems across critical business processes has created an entirely new class of attack surface — one that most security teams are not equipped to evaluate and most AI teams are not equipped to defend. I work at the intersection of these two fields because that is precisely where the most consequential security challenges of the next decade will be decided.
Beyond operational and research work, I invest significantly in the development of the next generation of practitioners. I supervise researchers working on open problems in AI security, mentor practitioners making the transition from conventional cybersecurity into machine learning contexts, and collaborate with academic institutions to ensure that the curriculum reflects the reality of what practitioners will face when they enter the field.
The convergence of offensive security expertise and foundational AI research capability remains rare in the industry. Most cybersecurity professionals lack the depth in machine learning required to evaluate AI systems seriously; most AI researchers have never operated in an adversarial context. That combination is what I bring to every engagement.
Full-scope adversarial campaigns. I breach what others assess and prove it with impact — not theory.
Original research on AI security and adversarial ML. Theory and practice, simultaneously.
Developing the next generation of practitioners who will work at the AI–security boundary.
"To find a vulnerability is not to expose a weakness — it is to preempt a catastrophe. That is our obligation and our discipline."
Available for offensive security engagements, AI security assessments, academic collaboration and consulting on AI governance and risk. All enquiries handled with discretion. NDA available for all engagements.