My research concerns computer security, and is motivated by the central questions of:
My research tends to favour formal approaches to answering these questions, though not exclusively. For instance, using machine-checked proofs or sound empirical measurement to demonstrate system security, or using verifying compilation of domain-specific languages to more easily build verified systems, are both examples of formal approaches to building secure systems. On the other hand, some aspects of security, like human behaviour, need to be understood through a different lens, but are no less vital.
The most important quality I look for in potential research students (whether summer interns, undergraduate thesis students or higher degree research students) is genuine curiosity. That means
Being able to answer these kinds of questions is more important than having specific experience with, say, formal methods or particular areas of security, although such experience may be beneficial for some topics. As you'll see from the overview of my research on my main page, my interests in security are relatively broad.
I look for students who are excited by the prospect of reading the latest research papers in their area of interest, and who—when reading such papers—are likely to ask (and answer) questions such as:
- “What are the unstated assumptions and limitations of this work?”
- “How does it compare to other state-of-the-art research that attempts to solve the same problem?”
- “How can this work be improved?”
Below you'll find a list of some semi-recent ideas for research projects.
Remember, this list is not exhaustive. You should think of these as starting points for potential ideas. Please get in touch if you'd like to talk further.
Being able to prove that a concurrent program doesn't leak sensitive data is an important challenge for building highly secure software. This project involves extending an existing information flow verification tool to allow it to handle a wider class of programs, and applying the tool to non-trivial code-bases to detect and fix security vulnerabilities.
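To make concrete what "leaking sensitive data" means, here is a small illustrative sketch (the variable names and policy are assumed for illustration, not drawn from the tool in question) of the two kinds of flows that such a verification tool must rule out:

```rust
// A secret value can reach a public (attacker-observable) output two ways:
// directly (an explicit flow) or via control flow (an implicit flow).
// Information-flow verification must detect both.
fn main() {
    let secret = 42; // high-security input
    let mut public = 0; // low-security, observable output

    // Explicit flow (uncommenting this copies the secret directly):
    // public = secret;

    // Implicit flow: public's value depends on the secret via a branch,
    // so an observer of `public` learns whether `secret > 0`.
    if secret > 0 {
        public = 1;
    }
    println!("{}", public);
}
```

Even though the implicit flow never copies `secret` itself, it still reveals one bit of information about it; concurrency adds further, subtler channels (e.g. the timing and ordering of shared-memory updates).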
Modern languages like Rust sport advanced static type systems that make it easier to write correct code. Rust's type system, for instance, prevents not only memory-safety errors (as Java's does), but also data races in concurrent programs. Even better, it manages to do so without needing a garbage collector. What makes this work is Rust's ownership discipline, through which the type system tracks which parts of a program have access to which data structures at which times. This enables the compiler to detect, for instance, when two threads might modify the same data structure at the same time (a data race) or when a data structure can no longer be accessed (allowing it to be freed automatically, without garbage collection).
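As an illustration (a minimal sketch, not code from any particular project), the following program shares a counter between four threads. The `Arc` (shared ownership) and `Mutex` (synchronised access) wrappers are exactly what Rust's type system demands here; handing both threads a plain mutable reference to the counter would be rejected at compile time:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The type system forces shared mutable state to be wrapped in
    // Arc<Mutex<...>>; a bare `&mut i32` cannot be sent to two threads.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                // Locking is the only way to reach the inner value,
                // so unsynchronised concurrent writes are impossible.
                *c.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // 4000
}
```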
Unfortunately, Rust's type system is also known to be tricky to use, and programmers often find themselves fighting against it when trying to write seemingly simple programs. One key challenge is finding the right set of annotations to add to a program to get it to type check. This project investigates how to generate such annotations automatically, by phrasing correct annotation inference as a constraint satisfaction problem.
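To illustrate the kind of annotation involved (a standard textbook example, not one drawn from the project itself): the lifetime parameter `'a` below is precisely the sort of annotation one might hope to infer automatically. Without it, the compiler cannot tell which input the returned reference borrows from, and the function is rejected:

```rust
// Deleting the explicit lifetime 'a makes this fail to compile: the borrow
// checker cannot determine whether the result borrows from `x` or `y`.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("borrow");
    let b = String::from("checker");
    println!("{}", longest(&a, &b)); // prints "checker"
}
```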
Recent work has begun to explore how to automatically verify that neural networks produce safe results. In this project you will explore how to specify security properties of neural networks for formal verification, with a particular focus on differential privacy.
This project explores the interaction between usability and security in high-assurance systems, which need to protect highly sensitive information from well-resourced attackers. In collaboration with the Defence Science and Technology Organisation, we have built a highly secure device that allows users to interact with classified data while performing ordinary (risky) work activity, like Internet browsing, on the same desktop. We are proving that the device and its software do not leak classified data to the public Internet, even under the assumption that the user's desktop operating system (i.e. Windows) is compromised.
However, this proof necessarily rests on the assumption that the user never accidentally causes a classified information leak (e.g. by pasting classified data into an unclassified window). In practice, user error is known to be a major cause of sensitive information breaches. In this project, you will develop attacks that run on the user's Windows desktop and attempt to trick the user into unknowingly causing an information leak.
This will require writing programs for the Windows operating system that try to fool the user by, for instance, impersonating classified applications, or that trick the user into entering sensitive information when they shouldn't. These applications and attack scenarios may then form the foundation for empirical evaluations of how effective the attacks are against ordinary users. Alternatively, they could form the foundation for building formal (i.e. mathematical) models of user behaviour under different threat models, to better inform the formal security evaluation of the system.
Recent work has produced the world's first theories for proving that well-synchronised concurrent programs do not leak information, i.e. are information flow secure. However, so far these theories are restricted to systems with very simple and rigid security policies. In this project, you will extend one of these theories to support more diverse security policies. A relatively simple extension would be support for multiple security classification levels. A more ambitious goal would be to adapt the theory to a knowledge-based definition of information flow security, in order to support security policies with secure declassification.
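By way of illustration (an assumed sketch for intuition only, not the formalised theory itself), a policy with multiple security classifications can be modelled as an ordered set of levels in which information may only flow upwards, never down:

```rust
// An illustrative multi-level classification ordering: the derived Ord
// instance orders variants by declaration order, lowest first.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Level {
    Unclassified,
    Confidential,
    Secret,
    TopSecret,
}

// A flow from `src` to `dst` is permitted only if `dst` is at least as
// classified as `src`; flowing downwards would require explicit
// declassification, which is exactly what this project aims to support.
fn flow_permitted(src: Level, dst: Level) -> bool {
    src <= dst
}

fn main() {
    assert!(flow_permitted(Level::Unclassified, Level::Secret));
    assert!(!flow_permitted(Level::TopSecret, Level::Confidential));
    println!("ok");
}
```

A knowledge-based definition generalises this picture: instead of asking which level a value carries, it asks what an observer at a given level is permitted to learn, which is what makes controlled declassification expressible.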
Your work will be carried out in the interactive theorem prover Isabelle/HOL, in which the existing theory has been formalised.