My research concerns computer security, and is motivated by the central questions of:
My research tends to favour formal approaches to answering these questions, although not exclusively. Examples include using machine-checked proofs or sound empirical measurement to demonstrate system security, and using verifying compilation of domain-specific languages to more easily build verified systems. On the other hand, some aspects of security, like human behaviour, need to be understood through a different lens but are no less vital.
The most important quality I look for in potential research students (whether summer interns, undergraduate thesis students or higher degree research students) is genuine curiosity. Specific experience with formal methods or particular areas of security is not essential, although it may be beneficial for some topics. As you'll see from the overview of my research on my main page, my interests in security are very broad, and I'm always willing to dive into a new topic for a motivated student.
Below you'll find a list of some current ideas for research projects. This list is not exhaustive, and sometimes the best project ideas are those that come from students themselves. So please get in touch if you'd like to talk further.
Modern languages like Rust sport advanced static type systems that make it easier to write correct code. Rust's type system, for instance, not only prevents memory-safety errors (as Java's does), but also prevents race conditions in concurrent programs. Even better, it manages to do so without needing a garbage collector. What makes this work is Rust's sophisticated static type system, which tracks which parts of a program have access to which data structures at which times. This enables it to detect, for instance, when two threads might modify the same data structure at the same time (a race condition), or when a data structure can no longer be accessed (allowing it to be freed automatically without need of garbage collection).
Unfortunately, Rust's type system is also known to be tricky to use, and programmers often find themselves fighting against it when trying to write seemingly simple programs. One key challenge is finding the right set of annotations to add to a program to get it to type check. This project investigates how to solve this problem by generating the annotations automatically, phrasing correct annotation inference as a constraint satisfaction problem.
Recent work has begun to explore how to automatically verify that neural networks produce safe results. In this project you will explore how to specify security properties of neural networks for formal verification, with a particular focus on differential privacy.
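As a reminder of the target property, the standard textbook definition of ε-differential privacy (stated here for orientation, not as a formalisation from the project itself) requires that a randomised mechanism \(M\) satisfy, for all datasets \(D, D'\) differing in one individual's record and all sets \(S\) of outputs:

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S]
```

Specifying and verifying a property of this probabilistic, relational shape for a trained network is what makes this project challenging.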
Recent work has produced the world's first theories for proving that well-synchronised concurrent programs do not leak information, i.e. are information flow secure. However, so far these theories are restricted to systems with very simple and rigid security policies. In this project, you will extend one of these theories to support more diverse security policies. A relatively simple extension would be to support multiple security classifications. A more ambitious goal would be to adapt the theory to a knowledge-based definition of information flow security, in order to support security policies with secure declassification.
Your work will be carried out in the interactive theorem prover Isabelle/HOL, in which the existing theory has been formalised.
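For orientation, the classical notion of information flow security that such theories build on (sketched here with illustrative notation, not the notation of the existing formalisation) says that any two executions started from states agreeing on public data must remain indistinguishable to a public observer:

```latex
\forall s_1, s_2.\;\; s_1 =_L s_2 \;\Longrightarrow\; \mathit{obs}_L(\mathit{exec}(s_1)) = \mathit{obs}_L(\mathit{exec}(s_2))
```

Here \(=_L\) relates states that agree on low (public) data, and \(\mathit{obs}_L\) projects out the low-observable behaviour of an execution. A knowledge-based definition instead tracks what an observer can learn, which is what makes policies with controlled declassification expressible.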