This page provides an overview of some of the research projects carried out in our group, along with resources (source code, data, etc.) that you might find useful for your own research or for reproducing our results. If you use any of the resources, please cite the corresponding publications. If you come across a mistake or a bug, or have a suggestion, please contact us; we'd be happy to hear your feedback.


Digitisation and 3D modelling of historical buildings and heritage sites



Spatial data and building information modelling are indispensable for smart decision-making on the use and management of heritage buildings and sites. The Royal Exhibition Building (REB) Living Lab is a user-centred platform for research and development of technologies and methodologies for the documentation, protection, and preservation of historical buildings and cultural heritage sites. A particular focus of the project is the development of effective and efficient methods for spatial data acquisition and 3D Building Information Modelling (BIM) of the Royal Exhibition Building.


Reference:

Khoshelham, K., 2018. Smart Heritage: Challenges in Digitisation and Spatial Information Modelling of Historical Buildings, in: Belussi, A., Billen, R., Hallot, P., Migliorini, S. (Eds.), 2nd Workshop On Computing Techniques For Spatio-Temporal Data in Archaeology And Cultural Heritage. CEUR Workshop Proceedings, Melbourne, Australia, pp. 7-12. Paper (8.8 MB PDF)


Real-time parking occupancy detection using CCTV images



Parking Guidance and Information (PGI) systems have the potential to reduce congestion in crowded areas by providing real-time information on the occupancy of parking spaces. We present a robust parking occupancy detection framework that uses a deep convolutional neural network and a binary Support Vector Machine (SVM) classifier to detect the occupancy of outdoor parking spaces from CCTV images.
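The two-stage idea (deep features in, binary SVM out) can be sketched as follows. Here the CNN feature vectors are simulated with synthetic data, since running a real network on image patches is beyond a short example; the feature dimension and class labels are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for CNN features: in this kind of pipeline, each
# parking-space patch is passed through a pretrained deep CNN and the
# resulting feature vector is classified as occupied/vacant by a binary SVM.
rng = np.random.default_rng(0)
n, dim = 400, 128
occupied = rng.normal(loc=1.0, scale=0.5, size=(n // 2, dim))   # synthetic "occupied" features
vacant = rng.normal(loc=-1.0, scale=0.5, size=(n // 2, dim))    # synthetic "vacant" features
X = np.vstack([occupied, vacant])
y = np.array([1] * (n // 2) + [0] * (n // 2))

# Train the binary SVM on the (here simulated) deep features.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

The appeal of this split is that the CNN is used only as a fixed feature extractor, so the lightweight SVM can be retrained for a new car park without retraining the network.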


Reference:

Acharya, D., Yan, W., Khoshelham, K., 2018. Real-time Image-based Parking Occupancy Detection Using Deep Learning. In: Peters, S., Khoshelham, K. (Eds.), Proceedings of 5th Annual Research@Locate Conference, CEUR Workshop Proceedings Vol. 2087, Adelaide, Australia, pp. 33-40. Paper (9 MB PDF)


Real-time detection and tracking of pedestrians in CCTV images using deep learning



We use a deep convolutional neural network to detect pedestrians in CCTV images. The CNN features are matched in subsequent frames to establish correspondence and track the detected pedestrians across the image sequence.
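The tracking step, matching CNN features of detections across consecutive frames, can be sketched with a simple greedy association by cosine similarity. The function below is an illustrative simplification (a real tracker would typically use optimal assignment, e.g. the Hungarian algorithm, plus motion cues); the feature vectors stand in for CNN descriptors of detected pedestrians.

```python
import numpy as np

def match_detections(feats_prev, feats_curr):
    """Greedily associate detections across two frames by cosine
    similarity of their (hypothetical) CNN feature vectors."""
    a = feats_prev / np.linalg.norm(feats_prev, axis=1, keepdims=True)
    b = feats_curr / np.linalg.norm(feats_curr, axis=1, keepdims=True)
    sim = a @ b.T                              # pairwise cosine similarity
    matches, used = [], set()
    for i in np.argsort(-sim.max(axis=1)):     # most confident detections first
        j = int(np.argmax(sim[i]))
        if j not in used:                      # each current detection used once
            matches.append((int(i), j))
            used.add(j)
    return matches
```

Chaining these frame-to-frame matches over the image sequence yields a track per pedestrian.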

Reference:

Acharya, D., Khoshelham, K., Winter, S., 2017. Real-time Detection and Tracking of Pedestrians in CCTV Images Using a Deep Convolutional Neural Network. In: Deng, X., Pettit, C., Leao, S.Z., Doig, J. (Eds.), Proceedings of 4th Annual Research@Locate Conference. CEUR Workshop Proceedings, Sydney, Australia, pp. 31-36. Paper (530 KB PDF)


Vehicle positioning by visual inertial odometry



Accurate positioning of moving vehicles in GNSS-deprived urban areas is important for autonomous vehicles and mobile mapping systems. We present various visual-inertial odometry approaches to vehicle positioning in the absence of GNSS signals, with experimental results showing that visual-inertial odometry can achieve positioning accuracies of up to 0.3% of the trajectory length, which is promising for vehicle positioning during short periods of GNSS signal outage.
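To make the reported relative accuracy concrete, a back-of-the-envelope calculation converts it into absolute drift for a few trajectory lengths (the lengths chosen here are illustrative, not from the papers):

```python
# Drift implied by a relative positioning error of 0.3% of trajectory length.
relative_error = 0.003
for length_m in (100, 500, 2000):
    drift_m = relative_error * length_m
    print(f"{length_m:5d} m travelled -> ~{drift_m:.1f} m drift")
```

So a vehicle bridging a 500 m GNSS outage would accumulate on the order of 1.5 m of positioning error, which explains why the approach is framed as suitable for short outages.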

References:

- Ramezani, M., Khoshelham, K., 2018. Vehicle Positioning in GNSS-Deprived Urban Areas by Stereo Visual-Inertial Odometry. IEEE Transactions on Intelligent Vehicles 3, 208-217. Paper (2 MB PDF)
- Ramezani, M., Khoshelham, K., Fraser, C., 2018. Pose estimation by Omnidirectional Visual-Inertial Odometry. Robotics and Autonomous Systems 105, 26-37. Paper (3.6 MB PDF)
- Ramezani, M., Khoshelham, K., Kneip, L., 2017. Omnidirectional visual-inertial odometry using multi-state constraint Kalman filter, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, pp. 1317-1323. Paper (5.3 MB PDF)
- Khoshelham, K., Ramezani, M., 2017. Vehicle Positioning in the Absence of GNSS Signals: Potential of Visual-Inertial Odometry, IEEE Joint Urban Remote Sensing Event. IEEE, Dubai, UAE, pp. 1-4. Paper (442 KB PDF)


Urban scene classification using full-waveform lidar data



Fine-scale land cover classification of urban environments is important for a variety of applications. Full-waveform lidar data has increasingly been used for land cover classification due to its high geometric accuracy and additional radiometric information. An important issue in the classification of lidar data is the inevitable imbalance of training samples, which usually results in poor classification performance for classes with few samples (minority classes). In this research, a synergy of sampling techniques from data mining with ensemble classifiers is proposed to address the data imbalance problem in the training datasets.
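The combination of a sampling technique with an ensemble classifier can be sketched as below. This is a minimal illustration using random oversampling and a bagging ensemble on synthetic features; the class names, feature values, and class sizes are all assumptions for the example, not the paper's actual data or its specific choice of sampling and ensemble methods.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic imbalanced training set standing in for per-echo waveform features.
X_major = rng.normal(0.0, 1.0, size=(500, 10))   # majority class, e.g. 'ground'
X_minor = rng.normal(3.0, 1.0, size=(25, 10))    # minority class, e.g. 'power line'
X = np.vstack([X_major, X_minor])
y = np.array([0] * 500 + [1] * 25)

# Sampling step: randomly oversample the minority class with replacement
# until the classes are balanced.
idx_minor = np.flatnonzero(y == 1)
boost = rng.choice(idx_minor, size=500 - len(idx_minor), replace=True)
X_bal = np.vstack([X, X[boost]])
y_bal = np.concatenate([y, y[boost]])

# Ensemble step: train a bagging ensemble of decision trees on the balanced set.
clf = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                        n_estimators=25, random_state=0).fit(X_bal, y_bal)
```

Without the oversampling step, most learners would simply predict the majority class for nearly everything; balancing the training set before building the ensemble is what recovers performance on the minority classes.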

References:

- Azadbakht, M., Fraser, C.S., Khoshelham, K., 2018. Synergy of sampling techniques and ensemble classifiers for classification of urban environments using full-waveform LiDAR data. International Journal of Applied Earth Observation and Geoinformation 73, 277-291. Paper (4 MB PDF)
- Azadbakht, M., Fraser, C., Khoshelham, K., 2016. Improved Urban Scene Classification Using Full-Waveform Lidar. Photogrammetric Engineering & Remote Sensing 82, 973-980. Paper (5 MB PDF)
- Azadbakht, M., Fraser, C., Khoshelham, K., 2016. A Sparsity-Based Regularization Approach for Deconvolution of Full-Waveform Airborne Lidar Data. Remote Sensing 8, 648. Paper (3 MB PDF)


Closed-form motion estimation from point-plane correspondences

Localizing a mobile sensor in an indoor environment usually involves obtaining 3D scans of the environment and estimating the sensor pose by registering the successive scans. This can be done effectively by minimizing point-plane distances, for which generally only iterative solutions are available. This work presents a direct, closed-form method for estimating the 6-DoF pose of a sensor by minimizing point-plane distances.
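To show what the point-plane objective looks like, here is the standard small-angle linearisation of it as a single least-squares solve. Note this is a sketch of the generic linearised formulation, not the paper's closed-form derivation: for each correspondence of a point p to a plane with unit normal n and offset d, the residual n·(Rp + t) − d is linearised with R ≈ I + [ω]×, giving a 6-parameter linear system in the rotation vector ω and translation t.

```python
import numpy as np

def point_plane_pose(points, normals, dists):
    """Estimate a small 6-DoF motion (rotation vector w, translation t)
    minimising point-to-plane distances n·(Rp + t) - d.

    Standard small-angle linearisation (a sketch, not the paper's
    closed-form solution): n·(p + w x p + t) - d = 0 rearranges to
    (p x n)·w + n·t = d - n·p, one linear equation per correspondence."""
    A = np.hstack([np.cross(points, normals), normals])   # N x 6 design matrix
    b = dists - np.einsum('ij,ij->i', normals, points)    # N residual offsets
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                                   # rotation vector, translation
```

At least six well-distributed point-plane correspondences (with normals spanning all directions) are needed for the 6x6 normal system to be well conditioned.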

Code: Matlab functions (9 KB Zip)
Reference:

- Khoshelham, K., 2016. Closed-form solutions for estimating a rigid motion from plane correspondences extracted from point clouds. ISPRS Journal of Photogrammetry and Remote Sensing 114, 78-91. Paper (5 MB PDF)
- Khoshelham, K., 2015. Direct 6-DoF Pose Estimation from Point-Plane Correspondences. International Conference on Digital Image Computing: Techniques and Applications (DICTA), Adelaide, Australia. Preprint paper (600 KB PDF)


Modeling indoor spaces using a shape grammar



Existing methods for creating indoor models from point clouds focus on geometric reconstruction of architectural elements: walls, floors and ceilings. However, for route planning and navigation, the model should contain navigable interior spaces as well. This work presents an approach to 3D modelling of interior spaces using a shape grammar. The interior spaces are modelled by iteratively placing, connecting and merging cuboid shapes. The parameters and sequence of grammar rules are learned automatically from a point cloud.
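One of the grammar rules, merging cuboids, can be sketched with axis-aligned boxes. This is an illustrative simplification of what such a rule might look like, not the paper's actual rule set: two cuboids merge when they line up exactly on two axes and touch or overlap on the third, so their union is itself a cuboid.

```python
from dataclasses import dataclass

@dataclass
class Cuboid:
    lo: tuple   # (xmin, ymin, zmin)
    hi: tuple   # (xmax, ymax, zmax)

def try_merge(a, b):
    """Grammar-style merge rule (illustrative): merge two cuboids into one
    if they coincide on two axes and touch or overlap on the third."""
    aligned = [a.lo[k] == b.lo[k] and a.hi[k] == b.hi[k] for k in range(3)]
    if sum(aligned) == 2:
        k = aligned.index(False)                         # the one differing axis
        if a.hi[k] >= b.lo[k] and b.hi[k] >= a.lo[k]:    # touching or overlapping
            lo = tuple(min(a.lo[i], b.lo[i]) for i in range(3))
            hi = tuple(max(a.hi[i], b.hi[i]) for i in range(3))
            return Cuboid(lo, hi)
    return None                                          # rule does not apply
```

Applying place, connect and merge rules repeatedly, with parameters fitted to the point cloud, grows the set of cuboids into a model of the interior spaces.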

Slides: Riva2014 presentation (2 MB PDF)
Video: Reconstruction animation (1.6 MB MP4)
Reference:

Khoshelham, K., Díaz-Vilariño, L., 2014. 3D modeling of interior spaces: Learning the language of indoor architecture. ISPRS Technical Commission V Symposium, 23 – 25 June 2014, Riva del Garda, Italy. Paper (726 KB PDF)


Modelling rail tracks using mobile Lidar point clouds



Mapping and 3D modelling of rail tracks is useful for monitoring irregularities and planning maintenance. This work presents a method for automated generation of 3D mesh models of the rail tracks from mobile Lidar point clouds. It involves a Markov Chain Monte Carlo estimation step for approximate fitting of rail pieces, and an interpolation step for fitting a smooth and continuous 3D mesh model.
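The interpolation step, turning approximate per-piece fits into a smooth continuous track, can be illustrated as below. The centreline geometry and noise level are invented for the example, and a simple per-coordinate polynomial fit stands in for the paper's full 3D mesh construction.

```python
import numpy as np

# Simulated output of the approximate fitting step: noisy centre points
# of short rail pieces along a gently curving, slightly climbing track.
s = np.linspace(0.0, 100.0, 21)                    # along-track distance (m)
centres = np.column_stack([s,
                           0.001 * s**2,           # gentle horizontal curve
                           0.0005 * s])            # slight grade
noisy = centres + np.random.default_rng(2).normal(0, 0.02, centres.shape)

# Interpolation step (sketch): fit a smooth cubic per coordinate and
# sample it densely to obtain a continuous centreline.
coeff_y = np.polyfit(noisy[:, 0], noisy[:, 1], 3)
coeff_z = np.polyfit(noisy[:, 0], noisy[:, 2], 3)
def smooth(x):
    return np.column_stack([x, np.polyval(coeff_y, x), np.polyval(coeff_z, x)])
dense = smooth(np.linspace(0.0, 100.0, 500))       # dense, smooth centreline
```

The smoothing suppresses the per-piece fitting noise, which matters because track monitoring looks for irregularities far smaller than the noise of individual fitted pieces.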

Video: Rail tracks fly-through (10.7 MB MP4)
References:

- Oude Elberink, S., Khoshelham, K., 2015. Automatic Extraction of Railroad Centerlines from Mobile Laser Scanning Data. Remote Sensing 7(5): 5565-5583. Paper (44.5 MB PDF)
- Oude Elberink, S., Khoshelham, K., Arastounia, M., Diaz Benito, D., 2013. Rail track detection and modelling in mobile laser scanner data. ISPRS Workshop Laser Scanning 2013, Antalya, Turkey. Paper (1.8 MB PDF)


Accuracy analysis of Kinect depth data

Kinect has been a very popular sensor for capturing depth (range) images, but how accurate are its depth measurements? This work was among the first to provide insight into the geometric accuracy of Kinect depth data and the factors that influence it.
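The key behaviour, depth noise growing roughly quadratically with range, follows from propagating disparity noise through the triangulation equation Z = f·b/d, which gives σ_Z ≈ (Z²/(f·b))·σ_d. The sketch below uses illustrative camera constants, not calibration values from the papers:

```python
# Error propagation for triangulation-based depth:
#   Z = f * b / d   =>   sigma_Z ~ (Z**2 / (f * b)) * sigma_d
# All constants below are assumptions for illustration.
f_px = 580.0        # focal length in pixels (assumed)
b_m = 0.075         # baseline in metres (assumed)
sigma_d = 0.08      # disparity measurement noise in pixels (assumed)

for z in (1.0, 2.0, 3.0, 4.0, 5.0):
    sigma_z = z**2 / (f_px * b_m) * sigma_d
    print(f"range {z:.0f} m -> depth noise ~{sigma_z * 1000:.1f} mm")
```

Because the error scales with Z², doubling the range quadruples the depth noise, which is why close-range operation matters so much for indoor mapping with this class of sensor.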

References:

- Khoshelham, K., Oude Elberink, S., 2012. Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications. Sensors 12(2): 1437-1454. Paper (1.2 MB PDF)
- Khoshelham, K., 2011. Accuracy Analysis of Kinect Depth Data. ISPRS Workshop Laser Scanning 2011, Calgary, Canada. Paper (667 KB PDF)