Francesca Palermo
Hello! I am Francesca, a Senior Machine Learning and Computer Vision Scientist at EssilorLuxottica’s Smart Eyewear Lab, working at the intersection of computer vision, deep learning, and healthcare. My research focuses on on-device intelligence for smart eyewear, with particular emphasis on context recognition, egocentric action recognition, human pose and face keypoint estimation, and SLAM for wearable devices.
In my current role, I design and optimise computer vision and deep learning models for edge deployment and TinyML, achieving substantial model size reductions while maintaining competitive accuracy. I lead a collaboration with Politecnico di Milano, managing 4 research groups and 8 projects on smart eyewear, and I co-lead a joint project with Meta on on-device, camera-based context recognition. I also investigate on-device biomarker extraction from RGB eye imaging, including early work on non-invasive glucose monitoring and anaemia detection. I drive innovation and IP generation, with 7 patents submitted and 5 peer-reviewed publications in computer vision and machine learning, and I co-organise workshops on smart eyewear and edge intelligence at venues such as ICCV and IJCNN.
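To give a concrete flavour of this kind of edge optimisation, here is a minimal sketch of post-training dynamic quantization in PyTorch, one standard way to shrink a model for on-device deployment. The toy model, layer choices, and the rough size comparison are illustrative assumptions, not a description of my production pipeline.

```python
import io

import torch
import torch.nn as nn

# Illustrative model: a small classifier head of the kind one might run on-device.
model = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# Post-training dynamic quantization: weights of Linear layers are stored in
# int8 and dequantized on the fly, typically cutting their size by about 4x
# with little accuracy loss on many workloads.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialize the model to an in-memory buffer and report its size in MB."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")
```

Dynamic quantization is only one point in the design space; real deployments usually weigh it against static quantization, pruning, and distillation depending on the target hardware.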
Previously, I was a Machine Learning Research Associate at Imperial College London, working with the Care Research and Technology Centre (CR&T) of the UK Dementia Research Institute (UKDRI) and the Barnaghi Lab. There I developed deep learning models (for example LSTMs and autoencoders) for longitudinal, personalised time-series data collected in collaboration with the NHS, focusing on detecting health-related episodes in people living with dementia and improving model robustness and explainability when learning from unreliable data.
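As an illustration of the reconstruction-based approach, here is a minimal sketch of an LSTM autoencoder for time-series anomaly detection: windows that the model reconstructs poorly are flagged as candidate episodes. The dimensions, window length, and three-sigma threshold are hypothetical choices for the example, not the models used in the UKDRI work.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Encode a window of sensor readings into a latent vector, then decode
    it back; a high reconstruction error suggests an anomalous window."""

    def __init__(self, n_features: int = 8, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.output = nn.Linear(latent_dim, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)                      # summarise the window
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat latent per step
        out, _ = self.decoder(z)
        return self.output(out)                          # reconstructed window

# Illustrative usage: score a batch of 24-step windows over 8 signals.
model = LSTMAutoencoder()
x = torch.randn(32, 24, 8)
recon = model(x)
errors = ((recon - x) ** 2).mean(dim=(1, 2))    # per-window reconstruction MSE
threshold = errors.mean() + 3 * errors.std()    # hypothetical three-sigma cutoff
anomalies = errors > threshold
```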
I received my Ph.D. from the Advanced Robotics Lab and the HAIR Robotics Lab at the School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom. My Ph.D., sponsored by the National Centre for Nuclear Robotics (NCNR), focused on multimodal robotic exploration and fracture detection in extreme environments such as nuclear facilities, combining visual object detection, tactile sensing, and robotic control.
Beyond my roles at EssilorLuxottica and Imperial College, I have contributed to several other research initiatives. I worked on the NinaPro and MeganePro projects, analysing the repeatability of hand-movement recognition for robotic prosthesis control using surface electromyography (sEMG) data. I developed an augmented reality environment in Unity with Microsoft HoloLens to assist amputees during arm prosthesis training, and I applied image segmentation methods to large medical imaging datasets for cancer prediction. These projects strengthened my interest in machine learning for healthcare, wearable systems, and human–robot interaction.
As a Ph.D. graduate in engineering with a focus on machine learning, I have extensive experience with deep learning techniques and architectures across computer vision, time-series analysis, and multimodal learning. My work spans supervised learning, object detection, segmentation, SLAM, and 3D reconstruction, as well as applications in healthcare, recommender systems, and generative models. I primarily work in Python with PyTorch, TensorFlow, and OpenCV, alongside modern MLOps tools such as Docker and Google Cloud Platform (Vertex AI, Cloud Storage buckets).
For further information, please refer to the projects page.
Main research:
- Computer vision for context recognition
- Context and action recognition from egocentric video
- Human pose and face keypoint estimation for wearable devices
- Visual SLAM and perception for embedded and edge platforms
- Machine learning for healthcare
- Image classification
- Object detection
- Multimodal learning (vision, signals, and tactile or haptic sensing)
- Learning from noisy data
- Haptic exploration
I am currently studying how to design robust visual learning models for on-device intelligence and downstream decision-making, exploring how vision and language can be integrated into real-world applications.