Over a decade in visual effects across episodic TV, advertising, animation, and feature films. Now applying that production experience to machine learning research in volumetric capture and 3D scene understanding.
I started in VFX as a 3D generalist and worked my way up to Senior 3D Environment Generalist TD. That path covered procedural environment generation in Houdini, modelling, and lighting across studios including DNEG, Digital Domain, and MPC. Along the way I contributed to award-winning teams on productions like Dune: Part Two and The Last of Us.
I fell in love with proceduralism and large-scale work through Houdini: defining systems and rules that generate results, rather than building each asset by hand. AI is the ultimate proceduralism. Instead of defining rigid rules, the systems learn them. Figuring out how to direct that learning is what pulled me into research.
I earned my MSc at Bournemouth University's National Centre for Computer Animation. My thesis, SegSplat, tackled 3D instance segmentation on Gaussian splats using cross-domain transfer from RGB-D training data. The research demonstrated that a frozen pre-trained encoder combined with aggressive data augmentation and adjusted clustering parameters could segment objects in photogrammetric captures without any domain-specific fine-tuning.
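The clustering stage described above can be sketched in miniature. This is an illustrative stand-in, not code from the thesis: it assumes each Gaussian already has a feature vector from a frozen encoder, and groups those vectors into instances with DBSCAN, whose `eps` and `min_samples` play the role of the "adjusted clustering parameters".

```python
# Hypothetical sketch: cluster per-Gaussian feature vectors into instances.
# Function name, feature shapes, and parameter values are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def segment_instances(features: np.ndarray,
                      eps: float = 0.5,
                      min_samples: int = 5) -> np.ndarray:
    """Cluster an N x D array of per-Gaussian features into instance
    labels. DBSCAN returns -1 for points it treats as noise."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

# Toy example: two well-separated feature blobs -> two instances.
rng = np.random.default_rng(0)
feats = np.concatenate([
    rng.normal(0.0, 0.01, size=(50, 8)),   # features of object A's Gaussians
    rng.normal(5.0, 0.01, size=(50, 8)),   # features of object B's Gaussians
])
labels = segment_instances(feats)
```

Density-based clustering is a natural fit here because the number of objects in a capture is not known in advance, unlike k-means-style methods.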
Volustor is a UK startup developing volumetric capture technology with a 27-camera system for film production. I build the machine learning pipelines that process those captures: automated metadata tagging, spatial search, and 3D instance segmentation on Gaussian splats.
The work sits at the intersection of computer vision and production infrastructure, and is funded by a UK Research and Innovation grant. The goal is to turn raw volumetric captures into searchable, segmented assets that artists can actually use.
Training 3D instance segmentation models to identify and separate objects within photogrammetric Gaussian splat scenes. Cross-domain transfer from RGB-D data without fine-tuning.
Building pipelines that chain vision-language models with segmentation models for automated object detection, metadata tagging, and spatial search across multi-camera captures.
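A pipeline like the one described can be sketched as a small skeleton. This is illustrative only, not Volustor's actual code: the vision-language model and segmenter are stand-in callables, and the names (`TaggedRegion`, `tag_then_segment`) are invented for the example.

```python
# Illustrative skeleton of chaining a VLM with a segmentation model so each
# detected object ends up with both a mask and searchable metadata.
from dataclasses import dataclass

@dataclass
class TaggedRegion:
    label: str      # open-vocabulary tag produced by the VLM
    mask_id: int    # index of the segmentation mask it grounds to

def tag_then_segment(image, vlm, segmenter):
    """Run the VLM for open-vocabulary labels, then ground each label
    to a mask from the segmenter (e.g. a SAM-style model)."""
    labels = vlm(image)                # e.g. ["chair", "lamp", ...]
    masks = segmenter(image, labels)   # e.g. {"chair": 0, "lamp": 1}
    return [TaggedRegion(label=l, mask_id=masks[l])
            for l in labels if l in masks]

# Toy stand-ins that only demonstrate the data flow:
fake_vlm = lambda img: ["chair", "lamp"]
fake_seg = lambda img, labels: {l: i for i, l in enumerate(labels)}
regions = tag_then_segment(None, fake_vlm, fake_seg)
```

The value of the chain is that the VLM supplies open-vocabulary language, so the downstream search index never needs a fixed label set.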
Processing 27-camera volumetric captures into production-ready assets. Background removal, quality control, deduplication, and export to artist-friendly formats.
Point cloud processing, depth estimation, stereo reconstruction, and mesh generation. Bridging traditional VFX workflows with modern computer vision techniques.
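One of the steps above, turning a depth map into a point cloud, reduces to a few lines of pinhole geometry. A minimal sketch, assuming example intrinsics (`fx`, `fy`, `cx`, `cy` are placeholder values, not parameters from any real capture rig):

```python
# Back-project an H x W depth map (metres) into an (H*W) x 3 point cloud
# using the pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
import numpy as np

def depth_to_points(depth: np.ndarray,
                    fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# 2x2 depth map of constant depth 1 m: all points land on the Z = 1 plane.
pts = depth_to_points(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

The same back-projection is the first step in fusing per-camera depth from a multi-camera rig into a single cloud, before meshing or splat fitting.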
VFX showreel 2026. Environment work, lighting, and procedural generation across feature films and episodic TV.
View reel →

MSc dissertation. 3D instance segmentation on Gaussian splats using cross-domain transfer from RGB-D training data.
View project →

Text emotion classification with TensorFlow. GRU-based architecture reaching 97.16% validation accuracy across five categories.
View project →

Custom SegNet + YOLOv8 pipeline for automated rotoscoping with multichannel EXR output for Nuke compositing.
View project →

CNN trained from scratch on 2,392 images. Seven iterations from 58% to 91.43% test accuracy across 7 animal categories.
View project →