I started in VFX as a 3D generalist and worked my way up to Senior 3D Artist, covering procedural environment generation in Houdini, modeling, and lighting at studios in the US and Canada, including DNEG, Digital Domain, and MPC. Along the way I contributed to award-winning teams on productions like Dune: Part Two and The Last of Us.
I fell in love with proceduralism and large-scale work through Houdini: defining systems and rules that generate results, rather than hand-building each asset. AI is proceduralism taken to its logical end. Instead of an artist writing rigid rules, the system learns them from data. Figuring out how to build those systems is what drew me into computer vision and machine learning.
Volustor is a volumetric asset management platform for film, television, and games, backed by a UK Research and Innovation grant. I built the pipeline that turns raw volumetric scans into a searchable asset library with automatic tagging, bounding boxes, and segmentation.
I completed a Master's programme in machine learning for media at Bournemouth University's National Centre for Computer Animation, designed for VFX and animation professionals. Coursework ranged from computer vision and diffusion models to data mining and software engineering. My thesis, SegSplat, researched in partnership with Volustor, tackled 3D instance segmentation on Gaussian splats.
Automated pipeline that turns 2D captures into labeled 3D point clouds. VLMs detect objects, Grounding DINO localizes them across frames, SAM segments masks, and visual hull projection produces 3D bounding boxes.
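The final visual-hull step can be sketched roughly as follows. This is a simplified illustration rather than the production code: the function name, voxel resolution, and camera-matrix convention are my own assumptions. Per-view silhouette masks (such as SAM outputs) carve a voxel grid, and the surviving voxels yield an axis-aligned 3D bounding box:

```python
import numpy as np

def visual_hull_bbox(masks, projections, grid_min, grid_max, res=32):
    """Carve a voxel grid with per-view silhouette masks (visual hull)
    and return the 3D bounding box of the surviving voxels.

    masks:       list of (H, W) boolean silhouettes, one per view
    projections: list of (3, 4) camera projection matrices, one per view
    grid_min/grid_max: corners of the world-space volume to test
    """
    # Build a regular grid of candidate 3D points in homogeneous coordinates.
    axes = [np.linspace(grid_min[i], grid_max[i], res) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    inside = np.ones(len(pts), dtype=bool)
    for mask, P in zip(masks, projections):
        uvw = pts @ P.T                    # project points into the image
        uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(pts), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]]
        inside &= hit                      # keep only points inside every silhouette

    kept = pts[inside, :3]
    if kept.size == 0:
        return None
    return kept.min(axis=0), kept.max(axis=0)
```

Intersecting the silhouettes across views is conservative: the hull can only overestimate the object, so the resulting box is a safe outer bound for the labeled point cloud.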
VFX showreel 2026. Environment work, lighting, and procedural generation across feature films and episodic TV.
Single-step diffusion model restoring degraded Gaussian splat avatar renders. SD-Turbo with LoRA adapters, ~80 ms inference.
MSc dissertation. 3D Instance Segmentation on Gaussian Splats Using Cross-Domain Transfer from RGB-D Training Data.
Custom SegNet + YOLOv8 pipeline for automated rotoscoping with multichannel EXR output for Nuke compositing.
CNN trained from scratch on 2,392 images. Iteratively refined from 58% to 91.43% test accuracy across 7 animal categories.
Thirteen productions in visual effects at DNEG, Digital Domain, and MPC.