Cochabamba/14/March/2026
Working from our small lab in the mountains of Cochabamba, we are developing new approaches to animating non-humanoid digital characters.
Most computer vision pose models were trained on human pose datasets. When working with creatures, plants, or unconventional bodies, these systems quickly reach their limits. There is also little data on spatial behavior: how bodies actually move through depth, rotation, and real physical space. We optimized vision systems to recognize humans in images, but not necessarily to understand how movement unfolds in the real world. We're missing embodied intelligence.
To address this, we are integrating MediaPipe tracking with stereo depth sensing from a robotic camera. This allows us to estimate spatial orientation and body rotation using real-world measurements in millimeters rather than relying purely on 2D image heuristics.
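To make the idea concrete, here is a minimal sketch of that kind of fusion, not our production pipeline: it assumes an RGB frame with a depth map already aligned to it (in millimeters, as a stereo camera would deliver) and calibrated pinhole intrinsics. The intrinsic values FX, FY, CX, CY below are placeholders, and the depth source is whatever the robotic camera actually provides. MediaPipe's 2D pose landmarks are back-projected into metric camera space, and a coarse body yaw is read off the shoulder axis.

```python
# Sketch: fuse MediaPipe 2D landmarks with an aligned stereo depth map (mm)
# to get metric 3D joint positions and a coarse body-yaw estimate.
# The depth frame and the intrinsics below are placeholders for whatever
# the actual stereo camera and calibration provide.
import math
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

# Hypothetical pinhole intrinsics (pixels); substitute calibrated values.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0


def backproject(u, v, depth_mm):
    """Back-project a pixel plus depth (mm) to camera-space XYZ (mm)."""
    x = (u - CX) * depth_mm / FX
    y = (v - CY) * depth_mm / FY
    return np.array([x, y, depth_mm])


def landmarks_to_3d(rgb_frame, depth_mm, pose):
    """Return {landmark_index: XYZ in mm} for landmarks with valid depth."""
    h, w = depth_mm.shape
    results = pose.process(rgb_frame)  # frame must be RGB, not BGR
    if results.pose_landmarks is None:
        return {}
    joints = {}
    for idx, lm in enumerate(results.pose_landmarks.landmark):
        u, v = int(lm.x * w), int(lm.y * h)  # normalized -> pixel coords
        if 0 <= u < w and 0 <= v < h and depth_mm[v, u] > 0:
            joints[idx] = backproject(u, v, float(depth_mm[v, u]))
    return joints


def body_yaw_deg(joints):
    """Coarse torso rotation: angle of the shoulder axis in the horizontal plane."""
    left = joints.get(mp_pose.PoseLandmark.LEFT_SHOULDER.value)
    right = joints.get(mp_pose.PoseLandmark.RIGHT_SHOULDER.value)
    if left is None or right is None:
        return None
    dx, dz = (left - right)[0], (left - right)[2]  # x: lateral, z: depth
    return math.degrees(math.atan2(dz, dx))


# Usage per frame (rgb and depth come from the camera driver):
#   pose = mp_pose.Pose(static_image_mode=False)
#   joints = landmarks_to_3d(rgb, depth, pose)
#   yaw = body_yaw_deg(joints)
```

Estimating rotation in millimeters rather than normalized image coordinates keeps the result stable as a body moves toward or away from the camera, which is exactly where purely 2D heuristics break down.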
Much of the work happens through hands-on experimentation: prototyping new pipelines, combining sensing systems, and testing ideas through iteration.