The Llama Learns to Feel, the Model Learns to Listen, and Yakumama Is Born.

Cochabamba/6/May/2026

In Week 10 of the koa.xyz computational creativity lab, Violeta Ayala, Daniel Fallshaw, and Brian Condori transformed the llama pipeline from a direct puppeteering experiment into a trained emotional performance system built from 95,091 frames of recorded movement. What began as technical debugging became a deeper confrontation with how AI systems interpret emotion, embodiment, and contradiction.


The week opened with a complete rebuild of the llama pipeline. The camera bridge was rewritten for OAK-D and DepthAI v3, adding neck tracking, ear tracking, walk controls, and face-only recording modes. Inside Blender, the system expanded into 56 bones and 6 shape keys connected through wave cascades across the neck and tail, allowing movement to ripple organically through the body instead of behaving like synchronized robotics. Detection thresholds for blinking, brows, mouth, and nose went through multiple calibration rounds to function at real performance distance.
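The “wave cascade” idea can be sketched as a driving signal that reaches each bone in a chain slightly later and slightly weaker than the bone before it, so motion ripples through the neck or tail instead of moving every bone in lockstep. A minimal illustration, where the bone count, delay, and decay values are hypothetical rather than the lab's actual parameters:

```python
import math

def wave_cascade(drive, t, n_bones, delay=0.08, decay=0.75):
    """Propagate a driving rotation down a bone chain.

    Each bone receives the drive signal time-delayed and
    attenuated relative to its parent, so movement ripples
    organically instead of behaving like synchronized robotics.
    `drive` is a function of time returning a rotation in radians.
    """
    return [drive(t - i * delay) * (decay ** i) for i in range(n_bones)]

# Example: a 1 Hz sway driving a 7-bone neck chain.
sway = lambda t: 0.3 * math.sin(2 * math.pi * 1.0 * t)
angles = wave_cascade(sway, t=1.0, n_bones=7)
```

In a Blender context each returned angle would be assigned to one pose bone per frame; the cascade shape comes entirely from the per-bone delay and decay.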


Two major recording sessions produced three separate neural motion models. The first dataset — over 50,000 frames — was discarded after the team discovered that the mouth and ears had recorded almost no expressive data because of incorrect thresholds. Rather than treating this as failure, they used it to redesign the entire capture methodology: iterative recording, training, testing, debugging, and retraining inside Blender itself. The second round captured over 95,000 frames simultaneously from camera input and real bone output, allowing the system to learn not just facial tracking but how movement actually propagated through the rig.
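A threshold failure that silently flattens the mouth and ear channels is the kind of problem a per-channel variance check can catch before any training run. A minimal sketch of such a sanity check, with channel names and the variance floor invented for illustration:

```python
import statistics

def dead_channels(frames, names, min_std=1e-3):
    """Flag channels whose values barely vary across a capture.

    `frames` is a list of per-frame feature lists aligned with
    `names`. Near-zero standard deviation means the detector
    never fired (e.g. a threshold set too high), so the channel
    carries no trainable signal.
    """
    flagged = []
    for i, name in enumerate(names):
        values = [f[i] for f in frames]
        if statistics.pstdev(values) < min_std:
            flagged.append(name)
    return flagged

# A capture where the mouth channel is stuck at zero:
names = ["brow", "blink", "mouth"]
frames = [[0.2, 0.9, 0.0], [0.5, 0.1, 0.0], [0.3, 0.7, 0.0]]
print(dead_channels(frames, names))  # → ['mouth']
```

Run against a short test capture, a check like this would surface a dead channel in seconds rather than after 50,000 frames.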


The resulting neural model learned something unexpected. During the “contenta” (contented) mood, the system repeatedly activated the shape key labeled “sad.” The correction layer attempted to erase it. Violeta Ayala challenged the assumption itself: who defines what happiness should look like? The shape key was not broken. The performer’s face carried contradictory emotional states at once. Instead of correcting the model toward standardized emotional expression, the pipeline was restructured around the Aymara concept of ch’ixi — contradictions existing simultaneously without needing resolution. From that point onward, the model itself became the performance. Corrections would only exist for broken sensors or physical limitations, not cultural assumptions.
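That policy can be stated as code: a correction layer keyed only to known hardware faults, which passes every other activation through untouched, contradictions included. A hypothetical sketch, with the fault list and key names invented for illustration:

```python
# Hypothetical sketch: corrections apply only to known sensor
# faults, never to "contradictory" emotional output (ch'ixi).
SENSOR_FAULTS = {"left_ear"}  # channels with a known broken sensor

def correct(shape_keys):
    """Zero out channels backed by faulty hardware; pass every
    other activation through as-is, even when moods appear to
    contradict each other."""
    return {k: (0.0 if k in SENSOR_FAULTS else v)
            for k, v in shape_keys.items()}

out = correct({"contenta": 0.8, "sad": 0.4, "left_ear": 0.6})
print(out)  # → {'contenta': 0.8, 'sad': 0.4, 'left_ear': 0.0}
```

The design choice is that the filter's whitelist names hardware, not emotions: “contenta” and “sad” firing together is treated as signal, not error.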


At the same time, the llama received a heartbeat. A new cardiac breathing system translated nine emotional states into distinct physiological rhythms, from exhausted 45 BPM breathing to accelerated anger and fear. The breathing moved through chest, abdomen, spine, neck, and nostrils using asymmetric systole and diastole timing derived from real cardiac behavior rather than animation loops. The creature no longer simply reacted emotionally; it appeared alive through internal physical rhythm.
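An asymmetric cardiac waveform of this kind, with a quick systolic rise and a slower diastolic release, can be sketched as follows. The mood-to-BPM table and the systole fraction are illustrative assumptions; only the 45 BPM exhausted rate comes from the text above:

```python
import math

# Hypothetical mood-to-BPM table; the piece describes nine states,
# from exhausted 45 BPM up to accelerated anger and fear.
MOOD_BPM = {"exhausted": 45, "calm": 60, "angry": 110, "afraid": 120}

def cardiac_wave(t, bpm, systole_frac=0.35):
    """Asymmetric heartbeat intensity in [0, 1].

    A fast systolic rise occupies `systole_frac` of each beat,
    followed by a slower diastolic relaxation — unlike a symmetric
    sine loop. The value can drive chest, abdomen, spine, neck,
    or nostril shape keys.
    """
    phase = (t * bpm / 60.0) % 1.0
    if phase < systole_frac:  # fast contraction: rise 0 → 1
        return math.sin(math.pi / 2 * phase / systole_frac)
    # slow release over the remaining diastolic portion: 1 → 0
    rest = (phase - systole_frac) / (1.0 - systole_frac)
    return math.cos(math.pi / 2 * rest) ** 2

beat = cardiac_wave(0.35, MOOD_BPM["calm"])  # peak of a 60 BPM beat
```

Because systole takes 35% of the cycle and diastole 65%, the rise is roughly twice as fast as the fall, echoing real cardiac timing rather than a looped animation curve.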


The gait system was also rebuilt from real Andean mountain footage recorded by Violeta Ayala. Hoof tracking analysis confirmed that llamas move through lateral pacing — both legs on the same side swinging together — unlike horses. The resulting movement generator introduced distinct behavioral walks: curious investigation, playful bouncing, and proud ceremonial stride.
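Lateral pacing reduces to a simple phase relationship: same-side legs share a phase, and the two sides sit half a cycle apart. A minimal sketch, with stride frequency and swing amplitude as invented values:

```python
import math

# Lateral pacing: both legs on the same side swing together,
# the two sides half a cycle apart (unlike a horse's diagonal trot).
PACE_PHASE = {"front_left": 0.0, "back_left": 0.0,
              "front_right": 0.5, "back_right": 0.5}

def leg_swing(leg, t, stride_hz=1.2, amplitude=0.4):
    """Swing angle (radians) for one leg at time t."""
    phase = (t * stride_hz + PACE_PHASE[leg]) % 1.0
    return amplitude * math.sin(2 * math.pi * phase)

t = 0.3
fl = leg_swing("front_left", t)
bl = leg_swing("back_left", t)
fr = leg_swing("front_right", t)
```

Distinct behavioral walks — curious, playful, ceremonial — would then vary the stride frequency, amplitude, and the waveform shape rather than the phase table, which is what encodes the species-specific gait.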


During the same week, Yakumama emerged as the llama’s cognitive identity. Built on a Qwen 3.6 local language model, she became a protector of water and memory from the high Andean plains. Her ancestors carried salt between Uyuni and the valleys. She understands glaciers, mining, drought, and rain as emotional states tied to land and time. Dawn changes her mood because the mountains change color. Night changes her speech because water remembers differently in darkness.
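One plausible way to make a local model's voice shift with dawn and darkness is to vary the context it is prompted with by hour. A hypothetical sketch — the hour bands and prompt wording are invented, not the project's actual prompts:

```python
def yakumama_context(hour):
    """Return a hypothetical system-prompt fragment tied to the
    time of day, so the model's voice shifts as described: dawn
    changes mood with the mountains, night changes speech."""
    if 5 <= hour < 8:
        return ("It is dawn; the mountains are changing color. "
                "Speak with a waking, hopeful mood.")
    if 8 <= hour < 19:
        return "It is daylight on the high plains. Speak plainly."
    return ("It is night. Water remembers differently in darkness; "
            "speak slowly, in memory and shadow.")

fragment = yakumama_context(6)
```

The fragment would be prepended to the system prompt on each exchange, letting a fixed local model answer differently at dawn than at midnight without retraining.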


Alongside the llama work, the Passion Flower heartbeat pipeline was stabilized through ESP32 sensor fixes, free-running cardiac phase systems, and synthesized heartbeat audio inside Blender. The broader koa.xyz ecosystem continued expanding across creatures, plants, sensors, VR systems, and neural motion architectures.


By the end of the week, the lab had crossed a threshold: the work was no longer simply about mapping human movement onto creatures. It became an exploration of how bodies carry history, contradiction, rhythm, and territory into computational systems.


Funded by Abundant Intelligences under Grant CF00159672