Hi, I’ve been building a tool for facial rigging that I think may be interesting to some mGear users, so I’m opening this thread to see if there’s any interest in integrating it with the plugin.
The rationale for the tool is that while mGear is pretty amazing at bipeds, it’s quite lacking in the facial department IMO. The tool follows mGear’s data-centric approach, and I’m pretty much using it as if it were part of mGear anyway, but since I’m not too familiar with mGear’s code I’m keeping the source as its own standalone set of modules.
The workflow is split into two parts: creating the joints and driving the facial poses.
Creating the joints works in a way anyone using mGear will recognize, I think: you draw guides, and it’s the guides that get serialized, not the rig.
All components follow a workflow I learned from Judd Simantov. The point controller is a locator that is always zeroed out and drives a joint. The eye component (it should really be called eyelid) rotates the upper and lower lids from the center of the eyeball so they travel in a nice arc rather than going straight up and down, and the jaw component constrains a subset of locators to its main driver so they follow the jaw while still staying zeroed out.
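To illustrate the eyelid idea, here’s a minimal 2D sketch (my own illustration, not the tool’s code): rotating the lid point about the eyeball center keeps it on the eyeball’s arc, whereas translating it would slide it straight up off the surface.

```python
import math

def rotate_about_center(point, center, angle_deg):
    """Rotate a 2D lid point about the eyeball center (hypothetical helper)."""
    ang = math.radians(angle_deg)
    x, y = point[0] - center[0], point[1] - center[1]
    rx = x * math.cos(ang) - y * math.sin(ang)
    ry = x * math.sin(ang) + y * math.cos(ang)
    return (center[0] + rx, center[1] + ry)

# A lid point one unit in front of the eye center stays at that radius
# as the lid opens, giving the arc instead of a straight vertical move:
closed = (1.0, 0.0)
half_open = rotate_about_center(closed, (0.0, 0.0), 30.0)
```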
The main tool of the system is the UI for actually building the poses. The workflow is pretty straightforward: add a pose to a host, set the driven attributes, sculpt the pose by moving locators around, capture it, then refine, and so on.
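The capture step, simplified, looks something like this sketch (names and data layout are my assumptions, not the actual implementation): the sculpted locator offsets get keyed onto each driven attribute’s curve at the current driver value.

```python
def capture_pose(pose, driver_value, sculpted_offsets):
    """Record sculpted locator offsets as a key on each driven
    attribute's curve at the current driver value (hypothetical names)."""
    for attr, value in sculpted_offsets.items():
        # Replace any existing key at this driver value, so re-capturing
        # refines the pose instead of stacking duplicate keys.
        keys = [k for k in pose["curves"].get(attr, [(0.0, 0.0)])
                if k[0] != driver_value]
        keys.append((driver_value, value))
        pose["curves"][attr] = sorted(keys)
    return pose
```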
The final position of each locator is calculated by a custom driver node. The node graph is pretty insane: each driven attribute of a pose is an anim curve, so you can imagine how fast that explodes.
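Conceptually the evaluation works something like this (a sketch under my own assumptions, using piecewise-linear interpolation as a stand-in for Maya anim curves): each pose’s curves are evaluated at the driver value and the contributions are summed per driven attribute.

```python
from bisect import bisect_left

def eval_curve(keys, t):
    """Piecewise-linear stand-in for an anim curve; keys is a sorted
    list of (driver_value, output) pairs."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    i = bisect_left([k[0] for k in keys], t)
    (t0, v0), (t1, v1) = keys[i - 1], keys[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def evaluate_pose_driver(poses, driver_values):
    """Sum every pose's curve output per driven attribute."""
    out = {}
    for pose in poses:
        weight = driver_values.get(pose["driver"], 0.0)
        for attr, keys in pose["curves"].items():
            out[attr] = out.get(attr, 0.0) + eval_curve(keys, weight)
    return out
```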
This tool is also designed with a data-centric approach in mind: rigs are serialized and can be rebuilt as long as the same locators/hosts exist in the scene.
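The serialization could be as simple as this sketch (the schema and function names are my assumptions): poses reference locators and hosts by name only, which is what makes rebuilding in another scene possible.

```python
import json

def serialize_poses(poses):
    """Dump pose data; nodes are referenced by name only, so the rig
    can be rebuilt in any scene that still contains those nodes."""
    return json.dumps({"version": 1, "poses": poses}, indent=2, sort_keys=True)

def deserialize_poses(text):
    data = json.loads(text)
    if data.get("version") != 1:
        raise ValueError("unsupported pose data version")
    return data["poses"]
```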
The actual blending code is pretty simple, but the node uses quite a few optimizations to keep playback responsive and fast. (I haven’t gotten channel-box sliding to be nearly as responsive as playback, unfortunately.)
For angular attributes the node evaluates angular curves directly, without any conversion nodes, which is a pretty nice optimization when you’ve got hundreds of angular curves.
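The idea behind that optimization, roughly (my illustration, not the node’s code): the degree-to-radian conversion happens inline during evaluation, instead of Maya inserting a conversion node per connection.

```python
import math

DEG_TO_RAD = math.pi / 180.0

def eval_angle(curve_eval, driver_value):
    """Evaluate a degree-valued curve and convert to radians inline,
    avoiding one conversion node per angular curve (hypothetical helper)."""
    return curve_eval(driver_value) * DEG_TO_RAD
```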
A few considerations. First, the code is not polished at all: mirroring only handles my own side-naming convention, and there are hardcoded paths everywhere. The system hasn’t been proven in production yet, and there are a few core issues to address, mainly a laggy channel box on heavy rigs, and the fact that capturing a pose while a different pose driving the same targets is active leads to jumps, due to the way poses are captured.
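For the mirroring limitation, one way to make it convention-agnostic would be something like this (a hypothetical sketch, not what the tool currently does): take the side tokens as a parameter instead of hardcoding one studio’s convention.

```python
def mirror_name(name, sides=("_L", "_R")):
    """Swap side suffixes; the pair is a parameter so the tool isn't
    tied to a single naming convention (hypothetical helper)."""
    left, right = sides
    if name.endswith(left):
        return name[: -len(left)] + right
    if name.endswith(right):
        return name[: -len(right)] + left
    return name  # center nodes mirror onto themselves
```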
Just for the record, this is not something I could ever integrate into mGear by myself, since I have neither the time nor any knowledge of mGear’s architecture, but if anyone is interested in integrating it I can contribute the code and try to implement other features.
Not sure if there’s any interest in a system like this. I think mGear’s approach is mostly blendshape-oriented, but this is aimed at games, where you need joint-based facial rigs, and I could never find a solution that didn’t involve heavy use of SDKs, which IMO are not very nice to work with. If anyone’s got any questions, I’d be happy to answer.




