
Data-centric face setup workflows

Hi Jeroen,

Layered rigs, or chaining blendshapes together like that, can have a big performance cost and - up to Maya 2018 at least - were not supported for parallel evaluation. (i.e. it makes your rig slow.)

Anytime I’ve seen someone rigging like this, it usually seems to be for rigging convenience (not a great reason), and not because it was actually required for some advanced effect (potentially a good reason). So you’re making your animators pay for your convenience.

And anytime I’ve had to “renovate” someone else’s rig with this technique, I’ve found it frustratingly complex, when the same result could have been achieved with a nice, simple, flat structure.

  • ngSkinTools gives you a powerful way to work in layers, and lets you do some very complex skinning. You can also export and import the data as JSON (see the weight export/import sketch after this list).
  • mGear has skin.py for exporting and importing skinning. Again, so you can work in “layers” of data, but assemble a final flat rig.
  • The Shape Editor lets you export and import shapes, and edit shapes directly on the mesh, while your character is in poses. Which also helps you work in layers, but deliver flat.
  • There is also the issue of people wanting their face controls to follow the face rig’s deformation. This is a fancy trick, but it also costs a huge amount of rig performance. And I’m finding more and more animators are just hiding controls and using synoptic pickers, or Studio Library poses to work with faces. So really, truly, honestly question if this is necessary, and measure the cost in FPS.
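
To make that “work in layers, deliver flat” idea concrete, here is a minimal sketch of exporting and re-importing skinCluster weights with Maya’s built-in deformerWeights command. The mesh name, file name, and directory are placeholders; ngSkinTools and mGear’s skin.py offer richer, layer-aware versions of the same round-trip.

```python
# Minimal sketch: write skinCluster weights to disk and load them back
# onto a freshly built, flat rig. "body_geo", the file name and the
# directory are placeholders for this example.
from maya import cmds

WEIGHT_DIR = "/path/to/rig/data"   # placeholder directory

def export_skin(mesh, file_name):
    """Write the mesh's skinCluster weights to an XML file."""
    skin = cmds.ls(cmds.listHistory(mesh), type="skinCluster")[0]
    cmds.deformerWeights(file_name, export=True, deformer=skin, path=WEIGHT_DIR)

def import_skin(mesh, file_name):
    """Read the weights back onto the mesh's (re)built skinCluster."""
    skin = cmds.ls(cmds.listHistory(mesh), type="skinCluster")[0]
    cmds.deformerWeights(file_name, im=True, deformer=skin,
                         path=WEIGHT_DIR, method="index")
    cmds.skinCluster(skin, edit=True, forceNormalizeWeights=True)

# export_skin("body_geo", "body_weights.xml")
# import_skin("body_geo", "body_weights.xml")
```

The point is simply that the weight data lives in files you can version and re-apply, so the delivered rig can stay one flat skinCluster.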

Finally, if you do want to work in layered geo, make sure you check out this article on the topic, on how to make it evaluate in parallel: https://medium.com/@kattkieru/deformation-layering-in-mayas-parallel-gpu-world-15c2e3d66d82

tl;dr: I don’t recommend it.

5 Likes

Hey @chrislesage,

This is good stuff! :slight_smile: I wasn’t aware this had such a big performance cost. (More reason to step away from Advanced Skeleton)

I usually use the layering to get some extra fleshy controls such as the cheek without overlapping the weights of the lips for instance.

Thanks for that link! I’ll definitely check it out :smiley:

Here is another link that might interest you. Ryan Porter shows a way to make softMods work in parallel (not for speed, but to deform simultaneously without interfering with each other or double-transforming.)

Example file: https://github.com/yantor3d/maya_examples/blob/master/parallel_deformers_example.ma

Something like this might be perfect for those fleshy bits on top of a skinCluster.

3 Likes

Actually, he has a couple of other articles that will likely be relevant to this.


Also, on Twitter a couple of years ago, he was talking about overdriving skinClusters, which meant something like this, if I recall correctly: if you use 1/10th of a skinCluster’s deformation space (weight to 0.1 instead of 1.0), then you can effectively layer 10 things on top of each other. Then drive the skinCluster envelope to 10.0 instead of 1.0, and they will all move at the same rate, but normalize to each other.

(That might be a huge misunderstanding or mis-remembering of what he was talking about. And working in this way will almost certainly require some Python skills, or at least ngSkinTools, because you aren’t realistically going to hand-paint this technique.)

But here is an example. I skin 4 distinct shapes into a skinCluster on 4 separate layers in ngSkinTools. Normalized to 1.0, each layer overrides the other, causing a big mess, as you would expect.

Then, I reduce each layer to 0.25 opacity, and overdrive the skinCluster to 4.0 envelope. They all have their own breathing room to not interfere with each other:

This isn’t necessarily practical for rigging, but it might give you some ideas that help you avoid chaining blendshapes together.

(by the way, if anyone is reading this and tries it, you might find the skinCluster envelope maxes out at 2.0. But you can connect it to another attribute and set that attribute to anything you want.)
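
Here is a rough sketch of that trick, assuming a hypothetical face_C_ctl control to hold the unbounded driver attribute, following the workaround described above.

```python
# Sketch: overdrive a skinCluster envelope past its 2.0 slider limit by
# driving it from a custom attribute. "face_C_ctl" and "skinCluster1"
# are placeholder names for this example.
from maya import cmds

ctl = "face_C_ctl"
skin = "skinCluster1"

# A float attribute with no upper bound to act as the envelope driver.
if not cmds.attributeQuery("envelopeOverdrive", node=ctl, exists=True):
    cmds.addAttr(ctl, longName="envelopeOverdrive",
                 attributeType="double", minValue=0.0, defaultValue=1.0,
                 keyable=True)

cmds.connectAttr(ctl + ".envelopeOverdrive", skin + ".envelope", force=True)

# With 4 ngSkinTools layers at 0.25 opacity each, an envelope of 4.0
# brings every layer back to its full deformation without the layers
# fighting over normalization.
cmds.setAttr(ctl + ".envelopeOverdrive", 4.0)
```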

4 Likes

These are some great resources. Definitely going to bookmark this thread and follow that blog. It’s really interesting because I’ve always wondered about the 6000 index of the blendshape node.

Animators are going to be very happy when I follow these hints indeed! :smile:

It is true that there is a performance cost. But I don’t agree that you are making the animators pay for your convenience.

At least in my experience, up to a certain point of complexity it is possible to do many features in a single layer. But past a certain point there are also many good reasons to use layered rigs for facials. (Also, some features I think are not possible to add without layers.)

  • Editing and adding features (iteration is king)
  • Time constraints/cost
  • Re-usability
  • Complex features

I know animators need fast rigs. But if they have to choose between a fast rig and a rig that reaches the pose/expression they need, the choice is clear. Plus, there are ways to minimize the performance cost using tooled rigs with callbacks and organization in the hierarchy. Also, if you have more than one character in the scene, the cost of this non-parallel evaluation gets diluted among the characters.

I have rigs with 20 layers; maybe some could be collapsed into one layer. But honestly, I think doing it in a single layer would be impossible. Also, the performance loss is not that bad.

@Jeroen PLUG: the facial rigging tutorial that I am working on is about these topics.

@chrislesage I would love to keep discussing this topic :slight_smile:

4 Likes

I would love to keep discussing this topic

Agreed! I’m very glad to hear your opposing opinion! And to anyone reading this: don’t trust either of us. Use the profiler and your own design sense. :slight_smile:
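
For anyone who wants a quick number before (or alongside) the Profiler, here is a crude playback-timing sketch; it just steps through the timeline and reports an average FPS, so treat it as a rough comparison between a layered and a flattened rig rather than a proper benchmark.

```python
# Rough sketch: measure how fast the scene evaluates by stepping through
# the timeline and timing it. Crude, but handy for comparing two rig
# setups in the same scene.
import time
from maya import cmds

def rough_fps(start=None, end=None):
    start = cmds.playbackOptions(query=True, minTime=True) if start is None else start
    end = cmds.playbackOptions(query=True, maxTime=True) if end is None else end
    frames = int(end - start) + 1

    t0 = time.time()
    for frame in range(int(start), int(end) + 1):
        cmds.currentTime(frame, update=True)
    elapsed = time.time() - t0

    fps = frames / elapsed if elapsed else 0.0
    print("{} frames in {:.2f}s -> {:.1f} fps".format(frames, elapsed, fps))
    return fps

# rough_fps()
```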

I approach writing code a lot like I approach my rigging as well. I avoid deep complexities. And I am sure that has cost me reaching certain levels of sophistication. But I always favour being nimble and light and maintainable. A rig with 20 layers is a nightmare to the team who inherits it when you’ve moved on.

2 Likes

Again, I disagree here: 20 well-organised layers, each with a clear function, are 20 simple rigs that are easy for any member of the team to manage and learn :wink:

EDIT: here is a list of the custom steps / layers that I am using in the facial rigging tutorial. In this case there are 9 layers + 1 base custom step to set up the base and some convenience functions.

[image: list of the custom steps / layers]

EDIT 2:

I agree with this! :smiley:

1 Like

That’s a really interesting topic; I wonder what your average frame rate is. I know that optimization rules, but in my case we have a lot of projects in a short amount of time, so iteration and the ability to change things fast is super important.
Usually I try to keep body + lips_joints + eye_joints + main blendshapes on the main setup, but the rest is layered (usually between 5-9 layers).

With scenes of up to 4-5 characters, we usually get 25-30 fps (with one character, or one isolated, it’s 30-50 fps).
Both cases are pleasant to animate with.
And btw, we have somewhat lame CPUs + GTX 1080/1070 GPUs.

1 Like

There are certainly scenarios where you’d pick one over the other.
Most of our jobs are commercials with short deadlines and require 4-5 rigs in a week or two at most.
Sort of like @Krzym :slight_smile:
Although through AdvancedSkeleton we usually end up with 10 layers or more…

The old way was to sculpt all the tech shapes and use SHAPES to link everything to controllers. This produced high-performance rigs with a lot of control and flexibility for shapes. (It also enabled pre- or post-infinity per shape to give more freedom.)
I believe this can be a very nice technique combined with the mGear workflow, as the shapes already get imported and the connections rebuilt.
The downside is that it’s very time-consuming to sculpt all the tech shapes and test them along the way.

Layered skinClusters can get you results quickly.
But perhaps the right tooling can bake these into a blendshape and give you the best of both worlds?
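
As a rough illustration of that baking idea (not any particular tool’s workflow), you could pose the layered rig, duplicate the deformed mesh, and feed the duplicate into a blendShape on the flat delivery mesh; all the names below are placeholders.

```python
# Sketch: bake the result of a layered setup into a blendShape target on
# a flat delivery mesh. "layered_geo", "flat_geo", the blendShape name
# and the pose name are placeholders for this example.
from maya import cmds

def bake_pose_to_target(layered_geo, flat_geo, blendshape, target_index, pose_name):
    """Duplicate the layered mesh in its current pose and add it as a target."""
    # The duplicate captures the fully deformed shape of the layered rig.
    target = cmds.duplicate(layered_geo, name=pose_name)[0]

    # Add the duplicate as a new target on the flat rig's blendShape node.
    cmds.blendShape(blendshape, edit=True,
                    target=(flat_geo, target_index, target, 1.0))

    # Once the target is stored, the deltas live on the blendShape node
    # and the duplicate mesh is no longer needed.
    cmds.delete(target)
    return "{}.weight[{}]".format(blendshape, target_index)

# bake_pose_to_target("layered_geo", "flat_geo", "face_blendShape", 0, "smile_pose")
```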

The blendshape workflow was based on https://vimeo.com/ondemand/sobelfacerig/122272067

The thing I appreciate about mGear is that it doesn’t feel like it has very strong design constraints. I feel pretty free to work my own design sense into the rigs and build process. The variety you can get is very freeing. And I manage complexity and fast iteration in the build process and how I store my data externally, not in the organization of the structure of layers. So I definitely like that!

Another thing about layers is how often they cause the geometry to stop evaluating properly on the GPU. Do you all not experience those problems? On layered rigs, I would get tickets that the geo is falling off the rigs, or popping, or eyeballs aren’t rotating. The cause seems to be that these live blendshape meshes are invisible, and so they don’t evaluate properly. And flattening it out fixes that.

I’m not convinced, but I’m fascinated and glad to hear more! I’ll likely do some more tests and relax my view on layers. :slight_smile: And maybe there are some ways to solve those evaluation issues.

1 Like

The freedom you get through mGear is great. I’ve been using it for about a week or two now, and I’m already sold on the idea that you can run pre and post scripts, export anything, and get things assembled in a clean scene.
And being able to just throw some ideas around and see other opinions on how to approach things is just very enlightening!

For instance, looking for options on how to layer things is just because that’s the way I’ve been rigging faces lately. It’s insightful to read how others approach these things, and we learn along the way :smile:.

These are things I’m definitely going to try out in production soon :).

I haven’t seen the GPU evaluation bugs too often though; running “ogs -reset” seems to fix them most of the time. Admittedly, I haven’t given much thought to the possibility that they could be caused by the layered setup.
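
For reference, that same reset is scriptable, which makes it easy to hand animators a shelf button while the underlying cause is being investigated:

```python
# Clear and rebuild Viewport 2.0's evaluation/GPU caches, the Python
# equivalent of running "ogs -reset" in MEL.
from maya import cmds
cmds.ogs(reset=True)
```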

2 Likes

Regarding the way I do the layered rig: the most basic/simple implementation only uses some blendshapes for the eyebrows (I think this could also be replaced with another system that doesn’t use blendshapes; I just didn’t invest the time to research it more).
Beyond that, it depends on the project requirements and time constraints.

Hey @Miquel. I have another client with another broken rig. It is a layered blendshape rig, done with Advanced Skeleton.

And sure enough, parts of the rig that flow through those blendshapes just stop evaluating at random times, including during rendering. So I’m really curious how you make this work. Since Maya 2016, I have had so many tickets where rigs just stop evaluating, and that is always the reason, and it is never consistent from one computer to the next or one moment to the next.

So do you have a secret? Why do your rigs work?

Hi @chrislesage

To be honest, I am not sure how Advanced Skeleton works internally, but I have never had this issue of a rig stopping evaluation because of a layered rig.
One thing I do know is that if you have a blendshape triggered by a transform from a joint, let’s say “rotateX”, and the input of the joint is using the compound XYZ connection, there are evaluation problems like the one you describe.
In that case the solution is not to use the compound connection (see the sketch below).
Not sure if that is your case, but I hope it helps.
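
To illustrate the workaround with placeholder node names: keep the connections into the joint on the individual channels rather than the compound rotate plug, while the blendshape keeps reading the single channel as before.

```python
# Sketch of the compound-vs-individual connection issue, with placeholder
# node names. The blendshape reads jaw_jnt.rotateX, so the way jaw_jnt's
# rotation is fed in matters for evaluation.
from maya import cmds

driver = "jaw_driver_srt"   # placeholder: node that drives the joint
joint = "jaw_jnt"           # placeholder: joint read by the blendshape

# Problematic setup: a single compound connection into the joint.
# cmds.connectAttr(driver + ".rotate", joint + ".rotate")

# Workaround: connect the individual channels instead of the compound plug.
for axis in "XYZ":
    cmds.connectAttr("{}.rotate{}".format(driver, axis),
                     "{}.rotate{}".format(joint, axis),
                     force=True)

# The blendshape weight keeps reading the single channel as before, e.g.:
# cmds.connectAttr(joint + ".rotateX", "lips_blendShape.weight[0]")
```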

let me know :slight_smile:

I’ll definitely check that one out. I know that issue too. If I find any other clues, I’ll report back. Thanks!

I would pay for a thorough tutorial on this. Who wants my money?!!

3 Likes

Really nice discussion is going on here!

Just dipped my feet into mGear and I am already in love with it.
I worked with @Jeroen for some time, really loved the place :cry:, but our roads separated. I worked on Ainbo as a rigging sup. There wasn’t time in the production to start from scratch (an animator was already in the house when rigging started, and it was a new studio; we didn’t even have a server), so I stuck with Advanced Skeleton for the whole production (and some mGear extensions).
We managed to improve the performance of the rigs by building tools/pipeline. But I really found out that AS is built for commercials and not for a big production with many characters to maintain. We had about 70 characters and 150 props and vehicles.

Now I am using mGear for a short film. And I must say it is great! Except the face is an area that could still use some love.

At a previous studio we built the face with parts that interconnect with each other, for example: Mouth → cheek → squint → eyes (lids). So you still have everything in one layer, but they are separate islands (modules) to troubleshoot, and with ngSkinTools you can paint those separate islands easily.

I think with this you have the best of both worlds. The parts have outputs and inputs to hook everything up, so you could drive the squint with the mouth corner and tweak how much it drives (see the sketch below). The final tweaks (blendshapes) would be on the part itself. So when you need to correct the shape of the squint with a blendshape, it is still triggered by the mouth, as the mouth drives the squint part that triggers the correctives.
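
Here is a minimal sketch of that input/output hookup, with hypothetical control and node names: the mouth corner drives the squint module through a multiplier, and the multiplier’s second input is the “how much it drives” tweak.

```python
# Sketch: drive one face module from another with a tweakable amount.
# "mouthCorner_L_ctl", "squint_L_driven" and the node names are
# placeholder names for this example.
from maya import cmds

mouth_ctl = "mouthCorner_L_ctl"   # placeholder: mouth module output control
squint_drn = "squint_L_driven"    # placeholder: offset group above the squint control

# Multiplier node scales how strongly the mouth corner pushes the squint.
mult = cmds.createNode("multDoubleLinear", name="squintFollow_L_mul")
cmds.connectAttr(mouth_ctl + ".translateY", mult + ".input1")
cmds.setAttr(mult + ".input2", 0.3)  # tweak how much the mouth drives the squint

# The scaled value feeds the squint module's input; correctives triggered
# by the squint module then fire automatically when the mouth corner moves.
cmds.connectAttr(mult + ".output", squint_drn + ".translateY", force=True)
```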

I should probably just start building these modules in Shifter. But maybe we can discuss it here before building.

any thoughts?

regards,

Willem-Jan

2 Likes

Have you seen the facial data-centric rigging series? Miquel is slowly going through a process like what you describe. There is also a package of example files and scripts on Gumroad. That would be where I’d start for a more advanced facial toolkit built into mGear.

https://www.youtube.com/watch?v=wzYpt6_bhzU&list=PL9LaIDCCDjfiR7Uod5UqIKnMM33XXsp80

1 Like

I have to find the time to finish the Facial workshop :sweat_smile:
I hope to resume it next month :stuck_out_tongue:

2 Likes