Building scenes with consistent characters is quite challenging unless you go through the trouble of training your own AI model from scratch. And really, that’s the best way, if you have the time, money, and means.

For the rest of us, a quick and dirty way to get this done is the seed parameter of the /render command. A seed is a random number that lets the AI repeat a certain set of results when enough similar elements appear in the prompt.
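The idea is the same as seeding any pseudo-random number generator: the seed pins down the randomness, so the same seed plus the same inputs reproduces the same output. A toy sketch in Python (plain random numbers standing in for image generation, not the actual Stable Diffusion internals):

```python
import random

def fake_render(prompt: str, seed: int, n: int = 4) -> list[float]:
    """Toy 'renderer': the seed and prompt together drive the noise."""
    rng = random.Random(f"{seed}:{prompt}")
    return [round(rng.random(), 4) for _ in range(n)]

a = fake_render("girl in a white t-shirt, city street", seed=355734)
b = fake_render("girl in a white t-shirt, city street", seed=355734)
c = fake_render("girl in a white t-shirt, city street", seed=12345)

assert a == b  # same seed, same prompt: identical result
assert a != c  # different seed: different result
```

That is all a seed is: an anchor for the randomness, nothing more.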

This is a follow-up to the tutorial “How to get Stable Diffusion to do what you want”, which I recommend reading before continuing.

THE SETUP

One of our community members made this great image today and wanted to move her to another part of town. So we first take the initial prompt and render it:

/render /recipe:fuji a portrait photo of 21 y.o blonde russian girl wearing a (white t-shirt), old naked man staring on the background, city street <realvis13>

Reply to the photo as if you’re going to talk to it, and type /showprompt

That will display the SEED. A seed is a random number that acts as an anchor, letting you repeat the image. It’s also possible to quickly look up the seed by clicking into the picture; you’ll see it displayed towards the bottom of the debug message.

COPY THE BLUEPRINTS

After the image is created, a bunch of values that were randomly assigned in the background are exposed. This is the data you need to repeat a picture. At the bottom of the image, you’ll see all the parameters we need to roughly repeat this character.

So we then modify the prompt with the numbers it gave us. We need the seed, guidance, sampler, concept, and the entire original prompt all over again.

/render seed:355734 /guidance:13.26 /sampler:k_euler_a /recipe:fuji a portrait photo of 21 y.o blonde russian girl wearing a (white t-shirt), old naked man staring on the background, city street <realvis13>
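If you find yourself retyping these values often, a tiny helper can rebuild the command string from the numbers /showprompt gives you. This is just a convenience sketch of my own; build_render is a hypothetical function, not part of Pirate Diffusion:

```python
# Hypothetical helper: rebuild a /render command from the values
# /showprompt exposes (seed, guidance, sampler, recipe) plus the prompt.
def build_render(seed: int, guidance: float, sampler: str,
                 recipe: str, prompt: str) -> str:
    return (f"/render seed:{seed} /guidance:{guidance} "
            f"/sampler:{sampler} /recipe:{recipe} {prompt}")

cmd = build_render(355734, 13.26, "k_euler_a", "fuji",
                   "a portrait photo of 21 y.o blonde russian girl "
                   "wearing a (white t-shirt), city street <realvis13>")
print(cmd)
```

Paste the result back into the chat and you get (roughly) the same picture.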

NOW LET’S GO TRAVELING

Side note: Pirate Diffusion automatically saves all your images in the cloud, so you’ll never lose them. Just type /gallery in the room where you made them, and they will be there. For practicality, copying them to a local text file helps, too.

Now let’s say we want to teleport her out of the city and onto a tennis court. So we take the prompt that had her in the city…

a russian woman, in the city

/render seed:355734 /guidance:13.26 /sampler:k_euler_a /recipe:fuji a portrait photo of 21 y.o blonde russian girl wearing a (white t-shirt), old naked man staring on the background, city street 

and onto the tennis courts….

/render seed:355734 /guidance:13.26 /sampler:k_euler_a /recipe:fuji a portrait photo of 21 y.o blonde russian girl wearing a (white t-shirt), playing tennis

We slightly modify it, keeping all parameters intact and changing only the location. The entire prompt matters, not just the seed, so be meticulous about this. By using this approach, slowly making changes, you can move a character around without building your own model. Tada!
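Since everything except the location stays fixed, you can treat the prompt as a template with a single slot for the scene. A quick sketch, using the parameter values from this tutorial:

```python
# Keep seed, guidance, sampler, recipe, and the character description
# fixed; swap only the scene to "teleport" the character around.
BASE = ("/render seed:355734 /guidance:13.26 /sampler:k_euler_a /recipe:fuji "
        "a portrait photo of 21 y.o blonde russian girl wearing a "
        "(white t-shirt), {scene}")

scenes = ["city street", "playing tennis", "at the beach"]
commands = [BASE.format(scene=s) for s in scenes]
for cmd in commands:
    print(cmd)
```

Each command differs only in its final words, which is exactly why the character stays recognizable.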

a russian woman, at the tennis court

Let’s gradually make some more locations to see how far we can push this curious technique. Now we can have fun!

/render seed:355734 /guidance:13.26 /sampler:k_euler_a /recipe:fuji a portrait photo of 21 y.o blonde russian girl (bee keeper costume), panicked facial expression, underwater in the deep sea with strange fish, lots of air bubbles

a russian woman, underwater bee keeping

/render seed:355734 /guidance:13.26 /sampler:k_euler_a /recipe:fuji a portrait photo of 21 y.o blonde russian girl (white toga), at The Last Supper, in ancient Italy 

a russian woman, at the last supper

/render seed:355734 /guidance:13.26 /sampler:k_euler_a /recipe:fuji a portrait photo of 21 y.o blonde russian girl (turned around, view from the back), riding a red Ferrari, wearing tiny jean shorts, as dirty old men watch her in the background, in Times Square

Yep, AI was a mistake. But it’s so fun.

a russian woman... or a different woman? let’s fix that

IS THAT EVEN THE SAME WOMAN?!

Nope, that’s not her.  Don’t panic.

You’ll notice the character has started to change depending on the situation: the hair has become wavy, and she looks older. To fix this, we can use positive and negative prompts to bring the character back to her familiar traits, while keeping the same seed. And somehow, it works wonders.

/render seed:355734 /guidance:13.26 /sampler:k_euler_a /recipe:fuji a portrait photo of ((21 year old)) blonde russian girl (very straight platinum blonde hair,  turned around, view from the back), touching a red Ferrari, wearing tiny jean shorts, as dirty old men watch her in the background, in Times Square [[[sunglasses]]] 
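The weighting syntax here is mechanical: each pair of parentheses boosts a term, and each pair of square brackets suppresses it, so [[[sunglasses]]] is a strong negative. If you want to script that, a minimal sketch:

```python
# Helpers for the (term) / [term] weighting syntax used in the prompt above.
# Each extra nesting level strengthens the effect.
def emphasize(term: str, level: int = 1) -> str:
    return "(" * level + term + ")" * level

def suppress(term: str, level: int = 1) -> str:
    return "[" * level + term + "]" * level

print(emphasize("21 year old", 2))  # ((21 year old))
print(suppress("sunglasses", 3))    # [[[sunglasses]]]
```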

And she’s back. See? No sweat, she can time travel anywhere now. Or maybe that’s not her. Isn’t her nose different? Well, you know how to fix that now.

a russian woman, there she is again, in Times Square

You can also use seed to re-render old images

Track back through your /gallery and try your favorite old images with new AI models; that’s half the fun.

Caveats and limitations

This is easier when working with human characters. When you’re trying to do it with a very unique character, like a CGI fox with a persistent, uniquely shaped scar on its face, this technique is not suitable.

There’s really no substitute for training your own model, but let’s be honest, that’s a lot of work. You have to catalog a person, tag each photo, build the model, and rebuild it until the weights don’t produce glitches. And that costs money per attempt, or an up-front hardware cost.

For hobbyists and folks in a hurry, nah.

For the rest of us who just want a slight change of scenery for some favorite photos, this works pretty well, if you’re patient.

To go even faster, I recommend adding /steps:wayless to your /render to get a lot of different poses back from the server quickly; then copy down the seed number, and only then work in high res. This way, you’re not waiting for pristine images you won’t use.
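The whole explore-then-refine loop looks roughly like this in Python; send() is a hypothetical stand-in for typing a command into the chat room:

```python
import random

# Hypothetical stand-in for typing a command into the chat room.
sent = []
def send(command: str) -> None:
    sent.append(command)

prompt = "a portrait photo of 21 y.o blonde russian girl, playing tennis"

# 1) Fire off several cheap, low-step renders with random seeds.
seeds = [random.randrange(1_000_000) for _ in range(4)]
for s in seeds:
    send(f"/render /steps:wayless seed:{s} {prompt}")

# 2) Pick the seed behind your favorite result (pretend it's the first),
#    then re-render just that one with full quality settings.
best = seeds[0]
send(f"/render seed:{best} /guidance:13.26 /sampler:k_euler_a /recipe:fuji {prompt}")
```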

Render-Ahoy!
