Choose Your AI Generation Engine
Pick which AI engine generates each Flat-to-Model or Model-Swap job, and see which one produced every output.

Flat-to-Model turns your flat-lay product photos into photorealistic on-model imagery, and different AI engines interpret the same inputs in different ways. Some lean clean and editorial, others lean dramatic and creative. Until now, we chose one for you. Starting today, you can pick the engine whose aesthetic fits your brand, or stay on Auto and leave the decision to us.
This rolls out across both products. Flat-to-Model offers Nano Banana Pro and Seedream. Model-Swap offers Onda and Nano Banana Pro. Flat-to-Model is where the aesthetic difference between engines is most visible, so that's what this post focuses on. A short section at the end covers the same dropdown in Model-Swap.
What's new
A new Generation model selector appears in the Flat-to-Model review step, right alongside the existing pose, background, and aspect ratio controls. The same choice is available via the API as an optional model field on the request's options object. If you don't set it, the default is Auto. That's exactly the behavior every job had before.
The Flat-to-Model engines
Three options:
- Auto (default): runs Nano Banana Pro first. If the input trips the safety filter, we transparently reroute to Seedream so the job still completes.
- Nano Banana Pro: Google's latest image model. Strong garment fidelity and clean photorealistic skin. No fallback on refusal.
- Seedream: ByteDance's image model. Often more dramatic lighting and a more creative interpretation of the styling. No fallback on refusal.
Auto keeps the built-in safety fallback enabled. Picking a specific engine turns it off, so if that engine refuses the input, the job fails cleanly and you can adjust or retry. When Auto runs, you can see which engine actually produced each output. We cover that below.
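If you do pin an engine, the retry decision becomes yours. A minimal client-side sketch of one way to handle that; the `FALLBACK` map and `engines_to_try` helper are illustrative names of ours, not part of the API or SDK:

```python
# Client-side fallback order when pinning an engine. Auto does this
# server-side; with a pinned engine, a refusal fails the job cleanly
# and you decide whether (and where) to retry.
FALLBACK = {"nano_banana_pro": "seedream", "seedream": "nano_banana_pro"}

def engines_to_try(preferred: str) -> list[str]:
    """Preferred engine first, then the other one as a manual fallback."""
    return [preferred, FALLBACK[preferred]]
```

Looping over `engines_to_try("seedream")` and resubmitting on failure roughly reproduces Auto's behavior, but with your chosen engine tried first.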
Same inputs, different engines
Here's the same flat-lay trio, the same identity, and the same pose, rendered by each engine.



Same garments, same identity, same pose: seated on a wooden stool against a warm ecru plaster wall. Nano Banana Pro renders the scene with a fully frontal composition, crisp collar lines, and cooler lighting that holds the icy blue of the shirt. Seedream nudges the model into a three-quarter turn, softens the tones, and pushes the backdrop a touch warmer, with a slightly more relaxed drape on the shirt. Neither is better. They're different aesthetic defaults, and picking the one that fits your brand catalogue is the point.
Which engine produced each output
Once a job finishes, you can see which engine was actually used for every result.
In the app, a small engine icon sits next to the processing time on each output card. Hover over it for the engine name.
In the API, each result in GET /jobs/<id>/results carries a model_used field:
{
  "job_id": "8c9f1a...",
  "status": "completed",
  "results": [
    {
      "image_index": 0,
      "group_index": 0,
      "model_used": "nano_banana_pro",
      "output": { "full_size": "https://..." }
    },
    {
      "image_index": 1,
      "group_index": 0,
      "model_used": "seedream",
      "output": { "full_size": "https://..." }
    }
  ]
}
This matters most when you submit a job on Auto. If you asked for four outputs and three come back as nano_banana_pro while one comes back as seedream, you instantly know that single slot hit the safety fallback. No guessing, no support ticket.
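That audit takes only a few lines of Python. A sketch, assuming you have already parsed the results array from the response above (the sample data below is made up for illustration):

```python
from collections import Counter

def engine_breakdown(results):
    """Tally which engine produced each output of a finished job."""
    return Counter(r["model_used"] for r in results)

# Example: an Auto job where one of four outputs hit the safety fallback.
results = [
    {"image_index": 0, "model_used": "nano_banana_pro"},
    {"image_index": 1, "model_used": "nano_banana_pro"},
    {"image_index": 2, "model_used": "nano_banana_pro"},
    {"image_index": 3, "model_used": "seedream"},
]
fallback_hits = [r["image_index"] for r in results
                 if r["model_used"] == "seedream"]
```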
Using it via the API
Pass model inside the options object on your Flat-to-Model request:
{
  "project_id": "...",
  "images": ["uuid1", "uuid2", "uuid3"],
  "identity_code": "default-pro-...",
  "instructions": [ /* ... */ ],
  "options": {
    "model": "nano_banana_pro"
  }
}
Valid values: "auto" (default), "nano_banana_pro", or "seedream".
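Since a typo in that string only surfaces as a server-side error, it can be worth validating before you send. A small sketch; `build_options` is a helper name of ours, not part of any SDK:

```python
VALID_MODELS = {"auto", "nano_banana_pro", "seedream"}

def build_options(model: str = "auto") -> dict:
    """Build the request's options object, catching typos locally."""
    if model not in VALID_MODELS:
        raise ValueError(
            f"unknown model {model!r}; expected one of {sorted(VALID_MODELS)}"
        )
    # Omitting the field entirely is the same as passing "auto".
    return {} if model == "auto" else {"model": model}
```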
In our public Python integration, it's a single CLI flag:
python flat_to_model.py \
  --input-folder SKU/ \
  --identity-code <your-identity-code> \
  --model nano_banana_pro
Full reference is in the Flat-to-Model API docs.
Model-Swap gets a dropdown too
The same Generation model selector is now in the Model-Swap review step.
The default is Onda, PiktID's proprietary swap engine, which is what every Model-Swap job has run on to date. The alternative is Nano Banana Pro, available for teams who want to try a different aesthetic on their swaps. model_used works exactly the same way: every result in GET /jobs/<id>/results tells you which engine produced it.
Pass swap_options.model on a Model-Swap job, or --model onda | nano_banana_pro on the Python integration.
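As a sketch, a Model-Swap request body might carry the field like this. Only swap_options.model is documented here; the surrounding fields are assumed by analogy with the Flat-to-Model example above and may differ in the real endpoint:

```python
# Hypothetical Model-Swap payload shape; check the API docs for the
# actual required fields.
swap_job = {
    "project_id": "...",
    "swap_options": {
        "model": "onda",  # or "nano_banana_pro"
    },
}
```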
"Different engines are different tools. For a long time we picked one for you, and that was the right default. But brands and agencies who run hundreds of SKUs a week know their aesthetic, and they deserve the choice. Exposing engine selection turns On-Model from a black box into a controllable production pipeline, and
model_usedon every output makes the choice auditable."— Nunzio Alexandro Letizia, Co-founder at PiktID and creator of On-Model
Try it now
Open any Flat-to-Model or Model-Swap job and look for the Generation model dropdown in the review step. Pick an engine, generate, and compare the result against Auto.
New to On-Model? Start with the Flat-to-Model guide or explore presets for even more control over the output.