Build Your Own AI Model Library
Create proprietary AI models from a short description or a reference image, then reuse them across every campaign. No casting required.

A photoshoot needs a model. A model needs a booking, a day rate, a usage window, and a licensing footnote on the contract. For teams producing hundreds of looks per season across multiple markets, that cost scales with SKU count, and the face you cast for spring may not be available for fall. It is the single most unpredictable line item in a production budget.
McKinsey estimates that traditional fashion photography costs between $500 and $1,000 per SKU, and a growing share of that is talent, not equipment. On top of that, every market swap, every re-shoot, and every regional variant compounds the bill.
Create Identity is our answer: design the model you want, generate it in seconds, and save it to your library as a permanent, brand-owned identity you can call on any time in Model Swap, in Flat-to-Model, and in every future job that needs a face.
What is Create Identity?
A new wizard inside the Identities page. You describe a person, the platform generates face-portrait drafts, and you promote the one you want into your identity library. From that moment on it behaves exactly like any pre-built On-Model identity: usable in every workflow, reusable across every campaign, and owned entirely by your brand.
Generated identities are private to your account. They live alongside the public catalog but are visible only to your team, and each one counts toward your identity slots only after you promote it.
How Create Identity works
Step 1: Open the Identities page and click Create
The wizard opens on the Configure Instructions step. An empty instruction is ready for you, and a Load Preset button in the header lets you start from a saved brief if you have one.
Step 2: Describe the identity you want
Each instruction groups its fields into three structured sections:
- Appearance: gender, age, ethnicity, skin tone, build, size, height, expression
- Face: eyes, eyebrows, nose, lips, smile, face shape, facial hair, makeup, distinguishing marks
- Hair: color, length, style
Each field has clickable suggestion chips for common values and a text input for anything off-menu. Every field is optional; leave one blank and the AI picks something sensible.
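Because blank fields are simply omitted, a sparse brief stays sparse all the way down. As a sketch, an instruction with only a handful of fields set, in the same JSON shape the On-Model API accepts, could be as small as:

```json
{
  "appearance": { "gender": "female", "age": 28 },
  "hair": { "color": "ash blonde" },
  "num_variations": 3,
  "options": { "ar": "3:4", "format": "jpg", "size": "2K" }
}
```

Everything not specified here — face shape, eyes, build, expression — is left to the AI, exactly as when you leave a chip row untouched in the wizard.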
A sparkles icon on the instruction row unlocks From Image: pick a reference photo from your assets, and the platform fills Appearance, Face, and Hair automatically from what it reads in the picture. A pencil icon does the same from a short paragraph of text. Either way, the structured fields populate in place and you can tweak before generating.
Step 3: Review and generate
Pick how many variations you want (1 to 8), choose an aspect ratio and format, then dispatch. The job produces one face-portrait draft per variation. Click the one you like and promote it into your library.
Camera angle, framing, lighting, outfit, and background are standardized across every Create Identity job. The output is always a clean, front-facing portrait, so every identity in your library shares the same visual treatment.
One instruction, three variations
We ran a single instruction describing a 28-year-old North European woman with ash-blonde hair and pale blue eyes, asking for three variations. Each draft matches the brief; each face is subtly its own.
Same brief, three faces, none of which exist outside your library. Promote one and it becomes a stable identity you can reuse in any Model Swap or Flat-to-Model job.
Fill the fields from a reference image
Not every team wants to describe a person field by field. Click the sparkles icon on the instruction row, pick a reference image from your assets, and the platform reads it for subject-level traits: gender, age range, skin tone, eyes, eyebrows, hair color, hair style. The structured fields populate automatically, and you can tweak before generating.


The reference is read, not copied. The output is a new person who matches the brief, not the person in the photo. The reference image itself never appears in the final output and is not stored as part of the generated identity.
This is the same extraction pipeline that powers Flat-to-Model presets, retargeted to identity attributes only. It does not read outfits, backgrounds, or props; only the subject.
Expert mode: prompt the entire output
Some users would rather write a paragraph than click chips. Click the lock icon in the prompt field at the top of an instruction, confirm the warning, and the prompt editor unlocks. Anything you type there is sent verbatim to the generator, and the structured fields below are greyed out and ignored.
This is where you can express things the structured fields cannot easily capture: mixed heritage, specific hair arrangements, placement of beauty marks or tattoos, gaze direction, makeup finishes, or any subtle stylistic cue your brand has a point of view on. A complex prompt might read:
"A 31-year-old woman of mixed South Asian and North European heritage, long jet-black hair parted in the middle and twisted into a loose low bun with a few wisps framing her face, deep brown eyes with faint amber flecks, high cheekbones, a thin straight nose with a small gold septum ring, warm olive skin with a scatter of small beauty marks along the jawline, a faded fine-line geometric tattoo just visible on the left collarbone, natural minimal makeup with a slight matte finish. Her expression is calm and focused, as if mid-thought, gaze directed slightly off-camera."
Notice what the prompt does not say: no background, no outfit, no lighting notes. Expert mode sends your text to the generator verbatim, without adding any hidden defaults, so anything you omit becomes the AI's call. If you want a specific studio backdrop, a particular garment, or a set color palette, spell it out; if you leave it open, the model picks something reasonable and moves on.
Every instruction also has a braces icon that opens a JSON inspector. It shows the exact payload your job will send, in the same shape the API accepts. It is useful for debugging a brief, sharing an exact spec with a teammate, or preparing an API call.
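The exact payload shape for an expert-mode instruction is visible in that inspector. As a sketch — assuming the unlocked prompt travels in a single prompt field that stands in for the greyed-out structured sections — it might look like:

```json
{
  "instructions": [
    {
      "prompt": "A 31-year-old woman of mixed South Asian and North European heritage, jet-black hair in a loose low bun, deep brown eyes, natural minimal makeup, calm expression, gaze slightly off-camera.",
      "num_variations": 3,
      "options": { "ar": "3:4", "format": "jpg", "size": "2K" }
    }
  ]
}
```

The field name prompt is an assumption here; check the inspector on your own instruction for the authoritative shape before scripting against it.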
What you can control with Create Identity
Every field the wizard exposes, grouped:
- Appearance: gender, age, ethnicity, skin tone, build, size (with EU and US hints on the chips), height, expression
- Face: eye color and shape, eyebrow shape and density, nose, lips, smile, face shape, facial hair, makeup, distinguishing marks (freckles, moles, small tattoo, piercings, scars)
- Hair: color, length, style
- Output: aspect ratio (1:1, 3:4, 4:5, 9:16, and more), format (jpg or png), resolution up to 2K
- Variations: 1 to 8 drafts per instruction; promote any or all of them into your library
Everything else is standardized so every identity you create shares the same clean, front-facing portrait treatment. No mixed lighting, no mixed framing, no mixed outfits.
Available via API
The same flow is exposed on the On-Model API. Submit a list of instructions and receive draft image results. Call the promote endpoint with the draft you want to keep, and it graduates into a reusable identity on your account.
POST /identity/create
{
  "instructions": [
    {
      "appearance": {
        "gender": "female",
        "age": 28,
        "ethnicity": "North European",
        "expression": "confident"
      },
      "face": {
        "eyes": "blue",
        "skin": "light",
        "marks": "light freckles"
      },
      "hair": {
        "color": "blonde",
        "length": "long",
        "style": "wavy"
      },
      "num_variations": 3,
      "options": { "ar": "3:4", "format": "jpg", "size": "2K" }
    }
  ]
}
The response returns a job with draft image_result_id values. Call POST /identity/promote-generated on the draft you want to keep, and it becomes a full identity. Full reference in the On-Model API v2 docs.
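The two-call flow above — create, then promote — can be sketched as a small Python client. This is a minimal sketch, not a definitive implementation: the endpoint paths come from this post, but the base URL, the Bearer auth header, and the response field names are assumptions to verify against the On-Model API v2 docs.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com/v2"  # placeholder base URL (assumption)
API_KEY = "YOUR_API_KEY"


def build_instruction(appearance=None, face=None, hair=None,
                      num_variations=3, ar="3:4", fmt="jpg", size="2K"):
    """Assemble one Create Identity instruction.

    Empty sections are omitted entirely so the generator picks sensible
    defaults, mirroring how blank fields behave in the wizard.
    """
    instruction = {
        "num_variations": num_variations,
        "options": {"ar": ar, "format": fmt, "size": size},
    }
    for key, value in (("appearance", appearance), ("face", face), ("hair", hair)):
        if value:
            instruction[key] = value
    return instruction


def post(path, payload):
    """POST a JSON payload; requires a real API key and base URL."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",  # auth scheme is an assumption
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Build the same brief as the JSON example above.
payload = {"instructions": [build_instruction(
    appearance={"gender": "female", "age": 28,
                "ethnicity": "North European", "expression": "confident"},
    face={"eyes": "blue", "skin": "light", "marks": "light freckles"},
    hair={"color": "blonde", "length": "long", "style": "wavy"},
)]}

# With real credentials, dispatch the job and promote a draft you like.
# The response field names below are illustrative, not documented:
# job = post("/identity/create", payload)
# post("/identity/promote-generated",
#      {"image_result_id": job["drafts"][0]["image_result_id"]})
```

The promote call graduates the chosen draft into a full identity on your account, after which it is usable in Model Swap and Flat-to-Model like any pre-built identity.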
Try it now
Open the Identities page, click Create, and describe the first model in your brand-owned catalog. Your first identity takes under a minute to generate and is yours to reuse forever.
Already exploring the pre-built catalog? See our AI model catalog guide for the free, basic, and pro identities included with every account, or read our case study on why brand-owned identities matter.
Sources:
- McKinsey & Company. (2024). The State of Fashion: Technology Edition. mckinsey.com
Read Next

Control Garment Styling with Notes
Add per-image styling notes to your flat-lay photos and control exactly how the AI wears each garment: tucked, cuffed, layered, or recolored.

Create AI Presets from Any Fashion Photo
Upload a reference fashion photo and let AI extract pose, background, lighting, and camera settings into a reusable preset — automate your product photography style in seconds.

5 Product Photography Inputs for AI On-Model Imagery
On-Model accepts flat-lays, outfits, ghost mannequin, on-model, and hanger shots. See all five input types produce the same PDP-quality fashion e-commerce output.