Create AI Presets from Any Fashion Photo
Upload a reference fashion photo and let AI extract pose, background, lighting, and camera settings into a reusable preset — automate your product photography style in seconds.
By On-Model Team

Creating custom presets has always been the most powerful way to control your output — but filling in 15+ fields manually? That's where most users dropped off. We saw it in the data: teams stuck with default categories instead of building their own, even when their brand needed something different.
Some power users discovered a workaround — pasting reference images into ChatGPT and asking it to generate our JSON format. Clever, but clunky. We decided to bring that workflow directly into the platform.
Extract from Image
The new From Image button in the preset creation wizard lets you upload or select any fashion photograph and have AI automatically extract all style settings into a ready-to-use preset. One image in, one preset out.
Here's what happens under the hood: the image is sent to our AI pipeline, which analyzes the photograph and returns structured values for every preset field — pose, background, style, expression, mood, color palette, camera settings, and lighting setup.
The result lands directly in the Visual Editor with all fields populated. From there, you can review, tweak, and save.
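To make the shape of that structured response concrete, here is a minimal sketch of parsing it into typed fields. This is an illustration only: the field names mirror the example JSON shown later in this post, but the `Preset` classes and `from_extraction` helper are hypothetical, not On-Model's actual internals.

```python
import json
from dataclasses import dataclass, field

# Hypothetical data model: field names mirror the preset JSON shown
# later in this post; this is not On-Model's real implementation.

@dataclass
class CameraSettings:
    framing: str = ""
    angle: str = ""
    lens: str = ""
    aperture: str = ""

@dataclass
class LightingSetup:
    direction: str = ""
    quality: str = ""
    complexity: str = ""

@dataclass
class Preset:
    pose: str = ""
    background: str = ""
    style: str = ""
    expression: str = ""
    mood: str = ""
    color_palette: str = ""
    camera: CameraSettings = field(default_factory=CameraSettings)
    lighting: LightingSetup = field(default_factory=LightingSetup)

    @classmethod
    def from_extraction(cls, raw: str) -> "Preset":
        """Parse a JSON extraction response; missing fields fall back to defaults."""
        data = json.loads(raw)
        camera = CameraSettings(**data.pop("camera", {}))
        lighting = LightingSetup(**data.pop("lighting", {}))
        return cls(camera=camera, lighting=lighting, **data)

# Example: a partial extraction still yields a fully populated preset object
raw = '{"pose": "leaning forward", "camera": {"lens": "35mm wide angle"}}'
preset = Preset.from_extraction(raw)
print(preset.camera.lens)  # -> 35mm wide angle
```

The nested structure (creative fields at the top level, camera and lighting as sub-objects) is what lets the Visual Editor populate every field from a single response.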
How it works
Step 1 — Open the preset wizard
Navigate to Presets and click New Preset. Fill in the basics (name, description, category, type) and continue to the Style & Settings step.
Step 2 — Click "From Image"
In the top-right corner of the Style & Settings step, click From Image. This opens the asset picker where you can either:
- Select from your library — pick any image you've already uploaded
- Upload a new image — drag and drop or click to upload a reference photo
Step 3 — Confirm and extract
After selecting an image, you'll see a confirmation message: "This will use 1 credit to extract preset settings." Click Confirm, and the AI analyzes the photograph; extraction takes a few seconds.
Once complete, the wizard advances to the Visual Editor with all fields populated:
- Creative fields — pose, background, style, expression, mood, color palette
- Camera settings — framing, angle, lens, aperture
- Lighting setup — direction, quality, complexity
The AI also suggests a preset name and description based on what it sees in the image.
The extracted preset is a starting point, not a final product. Review the fields in the Visual Editor and adjust anything that doesn't match your intent — the AI gets you 90% of the way there.
What makes a good reference image?
The quality of the extraction depends on the input. For best results:
- Use high-resolution photos — the AI needs to see details like lighting direction and depth of field
- Choose images with clear, visible lighting — dramatic or well-defined lighting setups produce more specific extraction results
- Full-body or three-quarter shots work best — these give the AI enough context for pose, framing, and background
- Any fashion photography style works — editorial, street, studio, lifestyle, e-commerce. The AI adapts to whatever you feed it
The extraction costs 1 credit per image. If you're not happy with the result, adjust the fields manually and try a different reference image — each extraction is independent.
In action — from reference to output
Here's a real example. We found a casual streetwear photo we liked online — hard natural sunlight, athletic pose, outdoor setting with an old SUV — and used Extract from Image to create a preset from it. Then we applied that preset to our own product with Paul as our AI model:
What the AI extracted:
{
  "pose": "leaning forward slightly, dribbling a basketball",
  "background": "outdoor setting with a white old SUV and a building",
  "style": "casual sportswear lifestyle",
  "expression": "neutral, confident with sunglasses",
  "mood": "active and laid-back",
  "color_palette": "navy blue, crisp white, and warm earth tones",
  "camera": {
    "framing": "mid-shot, thigh up",
    "angle": "slightly high angle looking down",
    "lens": "35mm wide angle",
    "aperture": "f/8 deep focus"
  },
  "lighting": {
    "direction": "strong side lighting from the left",
    "quality": "hard natural sunlight",
    "complexity": "natural ambient daylight"
  }
}
Reference vs. generated output:


Same pose, same outdoor setting with an old SUV, same hard side lighting, same sunglasses, same camera angle — but with our product and our AI model.
The flat-lay input and generated output:

The entire process took under a minute: upload the reference, confirm the extraction (1 credit), review the preset, and generate.
Switch to the JSON tab at any time to see or edit the raw extraction. This is especially useful if you want to copy the preset and make variations.
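If you copy the raw JSON out of that tab, variations can be scripted outside the platform. A minimal sketch, assuming you've pasted the extracted JSON into a string (the field names match the example above; the variation loop itself is a hypothetical workflow, not a built-in feature):

```python
import copy
import json

# Hypothetical workflow: clone an extracted preset and swap one field
# per variation, keeping everything else identical.
base = json.loads("""{
  "pose": "leaning forward slightly, dribbling a basketball",
  "lighting": {
    "direction": "strong side lighting from the left",
    "quality": "hard natural sunlight",
    "complexity": "natural ambient daylight"
  }
}""")

lighting_qualities = [
    "soft diffused daylight",
    "hard natural sunlight",
    "golden-hour backlight",
]

variations = []
for quality in lighting_qualities:
    variant = copy.deepcopy(base)  # deep copy so the original stays untouched
    variant["lighting"]["quality"] = quality
    variations.append(variant)

print(len(variations))  # -> 3
```

Each variant can then be pasted back into the JSON tab as its own preset, which makes it easy to A/B different lighting or mood treatments against the same pose and background.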
Use cases
- Replicate a competitor's style — see a product shot you admire? Extract the preset and apply it to your own products
- Standardize from a reference shoot — use your best existing photo as the template for all future outputs
- Quick iteration — try different reference images to rapidly explore visual directions before committing to a full batch
- Onboard new team members — instead of explaining your brand style in words, just point them at a reference image
Try it now
Head to Presets → New Preset and click From Image in the Style & Settings step. Pick any fashion photo and see what the AI extracts.
Already using presets? Check out our Presets guide for more on categories, system presets, and building your own photoshoot briefs.