Control Garment Styling with Notes
Add per-image styling notes to your flat-lay photos and control exactly how the AI wears each garment: tucked, cuffed, layered, or recolored.

When you upload a flat-lay shirt and pants into Flat-to-Model, the AI decides how to dress the model. Should the shirt be tucked in or left hanging? Should the pants be cuffed or at full length? Until now, that was entirely up to the AI's interpretation, and it could change between outputs. For fashion e-commerce teams generating hundreds of product images per week, this inconsistency creates real production friction.
According to the Baymard Institute, the first thing 56% of users do on a product page is explore the product images. If your shirt appears tucked in one shot and untucked in another, it breaks the visual consistency that drives purchase confidence. Shopify's research backs this up: 75% of online shoppers rely on product photos when deciding on a potential purchase.
Styling notes fix this. They let you attach a short text instruction to any input image, telling the AI exactly how that garment should appear on the generated model.
What are styling notes?
A styling note is a free-text annotation on an individual garment image. You click a pencil icon on any uploaded thumbnail, type a short instruction like "tucked into pants, front tuck only," and the AI treats that as an authoritative constraint when generating the output.
Notes are completely optional. If you don't add any, everything works exactly as before. This makes it a progressive-disclosure feature: casual users never notice it, while power users get precise control.
Styling notes affect the wardrobe description step, where the AI decides how garments should be worn. They don't change the pose, background, or camera settings.
How it works
Step 1 — Upload your flat-lay images
Upload your garments in the Flat-to-Model wizard as usual. Nothing changes here.
Step 2 — Add notes to individual images
Hover over any thumbnail in the sidebar and click the pencil icon. A dialog opens showing the garment image on the left and a text area on the right. Type your styling instruction and click Done.
A small banner appears on the thumbnail, confirming the note is attached.
Step 3 — Submit and generate
Submit the job as usual. Your notes flow through the pipeline and directly guide how the AI styles each garment. The results page shows a "Note" badge on any image that had a styling note attached.
Without styling notes
We uploaded three flat-lay images and generated three outputs with no styling notes attached. The AI analyzed the garments and made its own decisions about tucking, fit, and cuffs.



The AI chose to fully tuck the shirt and keep the trousers at full length in all three poses. That's fine if it matches your intent, but you had no way to request anything different.
With styling notes
Same three input images, same identity, same poses. This time we added two styling notes:
- Shirt: "tucked into pants, front tuck only"
- Trousers: "slim fit look, cuffed at the ankles"
The difference is visible: the shirt now has a casual front tuck instead of a full tuck, and the trousers are cuffed at the ankles across all three poses. The styling is consistent and intentional.
Side by side


Same garments, same identity, same pose. The only difference is two short text notes.
Bonus: change garment colors
Styling notes can do more than control how a garment is worn. You can also use them to change visual properties like color. Here we kept the same three inputs and the same tucking/cuffing notes, but added "white color" to the trousers note:
- Shirt: "tucked into pants, front tuck only"
- Trousers: "slim fit look, cuffed at the ankles, white color"


The trousers changed to white while keeping everything else identical. This is useful when you want to show color variants of the same product without re-photographing the flat-lay. McKinsey estimates traditional fashion photography costs $500–$1,000 per SKU; generating color variants from a single flat-lay cuts that to near zero.
What you can control
Styling notes work with any garment detail the AI can interpret. Here are some examples:
- Tucking: "tucked into pants," "front tuck only," "untucked, hanging loose"
- Zippers and buttons: "zipper fully closed," "worn open, unzipped," "top two buttons undone"
- Sleeves: "sleeves rolled up to elbows," "sleeves pushed up casually"
- Layering: "this is the inner layer, worn under the jacket," "outer layer, draped over shoulders"
- Cuffs: "cuffed at the ankles," "pant legs rolled once"
- Fit descriptions: "slim fit look," "oversized, relaxed drape"
- Scarves and accessories: "draped loosely around the neck, not tied," "worn as a belt"
- Colors: "white color," "navy blue," "change to white"
Notes are per-image, not per-job. This means you can give different instructions to every garment in the same outfit. The AI still analyzes the images to understand the garments; your notes act as overrides on top of that analysis.
Available via API
Styling notes also work through the On-Model API. Pass each image as an object with an optional note field instead of a plain file ID:
```json
{
  "images": [
    { "file_id": "uuid-shirt", "note": "tucked into pants, front tuck only" },
    { "file_id": "uuid-pants", "note": "cuffed at the ankles" },
    { "file_id": "uuid-shoes" }
  ]
}
```
Images without a note field behave exactly as before. Both formats (plain strings and annotated objects) can be mixed in the same request.
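The request body above can also be assembled programmatically. Here is a minimal Python sketch; the `with_notes` helper is illustrative (not part of any SDK), and it assumes you already have the uploaded-file IDs from a previous step:

```python
def with_notes(file_ids, notes):
    """Build the "images" array for an On-Model API request.

    file_ids: list of uploaded-file ID strings, in outfit order.
    notes: dict mapping a file ID to its styling note. IDs without
    an entry get a plain object with no "note" field, which behaves
    exactly like the pre-notes request format.
    """
    return {
        "images": [
            {"file_id": fid, "note": notes[fid]} if fid in notes
            else {"file_id": fid}
            for fid in file_ids
        ]
    }

# Two garments get styling notes; the shoes are passed through untouched.
payload = with_notes(
    ["uuid-shirt", "uuid-pants", "uuid-shoes"],
    {
        "uuid-shirt": "tucked into pants, front tuck only",
        "uuid-pants": "cuffed at the ankles",
    },
)
```

Serialize `payload` as JSON and send it in your job-submission request as usual; everything else about the call is unchanged.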
"The gap between flat-lay photography and on-model imagery has always been styling intent. A folded shirt on a table carries no information about how it should be worn. Styling notes close that gap by letting brands encode their creative direction directly into the production pipeline."
— Nunzio Alexandro Letizia, Co-founder at PiktID and creator of On-Model
Try it now
Open any Flat-to-Model job and hover over a garment thumbnail to see the pencil icon. Add a note, generate, and compare the results with and without.
Already familiar with Flat-to-Model? Check out our Flat-to-Model guide or explore presets for even more control over your outputs.
Sources:
- Baymard Institute. (2025). Product Page UX: How Users Interact with Product Images. baymard.com
- McKinsey & Company. (2024). The State of Fashion: Technology Edition. mckinsey.com
- Shopify. (2025). Product Photography Statistics: Why Visuals Drive E-Commerce Sales. shopify.com
Read Next

Create AI Presets from Any Fashion Photo
Upload a reference fashion photo and let AI extract pose, background, lighting, and camera settings into a reusable preset — automate your product photography style in seconds.

5 Product Photography Inputs for AI On-Model Imagery
On-Model accepts flat-lays, outfits, ghost mannequin, on-model, and hanger shots. See all five input types produce the same PDP-quality fashion e-commerce output.

Flat-to-Model Presets: One Outfit, Every Style
Use On-Model's preset categories to generate PDP, lifestyle, editorial, and social imagery from the same flat-lay — plus custom presets for your brand.