Product Updates · 6 min read

Your Asset Library, Now Smarter

AI-detected keywords power the search bar so you can find any asset by content, and per-asset styling notes auto-fill the next time you reuse a garment.

By On-Model Team


As fashion teams move more of their production to AI, fashion asset management becomes the bottleneck. Within a few weeks, the same folder that used to hold a dozen flat-lays starts to hold thousands of inputs, outputs, reference shots, and prior generations. Two frictions show up in almost every workflow we watch: re-typing the same styling instruction every time a garment is reused, and hunting page by page for a specific item someone uploaded last month.

McKinsey's State of Fashion: Technology Edition ranks production time among the top three operational bottlenecks for digital fashion teams. Most of that time is not generation. It's organisation and reuse.

Two improvements land in the On-Model asset library today: per-asset styling notes that survive across jobs, and an AI image search that recognises garments, accessories, and person attributes. Your library now remembers, and it can be searched by what's actually in your images.

Notes that stick to the asset

Back in April we shipped per-image styling notes inside Flat-to-Model jobs: a short text annotation on any flat-lay that tells the AI exactly how that garment should be worn. Tucked, cuffed, layered, recoloured. It works, but every job started from a blank slate. If you reused the same shirt on a new job, you re-typed the same instruction.

Today those notes graduate from per-job to per-asset. Open any asset on the Assets page, click the pencil next to Styling note, and write the instruction once. The next time you pick that asset in a job, the note is already there.

It works in both Flat-to-Model and Create-Packshot. Model Swap continues to take its styling cues from the identity, so it is intentionally not affected.

Edit a note from inside a job's image picker and the change is also saved back to the source asset on submit. Whether you set notes from the library or from the picker, the asset is the single source of truth.

Set on the asset
Pre-filled on reuse

The whole interaction comes down to two rules: notes save where the asset lives, and they appear wherever the asset is used.

Find any asset by what's in it

The second improvement runs further upstream: AI image recognition is now built into every upload. Each image that lands in your library is passed through automatic content analysis. The system recognises clothing and accessories (Shirt, Pants, Dress, Jacket, Sneakers, Hat, Sunglasses, Bag, and so on) and person attributes (Adult, Child, Female, Male). These appear as a small read-only Detected by AI section on the asset detail panel, right above the styling note.

They also feed the search bar at the top of the Assets page. Until today, that field matched filenames, tags, and asset IDs. From today it also matches detected content, turning the same input into a real search by image content. Type glasses and the library narrows to assets that actually contain a pair of glasses, no matter what the files were named when they were uploaded. The same goes for sneakers, dress, jacket, or any other item the system recognises. No manual tagging, no taxonomy to maintain.

Detected by AI
Search by content

Detection is automatic. There is nothing to enable, nothing to label by hand. Any new upload picks up its detected keywords within a few seconds of landing in the library, and any prior upload keeps working as before until its analysis completes in the background.

Why it matters

Library quality compounds. Once your assets remember their own styling and respond to natural search, the same picker, the same upload action, the same job submission all get faster. A few examples we have already seen on customer accounts:

  • A team running ten PDP variants of the same shirt sets one note on the asset, and every variant inherits it.
  • A production specialist managing a 4,000-asset library finds a specific cropped blazer in three keystrokes, instead of paging through the grid.
  • A localisation lead pulls every dress in the library for a regional campaign by typing dress and exporting the filtered grid.

These are not generation features. They are workflow features. But on a busy week, they save more time than most rendering optimisations.

Where to find it

Open the Assets page and click any asset to see the new detail panel.

  • Tags stays where it was, with the existing four categories: model, product, background, other.
  • Detected by AI sits just below, listing the auto-recognised clothing and person tokens.
  • Styling note sits at the end of the panel, editable for assets you own.

The search bar at the top of the library transparently picks up the new content matches. There is no toggle, no separate filter; the same input does more.

Available via API

The Assets API returns the new fields on every asset. Each item now carries:

{
  "id": "...",
  "filename": "green-button-down.jpg",
  "tags": ["product"],
  "styling_note": "tucked into pants",
  "detected": {
    "has_person": false,
    "has_clothing": true,
    "clothing_items": ["Shirt", "Long Sleeve"],
    "person_attributes": [],
    "categories": ["Apparel and Accessories"]
  }
}

The search query parameter on GET /assets already matches against the detected content, so the same call you make to power your own search UI will return the same library-aware results. There is a dedicated PATCH /assets/<id>/note to update the styling note from your own tooling.
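To make the two calls concrete, here is a minimal Python sketch that builds the requests with the standard library. The base URL, the Authorization header, the parameter name search, and the PATCH body shape are assumptions for illustration; only the endpoints and the styling_note field come from the description above, so check the Assets API docs for the real values before wiring this into tooling.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request

API_BASE = "https://api.example.com/v1"  # hypothetical base URL
API_KEY = "YOUR_API_KEY"                 # placeholder credential

def search_assets_request(query: str) -> Request:
    """Build a GET /assets request; the search parameter also matches
    AI-detected content, so a query like "sneakers" returns assets that
    contain sneakers regardless of filename."""
    url = f"{API_BASE}/assets?{urlencode({'search': query})}"
    return Request(url, headers={"Authorization": f"Bearer {API_KEY}"})

def update_note_request(asset_id: str, note: str) -> Request:
    """Build a PATCH /assets/<id>/note request that updates the
    per-asset styling note (body shape assumed from the JSON sample)."""
    url = f"{API_BASE}/assets/{asset_id}/note"
    body = json.dumps({"styling_note": note}).encode()
    return Request(
        url,
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
```

Pass either request to urllib.request.urlopen (or translate it to your HTTP client of choice) to execute it; the same search call that powers your own UI returns the library-aware results described above.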

Full reference is in the Assets API docs.

Common questions

What is AI image search for a fashion asset library? It's a search that looks at the actual content of each image — clothing items, accessories, person attributes — instead of relying on filenames or manual tags. On On-Model, every upload is automatically analysed and indexed, so typing a word like blazer or sneakers returns every asset that contains one.

Do I need to tag my assets manually? No. Detection is automatic on upload. You can still maintain the four high-level tags (model, product, background, other) for filtering, but the underlying content tags are produced by AI and kept in sync without any manual work.

Where do styling notes apply? Per-asset styling notes auto-fill in Flat-to-Model and Create-Packshot whenever you reuse the asset. Model Swap takes its styling cues from the chosen identity and is not affected.

Can I use these features through the API? Yes. The Assets API exposes the new detected object and the styling_note field on every asset, and the existing search query parameter now matches detected content. See the Assets API docs.

Try it now

Open the Assets page, pick any asset you have uploaded recently, and look for the new Detected by AI and Styling note sections. Add a note, run a job that reuses that asset, and watch the note pre-fill. Then type a clothing word in the search bar and see your library narrow itself.

New to On-Model? Start with the Flat-to-Model guide, the Create-Packshot guide, or the Model-Swap guide.


Sources:

  1. McKinsey & Company. (2024). The State of Fashion: Technology Edition. mckinsey.com
  2. Baymard Institute. (2025). Product Page UX: How Users Interact with Product Images. baymard.com
Tags: asset-management, ai-image-search, styling-notes, ai-tagging, image-recognition, fashion-ecommerce, fashion-asset-management, flat-to-model, create-packshot, workflow, product-photography, product-updates