Template
Add Visuals to Blog Post
Run this workflow to add images, tables, and charts to your blog posts at scale and improve engagement.
Visuals are no longer a “nice to have” in 2025—they’re the difference between a skimmed headline and a fully read article. Studies show posts with relevant images earn 94% more views, and data-rich visuals can lift engagement by 180%. Yet for most teams, identifying where to slot in an illustration, chart, or table still means a manual pass through every paragraph. Our Add Visuals to Blog Post app removes that bottleneck by analyzing an article, suggesting up to five high-impact visuals, and automatically injecting ready-to-use assets—all without leaving Moonlit.
Walkthrough: How the App Works
1. Collect the raw article
The user drops a URL into the “Article” input. A Scrape Webpage node pulls only the cleaned body text—no navigation clutter—so the next steps focus purely on the narrative.
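Conceptually, the scrape step boils down to stripping chrome (navigation, scripts) and keeping paragraph text. This is a minimal stdlib sketch of that idea, not Moonlit's actual scraper:

```python
from html.parser import HTMLParser


class BodyTextExtractor(HTMLParser):
    """Collect paragraph text while skipping nav/script/style clutter."""

    SKIP = {"script", "style", "nav", "header", "footer"}

    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # nested depth inside tags we ignore
        self.in_p = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1
        elif tag == "p":
            self.in_p = True

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1
        elif tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p and not self.skip_depth:
            self.chunks.append(data.strip())


def extract_body_text(html: str) -> str:
    parser = BodyTextExtractor()
    parser.feed(html)
    return "\n\n".join(c for c in parser.chunks if c)
```

In practice the scrape node fetches the URL first; here the HTML is passed in directly to keep the sketch self-contained.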
2. Spot the visual opportunities
A Chat Model reads the article and returns a JSON package:
- `data`: up to five objects, each holding the excerpt (the text above), the suggested visual type (`"image"`, `"chart"`, or `"table"`), detailed instructions, and a unique replacement id.
- `article replace points`: the original HTML, now peppered with `<replace>id</replace>` markers where visuals should drop in.
The prompt gently nudges the model on when to choose tables vs. charts and enforces brand consistency by injecting `{{Design Specs}}` into the system message.
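For concreteness, here is a hypothetical example of that JSON package. The key and field names are illustrative; the real prompt may name them differently:

```python
import json

# Hypothetical output from the Opportunities step: one suggested visual,
# plus the article HTML with a <replace> marker at the injection point.
sample = json.loads("""
{
  "data": [
    {
      "excerpt": "Posts with relevant images earn far more views...",
      "visual_type": "chart",
      "instructions": "Bar chart comparing engagement with and without visuals.",
      "replacement_id": "viz-1"
    }
  ],
  "article_replace_points": "<p>Posts with relevant images earn far more views...</p><replace>viz-1</replace>"
}
""")
```

Downstream steps loop over `sample["data"]` and use each `replacement_id` to locate its marker in the HTML.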
3. Generate the assets
The app loops over each opportunity (via the hidden “iterator” pattern) and triggers three parallel Chat Models / Gen-AI nodes:
- Tables: GPT-4 crafts fully styled HTML tables inside a `<div>` wrapper.
- Charts: another GPT-4 call configures a Chart.js snippet, complete with unique canvas IDs to avoid clashes.
- Images: a Generate Image node (gpt-image-1) returns a brand-aligned illustration or photo at 1024 × 1024 px, following the design specs.
If a step isn’t relevant (e.g., a text callout needs only an image), the node simply returns `"NO"`, keeping costs lean. Techniques like these are part of our broader strategy detailed in Building High-Quality AI Content Pipelines, which emphasizes efficient, minimal-interference processes for AI content generation.
4. Assemble the final article
A Python Function stitches everything together. It:
- Pairs each opportunity with its generated asset.
- Replaces every `<replace>id</replace>` tag in the article with the corresponding HTML table, `<canvas>` block, or `<img src=...>` tag.
The result is clean, publish-ready HTML pushed to the Output node.
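A minimal sketch of what that assembly function might look like, assuming the generated assets arrive as a dict keyed by replacement id (names here are illustrative, not Moonlit's actual code):

```python
import re


def assemble_article(html: str, assets: dict) -> str:
    """Swap each <replace>id</replace> marker for its generated asset.

    Markers whose node returned "NO" have no asset, so they are dropped,
    leaving the surrounding prose untouched.
    """
    def swap(match):
        return assets.get(match.group(1), "")

    return re.sub(r"<replace>(.*?)</replace>", swap, html)


merged = assemble_article(
    '<p>Intro</p><replace>viz-1</replace><replace>viz-2</replace>',
    {"viz-1": '<canvas id="c1"></canvas>'},  # viz-2's node returned "NO"
)
```

Because unmatched markers collapse to an empty string, a skipped visual never leaks a raw `<replace>` tag into the published HTML.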
Customizing This App
- More than five visuals: raise the cap in the Opportunities prompt and adjust the Python loop—handy for longform guides.
- Add alt-text for accessibility: tweak the image generation prompt to return both URL and alt description, then inject it in the Python step.
- Brand-specific chart palettes: pass a JSON palette in “Design Specs” and reference it in the Chart prompt to enforce color consistency. For further tips on preserving your brand’s identity, check out our guide on maintaining brand voice across thousands of pages.
- Support callout boxes: create another Chat Model that returns HTML callouts when `visual type` = “callout”, and extend the Python logic to insert them.
- Multilingual content: prepend the Opportunities prompt with language instructions so the AI suggests visuals that resonate culturally.
Running at Scale with Bulk Runs
1. Export a CSV with two columns: Article (URLs) and optional Design Specs strings.
2. Navigate to Bulk Runs → New Job, choose the “Add Visuals to Blog Post” app, and upload the CSV.
3. Map columns to inputs, hit Run, and watch Moonlit generate and embed visuals across dozens—or hundreds—of posts in parallel.
4. Download the finished HTML or push directly to your CMS using a webhook step.
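If you are generating the input CSV programmatically, the two-column layout from step 1 can be built with Python's `csv` module (the URLs and palette string below are placeholders):

```python
import csv
import io

# One row per blog post: a URL plus an optional Design Specs string.
rows = [
    {"Article": "https://example.com/post-1", "Design Specs": "palette: #1a73e8, #fbbc04"},
    {"Article": "https://example.com/post-2", "Design Specs": ""},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Article", "Design Specs"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

Write `csv_text` to a file and upload it in the Bulk Runs dialog; the column headers then map directly onto the app's inputs.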