How to improve your main image with Claude and PickFu

Your product’s main image is doing a lot of work. On marketplaces like Amazon, it’s one of the only things standing between a shopper’s scroll and your product page. Once it’s live, it’s hard to know whether it’s actually pulling its weight or quietly losing clicks to a competitor with a sharper hero shot.

AI tools have collapsed the time it takes to spin up new image variations. But they’ve also made it easier to ship something that converts worse than what you started with. The trouble is, you’re taking a guess at what’s going to win – and losing clicks while waiting for enough live data to confirm whether you’re right.

So we built a workflow to help sellers solve this problem: generate image variants quickly with AI, validate them with real shoppers through PickFu, and iterate until you have a clear winner before committing to a final design.

PickFu’s MCP is what makes this possible. It’s a way to connect PickFu to AI tools like Claude, so you can access real consumer feedback without leaving the AI chat.

Co-founders Justin Chen and John Li walked through setting up the MCP and the full AI-powered main image workflow in a recent webinar – you can watch the recording below. If you’d rather read (hello fellow readers 👋), the rest of this post includes the same step-by-step process, best practices, and FAQs. Let’s dive in!

Why human validation matters

AI is great at speed and quantity, but the consumers buying your product are still humans. Generating 10 variants in five minutes is only useful if you can also figure out which of those 10 a real shopper would actually click on – and most importantly, why – so you can better understand your audience. That’s the gap PickFu fills.

When John and Justin started PickFu back in 2008, they were trying to solve the same problem: how do you get fast, unbiased feedback on business decisions without burning a week or sending a survey to your friends? MCPs just make that process tighter: your AI assistant talks directly to PickFu, so generation and validation happen in the same conversation.

The playbook in five steps

The workflow is simple enough to follow:

  1. Benchmark. See where your current image stands against competitors.
  2. Generate. Use the feedback to create new creative directions with AI.
  3. Test. Survey real shoppers on the variants.
  4. Analyze. Identify what’s working, what isn’t, and what to test next.
  5. Validate. Iterate until one variant clearly wins, then re-test against your original competitors.

What is the PickFu MCP?

MCP stands for Model Context Protocol — a (still relatively new) open standard that lets AI assistants like Claude, Cursor, and ChatGPT talk directly to other tools without you having to copy and paste anything between tabs.

The PickFu MCP lets your AI assistant create surveys, target audiences, generate images, and pull results on your behalf. You just chat with AI in your own words, and it can help you access and work with PickFu without leaving the conversation.
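Under the hood, MCP clients and servers speak JSON-RPC: the client asks a server what tools it exposes, then calls them with structured arguments. As a rough illustration only — `create_poll` and its arguments here are hypothetical, not PickFu's actual tool schema — a tool call looks something like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_poll",
    "arguments": {
      "title": "Which image would you click?",
      "respondents": 30
    }
  }
}
```

You never write this yourself — the AI assistant translates your plain-language request into calls like this and handles the responses.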

The MCP works with any MCP-compatible client. We recommend Claude because it’s the easiest to set up and the most reliable for multi-step work like this, but Cursor, Claude Code, and ChatGPT (with the right plan) all work too.

Setup in Claude takes about a minute:

  1. Open Claude, then click Customize → Connectors.
  2. Click the “+” button → Add custom connector. Name it PickFu and paste in this URL: mcp.pickfu.com/mcp.
  3. Authorize with your PickFu account.

Heads up if you’ve gone looking for it: the PickFu MCP isn’t in Claude’s connector marketplace yet (we’re working on it), so you’ll need to add it as a custom connector for now. Full setup instructions are here. If you don’t have a PickFu account yet, sign up for free.
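If you're using a client that's configured through files rather than a settings UI (Cursor, for example), remote MCP servers are typically registered in a small JSON config. A rough sketch — the exact schema and file location vary by client, so check your client's docs:

```json
{
  "mcpServers": {
    "pickfu": {
      "url": "https://mcp.pickfu.com/mcp"
    }
  }
}
```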

Step 1: Benchmark against competitors

Before you change anything about your image, find out where it actually stands.

Pull three competitors that show up on the same Amazon search term as your product, then run a single ranked poll asking shoppers which of the four they’d be most likely to buy.

Justin’s framing during the demo: “Obviously we like to see who wins, but the real nuggets of information are in the qualitative feedback and the written responses.”

That’s where the “why” lives — what specific elements are working for the winning image, what’s costing the others votes, and which themes show up over and over.

Try this prompt:

“Here’s my ASIN. Pull three of my competitors and set up a PickFu test asking Amazon Prime members which one they’d rather buy.”

In the demo, the protein bar example came in last out of four. That’s fine. Losing the benchmark gives you the clearest possible starting point — you know exactly which competitors to study and what the gap looks like in shoppers’ own words.

Step 2: Generate variants with AI

Now ask your assistant to read the qualitative feedback from step 1 and turn it into new creative directions.

Good prompts give the AI three things: the source material (the feedback), your platform’s rules (Amazon main image TOS, in this case), and a specific number of variants to generate.

“Analyze the feedback from the competitor test and identify opportunities to improve my main image. Then generate three new main image concepts for my product — follow Amazon main image TOS.”

A few practical notes on what to expect:

  • The model. The PickFu MCP uses Google’s Nano Banana 2 (their latest image model) by default. We picked it for consistency — it’s especially good at preserving brand assets and existing product details across variations, which matters when you’re iterating on a real listing.
  • Treat the output as a designer brief, not a final asset. You probably shouldn’t take an AI-generated image and ship it as-is — it’s better to think of these as fast directional briefs your designer can then clean up and finalize. (One of our demo variants printed “15g protein” three times on the same package. Useful directionally; not exactly shippable.)
  • Iterate in the chat. If a variant misses your brand guidelines, gets the colors wrong, or includes weird artifacts, just say so. Claude will regenerate with the corrections — and if you’ve already loaded brand guidelines or past performance data into your AI workspace, it’ll pull from that context too.

Step 3: Test the variants with real shoppers

Once you’ve got two or three variants you’re happy with, ask the AI to set up a new test in PickFu comparing them.

For iterative rounds, 30 to 50 respondents is usually plenty — small enough that results come back within an hour for a general audience, big enough that you can trust the signal. Add one or two targeting traits (Amazon Prime members, parents, pet owners — whatever fits your product) so the feedback comes from people who’d actually buy the thing. PickFu has 100+ targeting traits to choose from, and you can ask Claude to pull all of them and recommend the best matches based on your product and audience.

“Test the image variants with 30 Amazon Prime members. Ask them which image they would click when buying protein bars, and why.”

Claude will draft the survey, show it to you for approval, and launch it once you confirm. You only ever pay for polls you actually launch — exploration, prompts, and image generation are all free. Polls start at $1 per response.

One reminder: the MCP doesn’t ping you when a poll finishes. Just check back and ask Claude to pull the results when you’re ready.

Step 4: Analyze and form a hypothesis

When the results are in, ask the AI to break them down for you. The best prompts ask for the winner, the vote split, the dominant themes from the comments, and one specific change to test in the next round:

“Pull the results and break them down — winner, key themes from the comments, any noteworthy demographic splits, and which single change we should test next.”

End every analysis with a hypothesis in the form “People preferred X because Y.” That sentence becomes the input for your next round.

If you’ve got months of past PickFu tests in your account, this is also a great moment to ask Claude to look across them — it’ll surface themes that show up consistently across multiple tests, which is where the bigger strategic insights tend to live. (For more inspiration on what to ask, see our PickFu MCP prompt examples.)

Step 5: Iterate, then validate

Take the round-one winner, generate one more variant that keeps its strengths and addresses the top piece of feedback, and run the two head-to-head with the same audience.

Keep going until one version is consistently winning around 70% of the audience votes — that’s a solid signal you’ve found a strong creative direction. Testing should never be one-and-done, and the cheapest time to make changes is before you go live; you don’t want to launch first and then watch worse-than-expected data roll in.
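Why ~70%? With only 30 to 50 respondents, a narrow lead can easily be noise, while a 70/30 split is hard to explain by chance. Here's a quick sanity check — an illustrative sketch we're adding for context, not part of PickFu's methodology — using an exact two-sided binomial test against a coin-flip null:

```python
from math import comb

def binomial_p_value(wins: int, n: int) -> float:
    """Two-sided exact binomial test against a 50/50 null: the
    probability of a split at least this lopsided by pure chance."""
    k = max(wins, n - wins)
    # Probability of k or more votes landing on one side when p = 0.5
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 70% of a 30-person poll is 21 votes
print(f"{binomial_p_value(21, 30):.3f}")  # p ≈ 0.043
```

A 21-13... rather, a 21-9 split gives p ≈ 0.043 — unlikely to be random — while a 17-13 split (p ≈ 0.58) could easily be a coin flip, which is why a modest lead in one round isn't enough to stop iterating.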

Then comes the part that gets skipped most often: re-test the final image against the original competitors from step 1.

This is what closes the loop. Even if you don’t beat your top competitor outright, a relative improvement over your starting baseline means you’re taking back share in the search results. In the demo, the protein bar example went from losing the original benchmark to winning the variant tests by a wide margin — and showed measurable improvement when re-tested against the same competitor set.

FAQs

A few things attendees asked that are worth calling out:

Do I need a separate Gemini or Nano Banana subscription? No. Image generation is included free with your PickFu account, with no usage limits at this time.

Does this only work with Claude? No — any MCP-compatible client works (Cursor, Claude Code, ChatGPT with the right plan). We recommend Claude because the connector setup is the most straightforward and the chat is well-suited for multi-step research workflows.

Why isn’t the PickFu MCP in Claude’s marketplace? It’s pending approval (the process takes a while). For now, add it as a custom connector using the URL mcp.pickfu.com/mcp.

Can I attach images directly in the chat instead of using URLs? Not currently — there’s a security limitation in most AI clients that prevents image data attached in the chat from being passed to external connectors. Upload images to catbox.moe (our recommended host; it’s super easy to use) and paste the URL into the chat instead.

Can Claude monitor a poll and notify me when it finishes? Not natively. You’ll need to check back in and ask it to pull the results. (Some clients support scheduled tasks or workflows that can do this for you, but most users just refresh.)

Beyond main images

The same workflow works on almost any creative or strategic decision. A few of the most common use cases we see:

  • Titles and bullet points. Test rewrites against your current listing copy. Bullet points work especially well as ranked polls — shoppers will tell you which claims actually move them.
  • A+ content and infographics. Our survey builder renders A+ layouts in the same vertical format shoppers see on Amazon, so respondents are reacting to something close to the real experience.
  • Logos and packaging. As John put it during the demo, these are things you don’t change very often, which makes validating them with your target audience especially important. Pick a direction before you commit to print.
  • Names and taglines. When PickFu was being named, the team tested over 30 candidates — on PickFu. The first name you come up with is rarely the best one.
  • Ad creatives. Validate the message before you put paid spend behind it. Clicks alone won’t tell you whether your ad is sending the right signal; PickFu can.
  • New product concepts. Some of the highest-leverage tests our customers run are the ones that don’t lead to a launch. As John shared, sellers regularly tell us PickFu has saved them money by helping them realize a product idea wouldn’t sell at the price they’d need to make a profit — and they decide to scrap it before sourcing.

Anywhere a creative decision happens, this workflow can shorten it.

Try it on your own listing

You can run this entire workflow today, on your own ASIN or listing, in under an hour.

Start here for a collection of resources on using PickFu with AI: MCP setup docs, prompt examples, upcoming events, and more.

We also run live workshops fairly regularly. If you want to see the next one in this series — or any of the other AI-powered research workflows we’re sharing — check out our Luma page.


About PickFu

PickFu is an on-demand consumer insights platform. Build a survey, choose your target audience from our verified global panel, and get in-depth feedback within hours. Founded in 2008, PickFu is used by brands of all sizes worldwide to make confident decisions. Sign up for free. Surveys start at $1 per response.


Learn more: Optimize your product listings by testing design concepts, photos, and descriptions with a target audience of likely buyers.

Adrienne Van Niman

Adrienne Van Niman is the Marketing Lead at PickFu. She has 8+ years of experience as a marketer and writer, specializing in content strategy and wearing many hats for growing B2B tech companies. Outside of work, she loves to read, travel, go to concerts, and spend time in the great outdoors.