A process breakdown of my fictional side project, built end to end with AI: design, code, images, content, and a video commercial.

Tools

Figma, Magicpath, Cursor, Storybook, Claude Code (Opus 4.5), Codex (GPT 5.3), Supabase, Vercel, Gemini (Nano Banana Pro), Luma Dream Machine (Photon), Higgsfield (Kling 3.0), Topaz Labs Bloom, Replicate (FLUX Schnell), ElevenLabs, Final Cut Pro X

Timeline

~2 weeks, mostly at night, with a mix of curiosity, caffeine, and questionable sleep decisions.

Shoutouts

thiings.co for the 3D illustration library. Skills trained on the work of Soren Iverson (copywriting and tone) and Emil Kowalski (animation and motion), used as reference and creative direction.

The Idea

Building the story before building the UI.

Years ago I came across this Amazon Dating concept project on Twitter and always wanted to make something similar: a little funny, a little absurd, but instantly understandable. Now with AI, I finally have the superpowers to actually build and ship ideas like this end-to-end, so I decided to try it myself.

While brainstorming with ChatGPT, the idea for warpbnb came up: Airbnb, but for booking stays across different eras. The goal wasn’t to build a real product. It was to create a fictional one that let me put every AI-adjacent skill into practice — design, code, copywriting, image generation, motion, even a full commercial. Everything it takes to build and sell a product, inside a single experiment.

Best case, I would make something cool. Worst case, I would learn a lot. It ended up being both.

Design Code

From design file to pixel-perfect code.

I started in Figma, drawing screens out the traditional way, figuring out what this thing should look like.

For the first design-to-code pass I used Magicpath. It is the fastest way to get a structured base out of a Figma frame and into code: good for that initial orientation, getting Cursor familiar with the structure and layout before anything else.

Once that base was in, I connected Cursor to Figma's MCP and exported all my design tokens (colors, typography, spacing) as JSON files, then dropped them into the Cursor codebase. This is the step most people skip. When Cursor has the actual values, it uses them. When it does not, it guesses, and its guesses are close enough to be annoying and wrong enough to cost you time. With the tokens in, Figma MCP becomes genuinely precise: attach the link, connect MCP, ask for exactly what you need.
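The token drop itself is simple to picture. Here is a minimal sketch of flattening exported token JSON into CSS custom properties so the agent references exact values instead of guessing; the token names and values below are hypothetical, not the project's actual export:

```typescript
// Hypothetical slice of the tokens JSON exported from Figma via MCP.
// Names and values are illustrative, not the project's real tokens.
const tokens = {
  color: { primary: "#FF385C", surface: "#FFFFFF" },
  spacing: { sm: "8px", md: "16px" },
  font: { body: "Inter, sans-serif" },
};

// Flatten nested tokens into CSS custom properties, e.g.
// { color: { primary: "#FF385C" } } -> { "--color-primary": "#FF385C" }
function flattenTokens(
  obj: Record<string, unknown>,
  prefix = "-"
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(obj)) {
    const name = `${prefix}-${key}`;
    if (typeof value === "object" && value !== null) {
      Object.assign(out, flattenTokens(value as Record<string, unknown>, name));
    } else {
      out[name] = String(value);
    }
  }
  return out;
}

console.log(flattenTokens(tokens));
```

Inject the result into a `:root` block (or hand the JSON to the agent directly) and every generated component can reference real values.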

Screens drawn out the traditional way before pushing to Magicpath.

From there I installed Storybook and asked my agent to create stories for every component. Each one in isolation, all variants, all states. This is where I did my design QA: padding off, hover states not triggering, dark mode breaking. A full-page review hides these things. Storybook does not.

For the actual tweaking I used Agentation. Describe what is wrong, it fixes it, review in Storybook. Tight loop, good visual reference, and you are not burning Cursor credits on every small correction.

Once the foundation (tokens, components) was in and Storybook was set up, everything downstream got dramatically easier. Mobile responsiveness was a prompt. Layout adjustments were a prompt. The hard part is always the beginning. After that you are mostly directing.

Every component in isolation, all variants, all states.

Vibe Content

The work is choosing, not prompting.

Getting the shell into code is one thing. Filling it with real content is another, and this is where Claude Code became genuinely addictive. Not a single line of copy on warpbnb was written by me. All of it was AI-generated.

I wanted the voice and tone to be unhinged, funny, and a little absurd — it is a fictional time-travel rental platform; it should not sound like a terms of service document. I trained a Claude skill on examples of the style I wanted. Soren Iverson's Twitter feed was the biggest source of influence.

Using that skill I asked Claude Code to generate everything: era descriptions, listing taglines, button text, listing rules, house rules, the whole thing. The output had more personality than most copy I see on real products. The model gave me material to work from; the judgment was mine.

Examples of guest reviews generated using custom trained skill.

The trick is not prompting harder. It is giving the model specific examples to work from. Show it the style you want and it will pick it up. That beats typing "make it funny" for the 50th time and hoping for the best.

For iconography I used a better-icons skill from skills.sh and pointed it at my codebase. It read through all the amenity copy, understood what each item was, found the relevant icon from the Lucide icon library, and placed it. The skill uses related names and tags to find the best match, not just the obvious one. No manual list from me. No matching by hand. It figured it out from context. This would have taken me hours of tedious back and forth. Claude Code did it in less than five minutes.

The better-icons skill helped me match 64+ amenities to the right icons without a single manual instruction.
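Tag-based matching like this is easy to sketch. The tag table below is invented for illustration; the real skill reads Lucide's own name and tag metadata rather than a hand-built map:

```typescript
// Hypothetical icon-to-tags table in the spirit of the better-icons skill.
// Lucide publishes name/tag metadata for its icons; these entries are made up.
const iconTags: Record<string, string[]> = {
  flame: ["fire", "hearth", "campfire", "heat"],
  wifi: ["internet", "wireless", "signal"],
  "bed-double": ["bed", "sleep", "bedroom"],
  sword: ["blade", "weapon", "duel"],
};

// Score each icon by how many of its tags appear in the amenity copy,
// so related terms match even when the obvious icon name never appears.
function matchIcon(amenity: string): string | null {
  const words = amenity.toLowerCase();
  let best: string | null = null;
  let bestScore = 0;
  for (const [icon, tags] of Object.entries(iconTags)) {
    const score = tags.filter((t) => words.includes(t)).length;
    if (score > bestScore) {
      bestScore = score;
      best = icon;
    }
  }
  return best;
}

console.log(matchIcon("Open hearth fire for cooking")); // → "flame"
```

"Open hearth fire for cooking" never says "flame", but two of flame's tags appear in the copy, so it wins.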

For 3D illustrations I used thiings.co, which covered most of what I needed without generating from scratch. For anything that fell outside the library, I generated custom assets using Nanobanana as a reference point to build from.

thiings.co is the 3D illustration library that handled most of the visual heavy lifting.

Image-Gen

Curating takes more time than you think.

Curating the images was the real time cost. I had six image types for each listing — exterior, bedroom, restroom, living room, kitchen, and more. Each started with a prompt structure I drafted in ChatGPT to get the architecture right before touching any generation tool.

From there I prompted Nanobanana and Luma Dream Machine directly, with a lot of manual iteration. The part that took longest was not generating — it was curating. I spent more time thinking about what would look good, what felt right for the era, what matched the overall aesthetic, than I spent actually running generations. The time cost shifts almost entirely to judgment.

The prompt architecture I drafted for each listing before touching any image generation tool.
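That structure is essentially a fill-in-the-slots template. A hedged sketch of the idea, where the slot names and example values are illustrative rather than the actual prompt architecture:

```typescript
// Illustrative per-listing prompt template. The slots (era, room, mood,
// style) and the example values are invented for this sketch.
interface ShotSpec {
  era: string; // e.g. "Ancient Rome, 70 AD"
  room: string; // "exterior" | "bedroom" | "kitchen" | ...
  mood: string; // lighting and atmosphere direction
  style: string; // shared aesthetic across the whole listing set
}

function buildPrompt({ era, room, mood, style }: ShotSpec): string {
  return [
    `Photo of the ${room} of a rental stay in ${era}.`,
    `Mood: ${mood}.`,
    `Style: ${style}.`,
    "Architectural photography, no people, no text or watermarks.",
  ].join(" ");
}

console.log(
  buildPrompt({
    era: "Ancient Rome, 70 AD",
    room: "kitchen",
    mood: "warm torchlight, smoke haze",
    style: "editorial, consistent with the rest of the listing set",
  })
);
```

Locking the architecture first means every generation run only varies the slots, which is most of what keeps fifteen scenes feeling like one product.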

I tried to shortcut this by running everything through Gemini CLI piped into Nanobanana. The idea was to batch the whole thing and move fast. It flopped. Results were not good enough and I ended up going back to a more hands-on approach anyway.

For avatars it was a different story. I connected Replicate directly to the codebase — it read who the host and reviewer were from the code and generated the images accordingly. No prompt needed. It figured it out from context.
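For anyone curious what that wiring can look like, here is a minimal sketch against Replicate's REST predictions endpoint. The model version id, host fields, and prompt wording are all placeholder assumptions; the real integration read host details straight from the codebase:

```typescript
// Sketch of generating a host avatar via Replicate's REST API
// (POST https://api.replicate.com/v1/predictions). Version id, host shape,
// and prompt wording are placeholders, not the project's actual values.
interface Host {
  name: string;
  era: string;
  vibe: string;
}

function avatarPrediction(host: Host) {
  return {
    // Placeholder: use the version hash of your chosen image model.
    version: "FLUX_SCHNELL_VERSION_ID",
    input: {
      prompt:
        `Portrait avatar of ${host.name}, a host from ${host.era}, ` +
        `${host.vibe}, soft studio lighting, square crop`,
      num_outputs: 1,
    },
  };
}

// Then something like:
// fetch("https://api.replicate.com/v1/predictions", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(avatarPrediction(host)),
// });
const example = avatarPrediction({
  name: "Marcus",
  era: "Ancient Rome",
  vibe: "toga, laurel wreath, welcoming smile",
});
console.log(example.input.prompt);
```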

One thing I did not expect: different models behaved completely differently depending on the era. Luma Dream Machine for futuristic settings — the lighting, the scale, the surreal quality. Nanobanana for historical eras — grittier, more grounded texture. I stumbled onto this through iteration. Once I noticed the pattern I leaned in and consistency improved.

What happens when you try to automate image generation without oversight and setting guardrails.

Image consistency across 15 scenes was a real challenge with no dedicated workflow tool. Retrospectively, Flora Fauna nodes might have helped — but I had already done most of the work by the time I realized that. Something for the next project.

Once I had selects I was happy with, I upscaled everything through Topaz Bloom.

Same image, before and after Topaz Labs Bloom. The difference is not subtle.

Vibe Design

The last 10% is the difference between good and great.

This is where I spent the most obsessive time after the image-generation phase.

I drew on Emil Kowalski's animation course and applied his principles around easing, timing, and physicality as a foundation. I also used a skill trained by Kyle Zentos on animation craft — dropped it in, asked where things could improve, and it delivered. Both together gave me a strong base to work from.

For the button particle effects, I used a component from codewards as a starting point.

To tune the interactions I built a temporary panel directly in the UI — sliders for easing, duration, intensity — so I could adjust everything live without re-running anything. It is the difference between feeling in control of your design and blindly submitting values and hoping. Worth noting: Josh Puckett has built a dial kit that does exactly this, which I now use on all my projects. Back then I just built my own version by prompting.
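The panel idea reduces to one pattern: sliders write into a shared motion config, and animations read from it at run time, so changes apply without reloading. A minimal sketch with illustrative names, plus one standard easing curve of the kind Emil Kowalski's material covers:

```typescript
// Illustrative live-tuning pattern: one mutable config that panel sliders
// write into and animations read each frame. Names are invented.
const motion = {
  durationMs: 320,
  intensity: 0.8,
};

// Standard easeOutCubic curve: fast start, gentle settle.
function easeOutCubic(t: number): number {
  return 1 - Math.pow(1 - t, 3);
}

// A slider handler just mutates the config; the next animation frame
// picks up the new value automatically, no re-run needed.
function onSliderChange(key: "durationMs" | "intensity", value: number) {
  motion[key] = value;
}

onSliderChange("durationMs", 260);
console.log(easeOutCubic(0.5), motion.durationMs); // 0.875 260
```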

For the checkout pages I took the transportation method images into Kling 3.0 to give them a delightful motion effect on selection.

I also ended up designing the warpbnb logo here. Generated the base in Nanobanana, vectorized it in Figma, inverted the Airbnb A to make a W-shaped face, and joined the two A's together. I also added a cursor follow effect in Rive — as you zoom closer, the eyes of the logo follow your cursor.
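The eye-follow math itself is small: point each pupil along the vector from eye to cursor, clamped to the eye's radius. A sketch, with illustrative coordinates and radius:

```typescript
// Pupil offset toward the cursor, clamped to the eye radius so the pupil
// never leaves the eye. Coordinates and radius are illustrative.
function pupilOffset(
  eye: { x: number; y: number },
  cursor: { x: number; y: number },
  maxRadius: number
): { dx: number; dy: number } {
  const dx = cursor.x - eye.x;
  const dy = cursor.y - eye.y;
  const dist = Math.hypot(dx, dy);
  if (dist === 0) return { dx: 0, dy: 0 };
  // Scale down only when the cursor is farther away than the eye radius.
  const scale = Math.min(1, maxRadius / dist);
  return { dx: dx * scale, dy: dy * scale };
}

console.log(pupilOffset({ x: 0, y: 0 }, { x: 30, y: 40 }, 5));
// → { dx: 3, dy: 4 } (clamped to radius 5 along the cursor direction)
```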

Hover, tap, jiggle, grow, snap back. Until we launch Mindscapes?

Inspired by Thanos's snap in Infinity War. Press the button and it disintegrates.

Three ways to travel through time. Each video plays on hover — generated in Kling 3.0 from still images.

Vibe Marketing

Building it is half the job. The other half is making people care.

For the commercial, I took the final images into Kling 3.0 — animating first frame to last frame — then brought some of those outputs into Luma Dream Machine for additional motion. Stitched the clips together afterward.

The full commercial pipeline in Higgsfield — all the video generations that went into the final cut.

For the voiceover, I cloned Airbnb's ad voice style in ElevenLabs. Writing the script to sound right when spoken took more iteration than expected — some words needed to be spelled phonetically to get pronunciation right, and I had to mark emphasis manually to land excitement in the right places. The difference between a flat read and an energetic one is often just a few spelling tricks and some punctuation. Small stuff, but that is what polish is made of.

The voiceover script in ElevenLabs — phonetic spelling and emphasis marks included.
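The same trick can be scripted: apply phonetic substitutions and emphasis before pasting the line into ElevenLabs. A small sketch; the substitution table here is invented, and you build yours by listening and iterating:

```typescript
// Hypothetical phonetic substitution table. Entries are examples only;
// the real list comes from hearing what the TTS mispronounces.
const phonetic: Record<string, string> = {
  warpbnb: "warp-bee-en-bee",
  Pompeii: "pom-PAY",
};

// Swap tricky words for phonetic spellings, then capitalize the words
// that should land with emphasis in the read.
function prepareScript(line: string, emphasize: string[]): string {
  let out = line;
  for (const [word, spoken] of Object.entries(phonetic)) {
    out = out.split(word).join(spoken);
  }
  for (const word of emphasize) {
    out = out.split(word).join(word.toUpperCase());
  }
  return out;
}

console.log(prepareScript("Book warpbnb today. Anywhere. Anywhen.", ["Anywhen"]));
// → "Book warp-bee-en-bee today. Anywhere. ANYWHEN."
```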

The Slop

Things are often chopped and cooked before they are served and eaten.

Not everything worked. Let me be clear about that.

Early on, I tried the obvious shortcut: drop everything into Lovable, paste in some Airbnb reference, and hope it would magically produce something close. It did not. What came out was generic, unstyled, and structurally off — the kind of output that looks like a design tool had a bad dream.

The whole point of this project was unslopifying that output. Getting from the slop baseline to something that actually looked and felt considered took real work. You cannot skip that part. The AI gives you a starting point. It does not give you taste.

Every tool in this stack will produce bad output if you let it. The gap between what AI generates by default and what you actually want to ship is where all the real design work lives. That gap is not shrinking as fast as people think.

Everything that went wrong before anything went right. The gap between this and the final product is the whole point.

Reflections

AI unlocked the ability to build end to end.

Two weeks, solo, zero to shipped. Design system, frontend, backend, custom copy, populated listings, host and reviewer avatars, micro-interactions, and a video commercial.

This entire project was an exercise in curation. I spent more time deciding what was good than I spent generating anything. The images, the copy, the motion — all of it required more judgment than execution. That was not what I expected going in, but it makes sense in retrospect. AI raises your floor dramatically. It does not raise your ceiling.

The other thing that kept hitting me: the foundation is everything. Once the codebase understood the design system, once Storybook was working, once the tokens were in — everything downstream got easy. Claude Code could just handle it. Mobile responsiveness was a prompt. The hard part is always the beginning. After that, you are directing, not building.

With AI, it all boils down to taste and curation.

More work, more stories.

If my work resonates or simply sparks your curiosity, I’d love to chat.
Email me at kingermayank[at]gmail.com