Building with LLMs - Part 1

By Me on 2025-01-11

AI
development

The Start of the Journey

Check out the about page for some information on what this is about and why I'm doing it.

In these blog posts I'll be documenting my journey of building this website with the help of LLMs. I'll mostly be using the v0 platform to generate the website, components, styles, etc. and then tweaking it with Cursor. I'll also be using ChatGPT and Siri to help me with writing and researching.

Before jumping into v0 I tried to think of a good prompt to get a useful result, when I realised I should of course just use AI to assist me with this! I asked ChatGPT to generate a prompt for me to use with v0.

Human

Help me craft a concise prompt for v0.dev to create a basic website structure that has the following things:

  • A landing page
  • An about page
  • Blog pages
ChatGPT

Here's a concise prompt you can use for v0.dev:

Prompt:

"Create a basic website structure with the following pages:

  • Landing page: A welcoming homepage with a title, subtitle, and call-to-action button.
  • About page: Includes a short description or bio section.
  • Blog: A main blog listing page and individual blog post pages.

Use semantic HTML with clean, responsive design."

Let me know if you'd like to adjust the tone or add more details!

The First Iteration

Armed with my first prompt I set off to generate the website. The first iteration was very straightforward and to the point. v0 assumed posts were going to come from some external CMS. I wanted to have the posts in the project and statically render the pages at build time, but we had a few mishaps along the way. The most interesting one was where v0 decided to create a blog editor page. It was funny to be on the other side for once: trying to get a developer to build the thing I have in my head. After some back and forth I got it to a point where I could create posts in the project and through the `[slug]` path Next.js would serve the correct post.
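For context, statically rendering the `[slug]` pages at build time in the Next.js App Router typically hinges on `generateStaticParams`. Here's a minimal sketch of the idea; the paths and the `toParams` helper are my assumptions, not the project's actual code:

```typescript
import fs from "fs/promises"
import path from "path"

// Pure helper (name assumed): turn a directory listing into route params.
export function toParams(files: string[]): { slug: string }[] {
  return files
    .filter((file) => file.endsWith(".tsx"))
    .map((file) => ({ slug: file.replace(/\.tsx$/, "") }))
}

// In app/blog/[slug]/page.tsx, Next.js calls this at build time
// to learn which /blog/[slug] pages to pre-render.
export async function generateStaticParams() {
  const postsDirectory = path.join(process.cwd(), "app/blog/posts")
  return toParams(await fs.readdir(postsDirectory))
}
```

Splitting the pure filename-to-params step out of the filesystem call also makes it trivial to test in isolation.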

Interestingly enough, this broke the preview functionality in v0 so I decided to eject and load the project in Cursor.

Ejection

Ejecting the project was a bit of a pain. The `shadcn@latest` command v0 promotes as the best way to get started didn't work for me, so I decided to download a .ZIP of the project. v0 did give me some instructions on how to get started, but it missed some dependencies and made some assumptions about the project that didn't match my setup. Luckily these were fairly small and easy to fix with manual intervention.

Using Cursor I made a few changes:

  • I updated the title.
  • I added this blog post.
  • I fixed a bunch of linting errors.

One issue both v0 and Cursor were unable to fix was the dynamic `params` prop. In Next.js 15+, several page props (like `params`) became async, but both v0 and Cursor kept insisting I could access `params` directly. I had to manually read up on the new API and fix the code.
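For reference, the manual fix amounts to awaiting `params` before use. A sketch of the pattern, simulated outside Next.js with a plain string standing in for the rendered JSX:

```typescript
// Next.js 15: page props like `params` arrive as Promises.
type Params = Promise<{ slug: string }>

async function BlogPostPage({ params }: { params: Params }) {
  // Pre-15 code read params.slug directly; now the promise must be awaited.
  const { slug } = await params
  return `Rendering post: ${slug}`
}

// Usage sketch: in a real app the framework supplies params; we simulate it.
BlogPostPage({ params: Promise.resolve({ slug: "building-with-llms-part-1" }) })
  .then((out) => console.log(out)) // logs "Rendering post: building-with-llms-part-1"
```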

At this point the website looked like this:

screenshot of the first home page version
The first home page version
screenshot of the blog example page v0 generated
The blog example page v0 generated

I imagine that as I (or rather the LLMs) make changes it'll start looking different, so I asked v0 to generate an image component for blog posts. This way I can snapshot the website and show how things look along the way.

Human

What is a nice way to add images to blog posts? For this project you can resume using SSR pages as the pages compiled at build time don't work well with the preview here.

v0

I'll show you a nice way to add images to blog posts using Next.js with Server-Side Rendering (SSR). We'll create a new component for handling images in blog posts and update our existing structure to accommodate this change... [truncated]

The result was a decent component, but it was using some legacy attributes. Luckily the feedback from Next.js allowed me to quickly fix this manually.
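I won't reproduce the component's code here, but the kind of legacy-attribute fix Next.js warns about usually means swapping removed `next/image` props for their current equivalents. An illustration of the mapping (prop names per the Next.js docs; the values are made up):

```typescript
// Illustration only: how removed next/image props map to current ones.
const legacyProps = {
  src: "/images/home.png",
  alt: "screenshot",
  layout: "fill",     // removed in Next.js 13+
  objectFit: "cover", // removed in Next.js 13+
}

const currentProps = {
  src: "/images/home.png",
  alt: "screenshot",
  fill: true,                               // replaces layout="fill"
  sizes: "(max-width: 768px) 100vw, 700px", // srcset hint, recommended with fill
  style: { objectFit: "cover" },            // objectFit moved into style
}
```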

Loading Posts Dynamically

One thing I didn't like in the version generated by v0 was that for the blog list I had to manually keep a hardcoded list of all the blog titles up to date to generate the blog list page. I almost started implementing a solution for this, but I realised that I could just ask the LLM to do it for me!

Human

I want this page to dynamically load all the posts in the `./blog/posts` directory instead of a hardcoded list of slugs.

Claude

I'll help you modify the code to dynamically load all posts from the posts directory. We'll use Node.js's fs module to read the directory contents. Here's how we can update the code:

```tsx
import { Card, CardHeader, CardTitle, CardDescription, CardContent } from "@/components/ui/card"
import { Badge } from "@/components/ui/badge"
import Link from 'next/link'
import fs from 'fs/promises'
import path from 'path'

async function getBlogPosts() {
  // Get all files in the posts directory
  const postsDirectory = path.join(process.cwd(), 'app/blog/posts')
  const files = await fs.readdir(postsDirectory)

  // Filter for .tsx files and remove the extension to get slugs
  const slugs = files
    .filter(file => file.endsWith('.tsx'))
    .map(file => file.replace('.tsx', ''))

  // Load all posts
  const posts = await Promise.all(
    slugs.map(async (slug) => {
      const { default: component, metadata } = await import(`./posts/${slug}.tsx`)
      return { slug, component, metadata }
    })
  )

  // Sort posts by date (assuming metadata.date is a string in a sortable format)
  return posts.sort((a, b) =>
    new Date(b.metadata.date).getTime() - new Date(a.metadata.date).getTime()
  )
}

// Rest of the code remains the same...
```

These changes will:

  1. Dynamically read all files in the posts directory
  2. Filter for .tsx files and convert filenames to slugs
  3. Load each post's component and metadata
  4. Sort posts by date (most recent first)

Make sure your post files in the ./blog/posts directory follow the same structure with exported metadata and default component.

This worked first try, so I was pretty happy with that. With all these basics out of the way I started actually writing this post, making a few more tweaks as I encountered problems.
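To make the "same structure" requirement concrete: each file the loop imports needs a `metadata` export and a default component export. A hypothetical post file might look like the sketch below (the field names are assumptions, and a plain function stands in for the React component so the sketch runs on its own):

```typescript
// Hypothetical post file, e.g. app/blog/posts/my-first-post.tsx
export const metadata = {
  title: "My First Post",
  date: "2025-01-11",
}

// In the real project this returns JSX; a string stands in here.
export default function Post() {
  return "post body"
}
```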

Challenges

While it's of course pretty impressive that the LLM can generate a website like this based on a few prompts (and a few manual fixes), there were a few issues. I already mentioned some mishaps/misunderstandings where my prompt wasn't clear enough or the LLM just didn't have up-to-date information. The upside is that the mistakes were made quickly and were easy to fix. It would have been a lot more painful had I hired a developer to do this and have them spend X hours implementing the wrong thing.

But even when I get the prompt right the LLM sometimes makes questionable decisions. The most prominent one is the back button on the blog list page, which just navigates to the page you're already on.

screenshot of the blog list page v0 generated
The blog list page v0 generated, with a weird back button to the same page

It was also interesting how much code v0 generated that we're not even using. Most of this code was not visible in the browser UI, but when I downloaded the ZIP file, I found a whole bunch of extra components. I suppose this is something Vercel does to give the LLM more context while hiding all the unused code from the user.

A downside of this is that after ejecting the project I was left with a huge amount of code and dependencies whose purpose I didn't know. It feels like this would be a problem if I had to maintain this codebase and augment it manually. I've essentially been dropped into an existing codebase where the only one who knows what it does is the LLM.

Source Control

Normally I would spend time thinking about how to structure my code into sensible commits following the Conventional Commits standard. This allows me to document decisions alongside the code without having to maintain code comments when the code changes in the future. Depending on the type of work I would either plan sensible chunks beforehand or, after finishing a feature, split the work into chunks that could be committed separately.

With the full LLM approach I don't really see a point in writing commits as I don't really know a bunch of the code that's been generated. I suppose I could add the prompts/models that were used to get to this next iteration, but I'm not sure how useful that would be.
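If I did record them, git trailers on a Conventional Commits message would be one low-effort way to do it. A sketch of the format, reusing the earlier prompt to Claude:

```
feat(blog): dynamically load posts from the posts directory

Replace the hardcoded slug list with an fs-based directory scan.

Model: Claude (via Cursor)
Prompt: I want this page to dynamically load all the posts in the
  `./blog/posts` directory instead of a hardcoded list of slugs.
```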

Closing Thoughts

My first impressions of using an LLM like v0 to generate entire codebases are mixed. It's very cool how well the website functions based on just a few prompts, especially given my requirement of building the pages at compile time instead of using SSR. The iteration speed is also insane; you can go from a random idea to a working website in a matter of minutes.

The flipside is of course that I now have a pretty huge codebase, most of which I haven't even seen. I'm left with a lot of questions about the codebase and the decisions that went into it.

I'll keep using v0 to build and tweak components, as I don't think this single interaction is enough to get a good sense of its power.

After ejecting and loading the project into Cursor I found the power offered by the LLM a lot more useful. By making smaller changes and seeing them in my editor it was easier to keep my mental model of the website/components up to date. This had already been my approach in my day job, but I think I can make much better use of it.

Overall I'm pleasantly surprised by the results so far. I'm looking forward to the next iteration and getting a better grasp of when to reach for an LLM and how to prompt them to get useful results.