I built a prompt enhancer

Over the past few weeks, I’ve been diving deep into the world of AI-powered software prototyping. Some might refer to this as vibe coding, but I’m not a fan of that term. It oversimplifies the process—as if it’s just about prompting the IDE. In my view, there’s much more to it.
If you ask me what a better term would be, I’d suggest vibe engineering (a term I definitely didn’t coin!). It captures the broader effort involved—not just writing prompts, but also designing, planning development, refining outputs, and most importantly, understanding the code AI generates.
But that’s not today’s topic.
Instead, I want to share a small Chrome extension I built for myself—something that helps me get a little more out of LLM chatbots like ChatGPT, Claude, and others. I call it (for lack of a better term) a prompt enhancer. It’s not a groundbreaking idea—there are other extensions out there that do similar things. But this one’s different because it was built by me, for me. I also didn’t want to send my prompts to a third-party service I don’t trust.
Why Build This?
I’ve been using ChatGPT and other LLMs extensively—both professionally (no shame in that; who isn’t?) and personally. I use them to learn new things, settle bets with friends, or go down random rabbit holes of curiosity. In fact, I use them so much that I’ve started saying “Let’s ChatGPT it” instead of “Let’s Google it.”
I enjoy the convenience of getting direct answers—rather than sifting through ad-heavy Google results just to find a half-decent Medium post or Reddit thread. You get the idea.
But like with any tech I use frequently, I always try to optimize it. One thing I quickly realized: the quality of LLM output improves significantly if you write better prompts—add more context, clarify what you want, and structure things clearly. The problem is, I don’t always have the time (or mental bandwidth) to craft perfect prompts on the fly. I just want to write something quick and let the model figure it out.
Google recently released a Prompt 101 guide, and while it’s aimed at Gemini users, the principles apply to all LLMs: better prompts lead to better, more specific answers.
So I had two choices:
- Study the guide, memorize prompt techniques, and become a master prompter.
- Build a button that does all that for me.
Being the lazy (read: efficient) person I am, I went with option two.
The "Prompt Enhancer" Chrome Extension
And so I built one—part AI-assisted coding, part weekend project.
The Prompt Enhancer adds a button next to the ChatGPT prompt input. When clicked, it takes whatever you’ve typed and upgrades it into a more structured, clear, and effective prompt.
That’s it. One click. Better results.
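On the extension side, there isn’t much to it. Here’s a rough sketch of the client logic (not my exact code: the URL is a placeholder for your own deployment, and I’ve omitted the DOM-injection boilerplate that actually adds the button to the page):

```typescript
// Placeholder endpoint; point this at your own deployment of the backend route.
const ENHANCE_URL = 'https://example.com/api/enhance';

// Builds the JSON body the backend route expects: { prompt: string }.
function buildEnhanceBody(prompt: string): string {
  return JSON.stringify({ prompt });
}

// Click handler for the injected button: sends the raw prompt to the
// backend and returns the enhanced version to write back into the input.
async function enhancePrompt(raw: string): Promise<string> {
  const res = await fetch(ENHANCE_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildEnhanceBody(raw),
  });
  if (!res.ok) throw new Error(`Enhancer backend returned ${res.status}`);
  const { enhanced } = await res.json();
  return enhanced;
}
```

The only mildly fiddly part is finding the prompt textarea reliably, since ChatGPT’s DOM changes often—everything else is plain `fetch` glue.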

What’s Under the Hood?
The backend is a simple API route built using Next.js. It works like this:
- It receives the raw prompt from the extension.
- It sends that prompt to the OpenAI API along with a carefully crafted system prompt that tells the model to “act as a prompt engineer.”
- It returns the enhanced version for display.
Here’s a snippet of my backend code:
import { NextResponse } from 'next/server'
import { OpenAI } from 'openai'

// Handle CORS preflight requests from the extension.
export async function OPTIONS() {
  return new NextResponse(null, {
    status: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'POST, OPTIONS',
      'Access-Control-Allow-Headers': 'Content-Type',
    },
  })
}

export async function POST(request: Request) {
  const responseHeaders = {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': 'POST, OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type',
  }

  try {
    const { prompt } = await request.json()
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

    // Ask the model to rewrite the raw prompt using prompt-engineering best practices.
    const completion = await openai.chat.completions.create({
      model: 'gpt-4.1-mini',
      messages: [
        {
          role: 'system',
          content:
            'You are a prompt engineer helping users get the most effective results from an AI assistant. Improve the below prompt by applying best practices: - Clarify the task using an actionable verb - Add or infer a relevant persona (e.g., expert, analyst, marketer) - Include relevant context or assumptions - Specify the desired output format (e.g., list, table, summary) - Keep it clear, concise (~40–50 words), and easy to understand. Output only the improved prompt — do not explain or comment on it.',
        },
        {
          role: 'user',
          content: `Please enhance the following prompt:\n\n"${prompt}"`,
        },
      ],
      temperature: 0.7,
      max_tokens: 1000,
    })

    return NextResponse.json(
      { enhanced: completion.choices[0].message.content },
      { headers: responseHeaders }
    )
  } catch (error) {
    console.error('Error:', error)
    return NextResponse.json(
      { error: 'Failed to enhance prompt' },
      { status: 500, headers: responseHeaders }
    )
  }
}
If you’re interested in building this yourself with your own API key, feel free to shoot me an email—I’d be happy to share the repository with you.
Cheers :)