www/apps/bloom/PROMPT_IMPROVER_IMPLEMENTATION.md
This plan details how to implement an AI-powered prompt improvement feature for Bloom documentation. The feature helps users write better prompts by transforming vague or poorly structured prompts into clear, actionable ones using a model served by Ollama.
What it does:
Key Benefits:
www/
├── packages/
│   └── docs-ui/
│       └── src/
│           └── hooks/
│               └── use-ai/
│                   └── index.tsx          # AI hook (to create)
└── apps/
    └── bloom/
        ├── config/
        │   └── index.ts                   # Config (to update)
        ├── components/
        │   └── PromptImprover.tsx         # Component (to create)
        └── app/
            └── store-design-prompting/
                └── page.mdx               # Integration (to update)
User Input → PromptImprover Component → useAI Hook → Ollama (Railway) → Streaming Response → CodeBlock Display
Add to www/packages/docs-ui/package.json:
{
  "dependencies": {
    "ai": "^6.0.79",
    "ollama-ai-provider-v2": "^3.3.0"
  }
}
File: www/packages/docs-ui/src/hooks/use-ai/index.tsx
Full Implementation:
"use client"
import { streamText } from "ai"
import { createOllama } from "ollama-ai-provider-v2"
import { useSiteConfig } from "../../providers/SiteConfig"
type UseAIProps = {
systemPrompt?: string
}
export const useAI = ({ systemPrompt }: UseAIProps) => {
const {
config: { ai },
} = useSiteConfig()
const customOllama = createOllama({
baseURL: ai?.ollamaUrl,
})
const sendPrompt = (prompt: string) => {
if (!ai?.ollamaUrl) {
throw new Error("AI configuration is missing")
}
const { textStream } = streamText({
model: customOllama("llama3"),
prompt,
system: systemPrompt,
})
return textStream
}
return {
sendPrompt,
}
}
Export Hook: Add to www/packages/docs-ui/src/hooks/index.tsx:
export { useAI } from "./use-ai"
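The hook returns an async-iterable text stream that consumers drain with `for await...of`. A minimal sketch of that consumption pattern, using a mock generator in place of the real Ollama stream (the chunk values are illustrative only):

```typescript
// Mock stand-in for the async-iterable textStream returned by sendPrompt.
// The chunk values are illustrative, not real model output.
async function* mockTextStream(): AsyncGenerator<string> {
  yield "Create a shoe store "
  yield "homepage with a hero section"
}

// Accumulate streamed chunks into a single string, mirroring how a
// consumer appends each chunk as it arrives.
export async function collectStream(
  stream: AsyncIterable<string>
): Promise<string> {
  let result = ""
  for await (const chunk of stream) {
    result += chunk
  }
  return result
}
```

In the PromptImprover component, each chunk is instead appended to React state so the improved prompt renders as it streams.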
Key Features:
File: www/apps/bloom/config/index.ts
Add AI Configuration:
import { DocsConfig, Sidebar } from "types"
import { generatedSidebars } from "@/generated/sidebar.mjs"
import { globalConfig } from "docs-ui"
import { basePathUrl } from "../utils/base-path-url"

const baseUrl = process.env.NEXT_PUBLIC_BASE_URL || "http://localhost:3000"
const basePath = process.env.NEXT_PUBLIC_BASE_PATH || ""

export const config: DocsConfig = {
  ...globalConfig,
  titleSuffix: "Bloom Documentation",
  description: "...",
  baseUrl,
  basePath,
  sidebars: generatedSidebars as Sidebar.Sidebar[],
  project: {
    title: "Bloom",
    key: "bloom",
  },
  logo: basePathUrl("/images/logo.png"),
  breadcrumbOptions: {
    startItems: [
      {
        title: "Documentation",
        link: basePathUrl("/"),
      },
    ],
  },
  version: {
    ...globalConfig.version,
    hide: true,
  },
  // Add AI configuration
  ai: {
    ollamaUrl: process.env.NEXT_PUBLIC_OLLAMA_URL || "http://localhost:11434",
  },
}
Environment Variables: Add to .env.local:
NEXT_PUBLIC_OLLAMA_URL=http://localhost:11434 # For local development
# NEXT_PUBLIC_OLLAMA_URL=https://your-ollama.railway.app # For production
File: www/apps/bloom/components/PromptImprover.tsx
Full Implementation:
"use client"
import { useState } from "react"
import { Button, CodeBlock, TextArea, useAI } from "docs-ui"
export interface PromptImproverProps {
/**
* The rules/guidelines for improving prompts.
* These will be included in the system prompt.
*/
rules: string[]
/**
* Optional placeholder text for the input
*/
placeholder?: string
}
export function PromptImprover({
rules,
placeholder = "Type your prompt here...",
}: PromptImproverProps) {
const [userPrompt, setUserPrompt] = useState("")
const [improvedPrompt, setImprovedPrompt] = useState("")
const [loading, setLoading] = useState(false)
const [error, setError] = useState<string | null>(null)
const systemPrompt = buildSystemPrompt(rules)
const { sendPrompt } = useAI({
systemPrompt,
})
async function improvePrompt() {
if (!userPrompt.trim()) {
return
}
setLoading(true)
setError(null)
setImprovedPrompt("")
try {
const textStream = sendPrompt(
`Improve this Bloom prompt following the guidelines: "${userPrompt}"`
)
for await (const chunk of textStream) {
setImprovedPrompt((prev) => prev + chunk)
}
} catch (err) {
console.error("Failed to improve prompt:", err)
setError("Failed to improve prompt")
} finally {
setLoading(false)
}
}
return (
<div className="flex flex-col gap-1">
<TextArea
value={userPrompt}
onChange={(e) => setUserPrompt(e.target.value)}
placeholder={placeholder}
rows={3}
/>
<Button onClick={improvePrompt} disabled={loading}>
{loading ? "Improving..." : "Improve Prompt"}
</Button>
{error && <p className="text-medusa-tag-red-text">{error}</p>}
{improvedPrompt && (
<CodeBlock
title="Improved Prompt"
source={improvedPrompt}
lang="bash"
noAskAi
isTerminal={false}
noReport
className="!mb-0"
/>
)}
</div>
)
}
function buildSystemPrompt(rules: string[]): string {
return `You are a prompt improvement assistant for Bloom, an AI-powered ecommerce store builder.
SCOPE AWARENESS - READ THIS FIRST:
Generic/Vague Prompts (e.g., "create a shoe store", "build a homepage", "make an online store"):
→ EXPAND with features, sections, and details to make it actionable
→ Add product suggestions, layout structure, and ecommerce functionality
→ Example transformation:
Input: "I want to create a shoe store"
Output: "Create a shoe store homepage with a hero section featuring a main product image and tagline, a product grid showing 8 featured sneakers with images and prices, categories section for Running, Casual, and Sports shoes, customer testimonials, and a newsletter signup form"
Focused/Specific Prompts (e.g., "change the header button to blue", "add more spacing between products"):
→ ONLY restructure and clarify - do NOT add features
→ Do NOT add design details not mentioned in the original
OUTPUT RULES - READ CAREFULLY:
1. Output ONLY the improved prompt itself - nothing else
2. Do NOT include ANY explanations, notes, or commentary
3. Do NOT write "(Note: ...)" or "I've..." or "This prompt..."
4. Do NOT explain what you did or why
5. The output must be copy-paste ready for Bloom's chat
FORBIDDEN PHRASES - Never use these:
- "Here's an improved version"
- "I've improved the prompt"
- "Note:"
- "This prompt"
- Any parenthetical explanations like (Note: ...)
FORMAT:
- Use line breaks between sections for readability
- If breaking into multiple prompts, clearly separate them with "---" and label each as "Prompt 1:", "Prompt 2:", etc.
- This helps users know these are separate prompts to paste into Bloom one at a time
Your output = exactly what user pastes into Bloom. Nothing more.
Improve user prompts following these guidelines:
${rules.map((rule, index) => `${index + 1}. ${rule}`).join("\n")}
`
}
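The rules interpolation at the end of the system prompt can be checked in isolation. This sketch extracts the numbering expression used in `buildSystemPrompt` above:

```typescript
// Mirrors the rules-numbering expression in buildSystemPrompt:
// each rule becomes an "N. rule" line, joined with newlines.
export function numberRules(rules: string[]): string {
  return rules.map((rule, index) => `${index + 1}. ${rule}`).join("\n")
}

// numberRules(["Be specific", "State the goal first"])
// → "1. Be specific\n2. State the goal first"
```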
Key Design Decisions:
- `rules` array so different pages can customize improvement guidelines
- `for await...of` to stream AI output in real time

File: www/apps/bloom/app/store-design-prompting/page.mdx
Add after metadata, before first content section:
import { PromptImprover } from "@/components/PromptImprover"
export const metadata = {
  title: `Write Better Prompts for Store Design`,
}
# {metadata.title}
This guide teaches you how to write better prompts for store design. Good prompts get you better results, save tokens, and give you a smoother experience working with Bloom.
## Generate Improved Prompts
Enter your prompt to improve it and get suggestions on how to make it better based on the guidelines in this guide.
<PromptImprover
  rules={[
    "If the prompt tries to do multiple things, break it into separate sequential steps",
    "Replace positional descriptions ('button at the top', 'section on the left') with direct element references",
    "Remove vague words ('better', 'nicer', 'modern', 'good') and replace with specific actions or outcomes",
    "Make instructions more descriptive by expanding generic actions into specific implementation details",
    "Don't add generic ecommerce setup phrases like 'add ecommerce platform' or 'set up online store' - Bloom is already an ecommerce builder",
    "Rephrase unclear requests into clear, actionable instructions",
    "Structure the prompt with the main action or goal stated first",
    "Keep each step focused on a single change or section"
  ]}
  placeholder="Make the homepage better"
/>
## Build Step by Step, Not All at Once
[Rest of the guide content...]
Requirements:
Deployment Steps:
Create Railway Project:
railway init
Deploy Ollama Container:
Use the `ollama/ollama:latest` image.

Pull AI Model:
# SSH into container via Railway CLI
railway run bash
# Pull model (llama3 recommended for prompt improvement)
ollama pull llama3
# Alternative: mistral (lighter, faster)
ollama pull mistral
Configure Internal Networking:
Use the internal address `ollama.railway.internal:11434`.

Set Health Checks:
Point the health check at the `/api/tags` endpoint.

Update Environment Variables:
# In Bloom app on Railway
NEXT_PUBLIC_OLLAMA_URL=https://ollama.railway.app # or private DNS
Railway Configuration Example (railway.json):
{
  "build": {
    "docker": {
      "dockerfile": "Dockerfile.ollama"
    }
  },
  "deploy": {
    "healthcheckPath": "/api/tags",
    "restartPolicyType": "ON_FAILURE"
  }
}
Issue: Output included explanations like:
Here's an improved version of the prompt:
[improved prompt]
(Note: I've restructured this to be clearer...)
Solution:
Issue: Input "create a shoe store" → Output "Create a new online store with Bloom" (too basic)
Root Cause: Restrictive rules like "do not add design details" prevented expansion
Solution:
Issue: Rules like "Suggest using Selection Mode" made AI add meta-commentary
Bad Rule Examples (cause suggestions):
Good Rule Examples (transform prompts):
Key Insight: Rules should focus on transforming prompt structure, not adding suggestions to output
Issue: Long prompts returned on single line, hard to read
Solution:
Input:
I want to create a shoe store
Expected Output:
Create a shoe store homepage with a hero section featuring a main product image and tagline,
a product grid showing 8 featured sneakers with images and prices, categories section for
Running, Casual, and Sports shoes, customer testimonials, and a newsletter signup form
Input:
Make the button at the top look better
Expected Output:
Change the header button's background color to navy blue and increase padding to 12px
Input:
Create complete store with header, products, footer, and checkout
Expected Output:
Prompt 1:
Create a header with logo, navigation menu (Shop, About, Contact), and cart icon
---
Prompt 2:
Add a product grid showing 12 items with product images, names, and prices
---
Prompt 3:
Create a footer with social media links, copyright text, and contact email
---
Prompt 4:
Add a checkout page with shipping form, payment options, and order summary
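If consumers ever need to handle multi-step output programmatically, the `"---"` separator format above is easy to parse. A sketch with a hypothetical `splitPrompts` helper (not part of the plan's components):

```typescript
// Hypothetical helper: split a multi-prompt response (separated by "---")
// into individual prompts, stripping the "Prompt N:" labels.
export function splitPrompts(output: string): string[] {
  return output
    .split(/^---$/m)
    .map((part) => part.replace(/^\s*Prompt \d+:\s*/i, "").trim())
    .filter((part) => part.length > 0)
}
```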
The component can be added to any Bloom documentation page. Here are recommended integrations:
File: www/apps/bloom/app/demo-data/page.mdx
<PromptImprover
  rules={[
    "If requesting multiple products, specify quantity, category, and price range",
    "Replace 'demo data' with specific data types (products, reviews, categories)",
    "Make product descriptions specific to the store type",
    "Include variant details (sizes, colors) when relevant",
    "Don't add generic ecommerce setup phrases - focus on data generation"
  ]}
  placeholder="Add demo products"
/>
File: www/apps/bloom/app/selection-mode/page.mdx
<PromptImprover
  rules={[
    "Use 'this element' or 'the selected element' instead of describing position",
    "Be specific about the change (color, size, spacing, text)",
    "Keep instructions focused on the selected element only",
    "Don't describe which element - Selection Mode handles that"
  ]}
  placeholder="Change this button"
/>
File: www/apps/bloom/app/responsive-view/page.mdx
<PromptImprover
  rules={[
    "Specify 'on mobile' or 'on desktop' to clarify device target",
    "Focus on responsive-specific issues (spacing, text size, layout)",
    "Don't add features - focus on fixing mobile view issues",
    "Be specific about what's wrong in mobile view"
  ]}
  placeholder="Fix the mobile layout"
/>
Generic Prompt Test:
Focused Prompt Test:
Multi-Step Test:
Edge Cases:
For each test, verify:
Install Ollama locally:
# macOS
brew install ollama
# Start Ollama
ollama serve
Pull model:
ollama pull llama3
Test API:
curl http://localhost:11434/api/tags
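The same check can be done in code before sending prompts. The response shape (`{ models: [{ name }] }`) is based on Ollama's `/api/tags` API; verify it against your Ollama version. `hasModel` is a hypothetical helper, not part of the plan's components:

```typescript
// Sketch of a model-availability check against Ollama's /api/tags endpoint,
// which lists installed models. Response shape assumed from Ollama's API docs.
type TagsResponse = { models: { name: string }[] }

export function hasModel(tags: TagsResponse, model: string): boolean {
  // Ollama reports tagged names like "llama3:latest", so match either the
  // exact name or the "model:" prefix.
  return tags.models.some(
    (m) => m.name === model || m.name.startsWith(`${model}:`)
  )
}

// Usage (assumes a reachable Ollama server):
// const tags: TagsResponse = await (await fetch(`${ollamaUrl}/api/tags`)).json()
// if (!hasModel(tags, "llama3")) throw new Error("llama3 not pulled")
```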
Run Bloom docs:
cd www/apps/bloom
yarn dev
Visit http://localhost:3000/store-design-prompting
Token Usage per Request:
Cost Estimation:
Optimization Ideas:
Option 1: Railway Private Networking (Recommended)
Option 2: API Key Authentication
Option 3: Rate Limiting
Recommendation: Use Option 1 (private networking) for simplicity and security
Never commit `.env.local` to git.

Cause: `NEXT_PUBLIC_OLLAMA_URL` not set
Solution:
# Add to .env.local
NEXT_PUBLIC_OLLAMA_URL=http://localhost:11434
Cause: Ollama server not running
Solution:
# Start Ollama locally
ollama serve
# Or check Railway deployment logs
railway logs
Cause: Model not pulled in Ollama
Solution:
# Pull model
ollama pull llama3
# Verify
ollama list
Cause: Model too large or server under load
Solutions:
ollama pull mistral

Cause: Streaming timeout or connection issue
Solutions:
If feature needs to be removed:
Remove component usage from MDX files:
# Search for PromptImprover usage
grep -r "PromptImprover" www/apps/bloom/app/
# Remove import and component from each file
Delete component file:
rm www/apps/bloom/components/PromptImprover.tsx
Delete hook (if not used elsewhere):
rm www/packages/docs-ui/src/hooks/use-ai/index.tsx
Remove AI config from www/apps/bloom/config/index.ts:
// Remove these lines:
ai: {
  ollamaUrl: process.env.NEXT_PUBLIC_OLLAMA_URL || "http://localhost:11434",
},
Shut down Railway deployment:
railway down
Remove dependencies from package.json:
// Remove from docs-ui:
"ai": "^6.0.79",
"ollama-ai-provider-v2": "^3.3.0"
Track these metrics to measure success:
Plan Status: Ready for implementation
Estimated Time: 6-8 hours (including deployment and testing)
Priority: Medium (improves UX but not blocking)
Dependencies:
Last Updated: 2026-02-11