CONCEPT · Speculative design · AI chatbot · 2026

Orin Studio

Handcraft brand × AI customer service

80 customer messages a day — nine in ten answered before a human sees them

/ The challenge

Speculative scenario: Orin Studio is a Taiwanese handcraft lifestyle brand selling on Shopee and its own storefront. As orders grow, its LINE official account receives 80+ messages daily — 90% of them the same questions: Is this in stock? Do you offer convenience-store pickup? How long is delivery? Can I add custom embroidery? Two staff rotate through copy-pasting answers, burning hours on predictable questions that don't need a human at all.

/ Our approach

RAG (Retrieval-Augmented Generation) with Claude Haiku lets the bot query the brand-uploaded knowledge base — product catalogue, FAQ, return policy, shipping rates — before answering. Routine questions are handled instantly by AI; for anything the bot is not confident about (custom order details, anomalous requests), it hands off smoothly to a human agent — no confidently wrong answer, no silent wait. The owner updates the knowledge base through the admin panel: change the return policy today, and the bot quotes the new version in its next reply.
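The retrieval step can be sketched in a few lines of Python. This is illustrative only: the toy bag-of-words `embed` stands in for a real embedding model, and `build_prompt` shows how retrieved chunks ground the answer in the brand's own documents.

```python
from math import sqrt

# Toy embedding: bag-of-words over a fixed vocabulary. A real build would
# call an embedding model and store the vectors in pgvector instead.
def embed(text: str, vocab: list[str]) -> list[float]:
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], vocab: list[str], k: int = 2) -> list[str]:
    # Rank knowledge-base chunks by similarity to the question, keep top k.
    q = embed(question, vocab)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c, vocab)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    # Retrieved chunks are pasted into the prompt, so the model answers
    # from the uploaded documents rather than from its training memory.
    return "Answer using ONLY this context:\n" + "\n".join(context) + f"\n\nQ: {question}"
```

The assembled prompt is then sent to the model (Claude Haiku in this concept); everything model-specific is deliberately left out of the sketch.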

O
Orin Assistant
AI Support · Replies instantly
Today
Hi, is the indigo linen cushion in stock?
O
Hi! The indigo linen cushion is in stock 🎉 45×45cm, hand-dyed indigo — each piece has its own natural colour variation.
O
🛋️
Indigo Linen Cushion
NT$ 1,280
12 in stock · Handmade
Can you ship to Taichung? How long?
O
Yes! Flat NT$ 60 shipping, free over NT$ 1,500 🚚 Taichung pickup usually 1–2 business days, home delivery 2–3 days.
Can I add custom embroidery to the cushion?
O
Custom embroidery adds 5–7 business days, NT$ 200–600 depending on complexity. Let me connect you with our team for the details 👇
O
👤
Connect to agent
Usually replies in 5 min
Type a message…
/ LINE chat — stock check, shipping FAQ, custom request escalation
§ 01 · Knowledge base

The bot knows what you teach it — nothing more, nothing less

Upload product FAQs, catalogue PDFs, and the return policy — the system chunks and vectorises them automatically. Before answering, the bot retrieves the most relevant chunks from your knowledge base, so it quotes your content rather than hallucinating. Update a policy, add a product, change shipping rates — edit the document and the bot syncs immediately: no retraining, no engineer. Today: 87 conversations, 94.3% resolved by AI, 5 handed to a human.
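Chunking itself can be as simple as an overlapping sliding window. A minimal sketch — not the production splitter; real systems often split on paragraph or Q/A boundaries, and the chunk counts in the mock below depend entirely on the splitter and chunk size chosen:

```python
def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split a document into overlapping character windows.
    The overlap keeps a Q/A pair intact when it straddles a boundary."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

Each chunk is then embedded and stored; at query time only the closest few are retrieved, which is what keeps per-message cost and latency low.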

aichatbot.com
Knowledge Base
+ Upload
Conversations
87
AI Resolved
94.3%
Avg Response
2.1 s
Handoffs
5
Uploaded documents · 5 files
Product FAQ.docx · Trained
Updated 2026-04-28 · 84 chunks
Product Catalog 2026.pdf · Trained
Updated 2026-04-20 · 312 chunks
Return Policy.pdf · Trained
Updated 2026-03-15 · 27 chunks
Shipping Rates.txt · Processing…
Vectorising…
Warranty Terms v2.pdf · Pending
No content yet
Product FAQ.docx
84 chunks · Powered by Claude Haiku
Trained
Chunk preview (what the bot actually reads)
Q: What payment methods do you accept?
A: Visa / Mastercard, ATM transfer, CVS payment code, and LINE Pay.
Q: Can I add custom embroidery?
A: Yes — custom embroidery adds 5–7 business days and costs NT$ 200–600. Please contact a human agent.
Q: How do I track my order?
A: Check the tracking link in your confirmation email, or share your order number and I will look it up.
…81 more chunks
/ Knowledge base — document index, chunk preview, today's stats
/ Key decisions
Decision · 01

RAG, not fine-tuning

Fine-tuning needs a large dataset, takes time, and costs money — and every policy change would mean retraining. RAG queries the knowledge base at inference time, so updates take effect immediately, and accuracy is higher because the bot cites source documents rather than "remembering" them from training data.

Decision · 02

When unsure, hand off — don't bluff

When confidence falls below a threshold, the bot proactively offers a human handoff instead of improvising an answer. Brand trust matters more than reply speed — customers will wait five minutes for a human, but they will not forgive a bot giving wrong refund information.
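The routing rule is a one-liner once a confidence score exists. A sketch under stated assumptions: the threshold value here is hypothetical (it would be tuned per brand), and the score could be, for example, the top retrieval similarity.

```python
from dataclasses import dataclass

HANDOFF_THRESHOLD = 0.75  # assumed value — tuned per brand, not a fixed constant

@dataclass
class Reply:
    text: str
    handoff: bool

def route(answer: str, confidence: float) -> Reply:
    """Below the threshold, offer a human instead of sending a maybe-wrong answer."""
    if confidence < HANDOFF_THRESHOLD:
        return Reply("Let me connect you with our team 👇", handoff=True)
    return Reply(answer, handoff=False)
```

The design choice is in what the low-confidence branch does: it never sends the model's draft answer, only the handoff message.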

Decision · 03

Claude Haiku, not GPT-4

The bottleneck for a support bot is latency and cost, not intelligence. Haiku responds 4–6× faster than large models on FAQ-retrieval tasks, at 80% lower cost, with no accuracy loss when knowledge base context is available. Save the large model for tasks that actually need reasoning.

§ 02 · Tech stack

What this build would use

  • Next.js 15
  • Claude Haiku (Anthropic)
  • RAG / pgvector
  • Supabase (PostgreSQL)
  • LINE Messaging API
  • Resend
  • Vercel
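On the messaging side, replying through the LINE Messaging API comes down to POSTing a small JSON payload to the reply endpoint. A minimal sketch of building that payload — webhook plumbing, signature validation, and the HTTP call itself are omitted:

```python
def build_line_reply(event: dict, text: str) -> dict:
    """Build the JSON body for LINE's reply endpoint
    (POST https://api.line.me/v2/bot/message/reply)."""
    return {
        "replyToken": event["replyToken"],  # one-time token from the webhook event
        "messages": [{"type": "text", "text": text}],
    }
```

The reply token is single-use and short-lived, which is one reason the 2.1 s average response time in the mock matters: slow answers can miss the reply window entirely.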
CONCEPT · Orin Studio

Want to turn this into yours?
Let's talk

Book a free call
Reply within 24h · Quote and contract included · Remote friendly · EN · 繁中