Ollama Business Memory
In The Fitting Room Memory
Add, edit, delete, and reorder business memory items sent to Ollama.
Current Memory Items
Best setup for Ella’s Alterations

Layer 1: Create an Ella’s Alterations custom Ollama model

Use an Ollama Modelfile for the permanent stuff the AI should always know, like business name, tone, role, and response style. Ollama’s Modelfile supports a SYSTEM message, which is exactly where this belongs.

Example:

```
mkdir -p ~/ollama-ellas
cd ~/ollama-ellas
nano Modelfile
```

Paste this:

```
FROM llama3.1:8b

SYSTEM """
You are the local AI assistant for Ella's Alterations, a professional tailoring and alterations studio in Zephyrhills, Florida.

Business identity:
Ella's Alterations specializes in wedding dress alterations, formalwear, bridesmaid dresses, mother of the bride dresses, prom dresses, suits, tuxedos, jacket fit, hems, garment repairs, and custom tailoring guidance.

Voice:
Write in a casual, clear, confident, helpful style. Keep answers practical. Avoid vague advice. Explain things like a master tailor speaking to a real customer.

Important business style:
Mention that fitting appointments matter because formalwear alterations depend on fabric, structure, seams, beadwork, lining, and body fit. Never guarantee a price without seeing the garment. Encourage appointments for bridal and formalwear questions.

Location:
Ella's Alterations serves Zephyrhills, Wesley Chapel, Dade City, Tampa, Lakeland, Orlando, and surrounding Central Florida areas.

SEO style:
When writing website content, use answer boxes, key takeaways, fast answer panels, quick facts, comparison tables, tailor insight, mini glossary, and clear FAQ sections when requested.

Rules:
Do not invent exact prices, dates, awards, customer names, or policies unless they are provided in the knowledge context. If unsure, say what information is needed.
"""
```

Then create the model:

```
ollama create ellas-assistant -f Modelfile
```

This gives you a reusable model name, ellas-assistant, which you can start anytime with `ollama run ellas-assistant`.

Layer 2: Use RAG for the real “memory”

This is the big one.
For Ella’s Alterations, the AI should be able to search your actual content, like:

- services.txt
- pricing-guidelines.txt
- appointment-policy.txt
- bridal-alterations.txt
- rush-policy.txt
- seo-style-guide.txt
- reviews.txt
- business-awards.txt
- json-ld-rules.txt
- common-customer-questions.txt
- blog-template-sections.txt

Do not cram all that into the Modelfile; it becomes messy fast. Use RAG, which means the system searches your documents first, pulls the most relevant pieces, then sends those pieces to Ollama with the user’s question. Ollama’s embeddings documentation says embeddings are used for semantic search, retrieval, and RAG workflows.

Think of it like this:

```
Customer question
        ↓
Search Ella’s Alterations knowledge files
        ↓
Pull the best matching facts
        ↓
Send facts plus question to Ollama
        ↓
Answer based on your business knowledge
```

That is the closest thing to “memory” without retraining.

Layer 3: Add conversation memory

This is for things like:

- The user asked about prom dress hems earlier.
- The customer said the wedding is next Saturday.
- The dress has heavy beading.
- The customer is asking for rush service.

That memory should live in your app, not inside Ollama. Use a small MariaDB or SQLite table like:

```sql
CREATE TABLE ai_memory (
    id INT AUTO_INCREMENT PRIMARY KEY,
    customer_name VARCHAR(255),
    memory_type VARCHAR(100),
    memory_text TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

Example records:

- customer_name: Sarah, memory_type: garment, memory_text: Customer has a beaded mermaid wedding dress and asked about taking in the hips.
- customer_name: Sarah, memory_type: deadline, memory_text: Wedding date is May 30, 2026.

Then your app can include relevant memory before sending the question to Ollama.
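Layers 2 and 3 can be sketched together in a few lines of Python. This is illustrative only: the keyword-overlap retriever stands in for a real embedding model plus Qdrant, the SQLite table stands in for MariaDB, and the function names (`recall`, `build_prompt`) are made up for the example:

```python
import sqlite3

# --- Layer 3: conversation memory (in-memory SQLite stand-in for ai_memory) ---
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ai_memory (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        customer_name TEXT,
        memory_type TEXT,
        memory_text TEXT
    )
""")
conn.execute(
    "INSERT INTO ai_memory (customer_name, memory_type, memory_text) VALUES (?, ?, ?)",
    ("Sarah", "deadline", "Wedding date is May 30, 2026."),
)

def recall(customer: str) -> list[str]:
    """Fetch every stored note for one customer, oldest first."""
    rows = conn.execute(
        "SELECT memory_text FROM ai_memory WHERE customer_name = ? ORDER BY id",
        (customer,),
    )
    return [r[0] for r in rows]

# --- Layer 2: toy retrieval (keyword overlap instead of embeddings) ---
def score(question: str, doc: str) -> int:
    """Count shared lowercase words between question and document."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def build_prompt(customer: str, question: str, docs: dict[str, str]) -> str:
    """Pick the best-matching snippet, add customer memory, append the question."""
    best = max(docs.values(), key=lambda d: score(question, d))
    memory = "\n".join(recall(customer))
    return f"Business knowledge:\n{best}\n\nCustomer notes:\n{memory}\n\nQuestion: {question}"

docs = {
    "rush-policy.txt": "Rush alterations may be available for an added fee.",
    "services.txt": "We alter wedding dresses, suits, prom dresses, and hems.",
}
prompt = build_prompt("Sarah", "Can you rush alterations before my wedding?", docs)
```

The resulting `prompt` string is what would be sent to Ollama, so the model answers from business facts and customer notes rather than from guesswork.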
What I would use on your setup

Since you already have Ollama, Open WebUI, Qdrant, and MariaDB experience, I would do this:

Simple test version

Use:
- Ollama
- Open WebUI
- Knowledge files uploaded into Open WebUI
- Custom system prompt for Ella’s Alterations

This is the fastest way to test whether the idea works.

Better local business version

Use:
- Ollama for the model
- nomic-embed-text or another embedding model for document search
- Qdrant for vector storage
- MariaDB for customer memory
- PHP or Python API wrapper

Ollama has official support for embedding models, and their own blog shows using embedding models for RAG applications.

What should go where

| Information | Best place | Why |
|---|---|---|
| Business name | Modelfile | Always needed |
| Writing tone | Modelfile | Always needed |
| Location | Modelfile | Always needed |
| Services | RAG documents | Easy to update |
| Pricing guidance | RAG documents | Changes over time |
| Appointment rules | RAG documents | Changes over time |
| Customer details | Database memory | Private and customer specific |
| Blog templates | RAG documents | Too large for system prompt |
| Reviews | RAG documents | Must be accurate |
| JSON-LD rules | RAG documents | Reusable SEO knowledge |
| Previous conversation | App memory | Session specific |

Do not fine-tune it yet

Fine-tuning sounds tempting, but for this use case it is overkill. Fine-tuning is better when you need the model to learn a writing pattern or behavior deeply. For Ella’s Alterations, most of what you need is business facts, policies, services, wording, and local SEO structure. That belongs in RAG. Tell it like it is: fine-tuning for this would be like sewing a dress shut when you only needed a zipper.
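The "PHP or Python API wrapper" piece is mostly request plumbing. A minimal Python sketch of building the body for Ollama's `/api/generate` endpoint, assuming the `ellas-assistant` model from Layer 1 and Ollama's default port; actually sending it requires a running Ollama server, so this only constructs the payload:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_generate_request(prompt: str, model: str = "ellas-assistant") -> str:
    """Build the JSON body for a non-streaming /api/generate call."""
    payload = {
        "model": model,    # the custom model created from the Modelfile
        "prompt": prompt,  # RAG facts + memory + question, assembled by your app
        "stream": False,   # return one complete response instead of chunks
    }
    return json.dumps(payload)

body = build_generate_request("How long do wedding dress alterations usually take?")
# POST `body` to OLLAMA_URL with curl, requests, or your PHP wrapper.
```

The same shape works from PHP with `curl`; only the serialization changes.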
Starter folder structure

I would create this:

```
mkdir -p /mnt/drive1/ellas-ai/knowledge
mkdir -p /mnt/drive1/ellas-ai/prompts
mkdir -p /mnt/drive1/ellas-ai/memory
mkdir -p /mnt/drive1/ellas-ai/scripts
```

Then files like:

```
/mnt/drive1/ellas-ai/knowledge/business-profile.txt
/mnt/drive1/ellas-ai/knowledge/services.txt
/mnt/drive1/ellas-ai/knowledge/bridal-alterations.txt
/mnt/drive1/ellas-ai/knowledge/formalwear-alterations.txt
/mnt/drive1/ellas-ai/knowledge/appointment-policy.txt
/mnt/drive1/ellas-ai/knowledge/rush-service.txt
/mnt/drive1/ellas-ai/knowledge/seo-content-style.txt
/mnt/drive1/ellas-ai/knowledge/blog-section-template.txt
/mnt/drive1/ellas-ai/knowledge/json-ld-rules.txt
/mnt/drive1/ellas-ai/knowledge/common-faq.txt
```
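Once those knowledge files exist, something has to read and chunk them before they can be embedded into Qdrant. A minimal sketch; the 500-character chunk size and the `load_chunks` name are arbitrary choices for illustration:

```python
from pathlib import Path

def load_chunks(knowledge_dir: str, chunk_chars: int = 500) -> list[tuple[str, str]]:
    """Read every .txt knowledge file and split it into ~chunk_chars pieces.

    Returns (filename, chunk) pairs, ready to be embedded and stored in Qdrant.
    """
    chunks = []
    for path in sorted(Path(knowledge_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        for start in range(0, len(text), chunk_chars):
            chunks.append((path.name, text[start:start + chunk_chars]))
    return chunks

# e.g. load_chunks("/mnt/drive1/ellas-ai/knowledge")
```

Fixed-size chunks are the simplest option; splitting on paragraphs or headings usually retrieves better once the basics work.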