Diffstat (limited to 'mcp-servers/memory-mcp-server/.claude/commands')
-rw-r--r--  mcp-servers/memory-mcp-server/.claude/commands/explain.md       48
-rw-r--r--  mcp-servers/memory-mcp-server/.claude/commands/mcp-debug.md    115
-rw-r--r--  mcp-servers/memory-mcp-server/.claude/commands/memory-ops.md   396
-rw-r--r--  mcp-servers/memory-mcp-server/.claude/commands/perf-monitor.md 353
-rw-r--r--  mcp-servers/memory-mcp-server/.claude/commands/review.md       147
-rw-r--r--  mcp-servers/memory-mcp-server/.claude/commands/setup.md        381
-rw-r--r--  mcp-servers/memory-mcp-server/.claude/commands/test.md         305
7 files changed, 1745 insertions, 0 deletions
diff --git a/mcp-servers/memory-mcp-server/.claude/commands/explain.md b/mcp-servers/memory-mcp-server/.claude/commands/explain.md
new file mode 100644
index 0000000..fb51ae0
--- /dev/null
+++ b/mcp-servers/memory-mcp-server/.claude/commands/explain.md
@@ -0,0 +1,48 @@
+---
+description: Explain code, MCP protocol, or memory system concepts
+argument-hint: "[file, function, MCP tool, or memory concept]"
+allowed-tools: Read, Grep, Glob, Task
+---
+
+# Memory MCP Server Explanation
+
+Provide a detailed explanation of $ARGUMENTS in the context of this Memory MCP Server:
+
+## Core Explanation
+
+- What it does and its purpose in the memory system
+- How it works (step-by-step if applicable)
+- Role in the MCP protocol implementation
+
+## Technical Details
+
+- Key dependencies and interactions
+- Database schema relationships (if applicable)
+- Vector embedding and search mechanics (if relevant)
+- MCP message flow and protocol compliance
+
+## Memory System Context
+
+- How it relates to memory persistence
+- Impact on memory lifecycle (creation, retrieval, expiration, archival)
+- Companion isolation and multi-tenancy considerations
+- Performance implications for vector search
+
+## Integration Points
+
+- MCP tool registration and execution
+- JSON-RPC message handling
+- Session management aspects
+- Error handling patterns
+
+## Usage Examples
+
+- Sample MCP requests/responses
+- Code usage patterns
+- Common integration scenarios
+
+## Related Components
+
+- Related files, functions, or MCP tools
+- Database tables and indexes involved
+- Dependent services or modules
diff --git a/mcp-servers/memory-mcp-server/.claude/commands/mcp-debug.md b/mcp-servers/memory-mcp-server/.claude/commands/mcp-debug.md
new file mode 100644
index 0000000..7232cca
--- /dev/null
+++ b/mcp-servers/memory-mcp-server/.claude/commands/mcp-debug.md
@@ -0,0 +1,115 @@
+---
+description: Debug Memory MCP server connection and protocol issues
+argument-hint: "[connection issue, tool error, or specific debug scenario]"
+allowed-tools: Read, Grep, Bash, Edit, Task, TodoWrite
+---
+
+# Memory MCP Server Debugging
+
+Debug the Memory MCP server implementation with focus on $ARGUMENTS:
+
+## 1. Server Initialization & Configuration
+
+- Verify MCP server startup and registration
+- Check @modelcontextprotocol/sdk initialization
+- Validate server manifest and capabilities
+- Test stdio/HTTP transport configuration
+- Verify database connection (Neon PostgreSQL)
+
+## 2. MCP Protocol Compliance
+
+- Validate JSON-RPC 2.0 message format
+- Test request/response correlation (id matching)
+- Verify error response format (code, message, data)
+- Check notification handling (no id field)
+- Validate batch request support
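
When validating message format by hand, it helps to have a reference exchange. A representative `tools/call` request (illustrative `id` and argument values):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_memory",
    "arguments": { "id": "memory-uuid-here" }
  }
}
```

and a matching response, with the tool payload carried in a text content block:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "{\"id\":\"memory-uuid-here\"}" }]
  }
}
```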
+
+## 3. Memory Tool Registration
+
+- Verify tool discovery and registration:
+ - `create_memory` - Memory creation with embeddings
+ - `search_memories` - Vector similarity search
+ - `get_memory` - Direct retrieval
+ - `update_memory` - Memory updates
+ - `delete_memory` - Soft/hard deletion
+ - `list_memories` - Pagination support
+- Validate tool parameter schemas (Zod validation)
+- Test tool permission boundaries
+
+## 4. Database & Vector Operations
+
+- Test pgvector extension functionality
+- Verify embedding generation (OpenAI API)
+- Debug vector similarity search queries
+- Check index usage (IVFFlat/HNSW)
+- Validate transaction handling
+
+## 5. Session & Authentication
+
+- Debug companion session management
+- Verify user context isolation
+- Test multi-tenancy boundaries
+- Check session persistence
+- Validate auth token handling
+
+## 6. Error Handling & Recovery
+
+- Test database connection failures
+- Handle embedding API errors
+- Verify graceful degradation
+- Check error logging and telemetry
+- Test retry mechanisms
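
As a reference point for the retry checks above, a minimal retry helper with exponential backoff might look like this (a sketch; `maxAttempts` and `baseDelayMs` are illustrative defaults, not values taken from this server):

```typescript
// Generic retry with exponential backoff: waits baseDelayMs * 2^attempt
// between attempts and rethrows the last error once maxAttempts is spent.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Wrap database calls and embedding requests in `withRetry` and verify that transient failures recover while permanent ones surface after the final attempt.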
+
+## 7. Performance & Memory Leaks
+
+- Monitor connection pooling
+- Check for memory leaks in long sessions
+- Verify streaming response handling
+- Test concurrent request handling
+- Profile vector search performance
+
+## 8. Common Issues & Solutions
+
+### Connection Refused
+
+```bash
+# Check if server is running
+ps aux | grep "memory-mcp"
+# Verify port binding
+lsof -i :3000
+# Test direct connection
+npx @modelcontextprotocol/cli connect stdio "node ./dist/index.js"
+```
+
+### Tool Not Found
+
+```bash
+# List registered tools
+npx @modelcontextprotocol/cli list-tools
+# Verify tool manifest
+cat .mcp.json
+```
+
+### Vector Search Failures
+
+```sql
+-- Check pgvector extension
+SELECT * FROM pg_extension WHERE extname = 'vector';
+-- Verify embeddings exist
+SELECT COUNT(*) FROM memories WHERE embedding IS NOT NULL;
+-- Test similarity query
+SELECT id, content, embedding <=> '[...]'::vector AS distance
+FROM memories
+WHERE embedding IS NOT NULL
+ORDER BY distance LIMIT 5;
+```
+
+## 9. Testing Checklist
+
+- [ ] Server starts without errors
+- [ ] Tools are discoverable via MCP protocol
+- [ ] Memory CRUD operations work
+- [ ] Vector search returns relevant results
+- [ ] Session isolation is maintained
+- [ ] Error responses follow MCP spec
+- [ ] Performance meets requirements
+- [ ] Logs provide debugging info
diff --git a/mcp-servers/memory-mcp-server/.claude/commands/memory-ops.md b/mcp-servers/memory-mcp-server/.claude/commands/memory-ops.md
new file mode 100644
index 0000000..777d50d
--- /dev/null
+++ b/mcp-servers/memory-mcp-server/.claude/commands/memory-ops.md
@@ -0,0 +1,396 @@
+---
+description: Test and debug memory CRUD operations and vector search
+argument-hint: "[create, search, update, delete, lifecycle, or batch]"
+allowed-tools: Bash, Read, Write, Task, TodoWrite
+---
+
+# Memory Operations Testing
+
+Test and debug memory operations for the Memory MCP Server focusing on $ARGUMENTS:
+
+## Create Memory
+
+Test memory creation with embedding generation:
+
+```bash
+# Create a simple memory
+npx @modelcontextprotocol/cli call create_memory '{
+ "content": "User prefers dark mode interfaces",
+ "type": "preference",
+ "importance": 0.8
+}'
+
+# Create memory with expiration
+npx @modelcontextprotocol/cli call create_memory '{
+ "content": "Meeting with team at 3pm tomorrow",
+ "type": "event",
+ "importance": 0.9,
+ "expires_at": "2024-12-31T15:00:00Z"
+}'
+
+# Create memory with metadata
+npx @modelcontextprotocol/cli call create_memory '{
+ "content": "Project deadline is March 15",
+ "type": "task",
+ "importance": 1.0,
+ "metadata": {
+ "project": "Memory MCP Server",
+ "priority": "high"
+ }
+}'
+
+# Batch memory creation
+for i in {1..10}; do
+ npx @modelcontextprotocol/cli call create_memory "{
+ \"content\": \"Test memory $i for performance testing\",
+ \"type\": \"test\",
+ \"importance\": 0.5
+ }"
+done
+```
+
+## Search Memories
+
+Test vector similarity search:
+
+```bash
+# Basic semantic search
+npx @modelcontextprotocol/cli call search_memories '{
+ "query": "What are the user preferences?",
+ "limit": 5
+}'
+
+# Search with similarity threshold
+npx @modelcontextprotocol/cli call search_memories '{
+ "query": "upcoming meetings and events",
+ "limit": 10,
+ "threshold": 0.7
+}'
+
+# Search by type
+npx @modelcontextprotocol/cli call search_memories '{
+ "query": "tasks and deadlines",
+ "filter": {
+ "type": "task"
+ },
+ "limit": 20
+}'
+
+# Search with date range
+npx @modelcontextprotocol/cli call search_memories '{
+ "query": "recent activities",
+ "filter": {
+ "created_after": "2024-01-01",
+ "created_before": "2024-12-31"
+ }
+}'
+```
+
+## Update Memory
+
+Test memory updates and importance adjustments:
+
+```bash
+# Update memory content
+npx @modelcontextprotocol/cli call update_memory '{
+ "id": "memory-uuid-here",
+ "content": "Updated content with new information",
+ "regenerate_embedding": true
+}'
+
+# Adjust importance
+npx @modelcontextprotocol/cli call update_memory '{
+ "id": "memory-uuid-here",
+ "importance": 0.95
+}'
+
+# Extend expiration
+npx @modelcontextprotocol/cli call update_memory '{
+ "id": "memory-uuid-here",
+ "expires_at": "2025-12-31T23:59:59Z"
+}'
+
+# Mark as accessed
+npx @modelcontextprotocol/cli call update_memory '{
+ "id": "memory-uuid-here",
+ "increment_access_count": true
+}'
+```
+
+## Delete Memory
+
+Test soft and hard deletion:
+
+```bash
+# Soft delete (archive)
+npx @modelcontextprotocol/cli call delete_memory '{
+ "id": "memory-uuid-here",
+ "soft_delete": true
+}'
+
+# Hard delete
+npx @modelcontextprotocol/cli call delete_memory '{
+ "id": "memory-uuid-here",
+ "soft_delete": false
+}'
+
+# Bulk delete by filter
+npx @modelcontextprotocol/cli call delete_memories '{
+ "filter": {
+ "type": "test",
+ "created_before": "2024-01-01"
+ }
+}'
+```
+
+## Memory Lifecycle
+
+Test expiration, archival, and consolidation:
+
+```bash
+# Process expired memories
+npx @modelcontextprotocol/cli call process_expired_memories
+
+# Archive old memories
+npx @modelcontextprotocol/cli call archive_memories '{
+ "older_than_days": 90,
+ "importance_below": 0.3
+}'
+
+# Consolidate similar memories
+npx @modelcontextprotocol/cli call consolidate_memories '{
+ "similarity_threshold": 0.9,
+ "max_group_size": 5
+}'
+
+# Apply importance decay
+npx @modelcontextprotocol/cli call apply_importance_decay '{
+ "decay_rate": 0.1,
+ "days_inactive": 30
+}'
+```
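
The decay call above applies an exponential falloff; a sketch of the arithmetic (assuming `decay_rate` is a per-period exponential constant, which is an assumption for illustration, not this server's documented semantics):

```typescript
// Exponential importance decay: importance * e^(-rate * periods).
// Here one period corresponds to the days_inactive window passed to the tool.
function decayedImportance(
  importance: number,
  decayRate: number,
  periods: number
): number {
  return importance * Math.exp(-decayRate * periods);
}

// e.g. importance 0.8 decayed at rate 0.1 over 3 periods: ~0.593
const example = decayedImportance(0.8, 0.1, 3);
```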
+
+## Batch Operations
+
+Test bulk operations and performance:
+
+```bash
+# Bulk import memories
+cat memories.json | npx @modelcontextprotocol/cli call bulk_import_memories
+
+# Export memories
+npx @modelcontextprotocol/cli call export_memories '{
+ "format": "json",
+ "include_embeddings": false
+}' > backup.json
+
+# Regenerate all embeddings
+npx @modelcontextprotocol/cli call regenerate_embeddings '{
+ "batch_size": 100,
+ "model": "text-embedding-3-small"
+}'
+```
+
+## Database Queries
+
+Direct database operations for testing:
+
+```sql
+-- Check memory count
+SELECT COUNT(*) as total,
+ COUNT(CASE WHEN is_archived THEN 1 END) as archived,
+ COUNT(CASE WHEN embedding IS NULL THEN 1 END) as no_embedding
+FROM memories;
+
+-- Find duplicate memories
+SELECT content, COUNT(*) as count
+FROM memories
+WHERE is_archived = false
+GROUP BY content
+HAVING COUNT(*) > 1;
+
+-- Analyze embedding distribution
+SELECT
+ percentile_cont(0.5) WITHIN GROUP (ORDER BY importance) as median_importance,
+ AVG(access_count) as avg_accesses,
+ COUNT(DISTINCT user_id) as unique_users
+FROM memories;
+
+-- Test vector similarity manually
+SELECT id, content,
+ embedding <=> (SELECT embedding FROM memories WHERE id = 'reference-id') as distance
+FROM memories
+WHERE embedding IS NOT NULL
+ORDER BY distance
+LIMIT 10;
+```
+
+## Performance Testing
+
+Load testing and benchmarking:
+
+```bash
+# Concurrent memory creation
+for i in {1..100}; do
+ (npx @modelcontextprotocol/cli call create_memory "{
+ \"content\": \"Concurrent test $i\",
+ \"type\": \"test\"
+ }" &)
+done
+wait
+
+# Measure search latency
+time npx @modelcontextprotocol/cli call search_memories '{
+ "query": "test query for performance measurement",
+ "limit": 100
+}'
+
+# Stress test with large content (jq -Rs JSON-encodes the file safely,
+# escaping quotes and newlines that would otherwise break the payload)
+npx @modelcontextprotocol/cli call create_memory \
+  "$(jq -Rs '{content: ., type: "document"}' large-document.txt)"
+```
+
+## Monitoring Commands
+
+Real-time monitoring during operations:
+
+```bash
+# Watch memory creation rate
+watch -n 1 'psql $DATABASE_URL -t -c "
+  SELECT COUNT(*) || \$\$ memories created in last minute\$\$
+  FROM memories
+  WHERE created_at > NOW() - INTERVAL \$\$1 minute\$\$;
+"'
+
+# Monitor embedding generation
+psql $DATABASE_URL -c "
+ SELECT
+ COUNT(*) FILTER (WHERE embedding IS NOT NULL) as with_embedding,
+ COUNT(*) FILTER (WHERE embedding IS NULL) as without_embedding,
+ pg_size_pretty(SUM(pg_column_size(embedding))) as total_size
+ FROM memories;
+"
+
+# Check index usage
+psql $DATABASE_URL -c "
+ SELECT indexname, idx_scan, idx_tup_read, idx_tup_fetch
+ FROM pg_stat_user_indexes
+ WHERE tablename = 'memories'
+ ORDER BY idx_scan DESC;
+"
+```
+
+## Validation Scripts
+
+Automated validation of memory operations:
+
+```typescript
+// validate-memory-ops.ts
+// Connects over stdio using the MCP TypeScript SDK client. Tool results
+// arrive as content blocks; this assumes each tool returns its JSON payload
+// in the first text block -- adjust parsing to your server's response shape.
+import { Client } from "@modelcontextprotocol/sdk/client/index.js";
+import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
+
+async function callJson(client: Client, name: string, args: Record<string, unknown>) {
+  const result = await client.callTool({ name, arguments: args });
+  const block = (result.content as Array<{ type: string; text?: string }>)[0];
+  return JSON.parse(block?.text ?? "{}");
+}
+
+async function validateMemoryOperations() {
+  const client = new Client({ name: "memory-validator", version: "1.0.0" });
+  await client.connect(
+    new StdioClientTransport({ command: "node", args: ["dist/index.js"] })
+  );
+
+  // Test 1: Create and retrieve
+  const created = await callJson(client, "create_memory", {
+    content: "Validation test memory",
+    type: "test",
+  });
+  const retrieved = await callJson(client, "get_memory", { id: created.id });
+  console.assert(created.id === retrieved.id, "Memory retrieval failed");
+
+  // Test 2: Search accuracy
+  const results = await callJson(client, "search_memories", {
+    query: "Validation test memory",
+    limit: 1,
+  });
+  console.assert(results[0]?.id === created.id, "Search failed");
+
+  // Test 3: Update verification
+  await callJson(client, "update_memory", { id: created.id, importance: 0.99 });
+  const updated = await callJson(client, "get_memory", { id: created.id });
+  console.assert(updated.importance === 0.99, "Update failed");
+
+  // Test 4: Cleanup
+  await callJson(client, "delete_memory", { id: created.id });
+
+  console.log("✅ All memory operations validated");
+  await client.close();
+}
+
+validateMemoryOperations().catch(console.error);
+```
+
+## Common Issues & Solutions
+
+### Embedding Generation Failures
+
+```bash
+# Check OpenAI API key
+echo $OPENAI_API_KEY
+
+# Test API directly
+curl https://api.openai.com/v1/embeddings \
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "model": "text-embedding-3-small",
+ "input": "Test"
+ }'
+
+# Retry failed embeddings
+npx @modelcontextprotocol/cli call retry_failed_embeddings
+```
+
+### Vector Index Issues
+
+```sql
+-- Rebuild IVFFlat index
+DROP INDEX IF EXISTS memories_embedding_idx;
+CREATE INDEX memories_embedding_idx ON memories
+USING ivfflat (embedding vector_cosine_ops)
+WITH (lists = 100);
+
+-- Switch to HNSW for better performance
+CREATE INDEX memories_embedding_hnsw_idx ON memories
+USING hnsw (embedding vector_cosine_ops)
+WITH (m = 16, ef_construction = 64);
+```
+
+### Memory Limit Exceeded
+
+```bash
+# Check user memory count
+psql $DATABASE_URL -c "
+ SELECT user_id, COUNT(*) as memory_count
+ FROM memories
+ WHERE is_archived = false
+ GROUP BY user_id
+ HAVING COUNT(*) > 9000
+ ORDER BY memory_count DESC;
+"
+
+# Archive old memories for user
+npx @modelcontextprotocol/cli call archive_user_memories '{
+ "user_id": "user-uuid",
+ "keep_recent": 5000
+}'
+```
+
+This command provides comprehensive testing and debugging capabilities for all memory operations in the MCP server.
+
diff --git a/mcp-servers/memory-mcp-server/.claude/commands/perf-monitor.md b/mcp-servers/memory-mcp-server/.claude/commands/perf-monitor.md
new file mode 100644
index 0000000..e9db312
--- /dev/null
+++ b/mcp-servers/memory-mcp-server/.claude/commands/perf-monitor.md
@@ -0,0 +1,353 @@
+---
+description: Monitor vector search performance and index efficiency for the memory MCP server
+allowed-tools: Bash, Read, Grep
+---
+
+# Performance Monitoring Command
+
+Monitor and analyze the performance of vector search operations, index efficiency, and memory lifecycle metrics.
+
+## Usage
+
+This command provides comprehensive performance monitoring for:
+
+- Vector search query performance
+- Index usage and efficiency
+- Memory lifecycle statistics
+- Database query patterns
+- Resource utilization
+
+## Available Monitoring Tasks
+
+### 1. Vector Search Performance
+
+```bash
+# Check current pgvector index statistics
+psql $DATABASE_URL -c "
+ SELECT
+ schemaname,
+ tablename,
+ indexname,
+ idx_scan as index_scans,
+ idx_tup_read as tuples_read,
+ idx_tup_fetch as tuples_fetched,
+ pg_size_pretty(pg_relation_size(indexrelid)) as index_size
+ FROM pg_stat_user_indexes
+ WHERE indexname LIKE '%vector%' OR indexname LIKE '%embedding%'
+ ORDER BY idx_scan DESC;
+"
+
+# Analyze query performance for vector operations
+psql $DATABASE_URL -c "
+ SELECT
+ substring(query, 1, 50) as query_preview,
+ calls,
+ mean_exec_time as avg_ms,
+ min_exec_time as min_ms,
+ max_exec_time as max_ms,
+ total_exec_time as total_ms,
+ rows
+ FROM pg_stat_statements
+ WHERE query LIKE '%embedding%' OR query LIKE '%vector%'
+ ORDER BY mean_exec_time DESC
+ LIMIT 20;
+"
+```
+
+### 2. Index Efficiency Analysis
+
+```bash
+# Check IVFFlat index clustering quality
+# (ivfflat.lists is an index build option stored in reloptions,
+# not a session GUC, so read it from pg_class)
+psql $DATABASE_URL -c "
+  SELECT
+    indexname,
+    lists,
+    pages,
+    tuples,
+    ROUND(tuples::numeric / NULLIF(lists, 0), 2) as avg_vectors_per_list,
+    CASE
+      WHEN tuples::numeric / NULLIF(lists, 0) > 10000 THEN 'Rebalance recommended'
+      WHEN tuples::numeric / NULLIF(lists, 0) < 100 THEN 'Over-partitioned'
+      ELSE 'Optimal'
+    END as status
+  FROM (
+    SELECT
+      c.relname as indexname,
+      (SELECT option_value::int
+       FROM pg_options_to_table(c.reloptions)
+       WHERE option_name = 'lists') as lists,
+      c.relpages as pages,
+      c.reltuples as tuples
+    FROM pg_class c
+    WHERE c.relname = 'memories_embedding_ivfflat_idx'
+  ) index_stats;
+"
+
+# Check HNSW index parameters (m and ef_construction are index build
+# options in reloptions; only hnsw.ef_search is a session GUC)
+psql $DATABASE_URL -c "
+  SELECT
+    indexname,
+    m,
+    ef_construction,
+    ef_search,
+    CASE
+      WHEN ef_search < 100 THEN 'Low recall configuration'
+      WHEN ef_search > 500 THEN 'High cost configuration'
+      ELSE 'Balanced configuration'
+    END as configuration_assessment
+  FROM (
+    SELECT
+      c.relname as indexname,
+      (SELECT option_value::int FROM pg_options_to_table(c.reloptions)
+       WHERE option_name = 'm') as m,
+      (SELECT option_value::int FROM pg_options_to_table(c.reloptions)
+       WHERE option_name = 'ef_construction') as ef_construction,
+      current_setting('hnsw.ef_search')::int as ef_search
+    FROM pg_class c
+    WHERE c.relname = 'memories_embedding_hnsw_idx'
+  ) hnsw_config;
+"
+```
+
+### 3. Memory Lifecycle Metrics
+
+```bash
+# Memory distribution by status and type
+psql $DATABASE_URL -c "
+ SELECT
+ type,
+ COUNT(*) FILTER (WHERE is_archived = false) as active,
+ COUNT(*) FILTER (WHERE is_archived = true) as archived,
+ AVG(importance) as avg_importance,
+ AVG(access_count) as avg_accesses,
+ AVG(EXTRACT(EPOCH FROM (NOW() - created_at)) / 86400)::int as avg_age_days
+ FROM memories
+ GROUP BY type
+ ORDER BY active DESC;
+"
+
+# Memory expiration analysis
+psql $DATABASE_URL -c "
+ SELECT
+ CASE
+ WHEN expires_at IS NULL THEN 'Never expires'
+ WHEN expires_at < NOW() THEN 'Expired'
+ WHEN expires_at < NOW() + INTERVAL '7 days' THEN 'Expiring soon'
+ WHEN expires_at < NOW() + INTERVAL '30 days' THEN 'Expiring this month'
+ ELSE 'Long-term'
+ END as expiration_status,
+ COUNT(*) as count,
+ AVG(importance) as avg_importance
+ FROM memories
+ WHERE is_archived = false
+ GROUP BY expiration_status
+ ORDER BY count DESC;
+"
+
+# Consolidation statistics
+psql $DATABASE_URL -c "
+ SELECT
+ relation_type,
+ COUNT(*) as relationship_count,
+ COUNT(DISTINCT from_memory_id) as source_memories,
+ COUNT(DISTINCT to_memory_id) as target_memories
+ FROM memory_relations
+ WHERE relation_type IN ('consolidated_into', 'summarized_in', 'elaborates', 'corrects')
+ GROUP BY relation_type;
+"
+```
+
+### 4. Query Pattern Analysis
+
+```bash
+# Analyze search patterns by limit size
+psql $DATABASE_URL -c "
+ WITH query_patterns AS (
+ SELECT
+ CASE
+ WHEN query LIKE '%LIMIT 1%' THEN 'Single result'
+ WHEN query LIKE '%LIMIT 5%' OR query LIKE '%LIMIT 10%' THEN 'Small batch'
+ WHEN query LIKE '%LIMIT 50%' OR query LIKE '%LIMIT 100%' THEN 'Large batch'
+ ELSE 'Variable'
+ END as pattern,
+ COUNT(*) as query_count,
+ AVG(mean_exec_time) as avg_time_ms,
+ SUM(calls) as total_calls
+ FROM pg_stat_statements
+ WHERE query LIKE '%ORDER BY % <=>%' -- Vector similarity queries
+ GROUP BY pattern
+ )
+ SELECT * FROM query_patterns ORDER BY total_calls DESC;
+"
+
+# Identify slow queries
+psql $DATABASE_URL -c "
+ SELECT
+ substring(query, 1, 100) as query_preview,
+ calls,
+ mean_exec_time as avg_ms,
+ max_exec_time as worst_ms,
+ rows / NULLIF(calls, 0) as avg_rows_returned
+ FROM pg_stat_statements
+ WHERE
+ mean_exec_time > 100 -- Queries slower than 100ms
+ AND (query LIKE '%memories%' OR query LIKE '%embedding%')
+ ORDER BY mean_exec_time DESC
+ LIMIT 10;
+"
+```
+
+### 5. Storage and Resource Utilization
+
+```bash
+# Table and index sizes
+psql $DATABASE_URL -c "
+ SELECT
+ schemaname,
+ tablename,
+ pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as total_size,
+ pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) as table_size,
+ pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename) - pg_relation_size(schemaname||'.'||tablename)) as index_size,
+ n_live_tup as row_count,
+ n_dead_tup as dead_rows,
+ ROUND(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) as dead_percent
+ FROM pg_stat_user_tables
+ WHERE tablename IN ('memories', 'memory_relations', 'companions', 'users', 'companion_sessions')
+ ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
+"
+
+# Embedding storage analysis
+psql $DATABASE_URL -c "
+ SELECT
+ COUNT(*) as total_memories,
+ COUNT(embedding) as memories_with_embeddings,
+ pg_size_pretty(
+ SUM(pg_column_size(embedding))
+ ) as total_embedding_storage,
+ pg_size_pretty(
+ AVG(pg_column_size(embedding))::bigint
+ ) as avg_embedding_size,
+ COUNT(*) FILTER (WHERE embedding IS NULL) as missing_embeddings
+ FROM memories;
+"
+```
+
+### 6. Real-time Monitoring Dashboard
+
+```bash
+# Create a monitoring loop (run for 60 seconds)
+echo "Starting real-time performance monitoring for 60 seconds..."
+for i in {1..12}; do
+ clear
+ echo "=== Memory MCP Server Performance Monitor ==="
+ echo "Time: $(date '+%Y-%m-%d %H:%M:%S')"
+ echo ""
+
+ # Active connections
+ psql $DATABASE_URL -t -c "
+ SELECT 'Active connections: ' || count(*)
+ FROM pg_stat_activity
+ WHERE state = 'active';
+ "
+
+  # Cumulative vector-search calls (pg_stat_statements keeps no per-query
+  # timestamp, so report running totals rather than a last-minute window)
+  psql $DATABASE_URL -t -c "
+    SELECT 'Vector search calls (total): ' || COALESCE(SUM(calls), 0)
+    FROM pg_stat_statements
+    WHERE query LIKE '%embedding%';
+  "
+
+ # Memory operations
+ psql $DATABASE_URL -t -c "
+ SELECT
+ 'Memories created (last hour): ' ||
+ COUNT(*) FILTER (WHERE created_at > NOW() - INTERVAL '1 hour')
+ FROM memories;
+ "
+
+ # Cache hit ratio
+ psql $DATABASE_URL -t -c "
+ SELECT 'Cache hit ratio: ' ||
+ ROUND(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) || '%'
+ FROM pg_stat_database
+ WHERE datname = current_database();
+ "
+
+ sleep 5
+done
+```
+
+## Performance Tuning Recommendations
+
+Based on monitoring results, consider these optimizations:
+
+### For Slow Vector Searches
+
+- Increase `ivfflat.probes` for better accuracy
+- Enable iterative scans (pgvector >= 0.8): `SET hnsw.iterative_scan = relaxed_order` (or `ivfflat.iterative_scan`)
+- Consider switching from IVFFlat to HNSW for small result sets
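
For example, the relevant settings can be raised per session before running a search (the values shown are illustrative starting points; pgvector's defaults are `probes = 1` and `ef_search = 40`):

```sql
-- Trade latency for recall on this session only
SET ivfflat.probes = 10;
SET hnsw.ef_search = 100;
-- pgvector >= 0.8: keep scanning until enough rows survive post-filtering
SET hnsw.iterative_scan = relaxed_order;
```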
+
+### For Poor Index Performance
+
+- Rebuild IVFFlat indexes if avg_vectors_per_list > 10000
+- Increase HNSW `ef_search` for better recall
+- Add more specific indexes for common query patterns
+
+### For Memory Lifecycle Issues
+
+- Adjust expiration policies based on usage patterns
+- Implement more aggressive consolidation for old memories
+- Archive memories with low importance scores
+
+### For Storage Optimization
+
+- Use halfvec type for less critical embeddings
+- Implement memory pruning for users exceeding limits
+- Compress archived memory content
+
+## Integration with Application
+
+To integrate monitoring into your application:
+
+```typescript
+// src/monitoring/performanceMonitor.ts
+import { db } from "../db/client";
+import { sql } from "drizzle-orm";
+
+export class PerformanceMonitor {
+ async getVectorSearchMetrics() {
+ // Implementation based on queries above
+ }
+
+ async getIndexEfficiency() {
+ // Implementation based on queries above
+ }
+
+ async getMemoryLifecycleStats() {
+ // Implementation based on queries above
+ }
+}
+```
+
+## Automated Alerts
+
+Set up alerts when:
+
+- Average query time exceeds 200ms
+- Index scan ratio drops below 90%
+- Dead tuple percentage exceeds 20%
+- Memory count approaches user limits
+- Embedding generation fails repeatedly
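
The dead-tuple alert, for instance, can be expressed directly against `pg_stat_user_tables` (the threshold matches the 20% figure above):

```sql
-- Returns a row only when the memories table crosses the bloat threshold
SELECT relname,
       ROUND(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_percent
FROM pg_stat_user_tables
WHERE relname = 'memories'
  AND 100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0) > 20;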
+
+## Export Metrics
+
+Export monitoring data for analysis:
+
+```bash
+# Export to CSV
+psql $DATABASE_URL -c "\COPY (
+ SELECT * FROM pg_stat_user_indexes WHERE indexname LIKE '%vector%'
+) TO '/tmp/index_stats.csv' WITH CSV HEADER;"
+
+# Generate performance report
+psql $DATABASE_URL -H -o performance_report.html -c "
+ -- Your monitoring queries here
+"
+```
+
+This command provides comprehensive monitoring capabilities for optimizing your memory MCP server's performance.
diff --git a/mcp-servers/memory-mcp-server/.claude/commands/review.md b/mcp-servers/memory-mcp-server/.claude/commands/review.md
new file mode 100644
index 0000000..40fb885
--- /dev/null
+++ b/mcp-servers/memory-mcp-server/.claude/commands/review.md
@@ -0,0 +1,147 @@
+---
+description: Comprehensive code review for Memory MCP Server
+argument-hint: "[specific file, module, or leave empty for full review]"
+allowed-tools: Read, Grep, Glob, Task, TodoWrite
+---
+
+# Memory MCP Server Code Review
+
+Perform a comprehensive review of $ARGUMENTS with focus on MCP protocol compliance and memory system integrity:
+
+## Critical Security & Safety
+
+- **Data Isolation**: Verify companion/user boundary enforcement
+- **SQL Injection**: Check all database queries for parameterization
+- **Embedding Leakage**: Ensure vector data doesn't cross tenant boundaries
+- **Auth Tokens**: Validate secure storage and transmission
+- **API Keys**: Check for hardcoded credentials (OpenAI, Neon)
+- **Session Hijacking**: Review session management implementation
+
+## MCP Protocol Compliance
+
+- **JSON-RPC 2.0**: Validate message format compliance
+- **Error Codes**: Use standard JSON-RPC error codes (-32700 parse error, -32600 invalid request, -32601 method not found, -32602 invalid params, -32603 internal error)
+- **Tool Registration**: Verify proper tool manifest structure
+- **Parameter Validation**: Check Zod schemas match MCP expectations
+- **Response Format**: Ensure consistent response structure
+- **Streaming Support**: Validate partial result handling
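
For reference, a spec-compliant error response has this shape (illustrative `id`, message, and `data` payload):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "error": {
    "code": -32602,
    "message": "Invalid params: content must be a non-empty string",
    "data": { "field": "content" }
  }
}
```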
+
+## Memory System Integrity
+
+- **Vector Dimensions**: Ensure consistent embedding dimensions (1536 for OpenAI)
+- **Index Configuration**: Review IVFFlat/HNSW parameters
+- **Memory Lifecycle**: Check expiration and archival logic
+- **Consolidation Rules**: Validate memory merging algorithms
+- **Importance Scoring**: Review decay and update mechanisms
+- **Deduplication**: Check for duplicate memory prevention
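
Reviewing similarity logic is easier with the definition at hand; a self-contained sketch of cosine similarity with a dimension guard (the 1536 constant mirrors the OpenAI embedding size noted above):

```typescript
const EMBEDDING_DIM = 1536; // text-embedding-3-small output size

// Cosine similarity in [-1, 1]; note pgvector's <=> operator returns
// cosine *distance*, i.e. 1 minus this value.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== EMBEDDING_DIM || b.length !== EMBEDDING_DIM) {
    throw new Error("Embedding dimension mismatch");
  }
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

When reviewing, check that every code path enforces the dimension guard before hitting the database, so a mis-sized vector fails fast instead of corrupting search results.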
+
+## Performance Optimization
+
+- **N+1 Queries**: Identify and fix database query patterns
+- **Vector Search**: Optimize similarity thresholds and limits
+- **Index Usage**: Verify proper index hints and scans
+- **Connection Pooling**: Check pool size and timeout settings
+- **Batch Operations**: Look for opportunities to batch DB operations
+- **Caching Strategy**: Review memory and query result caching
+
+## Database & Schema
+
+- **Migration Safety**: Check for backward compatibility
+- **Transaction Boundaries**: Verify ACID compliance
+- **Deadlock Prevention**: Review lock ordering
+- **Foreign Keys**: Ensure referential integrity
+- **Soft Deletes**: Validate is_archived handling
+- **Timestamps**: Check timezone handling
+
+## Error Handling
+
+- **Database Errors**: Graceful handling of connection failures
+- **API Failures**: OpenAI API error recovery
+- **Validation Errors**: User-friendly error messages
+- **Timeout Handling**: Proper cleanup on timeouts
+- **Retry Logic**: Exponential backoff implementation
+- **Logging**: Structured logging with appropriate levels
+
+## Code Quality
+
+- **TypeScript Strict**: Enable strict mode compliance
+- **Type Safety**: No `any` types without justification
+- **Code Duplication**: Identify repeated patterns
+- **Function Complexity**: Break down complex functions
+- **Naming Conventions**: Consistent naming patterns
+- **Documentation**: JSDoc for public APIs
+
+## Testing Gaps
+
+- **Unit Test Coverage**: Minimum 80% coverage
+- **Integration Tests**: MCP protocol testing
+- **Vector Search Tests**: Similarity threshold validation
+- **Session Tests**: Multi-tenancy isolation
+- **Error Path Tests**: Exception handling coverage
+- **Performance Tests**: Load and stress testing
+
+## Specific Checks for Memory MCP
+
+```typescript
+// Check for these patterns:
+interface MemoryReviewChecks {
+ // 1. Embedding generation should handle failures
+ embeddings: {
+ fallbackStrategy: boolean;
+ retryLogic: boolean;
+ costTracking: boolean;
+ };
+
+ // 2. Vector search should be bounded
+ vectorSearch: {
+ maxResults: number;
+ minSimilarity: number;
+ timeoutMs: number;
+ };
+
+ // 3. Memory operations should be atomic
+ transactions: {
+ useTransactions: boolean;
+ rollbackOnError: boolean;
+ isolationLevel: string;
+ };
+
+ // 4. Session management should be secure
+ sessions: {
+ tokenRotation: boolean;
+ expirationHandling: boolean;
+ revokeOnLogout: boolean;
+ };
+}
+```
+
+## Priority Issues Format
+
+### 🔴 Critical (Security/Data Loss)
+
+- Issue description
+- File:line reference
+- Suggested fix
+
+### 🟡 Important (Performance/Reliability)
+
+- Issue description
+- File:line reference
+- Suggested fix
+
+### 🟢 Minor (Code Quality/Style)
+
+- Issue description
+- File:line reference
+- Suggested fix
+
+## Review Checklist
+
+- [ ] No sensitive data in logs
+- [ ] All DB queries parameterized
+- [ ] MCP responses follow spec
+- [ ] Vector operations are bounded
+- [ ] Sessions properly isolated
+- [ ] Errors handled gracefully
+- [ ] Performance within targets
+- [ ] Tests cover critical paths
diff --git a/mcp-servers/memory-mcp-server/.claude/commands/setup.md b/mcp-servers/memory-mcp-server/.claude/commands/setup.md
new file mode 100644
index 0000000..5a9db1d
--- /dev/null
+++ b/mcp-servers/memory-mcp-server/.claude/commands/setup.md
@@ -0,0 +1,381 @@
+---
+description: Initialize Memory MCP Server project from scratch
+argument-hint: "[quick, full, or database]"
+allowed-tools: Write, MultiEdit, Bash, Task, TodoWrite
+---
+
+# Memory MCP Server Setup
+
+Initialize and configure the Memory MCP Server project based on $ARGUMENTS:
+
+## Quick Setup
+
+Initialize minimal working MCP server with memory capabilities:
+
+```bash
+# Initialize project
+npm init -y
+npm install @modelcontextprotocol/sdk zod dotenv
+npm install -D typescript @types/node tsx nodemon
+npm install @neondatabase/serverless drizzle-orm@^0.44.4
+npm install openai pgvector
+
+# Create TypeScript config
+npx tsc --init
+```
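
The SDK handles the JSON-RPC wire protocol for you; as orientation only, here is a toy dispatcher for `tools/list` showing the message shapes involved (a sketch — the tool list is illustrative, and a real server should rely on `@modelcontextprotocol/sdk` rather than hand-rolling this):

```typescript
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id?: number | string;
  method: string;
  params?: Record<string, unknown>;
};

type JsonRpcResponse = {
  jsonrpc: "2.0";
  id?: number | string;
  result?: unknown;
  error?: { code: number; message: string };
};

// Toy dispatcher: the MCP SDK provides this routing for real servers.
function handle(req: JsonRpcRequest): JsonRpcResponse {
  if (req.method === "tools/list") {
    // A real server would enumerate every registered memory tool here
    return {
      jsonrpc: "2.0",
      id: req.id,
      result: { tools: [{ name: "create_memory", description: "Store a memory" }] },
    };
  }
  // Unknown method -> standard JSON-RPC "method not found"
  return {
    jsonrpc: "2.0",
    id: req.id,
    error: { code: -32601, message: `Method not found: ${req.method}` },
  };
}
```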
+
+## Full Setup
+
+Complete project initialization with all features:
+
+### 1. Project Structure
+
+```text
+memory-mcp-server/
+├── src/
+│ ├── index.ts # MCP server entry point
+│ ├── server.ts # Server initialization
+│ ├── tools/ # MCP tool implementations
+│ │ ├── createMemory.ts
+│ │ ├── searchMemories.ts
+│ │ ├── getMemory.ts
+│ │ ├── updateMemory.ts
+│ │ └── deleteMemory.ts
+│ ├── db/
+│ │ ├── client.ts # Database connection
+│ │ ├── schema.ts # Drizzle schema
+│ │ └── migrations/ # Database migrations
+│ ├── services/
+│ │ ├── embeddings.ts # OpenAI embeddings
+│ │ ├── vectorSearch.ts # pgvector operations
+│ │ └── memoryLifecycle.ts # Memory management
+│ ├── types/
+│ │ └── index.ts # TypeScript types
+│ └── utils/
+│ ├── logger.ts # Structured logging
+│ └── errors.ts # Error handling
+├── tests/
+│ ├── unit/
+│ ├── integration/
+│ └── fixtures/
+├── .env.example
+├── .mcp.json # MCP manifest
+├── tsconfig.json
+├── package.json
+├── drizzle.config.ts
+└── README.md
+```
+
+### 2. Package Dependencies
+
+```json
+{
+ "name": "memory-mcp-server",
+ "version": "1.0.0",
+ "type": "module",
+ "scripts": {
+ "dev": "tsx watch src/index.ts",
+ "build": "tsc",
+ "start": "node dist/index.js",
+ "test": "jest",
+ "test:watch": "jest --watch",
+ "test:coverage": "jest --coverage",
+ "lint": "eslint . --ext .ts",
+ "typecheck": "tsc --noEmit",
+ "db:generate": "drizzle-kit generate",
+ "db:migrate": "drizzle-kit migrate",
+ "db:studio": "drizzle-kit studio"
+ },
+ "dependencies": {
+ "@modelcontextprotocol/sdk": "^1.0.0",
+ "@neondatabase/serverless": "^1.0.1",
+ "drizzle-orm": "^0.44.4",
+ "zod": "^4.0.17",
+ "openai": "^4.0.0",
+ "pgvector": "^0.2.0",
+ "dotenv": "^16.0.0",
+ "winston": "^3.0.0"
+ },
+ "devDependencies": {
+ "@types/node": "^20.0.0",
+ "typescript": "^5.0.0",
+ "tsx": "^4.0.0",
+ "nodemon": "^3.0.0",
+ "jest": "^29.0.0",
+ "@types/jest": "^29.0.0",
+ "ts-jest": "^29.0.0",
+ "eslint": "^8.0.0",
+ "@typescript-eslint/eslint-plugin": "^6.0.0",
+ "@typescript-eslint/parser": "^6.0.0",
+ "drizzle-kit": "^0.32.0"
+ }
+}
+```
+
+### 3. TypeScript Configuration
+
+```json
+{
+ "compilerOptions": {
+ "target": "ES2022",
+ "module": "NodeNext",
+ "moduleResolution": "NodeNext",
+ "lib": ["ES2022"],
+ "outDir": "./dist",
+ "rootDir": "./src",
+ "strict": true,
+ "esModuleInterop": true,
+ "skipLibCheck": true,
+ "forceConsistentCasingInFileNames": true,
+ "resolveJsonModule": true,
+ "declaration": true,
+ "declarationMap": true,
+ "sourceMap": true,
+ "noUnusedLocals": true,
+ "noUnusedParameters": true,
+ "noImplicitReturns": true,
+ "noFallthroughCasesInSwitch": true
+ },
+ "include": ["src/**/*"],
+ "exclude": ["node_modules", "dist", "tests"]
+}
+```
+
+### 4. Environment Variables
+
+```bash
+# .env
+DATABASE_URL="postgresql://user:pass@host/dbname?sslmode=require"
+OPENAI_API_KEY="sk-..."
+MCP_SERVER_PORT=3000
+LOG_LEVEL=info
+NODE_ENV=development
+
+# Vector search settings
+VECTOR_SEARCH_LIMIT=10
+SIMILARITY_THRESHOLD=0.7
+
+# Memory lifecycle
+MEMORY_EXPIRATION_DAYS=90
+MAX_MEMORIES_PER_USER=10000
+IMPORTANCE_DECAY_RATE=0.1
+```
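
`IMPORTANCE_DECAY_RATE` is only named above; one plausible reading is exponential decay applied per day since a memory was last accessed. A minimal sketch under that assumption (the formula and function name are illustrative, not part of the server):

```typescript
// Sketch: exponential importance decay. Assumes IMPORTANCE_DECAY_RATE is a
// per-day rate and decay is measured from last_accessed (an assumption).
function decayedImportance(
  importance: number,      // stored importance in [0, 1]
  daysSinceAccess: number, // days elapsed since last_accessed
  decayRate: number        // e.g. IMPORTANCE_DECAY_RATE = 0.1
): number {
  return importance * Math.exp(-decayRate * daysSinceAccess);
}
```

With a rate of 0.1, a memory's importance falls to roughly 37% of its stored value after ten untouched days, which is one way to let stale memories sink in search ranking without deleting them.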
+
+### 5. MCP Manifest
+
+```json
+{
+ "name": "memory-mcp-server",
+ "version": "1.0.0",
+ "description": "Persistent memory management for AI assistants",
+ "author": "Your Name",
+ "license": "MIT",
+ "server": {
+ "command": "node",
+ "args": ["dist/index.js"],
+ "transport": "stdio"
+ },
+ "tools": {
+ "create_memory": {
+ "description": "Create a new memory with vector embedding",
+ "inputSchema": {
+ "type": "object",
+ "properties": {
+ "content": { "type": "string" },
+ "type": { "type": "string" },
+ "importance": { "type": "number" },
+ "expires_at": { "type": "string" }
+ },
+ "required": ["content", "type"]
+ }
+ },
+ "search_memories": {
+ "description": "Search memories using semantic similarity",
+ "inputSchema": {
+ "type": "object",
+ "properties": {
+ "query": { "type": "string" },
+ "limit": { "type": "number" },
+ "threshold": { "type": "number" }
+ },
+ "required": ["query"]
+ }
+ }
+ }
+}
+```
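
The manifest's `inputSchema` only advertises the contract; the server still has to enforce it. The project does that with Zod, but a dependency-free sketch of the checks implied by the `create_memory` schema above looks like this (shape and error messages are illustrative):

```typescript
// Dependency-free sketch of the create_memory input validation implied by
// the manifest schema above. The real server would express this with Zod.
interface CreateMemoryInput {
  content: string;
  type: string;
  importance?: number;
  expires_at?: string;
}

function parseCreateMemoryInput(raw: unknown): CreateMemoryInput {
  if (typeof raw !== "object" || raw === null) throw new Error("params must be an object");
  const obj = raw as Record<string, unknown>;
  if (typeof obj.content !== "string") throw new Error("content is required and must be a string");
  if (typeof obj.type !== "string") throw new Error("type is required and must be a string");
  if (obj.importance !== undefined && typeof obj.importance !== "number")
    throw new Error("importance must be a number");
  if (obj.expires_at !== undefined && typeof obj.expires_at !== "string")
    throw new Error("expires_at must be a string");
  return obj as unknown as CreateMemoryInput;
}
```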
+
+## Database Setup
+
+Initialize Neon PostgreSQL with pgvector:
+
+### 1. Create Database
+
+```sql
+-- Enable pgvector extension
+CREATE EXTENSION IF NOT EXISTS vector;
+
+-- Create database schema
+CREATE SCHEMA IF NOT EXISTS memory_mcp;
+```
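
For sanity checks and tests it helps to have a reference for what `vector_cosine_ops` measures: pgvector orders results by cosine *distance*, i.e. `1 - cosine similarity`. A plain TypeScript reference implementation (not used by the server itself):

```typescript
// Reference cosine similarity, mirroring what pgvector's vector_cosine_ops
// computes on the database side (pgvector ranks by 1 - this value).
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```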
+
+### 2. Drizzle Schema
+
+```typescript
+// src/db/schema.ts
+import { pgTable, uuid, text, timestamp, boolean, real, integer, vector, index, jsonb } from 'drizzle-orm/pg-core';
+
+export const users = pgTable('users', {
+ id: uuid('id').primaryKey().defaultRandom(),
+ external_id: text('external_id').notNull().unique(),
+ created_at: timestamp('created_at').defaultNow().notNull(),
+ metadata: jsonb('metadata')
+});
+
+export const companions = pgTable('companions', {
+ id: uuid('id').primaryKey().defaultRandom(),
+ name: text('name').notNull(),
+ user_id: uuid('user_id').references(() => users.id),
+ created_at: timestamp('created_at').defaultNow().notNull(),
+ is_active: boolean('is_active').default(true)
+});
+
+export const memories = pgTable('memories', {
+ id: uuid('id').primaryKey().defaultRandom(),
+ companion_id: uuid('companion_id').references(() => companions.id),
+ user_id: uuid('user_id').references(() => users.id),
+ content: text('content').notNull(),
+ type: text('type').notNull(),
+ embedding: vector('embedding', { dimensions: 1536 }),
+ importance: real('importance').default(0.5),
+ access_count: integer('access_count').default(0),
+ last_accessed: timestamp('last_accessed'),
+ expires_at: timestamp('expires_at'),
+ is_archived: boolean('is_archived').default(false),
+ created_at: timestamp('created_at').defaultNow().notNull(),
+ updated_at: timestamp('updated_at').defaultNow().notNull()
+}, (table) => ({
+ embeddingIdx: index('memories_embedding_idx')
+ .using('ivfflat', table.embedding.op('vector_cosine_ops'))
+ .with({ lists: 100 }),
+ userIdx: index('memories_user_idx').on(table.user_id),
+ companionIdx: index('memories_companion_idx').on(table.companion_id),
+ typeIdx: index('memories_type_idx').on(table.type)
+}));
+```
+
+### 3. Migration Commands
+
+```bash
+# Generate migration
+npx drizzle-kit generate
+
+# Run migrations
+npx drizzle-kit migrate
+
+# Open Drizzle Studio
+npx drizzle-kit studio
+```
+
+## Initial Server Implementation
+
+```typescript
+// src/index.ts
+import { Server } from '@modelcontextprotocol/sdk/server/index.js';
+import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
+import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
+import { createMemoryTool } from './tools/createMemory.js';
+import { searchMemoriesTool } from './tools/searchMemories.js';
+
+const server = new Server({
+  name: 'memory-mcp-server',
+  version: '1.0.0'
+}, {
+  capabilities: {
+    tools: {}
+  }
+});
+
+// Register tools (the SDK keys handlers by request schema, not by method string)
+server.setRequestHandler(ListToolsRequestSchema, async () => ({
+  tools: [
+    createMemoryTool.definition,
+    searchMemoriesTool.definition
+  ]
+}));
+
+server.setRequestHandler(CallToolRequestSchema, async (request) => {
+  const { name, arguments: args } = request.params;
+
+  switch (name) {
+    case 'create_memory':
+      return await createMemoryTool.handler(args);
+    case 'search_memories':
+      return await searchMemoriesTool.handler(args);
+    default:
+      throw new Error(`Unknown tool: ${name}`);
+  }
+});
+
+// Start server; log to stderr so stdout stays reserved for the stdio transport
+const transport = new StdioServerTransport();
+await server.connect(transport);
+console.error('Memory MCP Server started');
+```
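
The entry point above assumes each file in `src/tools/` exports an object with a `definition` (advertised via `tools/list`) and a `handler` (invoked from `tools/call`). A minimal sketch of that shape, using a placeholder echo tool rather than the real memory implementation:

```typescript
// Sketch of the tool-module shape src/index.ts imports. The echo behaviour
// is a stand-in; createMemory.ts would validate input, embed, and insert.
const echoTool = {
  definition: {
    name: "echo",
    description: "Echo back the provided text",
    inputSchema: {
      type: "object",
      properties: { text: { type: "string" } },
      required: ["text"]
    }
  },
  handler: async (args: { text: string }) => ({
    content: [{ type: "text", text: args.text }]
  })
};
```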
+
+## Testing Setup
+
+```bash
+# Install test dependencies
+npm install -D jest @types/jest ts-jest
+
+# Create Jest config
+npx ts-jest config:init
+
+# Run tests
+npm test
+```
+
+## Development Workflow
+
+```bash
+# Start development server
+npm run dev
+
+# In another terminal, connect with the MCP Inspector to list and call the
+# server's tools (create_memory, search_memories) interactively
+npx @modelcontextprotocol/inspector node dist/index.js
+```
+
+## Production Deployment
+
+```dockerfile
+# Dockerfile
+FROM node:20-alpine
+WORKDIR /app
+COPY package*.json ./
+RUN npm ci --omit=dev
+COPY dist ./dist
+CMD ["node", "dist/index.js"]
+```
+
+## Monitoring & Observability
+
+```typescript
+// src/utils/logger.ts
+import winston from 'winston';
+
+export const logger = winston.createLogger({
+ level: process.env.LOG_LEVEL || 'info',
+ format: winston.format.json(),
+ transports: [
+ new winston.transports.Console(),
+ new winston.transports.File({ filename: 'error.log', level: 'error' }),
+ new winston.transports.File({ filename: 'combined.log' })
+ ]
+});
+```
+
+This setup provides a complete foundation for the Memory MCP Server with all necessary configurations and best practices.
diff --git a/mcp-servers/memory-mcp-server/.claude/commands/test.md b/mcp-servers/memory-mcp-server/.claude/commands/test.md
new file mode 100644
index 0000000..e78843c
--- /dev/null
+++ b/mcp-servers/memory-mcp-server/.claude/commands/test.md
@@ -0,0 +1,305 @@
+---
+description: Generate comprehensive tests for Memory MCP Server
+argument-hint: "[file, function, MCP tool, or test scenario]"
+allowed-tools: Read, Write, MultiEdit, Bash, Task, TodoWrite
+---
+
+# Memory MCP Server Test Generation
+
+Generate comprehensive test cases for $ARGUMENTS with focus on MCP protocol compliance and memory operations:
+
+## Unit Tests
+
+### MCP Protocol Tests
+
+```typescript
+// Test MCP message handling
+describe('MCP Protocol', () => {
+ it('should handle JSON-RPC 2.0 requests', async () => {
+ // Test request with id
+ // Test notification without id
+ // Test batch requests
+ });
+
+ it('should return proper error codes', async () => {
+ // -32700: Parse error
+ // -32600: Invalid request
+ // -32601: Method not found
+ // -32602: Invalid params
+ // -32603: Internal error
+ });
+
+ it('should validate tool parameters with Zod', async () => {
+ // Test required fields
+ // Test type validation
+ // Test nested schemas
+ });
+});
+```
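
The error-code cases above can be exercised against a small triage helper that maps a raw incoming string to the standard JSON-RPC 2.0 code, or `null` when the envelope is well-formed. The `knownMethods` set is an assumption for illustration:

```typescript
// Sketch of JSON-RPC 2.0 triage: return the standard error code for a raw
// message, or null if the envelope is structurally valid. A handler can
// still fail later with -32603 (Internal error).
const knownMethods = new Set(["tools/list", "tools/call"]);

function jsonRpcErrorCode(raw: string): number | null {
  let msg: any;
  try { msg = JSON.parse(raw); } catch { return -32700; }   // Parse error
  if (typeof msg !== "object" || msg === null || msg.jsonrpc !== "2.0"
      || typeof msg.method !== "string") return -32600;     // Invalid request
  if (!knownMethods.has(msg.method)) return -32601;         // Method not found
  if (msg.params !== undefined && typeof msg.params !== "object")
    return -32602;                                          // Invalid params
  return null;
}
```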
+
+### Memory Operations Tests
+
+```typescript
+// Test memory CRUD operations
+describe('Memory Operations', () => {
+ it('should create memory with embeddings', async () => {
+ // Test successful creation
+ // Test OpenAI API failure handling
+ // Test vector dimension validation
+ });
+
+ it('should perform vector similarity search', async () => {
+ // Test similarity threshold
+ // Test result limit
+ // Test empty results
+ // Test index usage
+ });
+
+ it('should handle memory lifecycle', async () => {
+ // Test expiration
+ // Test archival
+ // Test soft delete
+ // Test importance decay
+ });
+
+ it('should consolidate memories', async () => {
+ // Test deduplication
+ // Test summarization
+ // Test relationship creation
+ });
+});
+```
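
For the consolidation cases, the deduplication step can be sketched as a greedy filter: keep a memory only if no already-kept memory's embedding exceeds a similarity threshold. The 0.95 default and the `MemoryLike` shape are assumptions for illustration:

```typescript
// Greedy near-duplicate filter over embeddings, as one plausible
// implementation of the deduplication the consolidation tests target.
interface MemoryLike { id: string; embedding: number[]; }

function dedupeBySimilarity(memories: MemoryLike[], threshold = 0.95): MemoryLike[] {
  const sim = (a: number[], b: number[]): number => {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) {
      dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
  };
  const kept: MemoryLike[] = [];
  for (const m of memories) {
    if (!kept.some(k => sim(k.embedding, m.embedding) >= threshold)) kept.push(m);
  }
  return kept;
}
```

Note this is O(n²) in the worst case; at production scale the same check would be pushed into pgvector rather than done in memory.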
+
+### Database Tests
+
+```typescript
+// Test database operations
+describe('Database Operations', () => {
+ it('should handle transactions', async () => {
+ // Test commit on success
+ // Test rollback on error
+ // Test isolation levels
+ });
+
+ it('should use pgvector correctly', async () => {
+ // Test vector operations
+ // Test distance calculations
+ // Test index scans
+ });
+
+ it('should maintain referential integrity', async () => {
+ // Test foreign keys
+ // Test cascade deletes
+ // Test orphan prevention
+ });
+});
+```
+
+## Integration Tests
+
+### MCP Server Integration
+
+```typescript
+// Test full MCP server flow
+describe('MCP Server Integration', () => {
+ let server: MCPServer;
+ let client: MCPClient;
+
+ beforeEach(async () => {
+ server = await createMemoryMCPServer();
+ client = await connectMCPClient(server);
+ });
+
+ it('should register tools on connection', async () => {
+ const tools = await client.listTools();
+ expect(tools).toContain('create_memory');
+ expect(tools).toContain('search_memories');
+ });
+
+ it('should handle tool execution', async () => {
+ const result = await client.executeTool('create_memory', {
+ content: 'Test memory',
+ type: 'fact'
+ });
+ expect(result.id).toBeDefined();
+ expect(result.embedding).toHaveLength(1536);
+ });
+
+ it('should maintain session isolation', async () => {
+ // Test multi-tenant boundaries
+ // Test companion isolation
+ // Test user context
+ });
+});
+```
+
+### Vector Search Integration
+
+```typescript
+// Test vector search functionality
+describe('Vector Search Integration', () => {
+ it('should find similar memories', async () => {
+ // Create test memories
+ // Generate embeddings
+ // Test similarity search
+ // Verify ranking
+ });
+
+ it('should use indexes efficiently', async () => {
+ // Test IVFFlat performance
+ // Test HNSW performance
+ // Monitor query plans
+ });
+});
+```
+
+## Edge Cases & Error Conditions
+
+```typescript
+describe('Edge Cases', () => {
+ it('should handle malformed requests', async () => {
+ // Invalid JSON
+ // Missing required fields
+ // Wrong types
+ });
+
+ it('should handle resource limits', async () => {
+ // Max memory count per user
+ // Request size limits
+ // Rate limiting
+ });
+
+ it('should handle concurrent operations', async () => {
+ // Parallel memory creation
+ // Concurrent searches
+ // Session conflicts
+ });
+
+ it('should handle external service failures', async () => {
+ // Database down
+ // OpenAI API timeout
+ // Network errors
+ });
+});
+```
+
+## Performance Tests
+
+```typescript
+describe('Performance', () => {
+ it('should handle bulk operations', async () => {
+ // Batch memory creation
+ // Large result sets
+ // Pagination
+ });
+
+ it('should meet latency requirements', async () => {
+ // Vector search < 200ms
+ // CRUD operations < 100ms
+ // Tool registration < 50ms
+ });
+
+ it('should scale with data volume', async () => {
+ // Test with 10K memories
+ // Test with 100K memories
+ // Test with 1M memories
+ });
+});
+```
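
The latency assertions above need a way to fail a test on overrun. A small helper using the standard `performance.now()` clock (names and the error format are illustrative):

```typescript
// Time an async operation and throw if it exceeds a millisecond budget,
// e.g. expectUnderBudget(() => search(query), 200) for vector search.
async function expectUnderBudget<T>(op: () => Promise<T>, budgetMs: number): Promise<T> {
  const start = performance.now();
  const result = await op();
  const elapsed = performance.now() - start;
  if (elapsed > budgetMs) {
    throw new Error(`operation took ${elapsed.toFixed(1)}ms, budget was ${budgetMs}ms`);
  }
  return result;
}
```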
+
+## Mock Strategies
+
+```typescript
+// Mocking external dependencies
+const mocks = {
+ // Mock OpenAI API
+ openai: {
+ embeddings: {
+ create: jest.fn().mockResolvedValue({
+ data: [{ embedding: new Array(1536).fill(0.1) }]
+ })
+ }
+ },
+
+ // Mock database
+ db: {
+ query: jest.fn(),
+ transaction: jest.fn()
+ },
+
+ // Mock MCP client
+ mcpClient: {
+ request: jest.fn(),
+ notify: jest.fn()
+ }
+};
+```
+
+## Test Data Fixtures
+
+```typescript
+// Reusable test data
+export const fixtures = {
+ memories: [
+ {
+ content: 'User prefers dark mode',
+ type: 'preference',
+ importance: 0.8
+ },
+ {
+ content: 'Meeting scheduled for 3pm',
+ type: 'event',
+ expires_at: '2024-12-31'
+ }
+ ],
+
+ embeddings: {
+ sample: new Array(1536).fill(0.1),
+ similar: new Array(1536).fill(0.09),
+ different: new Array(1536).fill(0.5)
+ },
+
+ mcpRequests: {
+ valid: {
+ jsonrpc: '2.0',
+ method: 'create_memory',
+ params: { content: 'Test' },
+ id: 1
+ },
+ invalid: {
+ jsonrpc: '1.0', // Wrong version
+ method: 'unknown_method'
+ }
+ }
+};
+```
+
+## Test Coverage Requirements
+
+- **Unit Tests**: 90% code coverage
+- **Integration Tests**: All critical paths
+- **E2E Tests**: Core user journeys
+- **Performance Tests**: Load scenarios
+- **Security Tests**: Auth and isolation
+
+## Test Execution Commands
+
+```bash
+# Run all tests
+npm test
+
+# Run with coverage
+npm run test:coverage
+
+# Run specific test file
+npm test -- memory.test.ts
+
+# Run integration tests
+npm run test:integration
+
+# Run performance tests
+npm run test:perf
+
+# Watch mode for development
+npm run test:watch
+```