Add LLM endpoints, web frontend, and rate limiting config
- Added OpenAI-compatible LLM endpoints to API backend (a hedged sketch follows this list)
- Introduced web frontend with Jinja2 templates and static assets
- Implemented API proxy routes in web service (see the proxy sketch after the .env diff below)
- Added sample db.json data for items, users, orders, reviews, categories, llm_requests
- Updated ADC and Helm configs for separate AI and standard rate limiting
- Upgraded FastAPI, Uvicorn, and added httpx, Jinja2, python-multipart dependencies
- Added API configuration modal and client-side JS for web app
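The diff below only covers the web configuration file, so as a companion to the first bullet, here is a minimal sketch of what an OpenAI-compatible chat completions route in FastAPI could look like. The `/v1` prefix, the request fields, and the stubbed response are assumptions for illustration, not the repository's actual implementation.

```python
# Minimal sketch of an OpenAI-compatible chat completions route.
# ASSUMPTIONS: the "/v1" prefix, field set, and stubbed reply are
# illustrative; the real backend presumably runs actual inference here.
import time
import uuid

from fastapi import APIRouter, FastAPI
from pydantic import BaseModel


class ChatMessage(BaseModel):
    role: str
    content: str


class ChatCompletionRequest(BaseModel):
    model: str
    messages: list[ChatMessage]
    temperature: float = 1.0


router = APIRouter(prefix="/v1")


@router.post("/chat/completions")
async def chat_completions(req: ChatCompletionRequest):
    # Stub reply; a real implementation would call the underlying model.
    reply = f"Received {len(req.messages)} message(s) for model {req.model}"
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": reply},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }


app = FastAPI()
app.include_router(router)
```

A response shaped like this (id, object, created, model, choices, usage) is what lets off-the-shelf OpenAI client SDKs talk to the backend simply by overriding their base URL.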
web/.env.example (new file, 12 lines)
@@ -0,0 +1,12 @@
+# API Backend Configuration
+# Set this to the base URL where the API service is running
+
+# Local development
+API_BASE_URL=http://localhost:8001
+
+# Production
+# API_BASE_URL=https://commandware.it/api
+
+# Other examples
+# API_BASE_URL=http://api:8001
+# API_BASE_URL=http://192.168.1.100:8001
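Since the file only sets `API_BASE_URL`, here is a minimal sketch of how the web service's proxy routes (third bullet of the commit message) might consume it with httpx. The catch-all route pattern and the pared-down header handling are illustrative assumptions, not the actual code.

```python
# Minimal sketch of a web-service proxy route that consumes API_BASE_URL.
# ASSUMPTIONS: the "/api/{path}" pattern and the simplified header
# handling are illustrative, not the repository's actual routing.
import os

import httpx
from fastapi import FastAPI, Request, Response

app = FastAPI()
API_BASE_URL = os.getenv("API_BASE_URL", "http://localhost:8001")


@app.api_route("/api/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
async def proxy(path: str, request: Request) -> Response:
    # Forward method, query string, and body to the backend; a real
    # proxy would also pass through relevant request/response headers.
    async with httpx.AsyncClient(base_url=API_BASE_URL) as client:
        upstream = await client.request(
            request.method,
            f"/{path}",
            params=dict(request.query_params),
            content=await request.body(),
        )
    return Response(
        content=upstream.content,
        status_code=upstream.status_code,
        media_type=upstream.headers.get("content-type"),
    )
```

With the local-development value above, a browser request to `/api/items` on the web service would be forwarded to `http://localhost:8001/items`.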