Add LLM endpoints, web frontend, and rate limiting config
- Added OpenAI-compatible LLM endpoints to the API backend (see the sketch after this list)
- Introduced a web frontend with Jinja2 templates and static assets
- Implemented API proxy routes in the web service
- Added sample db.json data for items, users, orders, reviews, categories, and llm_requests
- Updated ADC and Helm configs for separate AI and standard rate limiting
- Upgraded FastAPI and Uvicorn; added httpx, Jinja2, and python-multipart dependencies
- Added an API configuration modal and client-side JS for the web app
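A minimal sketch of the two backend pieces named above: an OpenAI-compatible LLM endpoint on the API backend and a proxy route on the web service. It assumes FastAPI and httpx (both appear in the dependency bump); the route paths, payload fields, and upstream address are illustrative assumptions, not taken from the commit.

```python
# Hypothetical sketch; paths, payload shape, and upstream URL are assumptions.
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from pydantic import BaseModel
import httpx

app = FastAPI()

class ChatCompletionRequest(BaseModel):
    model: str
    messages: list[dict]

# OpenAI-compatible LLM endpoint on the API backend (path assumed).
@app.post("/api/llm/v1/chat/completions")
async def chat_completions(req: ChatCompletionRequest):
    # Stub response in the OpenAI chat-completions shape; a real backend
    # would invoke the model here.
    return {
        "object": "chat.completion",
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "stub reply"},
            "finish_reason": "stop",
        }],
    }

API_BASE = "http://api-backend:8000"  # upstream address is an assumption

# API proxy route on the web service: forward the request and relay the reply.
@app.api_route("/api/{path:path}", methods=["GET", "POST"])
async def proxy(path: str, request: Request):
    async with httpx.AsyncClient() as client:
        upstream = await client.request(
            request.method,
            f"{API_BASE}/api/{path}",
            content=await request.body(),
            headers={"content-type": request.headers.get("content-type", "application/json")},
        )
    return JSONResponse(status_code=upstream.status_code, content=upstream.json())
```

Returning `upstream.json()` assumes JSON bodies end to end; relaying LLM token streams would need httpx's streaming API instead.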
```diff
@@ -247,7 +247,16 @@ api7:
   # API7 Plugins Configuration
   plugins:
-    # AI Rate limiting (for /api route)
+    # Standard Rate limiting (for /api route - per IP)
     rateLimit:
       enabled: true
       count: 100
       timeWindow: 60
+      rejectedCode: 429
+      keyType: "var"
+      key: "remote_addr"
+
+    # AI Rate limiting (for /api/llm route)
+    aiRateLimit:
+      enabled: true
+      limit: 100
+
```
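The `rateLimit` values above carry the same fields as the APISIX/API7 `limit-count` plugin (count, time window, rejected code, key type, key). A rough sketch of the mapping the chart template presumably performs, expressed in Python; the exact template output is an assumption, not taken from the diff.

```python
# Sketch of the plugin block the rateLimit values might render to, modeled
# on the APISIX "limit-count" plugin schema; the chart's actual template
# output is an assumption.
rate_limit_values = {
    "enabled": True,
    "count": 100,
    "timeWindow": 60,
    "rejectedCode": 429,
    "keyType": "var",
    "key": "remote_addr",
}

def render_limit_count(values: dict) -> dict:
    """Map chart values to a limit-count plugin block (snake_case per APISIX)."""
    if not values.get("enabled"):
        return {}
    return {
        "limit-count": {
            "count": values["count"],
            "time_window": values["timeWindow"],
            "rejected_code": values["rejectedCode"],
            "key_type": values["keyType"],
            "key": values["key"],
        }
    }

print(render_limit_count(rate_limit_values))
# {'limit-count': {'count': 100, 'time_window': 60,
#   'rejected_code': 429, 'key_type': 'var', 'key': 'remote_addr'}}
```

Note that `aiRateLimit` exposes `limit` rather than `count`, which suggests it feeds a separate AI-oriented rate-limiting plugin on the `/api/llm` route; that mapping is not shown in the diff.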