Add LLM endpoints, web frontend, and rate limiting config

- Added OpenAI-compatible LLM endpoints to the API backend (sketched below)
- Introduced a web frontend with Jinja2 templates and static assets
- Implemented API proxy routes in the web service
- Added sample db.json data for items, users, orders, reviews, categories, llm_requests
- Updated ADC and Helm configs for separate AI and standard rate limiting
- Upgraded FastAPI and Uvicorn; added httpx, Jinja2, python-multipart dependencies
- Added an API configuration modal and client-side JS for the web app
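
A rough sketch of the two new route styles described above. All names here (create_chat_completion, the catch-all proxy path, the http://api:8000 in-cluster address) are illustrative assumptions, not the actual identifiers from this commit, and in the real layout the two handlers live in separate API and web services rather than one app:

import httpx
from fastapi import FastAPI, Request, Response
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    model: str
    messages: list[dict]

# OpenAI-compatible endpoint (API backend). The route and response
# envelope follow the OpenAI chat-completions shape; a real handler
# would call the backing model instead of returning a placeholder.
@app.post("/api/llm/v1/chat/completions")
async def create_chat_completion(req: ChatRequest):
    return {
        "object": "chat.completion",
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "..."},
            "finish_reason": "stop",
        }],
    }

API_BASE = "http://api:8000"  # assumed in-cluster service name

# Proxy route (web service): forwards browser calls to the API backend
# so the frontend only ever talks to its own origin.
@app.api_route("/api/{path:path}", methods=["GET", "POST"])
async def proxy(path: str, request: Request):
    async with httpx.AsyncClient() as client:
        upstream = await client.request(
            request.method,
            f"{API_BASE}/api/{path}",
            content=await request.body(),
        )
    return Response(
        content=upstream.content,
        status_code=upstream.status_code,
        media_type=upstream.headers.get("content-type"),
    )
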
Author: d.viti
Date: 2025-10-07 17:29:12 +02:00
Parent: 78baa5ad21
Commit: ed660dce5a
16 changed files with 1551 additions and 138 deletions

@@ -247,7 +247,16 @@ api7:
   # API7 Plugins Configuration
   plugins:
-    # AI Rate limiting (for /api route)
+    # Standard Rate limiting (for /api route - per IP)
     rateLimit:
       enabled: true
       count: 100
       timeWindow: 60
+      rejectedCode: 429
+      keyType: "var"
+      key: "remote_addr"
+    # AI Rate limiting (for /api/llm route)
+    aiRateLimit:
+      enabled: true
+      limit: 100
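
To verify that the two limits are enforced independently, a small probe can exhaust each route's budget and watch for the configured 429 rejection. A minimal sketch, assuming the gateway listens on localhost:8080 and using illustrative paths:

import httpx

BASE = "http://localhost:8080"  # assumed gateway address

def probe(path: str, attempts: int) -> None:
    # Fire requests until the gateway answers with the configured
    # rejectedCode (429), then report how many got through.
    with httpx.Client() as client:
        for i in range(attempts):
            if client.get(f"{BASE}{path}").status_code == 429:
                print(f"{path}: rate limited after {i} requests")
                return
    print(f"{path}: all {attempts} requests allowed")

probe("/api/items", 120)  # standard rateLimit: count 100 per 60s window
probe("/api/llm", 120)    # aiRateLimit: separate budget for the LLM route

If the split works, exhausting /api's quota should leave /api/llm unaffected, and vice versa.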