Spaces: LeoNguyen101120 / ai-assistance
ai-assistance / src / utils (branch: main) · 21.8 kB · 5 contributors · History: 55 commits
Latest commit: LeoNguyen101120 · Update src/utils/clients/llama_cpp_client.py · d1948a3 (verified) · 7 months ago
clients/ · Update src/utils/clients/llama_cpp_client.py · 7 months ago
tools/ · Update configuration and refactor chat handling: Change default port in launch.json, modify main.py to simplify FastAPI initialization by removing the lifespan context manager, update LLM_MODEL_NAME in config.py, and enhance system prompts for clearer tool call instructions. Refactor chat service and client to streamline tool call processing and improve response handling. · 7 months ago
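The commit above mentions streamlining tool call processing in the chat service. A minimal sketch of what parsing a tool call out of model output can look like; the `TOOL_CALL:` prefix and the JSON payload shape are assumptions for illustration, not the repo's actual format:

```python
import json


def extract_tool_call(text: str):
    """Return (name, arguments) if the model output encodes a tool call.

    Hypothetical format: a "TOOL_CALL:" prefix followed by a JSON object
    with "name" and "arguments" keys. Plain chat text returns None.
    """
    prefix = "TOOL_CALL:"
    if not text.startswith(prefix):
        return None  # ordinary assistant reply, no tool invocation
    payload = json.loads(text[len(prefix):])
    return payload.get("name"), payload.get("arguments", {})
```

In a chat service, the dispatcher would call the named tool with the parsed arguments and feed the result back to the model as a follow-up message.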
exception.py · Safe · 236 Bytes · update · 11 months ago
stream_helper.py · Safe · 2.15 kB · Refactor Dockerfile and update requirements: Uncomment llama-cpp-python installation lines for potential future use, streamline requirements installation, and modify CMD to use uvicorn for running the FastAPI app. Enhance chat service to utilize transformer_client for improved streaming and tool call handling, and introduce a new stream_helper for processing content. · 7 months ago
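The commit above introduces a stream_helper for processing streamed content. A minimal stand-in, assuming the helper's core job is joining token deltas while skipping empty chunks; the real helper also handles tool-call fragments, which is omitted here:

```python
def process_chunks(chunks):
    """Join streamed token deltas into the final response text.

    Hypothetical sketch of a stream helper: empty strings and None
    deltas (common in streaming APIs) are dropped before joining.
    """
    return "".join(c for c in chunks if c)
```

A streaming endpoint would typically yield each non-empty chunk to the client as it arrives and keep the joined text for logging or tool-call detection.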
timing.py · Safe · 377 Bytes · Refactor chat handling and model integration: Update .env.example to include new API keys, modify main.py to implement a lifespan context manager for resource management, and replace Message class with dictionary structures in chat_request.py and chat_service.py for improved flexibility. Remove unused message and response models to streamline codebase. · 7 months ago
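The commit above describes adding a lifespan context manager to main.py for resource management. The pattern can be sketched standalone with `contextlib.asynccontextmanager`; the `llm_client` resource name is hypothetical, and in FastAPI the same function would be passed as `FastAPI(lifespan=lifespan)`:

```python
import asyncio
from contextlib import asynccontextmanager

resources = {}


@asynccontextmanager
async def lifespan(app):
    # startup: load shared resources once, before serving requests
    # ("llm_client" is a hypothetical name, not from the repo)
    resources["llm_client"] = object()
    yield
    # shutdown: release resources when the app stops
    resources.clear()


async def demo():
    async with lifespan(app=None):
        assert "llm_client" in resources  # available while the app runs
    assert not resources  # cleaned up on shutdown


asyncio.run(demo())
```

This is why the earlier `tools` commit, which removed the lifespan manager, and this one, which introduced it, both touch main.py: startup/shutdown wiring lives in the app factory.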