Cllama is a Python Streamlit web UI for chatting with the LLM models available locally in Ollama.
- Real-time weather information using tools.
- Real-time weather alerts using tools.
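As a rough illustration of how tool-based features like the weather lookups above can work with Ollama's tool calling, here is a minimal Python sketch. All names (`get_weather`, `TOOLS`, `dispatch`) are hypothetical, not Cllama's actual code; a real app would pass the schema to the model and route the tool calls it emits.

```python
# Hypothetical sketch of wiring a weather tool for Ollama tool calling.
# The schema below uses the OpenAI-style function format that Ollama accepts.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

def get_weather(city: str) -> dict:
    # Placeholder: a real implementation would call a weather API here.
    return {"city": city, "temp_c": 21, "conditions": "clear"}

# Map tool names to the Python functions that implement them.
REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> dict:
    """Route a model-emitted tool call to the matching registered function."""
    fn = tool_call["function"]
    return REGISTRY[fn["name"]](**fn["arguments"])
```

When the model responds with a tool call, `dispatch` runs the matching function and the result is sent back to the model in a follow-up message.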
You can install Ollama by following the official docs at https://ollama.com/download, or by running the `install_ollama.sh` bash script. Follow these commands:

- `chmod +x install_ollama_models.sh`
- `./install_ollama_models.sh`
- Download any LLM model from the Ollama model library at https://ollama.com/search. For instance, to install IBM's Granite 3.3 model, use this command:
- `ollama run granite3.3`
To see the downloaded LLM models available in your local Ollama, use this command:

- `ollama list`
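The same information is available programmatically from the local Ollama server's `/api/tags` endpoint, which is what `ollama list` reports. A small Python sketch (the helper name and default port are assumptions; 11434 is Ollama's default):

```python
import json
import urllib.request

def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Return the model names known to the local Ollama server,
    equivalent to running `ollama list`."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        payload = json.load(resp)
    return [m["name"] for m in payload.get("models", [])]

# The response shape, shown against a sample payload:
sample = '{"models": [{"name": "granite3.3:latest"}, {"name": "llama3:8b"}]}'
names = [m["name"] for m in json.loads(sample)["models"]]
```

This is handy for populating a model-picker dropdown in a Streamlit UI.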
- Use this command to install uv with Python's pip:
- `pip install uv`

Or use this command to install it with Homebrew (recommended for macOS):

- `brew install uv`
To sync all the dependencies for the cllama project, use:

- `uv sync`
To run the cllama web UI, use this Streamlit command from the project's base directory:

- `streamlit run src/main.py`
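Under the hood, a UI like this typically streams replies from Ollama's `/api/chat` endpoint, which returns one JSON object per line. A minimal sketch of assembling such a stream (the helper name is illustrative, and the sample lines are hand-written, not captured output):

```python
import json

def assemble_stream(lines):
    """Concatenate assistant content chunks from an Ollama /api/chat
    streaming response (newline-delimited JSON, final chunk has done=true)."""
    text = []
    for line in lines:
        chunk = json.loads(line)
        text.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(text)

# Hand-written sample chunks in the streaming response shape:
sample = [
    '{"message": {"role": "assistant", "content": "Hel"}, "done": false}',
    '{"message": {"role": "assistant", "content": "lo"}, "done": false}',
    '{"message": {"role": "assistant", "content": ""}, "done": true}',
]
reply = assemble_stream(sample)
```

In a Streamlit app, each chunk would be appended to the chat area as it arrives rather than collected into one string.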
a) Create a task on the Cllama project planning board (CPPB): https://github.com/users/joshanjohn/projects/17
b) Use the branch name the GitHub task suggests: the issue number followed by the task title, separated by hyphens (e.g. 4-task-title).
c) Submit a pull request (PR) before merging into the main branch.
d) A PR approval review is required to merge.
- In your terminal, `cd` into the project root.
- Run: `uv run ruff check`
- To apply safe automatic fixes, run: `uv run ruff check --fix`
- To apply unsafe automatic fixes, run: `uv run ruff check --fix --unsafe-fixes`

You may need to run the formatter (`black .`) again after applying ruff's automatic fixes.
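To make the "safe fix" step concrete, here is the kind of change `ruff check --fix` applies on its own, shown as a hand-made before/after (this snippet is illustrative, not from the Cllama codebase):

```python
# Before the fix, ruff flags an unused import (rule F401):
#   import os        <- `ruff check --fix` deletes this line
# After the fix, only the imports actually used remain:
import json

def load_config(text: str) -> dict:
    """Parse a JSON configuration string."""
    return json.loads(text)
```

Unsafe fixes (`--unsafe-fixes`) can change behavior in edge cases, which is why ruff keeps them behind a separate flag.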
- In your terminal, `cd` into the project root.
- Run: `uv run black .`
All The Best 🙌🏻