- Install Docker Desktop and Docker Compose.
- Clone the repository.
- Move to the root directory `pdf_trainer`.
- Run:

  ```sh
  docker-compose up --build
  ```

  This will spin up three containers:
  - `chroma` - the vector database
  - `ollama-phi3` - the model
  - `fast-api` - the application
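
  It can take a little while after `docker-compose up` for the model and API containers to finish starting. If you want to wait for readiness from a script, the small Python sketch below (not part of the repository) simply polls the Swagger page from the next step until it responds; the two-minute deadline is an arbitrary assumption.

  ```python
  # wait_for_api.py - optional helper, not part of the repo: poll the FastAPI
  # docs page until the stack is reachable. The URL is the one from this README;
  # the 120-second deadline is an arbitrary assumption.
  import time
  import urllib.error
  import urllib.request

  URL = "http://localhost:80/docs"
  deadline = time.time() + 120

  while time.time() < deadline:
      try:
          with urllib.request.urlopen(URL, timeout=5) as resp:
              if resp.status == 200:
                  print("API is up at", URL)
                  break
      except (urllib.error.URLError, OSError):
          pass
      time.sleep(3)
  else:
      raise SystemExit("API did not come up before the deadline")
  ```
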
- If everything was successful, you should be able to visit http://localhost:80/docs and see the Swagger UI, where you will find the endpoints.
- You have two endpoints.
- POST method - submit the PDF that you would like the AI to be trained on, e.g.:

  ```sh
  curl -X 'POST' \
    'http://0.0.0.0/uploadfile/' \
    -H 'accept: application/json' \
    -H 'Content-Type: multipart/form-data' \
    -F 'file=@test.pdf;type=application/pdf'
  ```
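
  If you prefer to call the endpoint from Python, an equivalent upload might look like the sketch below; the `/uploadfile/` path and the `file` field name come from the curl example above, while the `localhost` host and the use of the `requests` package are assumptions for a local run.

  ```python
  # upload_pdf.py - sketch of the same upload done from Python with `requests`
  # (pip install requests). Host/port are assumed to match the local compose setup.
  import requests

  url = "http://localhost:80/uploadfile/"
  with open("test.pdf", "rb") as f:
      resp = requests.post(
          url,
          # Field name and content type taken from the curl example.
          files={"file": ("test.pdf", f, "application/pdf")},
      )
  resp.raise_for_status()
  print(resp.json())
  ```
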
- GET method - check the processing status of the uploaded PDF. For example, if the uploaded file name is test.pdf:

  ```sh
  curl -X 'GET' \
    'http://0.0.0.0/files/test.pdf/status' \
    -H 'accept: application/json'
  ```

  This will return:

  ```json
  { "file": "test.pdf", "status": "done" }
  ```
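
  To wait for processing to finish from a script, you could poll this endpoint until it reports `done`. A minimal sketch, again assuming the `requests` package and a local host, is:

  ```python
  # poll_status.py - sketch: poll the status endpoint until the PDF is processed.
  # The response shape {"file": ..., "status": ...} is taken from the example above.
  import time
  import requests

  filename = "test.pdf"
  url = f"http://localhost:80/files/{filename}/status"

  while True:
      status = requests.get(url).json().get("status")
      print("status:", status)
      if status == "done":
          break
      time.sleep(5)  # assumption: poll every few seconds
  ```
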
- Now you can run the following command to use a CLI chat assistant:

  ```sh
  python src/db/query_example.py <filename>.pdf
  ```
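
  The script above is the supported way to chat about an uploaded PDF. Purely for orientation, the sketch below shows how such a retrieval step against Chroma and Ollama could look using their Python clients; it is not the repository's actual code, and the collection naming, ports, and prompt wording are all assumptions.

  ```python
  # chat_sketch.py - NOT the repository's implementation: a rough illustration of
  # a retrieval-augmented query against Chroma + Ollama. Collection name, ports,
  # and prompt wording are assumptions.
  import sys

  import chromadb
  import ollama

  pdf_name = sys.argv[1]

  # Assumption: the compose file exposes Chroma on its default port 8000.
  chroma = chromadb.HttpClient(host="localhost", port=8000)
  collection = chroma.get_or_create_collection(pdf_name)  # assumed collection naming

  question = input("Ask a question about the PDF: ")

  # Retrieve the chunks most relevant to the question.
  results = collection.query(query_texts=[question], n_results=3)
  context = "\n".join(results["documents"][0])

  # Assumption: the compose file exposes Ollama on its default port 11434.
  client = ollama.Client(host="http://localhost:11434")
  answer = client.chat(
      model="phi3",
      messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
  )
  print(answer["message"]["content"])
  ```
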

