01-model-setup.md

TODO

  • The intent of this document is to outline how to run llama.cpp on your local machine; it could also include AWS / EC2 build recommendations for compute providers.
  • The end state should be a private endpoint, accessible to the proxy-router, that the proxy-router can query to serve its models.
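As a starting point, the setup above might look like the following sketch: building llama.cpp from source and binding its bundled HTTP server to localhost so only a local proxy-router can reach it. The model path and port here are assumptions, not part of this document.

```shell
# Sketch only: build llama.cpp and expose a localhost-only endpoint.
# The model file and port below are placeholders; adjust for your hardware.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Start the bundled server bound to loopback so the endpoint stays private;
# the proxy-router on the same host can then talk to it.
./build/bin/llama-server -m ./models/your-model.gguf \
  --host 127.0.0.1 --port 8080
```

Once running, the server exposes an OpenAI-compatible API (e.g. `POST /v1/chat/completions` on `http://127.0.0.1:8080`), which is the kind of endpoint a proxy-router could be pointed at.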