Port details on branch 2024Q4
- llama-cpp: Facebook's LLaMA model in C/C++
- Version: 5054
- Category: misc
- Version of this port present on the latest quarterly branch: 4967
- Maintainer: yuri@FreeBSD.org
- Port Added: 2024-02-15 11:27:23
- Last Update: 2025-04-05 13:54:29
- Commit Hash: 8fecec4
- People watching this port also watch: autoconf, ta-lib, weberp, prestashop, irrlicht
- License: MIT
- WWW:
- https://github.com/ggerganov/llama.cpp
- Description:
- The main goal of llama.cpp is to enable LLM inference with minimal setup and
state-of-the-art performance on a wide variety of hardware - locally and in
the cloud.
- Manual pages:
- FreshPorts has no man page information for this port.
- pkg-plist: as obtained via: make generate-plist
- Dependency lines:
- llama-cpp>0:misc/llama-cpp
- To install the port:
- cd /usr/ports/misc/llama-cpp/ && make install clean
- To add the package, run one of these commands:
- pkg install misc/llama-cpp
- pkg install llama-cpp
- NOTE: If this package has multiple flavors (see below), then use one of them instead of the name specified above.
- PKGNAME: llama-cpp
- Flavors: there is no flavor information for this port.
- distinfo:
TIMESTAMP = 1743841439
SHA256 (ggerganov-llama.cpp-b5054_GH0.tar.gz) = db6bd11caa6b6f9739bcbc7477480a6cc3efa5b9dc2b93b524479835696e2e64
SIZE (ggerganov-llama.cpp-b5054_GH0.tar.gz) = 20875423
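To confirm a downloaded distfile against the checksum and size recorded in distinfo, the standard ports targets can be used; a minimal sketch, assuming the default ports tree under /usr/ports and the default DISTDIR:
# Fetch the distfile and verify it against distinfo:
cd /usr/ports/misc/llama-cpp && make checksum
# Manual spot check of the tarball (file name taken from distinfo above):
sha256 /usr/ports/distfiles/ggerganov-llama.cpp-b5054_GH0.tar.gz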
- Dependencies
- NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
- Build dependencies:
- glslc : graphics/shaderc
- vulkan-headers>0 : graphics/vulkan-headers
- cmake : devel/cmake-core
- ninja : devel/ninja
- Runtime dependencies:
- python3.11 : lang/python311
- Library dependencies:
- libcurl.so : ftp/curl
- libvulkan.so : graphics/vulkan-loader
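To see how these dependencies resolve against your own ports tree, the ports framework can list them directly; a small sketch, assuming the port skeleton lives under /usr/ports:
cd /usr/ports/misc/llama-cpp
# Origins needed at build time (cmake, ninja, shaderc, vulkan-headers, ...):
make build-depends-list
# Origins needed at run time:
make run-depends-list
# Full recursive dependency list:
make all-depends-list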
- This port is required by:
- for Libraries:
- devel/tabby
- Configuration Options:
===> The following configuration options are available for llama-cpp-5054:
CURL=on: Data transfer support via cURL
EXAMPLES=on: Build and/or install examples
VULKAN=on: Vulkan GPU offload support
===> Use 'make config' to modify these settings
- Options name:
- misc_llama-cpp
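The options name is what you would use to preset these options non-interactively in /etc/make.conf instead of answering the make config dialog; a minimal sketch using the standard ${OPTIONS_NAME}_SET/_UNSET convention (the particular choices below are only an illustration):
# Append port-specific option settings for misc/llama-cpp to make.conf:
cat >> /etc/make.conf <<'EOF'
misc_llama-cpp_SET=    VULKAN CURL
misc_llama-cpp_UNSET=  EXAMPLES
EOF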
- USES:
- cmake:testing compiler:c++11-lang python:run shebangfix localbase
- pkg-message:
- For install:
- You installed LLaMA-cpp: Facebook's LLaMA model runner.
To try LLaMA-cpp, download an AI model in the GGUF format,
for example from huggingface.com, run the command below, and
open localhost:9011 in your browser to chat with the model.
$ llama-server -m $MODEL \
--host 0.0.0.0 \
--port 9011
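A runnable version of the pkg-message suggestion might look like the sketch below; the model path is a placeholder for whatever GGUF file you downloaded, and the /health endpoint assumes the HTTP API of the upstream llama.cpp server:
# Placeholder: point MODEL at any GGUF model file you have downloaded.
MODEL=$HOME/models/model.gguf
# Start the server on port 9011, as suggested above:
llama-server -m "$MODEL" --host 0.0.0.0 --port 9011 &
# Check that it answers before opening localhost:9011 in a browser:
curl http://localhost:9011/health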
- Master Sites: