Port details |
- llama-cpp: Facebook's LLaMA model in C/C++
- Version: 4837 (misc)
- Version of this port present on the latest quarterly branch: 4381
- Maintainer: yuri@FreeBSD.org
 - Port Added: 2024-02-15 11:27:23
- Last Update: 2025-03-07 15:57:21
- Commit Hash: 9d86f7e
- People watching this port also watch: autoconf, ta-lib, weberp, prestashop, irrlicht
- License: MIT
- WWW:
- https://github.com/ggerganov/llama.cpp
- Description:
- The main goal of llama.cpp is to enable LLM inference with minimal setup and
state-of-the-art performance on a wide variety of hardware - locally and in
the cloud.
- Manual pages:
- FreshPorts has no man page information for this port.
- pkg-plist: as obtained via: make generate-plist
- Dependency lines:
-
- llama-cpp>0:misc/llama-cpp
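A downstream port that depends on this one would use the dependency line above in its Makefile. A minimal sketch (the consuming port and the choice of BUILD_DEPENDS vs. RUN_DEPENDS are illustrative assumptions, not taken from this page):

```make
# Hypothetical consumer port Makefile fragment.
# The dependency line means: any version of llama-cpp (>0),
# satisfied by building/installing misc/llama-cpp.
BUILD_DEPENDS=	llama-cpp>0:misc/llama-cpp
RUN_DEPENDS=	llama-cpp>0:misc/llama-cpp
```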
- To install the port:
- cd /usr/ports/misc/llama-cpp/ && make install clean
- To add the package, run one of these commands:
- pkg install misc/llama-cpp
- pkg install llama-cpp
- NOTE: If this package has multiple flavors (see below), then use one of them instead of the name specified above.
- PKGNAME: llama-cpp
- Flavors: there is no flavor information for this port.
- distinfo:
- TIMESTAMP = 1741327314
- SHA256 (ggerganov-llama.cpp-b4837_GH0.tar.gz) = 60587fd5b417ac35d691284e1b117a8c114f10c8d3960494551a4e49338b5e0f
- SIZE (ggerganov-llama.cpp-b4837_GH0.tar.gz) = 20796825
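The ports framework checks a downloaded distfile against the SHA256 value in distinfo (this is what `make checksum` does). A minimal Python sketch of the same verification; the distfile path in the comment is the conventional location and is an assumption:

```python
import hashlib

# Expected digest, copied from the distinfo block above.
EXPECTED_SHA256 = "60587fd5b417ac35d691284e1b117a8c114f10c8d3960494551a4e49338b5e0f"

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in chunks, so large tarballs
    are hashed without loading them fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (path is the conventional distfiles location, an assumption):
# assert sha256_of_file(
#     "/usr/ports/distfiles/ggerganov-llama.cpp-b4837_GH0.tar.gz"
# ) == EXPECTED_SHA256
```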
- Dependencies
- NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
- Build dependencies:
-
- glslc : graphics/shaderc
- vulkan-headers>0 : graphics/vulkan-headers
- cmake : devel/cmake-core
- ninja : devel/ninja
- Runtime dependencies:
-
- python3.11 : lang/python311
- Library dependencies:
-
- libcurl.so : ftp/curl
- libvulkan.so : graphics/vulkan-loader
- This port is required by (for Libraries):
- devel/tabby
Configuration Options:
- ===> The following configuration options are available for llama-cpp-4837:
CURL=on: Data transfer support via cURL
EXAMPLES=on: Build and/or install examples
VULKAN=on: Vulkan GPU offload support
===> Use 'make config' to modify these settings
- Options name:
- misc_llama-cpp
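The options listed above can also be set non-interactively using the options name shown, via the standard ports OPTIONS framework. A hedged sketch of make.conf lines (the chosen values are illustrative, not a recommendation):

```make
# /etc/make.conf -- illustrative values only
misc_llama-cpp_SET=	VULKAN CURL
misc_llama-cpp_UNSET=	EXAMPLES
```

With these lines in place, `make install clean` in misc/llama-cpp builds with the selected options without prompting via `make config`.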
- USES:
- cmake:testing compiler:c++11-lang python:run shebangfix localbase
- pkg-message:
- For install:
- You installed LLaMA-cpp: Facebook's LLaMA model runner.
To try LLaMA-cpp, download an AI model in the GGUF format
(for example from huggingface.co), run the command below,
and open localhost:9011 in your browser to communicate
with the model.
$ llama-server -m $MODEL \
--host 0.0.0.0 \
--port 9011
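Once llama-server is running, it exposes an OpenAI-compatible HTTP API, so the model can also be queried programmatically rather than through the browser. A minimal sketch using only the Python standard library, assuming the server is listening on localhost:9011 as in the pkg-message above:

```python
import json
import urllib.request

def build_chat_request(prompt, host="localhost", port=9011):
    """Build an OpenAI-compatible chat-completion request
    for a running llama-server instance."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"http://{host}:{port}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending the request (requires a running llama-server):
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```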
- Master Sites:
Commit History (may be incomplete; for full details, see links to the repositories near the top of the page):

| Commit | Date | Credits | Log message |
|---|---|---|---|
| 2517 | 25 Mar 2024 05:00:39 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2509 → 2517 (Reported by: portscout) |
| 2509 | 24 Mar 2024 09:59:11 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2487 → 2509 (Reported by: portscout) |
| 2487 | 22 Mar 2024 12:22:51 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2479 → 2487 (Reported by: portscout) |
| 2479 | 21 Mar 2024 10:10:56 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2465 → 2479 (Reported by: portscout) |
| 2465 | 20 Mar 2024 08:59:51 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2450 → 2465 (Reported by: portscout) |
| 2450 | 18 Mar 2024 16:24:19 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2440 → 2450 (Reported by: portscout) |
| 2440 | 17 Mar 2024 05:40:50 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2430 → 2440 (Reported by: portscout) |
| 2430 | 15 Mar 2024 15:46:26 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2409 → 2430 (Reported by: portscout) |
| 2409 | 13 Mar 2024 06:17:05 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2405 → 2409 (Reported by: portscout) |
| 2405 | 12 Mar 2024 19:42:11 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2393 → 2405 (Reported by: portscout) |
| 2393 | 11 Mar 2024 17:53:46 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2376 → 2393 (Reported by: portscout) |
| 2376 | 10 Mar 2024 07:35:57 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2366 → 2376 (Reported by: portscout) |
| 2366 | 09 Mar 2024 07:34:18 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2360 → 2366 (Reported by: portscout) |
| 2360 | 08 Mar 2024 10:25:53 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2355 → 2360 (Reported by: portscout) |
| 2355 | 07 Mar 2024 09:48:07 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2350 → 2355 (Reported by: portscout) |
| 2350 | 06 Mar 2024 11:52:34 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2329 → 2350 (Reported by: portscout) |
| 2329 | 04 Mar 2024 16:09:17 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2294 → 2329 (Reported by: portscout) |
| 2294 | 27 Feb 2024 00:31:15 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2266 → 2294 (Reported by: portscout) |
| 2266 | 26 Feb 2024 05:55:19 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2251 → 2266 (Reported by: portscout) |
| 2251 | 25 Feb 2024 00:18:07 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2241 → 2251 (Reported by: portscout) |
| 2241 | 23 Feb 2024 10:25:42 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2234 → 2241 (Reported by: portscout) |
| 2234 | 22 Feb 2024 09:38:40 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2212 → 2234 (Reported by: portscout) |
| 2212 | 20 Feb 2024 07:09:21 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2185 → 2212 (Reported by: portscout) |
| 2185 | 19 Feb 2024 05:01:44 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2167 → 2185 (Reported by: portscout) |
| 2167 | 17 Feb 2024 08:45:30 | Yuri Victorovich (yuri) | misc/llama-cpp: update 2144 → 2167 (Reported by: portscout) |
| 2144 | 15 Feb 2024 11:25:01 | Yuri Victorovich (yuri) | misc/llama-cpp: New port: Facebook's LLaMA model in C/C++ |