Newsfeed changes

The news feed pages were not laid out well: two pages, disjointed information, and it was hard to figure out how to use the optional parameters.

Thankfully, someone told me.

The new page is ready for your review. Please compare these two:

You may also be interested in the GitHub issue.
Port details on branch 2024Q4
llama-cpp 5054 — Facebook's LLaMA model in C/C++
Category: misc
On watch lists: 3
Version of this port on the latest quarterly branch: 4967
Maintainer: yuri@FreeBSD.org
Port Added: 2024-02-15 11:27:23
Last Update: 2025-04-05 13:54:29
Commit Hash: 8fecec4
People watching this port also watch: autoconf, ta-lib, weberp, prestashop, irrlicht
License: MIT
WWW:
https://github.com/ggerganov/llama.cpp
Description:
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

Manual pages:
FreshPorts has no man page information for this port.
pkg-plist: as obtained via make generate-plist (79 items)
  1. @ldconfig
  2. /usr/local/share/licenses/llama-cpp-5054/catalog.mk
  3. /usr/local/share/licenses/llama-cpp-5054/LICENSE
  4. /usr/local/share/licenses/llama-cpp-5054/MIT
  5. bin/convert_hf_to_gguf.py
  6. bin/llama-batched
  7. bin/llama-batched-bench
  8. bin/llama-bench
  9. bin/llama-cli
  10. bin/llama-convert-llama2c-to-ggml
  11. bin/llama-cvector-generator
  12. bin/llama-embedding
  13. bin/llama-eval-callback
  14. bin/llama-export-lora
  15. bin/llama-gbnf-validator
  16. bin/llama-gemma3-cli
  17. bin/llama-gen-docs
  18. bin/llama-gguf
  19. bin/llama-gguf-hash
  20. bin/llama-gguf-split
  21. bin/llama-gritlm
  22. bin/llama-imatrix
  23. bin/llama-infill
  24. bin/llama-llava-cli
  25. bin/llama-llava-clip-quantize-cli
  26. bin/llama-lookahead
  27. bin/llama-lookup
  28. bin/llama-lookup-create
  29. bin/llama-lookup-merge
  30. bin/llama-lookup-stats
  31. bin/llama-minicpmv-cli
  32. bin/llama-parallel
  33. bin/llama-passkey
  34. bin/llama-perplexity
  35. bin/llama-quantize
  36. bin/llama-quantize-stats
  37. bin/llama-qwen2vl-cli
  38. bin/llama-retrieval
  39. bin/llama-run
  40. bin/llama-save-load-state
  41. bin/llama-server
  42. bin/llama-simple
  43. bin/llama-simple-chat
  44. bin/llama-speculative
  45. bin/llama-speculative-simple
  46. bin/llama-tokenize
  47. bin/llama-tts
  48. bin/vulkan-shaders-gen
  49. include/ggml-alloc.h
  50. include/ggml-backend.h
  51. include/ggml-blas.h
  52. include/ggml-cann.h
  53. include/ggml-cpp.h
  54. include/ggml-cpu.h
  55. include/ggml-cuda.h
  56. include/ggml-kompute.h
  57. include/ggml-metal.h
  58. include/ggml-opt.h
  59. include/ggml-rpc.h
  60. include/ggml-sycl.h
  61. include/ggml-vulkan.h
  62. include/ggml.h
  63. include/gguf.h
  64. include/llama-cpp.h
  65. include/llama.h
  66. lib/cmake/ggml/ggml-config.cmake
  67. lib/cmake/ggml/ggml-version.cmake
  68. lib/cmake/llama/llama-config.cmake
  69. lib/cmake/llama/llama-version.cmake
  70. lib/libggml-base.so
  71. lib/libggml-cpu.so
  72. lib/libggml-vulkan.so
  73. lib/libggml.so
  74. lib/libllama.so
  75. lib/libllava_shared.so
  76. libdata/pkgconfig/llama.pc
  77. @owner
  78. @group
  79. @mode
Dependency lines:
  • llama-cpp>0:misc/llama-cpp
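For context, this is the syntax a dependent port's Makefile would use to pull this port in. A minimal sketch, assuming a runtime dependency (a build dependency uses BUILD_DEPENDS in the same form):

# in a dependent port's Makefile (sketch)
RUN_DEPENDS=	llama-cpp>0:misc/llama-cpp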
To install the port:
cd /usr/ports/misc/llama-cpp/ && make install clean
To add the package, run one of these commands:
  • pkg install misc/llama-cpp
  • pkg install llama-cpp
NOTE: If this package has multiple flavors (see below), then use one of them instead of the name specified above.
PKGNAME: llama-cpp
Flavors: there is no flavor information for this port.
distinfo:
TIMESTAMP = 1743841439
SHA256 (ggerganov-llama.cpp-b5054_GH0.tar.gz) = db6bd11caa6b6f9739bcbc7477480a6cc3efa5b9dc2b93b524479835696e2e64
SIZE (ggerganov-llama.cpp-b5054_GH0.tar.gz) = 20875423

SHA256 (nomic-ai-kompute-4565194_GH0.tar.gz) = 95b52d2f0514c5201c7838348a9c3c9e60902ea3c6c9aa862193a212150b2bfc
SIZE (nomic-ai-kompute-4565194_GH0.tar.gz) = 13540496

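To verify a fetched distfile against these checksums by hand, something like the following works (a sketch: make checksum does this automatically, and the path assumes the default DISTDIR):

$ cd /usr/ports/misc/llama-cpp && make fetch
$ sha256 -c db6bd11caa6b6f9739bcbc7477480a6cc3efa5b9dc2b93b524479835696e2e64 \
      /usr/ports/distfiles/ggerganov-llama.cpp-b5054_GH0.tar.gz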


Packages (all timestamps are UTC):
llama-cpp
ABI                   aarch64  amd64  armv6  armv7  i386  powerpc  powerpc64  powerpc64le
FreeBSD:13:latest        4837   4967      -      -     -        -          -            -
FreeBSD:13:quarterly     4409   4409      -      -     -        -          -            -
FreeBSD:14:latest        4837   5022      -      -     -        -          -            -
FreeBSD:14:quarterly     4409   5002      -      -     -        -          -            -
FreeBSD:15:latest        4932   4942    n/a      -   n/a        -          -         2241
Dependencies
NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
Build dependencies:
  1. glslc : graphics/shaderc
  2. vulkan-headers>0 : graphics/vulkan-headers
  3. cmake : devel/cmake-core
  4. ninja : devel/ninja
Runtime dependencies:
  1. python3.11 : lang/python311
Library dependencies:
  1. libcurl.so : ftp/curl
  2. libvulkan.so : graphics/vulkan-loader
This port is required by:
for Libraries:
  1. devel/tabby

Configuration Options:
===> The following configuration options are available for llama-cpp-5054:
     CURL=on: Data transfer support via cURL
     EXAMPLES=on: Build and/or install examples
     VULKAN=on: Vulkan GPU offload support
===> Use 'make config' to modify these settings
Options name:
misc_llama-cpp
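This options name is what the ports OPTIONS framework looks for in /etc/make.conf when options are pre-set non-interactively. A minimal sketch (the particular selections below are only an illustration, not recommendations):

# /etc/make.conf (sketch)
misc_llama-cpp_SET=    CURL EXAMPLES
misc_llama-cpp_UNSET=  VULKAN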
USES:
cmake:testing compiler:c++11-lang python:run shebangfix localbase
pkg-message:
For install:
You installed LLaMA-cpp: Facebook's LLaMA model runner. In order to experience LLaMA-cpp, please download an AI model in the GGUF format, for example from huggingface.com, run the script below, and open localhost:9011 in your browser to communicate with this AI model.

$ llama-server -m $MODEL \
      --host 0.0.0.0 \
      --port 9011
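To make that concrete, here is a short example session; the model file name and download URL are hypothetical placeholders, not anything the port ships:

$ fetch -o /tmp/model.gguf https://example.org/some-model.gguf   # hypothetical URL
$ llama-server -m /tmp/model.gguf --host 0.0.0.0 --port 9011
(then open http://localhost:9011 in a browser)

For a quick one-shot test without the server, llama-cli accepts the same model file:

$ llama-cli -m /tmp/model.gguf -p "Hello"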
Master Sites:
  1. https://codeload.github.com/ggerganov/llama.cpp/tar.gz/b5054?dummy=/

There are no commits on branch 2024Q4 for this port