Port details on branch 2024Q4
llama-cpp Facebook's LLaMA model in C/C++
Version: 4120 (latest). Version on the latest quarterly branch: 3837. Category: misc. On 3 watch lists.
Maintainer: yuri@FreeBSD.org search for ports maintained by this maintainer
Port Added: 2024-02-15 11:27:23
Last Update: 2024-11-18 22:26:47
Commit Hash: d45cc30
People watching this port, also watch: autoconf, ta-lib, weberp, prestashop, irrlicht
License: MIT
WWW: https://github.com/ggerganov/llama.cpp
Description:
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

Manual pages:
FreshPorts has no man page information for this port.
pkg-plist (as obtained via make generate-plist):
  1. @ldconfig
  2. /usr/local/share/licenses/llama-cpp-4120/catalog.mk
  3. /usr/local/share/licenses/llama-cpp-4120/LICENSE
  4. /usr/local/share/licenses/llama-cpp-4120/MIT
  5. bin/convert_hf_to_gguf.py
  6. bin/llama-batched
  7. bin/llama-batched-bench
  8. bin/llama-bench
  9. bin/llama-cli
  10. bin/llama-convert-llama2c-to-ggml
  11. bin/llama-cvector-generator
  12. bin/llama-embedding
  13. bin/llama-simple-chat
  14. bin/llama-eval-callback
  15. bin/llama-export-lora
  16. bin/llama-gbnf-validator
  17. bin/llama-gguf
  18. bin/llama-gguf-hash
  19. bin/llama-gguf-split
  20. bin/llama-gritlm
  21. bin/llama-imatrix
  22. bin/llama-infill
  23. bin/llama-llava-cli
  24. bin/llama-lookahead
  25. bin/llama-lookup
  26. bin/llama-lookup-create
  27. bin/llama-lookup-merge
  28. bin/llama-lookup-stats
  29. bin/llama-minicpmv-cli
  30. bin/llama-parallel
  31. bin/llama-passkey
  32. bin/llama-perplexity
  33. bin/llama-quantize
  34. bin/llama-quantize-stats
  35. bin/llama-retrieval
  36. bin/llama-save-load-state
  37. bin/llama-server
  38. bin/llama-simple
  39. bin/llama-speculative
  40. bin/llama-tokenize
  41. bin/vulkan-shaders-gen
  42. include/ggml-alloc.h
  43. include/ggml-backend.h
  44. include/ggml-blas.h
  45. include/ggml-cann.h
  46. include/ggml-cpu.h
  47. include/ggml-cuda.h
  48. include/ggml-kompute.h
  49. include/ggml-metal.h
  50. include/ggml-opt.h
  51. include/ggml-rpc.h
  52. include/ggml-sycl.h
  53. include/ggml-vulkan.h
  54. include/ggml.h
  55. include/llama.h
  56. lib/cmake/llama/llama-config.cmake
  57. lib/cmake/llama/llama-version.cmake
  58. lib/libggml.so
  59. lib/libggml-base.so
  60. lib/libggml-cpu.so
  61. lib/libggml-vulkan.so
  62. lib/libllama.so
  63. lib/libllava_shared.so
  64. libdata/pkgconfig/llama.pc
  65. @owner
  66. @group
  67. @mode
Dependency lines:
  • llama-cpp>0:misc/llama-cpp
To install the port:
cd /usr/ports/misc/llama-cpp/ && make install clean
To add the package, run one of these commands:
  • pkg install misc/llama-cpp
  • pkg install llama-cpp
NOTE: If this package has multiple flavors (see below), then use one of them instead of the name specified above.
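As a quick smoke test after installation, a sketch along the following lines installs the package and prints the CLI version. It assumes a FreeBSD host with pkg(8) and root privileges; the guard makes it a harmless no-op elsewhere.

```shell
#!/bin/sh
# Sketch: install the prebuilt llama-cpp package and smoke-test the CLI.
# Assumes a FreeBSD host with pkg(8); guarded so it is a no-op elsewhere.
if command -v pkg >/dev/null 2>&1; then
    pkg install -y misc/llama-cpp   # origin form; "pkg install -y llama-cpp" also works
    llama-cli --version             # binary lands in /usr/local/bin per the pkg-plist
else
    echo "pkg(8) not found; skipping (not a FreeBSD host?)"
fi
```

The origin form (misc/llama-cpp) is unambiguous when several packages share a short name.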
PKGNAME: llama-cpp
Flavors: there is no flavor information for this port.
distinfo:
TIMESTAMP = 1731907679
SHA256 (ggerganov-llama.cpp-b4120_GH0.tar.gz) = ff1e6cde07e3f2a587978ea58d54bece296b61055b500898f702d8fbeff52e73
SIZE (ggerganov-llama.cpp-b4120_GH0.tar.gz) = 19557501

SHA256 (nomic-ai-kompute-4565194_GH0.tar.gz) = 95b52d2f0514c5201c7838348a9c3c9e60902ea3c6c9aa862193a212150b2bfc
SIZE (nomic-ai-kompute-4565194_GH0.tar.gz) = 13540496
SHA256 (121f915a09c1117d34aff6e8faf6d252aaf11027.patch) = 9a0c47ae3cb7dd51b6ce19187dafd48578210f69558f7c8044ee480471f1fd33
SIZE (121f915a09c1117d34aff6e8faf6d252aaf11027.patch) = 591

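The distinfo checksums can be verified by hand against a downloaded distfile. A minimal sketch of the mechanism, using a throwaway file rather than the real distfile (FreeBSD's base system provides sha256(1); sha256sum is its common equivalent elsewhere):

```shell
#!/bin/sh
# Sketch: verify a distfile checksum the way the ports framework does.
# The sample file and hash are illustrative; substitute the real distfile
# name and the SHA256 value from distinfo.
printf 'hello\n' > sample.distfile
expected="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"
actual=$(sha256sum sample.distfile | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
    exit 1
fi
rm -f sample.distfile
```

In a ports build, `make checksum` performs this comparison automatically for every file listed in distinfo.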


Packages (timestamps in pop-ups are UTC):
llama-cpp
ABI                   aarch64  amd64  armv6  armv7  i386  powerpc  powerpc64  powerpc64le
FreeBSD:13:latest     4120     4120   -      -      -     -        -          -
FreeBSD:13:quarterly  3889     3889   -      -      -     -        -          -
FreeBSD:14:latest     3916     4120   -      -      -     -        -          -
FreeBSD:14:quarterly  3889     3889   -      -      -     -        -          -
FreeBSD:15:latest     4120     4120   n/a    -      n/a   -        -          2241
Dependencies
NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
Build dependencies:
  1. glslc : graphics/shaderc
  2. vulkan-headers>0 : graphics/vulkan-headers
  3. cmake : devel/cmake-core
  4. ninja : devel/ninja
Runtime dependencies:
  1. python3.11 : lang/python311
Library dependencies:
  1. libvulkan.so : graphics/vulkan-loader
This port is required by:
for Libraries
  1. devel/tabby

Configuration Options:
===> The following configuration options are available for llama-cpp-4120:
     EXAMPLES=on: Build and/or install examples
     VULKAN=on: Vulkan GPU offload support
===> Use 'make config' to modify these settings
Options name:
misc_llama-cpp
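The options name above is what the ports OPTIONS framework uses to preset options non-interactively. A sketch of a /etc/make.conf fragment (variable names follow the standard `<options name>_SET` / `<options name>_UNSET` convention; shown only as an illustration):

```make
# /etc/make.conf fragment: preset options for misc/llama-cpp so that
# "make config" never prompts. Variable names follow the ports OPTIONS
# framework: <options name>_SET / <options name>_UNSET.
misc_llama-cpp_SET+=    VULKAN
misc_llama-cpp_UNSET+=  EXAMPLES
```

With this in place, `make -C /usr/ports/misc/llama-cpp install clean` builds with Vulkan support and without the example binaries.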
USES:
cmake:testing compiler:c++11-lang python:run shebangfix
FreshPorts was unable to extract/find any pkg message
Master Sites:
  1. https://codeload.github.com/ggerganov/llama.cpp/tar.gz/b4120?dummy=/

There are no commits on branch 2024Q4 for this port