Port details
- ollama: Run Llama 2, Mistral, and other large language models
- Version: 0.13.5_1
- Category: misc
- Version on the latest quarterly branch: 0.13.1.r0_2
- Maintainer: yuri@FreeBSD.org
 - Port Added: 2024-08-06 10:06:06
- Last Update: 2026-01-16 17:49:03
- Commit Hash: 013f2d3
- People watching this port also watch: drm-61-kmod, firefox, ffmpeg, pipewire, lapce
- License: MIT
- WWW:
- https://ollama.com
- https://github.com/ollama/ollama
- Description:
- Ollama is a tool that allows you to get up and running with large language
models locally. It provides a simple command-line interface to run and
manage models, as well as a REST API for programmatic access.
Ollama supports a wide range of models available on ollama.com/library,
including popular models like Llama 3, Gemma, and Mistral. It also
allows you to customize models and create your own.
With Ollama, you can:
- Run large language models on your own machine
- Chat with models in the terminal
- Generate text and embeddings
- Customize models with your own prompts and data
- Expose models through a REST API for use in your applications
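As a brief illustration of the REST API mentioned above, the sketch below queries a locally running ollama server. It assumes the server is listening on its default address (127.0.0.1, port 11434) and that the gemma3 model has already been downloaded:
$ curl http://127.0.0.1:11434/api/generate -d \
    '{"model": "gemma3", "prompt": "Why is the sky blue?", "stream": false}'
The endpoint streams JSON fragments by default; "stream": false requests a single JSON response instead.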
- Manual pages:
- FreshPorts has no man page information for this port.
- pkg-plist: as obtained via: make generate-plist
- USE_RC_SUBR (Service Scripts): no SUBR information found for this port
- Dependency lines:
- ollama>0:misc/ollama
- To install the port:
- cd /usr/ports/misc/ollama/ && make install clean
- To add the package, run one of these commands:
- pkg install misc/ollama
- pkg install ollama
NOTE: If this package has multiple flavors (see below), then use one of them instead of the name specified above.
- PKGNAME: ollama
- Flavors: there is no flavor information for this port.
- distinfo:
- TIMESTAMP = 1767567167
SHA256 (go/misc_ollama/ollama-v0.13.5-x1/v0.13.5-x1.mod) = 24e9aaaef0e2169fef54d14b95b528fce46e0f6788ffb71a93bcd3b035f99654
SIZE (go/misc_ollama/ollama-v0.13.5-x1/v0.13.5-x1.mod) = 3454
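To verify a fetched distfile against the checksums above, the standard ports-framework target can be used; this is a generic sketch run from the port directory:
$ cd /usr/ports/misc/ollama && make checksum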
- Dependencies
- NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
- Build dependencies:
- bash : shells/bash
- miniaudio.h : audio/miniaudio
- json_fwd.hpp : devel/nlohmann-json
- stb_image.h : devel/stb
- glslc : graphics/shaderc
- vulkan.h : graphics/vulkan-headers
- cmake : devel/cmake-core
- go124 : lang/go124
- pkgconf>=1.3.0_1 : devel/pkgconf
- Library dependencies:
- libvulkan.so : graphics/vulkan-loader
- Fetch dependencies:
- go124 : lang/go124
- This port is required by:
- for Run: misc/alpaca
Configuration Options:
- ===> The following configuration options are available for ollama-0.13.5_1:
====> Options available for the group BACKENDS
CPU=on: Build CPU backend shared libraries
VULKAN=on: Build Vulkan GPU backend shared library
===> Use 'make config' to modify these settings
- Options name:
- misc_ollama
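To preset these options non-interactively, the usual OPTIONS framework variables in /etc/make.conf are keyed on the options name above. A sketch (disabling the Vulkan backend is only an example):
# /etc/make.conf
misc_ollama_UNSET= VULKAN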
- USES:
- cmake:indirect go:1.24+,modules localbase pkgconfig zip
- pkg-message:
- For install:
- You installed ollama: the AI model runner.
To run ollama, please open two terminals.
1. In the first terminal, please run:
$ OLLAMA_NUM_PARALLEL=1 OLLAMA_DEBUG=1 LLAMA_DEBUG=1 ollama start
2. In the second terminal, please run:
$ ollama run gemma3
or
$ ollama run mistral
This will download and run the specified AI model.
You will be able to interact with it in plain English.
Please see https://ollama.com/library for the list
of all supported models.
The command "ollama list" lists all models downloaded
into your system.
If the model fails to load into your GPU, please use
the provided ollama-limit-gpu-layers script to create
model flavors with different num_gpu parameters.
ollama uses many gigabytes of disk space in your home directory,
because advanced AI models are often very large.
Please symlink ~/.ollama to a large disk if needed.
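Two hedged sketches for the advice in the message above. First, as an alternative to the ollama-limit-gpu-layers script (whose exact invocation is not documented here), a model variant with a smaller num_gpu can be created with Ollama's stock Modelfile mechanism; the model name and layer count are placeholders:
$ printf 'FROM gemma3\nPARAMETER num_gpu 20\n' > Modelfile
$ ollama create gemma3-gpu20 -f Modelfile
$ ollama run gemma3-gpu20
Second, relocating the model store to a larger disk (/big/disk is a placeholder):
$ mv ~/.ollama /big/disk/ollama
$ ln -s /big/disk/ollama ~/.ollama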
- Master Sites:
Commit History (may be incomplete; for full details, see the repository links near the top of the page)
- Commit | Date | Credits | Log message
- 0.13.5_1 | 16 Jan 2026 17:49:03 | Adam Weinberger (adamw) | various: Bump Go ports for 1.24.12
- 0.13.5 | 05 Jan 2026 00:19:57 | Yuri Victorovich (yuri) | misc/ollama: update 0.13.1-rc0 → 0.13.5
- 0.13.1.r0_2 | 15 Dec 2025 23:06:00 | Dag-Erling Smørgrav (des) | many: Unpin Go ports
    * Ports that were pinned to a deprecated version of Go (1.23 or older) have been unpinned.
    * Ports that were pinned to a still-supported version of Go (1.24 or newer) have been converted to requesting that as their minimum Go version.
    * Ports that had been forcibly deprecated for pinning an expired Go version have been undeprecated.
- 0.13.1.r0_2 | 03 Dec 2025 18:24:45 | Adam Weinberger (adamw) | various: Bump Go ports for 1.24.11
- 0.13.1.r0_1 | 29 Nov 2025 20:50:01 | Yuri Victorovich (yuri) | misc/ollama: Add computational backends
    Options CPU and VULKAN enable various CPU backends and the VULKAN backend. CPU backends are for different generations of SIMD instructions. Backends are loaded automatically when they are installed.
- 0.13.1.r0 | 29 Nov 2025 20:50:00 | Yuri Victorovich (yuri) | misc/ollama: Remove architecture restriction
    Ollama should likely work fine on all architectures.
- 0.13.1.r0 | 27 Nov 2025 23:47:44 | Yuri Victorovich (yuri) | misc/ollama: update 0.3.6 → 0.13.1.r0
- 0.3.6_5 | 02 Apr 2025 02:07:58 | Adam Weinberger (adamw) | go: Bump ports for go124 update
- 0.3.6_4 | 05 Mar 2025 16:02:56 | Adam Weinberger (adamw) | Bump all go ports for yesterday's releases
- 0.3.6_3 | 28 Feb 2025 10:09:27 | Yuri Victorovich (yuri) | misc/ollama: Update WWW
- 0.3.6_3 | 28 Feb 2025 09:24:22 | Yuri Victorovich (yuri) (Author: Yusuf Yaman) | misc/ollama: Fix typos in pkg-message
    PR: 285014
- 0.3.6_3 | 21 Jan 2025 22:21:11 | Ashish SHUKLA (ashish) | all: Bump after lang/go122 update
    PR: 284181
    MFH: 2025Q1
- 0.3.6_2 | 08 Nov 2024 20:58:46 | Ashish SHUKLA (ashish) | all: Bump after lang/go122 update
    PR: 281842
- 0.3.6_1 | 27 Aug 2024 19:44:05 | Yuri Victorovich (yuri) | misc/ollama: Remove unnecessary paragraph from pkg-message
- 0.3.6_1 | 27 Aug 2024 17:44:27 | Yuri Victorovich (yuri) | misc/ollama: Add environment variables to 'ollama start' to work around memory allocation issues
- 0.3.6_1 | 19 Aug 2024 01:12:09 | Yuri Victorovich (yuri) | misc/ollama: Improve pkg-message
- 0.3.6 | 18 Aug 2024 20:44:06 | Yuri Victorovich (yuri) | misc/ollama: update 0.3.4 → 0.3.6
- 0.3.4_4 | 10 Aug 2024 07:07:35 | Yuri Victorovich (yuri) | misc/ollama: add CONFLICTS_BUILD
- 0.3.4_4 | 09 Aug 2024 06:24:09 | Ashish SHUKLA (ashish) | all: Bump after lang/go122 update
- 0.3.4_3 | 09 Aug 2024 05:03:35 | Yuri Victorovich (yuri) | misc/ollama: Fix Vulkan compatibility
- 0.3.4_2 | 08 Aug 2024 20:01:10 | Yuri Victorovich (yuri) | misc/ollama: Fix inference; Add ONLY_FOR_ARCHxx lines; Add pkg-message
- 0.3.4_1 | 07 Aug 2024 18:33:34 | Yuri Victorovich (yuri) | misc/ollama: Add llama-cpp as dependency
- 0.3.4 | 06 Aug 2024 22:32:55 | Yuri Victorovich (yuri) | misc/ollama: Remove one unnecessary architecture-specific place in scripts
- 0.3.4 | 06 Aug 2024 10:04:44 | Yuri Victorovich (yuri) | misc/ollama: New port: Run Llama 2, Mistral, and other large language models