Finetunify

A really beautiful TUI app for local synthetic data generation <3 (we love finetunes).

Current version: v0.1.0


Why Finetunify

  • Generate train/eval/test splits fast with local Ollama models or OpenRouter
  • Tool-calling friendly, JSON-first, and retry-safe
  • Clean TUI workflow with previews, logs, and live progress

Requirements

  • Rust + Cargo (required to build from source, and for the build that runs on first npm install)
  • Ollama running locally (ollama serve) for local generation
  • OpenRouter API key (optional) for cloud model distillation
  • At least one local model (ollama pull llama3) or an OpenRouter model

Installation

npm i -g finetunify

Run:

finetunify

Run from source

cargo run
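
To produce an optimized binary instead (standard Cargo workflow, nothing project-specific):

cargo build --release

The resulting binary lands in target/release/.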

Config & storage

  • Config: ~/.finetunify/finetunify.config.json
  • Outputs: ./output/ by default (relative to the directory you launch finetunify from)
  • Checkpoints: ./output/.finetunify_checkpoint.json
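
The config file itself is plain JSON. A minimal sketch is shown below, covering the OpenRouter options described under Providers further down; the exact key names are not documented here, so treat every field name as an illustrative assumption:

{
  "provider": "ollama",
  "openrouter_api_key": "sk-or-...",
  "openrouter_concurrency": 4,
  "openrouter_retries": 3
}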

Controls

  • Tab - switch focus (menu/content)
  • Up/Down - move selection
  • Enter - select page / edit field / run action
  • Space - toggle booleans
  • Right - open selected menu item
  • Left - go back
  • a - add example
  • d - delete example
  • : - open command palette
  • g - start generation
  • s - stop generation
  • r - refresh model list
  • q - quit

Prompt templating

The global prompt template supports:

  • {data_spec}
  • {example}
  • {split}
  • {index}
  • {total}

System prompts are split-specific and editable in the UI.
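
As an illustration (not a shipped default), a global prompt template could combine these placeholders like this:

Generate one {split} record ({index} of {total}) for the following specification:
{data_spec}

Match the style of this example:
{example}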


Tool calling + format

  • Global tools json should be a JSON array of tool definitions (passed to Ollama /api/chat as tools).
  • Global format can be json or a JSON schema object to constrain output.
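
As a sketch, here is what those two settings could contain. The tool definition follows the function-tool shape that Ollama's /api/chat accepts; the weather tool itself is a made-up example:

[
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Look up the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {
          "city": { "type": "string" }
        },
        "required": ["city"]
      }
    }
  }
]

And a format value as a JSON schema constraining each record (field names are illustrative):

{
  "type": "object",
  "properties": {
    "question": { "type": "string" },
    "answer": { "type": "string" }
  },
  "required": ["question", "answer"]
}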

Providers (Local + OpenRouter)

You can switch between local Ollama and OpenRouter in Global settings.

OpenRouter options:

  • OpenRouter API key (stored in config)
  • OpenRouter concurrency (parallel requests)
  • OpenRouter retries (HTTP retry attempts)

Model lists are fetched dynamically based on the selected provider.


Examples + retries

  • Global examples accepts one example per line; each generated record is seeded with one example, rotating through the list.
  • Global json retries controls how many times to retry when the model returns malformed JSON.
    • The parser strips code fences, extracts the first JSON object/array, and removes trailing commas.
  • Live preview on the Generate page updates every 5 generated items.
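
For example, a malformed reply like the following would still be recovered by that parser, which drops the fence and the trailing comma and keeps the first JSON object:

```json
{ "question": "What is 2 + 2?", "answer": "4", }
```

becomes

{ "question": "What is 2 + 2?", "answer": "4" }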

Command palette

Open with : and run quick commands like:

  • generate
  • stop
  • refresh
  • provider ollama|openrouter
  • set openrouter-key <key>
  • page models|splits|global|examples|logs|generate
  • model <name>
  • examples add <text> / examples clear
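
For instance, switching to OpenRouter and starting a run is a short sequence of palette entries (open the palette with : before each one):

provider openrouter
set openrouter-key <key>
model <name>
generate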

Output format

When Wrap output is enabled, each line of the output file is a JSON object (JSONL) that includes:

  • id, split, model, system, user
  • response.content (raw)
  • response.content_json (parsed JSON, if any)
  • response.tool_calls (if any)

If Wrap output is disabled, the raw model response (or tool calls) is written directly.
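
Shown pretty-printed for readability (the actual output is one object per line), a wrapped record might look roughly like this; all values are illustrative, and whether tool_calls is null or simply omitted when the model made no tool calls is an assumption:

{
  "id": 1,
  "split": "train",
  "model": "llama3",
  "system": "You are a synthetic data generator.",
  "user": "Generate one train record (1 of 100) ...",
  "response": {
    "content": "{\"question\": \"What is 2 + 2?\", \"answer\": \"4\"}",
    "content_json": { "question": "What is 2 + 2?", "answer": "4" },
    "tool_calls": null
  }
}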


Troubleshooting

  • No models listed: run ollama list and ensure ollama serve is running.
  • Generation stalls: the app waits for the current Ollama response; stop takes effect once that request completes.
  • JSON invalid: increase Global json retries or tighten Global format schema.

License

MIT