How to build local AI with GPT-OSS, Ollama and n8n

Step 1: Download and install Docker (docker.com). You might need to restart your computer to finish setting up Docker.
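
To confirm that Docker is working, you can check its version from CMD:

docker --version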

Step 2: Set up n8n locally

Run the command "docker volume create n8n_data" in CMD to create a volume named n8n_data.


 

You can run "docker volume ls" to check whether it has been created.
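
You can also inspect the volume to see its details, such as where Docker stores its data on disk:

docker volume inspect n8n_data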

 

After that, run this command:

docker run -it --rm --name n8n -p 5678:5678 -e GENERIC_TIMEZONE="Asia/Ho_Chi_Minh" -e TZ="Asia/Ho_Chi_Minh" -e N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true -e N8N_RUNNERS_ENABLED=true -v n8n_data:/home/node/.n8n n8nio/n8n

Note: You can remove "--rm" if you want to keep the container after it stops. The command then becomes:

docker run -it --name n8n -p 5678:5678 -e GENERIC_TIMEZONE="Asia/Ho_Chi_Minh" -e TZ="Asia/Ho_Chi_Minh" -e N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true -e N8N_RUNNERS_ENABLED=true -v n8n_data:/home/node/.n8n n8nio/n8n

You can look up your time zone in the "List of tz database time zones" article on Wikipedia.

After running that command, n8n prints its startup output and you can open the editor in your browser at http://localhost:5678 (the port mapped with -p 5678:5678).
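
If you dropped "--rm" to keep the container, you can stop it and start it again later with standard Docker commands (the container name n8n comes from the --name option above):

docker stop n8n
docker start -a n8n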

Step 3: Install Ollama

Download Ollama from ollama.com and install it.
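
After the installation completes, you can verify that the Ollama CLI is available from your terminal:

ollama --version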

Step 4: Install the gpt-oss model

You can use the command "ollama run gpt-oss:latest", which downloads the model on first use and opens an interactive chat.
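
If you prefer to download the model first and confirm it is installed before chatting, the standard Ollama commands are:

ollama pull gpt-oss:latest
ollama list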

Note: you might need to set a system environment variable for Ollama and use Windows PowerShell instead of CMD.
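
For example, assuming the variable in question is OLLAMA_HOST (the address Ollama listens on), you can set it from PowerShell so the service is reachable from the n8n container, then restart Ollama and open a new terminal:

setx OLLAMA_HOST "0.0.0.0"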

Step 5: Use Ollama chat model inside n8n

Run script "ollama serve" to turn on ollama service.

You can create your workflow in n8n using the nodes listed below.

Details of each node:

- LLM Chain: the chain node that passes the chat input to the attached chat model.

- Edit Fields: shapes the chain's output into the fields you want the workflow to return.


- Ollama Chat Model:

Set the URL to http://host.docker.internal:11434 and click Save.


After that, choose the model gpt-oss:latest.
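
The URL uses host.docker.internal because n8n runs inside a Docker container while Ollama runs on your host machine. Docker Desktop on Windows and macOS resolves this name automatically; on Linux it may not, and a common workaround is to start the n8n container with an extra --add-host option, for example:

docker run -it --rm --name n8n -p 5678:5678 --add-host=host.docker.internal:host-gateway -e GENERIC_TIMEZONE="Asia/Ho_Chi_Minh" -e TZ="Asia/Ho_Chi_Minh" -e N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true -e N8N_RUNNERS_ENABLED=true -v n8n_data:/home/node/.n8n n8nio/n8n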


 

Finally, you can click the "Execute workflow" button to test your workflow, or use the chat box in the left corner.


Note: Instead of the "Simple Memory" node, you can select a different memory node to store the conversation data in a database, a file, a log, Notion, etc.

 

 

 

 
