DS Log
In this blog, I delve into programming, web technologies, Linux and other Unix-like systems, and graphic design with free tools on Linux.
KINGCODE
KingCode Editor (formerly Texty Editor) is my project, developed with Java Swing. The project is still in development and currently in beta. I plan to add features focused on Python, PHP, Java, C, JavaScript, and Bash.
Read more ↗
VUE on Linux
In this guide, I'll walk you through the step-by-step process of setting up Vue.js on your Linux system, empowering you to create dynamic and interactive web applications. Let's harness the power of Vue.js together on the Linux platform!
Read more ↗
Symfony PHP
Dive into the world of Symfony PHP with this comprehensive introduction. In this guide, you'll learn the essential steps to create and manage posts and users, empowering you to build dynamic web applications with ease.
Read more ↗
Trying Linux from Windows
How to set up a PHP development server on Ubuntu 22.04
Text editors
List of text editors for developers.
Read more ↗
Fonts
Important fonts everyone needs to know.
Read more ↗
Try Linux from Windows
Here are some quick videos I made showing how to try out Linux Mint on Windows.
Read more ↗
Tuesday, October 7, 2025
How to install AI and use it locally (Ubuntu)
Ollama is a desktop/server application that lets you run large language models locally and exposes them through a simple CLI and HTTP API. It runs models on your own machine or server, so no cloud is required by default.

- It provides a command-line interface with commands like pull, list, run, and serve.
- When you run ollama serve, it hosts an HTTP API at http://127.0.0.1:11434.
- It supports multiple models (community and commercially licensed); you pull a model with ollama pull model.
- Models run with local CPU/GPU acceleration when available, so performance depends on your hardware.
- Configuration and models are stored under ~/.ollama by default.
- Ollama can integrate with Docker/OCI-style model packages in some cases.

Common uses include local development and testing with LLMs, private or offline model inference, and serving a local API for apps or bots.
Go to the Ollama website:
https://ollama.com/download/linux
Run in terminal:
curl -fsSL https://ollama.com/install.sh | sh
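Once the script finishes, it's worth a quick sanity check before pulling any models. A minimal sketch: it only confirms the binary landed on PATH and reports the version.

```shell
# Quick sanity check after the install script finishes: is the
# ollama binary on PATH, and which version did we get?
if command -v ollama >/dev/null 2>&1; then
  msg="ollama installed: $(ollama --version 2>/dev/null)"
else
  msg="ollama not found in PATH (install may have failed)"
fi
echo "$msg"
```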
CodeLlama is specifically fine-tuned for coding tasks. It's based on Llama 2 and excels at generating, completing, and explaining code in languages like Python, Bash, C++, JavaScript, and more. It's great for writing scripts, debugging errors, or even converting pseudocode to real code.
You will need to download the model with the command:
ollama pull codellama
And to run it, use:
ollama run codellama
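Running the command without arguments opens an interactive chat, but ollama run also accepts the prompt as an argument for one-shot use. A sketch with a hypothetical ask helper, guarded so it degrades gracefully on a machine where ollama isn't installed:

```shell
# ask is a hypothetical helper: send a one-shot prompt to the local
# codellama model; print a notice instead if ollama is not on PATH.
ask() {
  if command -v ollama >/dev/null 2>&1; then
    ollama run codellama "$1" 2>&1
  else
    echo "ollama not installed; would run: ollama run codellama \"$1\""
  fi
}
ask "Write a Bash one-liner that prints current disk usage"
```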
deepseek-coder is one of the top models for code generation and understanding, often outperforming CodeLlama in benchmarks for multi-language support and complex logic (e.g., algorithms, APIs). It's trained on vast code datasets, so it's excellent for real-world dev tasks like integrating libraries or refactoring.
You will need to download the model with the command:
ollama pull deepseek-coder
And to run it, use:
ollama run deepseek-coder
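After pulling one or both models, you can confirm what is installed locally. ollama list prints each model's name, size, and modification time; the snippet below falls back to a placeholder if ollama is not on PATH.

```shell
# Show the models downloaded so far (name, ID, size, modified).
# If ollama is unavailable, print a placeholder instead of failing.
models=$(ollama list 2>/dev/null || true)
echo "${models:-ollama not available on this machine}"
```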
WebAPI
When you run ollama serve, it hosts an HTTP API at http://127.0.0.1:11434.
curl -X POST http://localhost:11434/api/generate -d '{
"model": "codellama",
"prompt": "Write a Bash script to check disk usage and alert if over 80%.",
"stream": false
}'
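With "stream": false, the whole reply arrives as a single JSON object, and the generated text sits in its "response" field. Assuming jq is installed, you can extract just that field; a sample payload stands in for a live call here.

```shell
# Sample of the JSON shape the /api/generate endpoint returns when
# streaming is off; the generated text is in "response".
reply='{"model":"codellama","response":"df -h /","done":true}'
echo "$reply" | jq -r '.response'   # prints: df -h /

# Against a running server, pipe curl into the same filter:
#   curl -s http://localhost:11434/api/generate -d '{...}' | jq -r '.response'
```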
To get the list of models:
http://127.0.0.1:11434/v1/models
Ollama offers a variety of open-source models optimized for these tasks. The "best" one depends on your hardware (CPU/GPU, RAM) and needs; the two above deliver strong performance on code-related tasks while also handling general system knowledge well.
To stop the service, use:
sudo systemctl stop ollama
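The Linux installer registers ollama as a systemd service, so the usual lifecycle commands apply (sudo systemctl start ollama to bring it back). A quick status check, written to report "unknown" where systemctl itself is unavailable (e.g. inside a container):

```shell
# systemctl is-active prints "active" or "inactive" for the unit;
# fall back to "unknown" if systemctl is not available here.
state=$(systemctl is-active ollama 2>/dev/null || true)
echo "ollama service: ${state:-unknown}"
```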