Setting Up a Python Streamlit Environment on a Synology NAS

In this post, I will walk you through the process of setting up an environment for running Python Streamlit scripts on a Synology NAS, drawing on my experience with data engineering projects. This guide covers setting up MongoDB in a Docker container, configuring a Python environment with Streamlit, and automating tasks to ensure your environment runs smoothly. It is an excellent example of how to leverage your Synology NAS for data engineering and data science projects.

Table of Contents

  1. Introduction
  2. Prerequisites
  3. Setting Up MongoDB in Docker
  4. Configuring the Python Environment
  5. Creating and Running Streamlit Scripts
  6. Automating Tasks
  7. Conclusion

1. Introduction

Synology NAS devices are versatile storage solutions that can also serve as powerful servers for your data engineering projects. By leveraging Docker, Python, and Streamlit, you can create a robust environment for data processing and visualization.

2. Prerequisites

Before we begin, ensure you have the following:

  • A Synology NAS device with Docker installed.
  • SSH access to your host (your NAS).
  • Basic knowledge of Linux commands and Python programming.

3. Setting Up MongoDB in Docker

First, we need to set up MongoDB in a Docker container. Follow these steps to pull the MongoDB image and run it:

Step 1: SSH into Your NAS

ssh -p your_ssh_port your_username@your_nas_ip

You will be prompted for your password.

Step 2: Pull the MongoDB Docker Image

Newer versions of MongoDB require AVX support on the CPU; if the container fails to start, check the Docker log files. On my system, I found that MongoDB 4.4.9 was the best-supported version.

sudo docker pull mongo:4.4.9
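To find out up front whether your NAS CPU supports AVX, you can look for the avx flag in /proc/cpuinfo. A small stdlib-only sketch:

```python
from pathlib import Path

def cpu_has_avx(cpuinfo_text):
    """Return True if any 'flags' line in /proc/cpuinfo output lists avx."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            if "avx" in flags.split():
                return True
    return False

# On the NAS:
# print(cpu_has_avx(Path("/proc/cpuinfo").read_text()))
```

If this prints False, stick to an older image such as mongo:4.4.9.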

Step 3: Create a Data Folder and Run the MongoDB Container

In DSM, right-click in File Station to create a folder for storing the MongoDB database, for instance /volume1/docker/mongodb_data.

Alternatively, from the SSH terminal: mkdir -p /volume1/docker/mongodb_data

Then run the commands:

sudo docker run -d --name mongodb -p 27017:27017 -v /volume1/docker/mongodb_data:/data/db mongo:4.4.9

A short explanation of what this command does:

  • sudo: Executes the command with superuser privileges, necessary for Docker operations on many systems.
  • docker run: The Docker command to run a container.
  • -d: Runs the container in detached mode, meaning it runs in the background.
  • --name mongodb: Assigns the name “mongodb” to the container for easier management.
  • -p 27017:27017: Maps port 27017 on your host to port 27017 on the container, allowing access to MongoDB.
  • -v /volume1/docker/mongodb_data:/data/db: Maps a directory on your host to the container’s data directory, ensuring data persistence.
  • mongo:4.4.9: Specifies the MongoDB image and version to use.
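Once the container is up, you can verify it is reachable from Python. A minimal sketch (assuming pymongo is installed; replace your_nas_ip with the address of your NAS):

```python
def mongo_uri(host, port=27017):
    """Build the connection string for the MongoDB container started above."""
    return f"mongodb://{host}:{port}/"

def ping(host, port=27017, timeout_ms=2000):
    """Return True if the MongoDB server answers a ping within the timeout."""
    from pymongo import MongoClient  # local import keeps mongo_uri() stdlib-only
    client = MongoClient(mongo_uri(host, port), serverSelectionTimeoutMS=timeout_ms)
    try:
        client.admin.command("ping")
        return True
    except Exception:
        return False

# Example: print(ping("your_nas_ip"))
```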

4. Configuring the Python Environment

Now that MongoDB is running, let’s set up a Python environment for our data engineering tasks.

Step 1: Install Python and pip

If Python and pip are not already installed, first check which Python version your scripts need. In my case that was Python 3.10, which I installed via the Package Center in DSM.

Step 2: Create and Activate a Virtual Environment

cd /volume1/your_project_directory
python3.10 -m venv your_project_env
source your_project_env/bin/activate

Step 3: Install Required Python Packages

From my local development environment, I ran:

pip freeze > requirements.txt

When collaborating with others or reproducing your work in different locations, pinning all required packages and versions in a single file helps you use the same package versions everywhere and avoids compatibility problems.

The command above creates a requirements.txt file with, for instance, the following content:
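The exact packages depend on your project; an illustrative example (these version pins are made up for this sketch, not my actual file) might look like:

```
streamlit==1.31.0
pymongo==4.6.1
pandas==2.1.4
```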


Then, in your SSH terminal, install the packages:

pip install -r requirements.txt

5. Creating and Running Streamlit Scripts

Let’s create a simple Streamlit script that connects to MongoDB and displays some data.

Step 1: Create a Streamlit Script

Create a file named your_script.py with the following content:

import streamlit as st
from pymongo import MongoClient

st.title('MongoDB Data Viewer')

# Connect to the MongoDB container running on the NAS
client = MongoClient('mongodb://your_nas_ip:27017/')
db = client.your_database
collection = db.your_collection

# Fetch all documents and display each one
data = collection.find()
for item in data:
    st.write(item)

As an example, I populated the database with the hotel_review dataset, well known from sentiment analysis tutorials.
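For illustration, here is a minimal sketch of how such a CSV dataset could be turned into documents for insert_many() — the column names below are made up, not the actual hotel_review schema:

```python
import csv
import io

def rows_to_documents(csv_text):
    """Parse CSV text into a list of dicts, ready for collection.insert_many()."""
    return list(csv.DictReader(io.StringIO(csv_text)))

sample = "review,rating\nGreat stay,5\nNoisy room,2\n"
documents = rows_to_documents(sample)
# Insert on the NAS with (assuming pymongo is installed):
# MongoClient('mongodb://your_nas_ip:27017/').your_database.your_collection.insert_many(documents)
```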

Step 2: Run the Streamlit Script

streamlit run your_script.py --server.port 8501 --server.headless true

Access the Streamlit app at http://your_nas_ip:8501.

6. Automating Tasks

To ensure your Streamlit app starts whenever the NAS boots, we will use the Task Scheduler in DSM.

Step 1: Create a Startup Script

Create a file named start_streamlit.sh with the following content:

#!/bin/bash
cd /volume1/your_project_directory
source your_project_env/bin/activate
streamlit run your_script.py --server.port 8501 --server.headless true

Make the script executable:

chmod +x start_streamlit.sh

Step 2: Schedule the Script to Run at Boot on Synology NAS

  1. Open Control Panel > Task Scheduler.
  2. Create a new Triggered Task > User-defined script.
  3. Name the task (e.g., “Start Streamlit”).
  4. Set the Event to Boot-up.
  5. In the Task Settings tab, enter the following script:
    /bin/bash /volume1/your_project_directory/start_streamlit.sh
  6. Click OK to save the task.

Step 3: Test the Automated Script

In the Task Scheduler, select the task and choose Execute to test the script, then open http://your_nas_ip:8501 to verify that your Streamlit app works as expected.
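The check can also be scripted. A stdlib-only sketch that reports whether the app answers on port 8501:

```python
import urllib.request
import urllib.error

def app_url(host, port=8501):
    """Build the address of the Streamlit app started by the scheduled task."""
    return f"http://{host}:{port}"

def is_up(url, timeout=5):
    """Return True if the app responds with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Example: print(is_up(app_url("your_nas_ip")))
```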


7. Conclusion

Setting up a data engineering environment on a Synology NAS involves several steps, from configuring Docker and MongoDB to setting up Python environments and automating tasks. By following this guide, you can leverage your NAS to run data engineering and data science projects efficiently.

This setup allows you to utilize the power of Docker for containerization, Python for scripting, and Streamlit for data visualization, creating a comprehensive data engineering environment. Feel free to expand this setup with additional tools and configurations to suit your specific needs. Happy coding!
