Connecting Node-RED to SAP SQL Anywhere with Custom Docker Instance

Learn how to customize a Node-RED Docker instance to connect to SAP SQL Anywhere using an ODBC driver. Follow the step-by-step instructions to build and push the container to your repository and add it as a custom microservice in the UMH Helm chart.

ℹ️ This tutorial was written by DanielH from our community and edited to fit into our learning hub. Please note that it is therefore community-supported only. For questions, please reach out on our Discord channel and check out the original Discord message.

Are you looking to connect your Node-RED instance to SAP SQL Anywhere using an ODBC driver? This guide provides step-by-step instructions on how to customize a Node-RED Docker instance for exactly that. By following them, you can build and push the container to your repository and add it as a custom microservice in the UMH Helm chart. Let's get started!

Instructions

  1. Clone the Node-RED Docker repository from GitHub using the following command:
    git clone https://github.com/node-red/node-red-docker.git
    
  2. Change the directory to docker-custom using the command:
    cd node-red-docker/docker-custom
    
  3. Open the docker-debian.sh file and update the Node.js version and the default tag to your preference. In this case, the tag is changed to a custom repository and the Node.js version is set to 16. As the base image, you can also use bullseye-slim instead of buster-slim.
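    The script's exact contents depend on your checkout of the repository, but the edited build call might look roughly like this hypothetical sketch (NODE_VERSION selects Node.js 16, OS keeps buster-slim as the base, and --tag points at your own repository):
    docker build --no-cache \
        --build-arg NODE_VERSION=16 \
        --build-arg OS=buster-slim \
        --file Dockerfile.debian \
        --tag your-repo/node-red-sqlanywhere:latest .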
  4. Next, modify the Dockerfile.debian file to install the unixODBC driver manager and the SQL Anywhere client version of your choice. In this case, we are installing version 17. Update the driver template settings according to your driver.
    # install unix odbc
    RUN apt-get update --fix-missing && apt-get install -y unixodbc
    
    # install SQL anywhere 17
    RUN apt-get update --fix-missing && apt-get install -y wget && \
        wget https://d5d4ifzqzkhwt.cloudfront.net/sqla17client/sqla17_client_linux_x86x64.tar.gz && \
        tar -xavf sqla17_client_linux_x86x64.tar.gz && \
        cd client17011 && \
        ./setup -nogui -I_accept_the_license_agreement -silent
    
    # create a driver template
    RUN echo "[SQL Anywhere 17]" >> /etc/odbcinst.ini
    RUN echo "Description=SAP SQL Anywhere 17 ODBC Driver" >> /etc/odbcinst.ini
    RUN echo "Driver=/opt/sqlanywhere17/lib64/libdbodbc17_r.so" >> /etc/odbcinst.ini
    RUN echo "Setup=/opt/sqlanywhere17/lib64/libdbodbc17_r.so" >> /etc/odbcinst.ini
    RUN echo "UsageCount=1" >> /etc/odbcinst.ini
    
  5. Export the environment variables that allow Node-RED to use the shared library files. The values can be taken from the sa_config.sh script in the SQL Anywhere installation folder, usually /opt/sqlanywhere17/bin64/. Copy them into the scripts/entrypoint.sh file in the docker-custom folder.
    # the following lines set the SA location.
    SQLANY17="/opt/sqlanywhere17"
    export SQLANY17
    
    [ -r "$HOME/.sqlanywhere17/sample_env64.sh" ] && . "$HOME/.sqlanywhere17/sample_env64.sh" 
    [ -z "${SQLANYSAMP17:-}" ] && SQLANYSAMP17="/opt/sqlanywhere17/samples"
    export SQLANYSAMP17
    
    # the following lines add SA binaries to your path.
    PATH="$SQLANY17/bin64:$SQLANY17/bin32:${PATH:-}"
    export PATH
    NODE_PATH="$SQLANY17/node:${NODE_PATH:-}"
    export NODE_PATH
    LD_LIBRARY_PATH="$SQLANY17/lib32:${LD_LIBRARY_PATH:-}"
    LD_LIBRARY_PATH="$SQLANY17/lib64:${LD_LIBRARY_PATH:-}"
    export LD_LIBRARY_PATH
    
  6. Build the Docker image by running the build script:
    ./docker-debian.sh
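    Once a container built from this image is running, you can optionally verify the driver registration from a shell inside it. unixODBC's odbcinst tool lists the installed drivers and any configured DSNs:
    # list registered ODBC drivers -- "[SQL Anywhere 17]" should appear
    odbcinst -q -d
    # list DSNs from /etc/odbc.ini, if you created any
    odbcinst -q -s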
    
  7. Push the newly built Docker image to your repository.
  8. Add the Docker container as a custom microservice in the UMH Helm chart, as sketched below.
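    The exact values.yaml keys for custom microservices differ between UMH Helm chart versions, so treat the following as a hypothetical sketch (the field names are assumptions; consult the UMH documentation for your chart release):
    # hypothetical values.yaml excerpt -- key names are illustrative only
    _000_commonConfig:
      customMicroservices:
        - name: nodered-sqlanywhere
          image: your-repo/node-red-sqlanywhere
          tag: latest
          port: 1880   # Node-RED default UI port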
