Nexus as repository

GE Smallworld provides Nexus as the preferred tool for storing your docker containers for the GSS solution. All needed containers are delivered as an archive file that can be loaded into this repository for use in your Kubernetes cluster.

But loading these images in the repository is a time-consuming job…

In testing and development you want to be able to recreate your cluster quickly to test or demo your solution. Loading the images into a repository is a hurdle in this process.

I decided to get the repo out of the cluster and redirect the manifests to use my local repo.

This is not an enterprise-ready solution, because I use the open-source Nexus docker container, which does not provide centralized authentication or authorization.

Solution in short

Set up a generic (virtual) Ubuntu machine with docker and docker-compose. I use an Ansible script for creating such a machine.
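If you do not have such a script, a minimal manual equivalent on Ubuntu might look like this (a sketch using the stock Ubuntu packages; adjust to your distribution):

sudo apt-get update
sudo apt-get install -y docker.io docker-compose
sudo systemctl enable --now docker
# optional: run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER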

Create two containers: the nexus container and the nginx proxy container.

Configure the solution in such a way that you can push and pull via ports 8444 and 8443.

Compose file

Content of the docker-compose.yaml file:

version: '3.2'
services:
  nexus:
    image: sonatype/nexus3
    container_name: nexus
    volumes:
      # keep the Nexus data on the host so it survives container restarts
      - /opt/nexus/data:/nexus-data
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - 443:443   # Nexus UI
      - 8443:8443 # pull connector
      - 8444:8444 # push connector
      - 5000:5000
    volumes:
      - /opt/nexus/nginx.conf:/etc/nginx/nginx.conf:ro
      - /opt/nexus/nexus.wijsmanlocal-cert.crt:/etc/nginx/ssl.crt:ro
      - /opt/nexus/nexus.wijsmanlocal-key:/etc/nginx/ssl.key:ro

But first

Kubernetes wants to use SSL. So first create your certificates. I used a key/cert combination I created myself, signed with my own root CA. This root CA certificate is needed on the cluster node machines for local trust. Or maybe you can obtain an official certificate. (I am Dutch, so I refuse to pay 😉)
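For completeness, a minimal openssl sketch for the self-signed route, assuming the hostname nexus.wijsmanlocal used elsewhere in this post (the CA subject name is made up):

# create your own root CA (once)
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -key rootCA.key -sha256 -days 3650 \
  -subj "/CN=Wijsman Local Root CA" -out rootCA.crt

# create a key and signing request for the Nexus host
openssl genrsa -out nexus.wijsmanlocal-key 2048
openssl req -new -key nexus.wijsmanlocal-key \
  -subj "/CN=nexus.wijsmanlocal" -out nexus.wijsmanlocal.csr

# sign it with the root CA, with the hostname as subject alternative name
openssl x509 -req -in nexus.wijsmanlocal.csr \
  -CA rootCA.crt -CAkey rootCA.key -CAcreateserial \
  -extfile <(printf "subjectAltName=DNS:nexus.wijsmanlocal") \
  -days 825 -sha256 -out nexus.wijsmanlocal-cert.crt

# on Ubuntu cluster nodes, trust the root CA
sudo cp rootCA.crt /usr/local/share/ca-certificates/rootCA.crt
sudo update-ca-certificates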

The Services

I mounted a local directory (/opt/nexus/data) for the Nexus data volume, so the data is persistent if the container restarts for some reason.

Access is handled by an nginx proxy. I mounted the key, the certificate, and the configuration for nginx into the proxy container.
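With everything in place, bring the stack up (assuming the compose file lives in /opt/nexus next to the mounted files):

cd /opt/nexus
docker-compose up -d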

Proxy

events {
}

http {
  proxy_send_timeout        120;
  proxy_read_timeout        300;
  proxy_buffering           off;
  keepalive_timeout         5 5;
  tcp_nodelay               on;

  # "ssl on;" is deprecated and removed in recent nginx releases,
  # so SSL is enabled per listen directive below instead
  ssl_certificate           /etc/nginx/ssl.crt;
  ssl_certificate_key       /etc/nginx/ssl.key;

  client_max_body_size      1G;

  # Nexus UI
  server {
    listen                  *:443 ssl;

    location / {
      proxy_pass            http://nexus:8081/;
      proxy_redirect        off;
      proxy_set_header      Host $host;
      proxy_set_header      X-Real-IP $remote_addr;
      proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header      X-Forwarded-Host $server_name;
      proxy_set_header      X-Forwarded-Proto $scheme;
    }
  }

  server {
    listen                  *:5000 ssl;

    location / {
      proxy_pass            http://nexus:5000/;
      proxy_redirect        off;
      proxy_set_header      Host $host;
      proxy_set_header      X-Real-IP $remote_addr;
      proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header      X-Forwarded-Host $server_name;
      proxy_set_header      X-Forwarded-Proto $scheme;
    }
  }

  # pull connector (docker group repo)
  server {
    listen                  *:8443 ssl;

    location / {
      proxy_pass            http://nexus:8082/;
      proxy_redirect        off;
      proxy_set_header      Host $host;
      proxy_set_header      X-Real-IP $remote_addr;
      proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header      X-Forwarded-Host $server_name;
      proxy_set_header      X-Forwarded-Proto $scheme;
    }
  }

  # push connector (hosted docker repo)
  server {
    listen                  *:8444 ssl;

    location / {
      proxy_pass            http://nexus:8083/;
      proxy_redirect        off;
      proxy_set_header      Host $host;
      proxy_set_header      X-Real-IP $remote_addr;
      proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header      X-Forwarded-Host $server_name;
      proxy_set_header      X-Forwarded-Proto $scheme;
    }
  }

}

Notice the ports:

Service     Port   Nexus container port
Push        8444   8083
Pull        8443   8082
Nexus UI    443    8081

Port mapping by nginx.

With this setup you can access your repository server.
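A quick check that the proxy and certificates work, assuming the root CA file from the certificate step above; this should return the Nexus UI page:

curl --cacert rootCA.crt https://nexus.wijsmanlocal/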

Configure Nexus

In short, you can define users and groups in the admin interface. I defined a user swadmin for use with my Kubernetes cluster.
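To log in as admin the first time, you need the generated initial password; the sonatype/nexus3 image writes it into the data volume, so on this setup:

cat /opt/nexus/data/admin.password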

Security needs your attention depending on your own setup.

You need to setup three repositories:

  • A hosted docker repo
  • A group docker repo
  • A proxy docker repo

The repo itself is the hosted repo. Create a hosted docker repo and name it smallworld-docker. Check the HTTP connector and set the port number to 8083.

This will be your push connector.

Next you create a proxy for docker containers from hub.docker.com that will also be cached in your repo. Create a proxy docker repo and name it sw-docker-hub.

Next you create a group for pulling. Create a group docker repo and name it sw-docker-group. Check the HTTP connector and give it the port number 8082. At the bottom, add the two repos to the members of this group: smallworld-docker and sw-docker-hub.

This will be your pull connector.
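A short smoke test of both connectors, assuming the hostname nexus.wijsmanlocal and the swadmin user from above:

# push side (hosted repo behind port 8444)
docker login nexus.wijsmanlocal:8444 -u swadmin
docker pull hello-world
docker tag hello-world nexus.wijsmanlocal:8444/hello-world:test
docker push nexus.wijsmanlocal:8444/hello-world:test

# pull side (group repo behind port 8443, serves hosted and proxied images)
docker pull nexus.wijsmanlocal:8443/hello-world:test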

Populate the repository

I suggest doing this on another machine, because the populating process creates a lot of stuff to clean up. Docker is all you need…

I used the script below to load the containers you receive from GE Smallworld (under a valid license) for Electric Office Web into your repository. You need about 40 GB of disk space for the process, and it takes quite an amount of time.

The prune commands are there to free up disk space in between the pushes.

#!/bin/bash
#
# load the GSS images (EO Web) in the repo
# this is a helper to load the images in an external repo
#

CONTAINERFOLDER=/opt/sw530
REPO=nexus.wijsmanlocal:8444
USR=SECRET1
PSW=SECRET2   # renamed from PWD, which is a reserved shell variable

echo "push images to $REPO"

# SWMFS, GSS, EO and STS are delivered as archives with the same naming pattern
for COMPONENT in swmfs gss eo sts; do
  # free up disk space before loading the next archive
  docker image prune -f
  docker container prune -f
  docker volume prune -f

  docker load --input $CONTAINERFOLDER/${COMPONENT}_cache_container-5.3.0.tar

  docker run -v /var/run/docker.sock:/var/run/docker.sock \
    cache-container:${COMPONENT}-SW530-PROD-1 \
    python post_artifacts.py -d $REPO --docker-reg-usr $USR --docker-reg-psw $PSW -t 1200
done

# final cleanup
docker image prune -f
docker container prune -f
docker volume prune -f

As you can see, GE Smallworld uses a container named cache-container to push the images to the repository.

