Running Harbor registry (Docker repository) behind a reverse proxy and solving docker push errors

Written by - 2 comments

Published on - Listed in Docker Kubernetes Containers Security

Running (Docker / Kubernetes) containers means an image was downloaded (pulled) and started (run) on a host. Images usually reside on centralized and publicly available repositories - called registries in the container world. The best known such registry is surely Docker Hub, which is maintained by Docker Inc. There are other well-known registries, too, such as the one maintained by Red Hat.

But besides these public registries, it is also possible to run a private registry. This means additional effort for setup, resources and maintenance, but using a private registry is one way to harden the whole container infrastructure and to avoid installing potentially vulnerable and dangerous container images.

Harbor: a private registry

Harbor is such a private registry. It is an open source project led by VMware and is, as of this writing, a CNCF project in incubating status. But Harbor is not just a private registry to push and pull images from - it also allows "plugging" vulnerability scanners into the registry. Imagine an anti-virus scanner on your workstation which scans the files and directories in your file system. More or less the same happens with the pluggable scanners inside Harbor: the images are scanned for known vulnerabilities. Harbor already ships with an embedded scanner called Clair. When Harbor is installed with the relevant parameter (./ --with-clair), Clair will run alongside Harbor.

Inside the registry, multiple repositories can be created, each with its own RBAC system controlling who or what is allowed to push or pull images. In the following screenshot, a repository called "test" was created.

Harbor - private registry for container images

Image pushing to Harbor

The basic idea is to either build your own image or re-use an existing image, tag it, then push it into such a repository. An easy example is to use the publicly available "nginx" container image from DockerHub.

First the image is pulled:

root@dockerhost:~# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
54fec2fa59d0: Pull complete
4ede6f09aefe: Pull complete
f9dc69acb465: Pull complete
Digest: sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12
Status: Downloaded newer image for nginx:latest

The now locally available nginx:latest image can be tagged (harbor.example.com is used here as a stand-in for the registry's real hostname):

root@dockerhost:~# docker tag nginx:latest harbor.example.com/test/nginx:latest

harbor.example.com is the remote registry, test the repository name, followed by the image name and the tag under which the image should be saved in the repository.

Once tagged, the local image can now be pushed into the remote registry:

root@dockerhost:~# docker push harbor.example.com/test/nginx:latest

Access denied error on push

So far so good, but if you want to harden your container infrastructure, a docker push should not just work out of the box without authentication.

root@dockerhost:~# docker push harbor.example.com/test/nginx:latest
The push refers to repository [harbor.example.com/test/nginx]
b3003aac411c: Preparing
216cf33c0a28: Preparing
c2adabaecedb: Preparing
denied: requested access to the resource is denied

When a repository is configured as "private", only authorized users are allowed to push into it - which means the docker client needs to authenticate first.

root@dockerhost:~# docker login harbor.example.com
Username: pusher
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
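As the warning suggests, storing the plaintext password in config.json can be avoided with a credential helper. A minimal sketch of /root/.docker/config.json, assuming the docker-credential-pass helper binary is installed on the docker host:

```json
{
  "credsStore": "pass"
}
```

With "credsStore" set, docker login hands the credentials to the matching helper binary (docker-credential-&lt;name&gt;) instead of writing them unencrypted into config.json.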

Unknown blob error on push

When Harbor is run behind a reverse proxy, there can be all kinds of weird issues. The "unknown blob" error is one of these.

root@dockerhost:~# docker push harbor.example.com/test/nginx:latest
The push refers to repository [harbor.example.com/test/nginx]
b3003aac411c: Pushing [==================================================>]  3.584kB
216cf33c0a28: Preparing
c2adabaecedb: Pushing [==================================================>]  69.21MB/69.21MB
unknown blob

Unfortunately the Harbor setup documentation is not (yet) at a level where it covers multiple ways of running Harbor and, as of this writing, expects Harbor to receive direct HTTP/HTTPS communication. But after going through a couple of issues (Docker issue 970 and Harbor issue 3114 are worth a look), it turns out the problem is related to a reverse proxy setup with SSL/TLS offloading.

Correct reverse proxy setup in front of Harbor

If a reverse proxy runs in front of Harbor and communicates with Harbor over plain HTTP, Harbor has a problem determining the correct HTTP scheme.

The tricky part is to find out what exactly causes the communication problem. As it turns out, the Harbor-run Nginx container is itself a reverse proxy in front of the core and portal containers (depending on the requested path) and overwrites the HTTP protocol scheme with the X-Forwarded-Proto header.

https -> [ Reverse Proxy ] -> http -> [ Docker Host FW -> Harbor Nginx -> Harbor Core ]

To circumvent this, the Nginx config of this Harbor Nginx container must be slightly adapted. This can be done in the unpacked Harbor directory, in which ./ was launched. The Nginx configuration can be found in common/config/nginx/nginx.conf and even contains the relevant hint on what to do:

      # When setting up Harbor behind other proxy, such as an Nginx instance, remove the below line if the proxy already has similar settings.
      proxy_set_header X-Forwarded-Proto $scheme;

The setting of the X-Forwarded-Proto header must be prevented - so these lines must be commented-out:

      # When setting up Harbor behind other proxy, such as an Nginx instance, remove the below line if the proxy already has similar settings.
      #proxy_set_header X-Forwarded-Proto $scheme;

These lines occur a couple of times in the config, and every occurrence must be disabled. Hint: simply search for "behind other proxy" and the relevant lines are quickly found and disabled.
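Instead of editing by hand, all occurrences can also be disabled in one go with sed, run from the unpacked Harbor directory (the config path is the one mentioned above):

```shell
# Comment out every X-Forwarded-Proto override in the Harbor-generated
# Nginx config; sed keeps a backup copy as nginx.conf.bak.
sed -i.bak 's/^\([[:space:]]*\)proxy_set_header X-Forwarded-Proto/\1#proxy_set_header X-Forwarded-Proto/' \
  common/config/nginx/nginx.conf

# Verify: this grep should no longer find any active (uncommented) occurrence.
grep '^[[:space:]]*proxy_set_header X-Forwarded-Proto' common/config/nginx/nginx.conf
```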

Now is also a good moment to add an additional parameter to harbor.yml. Just after the http port definition, add "relativeurls: true":

root@harbor:~/harbor# cat harbor.yml
# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80
  relativeurls: true

Once the Nginx config and harbor.yml were adjusted, all Harbor containers need to be restarted. This can be done using the docker-compose command inside the unpacked harbor directory:

root@harbor:~/harbor# docker-compose down -v
root@harbor:~/harbor# docker-compose -f docker-compose.yml up -d

The docker-compose command takes care of starting the necessary containers with the relevant configurations.

The Nginx config on the TLS-offloading front reverse proxy is pretty straightforward:

root@frontproxy ~ # cat /etc/nginx/sites-enabled/harbor.example.com
server {
  listen 80;
  server_name harbor.example.com;
  access_log /var/log/nginx/harbor.example.com.access.log;
  error_log /var/log/nginx/harbor.example.com.error.log;

  location / {
    rewrite ^(.*)$ https://$host$1 redirect;
  }
}

server {
  listen 443;
  server_name harbor.example.com;
  access_log /var/log/nginx/harbor.example.com.access.log;
  error_log /var/log/nginx/harbor.example.com.error.log;
  ssl on;
  ssl_certificate /etc/nginx/ssl/harbor.example.com.crt;
  ssl_certificate_key /etc/nginx/ssl/harbor.example.com.key;

  location / {
    include /etc/nginx/proxy.conf;
    proxy_pass http://192.0.2.10;  # harbor docker host
  }
}

Note: You might want to make sure proxy_buffering is set to off in proxy.conf.
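The contents of proxy.conf are not shown above; the following is a minimal sketch of what such an include could look like in this setup (the file contents are an assumption, only the directives themselves are standard Nginx):

```nginx
# /etc/nginx/proxy.conf - shared proxy settings (sketch)
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# TLS is offloaded on this front proxy, so pass the original scheme to Harbor:
proxy_set_header X-Forwarded-Proto $scheme;
# Important for Harbor: do not buffer large image layer transfers
proxy_buffering off;
proxy_request_buffering off;
# Allow image layers of any size to pass through
client_max_body_size 0;
```

With X-Forwarded-Proto set here (and its override disabled in Harbor's own Nginx, as shown above), Harbor sees the scheme the client actually used.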

docker push to Harbor behind reverse proxy

Time to find out whether or not these adjustments helped to overcome the unknown blob error on docker push:

root@dockerhost:~# docker push harbor.example.com/test/nginx:latest
The push refers to repository [harbor.example.com/test/nginx]
b3003aac411c: Pushed
216cf33c0a28: Pushed
c2adabaecedb: Pushed
latest: digest: sha256:cccef6d6bdea671c394956e24b0d0c44cd82dbe83f543a47fdc790fadea48422 size: 948

Finally, the container image was successfully pushed! In Harbor's UI this can be verified by checking the repository "test":


Setting up a private registry for container images isn't as hard as one might believe. There are a few gotchas, mainly due to the lack of proper documentation (for different use cases), but with enough research and willingness for trial and error, Harbor turns out to be a great help in hardening a container infrastructure!

If you want to know more about container infrastructures such as Kubernetes based Rancher 2 and Harbor setups on premise, contact us at Infiniroot.


Comments (newest first)

Daniel wrote on May 10th, 2023:

Thanks for your explanations, this saved my day. You should maybe also add, that token expiration can also be a problem for large images. I am pushing up to 30GB containers and this needs more than the default token expiration timeout. This can be set either via a curl request or in the System settings.
Thx again!

Merevaht wrote on Jun 4th, 2021:

good job, thanks for helping


