Multicontainer and multiple repos



I’m trying to migrate our gateway architecture. It is currently made up of separate Node applications, each backed by its own repo on GitHub. Is there any way to make use of multiple repos with the multicontainer feature?


That is exactly what I am experimenting with these days. Each service in our docker-compose.yml has its own repo. Then we have some CI triggering on every commit to master, whose job is to build the Docker image and push it to a registry (Docker Hub or a private one). Finally, the Resin app consists only of a docker-compose.yml that references the images and pulls them from the registry.
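As a minimal sketch, the CI step for one service repo could look like the following. It is a dry run (the docker commands are echoed rather than executed), and the registry, organisation and service names are hypothetical placeholders:

```shell
#!/bin/sh
# Sketch of a per-repo CI build step. Dry run: drop the "echo" to
# actually build and push. Registry/org/service names are made up.
set -eu

REGISTRY="registry.example.com/myorg"
SERVICE="medusa-model-dummy"
# In real CI this would be: SHA=$(git rev-parse HEAD)
SHA="${SHA:-0cb9c7c45dda5a59b82525a2d0acd0dc4ab382e9}"

echo docker build -t "$REGISTRY/$SERVICE:$SHA" .
echo docker push "$REGISTRY/$SERVICE:$SHA"
```

The image tag is the commit SHA, which is what the docker-compose.yml below pins against.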

Does that make sense?

Something like:

version: '2'

# A simple POC for a multicontainer version of the Medusa app.

services:
  medusa-model:
    image: <registry>/<repository>/medusa-model-dummy:0cb9c7c45dda5a59b82525a2d0acd0dc4ab382e9
    depends_on:
      - medusa-manager

  medusa-manager:
    image: <registry>/<repository>/medusa-manager-dummy:5cd87c6d234baffee309f7cc25857f7efcd60c64

Note that I have to pin each image to a specific version as it appears in the registry (the commit SHA in our case). Otherwise git push resin master does not detect any change and does not update the app on the devices, even when a new version of a service is available.
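Concretely, releasing a new version of a service then means rewriting the pinned tag in docker-compose.yml and pushing. A self-contained sketch (the compose line and SHAs are stand-ins; in practice the new SHA would come from the service repo, e.g. `NEW_SHA=$(git -C ../medusa-manager rev-parse HEAD)`):

```shell
#!/bin/sh
# Sketch: bump the pinned image tag so `git push resin master`
# sees a change. File contents and SHAs are stand-ins for the demo.
set -eu

# Stand-in compose file with the old pinned tag:
cat > docker-compose.yml <<'EOF'
    image: r/medusa-manager-dummy:0cb9c7c45dda5a59b82525a2d0acd0dc4ab382e9
EOF

NEW_SHA="5cd87c6d234baffee309f7cc25857f7efcd60c64"
sed -i.bak \
  "s|\(medusa-manager-dummy:\)[0-9a-f]\{40\}|\1${NEW_SHA}|" \
  docker-compose.yml

grep "$NEW_SHA" docker-compose.yml   # the tag is now updated
# Then: git commit -am "Bump medusa-manager" && git push resin master
```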


Same here. Multicontainer does not support Git submodules. This is a really important feature that would make multicontainer easy to use.


@mediainbox, is this something you have tested yourself?

EDIT: I can confirm submodules are not supported. I failed to migrate to a submodules approach. git push resin master indeed fails with:

➜  main-application git:(master) git push resin master --force
Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (6/6), 830 bytes | 0 bytes/s, done.
Total 6 (delta 0), reused 3 (delta 0)

[Info]      Starting build for XXX, user XXX
[Info]      Dashboard link:
[Info]      Building on arm01
[Info]      Pulling previous images for caching purposes...
[Success]   Successfully pulled cache images
[Error]     Some services failed to build:
[Error]       Service: frontend
[Error]         Error: Cannot locate specified Dockerfile: Dockerfile
[Error]       Service: proxy
[Error]         Error: Cannot locate specified Dockerfile: Dockerfile
[Error]       Service: data
[Error]         Error: Cannot locate specified Dockerfile: Dockerfile
[Error]     Not deploying release.

remote: error: hook declined to update refs/heads/master
! [remote rejected] master -> master (hook declined)
error: failed to push some refs to ''

That is a shame. Submodules could prove a far better alternative.

It looks like the submodules are not cloned when the master branch is checked out on Resin’s build server, although that should happen with recent versions of Git.

Can anyone from Resin confirm this, and check which version of Git is used on the build farm?

EDIT2: The latest version of Git might be installed already. However, git clone and git checkout do not clone or update submodules automatically, as I thought initially. One needs to pass options explicitly, e.g. git clone --recurse-submodules, or run git submodule update --init --recursive.


Are you able to pull any of your images from a private repo?

I think I’m most of the way to implementing this model, but I’m currently hung up on trying to authenticate a private repo.


Good question. I have been using public Docker Hub repos so far.


At the moment this is not supported, but we are working on a method to support it, which can be tracked in this issue:
This should be released soon.