I didn’t expect it to run on macOS either. That’s why I didn’t initially believe that the “exec format error” in my case was an architecture issue, even though all the internet resources I found pointed in that direction.
I had found Petros’ post as well, and I now understand (I believe) what RUN [ "cross-build-start" ] does. I’m reasonably certain it would make the build run on Docker Hub, but what happens if it runs on the Pi?
Hey, do you have qemu-system-arm set up on your Mac? That would explain why it worked: the system would emulate ARM for those base images. For example, I have that set up on my local Linux machine and can cross-compile these containers locally (and they will then run on the original architecture).
Looking at the blog post, it seems like the key is using the armv7hf-debian-qemu base image, which includes that qemu executable and the cross-compile setup.
I’d think you’d need to adapt that repo to your needs. I’m guessing you can just replace FROM resin/armv7hf-debian:jessie with the base you’d like, FROM resin/rpi-raspbian or FROM resin/raspberrypi3-debian for example, and thus create a relevant ...-qemu version that you can use for your own project.
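To make that adaptation concrete, a minimal sketch of such a custom ...-qemu base image might look like the following. This is a guess based on the armv7hf-debian-qemu pattern, not verified against that repo; the file names and paths are assumptions:

```dockerfile
# Hypothetical Dockerfile for a custom "-qemu" base image, following
# the resin armv7hf-debian-qemu pattern; paths are assumptions.
FROM resin/rpi-raspbian:jessie

# Ship the statically linked qemu binary plus the cross-build shims so
# an x86 builder (e.g. Docker Hub) can execute the ARM binaries.
COPY qemu-arm-static /usr/bin/qemu-arm-static
COPY cross-build-start cross-build-end /usr/bin/
```

Your project’s Dockerfile would then build FROM this image and wrap its RUN steps between RUN [ "cross-build-start" ] and RUN [ "cross-build-end" ], as discussed above.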
It’s a guess, but I’m more confident than not that it would work.
The usual resin base images are not set up like this, I think, because 1) they would ship with more binaries than they really need, and most of the time that’s not great; 2) if qemu-system-arm is set up externally, then the containers can be built on other architectures anyway (as on the resin.io builders, and as opposed to the DockerHub builders).
RUN pip3 install nose
CMD python3 -c 'import numpy; numpy.test("full");' && while : ; do echo "Idling..."; sleep 600; done
All these are just some examples showing that the original method works, plus some extra hints on how to use them.
On the other hand, if you can also build your containers locally with docker build (without needing automated builds), you can docker push the resulting image to DockerHub and work from there similarly, without messing with all these cross-compilation steps at all!
What kind of use case do you have in mind? Why would you prefer to do builds on DockerHub?
When you say you are done, what outcome do you mean? Are you trying to integrate (custom) base images with DockerHub like this, or are you trying to make deploying to resin.io devices easier in some way?
By “I’m done” I meant that I didn’t have to worry about building the image. I provide images for the general public on DockerHub based on Dockerfiles in public GitHub repositories. Some of those are for the Raspberry Pi. So, basically I’ve got two options if I don’t want to build images manually:
1. enable cross-builds on DockerHub; requires modifications to the Dockerfile as discussed above
2. integrate Travis CI (or similar) through GitHub hooks; Travis would do the cross-build and push to DockerHub
I see, thanks. In our case I think we are cross-compiling locally and just pushing to DockerHub (we are building a few thousand base images pretty much every day). DockerHub automated builds have the issue that long-running builds are killed (e.g. I couldn’t get an image with both numpy and scipy built, as the process was killed by DockerHub for taking too long). That can be a limitation for this setup.
Travis sounds interesting, because there you can modify your build host and add emulation in a way that you don’t need any of these cross-building tricks (or to ship those files in all of your containers). I was looking, for example, at this blog post:
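For illustration, a minimal .travis.yml along those lines might look like this. It’s a sketch, not taken from that blog post: it assumes the commonly used multiarch/qemu-user-static image for registering the binfmt handlers, and myuser/myimage is a placeholder:

```yaml
# Hypothetical .travis.yml sketch: register qemu binfmt handlers on the
# x86 build host, so a plain ARM base image builds without any in-image
# qemu files or cross-build-start/end tricks.
sudo: required
services:
  - docker

before_install:
  # Registers qemu-user-static interpreters for foreign architectures
  - docker run --rm --privileged multiarch/qemu-user-static:register --reset

script:
  # The Dockerfile can use an unmodified ARM base image here
  - docker build -t myuser/myimage:armhf .

after_success:
  # DOCKER_USER / DOCKER_PASS would be set as encrypted Travis variables
  - docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
  - docker push myuser/myimage:armhf
```

The same before_install registration command also works on a local x86 Linux box, which is one way to get the local cross-building setup mentioned earlier.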
It’s definitely interesting, and I’d love to hear what you make of it in the future, whichever path you choose!