Precompiled Python wheels for ARM


TL;DR: you can now install some Python packages on ARMv7 platforms more quickly by using

pip install --extra-index-url= <packagename>

for a number of packages, including numpy, scipy, pillow, RPi.GPIO, simplejson, and more. For the whole list and the hosted versions, browse
:construction: This is work in progress! You’ll need pip version 8.1 or later! :construction:


I use Python for a lot of my projects, often on ARM machines (like the Raspberry Pi). When installing libraries that have components to be compiled (such as numpy), ARM is doubly at a disadvantage: compilation takes much longer, and while precompiled versions are often available for x86 (so those machines don’t need to compile anything, just install), that’s usually not the case for ARM. So I thought it would be a good Friday Hack to change this and let ARM devices install natively compiled packages faster. :rotating_light:

The standard way to distribute precompiled Python libraries is wheels. If you check the PyPI page for numpy, for example, there are a bunch of wheels available to download for x86, and when you issue a pip install numpy, if a compatible wheel is found, pip will use it instead of building everything from scratch.
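A wheel’s compatibility is encoded right in its filename, which is how pip decides whether it can be used on a given machine. A small sketch of taking one apart (the filename below is just an example of what an armv7 numpy wheel would look like):

```shell
# Wheel filenames follow the pattern:
#   {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl
# pip only picks a wheel whose tags match the running interpreter and
# platform, which is why x86 wheels are ignored on ARM boards.
wheel="numpy-1.12.0-cp27-cp27mu-linux_armv7l.whl"

base="${wheel%.whl}"         # drop the extension
platform="${base##*-}"       # last dash-separated field
rest="${base%-*}"
abi="${rest##*-}"
rest="${rest%-*}"
pytag="${rest##*-}"

echo "python tag: $pytag"    # cp27 (CPython 2.7)
echo "abi tag:    $abi"      # cp27mu
echo "platform:   $platform" # linux_armv7l
```

An x86 wheel would carry a platform tag like linux_x86_64 instead, so pip on an ARM board skips it and falls back to building from source.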

I got some inspiration from the manylinux project, and set things up as Docker containers that spit out .whl files.


Inspired by the manylinux project, I’ve set up a similar organization: a base Docker container with Python, and separate containers for the different libraries that install their dependencies and compile them. It’s very cool how manylinux organizes things, but unfortunately its containers are based on CentOS, and no ARM versions of those are available. To get things done in less than a day, I went with our base images instead, and modified the Python images to work on armv7 directly, install Python, add wheel, and create tags for Python versions 2.7 to 3.6.

Then I added Dockerfiles for “generic” Python packages, which don’t need anything extra besides what’s normally in build-essential (more precisely, the -buildpack-deps base images). These can be used to compile packages like RPi.GPIO, simplejson, Twisted, etc.

Then I set up a couple of other packages separately: numpy (my main motivation for this project), scipy, and pillow at this time.

The Dockerfile for each setup and the scripts are included in the arm-wheels repo on GitHub, though I still have to fill in some more details.

  • The arch directory has the Dockerfiles for the Python base images (so far only armv7hf; the results are on Docker Hub)
  • The packages directory has the Dockerfiles for specific Python packages, plus the _generic one mentioned above. They can be found on Docker Hub as well: generic, numpy, scipy, pillow.

You can use them by running the script as ./ <pythonversion> <packagename>, where <pythonversion> is one of 2.7, 3.3, 3.4, 3.5, 3.6 (or fewer; check the available tags on Docker Hub!), and <packagename> is in the standard pip format, something like numpy or numpy==1.12.0. The ./ script within spins up the container, does the work, and then spits out the resulting binaries into the ./target directory; for example, numpy’s script:

docker run \
  --rm \
  -v "$PWD/target:/usr/src/target" \
  "imrehg/armv7hf-python-numpy:$1" "$2"
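For context, a container like this presumably does little more than run pip wheel for the requested requirement. Here’s a hedged sketch of what such an entrypoint might look like (the script name and exact steps are my guesses, not the repo’s actual code); the sketch is written to a file rather than executed, since it needs the container environment:

```shell
# Hypothetical sketch of a per-package container entrypoint:
# $1 is the pip requirement (e.g. "numpy==1.12.0"); the resulting
# wheels land in the mounted /usr/src/target directory.
cat > entrypoint-sketch.sh <<'EOF'
#!/bin/sh
set -e
pip install --upgrade pip wheel    # make sure wheel building is available
pip wheel "$1" -w /usr/src/target  # build the wheel, without installing
EOF
chmod +x entrypoint-sketch.sh
```

Because the wheel is built inside the ARM container, the resulting platform tag matches the target boards, not the build host.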

I’ve uploaded the resulting files to my server for the moment, so the available packages and versions are at, and you can use them with pip 8.1 or later, for example:

pip install --extra-index-url= numpy

That one line above should easily save ~40 minutes of compilation time (the scipy package saves a few hours).
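If you use the index regularly, the same flag can also live in pip’s configuration file instead of being typed each time. A minimal sketch; the URL below is just a placeholder for wherever the wheels end up hosted:

```shell
# pip reads extra index URLs from its config file as well
# (typically ~/.pip/pip.conf or ~/.config/pip/pip.conf).
# The URL here is a placeholder, not the real hosting location.
cat > pip.conf <<'EOF'
[global]
extra-index-url = https://example.com/wheels/
EOF
```

With that in place, a plain pip install numpy picks up the extra index automatically.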


It is a very rough, work-in-progress project, and I learned a lot! :mortar_board: Here are a few things that I’d like to change or improve :hammer_pick::

  • reorganize the base images, similar to manylinux: I think creating a setup similar to manylinux’s would have a bunch of benefits, including
  • a single base Docker image with all available Python versions installed, instead of using tags
  • compile Python from scratch instead of using’s precompiled distributions, so more platforms could be added, such as aarch64
  • ideally upstream and add these abilities to manylinux
  • have an organization that can run this compilation automatically (a CI setup), building all required packages (while checking which ones have already been built), and ideally distributing them automatically too
  • have a better way to distribute these packages, such as a devpi server hosted somewhere
  • add more Python libraries: probably all that are listed on Python Wheels as having wheels, but maybe just the ones that really need compilation (those that aren’t pure Python). Suggestions for other packages to build are welcome!

What do you think, do you find this helpful? Have any feedback? :construction_worker:


I was working on this a bit more last night, and here are a few changes since:

First, I fixed up some naming issues with the directories where the modules are hosted at the moment (capitalization was tricky for pip :stuck_out_tongue:)

Then, I added a few more modules:

None of these have really been tested so far, so if anyone gives them a spin, it will be appreciated :ballot_box_with_check:

Also, I figured out that some of the packaging is indeed tricky when it comes to making sure the right libraries are pulled in and compiled against. This is especially important for things like cryptography (OpenSSL), lxml (libxml2), and possibly PyYAML (libyaml, though that one should be distributable without compiling against it). I’d need to check whether they really work across the board as distributed; better not to ship them at all than to ship them badly…
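One quick way to see what a compiled extension actually links against is ldd, which is exactly the thing that can break when a wheel is moved to a distro with different system libraries. Demonstrated here on a standard-library extension module, since its path is easy to find (the exact path varies per system):

```shell
# ldd lists the shared libraries a compiled extension module was
# linked against; run against a wheel's .so files, it shows whether
# the wheel depends on system libraries the target may not have.
# _ctypes is used here purely as a convenient example .so file.
so=$(python3 -c "import _ctypes; print(_ctypes.__file__)")
ldd "$so"
```

Running the same command on, say, lxml’s .so files would show the libxml2 dependency mentioned above.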

Just a brain dump, as weekend is the time for thinking, usually :slight_smile:


Trying this further, it looks like further adjustments are needed to make these distributions actually handy (due to the included libraries). I’ve started to use auditwheel to create manylinux1-type packages, which should be a first step towards actually being cross-platform. On Debian/Raspbian it should work as it is now; on other distros it might need more work and isn’t guaranteed (e.g. Alpine).

In practice this change means that after running pip wheel <packagename>, one also needs to run auditwheel repair <wheelfile> to create a combined wheel. It also needs patchelf, which might or might not work on all platforms…
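The two-step flow can be wrapped in a small script. This is only a sketch (the script and directory names are mine, not from the repo), and it’s written to a file rather than run here, since it needs pip, auditwheel, and patchelf to all be available:

```shell
# Hypothetical wrapper for the build-then-repair flow described above.
cat > repair-wheel-sketch.sh <<'EOF'
#!/bin/sh
set -e
pip wheel "$1" -w wheelhouse/          # 1) build the raw wheel
for whl in wheelhouse/*.whl; do
    auditwheel repair "$whl" -w dist/  # 2) graft the shared libs into the wheel
done
EOF
chmod +x repair-wheel-sketch.sh
```

The repaired wheels in dist/ bundle the shared libraries they need, which is what makes them portable across distros.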

Let’s see if it improves things in practice :stuck_out_tongue: I probably should have expected it, but this is turning out to be a heavier project than originally thought…


This is great!

I’m looking at the /wheels/ repo you created, and currently scipy is empty. Is this intentional?


Hey, I kind of messed up the process by trying to make it better (happens, right?), and scipy was one of the ones I messed up the most. I’ll revisit it in the next couple of days and try to make it work again :slight_smile:!

What OS are you running? (Debian/Raspbian, Alpine, other?)


Great, thanks! I’m currently building into a Docker image based on resin/armv7hf-debian.


Cool! Debian is easier in general, compared to making “universal” wheels.


A couple of packages that I think would benefit from wheels are cffi, smbus-cffi, and dbus-python.

Right now I have them broken out of my requirements.txt into a separate step so they don’t have to be rebuilt as often.

I created a pull request for dbus-python, as that one has dependencies, but the other two can be built with _generic.

I also found Adafruit_BBIO, which could be built with _generic and didn’t already have a wheel for any architecture.


I guess this news from the Raspberry Pi folks is pretty relevant (especially as I haven’t had a chance to take the approach above much further yet)

What do you think?


I haven’t tried them yet. I’m wondering how tightly they are tied to Raspbian.

Will they be enough to run in an Alpine container on a BeagleBone Black?

Something to play with when I’m not in the process of moving.