Logging Implementation


I have created a custom logging application that was originally part of the automated hardware testing suite.

In this project I'm using the Bunyan logging library, logrotate, and systemd timers to implement a logging solution for my CoffeeScript application. The logs are stored on the device and persist through reboots and power failures.
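To give a rough idea of the rotation half, here is a minimal sketch of a logrotate config plus a systemd timer/service pair that triggers it. All paths and unit names here are illustrative, not the exact ones from my project:

```ini
# /etc/logrotate.d/app -- hypothetical path and log location
/data/logs/*.log {
    size 1M
    rotate 5
    compress
    missingok
    notifempty
    copytruncate
}
```

```ini
# app-logrotate.timer -- hypothetical unit name
[Unit]
Description=Periodic application log rotation

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

# app-logrotate.service -- oneshot service the timer fires
[Unit]
Description=Rotate application logs

[Service]
Type=oneshot
ExecStart=/usr/sbin/logrotate /etc/logrotate.d/app
```

`Persistent=true` makes systemd run a missed rotation on the next boot, which matters on devices that lose power unexpectedly.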

You can read more in my blog post here

I’m more than happy to answer any questions regarding the project :smile:


Hi @nchronas, that is an interesting solution. I hadn't thought of storing logs locally in a /data folder. I had a similar dilemma about how to solve logging efficiently; I first wanted to ask in the support chat, but then thought this forum would make the conversation a bit easier to digest.

So, the device log I can access from Resin is cut off after a certain (relatively small) number of lines. I looked into connecting other services like Loggly to the Resin containers, but they seem to require multi-container setups, which don't seem to be supported on Resin right now (or maybe I'm wrong there). Do you have any advice on the best way to keep long (preferably searchable) logs with Resin containers? Certainly, keeping the data locally and connecting via terminal every time is one approach, but is there a good way to ship the logs somewhere central using a service like Loggly?



There are some experiments with multi-container apps; you can read more in this blog post.

Checking logs through the web terminal is not the best solution; that's why I run a file server on the device, which allows me to download the logs and examine them locally.
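A file server like that can be very small. As a sketch, Python's built-in http.server can serve the log directory (the /data path is from this thread; the function name, host, and port are made up):

```python
import functools
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer


def make_log_server(directory="/data", host="0.0.0.0", port=8080):
    # Serve `directory` over HTTP so log files can be downloaded
    # with a browser or curl instead of being read in the web terminal.
    handler = functools.partial(SimpleHTTPRequestHandler, directory=directory)
    return ThreadingHTTPServer((host, port), handler)


# On the device you would then run:
#   make_log_server().serve_forever()
```

This only does directory listing and downloads; anything fancier (auth, TLS) would need a real file server.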

I haven't used Loggly, but it definitely seems interesting. I will check it out (maybe a new hack day project) and let you know.


Thanks for the tip. I’ll also drop a note here if I get Loggly to work myself somehow.


I didn’t manage to send my logs to Loggly, even after trying this tutorial (which should work) and spending a bunch of time messing with the Python logging config. It works when I run just a gunicorn process, but I think honcho is interfering with it somehow when I run all the processes (gunicorn, redis, and celery). In the end I set both processes to log to files in /data/, where I can find a longer history if necessary, and used an ugly tail hack to still spit the logs out to stdout, so that Resin can pick them up and show them in the quick summary logs window (which is quicker than opening the tiny web terminal session every time). This is my Procfile, if it helps anyone:

redis: redis-server
web: /venv/bin/gunicorn main:app -b --chdir=/app --log-level info --access-logfile=/data/web.log --error-logfile=/data/web.error.log
web_output: while ! tail -f /data/web.log ; do sleep 1 ; done
web_error_output: while ! tail -f /data/web.error.log ; do sleep 1 ; done
worker: C_FORCE_ROOT=true /venv/bin/celery worker -A main.celery --loglevel=info  --logfile=/data/worker.log --workdir=app -B
worker_output: while ! tail -f /data/worker.log ; do sleep 1 ; done

But in the long run, I hope Resin will provide some official way™ to integrate 3rd party logging services like Loggly :slight_smile:


That’s a great suggestion, I will take it to the team :smile:
In the meantime, if you can share the repo, I can take a look and see if I can identify why Loggly didn’t work.


Well, I can’t share the repository, as the source is not open. This is the relevant Python log config I used. The issue isn’t strictly related to Resin, though, as it didn’t work even when I ran it locally using Honcho (the Python Foreman alternative). When I ran just the gunicorn process from the Procfile above, it would send events to Loggly, so I’m guessing it’s related to Honcho not letting some extra processes be created that the Loggly handler needs, or something like that.




# if blank, logging._defaultFormatter used
args=(sys.stderr, )

#      filename     mode size count enc.
args=('bopnos.log', 'a', 1024, 3, 'utf8')
# formatter=precise
# filename=logconfig.log
# maxBytes=1024
# backupCount=3



format={ "loggerName":"%(name)s", "asciTime":"%(asctime)s", "fileName":"%(filename)s", "logRecordCreationTime":"%(created)f", "functionName":"%(funcName)s", "levelNo":"%(levelno)s", "lineNo":"%(lineno)d", "time":"%(msecs)d", "levelName":"%(levelname)s", "message":"%(message)s"}
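For reference, here is a runnable, stdlib-only equivalent of those handler entries. The logger name and log path are made up, and the format string is trimmed to a few of the keys from the `format=` line above; the Loggly handler from the tutorial would just be one more handler attached to the same logger:

```python
import json
import logging
import logging.handlers
import os
import sys
import tempfile

# Writing to a temp dir so the snippet runs anywhere;
# on the device this would live somewhere under /data.
log_path = os.path.join(tempfile.mkdtemp(), "bopnos.log")

# JSON-ish format string, a subset of the keys in the config above.
formatter = logging.Formatter(
    '{ "loggerName":"%(name)s", "levelName":"%(levelname)s", '
    '"message":"%(message)s"}')

# args=(sys.stderr,) in the fragment corresponds to a StreamHandler:
console = logging.StreamHandler(sys.stderr)
console.setFormatter(formatter)

# args=('bopnos.log', 'a', 1024, 3, 'utf8') corresponds to a
# RotatingFileHandler: append mode, rotate at 1 KiB, keep 3 backups.
rotating = logging.handlers.RotatingFileHandler(
    log_path, mode="a", maxBytes=1024, backupCount=3, encoding="utf8")
rotating.setFormatter(formatter)

logger = logging.getLogger("example")
logger.setLevel(logging.INFO)
logger.addHandler(console)
logger.addHandler(rotating)

logger.info("hello")

# The log file now holds one JSON object per record:
with open(log_path) as f:
    print(json.loads(f.readline())["message"])  # prints "hello"
```

Note that the JSON stays valid only as long as the message itself contains no unescaped quotes, which is one reason a proper JSON formatter beats a hand-built format string.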