Rushing in with bug fixes


After releasing the Nuimo-Click integration with the Senic Hub, we came across a bug: if a paired Sonos speaker went down (unplugged, IP change), both Nuimo-Control and Nuimo-Click would become unresponsive. Nuimo-Control shows an X on its LED matrix when something goes wrong.

[image: 404.jpg]

The look and feel of Nuimo-Click is very similar to that of traditional switches, which rarely malfunction. Users carry that impression (the expectation that it will always work), so when they realize the Click is not working, it irks.

[image: nuimo-click.jpg]

We use DBus for managing connections between smart home devices and the controllers. For communicating with Sonos we use websockets. In the situation described above, when Sonos was not reachable, there was no timeout for such requests, and the senic-core DBus API threw the following error:

nuimo_app ERROR [MainProcess-ipc_thread][dbus.proxies:410] Introspect error on :1.16:/ComponentsService: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

Given that there was no timeout, this exception is not raised instantly; it takes a while for the DBus API to raise it, and in the meantime the hub remains in a state of suspension. Events are queued for processing, so this issue was also clogging up the event queue. As mentioned earlier: bad UX, sort of a show-stopper, and on top of that we were on a tight schedule to ship. I rushed in with a first PR, adding a timeout to all DBus method calls. Everywhere I called a DBus method, I wrapped the call in a try/except block, passed an extra timeout argument, and handled the DBusException. This fix, although it worked, was HUGE. The changeset sprawled across a lot of files. It was not comforting to get it reviewed thoroughly and to make sure I was not introducing new regressions (we are still in the process of adding more unit tests to the stack; maybe I will write a post about that).
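
For illustration, the pattern in that first PR looked roughly like this. This is a minimal sketch, not the actual changeset; the bus name, object path, and method are hypothetical, but dbus-python proxy calls do accept a timeout keyword argument (in seconds):

import logging

import dbus
import dbus.exceptions

logger = logging.getLogger(__name__)

DBUS_TIMEOUT = 5  # seconds

bus = dbus.SessionBus()
proxy = bus.get_object("com.senic.ComponentsService", "/ComponentsService")

try:
    # every DBus proxy call gets an explicit timeout...
    components = proxy.get_components(timeout=DBUS_TIMEOUT)
except dbus.exceptions.DBusException:
    # ...and its own handler, repeated at every call site; hence the huge diff
    logger.exception("DBus call failed or timed out")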

Initially, when we were working on the design of senic-core, we had decided to handle exceptions at their point of origin. With respect to that decision, my first PR (the impulsive fix) was headed in totally the wrong direction. While I waited for reviews and comments, I started looking into the websocket library, which was waiting forever for a reply from a host that was not reachable. As I googled terms like websocket client and timeout, I quickly came across a lot of links and pointers on how to set a timeout for a websocket connection. This was promising. I quickly put together something as simple as:

# library creating the websocket connection
import websocket

REQUEST_TIMEOUT = 5  # seconds

ws = websocket.create_connection(
    url, header=headers,
    sslopt=sslopt, origin=origin
)
ws.settimeout(REQUEST_TIMEOUT)

# the connection instance will now raise WebSocketTimeoutException
# whenever a blocking call exceeds REQUEST_TIMEOUT

The Sonos library interaction already had a set of try/except blocks; I added WebSocketTimeoutException to those exceptions and handled it right there. This fix was small and precise, and it was comforting in a way. I tested the fix by unplugging the Sonos speaker and interacting with Nuimo-Control: within 5 seconds (the timeout), I saw the X on it. I additionally confirmed that the system could still work with Hue and that interactions weren't clogging up. It was easy to get the change reviewed by a colleague, verified, and merged.
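
The handling itself then looked roughly like this (a simplified sketch, not the actual Sonos code; ws is the connection created above, and the exception classes come from the websocket-client library):

from websocket import WebSocketConnectionClosedException, WebSocketTimeoutException

try:
    ws.send(request)
    response = ws.recv()
except (WebSocketConnectionClosedException, WebSocketTimeoutException):
    # handled at the point of origin, per the senic-core design decision
    logger.exception("Sonos speaker not reachable")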

[image: SBjpgScaled.jpg]

At times, symptoms can be distracting, and putting a bandage on them won't fix the leak. Don't rush; take your time, identify the root of the problem, and then fix it.

This situation also made us think about how to improve the way we use the DBus APIs. We had put together the first working version of the API by following a blog series on the subject and the examples shipped with the dbus-python (Python bindings for D-Bus) package. There is a lot of room for improvement. We have tried to better understand how to use these APIs, to document things, and to stress test them. I will write about that too, sometime.

PS: The included pencil sketch was done by Sudarshan

Let's unittest using Python Mock. Wait, but what to mock?


At Senic, on our Hub, we use Supervisord for managing applications. I am not sure about its Python 3 compatibility, but it is one of the reasons we still depend on Python 2.7. Given that Python 2.7's life-support clock is ticking, we recently merged big changes to use Systemd instead. I came across this small, clean Python API for managing systemd services. We included it in our stack, and I wrote a small utility function around it:

import logging

import dbus.exceptions
from sysdmanager import SystemdManager

logger = logging.getLogger(__name__)


def manage_service(service_name, action):
    '''Manage the systemd unit passed in the service_name argument.

    Tries to stop/start/restart the unit based on the value passed
    in action.
    '''
    try:
        systemd_manager = SystemdManager()
    except dbus.exceptions.DBusException:
        logger.exception("Systemd is not accessible via DBus")
    else:
        if action == "start":
            if not systemd_manager.start_unit(service_name):
                logger.info("Failed to start {}".format(service_name))
        elif action == "stop":
            if not systemd_manager.stop_unit(service_name):
                logger.info("Failed to stop {}".format(service_name))
        elif action == "restart":
            if not systemd_manager.restart_unit(service_name):
                logger.info("Failed to restart {}".format(service_name))
        else:
            logger.info(
                "Invalid action: {} on service: {}".format(action, service_name)
            )

With this in place, manage_service can be imported in any module, and restarting a service is just manage_service('service_name', 'restart'). The next step was putting together some unit tests for this utility, to confirm it behaves the way it should.
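
For example, a hypothetical call site (the unit name here is made up):

from systemd_process_utils import manage_service

manage_service("nuimo_app.service", "restart")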

This smallish task got me confused for quite some time. My first doubt was how and where to start. The library needs DBus's SystemBus and Systemd's DBus API to start, stop, load, and unload systemd services. I can't write tests directly against these APIs, as they would need root privileges to work; additionally, they won't work on Travis. So I realized I would need to mock, and with that realization came the second doubt: which part to mock? Should I mock the things needed by the library, or should I mock the library? When I looked into mocking the Systemd APIs on DBus via dbus-mock, I realized that could become too big a task. So: let's mock the library objects and functions which get called when I call the utility function manage_service. I had read about Python's mock support, and while trying to understand it, it came across as a powerful tool, and I remembered that Uncle Ben once rightly said: with great power comes great responsibility. At one point, I was almost convinced to hijack the utility function itself and assert around the different branches inside it. But I soon realized that would defeat the purpose of unit-testing the utility, and sanity prevailed. After looking around at lots of blogs and tutorials, and after conversations with peers, I carefully mocked the SystemdManager functions, like stop_unit and start_unit, which get called internally by the library; that way I was able to write tests for the different arguments that can be passed to manage_service. In the end the tests looked something like this:

import unittest
from unittest import mock

from systemd_process_utils import manage_service


class TestSystemdUtil(unittest.TestCase):
    service_name = "service_name"

    @mock.patch('senic_hub.commons.systemd_process_utils.SystemdManager')
    def test_manage_service(self, mock_systemd):
        # When: start_unit works, it returns True
        mock_systemd.return_value.start_unit.return_value = True
        manage_service(self.service_name, "start")
        mock_systemd().start_unit.assert_called_with(self.service_name)

        # When: start_unit fails, it returns False
        mock_systemd.return_value.start_unit.return_value = False
        manage_service(self.service_name, "start")
        mock_systemd().start_unit.assert_called_with(self.service_name)

        # When: stop_unit works, it returns True
        mock_systemd.return_value.stop_unit.return_value = True
        manage_service(self.service_name, "stop")
        mock_systemd().stop_unit.assert_called_with(self.service_name)


if __name__ == '__main__':
    unittest.main()

SoFee


After finishing my work with TaxSpanner, I worked on a personal project, SoFee, for around six months. In this post I am documenting the idea behind it, what I had expected it to become, and where I left it off (till now).

Features

RSS Feed

I have realized that many of my personal projects revolve broadly around archiving the content I read and share online, be it news articles, blogs, tweet threads, or videos. This kind of data feels like sand that keeps slipping away as I try to hold it, to keep it fresh, accessible, and indexed for reference. The itch got triggered by the sunset of Google Reader. Punchagan had introduced me to Google Reader; I had soon started following a lot of people there and used its browser extension to archive the content I was reading. In some ways, with SoFee, I was trying to recreate the Google Reader experience with the people I follow on Twitter. The first iteration of the project was just that: it produced an OPML file which could be added to any feed reader, and I would get a separate feed for each of the people I follow.
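
For reference, the generated OPML was essentially a list of feed outlines, one per followed account, roughly like this (structure per the OPML spec; the URLs here are made up):

<opml version="1.0">
  <head>
    <title>SoFee feeds</title>
  </head>
  <body>
    <outline text="punchagan" type="rss"
             xmlUrl="https://sofee.example.com/feeds/punchagan.xml"/>
    <outline text="some_handle" type="rss"
             xmlUrl="https://sofee.example.com/feeds/some_handle.xml"/>
  </body>
</opml>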

Archiving the content of links

While taking my feed and data out of Google Reader, I also noticed that it had preserved the content of some of the links. When I tried to access them again, some links had been made private and some were no longer available (404). While working on SoFee, I came across the term link-rot, and I thought this aspect of preserving content was crucial: I wanted to archive the content, index it, and make it accessible. Often, I learn or establish some facts while reading this content, and I wanted it to be referable so that I could revisit it and confirm its origins. I noticed Firefox's reader mode and used the JavaScript library behind it, Readability, to extract cleaned-up content from the links and add it to the RSS feed I was generating. I also came across Archive.org's Web ARChive (WARC) format for storing or archiving web pages, and projects using it. I wanted to add this feature to SoFee, so that pages would no longer go unavailable and there would be individual, archived, intact content to go back to. In the end, after looking at, trying out, and playing with different libraries and tools, I wasn't able to finish and integrate it.
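
A rough sketch of that extraction step, assuming the Python port readability-lxml instead of the JavaScript library mentioned above:

import requests
from readability import Document

html = requests.get("https://example.com/article", timeout=10).text
doc = Document(html)
title = doc.title()      # cleaned-up article title
content = doc.summary()  # readable article HTML, ready for an RSS entry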

Personally Trained Model

The last feature I had thought of including was a personally trained model which could segregate these links into separate groups or categories. Both Facebook and Twitter were messing around with the timeline; I didn't want that to happen to mine. I wanted a way to control it myself, in a way that suited me. As a first step, I separated my Twitter timeline into tweets which had links and others which were just tweets or updates. Second, I listed all these tweets in chronological order. With the content extracted using Readability, I experimented with unsupervised learning (KMeans, LDA, visualization of results) to create dynamic groups, but the results weren't satisfying enough to include as a feature. For supervised learning, I was thinking of having a default model, based on Reddit categories or the Wikipedia API, which could create a generic starter dataset, and then letting users reinforce and steer the grouping to their liking. Eventually, users would have a private, personal model which could be applied to any article, news site, or source, and it would give them the clustering they want. Again, I failed to put this feature together.
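
A minimal sketch of one such experiment, assuming scikit-learn and a list of article texts already extracted with Readability:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = ["...extracted article text...", "..."]  # Readability output

vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(articles)

km = KMeans(n_clusters=5, random_state=42)
labels = km.fit_predict(X)  # one cluster id per article, used to group links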

What I ended up with and future plans

Initially, I didn't want to get into UI and UX, and wanted to leave that part to the popular, established feed readers. But that slowed down user onboarding and feedback. I eventually ended up with a small web interface where the links and their content were listed, and the timeline was updated every three hours or so. I stopped working on this project as I started working with Senic, and the project kept running for well over a year. Now it's non-functional, but I learned a lot while putting together what was working. It is a pretty simple project, where we could charge users a fee for hosting their content on a designated small VPS instance or running a lambda service (to fetch the updated timeline and apply their model to cluster the data), while allowing them full control of their data (usage, deletion, updating). I will for sure use these learnings to put together a more complete version of the project as SoFee 2.0; let's see when that happens (2019 resolution?).

Bitbake recipes: part 2


Continuing from the last post.

Title of issue: "Add leveldb to senic-os"

Now that we had the leveldb Python library installed, we started using it. We have multiple applications/processes accessing the DB, and we ran into concurrency issues. We tried a couple of things, like making sure every access opens the DB connection and closes it, but that didn't pass the multiprocessing test:

import unittest
import random
import string
from multiprocessing import Process

# DbAPIWrapper is our internal wrapper around the DB bindings; the import
# path and the update_db helper below are sketched in for completeness.
from db_api_wrapper import DbAPIWrapper


def update_db():
    # runs in a separate process and writes to the same DB
    db = DbAPIWrapper('/tmp/test_db.db')
    db.write("key", "new value")


class TestDevicesDB(unittest.TestCase):
    def setUp(self):
        self.db = DbAPIWrapper('/tmp/test_db.db')
        self.value = ''.join(
            random.choice(string.ascii_letters + string.digits)
            for _ in range(10)
        )

    def test_multiple_instance_access_to_db(self):
        self.db.write("key", self.value)
        self.assertEqual(self.db.read("key"), self.value)
        p = Process(target=update_db)
        p.start()
        p.join()
        self.assertEqual(self.db.read("key"), "new value")
        self.db.delete("key")
        self.assertEqual(self.db.read("key"), '')
        p = Process(target=update_db)
        p.start()
        p.join()
        self.assertEqual(self.db.read("key"), "new value")

The leveldb documentation mentions that the DB can be opened by only one process at a time, and writing a wrapper to enforce that condition (locks, semaphores) in Python seemed like a bit too much work. We had a DB and its bindings set up on the OS, but we weren't able to use it. Back to square one.
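
The constraint is easy to reproduce. A sketch assuming the plyvel bindings: the second open fails because LevelDB keeps a lock file in the DB directory:

import plyvel

db1 = plyvel.DB('/tmp/test_db.db', create_if_missing=True)
try:
    db2 = plyvel.DB('/tmp/test_db.db')
except plyvel.IOError:
    # LevelDB refuses a second handle while the lock is held
    print("DB is locked by another handle/process")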

This time, we thought of first confirming that Berkeley DB indeed supports multiple processes accessing the DB. For Raspbian, there are packages available for `berkeleydb` and its Python bindings, `bsddb3`. I installed them on a Raspberry Pi and confirmed that the above tests pass, even with multiple Python instances accessing the same DB and reading/writing values. Once this was confirmed, we knew that this DB would work for our requirements, so we resumed getting the dependencies sorted out on Senic-OS.
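
The manual check on the Pi was essentially the snippet below, run from two Python shells at the same time (a sketch using bsddb3's legacy-style interface):

import bsddb3

# unlike leveldb, both processes can hold the same DB file open
db = bsddb3.hashopen('/tmp/test_db.db', 'c')
db[b'key'] = b'new value'
db.sync()
print(db[b'key'])
db.close()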

Title of issue: "setup BerkeleyDB and bsddb3 on senic-os"

The first thing to sort out was how to get the `berkeleydb` binary tools onto senic-os. The library, `libdb.so`, was there, but the binary tools were not. Some email threads mentioned installing `db-bin`, but running `bitbake db-bin` threw the error `nothing provides db-bin`. I again reached out on the IRC channel, where good folks pointed me to the section of the recipe indicating that the binaries would be installed on the OS directly if I included `db-bin` in the list of packages. First step sorted out \o/
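
For reference, the change amounted to adding `db-bin` to the image's package list, along these lines (the exact variable depends on the image recipe):

IMAGE_INSTALL_append = " db-bin"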

Second, though the `libdb.so` file was there, the `bsddb3` recipe was somehow not able to locate it. With better insight into recipes from the work done on `leveldb`, I looked again at the initial recipe I had put together. I had to figure out how to pass extra arguments to Python's `setup` tooling, giving it the location of the folders where it could find `libdb.so`. That was the right question to ask and search for; the bitbake documentation and Google helped, and finally, with the following recipe, I was able to get `bsddb3` installed on senic-os:

SUMMARY = "pybsddb is the Python binding for the Oracle Berkeley DB"
HOMEPAGE = "https://www.jcea.es/programacion/pybsddb.htm"
SECTION = "devel/python"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=b3c8796b7e1f8eda0aef7e242e18ed43"
SRC_URI[sha256sum] = "42d621f4037425afcb16b67d5600c4556271a071a9a7f7f2c2b1ba65bc582d05"

inherit pypi setuptools3

PYPI_PACKAGE = "bsddb3"

DEPENDS = "db \
  python3-core \
"

DISTUTILS_BUILD_ARGS = "--berkeley-db=${STAGING_EXECPREFIXDIR}"
DISTUTILS_INSTALL_ARGS = "--berkeley-db=${STAGING_EXECPREFIXDIR}"

RDEPENDS_${PN} = "db \
  python3-core \
"

We baked Senic-OS with these libraries and dependencies, ran the multiprocessing tests, and confirmed that we were indeed able to access the DB from our multiple applications. With this, the task was marked done, and a new task opened up: migrating the applications to use this DB wrapper instead of config files.

Bitbake recipes: part 1


In this two-part post, I am trying to document the process of deciding on an issue, and how it evolves while we try to resolve or "close" it.

After the initial release of Nuimo Click, we looked at the pain points in our backend stack. Currently we use a lot of config files across processes; some applications write to them, others read/need them to connect to smart home devices. We use threads which "watch" changes to these config files so that applications update themselves. All of this is becoming a pain and leaving us in nondeterministic states, which results in bad UX and is hard to reproduce and debug. We had set aside a sprint to look into a database, or better, a key-value storage system, and I took a shot at it.

Title of issue: "Add DB to senic-os"

Avichal, my colleague, had already done some benchmarking of the available options, and I started from that reference point. There was initial consensus on using the popular BerkeleyDB; the library (libdb.so) was already part of SenicOS, and we just wanted a python3 library recipe for it so we could start using it.

Title of issue: "Add BerkeleyDB to senic-os"

I started exploring how to get the library installed but got stuck on the following error:

user@senic-hub-02c0008185841ef0:~# ls -la /usr/lib/libdb-5.so 
lrwxrwxrwx    1 user     user            12 May 15 13:44 /usr/lib/libdb-5.so -> libdb-5.3.so
user@senic-hub-02c0008185841ef0:~# pip3.5 install bsddb3
Collecting bsddb3
  Downloading https://files.pythonhosted.org/packages/e9/fc/ebfbd4de236b493f9ece156f816c21df0ae87ccc22604c5f9b664efef1b9/bsddb3-6.2.6.tar.gz (239kB)
    100% |                                | 245kB 447kB/s 
    Complete output from command python setup.py egg_info:
    Can't find a local Berkeley DB installation.
    (suggestion: try the --berkeley-db=/path/to/bsddb option)

    ----------------------------------------

Some background: Yocto is a custom embedded Linux distribution which can be tailored for any SoC to create a small-footprint, lightweight distribution. For libraries and packages which need to be shipped with the OS, we write recipes. These are small config files which define the components, the source of the package, its dependencies, the tools needed to compile it, and so on. Most packages already have recipes in the layer index, where they can be searched and integrated out of the box, but at times we need to write one ourselves.

I am familiar with writing and figuring out how to put together a recipe that should work, but I needed some more understanding of the internals. While the Python package, bsddb3, is actively maintained, Berkeley DB itself was only available for download behind an Oracle sign-in. I wasn't able to get these dependencies sorted out, hence I started looking at alternatives.

leveldb was a good option: it is actively maintained, a recipe is available for its core package, and its Python library is also well maintained.

Title of issue: "Add leveldb to senic-os"

As I tried to put together a recipe for leveldb, I got stuck trying to figure out how to make Cython compile the library's header files for the native platform we used. I reached out in the IRC channel (#oe on irc.ubuntu.com) and shared my recipe and my doubt; folks there helped me understand how to enable Cython to compile for the native platform. Here is how the final recipe looked:

DESCRIPTION = "Plyvel is a fast and feature-rich Python interface to LevelDB."
HOMEPAGE = "https://github.com/wbolster/plyvel/"
SECTION = "devel/python"
LICENSE = "BSD"
LIC_FILES_CHKSUM = "file://LICENSE.rst;md5=41e1eab908ef114f2d2409de6e9ea735"
DEPENDS = "leveldb \
  python3-cython-native \
  python3-setuptools-native \
"

RDEPENDS_${PN} = "leveldb"

# using setuptools3 fails with make command
# python3native is needed to compile things using python3.5m
inherit setuptools python3native

S = "${WORKDIR}/git"
SRC_URI = "git://github.com/wbolster/plyvel.git;tag=1.0.5 \
  file://setup.patch \
"
PV = "git-${SRCPV}"

I needed to add a small patch to the Python package to get it compiled with python3:

diff --git a/Makefile b/Makefile
index 2fec651..2c2300a 100644
--- a/Makefile
+++ b/Makefile
@@ -3,8 +3,8 @@
 all: cython ext

 cython:
-	cython --version
-	cython --cplus --fast-fail --annotate plyvel/_plyvel.pyx
+	cython3 --version
+	cython3 --cplus --fast-fail --annotate plyvel/_plyvel.pyx

 ext: cython
	python setup.py build_ext --inplace --force
diff --git a/setup.py b/setup.py
index 3a69cec..42883c6 100644
--- a/setup.py
+++ b/setup.py
@@ -1,7 +1,10 @@
 from os.path import join, dirname
 from setuptools import setup
+from distutils.command.build import build
 from setuptools.extension import Extension
 import platform
+from distutils.command.install import install as DistutilsInstall
+from subprocess import call

 CURRENT_DIR = dirname(__file__)

@@ -14,6 +17,16 @@ def get_file_contents(filename):
	 return fp.read()


+class BuildCython(build):
+    def run(self):
+        cmd = 'make'
+        call(cmd)
+        build.run(self)
+        # do_pre_install_stuff()
+        # DistutilsInstall.run(self)
+        # do_post_install_stuff()
+
+
 extra_compile_args = ['-Wall', '-g']
 if platform.system() == 'Darwin':
     extra_compile_args += ['-mmacosx-version-min=10.7', '-stdlib=libc++']
@@ -53,5 +66,8 @@ setup(
	 "Topic :: Database",
	 "Topic :: Database :: Database Engines/Servers",
	 "Topic :: Software Development :: Libraries :: Python Modules",
-    ]
+    ],
+    cmdclass={
+        'build':BuildCython
+    }
 )

To be continued…