Let's unit test using Python Mock, but wait, what to mock?

At Senic, on our Hub, we use Supervisord for managing applications. I am not sure about its Python 3 compatibility, but it is one of the reasons we still have a dependency on Python 2.7. Given that the Python 2.7 life-support clock is ticking, we recently merged big changes to use Systemd instead. I came across this small, clean Python API for managing systemd services. We included it in our stack and I wrote a small utility function around it:

import logging
from sysdmanager import SystemdManager
import dbus.exceptions

logger = logging.getLogger(__name__)

def manage_service(service_name, action):
    '''Manage the systemd unit passed in service_name.

    Tries to stop/start/restart the unit based on the value
    passed in action.
    '''
    try:
        systemd_manager = SystemdManager()
    except dbus.exceptions.DBusException:
        logger.exception("Systemd service is not accessible via DBus")
        return
    if action == "start":
        if not systemd_manager.start_unit(service_name):
            logger.info("Failed to start {}".format(service_name))
    elif action == "stop":
        if not systemd_manager.stop_unit(service_name):
            logger.info("Failed to stop {}".format(service_name))
    elif action == "restart":
        if not systemd_manager.restart_unit(service_name):
            logger.info("Failed to restart {}".format(service_name))
    else:
        logger.error("Invalid action: {} on service: {}".format(action, service_name))

With this in place, manage_service can be imported in any module, and restarting a service is as simple as manage_service('service_name', 'restart'). The next thing was putting together some unit tests for this utility, to confirm it behaves the way it should.

This smallish task got me confused for quite some time. My first doubt was how and where to start. The library needs the system DBus and Systemd's DBus API to start, stop, load and unload systemd services. I can't write tests directly against these APIs, as they would need root privileges to work; additionally, they won't work on Travis. So I realized I would need to mock, and with that realization came the second doubt: which part to mock? Should I mock the things needed by the library, or should I mock the library itself? When I looked into mocking Systemd's DBus APIs via dbus-mock, I realized that could become too big a task. So instead, let's mock the library objects and functions which get called when I call the utility function manage_service. I had noticed python's mock support, and while trying to understand it, it came across as a powerful tool, and I remembered what Uncle Ben once rightly said: with great power comes great responsibility. At one point I was almost convinced of hijacking the utility function and putting asserts around the different branches there, but I soon realized that would defeat the purpose of unit-testing the utility, and sanity prevailed. After looking at lots of blogs and tutorials, and conversations with peers, I carefully mocked the SystemdManager functions which get called internally by the library, like start_unit and stop_unit, and that way I was able to write tests for the different arguments which could be passed to manage_service. In the end the tests looked something like this:

import unittest
from unittest import mock

from systemd_process_utils import manage_service


class TestSystemdUtil(unittest.TestCase):
    service_name = "service_name"

    @mock.patch("systemd_process_utils.SystemdManager")
    def test_manage_service(self, mock_systemd):
        # When: start_unit works, it returns True
        mock_systemd.return_value.start_unit.return_value = True
        manage_service(self.service_name, "start")
        mock_systemd.return_value.start_unit.assert_called_with(self.service_name)

        # When: start_unit fails, it returns False
        mock_systemd.return_value.start_unit.return_value = False
        manage_service(self.service_name, "start")

        # When: stop_unit works, it returns True
        mock_systemd.return_value.stop_unit.return_value = True
        manage_service(self.service_name, "stop")
        mock_systemd.return_value.stop_unit.assert_called_with(self.service_name)


if __name__ == '__main__':
    unittest.main()
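One detail worth spelling out: mock.patch has to target the name in the namespace of the module under test (here systemd_process_utils.SystemdManager), not the library the class originally comes from. Here is a minimal, self-contained sketch of that rule, with a stand-in SystemdManager since the real one needs DBus; all names below are illustrative:

```python
import unittest
from unittest import mock


class SystemdManager:
    """Stand-in for sysdmanager.SystemdManager (illustration only);
    the real one talks to systemd over DBus."""

    def start_unit(self, name):
        raise RuntimeError("needs DBus and root privileges")


def start_service(service_name):
    # The lookup happens in *this* module's namespace, which is why
    # mock.patch below targets this module, not the original package.
    return SystemdManager().start_unit(service_name)


class TestPatchTarget(unittest.TestCase):
    @mock.patch(__name__ + ".SystemdManager")
    def test_start(self, mock_systemd):
        # The patched class returns a Mock instance whose start_unit
        # we control; the RuntimeError above is never raised.
        mock_systemd.return_value.start_unit.return_value = True
        self.assertTrue(start_service("nginx.service"))
        mock_systemd.return_value.start_unit.assert_called_once_with(
            "nginx.service")
```

Patching the wrong place (the library's own module) leaves the local reference untouched, which is the classic mock gotcha.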


After finishing my work with TaxSpanner, I worked on a personal project, SoFee, for around six months. In this post I am documenting the idea behind it, what I had expected it to become, and where I left it off (till now).


RSS Feed

I have realized that many of my personal projects revolve broadly around archiving the content I am reading and sharing online, be it news articles, blogs, tweet threads or videos. This form of data feels like sand which keeps slipping away as I try to hold it, to keep it fresh, accessible and indexed for reference. This got triggered after the sunset of Google Reader. Punchagan had introduced me to Google Reader; I soon started following a lot of people there and used its browser extension to archive the content I was reading. In some ways, with SoFee, I was trying to recreate the Google Reader experience with the people I was following on twitter. The first iteration of the project was just that: it would give an OPML file which could be added to any feed-reader, and I would get a separate feed for each of the people I am following.
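An OPML export is just a small XML document listing feeds, so generating one is only a few lines. A minimal sketch (names and structure here are illustrative, not SoFee's actual code):

```python
import xml.etree.ElementTree as ET


def build_opml(feeds):
    """Build a minimal OPML document from (title, feed_url) pairs."""
    opml = ET.Element('opml', version='2.0')
    head = ET.SubElement(opml, 'head')
    ET.SubElement(head, 'title').text = 'feeds'
    body = ET.SubElement(opml, 'body')
    for title, url in feeds:
        # Each feed becomes one <outline> entry a feed-reader can import.
        ET.SubElement(body, 'outline', type='rss', text=title, xmlUrl=url)
    return ET.tostring(opml, encoding='unicode')


print(build_opml([("A blog", "https://example.com/feed.xml")]))
```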

Archiving the content of links

While taking my feed and data out of Google Reader, I also noticed that it had preserved the content of some of the links. When I tried to access them again, some links had been made private and some were no longer available (404). While working on SoFee, I came across the term link-rot, and I thought this aspect of preserving the content was crucial: I wanted to archive the content, index it and make it accessible. Oftentimes I learn or establish some fact while reading this content, and I wanted it to be referable so that I could revisit it and confirm its origins. I noticed Firefox's reader-mode and used its javascript library, Readability, to extract cleaned-up content from the links and add it to the RSS feed I was generating. I also came across Archive.org's Web ARChive/WARC format for storing or archiving web-pages, and projects using it. I wanted to add this feature to SoFee, so that pages would no longer go unavailable and there would be individual, archived, intact content to go back to. In the end, after looking at, trying out and playing with different libraries and tools, I wasn't able to finish and integrate it.
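Readability's actual scoring is fairly involved (link density, class-name heuristics, and so on), but the core idea of keeping only the long text paragraphs can be sketched with the standard library. This is a toy illustration of the idea, not the real algorithm:

```python
from html.parser import HTMLParser


class ParagraphExtractor(HTMLParser):
    """Collect the text inside <p> tags; a toy readability-style parser."""

    def __init__(self):
        super().__init__()
        self._in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self.paragraphs[-1] += data


def extract_main_text(html, min_length=40):
    """Keep only paragraphs longer than min_length characters,
    discarding short nav/boilerplate fragments."""
    parser = ParagraphExtractor()
    parser.feed(html)
    return "\n\n".join(p.strip() for p in parser.paragraphs
                       if len(p.strip()) >= min_length)
```

The real library additionally walks the DOM scoring every candidate node, which is what makes it robust across messy real-world pages.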

Personally Trained Model

The last feature I had thought of including was a personally trained model which could segregate these links into separate groups or categories. Both Facebook and twitter were messing around with the timeline; I didn't want that to happen to mine. I wanted a way to control it myself, in a way which suited me. As a first step, I separated my timeline into tweets which had links and others which were just plain tweets or updates. Secondly, I listed all these tweets in chronological order. With the content extracted using Readability, I experimented with unsupervised learning (KMeans, LDA, visualization of results) to create dynamic groups, but the results weren't satisfying enough to be included as a feature. For supervised learning, I was thinking of having a default model based on Reddit categories or the wikipedia API, which could create a generic starter data set, then allow users to reinforce and steer the grouping to their liking, and eventually allow them a private, personal model which could be applied to any article, news site or source to give them the clustering they want. Again, I failed at putting this feature together.

What I ended up with and future plans

Initially, I didn't want to get into UI and UX, and wanted to leave that part to popular, established feed-readers. But that slowed down user onboarding and feedback. I eventually ended up with a small web interface where the links and their content were listed and the timeline was updated every three hours or so. I stopped working on the project when I started working with Senic, and it kept running for well over a year. Now it's non-functional, but I learned a lot while putting together what was working. It is a pretty simple project where we could simply charge users a fee for hosting their content on a designated small VPS instance or running a lambda service (to fetch the updated timeline and apply their model to cluster the data), allowing them full control of their data (usage, deletion, updation). I will for sure use my learnings to put together a more complete version of the project with SoFee 2.0; let's see when that happens (2019 resolution?).

Bitbake recipes: part 2

Continuing from last post

Title of issue was: "Add leveldb to senic-os"

Now that we had the leveldb python library installed, we started using it. We have multiple applications/processes accessing the DB, and we ran into concurrency issues. We tried a couple of things, like making sure every access opens a DB connection and closes it, but it didn't pass the multiprocessing test:

import unittest
import os
import random
import string
from multiprocessing import Process

# DbAPIWrapper is our internal wrapper around the DB; the import path
# here is illustrative.
from db_api_wrapper import DbAPIWrapper


def update_db():
    # Reconstructed helper: runs in a separate process and overwrites "key".
    db = DbAPIWrapper('/tmp/test_db.db')
    db.write("key", "new value")


class TestDevicesDB(unittest.TestCase):
    def setUp(self):
        self.db = DbAPIWrapper('/tmp/test_db.db')
        self.value = ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(10))

    def test_multiple_instance_access_to_db(self):
        self.db.write("key", self.value)
        self.assertEqual(self.db.read("key"), self.value)
        p = Process(target=update_db)
        p.start()
        p.join()
        self.assertEqual(self.db.read("key"), "new value")
        self.db.delete("key")  # assuming the wrapper exposes delete()
        self.assertEqual(self.db.read("key"), '')
        p = Process(target=update_db)
        p.start()
        p.join()
        self.assertEqual(self.db.read("key"), "new value")

The leveldb documentation mentions that the DB can be opened by only one process at a time, and writing a wrapper to enforce this condition (locks, semaphores) in python seemed like a bit too much work. We had got a DB and its bindings set up on the OS, but weren't able to use it. Back to square one.

This time, we thought of first confirming that Berkeley DB indeed supports multiple processes accessing the DB. For Raspbian, there are packages available for `berkeleydb` and its python bindings, `bsddb3`. I installed them on a Raspberry Pi and confirmed that the above tests pass, even with multiple python instances accessing the same DB and reading/writing values. Once this was confirmed, we knew that this DB would work for our requirements, so we resumed getting the dependencies sorted out on Senic-OS.

Title of issue: "setup BerkeleyDB and bsddb3 on senic-os"

The first thing to sort out was how to get the `berkeleydb` binary tools on senic-os. The library, `libdb.so`, was there, but the binary tools were not. Some email threads mentioned installing `db-bin`, but running `bitbake db-bin` threw the error `nothing provides db-bin`. I again reached out on the IRC channel, where good folks pointed me to the section of the recipe mentioning that the binaries would be installed directly on the OS if I included `db-bin` in the list of packages. First step sorted out \o/

Second, though the `libdb.so` file was there, the `bsddb3` recipe was somehow not able to locate it. With some better insight into recipes from the work done on `leveldb`, I again looked at the initial recipe I had put together. I had to figure out how to pass extra arguments to python's `setup` tools, giving it the location of the folders where it could find `libdb.so`. This was the right question to ask and search for; the bitbake documentation and google helped, and finally, with the following recipe, I was able to get `bsddb3` installed on senic-os:

SUMMARY = "pybsddb is the Python binding for the Oracle Berkeley DB"
HOMEPAGE = "https://www.jcea.es/programacion/pybsddb.htm"
SECTION = "devel/python"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=b3c8796b7e1f8eda0aef7e242e18ed43"
SRC_URI[sha256sum] = "42d621f4037425afcb16b67d5600c4556271a071a9a7f7f2c2b1ba65bc582d05"

inherit pypi setuptools3

PYPI_PACKAGE = "bsddb3"

DEPENDS = "db \
  python3-core \
"

RDEPENDS_${PN} = "db"
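The part that does the actual locating deserves a callout. I don't have the exact snippet at hand, but with the distutils/setuptools bbclasses, extra `setup.py` arguments are passed via variables along these lines (the variable names and staging path are from memory, so treat this as a sketch):

```
# Sketch: point bsddb3's setup.py at the staged Berkeley DB.
# bsddb3 supports a --berkeley-db=/path option for exactly this.
DISTUTILS_BUILD_ARGS += "--berkeley-db=${STAGING_EXECPREFIXDIR}"
DISTUTILS_INSTALL_ARGS += "--berkeley-db=${STAGING_EXECPREFIXDIR}"
```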

We baked Senic-OS with these libraries and dependencies, ran the multiprocessing tests, and confirmed that we were indeed able to access the DB from our multiple applications. With this, the task got marked done, and a new task opened up: migrating applications to use this DB wrapper instead of config files.

Bitbake recipes: part 1

In this two-part post, I am trying to document the process of deciding on an issue, and how it evolves while trying to resolve or _"close"_ it.

After the initial release of Nuimo Click, we looked at the pain points we had in our backend stack. Currently we use a lot of config files across processes; some applications write to them, others read/need them to connect to smart home devices. We use threads which "watch" changes to these config files so that applications can update themselves. This overall is becoming a pain and also leaving us in nondeterministic states, which results in bad UX and is hard to reproduce and debug. We had thought of using a database, or better, a key-value storage system, for one sprint, and I tried to take a shot at it.

Title of issue: "Add DB to senic-os"

Avichal, my colleague, had already done some benchmarking of the available options, and I started with that reference point. There was initial consensus on using the popular BerkeleyDB; the library (libdb.so) was already part of SenicOS, and we just wanted a python3 library recipe for it so we could start using it.

Title of issue: "Add BerkeleyDB to senic-os"

I started exploring how to get the library installed but got stuck with the following bug:

user@senic-hub-02c0008185841ef0:~# ls -la /usr/lib/libdb-5.so 
lrwxrwxrwx    1 user     user            12 May 15 13:44 /usr/lib/libdb-5.so -> libdb-5.3.so
user@senic-hub-02c0008185841ef0:~# pip3.5 install bsddb3
Collecting bsddb3
  Downloading https://files.pythonhosted.org/packages/e9/fc/ebfbd4de236b493f9ece156f816c21df0ae87ccc22604c5f9b664efef1b9/bsddb3-6.2.6.tar.gz (239kB)
    100% |                                | 245kB 447kB/s 
    Complete output from command python setup.py egg_info:
    Can't find a local Berkeley DB installation.
    (suggestion: try the --berkeley-db=/path/to/bsddb option)

Some background: Yocto is a custom embedded linux distribution which can be tailored for any SoC to create a small-footprint, lightweight distribution. For libraries and packages which need to be shipped with the OS, we write recipes. These are small config files which define the components, the source of the package, its dependencies, the tools needed to compile the recipe, etc. Most packages already have recipes in the layer-index, where they can be searched for and integrated out of the box, but at times we need to write our own.

I am familiar with writing recipes and figuring out how to put together one that should work, but I needed some more understanding of the internals. While the python package, bsddb3, is actively maintained, Berkeley DB itself was only available for download behind an Oracle sign-in. I wasn't able to get these dependencies sorted out, hence I started looking at alternatives.

leveldb was a good option: it is actively maintained, has a recipe available for its core package, and its python library is also well maintained.

Title of issue: "Add leveldb to senic-os"

As I tried to put together a recipe for leveldb, I got stuck trying to figure out how to make cython compile the header files from the library for the native platform we used. I reached out in the IRC channel (#oe on irc.ubuntu.com), shared my recipe and my doubt, and folks there helped me understand how to enable cython to compile for the native platform. Here is how the final recipe looked:

DESCRIPTION = "Plyvel is a fast and feature-rich Python interface to LevelDB."
HOMEPAGE = "https://github.com/wbolster/plyvel/"
SECTION = "devel/python"
LIC_FILES_CHKSUM = "file://LICENSE.rst;md5=41e1eab908ef114f2d2409de6e9ea735"

DEPENDS = "leveldb \
  python3-cython-native \
  python3-setuptools-native \
"

RDEPENDS_${PN} = "leveldb"

# using setuptools3 fails with make command
# python3native is needed to compile things using python3.5m
inherit setuptools python3native

S = "${WORKDIR}/git"
SRC_URI = "git://github.com/wbolster/plyvel.git;tag=1.0.5 \
  file://setup.patch \
"
PV = "git-${SRCPV}"

I needed to add a small patch to the python package to get it compiled with python3:

diff --git a/Makefile b/Makefile
index 2fec651..2c2300a 100644
--- a/Makefile
+++ b/Makefile
@@ -3,8 +3,8 @@
 all: cython ext
-	cython --version
-	cython --cplus --fast-fail --annotate plyvel/_plyvel.pyx
+	cython3 --version
+	cython3 --cplus --fast-fail --annotate plyvel/_plyvel.pyx
 ext: cython
 	python setup.py build_ext --inplace --force
diff --git a/setup.py b/setup.py
index 3a69cec..42883c6 100644
--- a/setup.py
+++ b/setup.py
@@ -1,7 +1,10 @@
 from os.path import join, dirname
 from setuptools import setup
+from distutils.command.build import build
 from setuptools.extension import Extension
 import platform
+from distutils.command.install import install as DistutilsInstall
+from subprocess import call
 CURRENT_DIR = dirname(__file__)
@@ -14,6 +17,16 @@ def get_file_contents(filename):
         return fp.read()
+class BuildCython(build):
+    def run(self):
+        cmd = 'make'
+        call(cmd)
+        build.run(self)
+        # do_pre_install_stuff()
+        # DistutilsInstall.run(self)
+        # do_post_install_stuff()
 extra_compile_args = ['-Wall', '-g']
 if platform.system() == 'Darwin':
     extra_compile_args += ['-mmacosx-version-min=10.7', '-stdlib=libc++']
@@ -53,5 +66,8 @@ setup(
         "Topic :: Database",
         "Topic :: Database :: Database Engines/Servers",
         "Topic :: Software Development :: Libraries :: Python Modules",
-    ]
+    ],
+    cmdclass={
+        'build':BuildCython
+    }

To be continued…

Experience with Python...

Last weekend I attended the 10th edition of PyCon India in Hyderabad. I had started using python seriously around 2009, when the first PyCon India happened. During the first edition, the entire FOSSEE team was participating, and while I was getting a better understanding of Python, we as a team were also conducting introductory workshops around it. We were pitching python really hard; our workshop content had all the cool features python was offering and showed how easy coding becomes compared to Matlab or C or the other languages being used in Indian colleges. Back then I personally believed in what I was preaching, but today, I have my doubts.

In the latest PyCon edition, the opening keynote was given by Armin, and his talk was about the future of Python, given that Guido is removing himself from the decision process. One point covered in the talk was how there are guidelines via certain PEPs, but the language by itself never enforces them. I have personally used flake8 to adhere to a certain formatting, but again, it's an external tool and still not part of the standard library. The talk bubbled up some of my recent doubts about what the right approach to doing something is. During the first PyCon, if I remember correctly, we had the Zen of Python printed on the T-shirts we were handing out, and I very clearly remember the line:

There should be one– and preferably only one –obvious way to do it.

Here is a small example of how try/except leaves me confused about the right approach to use. I am quoting from a StackOverflow conversation, which goes like:

In the Python world, using exceptions for flow control is common and normal.

So, say I have a dictionary and I want to use one of its keys. I can do something like:

try:
    uri = station['uri']
except KeyError:
    pass  # handle the missing key

python's dict supports a get which can return a default value in case the key is missing. Using that, we can also handle the above situation with an if/else block:

if station.get('uri'):
    uri = station.get('uri')

#################### OR ####################
uri = station.get('uri')
if not uri:
    pass  # handle the missing key

#################### OR ####################
if 'uri' in station:
    uri = station['uri']

I already feel a bit lost and conflicted. If I follow the quote from StackOverflow, the first method is and should be the way things are done. But I have used both methods in different situations, and that conflicts with the zen ideology quoted above.

Furthermore, I have tried to use try/except with the scope of the block limited to the minimum lines of code, instead of stretching it over a complete function or piece of logic, but there, handling the return for every exception made the code hard to follow. For multiple exit points, I make sure that at every exit the number of variables returned is the same, and I write unit tests to confirm that this is the case. With all that in place and the tests passing, the code doesn't look clean anymore, and I feel I am violating the first line of the zen:

Beautiful is better than ugly.
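To make the multiple-exit-points concern concrete, here is the shape I end up with: every exit returns the same number of values, whichever style is used. A contrived sketch, not code from a real project:

```python
def station_uri_eafp(station):
    """EAFP: try the lookup, handle the failure.
    Both exits return the same (value, error) pair."""
    try:
        return station['uri'], None
    except KeyError:
        return None, 'uri missing'


def station_uri_lbyl(station):
    """LBYL: check before using. Same (value, error) shape."""
    if 'uri' in station:
        return station['uri'], None
    return None, 'uri missing'
```

Both are correct, both are testable, and yet having to pick one (and carry the error value through every exit) is exactly where the code starts feeling less than beautiful.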

Personally, I think there is a lot of room to get better at this; these are very fundamental concepts which shouldn't be hard or confusing. From here on, I will be looking out for patterns across popular libraries and their implementations, to understand their approaches and get clarity on which methodology is better suited for which situation.

Rejects: A follow up post

Recently I had written a post about cold rejections, and just after it, I got one more reject, but this one was really mindful and constructive: just the thing which can help me (or any applicant) convert rejects into an offer.

I had applied for a python developer position at Scrapinghub. Before applying, I liked that open source projects were core to their business and products, that the whole team worked remotely, and that I would get to learn and work more on extracting content from web pages and how that is evolving, from static HTML pages to dynamically loaded content using frontend frameworks (for a possible next iteration of SoFee). My application email did its work, and I got shortlisted for the next round, a trial project.

I was given a project; the task was well defined, along with the expected output, and the accompanying instructions were very clear and helpful. As I rushed to follow up and submit the first working code, they gave me more time to debug, improve and iron out things. This is where I made my mistake, I think: I debugged the code, tested that it worked, wrote a few test cases, confirmed that the output was what they expected, and followed up with them. They took a couple of days to go over my submission, results and code, and followed up with a rejection and feedback on where I went wrong:

Simple parsing tasks are better handled using regular expressions. The reviewers found the code complicated and lacking in Python idioms when compared to other solutions that solve the same task.

As I mentioned earlier, I missed cleaning up and optimizing my code, and instead rushed to submit. This feedback is really helpful, and I know where I have to work more and improve. Not just that, even working on the task was a learning experience. All in all, while it was saddening that I didn't make it, the process was really constructive, and I hope others too will shift to similar methods for their recruitment.


Some days back, #ShareYourRejection was trending on twitter, and a lot of people were sharing their rejection stories and how they have grown despite them, and at times because of them (given that twitter search results are ever-changing, here is a link to some compiled tweets from that time period).

There is a triumphant, happy-ending vibe around those stories, so I wasn't sure if my rejections would fit in. Mine are more related to this blog post on rejection and the HN thread around it. I went through the post and discussion, trying to relate what was being said and discussed there to what I had experienced. Here are a few rejection emails I have got recently:

Thank you for your interest in this job. We have now reviewed your application and we regret to inform you that it has not been selected for further consideration.

We have looked at your resume and, although we appreciate your background and experience, we are choosing not to move forward at this time.

The three rejects below are from the same company, and after the rejection text they also mention, without fail, "we encourage you to keep an eye on the Work With Us page":

We don't think that your skills and experience are a match for this position at this time.

Thank you very much for applying again. Unfortunately we still don't think there's a good fit.

We don't think there's a good fit right now.

And some more:

Your background is very impressive, however we are not looking for someone with your expertise at this time.

We regret to inform you that, after careful consideration, your profile does not fully match the requirements for the vacant position.

As it is hard to make out what I got wrong and where, I do try to follow up with some of them asking for directions. For example, one person who had been communicating with me had blogged about how hard it is to judge and take a call on applications. I read the post, tried to understand his position, and also asked for feedback, saying, "If it's not much trouble, can you tell me which skills I am missing? Or, as you mentioned in your post, was there something off in the way I communicated or drafted my application?". But I got no response.

Apart from a few times when the application was for a specific role, I applied for generic developer positions. While the rejection blog post I mentioned above talks about how sending feedback is a lot of work, personally, on the receiving end, the rejects above didn't help much. Furthermore, I applied to these positions after going through the companies, their employee blogs and the work culture they describe, and from those it comes across that they would be open and willing to share their views on an application, but clearly there is a gap between expectations and reality. In the meanwhile, I keep revisiting things to improve on my own end, but I am clearly missing something critical, as I keep getting very similar rejections. Though these rejections hurt, I would still prefer them over no response (getting ghosted), which I feel is the worst kind of reject and also pretty common.

PS: Thanks punch for suggesting the topic and feedback on the content.

Teacher's Day

I remember it was the 5th of September, 2010, and I was at IISc. The Mahoul/माहोल, the environment, was of celebrating teachers' day. I think it is a nice tradition, where we end up reflecting upon the many teachers we have had in our lives and their contributions to them. Some of my college teachers were on GTalk, so I reached out to them there, and while walking back from the local cafeteria I was looking at my contact list and found my high school English teacher's contact. I called her, rather nervously. The call went normal-ish; a small reminder was needed from my side, as it had been a long time since I finished school. After wishing her, we started catching up on what I had been up to. She enquired if I had visited Bikaner, to which I replied, "Yes, I did came in July", and she replied, *"I did come"*. I got confused and blurted out, "Hain?", my version of "Ehh", to which she replied, "I did come. You said, 'I did came'; after 'did', you should use the verb in its first form." I was like, ohh, sorry for the mistake and thanks for the correction. A nice lesson for the right day, one which I still remember, and I think this one is going to stick for long. Thank you Ma'am.

bugs and lost opportunities

I think bugs in code are good; they give us an opportunity to correct ourselves. Just as in any skill, where mistakes eventually become your wisdom, bugs are those mistakes in coding. But at times, because of deadlines or bugs-in-production situations, we miss those learning moments. We keep those lines commented out, in the hope that as things cool down we will revisit them and have our ahhaaa moments.

I had blogged previously about using Zulip as a platform for connecting with customers over chat while having control over the pairing of customers to sales representatives. I was running that service on an EC2 Small instance, which had far fewer resources than the required specs mentioned in the zulip documentation. And true enough, just one day before the last day of yearly filing, the 30th of March, 2015, the server yielded.

I was frantically combing through the code, optimizing it at random places, performing CPR. After most of the fixes, as I brought the services up, all the zombie browser clients would try to connect and it would crash again. I wasn't understanding what was causing the problem, and in my attempts to fix it I was possibly introducing new problems without realising. There were no error logs in the application stack, none in the system logs; I was looking at resources (CPU, loadavg, RAM) and running services (db, rabbitmq, nginx), and I wasn't able to make sense of them. As the server started, loadavg spiked, memory usage still fit in RAM, but the frontend kept showing 500s.

On the 31st, instead of continuing the wild goose chase, we decided to set up a fresh instance of the chat system on an EC2 Medium instance and revisit the old server later. The new system came up, and for the rest of the filing season it managed the load decently; we created a backup image of it and were able to upgrade to a Large EC2 instance before the peak. But as things got stable, focus shifted to making sure that the existing services were always working. I think the old machine's logs, if looked at closely, could have given insight into what went wrong and how, and by not looking at them, that opportunity was missed. And as time passes, the inertia against revisiting old mistakes gets bigger and bigger.

branching perception

UPDATE: punch shared this talk by Sam Newman from GOTO 2017 with me, and it has a nice insight about where branches help, and about trunk-based development. After watching it, I feel that for personal and individual projects, branching and reviews add unnecessary overhead. It's better to stick to trunk-based development; it keeps the momentum going and the project moving, and gives a good morale boost.

At Senic we are working on our next product, which started with a simple prototype hack of a new integration and a demo around it. But since then, even as we near the "launch", the work/code still has not come out of its feature branch. I have been rebasing my commits on top of the latest mainline development, but there is still something which makes me feel very uncomfortable about this parallel development. Somehow I have this perception of an ideal project with a single branch, where the ease with which a new feature gets merged is a good sign and something to be aspired to:

Issue -> Pull Request -> Review -> Approvals -> Merge.

But in the case of a feature branch, this starts happening in parallel, and for me personally, as I work on a feature or bug, it ends up spilling into the next feature or bug, and so on and so forth. This creates branches off of the feature branch, where a feature for the feature branch lives, and my head is already hurting just putting this thought into words.

On one of my personal projects, SoFee, I have been using mozilla's javascript-based Readability library to get cleaned-up url content. Lately I came across newspaper3k, a python-based library which can do the same and would make the backend code more coherent. This is a fairly simple feature which could follow the path of the Pull Request flow I mentioned above, but as I started working on it, I came across a bug in my usage of the twitter API and got that sorted out, then had to make changes to the models and a few other things. I am not able to clearly demarcate the zones of these smaller developments; they keep spilling into each other, and that leaves a very entangled history which totally drains my enthusiasm. Someday, we will master this art of project management…