Bugs = Mistakes

Aug 07 2020
Ever since Adm. Grace Murray Hopper found the first computer bug in the Mark II computer's log book, programmers have been calling abnormal software behavior "bugs".
As long as we call our mistakes bugs, we get the comforting feeling that these bugs just crawled spontaneously into our programs. But that also means we take no responsibility whatsoever for the mistakes that caused them. 

It's time we stop sugar-coating our mistakes by calling them bugs, and call them what they are. Just mistakes. 

Go Pointers!!

Jun 20 2020
One of the big myths in the software industry was perpetuated by Joel Spolsky in his 2006 classic The Guerrilla Guide to Interviewing. I love and respect Joel, and fully agree with the article, except for the following paragraphs. 

I’ve come to realize that understanding pointers in C is not a skill, it’s an aptitude.

In first year computer science classes, there are always about 200 kids at the beginning of the semester, all of whom wrote complex adventure games in BASIC for their PCs when they were 4 years old. They are having a good ol’ time learning C or Pascal in college, until one day the professor introduces pointers, and suddenly, they don’t get it. They just don’t understand anything any more. 90% of the class goes off and becomes Political Science majors, then they tell their friends that there weren’t enough good looking members of the appropriate sex in their CompSci classes, that’s why they switched.

For some reason most people seem to be born without the part of the brain that understands pointers. Pointers require a complex form of doubly-indirected thinking that some people just can’t do, and it’s pretty crucial to good programming. A lot of the “script jocks” who started programming by copying JavaScript snippets into their web pages and went on to learn Perl never learned about pointers, and they can never quite produce code of the quality you need.
For a long time, I believed this. Whenever I tried learning pointers in C, I'd struggle for a while, remember the above words, and finally give up, thinking pointers were just not my cup of tea.

Then I started learning Go.

One of the things that makes Go's syntax beautiful is how variables are declared. In most C-based languages, you write the type of the variable first, then its name, so you have to read a declaration from right to left. For example,
int x, y;     // x and y are integers
int *p;       // p is a pointer to an integer
However, in Go, the name comes first, followed by a type. So you read it from left to right. For example,
var x, y int    // variables x and y are integers
var p *int      // variable p is a pointer to an integer
That brings me back to the subject of pointers and why I don't agree with the above paragraph, especially the line "understanding pointers in C is not a skill, it's an aptitude".

When you are learning a new concept, especially a hard one like pointers, it matters a lot how it's being presented to you. When the concept doesn't make sense at first, it's easy to blame yourself and give up, rather than try to find a new teacher, tackling it from another angle, or learn it using a different approach. 

That's what happened to me when learning pointers in C. I'd often start on solid ground, understanding what a pointer is and how it points to another object in memory by containing the actual memory address where that object resides. The trouble came when dereferencing that pointer to get the underlying value. I'd always confuse the * of dereferencing with the * of declaration. 

In C, you'd do this:
int x, y;
int *p;
p = &x;       // so far so good.
y = *p;       // ???
In Go, you do it as shown below. Notice the second line, where we are declaring p (not *p) as a pointer to an integer. This is not confusing (for me, at least) because the declaration looks different from a dereference. There's a clear distinction between declaring a pointer and dereferencing one. 
var x, y int
var p *int
p = &x
y = *p       // dereference p to get the value at the memory address it's pointing to
Agreed, you can do this in C:
int* p;
But this is not the same: 
int* p, q;    // p is a pointer to an int, q is an int
In Go, this is how you declare two pointers to integers:
var p, q *int 
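Putting the pieces together, here's a minimal runnable sketch of declaration, address-of, and dereference in Go (the values and variable names are just for illustration):

```go
package main

import "fmt"

func main() {
	var x, y int
	var p *int

	x = 42
	p = &x // p now holds the memory address of x
	y = *p // dereference p: copy the value it points to into y

	fmt.Println(y) // 42

	*p = 7 // writing through the pointer changes x itself
	fmt.Println(x) // 7
}
```

Reading each declaration left to right ("p is a pointer to an int") keeps the declaration visually distinct from the dereference on the right-hand side of an assignment.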
It was not until I started learning Go that I finally understood how pointers work. That opened new doors: how computer memory works, why strings are immutable in C#, the difference between value types and reference types in C#, and much more, all of which would have been next to impossible to grasp without understanding pointers.

So if you are a new developer struggling with pointers in C, I'd recommend learning them in Go first. Once you understand the basic idea, you can go back to C, and the pointers will start to make sense. But don't think, for a moment, that you are born without the part of the brain that understands pointers, just because Joel said so. 

On Programming

Jun 01 2020
I started reading The Mythical Man-Month by Fred Brooks this morning, and the first chapter expresses beautifully why programming is an art that provides so much joy to a programmer, if done right. 

Just reading the chapter made me so happy. It captures my exact feelings towards programming and software development in general, in beautiful prose that just flows. Here it is, in the words of Fred Brooks:  


Why is programming fun? What delights may its practitioner expect as his reward?

First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things of his own design. I think this delight must be an image of God's delight in making things, a delight shown in the distinctness and newness of each leaf and each snowflake.

Second is the pleasure of making things that are useful to other people. Deep within, we want others to use our work and to find it helpful. In this respect the programming system is not essentially different from the child's first clay pencil holder "for Daddy's office."

Third is the fascination of fashioning complex puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning. The programmed computer has all the fascination of the pinball machine or the jukebox mechanism, carried to the ultimate.

Fourth is the joy of always learning, which springs from the nonrepeating nature of the task. In one way or another the problem is ever new, and its solver learns something: sometimes practical, sometimes theoretical, and sometimes both.

Finally, there is the delight of working in such a tractable medium. The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures. (As we shall see later, this very tractability has its own problems.)

Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.

Programming then is fun because it gratifies creative longings built deep within us and delights sensibilities we have in common with all men.

Twitter Going Remote

May 12 2020
This morning, Twitter CEO Jack Dorsey told the employees that they can work from home indefinitely. Way to go!!

Google, Microsoft, and Amazon have all told employees that they can keep working from home through the fall, even if their offices reopen sooner.

Now let's hope all other tech companies notice this shift in work and allow their employees who can work from home, especially developers, to work remotely. 

Let Me Unsubscribe!

May 08 2020
Don't you just love it when your entire sign-up workflow is done in minutes, and your bank transactions involving money are done in seconds, but a simple 'unsubscribe' from marketing emails may take up to 10 business days to take effect?
Come on, Tangerine, you are an online bank; just remove my email from your marketing list. It shouldn't take up to 10 business days!

For what it's worth, I love banking using Tangerine, it's just these shoddy marketing strategies employed by so many online websites that annoy me so much. 

On Working

May 01 2020
Developers usually get most of their work done in a state called flow: a state of deep concentration in which time just flows by.

I began to work. I looked up, and three hours had passed.

For anyone involved in engineering, design, development, writing, or anything related to knowledge work, flow is a must, as these are high momentum tasks. It might take some time to get started, but once you get going, you get a lot of work done.

It is during this warm-up period that developers are most sensitive to interruptions. An environment filled with constant interruptions and distractions makes it very difficult to attain flow. Each time you are interrupted, you need extra time to get back into flow. Repeat this a few times, and your whole workday is gone.

The developer who tries and tries to get into flow and is interrupted again and again is not a happy person. Instead of the deep mindfulness that the flow state provides, they are dragged into the surrounding ocean of distractions that is the modern open plan office.

If you are a manager, it can be hard to empathize with your developers seeking the state of flow. After all, your job requires that you do most of your work in interrupt mode; that is what management is. But your developers really, really need to get into flow. Anything that keeps them from achieving it will reduce their effectiveness and the joy that comes with it.

The causes of lost hours and days are many, and mostly related to interruptions. Some days you never spend a productive minute on anything having to do with actual work. Everybody's workday is plagued with frustration and interruption. Entire days are lost, and nobody can put a finger on just where they went. 

There are a million ways to lose a workday, but not even a single way to get one back. 

Time Sheets

Apr 09 2020
When you fill in your time sheet at work, are you logging brain time or body time?

Here's a somewhat controversial opinion: I think the practice of logging time sheets is inefficient, deeply flawed, and doesn't accomplish anything, including the very thing it's trying to measure: employee productivity.

Is there any difference between hours spent doing meaningful work and hours wasted staring at the computer? What matters is not the amount of time you are present at the office, but the amount of time you're working at full potential.

An hour in a state of deep flow is very different from an hour of distracted work where you are interrupted every few minutes.

How can the time sheet differentiate between those two hours? It can't.  

Metaphors in Software

Apr 07 2020
By comparing a topic you understand poorly to something similar you understand better, you can come up with better insights about the poorly understood topic. Metaphors help us understand the software development process by relating it to other activities we already know about.

A metaphor is different from an algorithm in the sense that it's more like a searchlight than a road map. It doesn't tell you where to find the answer, but rather how to look for it. Metaphors give us insight into programming problems and processes, and help us imagine better ways of doing things and solving problems.

In his book Code Complete, Steve McConnell compares software development to various metaphors including writing, farming, and construction.

Software development is like writing. How do you write well? You write something, then rewrite it, and rewrite it again. The same goes for programming: you write the first draft of the program to make it work, then you rewrite it to make it better, then you rewrite it again to optimize it and make it beautiful. Like well-written prose, a well-written program is readable.

Software development is like farming and gardening. When we are writing code, we are planting seeds and growing crops. You design, code, test, and add it to the project a little bit at a time.

Iteration is another metaphor that applies well to software development. Incremental designing, building and testing and iterating are some of the most important activities in software development. You first make the simplest possible version of the system that will run. Then you iterate to make it better.

Finally, software development is like building construction. In my opinion, this is the most apt metaphor for development. Building software is similar to construction in so many ways. Many common terms in software development derive from building: software architecture and architect, scaffolding, construction, and so on.

However, no one metaphor rules them all. There is no silver bullet. Many consultants tell you to buy into certain methods to the exclusion of others. This usually doesn't work, because then you suffer from man-with-a-hammer syndrome and miss opportunities to use other methods better suited to your problem.

From: Code Complete


Apr 05 2020
Since we are all working remotely, here is a thought experiment all the managers can do to test the effectiveness of remote working on programmer productivity. 

First, think about the amount of work done when working from the office and also the cost of having and maintaining a physical office. Not only monetary cost, but the wasted time and the environmental cost of the commute.

Now, think about the amount of work done when working from home. Also think about the cost of doing that, with the added benefit of hiring great programmers anywhere in the world, and not being restricted to the local talent pool.  

One choice clearly trumps the other. Guess which one?

"But how can I make sure that my programmers are actually programming?" you ask. Well, just look at the actual work, and ignore when/how/where they are doing it. 

Just because people are in the office, at their desks, staring at their computers from 9 to 5, doesn't mean they are actually working.  

Working From Home

Mar 20 2020
I have been working from home this week. Though I miss my awesome colleagues at CityView, I am also enjoying the solitude and the quiet work environment at home, where I can work in a focused state for a couple of hours straight without constantly getting interrupted every 15 minutes. 

Here's a picture of my home office on this bright Victoria morning.
[Image: home office.jpeg]
Since we are all working from home nowadays, I'd like to recommend a book I read a while ago. Remote: Office Not Required is written by David Heinemeier Hansson (creator of Ruby on Rails framework) and Jason Fried.

It's a great book on how to make remote work, work. Their software development company, Basecamp, has been remote for more than 20 years, and they have some really unique insights on remote work. 

Write a Spec

Nov 09 2019
I had an interesting experience at work this evening. It was late afternoon, and I was tasked with a somewhat important feature, the details of which were not so clear to me. There was significant work involved in both the back-end and the front-end, including the database. I had been procrastinating on it for almost an hour.

Normally, I just buckle down and start coding. I have found that once I start coding, it usually gets better, pretty quickly. If I can just ignore the distractions around me, it doesn’t take me very long to reach the cherished flow state, where you are so engrossed in the task at hand that you literally lose the sense of time.

However, this was a hairy feature. There were a few uncertainties involved, and I had to go back and forth with more senior developers to smooth out some rough edges. I was still not a hundred percent confident I could implement the complete feature on my own. So instead of diving into code, I opened a text file and started writing a detailed specification.

It took me some time to come up with specific requirements for the feature. But as I wrote them down, I noticed the feature getting sharper in my mind. By the time I finished writing the spec, I had a pretty good idea of what the feature was supposed to do, and had a good plan for implementing it.

Once I had the specification, I opened Visual Studio and started writing code. And then an amazing thing happened. It felt like I had a clear path in front of me. My mind was clear, and my fingers were just spewing out lines and lines of code. Though my original estimate was at least a day, I was able to implement the whole feature in under 2 hours.

Hardcore agile developers usually tend to avoid detailed specs in favor of writing code, and I will admit that I haven’t followed spec-driven development rigorously. However, I have found the benefits of having a specification are huge, and any time spent writing a spec is time well invested, and ultimately saves a lot of time during development.

The reason for writing a spec is not to come up with a perfect requirements document. The real reason is to solve as many problems as you possibly can in advance, to minimize the number of surprises when you are actually writing the code.

A lot of times, when I write the requirements for a feature, it saves me significant headaches later on during the development stage. Almost all the time, when I dive head-first into code without having a specification at hand, I write lower quality code. The act of writing a detailed specification forces you to think about the design of the program. That helps in narrowing down the scope, fishing out the edge cases, and sharpening the functionality in your mind. That ultimately improves the design of the software.

Once you write a specification, you can revisit it later, only to discover bugs and potential enhancements. It can also serve as documentation, not only for you as a developer, but also for the QA department, when they are testing the functionality. It can be used as release documentation by technical writers. Managers can use it so they can communicate with upper management. Ultimately, a detailed specification benefits everyone.

As I write this, I have decided to be more disciplined and write a detailed specification for most, if not all the features I work on.

Things that didn't happen

Nov 09 2019
There is this thing in psychology called absence bias: events that are not happening are not recalled, and hence they seem to have probability zero in hindsight. If the product is free of bugs, you might think there weren’t any to begin with, or that there won’t be any bugs in the near future, and you would be wrong.

Of course, even before the question of bug fixing comes the question of bug avoiding. Many techniques help a programmer fix bugs; few exist to help them avoid a bug altogether. Considered in the abstract, a programmer who keeps fixing lots of new bugs may look more intelligent to management than one who hardly fixes any. Problem-solving behavior is often rewarded, and is thought to be more intelligent than problem-avoiding behavior.

The same goes for management. A manager who is putting out fires throughout the day is rewarded more generously than the one who seems to be doing nothing most of the time, mainly because the latter prevented most of the blunders from ever happening.

Let’s take off our developer glasses and ponder the real-world consequences of this heuristic. Let’s just be grateful for all the negative events that haven’t happened. It will always be difficult to appreciate how much trouble we are not having.

No, But, However...

Sep 11 2019
When you start a statement with "no", "but", or "however", no matter how friendly your tone or how many cute phrases you throw in to acknowledge the other person's feelings, the message to the other person is YOU ARE WRONG.

It's not "I have a different opinion". It's not "Perhaps you are misinformed". It's not "I disagree with you". It's bluntly and unequivocally, "What you are saying is wrong, and let me correct you. I know better."

Nothing productive can happen after that. 

The usual response from the other person is to oppose your position and fight back. From there, the conversation dissolves into a pointless war. You are no longer communicating. 

The bugs you didn't see

Aug 15 2019
Every successful software product is the result of collaboration between developers and the QA team. However, it’s the developers who get the lion’s share of the praise, while the testers are mostly ignored. Devs blame QA for finding faults in their code, and management blames them for delaying the release. I think this is unfortunate and unfair.

We usually don’t notice what’s not there. When you see a flawless software product, it’s easy to admire the bells and whistles, the technology, and the development effort that went into it. What you are not seeing are all the bugs that are not there: the countless bugs that were lurking in version 1.0 and were only discovered by QA testing the software repeatedly. Developers, who like to solve new problems and take on new challenges every day, can perceive that as a boring, mind-numbingly repetitive job. But testing is a skill in itself. It needs vast reserves of patience and, in a sense, a Stoic attitude: the negative visualization required to find all the paths to failure.

Testing the software you wrote yourself can be very frustrating. If I wrote a piece of code, I am only going to test it to make sure that it works; as a result, I am only going to test the best-case scenario. QA’s job is to find all the ways in which the product can fail. That makes their role very valuable in any software company. The next time QA finds a bug in software I wrote, I am going to thank them, because that’s one less bug the customer will see.

Library Rules

Aug 07 2019
Walk into any library, and the first thing you notice is absolute silence. People behave differently when in a library. They respect others’ privacy and the need for solitude. They don’t have loud conversations. If they need to talk, they go outside or talk quietly. Rarely do people go over to someone’s desk to disturb them. Why are all these behaviors, which are considered rude in a library, treated as the norm in a modern workplace?

Offices are the places where we go to get work done. It seems, nowadays, more and more companies are actively trying to discourage people from getting any work done at work. To get any meaningful work done, one needs uninterrupted, focused blocks of time, free from constant interruptions, notifications, and distractions. It’s very hard to focus on a hard problem if there are multiple threads of conversation going on in the background, or you are getting interrupted every half an hour by someone.

For any kind of knowledge work, be it programming, writing or designing, the switching cost of attention is just too damn high. Every time you interrupt a programmer, all the context related to the current problem they are working on is literally thrown away from their working memory and they have to start from scratch. If companies realize the losses in productivity and profits that are caused by open-plan offices and the constant distractions, they might consider establishing library rules at work.

Remembering Gerald Weinberg

Aug 15 2018
Gerald Weinberg is one of the programmers who has influenced so much of my thinking as a programmer. He passed away on August 7, 2018, at the age of 84. He had simple, but very profound, Zen-like insights on everything related to software development, programming, and many other disciplines. He has left the software industry with a better understanding of what it means to be a programmer.
[Image: JerryAndCaro.jpg]
Gerald Weinberg was a prolific author. He wrote over 100 books and thousands of articles and blog posts. I have been a regular reader of his blog, Secrets of Consulting, for a long time. It’s a treasure trove of fundamental principles of computer science, software development, and programming. Here are some of the books I have read or am reading, which have influenced the way I think and program.

An Introduction to General Systems Thinking
This was the first book of his that I read. It is a great primer on systems theory, with applications in software development and programming.

The Psychology of Computer Programming
No matter how much technology advances, software is still written by humans. This book contains some of his most insightful ideas on the human side of software, explaining in great detail how human psychology affects programmers, and ultimately the programs they write. It was written in 1971, and I was blown away by how much of it still applies to software developers and managers alike.
I bought this book on August 6th, a day before he passed away, and have finished a couple of chapters so far. As the name suggests, it is all about errors in software development: what errors are, what causes them, how to handle them, and much more.
I plan to read more of his books this year, especially “The Secrets of Consulting” and “Are Your Lights On?”. Even though he is no longer among us, Gerald Weinberg will always be there for software developers through his books and articles.

Being a Programmer

Apr 04 2018
I have been studying Stoicism for more than 2 years now. The more I read Seneca, Marcus Aurelius and Epictetus, the more I realize how Stoic principles can be valuable when applied to programming. This post is a quick reminder to myself when I am writing software.

Negative Visualization
  1. What’s the worst that can happen?
  2. Imagining everything that can go horribly wrong, before I write even a single line of code.
  3. Errors that can happen, bugs that creep in, unexpected exceptions, off-by-one errors, null-pointers, invalid inputs. Thinking on this may enforce reliable, robust code.
  4. The software I am working on has failed. Why?
  5. The codebase is a giant ball of mud, spaghetti code everywhere.
  6. The software project got cancelled, and all the beautiful, clean code that I wrote using TDD is thrown away.
  7. As you write code, silently reflect that this might be the last line of code that you might ever write. Might force you to write better code.
Trichotomy of Control
  1. Things over which I have complete control: Code that I write, patterns and practices that I follow, testing the code I have written
  2. Things over which I have no control at all: Underlying framework(.net, jvm), 3rd party libraries, Getting fired from the company
  3. Things over which I have some but not complete control: Customer requirements, Management, Working in a team
Hedonic Adaptation
  1. After you are exposed to a luxurious lifestyle, you might lose your ability to enjoy simple things. Embrace Discomfort.
  2. Realizing that I am coding in a programming environment and working with tools that would have been a dream world for programmers in the ’80s. Being grateful that I don’t have to write code in binary (0110101), assembly (mov dl, dh), or even C.
  3. I have been programming in Visual Studio/IntelliJ/IDEs for more than 5 years now, and have almost forgotten how painful it was writing big Java programs in Notepad (or sometimes on paper!) in my first year of college, back in 2009. So, whenever I find myself complaining about Visual Studio, I open Notepad and try to code without any IntelliSense or compile-time errors.
  4. Whenever I find myself complaining about C#, I fire up an 8086 microprocessor simulator and write some assembly code.
  5. Whenever I find myself complaining about TypeScript, I write some plain JavaScript using closures and callback hell, and try to figure out the scope of the ’this’ variable.
Humility
  1. Not trying to show off how smart you are, by writing complex code when a simple solution would suffice.
  2. Not trying to impress anyone, by sounding clever, by showing off how much you know, by using unnecessary complex jargon.
  3. Writing code that is as simple and readable as possible. Reading code is difficult.
Stoic Meditation
  1. Taking pause, and reflecting on the code that I just wrote.
  2. Not attaching my identity to the code/software I am building. Practicing detachment.
  3. Not boasting/taking unnecessary pride at my work, avoiding self-serving biases.
  4. Learning to enjoy the fact that I am getting paid to write software, without feeling entitled to it, and without clinging to it.
Avoiding Intense Ideology
  1. Not getting wrapped in the latest fad/trend/framework in technology. Agile is a software development methodology, not a religion. TDD is a practice, not a rule.
  2. Remembering that everything I hear is an opinion, not a fact. Everything I see is a perception, not absolute reality.
  3. Being willing to change my mind when new facts emerge which contradict my pre-existing beliefs. e.g. Decision to stop working with Angular/Backbone/Knockout after Aurelia was released.
  4. Paraphrasing Marcus Aurelius: “Waste no more time arguing about what a good programmer should be. Be one.”
  5. Not having opinions on things that you don’t understand. Not having opinions unless you know the other side’s argument better than they do.
Having Empathy when reading/reviewing code.
  1. I get frustrated when reading code written by me 2-3 years ago. I also get annoyed when reading buggy code written by other people, e.g. Poorly named variables, long methods, not enough error checking, unnecessary comments, etc.
  2. Some techniques I am trying to use when dealing with my or others’ mistakes, which help me maintain my sanity:
    • Never assume bad intentions when stupidity will suffice.
    • Never assume stupidity when ignorance will suffice.
    • Never assume ignorance when forgivable error will suffice.
    • Never assume error when information you hadn’t adequately accounted for will suffice.


Feb 10 2018
"It’s not the external events that cause us trouble, but only our perceptions about those events." 

This meditation from Marcus Aurelius, profound in itself, has parallels in coding theory in computer science.

A fundamental concept in computer science is the bit. A bit is nothing more than the electricity being on or off at a particular location. If you take apart a computer and look at it with a microscope, you won’t find any pictures, numbers, or letters. There is only one kind of thing that makes computers work: the bit. What can we do with a bit? Quite a lot, it turns out.

We can use a bit to represent something tangible; that is, we can assign meaning to it. For example, take a bit and connect it to a red light: when the bit is on, the red light glows, and vice versa. In itself, this doesn’t mean anything. But then we can say, “when the red light is on, it means stop, and when it’s off, you can proceed”. This is how one assigns meaning to a bit. The bit does not contain any meaning in and of itself; it’s just the presence or absence of electricity at a particular spot. Meaning is assigned to a bit by something external to it.

Someone who has studied for a driver’s license will know the meaning behind that red signal: stop the car. Another person making breakfast may interpret the red signal as “the coffee maker is on”. In both cases, the underlying event is the same. It’s only by assigning different meanings to the event that we interpret it differently.

A person who attaches a certain meaning to an event will perceive and interpret it differently than one who doesn't attach any meaning to it. For example, consider two spies trying to communicate using the following secret code: if the curtain is down, that means danger. If the curtain is up, everything is okay. Now, for the hundreds of people passing through the street who see the window, the curtain doesn't mean anything. It is just a curtain. But for the one spy, it indicates life or death.

This is the basic principle behind coding theory. A code is something that tells you what something else means. A code for a bit has two possible states, and hence two possible meanings. At any given point, it can only mean one of two things. Similarly, for any event, there can be multiple interpretations, with a different meaning behind each. It depends on the observer how he or she chooses to interpret the event, and what meaning to assign to it. For the same reason, one person's hell might be another person's heaven.
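The spy example can be expressed the same way: one underlying event, two different codes, two different meanings. Again, the names are hypothetical, just a sketch of the idea.

```python
# The same underlying event...
curtain_is_down = True

# ...read through two different codes.
spy_code = {True: "danger", False: "all clear"}
passerby_code = {True: "just a curtain", False: "just a curtain"}

print(spy_code[curtain_is_down])       # prints "danger"
print(passerby_code[curtain_is_down])  # prints "just a curtain"
```

The passerby's code maps both states to the same meaning, which is another way of saying the curtain tells them nothing.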

The subject of perceptions and meanings doesn't restrict itself to Stoic philosophy and computer science; it also has huge implications in physics. According to Einstein's theory of relativity, an event occupies both space and time, and can be represented by a particular point in space-time, i.e. a point in space at a particular moment in time. Space-time as a whole can therefore be thought of as a collection of an infinite number of events. According to Einstein, then, an event is relative to the observer, and more specifically to the motion of that observer. The interpretation of each event depends upon the position of the observer, and can therefore take an infinite number of meanings.

So it is safe to assume that an event does not have an intrinsic meaning in and of itself, but rather takes the meaning that an observer chooses to assign to it. Let's conclude with another quote from the last good emperor, Marcus Aurelius:
Choose not to be harmed — and you won't feel harmed. Don't feel harmed — and you haven't been.

Paradox of Choice

Nov 29 2017
The last few weeks have been eventful. I got an offer to join Appreciation Engine as a full-time developer. I also received an email from the CTO of Multivista about an open position they have. 

This afternoon, I had a really good conversation with the founder of Momentum. It went really well. I also had a productive chat with the lead developer at PiPA Solutions. They might have an opening for a full-stack developer soon. 

Finally, there's CityView. I really enjoyed working with Steve during my Co-op at CityView, and would love to join CityView as a full time software developer. 

Reading Gerald Weinberg

Nov 23 2017
For the last few days, I have been reading the work of Gerald Weinberg. He has produced a massive body of high-quality work over the years, on crucial topics such as the psychology of computer programming and systems thinking. 

At first glance, the topics seem trivial and unimportant, but as you dive deeper into his writing, you start appreciating the deep insights.

Working at a Startup

Nov 16 2017
During my last 4 internships, I worked at 2 mid-size to large companies with 50-60 people and 2 small startups with fewer than 6-7 people. This is, of course, not a large enough sample to draw conclusions from, but I will anyway, based on my personal experience and observations. 

There are a few things I love about working at small companies. First, the people. Having direct access to founders and lead developers who are sitting next to you means a lot to me. It's such a privilege. Getting to see how the product is being developed, and what kind of thinking goes into it, is exciting. It's much better than just following orders from top management, where it's hard to see the big picture. 

The second benefit is rather personal. Being an anti-social introvert, any time I am in a big office full of people, it sucks my energy like a vampire. (This is one of the reasons I chose programming, and I love it.) With my mind imagining all sorts of worst-case scenarios of failed social interactions, and of making a fool of myself during small talk (a skill that terrifies me more than public speaking), it's hard to get anything done.

Third, because the company is so small, you get to see all aspects of the business, e.g. marketing, customer service, etc. It feels good to know that the stuff I am working on is actually going to be used by real people. Just watching someone give a sales presentation can teach you a lot about marketing.

Finally, I observed that learning was much faster working in small, close-knit teams. There's this quote from Chad Fowler, which I read somewhere:
"Always be the worst guy in every band you're in. So you can learn." 
It really helps to work with colleagues, junior or senior, who are much smarter than you. 

Patterns of Unproductivity

Nov 07 2017
I had a terrible day at work. I cannot think of a day when I have been so unproductive. I devoted less than 10% of my brain cells to any meaningful work. There are some definite patterns here, and calling them out might help me avoid them in the future.

First, I know the root cause of my unproductivity is my utter distaste for the Drupal development stack. Working in a dev environment that totally demotivates and frustrates you is a surefire way to lose your productivity. Anyhow, my co-op at AE ends next month, so there's no point dwelling on it too much. Also, it's good to know what doesn't work for you.

Then there is Hacker News. I start the day by glancing at the news, and at Hacker News. Most of the time it's harmless, but sometimes there's that one disturbing story that distracts me and kills my productivity for the day, or I go down the rabbit hole of Hacker News links, which is an enormous time sink.

As the day progresses, I get up a lot for silly reasons (washroom, coffee, etc.), and these small, frequent breaks make it impossible to get any significant stretch of focused time for meaningful work.

Then there are the people chatting around you, a major source of distraction. I work in an open office, which is also a co-working space, so I am surrounded not only by my colleagues but also by people from other companies. A seemingly harmless discussion behind you on an irrelevant topic can totally thwart your attempts to get into the zone.

Why do companies keep falling for the myth that open offices foster creativity through meaningful interactions? That's nonsense. What are you going to do with all this creativity if there's no execution? And why do people like to chat around others who are working? Would you do this in a library? Get a room, please!

Enough complaints; it's time to fix these systemic issues so I can get some actual work done at work. Here are some actions I can take.
  1. Block Times Colonist and Hacker News completely during the work day.
  2. Before sitting down to work, do everything and get everything I need.
  3. No random browsing rabbit holes on the internet. Only surf or read blogs during breaks or at lunch.
I will test this for the rest of the week and see how it goes. I seriously need to improve my ability to focus and concentrate on a single task at a time.

This day shall not happen again in the future.

Developer Pay

Nov 01 2017
A question has been lingering in my mind for quite a while. Are software developers entitled to pay for the time they put in at work, or only for the work that actually goes into building the product?

I spent 3 and a half of my 4-month co-op term just learning, learning and learning, and building a half-assed product. Around 40% of my time was spent understanding the domain and the underlying technology, and another 40% was spent learning PHP and Drupal. Switching paradigms from JavaScript development to Drupal PHP development was the biggest adjustment I had to make.

Just 2 days before the demo, my Drupal website and module crashed, and I had to rebuild the whole thing in 2 days.

Now the question that has been bugging me is this: am I supposed to get paid only for the final product, or am I also entitled to pay for the learning and experimentation that precedes it?

Viewed in a purely rational, capitalistic way, it makes total sense for an employer to hire only someone who has already built a product similar to the one they need, and to pay only for the actual development work. From this perspective, I should get paid only for the last two days.

What, then, would justify my previous three months' salary? If we compare software development to the medical profession, a doctor only gets paid for the patients he treats, not for the 14 years of learning and training that came before. When the doctor is at work, he or she is performing, producing results.

How would an employer justify paying a developer for the entire duration of product development, when some of it may have been spent learning the tools and technology? If I put myself in an employer's shoes, I can't come up with a better argument.