It wasn't until I read Algorithms to Live By that I started realizing computer science isn't all black and white, and that maybe CS concepts can be applied to human lives without being reductionist. The authors, Brian Christian and Tom Griffiths, explore how computer algorithms might help us make better everyday decisions: how to spend our time, stay organized, remember the most important things, and so on. The algorithms they cover often don't produce definitive solutions. They make assumptions, incorporate randomness, trade accuracy for efficiency, and favor simpler solutions over more complicated ones. Here are my favorite concepts from the book, and how I've started thinking about them in my own life:
Last week my friend and I were searching for a coffee shop to work in. We wandered around downtown SF for half an hour before realizing: we needed to think about optimal stopping! Optimal stopping tells you how long you should spend searching, and at what point you should stop and commit. It's epitomized by the Secretary Problem: say you want to hire a new secretary (or software developer, maybe), and you're determined to find the very best person for the job. You'll need to spend some time interviewing applicants just to calibrate, to have something to measure later candidates against. How long do you look, and when do you leap?
The solution, it can be mathematically proven, is 37% (the precise fraction is 1/e, about 36.8%). If you know how long you're willing to wait, spend the first 37% of your search looking noncommittally, then commit to the next candidate who's better than everyone seen so far. This assumes that applicants will always accept your offer, and that once you pass on someone, you can never go back. Even with this optimal strategy, there's still a 63% failure rate: you'll land the very best applicant only 37% of the time.
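Out of curiosity, I wrote a quick simulation to sanity-check that 37% claim. This isn't from the book; the candidate counts and scoring scheme are made up, but the strategy is the one described above:

```python
import random

def secretary_search(scores, look_fraction=0.37):
    """Look at the first 37% noncommittally, then take the first
    candidate who beats everyone seen so far."""
    cutoff = int(len(scores) * look_fraction)
    best_seen = max(scores[:cutoff], default=float("-inf"))
    for score in scores[cutoff:]:
        if score > best_seen:
            return score
    return scores[-1]  # never found a better one; settle for the last

# How often does the strategy land the single best candidate?
trials, n, wins = 100_000, 100, 0
for _ in range(trials):
    scores = random.sample(range(n * 10), n)  # distinct random scores
    if secretary_search(scores) == max(scores):
        wins += 1
print(f"Hired the very best {wins / trials:.1%} of the time")  # ~37%
```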
Is this at all useful to know? I think it's at least somewhat reassuring to know how hard it is to find the best of anything. If we accept the algorithm, maybe we can stop being perfectionists, searching for the very best all the time.
What's the best data structure for organizing your closet? A Queue or a Stack? The difference comes down to which side you place items back in your closet, and which side you take them out from. If you always hang clothes on the most easily accessible side, and then always take from that same side, you have a Stack (last-in-first-out). You probably also never wear any of the clothes at the back of your closet. You could instead implement a Queue, by putting clothes away at the back, and taking them from the front, so you can wear all your clothes in sequence (first-in-first-out). But what if you have items that are out of season, or that you just don't really want to wear? If you ran out of room, this strategy would suggest getting rid of items you've owned the longest.
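To make the two policies concrete, here's a toy version using Python's deque (the closet contents are obviously my own invention):

```python
from collections import deque

# Left end is the front of the closet, right end is the accessible side.
closet = deque(["tee", "hoodie", "jeans", "jacket"])

# Stack (last-in-first-out): take from and return to the same end.
item = closet.pop()      # always grab "jacket", the most accessible
closet.append(item)      # ...and hang it right back there
# "tee" at the far end never gets worn.

# Queue (first-in-first-out): take from the front, put away at the back.
item = closet.popleft()  # wear "tee" next
closet.append(item)      # it goes to the back of the line
# Everything cycles through in sequence, in season or not.
```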
Christian and Griffiths instead suggest treating your closet like an LRU Cache. You can take out whatever clothes you want, but always insert at the front. That way the most accessible clothes are the most recently used, probably your favorites. If you run out of room, you give away or move into storage the clothes at the back, the Least Recently Used.
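And the LRU closet, again as a toy sketch of my own rather than anything from the book:

```python
from collections import deque

closet = deque(["tee", "hoodie", "jeans", "jacket"])

def wear(item):
    """Take out whatever you want, but always re-hang at the front."""
    closet.remove(item)
    closet.appendleft(item)

def declutter():
    """Out of room? Evict whatever's at the back: the Least Recently Used."""
    return closet.pop()

wear("jeans")
wear("tee")
print(closet)       # deque(['tee', 'jeans', 'hoodie', 'jacket'])
print(declutter())  # 'jacket' goes to storage
```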
I haven't been evicting the Least Recently Used items from my closet, but I have been implementing a cache another way - neglecting to put things away after doing my laundry. My favorite items never even need retrieving from my closet, because they're already on the back of my chair!
Do you ever find yourself staring at your todo list, re-evaluating your priorities, scheduling and planning for the week, and not getting any actual work done? Or starting one task and then jumping to another before you've really accomplished anything? When computers have this problem it's called thrashing. If too many programs run simultaneously and they all need to cache data in RAM, they end up swapping things in and out, stealing space from each other, and never doing actual work. Like humans, computers have to learn to say no, to do one thing at a time, to not overburden themselves. The goal is to minimize context switching, the time it takes to refocus on a new task, without becoming completely unresponsive to new requests.
So now I have a good excuse for not responding to Facebook messages in the morning, when I'm doing my focused work. I'm just making a tradeoff between responsiveness (how quickly I react to each request) and throughput (the number of tasks I handle overall). It's easier for me to get things done when I'm not constantly bombarded with new ideas or requests. I'm also trying to reduce context switching by dealing with all my email in one block after lunch, and setting a timer so that it doesn't overflow into the rest of my day. I'm interrupt coalescing: batching small tasks together.
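In code, my email routine looks something like this loose sketch; fetch_new_email and handle are stand-ins for whatever your inbox actually exposes:

```python
import time

def coalesced_inbox(fetch_new_email, handle, interval_seconds=24 * 3600):
    """Interrupt coalescing, roughly: instead of reacting to every message
    as it arrives, wake up once per interval and process the whole batch."""
    while True:
        time.sleep(interval_seconds)        # stay heads-down in between
        for message in fetch_new_email():   # everything that piled up
            handle(message)                 # one context switch, many tasks
```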
There's one more problem associated with context switching and thrashing: deciding what to do next. If you have to compare every task to every other task to determine the most important or urgent one, that's (n-1)+(n-2)+(n-3)+...+1 comparisons, which equals n(n-1)/2, where n is the number of tasks. In computer science terms that's quadratic time, O(n²) - very inefficient. Sometimes the best solution is to introduce some randomness: just pick a task, and work on it for as long as possible without interruptions.
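For concreteness: with just 20 tasks, full prioritization costs 190 pairwise comparisons, while picking at random costs one decision. A two-line illustration:

```python
import random

tasks = [f"task {i}" for i in range(20)]
n = len(tasks)

# Full prioritization: compare every pair of tasks.
print(n * (n - 1) // 2)   # 190 comparisons, and growing quadratically

# The cheap alternative: just pick one and get going.
print(random.choice(tasks))
```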
Some of the most interesting insights come from merging two seemingly disparate topics, and I'm sure I'll keep revisiting this book as I gain more familiarity with the algorithms and computer science concepts it mentions. If you like very nerdy takes on productivity and self-improvement, I'd definitely recommend picking it up!