Software Illustrated: I’m just trying to change this lightbulb


I like to think that I’m a GIF aficionado, but it’s hard to overstate how much this GIF can teach you about software engineering.

Let it wash over you. Watch it a few times.


Just the facts

  • Hal enters, walks to the kitchen, and immediately tries to turn on a light
  • Hal turns it off and on again
  • Hal checks the bulb and decides it needs to be replaced
  • Hal goes to a kitchen cabinet to get a bulb
  • As he’s reaching for the bulb he notes the shelf is loose
  • Hal decides to fix the shelf and opens a drawer to get a screwdriver
  • The drawer squeaks so Hal decides to get WD-40 to fix the squeak
  • The WD-40 is empty so Hal decides to drive to the store to get some more
  • His car won’t start
  • Hal decides to fix the car
  • Lois comes home and asks Hal to please drop what he’s doing and fix the lightbulb in the kitchen
  • Hal exclaims that that’s what he’s currently trying to do by fixing the car

Technical Debt

Hal and Lois just illustrated for us what software people like to call “Technical Debt”. Technical debt leans heavily on the financial concept of debt: essentially, you accumulate it by taking shortcuts or not fixing problems when they occur. Deferring bug fixes, ignoring best practices, or cutting corners in the system to hit a target date forces you to mortgage your code base and take on technical debt.

Some software issues are incorrectly interpreted as debt. For example, bugs in your software are not necessarily debt. Some bugs can exist for years and be a non-issue. Some things people think are bugs are really features of the system. Some bugs occur only infrequently, or no one knows how the bug should really be fixed. While these are frustrating, they are generally not issues of debt.

Instead, technical debt is typically design decisions that were made in the name of expediency that you may or may not have realized would cause problems down the line. Sadly, experienced engineers sometimes incur this debt despite the fact they know it will eventually be a problem because:

  • I’ll rewrite it before we get enough customers that scalability will be an issue
  • I’ll fix it after RELEASE_DATE/CUSTOMER_DEMO when we finally have some time to do things right
  • I’ll fix it before I leave the company
  • No one cares about quality

Examples of some design decisions and tradeoffs that may create debt:

  • We’ll write the kernel of the application in a “scripting” language and optimize later, if at all
  • We’ll deploy this by hand
  • We’ll use an ORM to do data access because we don’t know SQL
  • We’ll stop fixing bugs in this OLD_THING while we replatform to the NEW_THING

Us versus them?

Typically any software system has some amount of technical debt so it’s not always clear when that debt becomes a serious problem for your company or team.

Part of what makes technical debt challenging is that software people and business people within your organization likely look at the priority of paying down technical debt completely differently.

For example, if Hal is a typical software person on your team, from his perspective he’s trying to finish the project “Have working kitchen lights” as quickly as possible while following the Boy Scout rule: at each step of the process, leave the campsite nicer than it was when he arrived. In this case, that means fixing the loose shelf in the kitchen cabinet, the squeaky drawer, the car, etc.

Assuming Lois is our CEO, she’s really confused by this behavior. From her perspective, that kitchen light is critical for the business and needs to be completed as soon as possible so other work can proceed, a new client can be signed, etc. Any delay for minor issues like loose shelves or squeaky drawers seems like a complete waste. Lois can understand the emergency car fix, but she’s still confused about why it’s necessary simply to change a lightbulb.

From an engineering perspective folks look at Hal’s odyssey to change a lightbulb and they laugh wryly, or sigh deeply and think “I’ve lived through days or weeks like that where I’m just trying to complete a simple task but ‘Technical Debt’ and other roadblocks prevented me from making progress.”

From a management perspective, folks just look at a team of software engineers that are inexplicably taking forever to accomplish a seemingly simple task. They question if they have the right people, if they need a new team, additional resources, or even worse, a replatform.

First of all, let’s look at why software folks behave this way.

Engineer Perspective

Engineers think a number of things that are all somewhat true and attempt to overcompensate for those things with this sort of extreme-Hal behavior:

  1. I want to do a quality job and this isn’t good enough. If only I had more time!
  2. I don’t think my boss will care about this. If I ask them to prioritize it they’ll tell me not to work on it.
  3. If they won’t let me work on it now, they’ll never let me work on it and it’ll never be fixed! (keeps, creates debt)
  4. Our code base is already full of bugs, I don’t want to create more or do more maintenance work in the future.
  5. Bugs cost us too much already in terms of maintenance and lost customers/sales, we have to improve quality (pay down debt)

Now, what’s true from an engineering perspective is that this bug is likely not important enough to tackle right now, and that it should instead be captured and prioritized.

What hopefully isn’t true in your business is that no-one will ever care about this issue and that it will never be fixed. Instead, I hope you periodically address technical issues brought up during prioritized work and you fix bugs like “Loose shelf in kitchen cabinet”.

Business Perspective

Business folks of course are more concerned about time to market and really are just looking at everything being done as quickly as possible so that whatever is contingent on the work being complete can proceed.

Business folks think the following things that are also somewhat true:

  1. Quality can slip a “little bit” to hit target dates (it’s okay to take on debt to make a sale)
  2. I’m not sure I trust the software people with their computer wizardry
  3. In previous situations they were able to hit the date despite saying they couldn’t, they just need to work harder
  4. This is the most important project at the company and they are the highest paid people, it’s okay if they work harder

Yes, quality can slip a little bit to hit a date, and yes, the project is likely important and yes, you likely pay these software folks quite well compared to other roles in your company.

However, a lack of trust is not really a good reason to assume that your software group is dragging their feet and despite the high pay, sometimes equipment and resources are still an issue. Perhaps better IT infrastructure, better tools, or additional personnel could complete a project more quickly and if something is truly that important to your business perhaps spending more money to acquire additional resources is the answer.

Fortunately, business people actually have a good understanding of debt and borrowing. Unfortunately, they likely don’t have the technical expertise to understand if the debt they’ve accumulated is a little or a lot. Determining the amount of debt the business has is something that requires a good working relationship, even trust, between the business and engineering sides of an organization.

Addressing Technical Debt

The path towards responsible tech debt financials lies somewhere between these two view points. Typically the business side and software side are both looking out for the best interests of the business or project of which they’re a member. I think that the appropriate response to the presence of technical debt on your team is the following:

  1. Carve out a portion of your software team’s time to pay down technical debt. You want to pay down as much of the principal part of the loan as you can, so that you are not just paying interest. To do this I advocate treating your engineering / software team as an equal stakeholder in the business in terms of time allocated.

    For example, if you currently work on 4 projects at a time, one for each of 4 business units at your company, you first need to recognize that there are actually 5 “real” projects you need to be able to tackle and that a portion of hours from the 4 projects needs to be allocated to this fifth, home improvement project.

    e.g.: if you’re currently allocating 25 hours a week to each of the four projects, I’d reduce each of those projects hours from 25 to 20 and run 5 projects at 20 hours a piece.

  2. Prioritize technical debt tasks. Estimate them and treat them like real work. The only thing really different is that the stakeholder for this work is the engineers, not some customer or project manager elsewhere in your company.

  3. Create a “home improvement” rotation or otherwise allocate this work to members of your software team.

  4. Demo this debt retirement work to your software team, and where appropriate, to the rest of the business. Better quality, faster, and/or easier to use features should be relevant to all and something that should be recognized and positively reinforced.

Twisted python development server with restart on code change

I really enjoy working with the Twisted framework because it allows me to easily and cleanly blend multiple protocols and services together in a single application.  However, when I’m developing and testing a Twisted-based server it is sometimes inconvenient to have to manually restart the server after each change.

Anyway, to that end I wrote a little wrapper script to run in place of the Twisted command-line tool ‘twistd’ called ‘twistr’.

Twistr very simply wraps ‘twistd’ and restarts the twistd process on changes.
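In rough outline (the published script differs in its details, and the helper names here are just illustrative), the idea looks something like this: poll the modification times of your .py files and bounce a non-daemonized twistd whenever one changes.

```python
import os
import time
from subprocess import Popen

def snapshot(root="."):
    # Map each .py file under root to its last-modified time.
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                mtimes[path] = os.path.getmtime(path)
    return mtimes

def run_until_changed(argv):
    # Launch twistd in the foreground and block until any source
    # file changes, then terminate the server.
    server = Popen(["twistd", "-n"] + list(argv))
    before = snapshot()
    while snapshot() == before:
        time.sleep(1)
    server.terminate()
    server.wait()

def main(argv):
    # Restart forever: each time the server is torn down because of
    # a change, immediately bring a fresh one up.
    while True:
        run_until_changed(argv)
```

Polling once a second is crude but portable; real filesystem notifications (inotify and friends) would be the fancier way to do it.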

If this is interesting or helpful to you feel free to check it out:

$ pip install twistr

… should do the trick.

Starcraft Emacs Mode (Or How to make an Emacs Minor-mode)

A friend of mine once said that he was terrible at Starcraft because his “APM” was too low.  APM, if you’re not familiar with the acronym, stands for “Actions Per Minute” and refers to keypresses or mouse clicks per minute while playing the RTS-turned-sport, Starcraft.

My friend continued saying that he actually thought his APM while programming was quite high, certainly higher than anything he could manage playing Starcraft.  Of course, having nothing better to do, I hacked up an Emacs minor mode that would track all your key presses within Emacs and total up an APM score for you.

starcraft.el in ‘action’

Yes, that “sc” in the modeline stands for Starcraft.  Install starcraft.el in your Emacs configuration and find out how hard it is to inflate your actions per minute while writing code in Emacs compared to spamming hotkeys in Starcraft 2.

Really, I just wanted to do a more substantive bit of Emacs Lisp and this was a fun excuse to do so.

You can download and play with the mode here on Github.



Diablo 3 Beta Impressions

I recently got access to the Diablo 3 Beta test and have now logged upwards of 30 hours across two builds of the game (Patches 12 and 13).

Assuming you’re already familiar with Diablo I won’t waste your time.  It is the most Diablo-licious game to come out since Diablo 2.  Odds are very high that if you liked that game you’ll like this one too.  The music, environments, and atmosphere all evoke nostalgia for the earlier games while adding some nice new touches like destructible environments and more dynamic monster attacks.

The gameplay is the same mix of hack-n-slash and random loot that was so addictive in D2 and WoW.  As a recovering WoW player I found it entirely too comfortable to slip into old patterns and start min-maxing and hunting for new bits of gear to improve a particular slot.  This seems to be really helped by the addition of the AH as I was able to quickly spend some of my accumulated gold to buy some real upgrades.

The story starts off slow with the return of some elements and characters you’ll remember well from D1 and D2.  Yes, Deckard Cain once again thinks you should stay awhile and listen.

Perhaps the strongest compliment I can offer is that unlike Torchlight, which after the first 10 or so levels left me thinking “whatever” and turning the game off, I felt compelled by Diablo 3 to continue playing, tweaking my character, leveling up to get access to the next ability so I could try new strategies and new weapons, etc.  The story in the demo only hints at the real plot line of D3 but even a brief glimpse made me really wish I had the full game right there and then.


While I haven’t played Diablo 2 in a long time, here are the big changes from my perspective.

  • New classes (4 new ones and the 5th is the returning Barbarian), and you have a little more customization of appearance because you can select a gender (female Barbarians are awesome!)
  • New resource generation mechanics:  This will be pretty familiar to folks from WoW or who enjoyed the assassin from D2.  Every class has a different resource mechanic.  Barbarians build rage by striking enemies with certain attacks and then spend that rage to deal extra damage or perform special abilities (shouts, leaps, etc).  Monks have ‘spirit’ which regenerates to full but can also be recharged / spent using different abilities.  Demon Hunters have the dual resources of “hatred” and “discipline”, Sorcerers have energy, and Witch Doctors have the old-school mana mechanics.
  • No stat point allocation: D2 required you to spend time putting points into stats like Strength, Dexterity, etc.  Of course there were typically one or two ideal builds and everything else was kind of gimpy due to the nature of the best gear and abilities for your class.  I think Blizzard (rightly, in my opinion) identified that while die-hard players may enjoy rerolling characters constantly, that level of punishing, fiddly customization is not really giving you more choice but instead just more chances to do things wrong.
  • No talent trees: Instead, abilities unlock as you level up.  You also unlock ability slots, so the number of active abilities is initially limited but eventually expands to 6(?).  You can change abilities at any time.  Respec’ing your character just has a brief (15-30s) cooldown to force you to think ahead a bit.  Basically you don’t need to reroll your character after spending a few points on a skill that sucks.
  • In-game gold / $ Auction House:  You can now buy and sell items using in-game currency or actual real-world currency.  I like the notion of easy trading and a way to find items that just never seem to drop for you.
  • Online Requirements

So the class and resource changes are great.  I don’t love all the classes, but that’s always the case in games like this.  The sorcerer, demon hunter, and barbarian are all really fun to me, so the fact that I think the monk is sort of a wet blanket and the witch doctor is kind of gimpy isn’t really a big deal.  I’m sure others will love the monk and WD.

The changes to stat points and abilities seem to be pretty controversial to hardcore D2 fans, but as someone who enjoyed but didn’t love D2 I think it’s a HUGE improvement.  I will be able to play Diablo 3 and experiment with builds without fear that I’ll completely screw myself over in the harder difficulties and higher levels with a bad decision made on normal difficulty at level 10.

So while some are bemoaning the lack of permanent build decisions I’m really enjoying the freedom to customize and experiment.


With the real-money Auction House this will be the first Blizzard game to include micro-transactions.  I’m not entirely sold on the idea of buying and selling digital equipment for $, but I realize what Blizzard is trying to do: they’re looking for ways to cover the costs of continued development and support over the life of a game like Diablo 3.  On every real-money transaction Blizzard will take a small “listing fee”, which across millions of players and auctions should begin to add up.  Hopefully this means that D3 will be a more dynamic game with more frequent changes and new content.


I am concerned that with some of this simplification there’s perhaps something missing after you’ve played through the game a few times.  I think many players are concerned that the game won’t have the depth of Diablo 2.  While I share the concern that the game may be a bit simplified in beta (or even at release), I would encourage people to look at every other Blizzard game, including Diablo 1 and 2.  They all went through large changes via patches and expansions.  I also doubt we’ve seen all the tricks that D3 has up its sleeve at this point.

However, I do have one major concern about Diablo 3:  The requirement for internet connectivity for every aspect of the game.

I realize that part of why Blizzard wants the game to always be online is to help deal with item hacking, cheating, etc.  I realize that they also want to prevent piracy and requiring folks to have an account and login to Blizzard is a way to do that.

Unfortunately, unlike Starcraft 2, which also requires players to log in to their account to play, Diablo 3 seems to have serious gameplay lag when I’m playing a private game (aka single player) while I’m uploading or downloading large files.  In SC2 I may still see lag logging in to the game servers, and I may even be prevented from playing if I have no internet connection, but at least once I’m in a single-player game there is no lag.  Downloading large files and playing the Starcraft 2 campaign are completely compatible.

However, I was recently attempting to play the Diablo 3 beta while simultaneously downloading The Witcher 2 from Steam (great game, by the way! :-)).  As I was playing I found that the straight-from-WoW connection status bar was red (indicating high latency).  Thinking that the connection to Blizzard’s servers was primarily for login, save games, loot randomization, monster spawns, etc., I figured there would be no problem actually playing the game, just some delays getting started.  Instead, I would walk my character up to a monster, click to attack, wait a second for the attack animation to play, wait another second for the monster to recognize that the attack had taken place, and then another second for the monster death animation and loot popup.

Essentially, the game is like WoW in that every action is first sent to the server and validated.  I realize this is the right decision for a purely multiplayer game.  However, I think there should be a way to play Diablo 3 as a single player game in lower bandwidth situations without completely compromising the gameplay experience similar to the way Starcraft 2 seems to work.  Ideally there would be some way to play without any internet connection but I doubt that’s going to happen.


I remain excited for Diablo 3, as it is the most Diablo-ey game I’ve played in a long time (yes, I’ve played Torchlight!) but I’ve had to readjust my expectations to think of it as more of an MMO style game.  

Using Jinja2 from Twisted

txTemplate is simply a set of adapters for some popular templating engines like Clearsilver, Genshi and Jinja2 to give them a consistent deferred-returning API.  I originally hacked this up because I used Twisted and Genshi to replace a Webware and Kid application.

You can find the package here:

And the source for anyone interested in hacking on it is hosted on Github.

Assuming I remain interested in hacking on this, future enhancements will include a better incremental interface for rendering large templates.  The current API generates the template asynchronously but doesn’t provide a good way to write the response to clients in chunks; the assumption is that as long as you can buffer the template in memory you can chunk the response to the clients.  Also, if you begin sending the template to the client before rendering finishes, it’s possible that you’ll hit an error mid-render and send only a partial response.

Playing Words with Friends

Words with Friends
Words with Friends, if you’re not already familiar, is a game of Scrabble that lets you play asynchronously with friends over the internet.

I recently started playing WoF because I like to think I have a large vocabulary and rarely get the opportunity to use it.  I also enjoy playing boardgames like Scrabble but between work and … sleep… I don’t often get the chance to get together with my friends and play Taluva into the wee hours.

Anyway, I have a confession to make.

I’m terrible at Words with Friends
Whenever it is my turn I look at my hand of letters:  ILNIREE
And think:  ILNIREE is not a word.  Darn.
And then I start to mentally rearrange letters to see if I can make other words.  WoF conveniently includes a “Shuffle” button so I hit that a few times…  And while, given enough time, I do come up with a word, a few things bug me about this process.

First of all, years of Google and Wikipedia use have atrophied important muscles like “memory”.  Second, I’m a programmer and that means: laziness.¹

Basically, shuffling letters and checking or guessing that they are words is a perfectly straightforward, although not terribly efficient, way to try to optimize your Scrabble game.  Of course, it’s got a lot of drawbacks if you try to do it by hand: it’s slow and it’s uninteresting.  To find every word possible with any of your 7 letters you’d need to try all permutations, and there are over 10,000 permutations of 7 letters.

Anyway, I’m bad at Words with Friends but I like programming so once I started playing WoF of course I gravitated towards…

Solving Words with Friends²
Boring and repetitive work is exactly what computers are good at!  In fact the Python program to generate all possible permutations of 7 letters (1-letter words, 2-letter words, 3-letter words, etc.) is pretty straightforward, especially with a little help from our friend itertools.
My first and simplest attempt involved putting the contents of the standard unix dictionary (/usr/share/dict/words) into a python dictionary (an excellent hash table implementation). 
This isn’t exactly what I came up with first but it should illustrate what I’m talking about:
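A minimal version of that first attempt might look like the following (the helper load_words and the exact details are illustrative; getScrabbleLetters is the function referred to below):

```python
from itertools import permutations

def load_words(path="/usr/share/dict/words"):
    # A Python set is a hash table, so membership tests are O(1).
    with open(path) as f:
        return set(line.strip().lower() for line in f)

def getScrabbleLetters(letters, words):
    # Brute force: generate every permutation of every length and
    # keep the ones that turn out to be real words.
    found = set()
    for length in range(1, len(letters) + 1):
        for perm in permutations(letters.lower(), length):
            candidate = "".join(perm)
            if candidate in words:
                found.add(candidate)
    return found
```

With a hand of duplicate-free letters this checks every permutation exactly once; the set of results deduplicates any repeats.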

Of course you end up looking at a lot of permutations for 7 letters (~13 thousand) so this can take a little while. It’s still not slow on any reasonable computer but when you start adding in other letters to try and find words that might fit in with available spots on the board things can slow down. For example, say you could play words that begin with t, l, or r. Running the above getScrabbleLetters function for 10 letters requires you to check 9,864,101 permutations.

Worse still, if you add in the notion of wildcards (9 letters plus some character like ‘*’ to represent a wildcard) you end up repeating those over 9 million lookups 26 times to check all the other possible letters of the alphabet in the wildcard slot.  That’s over 250 million dictionary lookups to find a fairly small list of unique words.

However, if we step back for a moment and think about anagrams, a simple tweak can help process larger strings (and strings with wildcards) much more quickly.

Imagine you’ve got a couple anagrams like “was” and “saw”.  They’ve got the same letters but count as two different words in our dictionary and require two separate lookups.  If we shift our perspective a little bit and start thinking about the letters as what’s important we get some very nice performance improvements.

We still use a hash table as our primary data structure but instead of using the words as the keys we use the sorted letters of a word as the key.  The values become lists of all the words which include those same letters.  It’s a little bit more work to insert, as we first have to calculate what the key would be, and it’s a little more work to lookup, as we have to iterate over the list of possible words.  But it’s great any time we care just about anagrams and it’s much more efficient in terms of the number of lookups required to find all the scrabble words.  For example, for the same 10 letter case from earlier we now need only 1024 lookups.  ~26 thousand lookups for the wildcard case.

Without further ado that solution looks like this:
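Here is a sketch of that anagram-keyed version (names like build_index are illustrative rather than the original code):

```python
from itertools import combinations

def build_index(words):
    # Key each word by its sorted letters; anagrams like "was" and
    # "saw" share the key "asw" and live in the same bucket.
    index = {}
    for word in words:
        index.setdefault("".join(sorted(word)), []).append(word)
    return index

def find_words(letters, index):
    # Combinations of an already-sorted string come out sorted, so
    # each one can be used directly as a key: at most 2**n - 1
    # lookups for n letters instead of millions of permutations.
    found = set()
    letters = "".join(sorted(letters.lower()))
    for length in range(1, len(letters) + 1):
        for combo in combinations(letters, length):
            found.update(index.get("".join(combo), ()))
    return found
```

The extra work at insert time (sorting each word) buys a dramatic reduction in lookups at query time, which is exactly the trade you want since the dictionary is built once and queried constantly.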

One other really useful bit is a quick way to look for all the words that have a given suffix or prefix. This is a little trickier than the anagram case but it’s also more interesting because I got to spend a little time reading about a new data structure, a Trie. A Trie is a tree-like structure optimized for storing strings.  The root node of the tree represents the empty string and each of its children represents the first letter of a word.  Every child of these nodes is the next letter in a word starting with that substring.  

For example, a trie of nodes for the words “an apple yum” would look like:

    (root)
    ├── a
    │   ├── n*
    │   └── p ── p ── l ── e*
    └── y ── u ── m*

(an asterisk marks a node that completes a word)

So the trie solution for finding words that begin with a certain prefix is:
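Something along these lines (an illustrative sketch rather than my original code):

```python
class TrieNode(object):
    def __init__(self):
        self.children = {}   # letter -> TrieNode
        self.is_word = False

def build_trie(words):
    root = TrieNode()
    for word in words:
        node = root
        for letter in word:
            node = node.children.setdefault(letter, TrieNode())
        node.is_word = True
    return root

def words_with_prefix(root, prefix):
    # Walk down to the node that represents the prefix...
    node = root
    for letter in prefix:
        if letter not in node.children:
            return []
        node = node.children[letter]
    # ...then collect every word in the subtree beneath it.
    found = []
    def collect(node, word):
        if node.is_word:
            found.append(word)
        for letter, child in node.children.items():
            collect(child, word + letter)
    collect(node, prefix)
    return found
```

For suffixes, you build a second trie out of the reversed words and search it with the reversed suffix, un-reversing the results at the end.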

The suffix version is just a couple more lines with judicious use of the reversed function.

Conclusion and Next Steps
I think that Words with Friends is a lot more fun now that I have my computer as a crutch.  I’ve enjoyed playing with some simple algorithms for finding anagrams and looking up strings.

My next steps after today are going to be putting up my utilities here on my website so that I can share them with my non-programmer friends.  Building a simple web service for solving/cheating at scrabble is a hopefully interesting topic in its own right so building that will probably be my next blog post.

After that I’m interested in taking this to its logical conclusion and actually building (and blogging about) a scrabble solver where you can enter in the whole state of the scrabble board and your hand and have it tell you all your possible moves (or at least the subset of moves that give you higher scores).

If you made it this far you might be interested in the actual tested source code here on github.

¹ Larry Wall calls this the first great virtue of a programmer.
² Perhaps a more honest heading here would be “Cheating at Words With Friends” :-)

Emacs Archaeology

Recently a friend asked me for my Emacs configuration. RMS would excommunicate me from the church if I didn’t make my configuration available to any who asked so, of course, I started packaging it up and shipping it off to my friend. However, to help my friend it seemed reasonable to explain what all I’ve accumulated over the last few years, highlight some areas she might want to customize differently, and to point out some modes and tools I find particularly helpful in my day to day work.

After opening up a terminal and poking around in my ~/etc/emacs.d directory I started to realize that after all this time I wasn’t entirely familiar with my Emacs configuration either. Of course, no one has any geek cred at all if they can’t explain how their editor is configured, so I started digging in to remind myself what the heck all this stuff does.

Here are a few numbers. Over the years I’ve written ~1500 lines of elisp code spread across 27 *.el files and accumulated another ~60,000 lines of dependencies (mostly downloaded from EmacsWiki and/or Emacs Fu).

In any event, over the next few weeks I’ll be cleaning things up a bit and posting more about some of the things about Emacs I really love.

If anyone’s interested in my current configuration I have a bitbucket repo here (maybe I should move it to github?) with all of my dotfiles, including the emacs stuff, if you’d like to take a look.

Recipe: Programmatically Creating and Updating AWS security groups

I think I’ve rewritten this code 3 times now in the last year so it seems prudent to save it somewhere.  If other folks find it useful that’d be great.

The problem is a simple one.  You’re looking to set up and install a few machines on EC2, perhaps to run something fun like a Cassandra cluster.

Typically it’s really tempting to just set up the security group once and never ever touch it again.  I’d log into the AWS console, and, following along with this datastax guide, manually set up the group, launch instances, etc.

However, without automation there’s some duplication of effort whenever someone on your team sets up a cluster, plus the possibility of user error in setting up security groups.  And of course we’re already automating the other important bits like “launch a new instance” or “run a backup”, so why not manage security groups with the same scripts?

I’m currently working with Fabric to automate EC2 stuff so I pulled out the Python code I’m using to handle creation of security groups and permission rules within those groups.

The script attempts to be idempotent: simply rerunning it will, only if necessary, create groups, revoke old rules, and authorize any new ones.

Anyway, without further ado here’s the script:
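The full Fabric/boto script is longer than what fits here, but its idempotent core can be sketched as a pure function that diffs the group’s current rules against the desired ones (the tuple layout and the name plan_changes are my own illustration, not the script’s actual API):

```python
def plan_changes(current_rules, desired_rules):
    """Diff two collections of (ip_protocol, from_port, to_port, cidr)
    tuples.  Returns (to_revoke, to_authorize); both come back empty
    once the group matches, which is what makes a rerun a no-op."""
    current, desired = set(current_rules), set(desired_rules)
    to_revoke = current - desired
    to_authorize = desired - current
    return to_revoke, to_authorize
```

In the real script each entry in to_revoke / to_authorize is then applied to the group with boto’s revoke/authorize calls, so running the script twice in a row performs no AWS API mutations the second time.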

Simple Nosy Script: Personal Continuous Integration for TDD

I recently cleaned up and resurrected my old nosy script.  These days there are a few alternatives on PyPI as well though I prefer mine.  The whole concept of nosy is simply to rerun the tests whenever the code changes.  Personally, I find a script like this is really helpful for maintaining flow while doing test driven development.

What I like about this nosy script is that it allows me to basically just say “run this nosetests / trial command every time the code changes” and nothing else.  There’s no config file to set up or any tool-specific arguments.  You just need to know how to use your test runner.

Here’s the code in case anyone is interested:
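A stripped-down sketch of the idea (the real script has more niceties; the names here are illustrative):

```python
import os
import time
from subprocess import call

def checksum(root="."):
    # Sum the modification times of every .py file under root; any
    # save bumps an mtime and changes the total.
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                total += os.path.getmtime(os.path.join(dirpath, name))
    return total

def nosy(command, root="."):
    # Run the given test command (e.g. "nosetests" or "trial mypkg")
    # once at startup and again every time the code changes.
    last = None
    while True:
        current = checksum(root)
        if current != last:
            last = current
            call(command, shell=True)
        time.sleep(1)
```

Because the command is just a shell string, any test runner works without the script knowing anything about it.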

Oh, and despite the name I’ve used it successfully to work on Twisted projects with trial.  I’m also willing to bet that it would work just fine with py.test.

Edit:  The polling loop with a time.sleep(1) is eating away at me now that I’ve posted this.  I’m thinking that the only way to live with myself is to replace it with listening for real filesystem events à la inotify… So to ease my conscience I’ll see about doing a followup post to show what the script would look like with filesystem events instead of the polling loop.

Git Pre-commit Hook For Python now with extra Python!

After reading a post by Bryce Verdier, and inspired by comments that suggested the Python version of said hook script would not be as nice as the bash version I decided to hack up a quick python version of the same script using pyflakes instead of pylint.

#!/usr/bin/env python
#-*- mode: python -*-
from subprocess import Popen, PIPE
import sys

syntax_checker = "pyflakes"

def run(command):
    p = Popen(command.split(), stdout=PIPE, stderr=PIPE)
    out, err = p.communicate()
    return p.returncode, out, err

_, files_modified, _ = run("git diff-index --name-only HEAD")

exit_status = 0
for fname in files_modified.split():
    if fname.endswith(".py"):
        print >>sys.stderr, "Checking syntax on %s: ... " % (fname,),
        exit_code, _, errors = run("%s %s" % (syntax_checker, fname))
        if exit_code != 0:
            print >>sys.stderr, "\rChecking syntax on %s: FAILED! \n%s" % (fname, errors)
            exit_status = 1
        else:
            print >>sys.stderr, "\rChecking syntax on %s: OK!" % (fname,)

sys.exit(exit_status)

You can download / fork this here if you would like to give it a try.  And of course, if you’re like me and you have no idea what to do with this script you can just do the following:

cp <downloaded-script> YourGitProject/.git/hooks/pre-commit
chmod +x YourGitProject/.git/hooks/pre-commit

It’s also worth noting that this version is currently really strict: ANY warnings will cause your commit to fail.  Of course, replacing pyflakes with pylint again is a simple modification of the syntax_checker variable in the above script.