Thoughts of Geoff

Some writing by Geoff Petrie

Python Dictionaries, PHP Associative Arrays and Python Objects

If you read my previous post on the challenge of education as a developer, you’ll know about my interest in the Python language and syntax. I’ve had a reason to look at Python again recently, and it’s pretty amazing how much of the language I had forgotten.

Now I’ll admit that my understanding of object models and design patterns isn’t the finest. But as I was going through my Python refresher I had a bit of an aha moment.

It was this line of code that struck me as pretty significant:

return ";".join(["%s=%s" % (k, v) for k, v in params.items()])

This code came from the Dive into Python website.

While it doesn’t look remarkable on the surface, once you know that everything in Python is an object (as in Ruby and JavaScript), you’ll appreciate how cool that line of code is. Specifically, the ";" piece of it.

That .join method is called on that semicolon; the string literal itself is the object. That’s pretty cool.
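To make that concrete, here’s a minimal sketch (mine, not Dive into Python’s) showing that the ";" literal really is a full-fledged string object:

# The literal ";" is a str object, so .join is an ordinary method call on it.
params = {"server": "gpetrie", "database": "localhost", "uid": "sa", "pwd": "secret"}

print(";".join(["%s=%s" % (k, v) for k, v in params.items()]))

# The same call written against the str type itself, which makes the
# object/method relationship explicit:
print(str.join(";", ["%s=%s" % (k, v) for k, v in params.items()]))

# Note: the order of the pairs depends on your Python version; older dicts
# make no ordering promise, while modern Pythons preserve insertion order.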

To demonstrate the importance of this piece, I decided to see how I could actually produce the same results that line of code does in PHP.

So that you have the whole picture: the params variable is a dictionary (aka an associative array in PHP) created in a small module. The variable is:

params = {"server":"gpetrie", \
          "database":"localhost", \
          "uid":"sa", \
          "pwd":"secret" \
         }

The output from return ";".join(["%s=%s" % (k, v) for k, v in params.items()]) is:

pwd=secret;database=localhost;uid=sa;server=gpetrie

So the piece we’re going to focus on is how that last pair is printed. You see that? server=gpetrie. There’s no semicolon at the end of it. That’s because the semicolon is the object the .join method is called on, and join only ever puts the separator between elements, never after the last one.

How can I produce this same output using PHP and an associative array?

If I were to do something like this:

$myParams = array("server"=>"gpetrie", 
                  "database"=>"localhost", 
                  "uid"=>"sa", 
                  "pwd"=>"secret");

to initialize the PHP script, and then follow it up with something like this:

$str = "";

foreach ($myParams as $k => $v) {
  $str .= "$k=$v;";
}

print_r($str);

We wouldn’t get what we wanted. We’d get this:

server=gpetrie;database=localhost;uid=sa;pwd=secret;

That semicolon at the end. That’s the problem.

So we try something a little more complex:

$myParams = array("server"=>"gpetrie", 
                  "database"=>"localhost", 
                  "uid"=>"sa", 
                  "pwd"=>"secret");

$str        = "";
$arrayCount = count($myParams);
$arrayKeys  = array_keys($myParams);

for ($i = 0; $i < $arrayCount; $i++) {
  $arrayKey = $arrayKeys[$i];
  $str .= $arrayKey . '=' . $myParams[$arrayKey] . ';';
}

print_r($str);

But it doesn’t work either. We get:

server=gpetrie;database=localhost;uid=sa;pwd=secret;

as our output.

That semicolon!!!

Now, I could be way off here. There is likely a far more elegant solution than mine, but this is what I came up with to produce the same output as the one line of Python:

$myParams = array("server"=>"gpetrie", 
                  "database"=>"localhost", 
                  "uid"=>"sa", 
                  "pwd"=>"secret");

$str        = "";
$arrayCount = count($myParams);
$arrayKeys  = array_keys($myParams);

for ($i = 0; $i < $arrayCount - 1; $i++) {
  $arrayKey = $arrayKeys[$i];
  $str .= $arrayKey . '=' . $myParams[$arrayKey] . ';';
  unset($myParams[$arrayKey]);
}

// Only one pair is left in $myParams at this point, so key() finds it;
// append it without a trailing semicolon.
$str .= key($myParams) . '=' . $myParams[key($myParams)];

print_r($str);

And now we get the result we want:

server=gpetrie;database=localhost;uid=sa;pwd=secret

No semicolon at the end.

Now don’t misunderstand the point of this. This isn’t a Python-versus-PHP thing. This is just to show the interesting power of having everything be an object in Python, and how much lighter your code can be because of it. Of course, if you have a better solution than mine, please let me know. I’d love to see it.
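One such cleaner possibility, for the record, is to lean on PHP’s implode(), which, like Python’s join, only puts the separator between elements and never after the last one:

$myParams = array("server"=>"gpetrie", 
                  "database"=>"localhost", 
                  "uid"=>"sa", 
                  "pwd"=>"secret");

$pairs = array();
foreach ($myParams as $k => $v) {
  $pairs[] = "$k=$v";
}

// implode() joins the pieces with the separator between them,
// so there is no trailing semicolon to strip off afterward.
print_r(implode(';', $pairs));

That prints server=gpetrie;database=localhost;uid=sa;pwd=secret with no cleanup loop required.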

Git Tip: Use Prune to Update Local Repo Branches

I have multiple places where I work and I use Git to support that. The general Git workflow I’ve adopted is similar to the one Scott Chacon describes.

In Scott’s post he explains that anything in master is deployable, and any time you want to work on something you branch from master with something descriptively named. This is what I want to talk about now.

I use this process of branching to keep my changes manageable. By naming my branches descriptively, I know that if I’m building a new user-profiling tool, changing the CSS of a navigational piece, and updating a SQL query all in the same branch, then the history isn’t going to be terribly clean. Plus, the branch has no meaning to me when I get feedback from a tester. This branch would make even less sense if I had someone else coding with me.

To keep branches meaningful I make a lot of branches off of master. I’ve come to the standard practice of pushing pretty much any branch I’m working on, no matter how minor, to the remote. This makes it simple to fetch and merge on my dev server instance instead of scp’ing the work I’ve done to the server. The process just feels a little crisper.

After the testers have given the thumbs-up on the work and I’ve done my final code review, I merge the branch to master, push it out to production and delete that feature/bugfix branch. All in all, it’s a generally good system.
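As a rough sketch of that cycle (user-profile-tool is a made-up branch name here):

git checkout -b user-profile-tool    # branch descriptively off of master
git push -u origin user-profile-tool # push even minor branches to the remote

# ...work, commit, get the tester thumbs-up, do the final review, then:
git checkout master
git merge user-profile-tool
git push origin master

git branch -d user-profile-tool      # delete the local feature branch
git push origin :user-profile-tool   # delete its counterpart on the remote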

But what inevitably happens is that the dev server accumulates a bunch of remote-tracking branches that are no longer useful because their counterparts have been deleted from the remote. It gets to be a bit of a mess when you run git branch -r and see a dozen stale branches sitting there.

To fix this issue I found a great, quick command:

git remote prune origin

Where origin is the name of the remote.

From the manual:

prune

git remote prune [-n | --dry-run] <name>

Deletes all stale remote-tracking branches under <name>. These stale branches have already been removed from the remote repository referenced by <name>, but are still locally available in "remotes/<name>".

With --dry-run option, report what branches will be pruned, but do not actually prune them.
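In practice, a quick cleanup session on the dev server might look something like this:

git branch -r                      # list remote-tracking branches, stale ones included
git remote prune --dry-run origin  # preview which stale branches would be removed
git remote prune origin            # actually remove them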

prune is one of those commands that you may not use very often, but it is nice to know it’s there when you need it.


Endnote:

I found this particular Stack Overflow answer useful when I was looking for information on how to handle dead remote branches.

Image “Four volunteers prune some plants that were growing over the trail.” from United States Government Work

On the Problems of Education in a Developer’s World

or,

How the Learning Voice Needs to Roundhouse Kick the Work Voice in the Face

Image “N.Y. schools opening”

Preface:

As a little background, I wrote the post below about a week or two ago. I’ve been sitting on it since.

The gist of the post is about my frustration in trying to get better or even keep up with skills and tools. I was blaming my deadlines and commitments for getting in the way.

While I was writing this post I came to the realization that only I could really make a difference in the jammed feeling I had. Deadlines will never go away. Commitments will never end. If I really wanted to pick up and understand something, then I had to make that happen for myself. No one else would do that for me.

So in the past couple of weeks I started to look at Twitter Bootstrap and then rebuilt my personal site on it. (Bootstrap is amazing. I can’t recommend it highly enough.) I took a couple of hours to get Octopress in place and moved my more substantial writing over from my old Tumblr to right here. Additionally, I have started to take a more serious look at Python again. I also started building, and putting up on GitHub, a little Day One journal service pack that I’m having a ball with. Finally, I’m seriously looking at Canvas and trying to figure out a way to prototype a Canvas web app that uses WebSockets.

This effort has significantly improved my spirit. I’m feeling creative and rejuvenated. While the deadlines haven’t gone away, I feel like I’m not deadline bound in my day any more. There are tons of new project possibilities out there now. It’s invigorating.

I had considered dumping this post entirely, but I thought it might be worth posting just to see where my head was at when I wrote this a few weeks back. I know I’m not the only person who gets in these sorts of ruts. Maybe this will resonate with someone else and they’ll be able to pull themselves into their creative zone like I seem to be doing now.

Adding a CNAME to Your Octopress Blog

This option isn’t terribly well explained in the Octopress documentation, and it’s slightly confusing on the GitHub side as well.

To get blog.geoffpetrie.com to work as the URL for my Octopress blog, hosted by GitHub at geopet.github.com, I needed to start with my current hosting service. In my case this is currently Dreamhost.

Since I wasn’t changing my root domain, and I wanted to keep geoffpetrie.com pointed at my hosting service, I only needed to add a subdomain, i.e., blog, to the domain name server (DNS) run by Dreamhost.

I went into the DNS configuration in the Dreamhost control panel and added blog as the name/record, CNAME as the type, and geopet.github.com as the value.
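In zone-file terms (every DNS host presents this a little differently), the record looks roughly like:

blog.geoffpetrie.com.    IN    CNAME    geopet.github.com.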

That’s the first step; once the DNS change propagates, your new subdomain (in my case blog.geoffpetrie.com) will start pointing at a GitHub 404 page.

The next step is to add a CNAME file to your Octopress repository. This is surprisingly simple, but not completely intuitive. From the top level of the source branch, use the command:

echo 'blog.geoffpetrie.com' >> source/CNAME

Of course, you’re going to use your own subdomain instead of blog.geoffpetrie.com. This command creates the CNAME file in your source directory with the URL you want people directed to.

After this, all that’s needed is rake generate and rake deploy. (You may as well commit your source branch while you’re at it.)
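Putting the whole thing together, the sequence looks something like this (with your own subdomain, and the commit message is just an example):

echo 'blog.geoffpetrie.com' >> source/CNAME
rake generate
rake deploy
git add source/CNAME
git commit -m 'Add CNAME for custom subdomain'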

Wait a couple of minutes for things to work their way through DNS and Github’s world and you’ll be looking at your Octopress blog on your own domain.

Starting Again With Octopress

Moving my Tumblr to Octopress turned out to be a bit more of a chore than I had initially expected. My journey into Jekyll required more time than I could commit at this very moment, so I took a quicker path to a better blogging workflow by implementing some really exceptional work done by Brandon Mathis in Octopress, as recommended by my friend Brian Arnold.

Still, this is a new application that I haven’t had a chance to really kick the tires on. I tried to grab the less family-specific, more bloggy pieces from the Tumblr I kept. There is more work to be done on that front. For one, I need to recover some of the original images for the technical pieces I took screen captures of. In the meantime I’ll see if I can get URLs back to the original Tumblr pieces if there is any interest in that at all.

While I would love to spend time tweaking this install of Octopress, I suspect that I’ll spend the time I do have writing rather than tweaking for the moment. Forward progression is key for me right now, even though this is something I should dive deeper into.

If I ignore what I said above and fall into the abyss, I will post the tricks I find along the way.

On Sparrow and Google

So at the end of last month (July 20, 2012) there was a disturbance in the indie-developer force when Sparrow, the great iOS/OS X Gmail-centric email client, was acquired by Google.

I’ll provide the text of their announcement at the end of this post, which you can (at the time of this writing) also find on their site. The gist of the press release was that they’d been bought by Google, that they’d ship one more bug fix, and then they’d stop working on the Apple products and start working on other Gmail stuff for Google. As a side note, it was amusing to see the long list of “advisors” and investors that they thanked as well.

Of course my initial reaction was disappointment. The iOS app they built was light-years ahead of the native Apple client, and I found the desktop client a nice departure from OS X Mail. I was also excited when I saw they were heading toward a Sparrow iPad app. But, you know, Google apparently offered them all jobs and something around $25 million for them to stop working on their product.

A large part of the chatter regarding this buyout was the argument that it is really hard to make it as an independent developer. There was also talk about how $25M really isn’t that much money when all is said and done. And how this was something along the lines of a talent acquisition. I would agree with all of this.

But after the dust settled, what we still have is a great small software group that was given enough to stop building for Apple. And this got me thinking. You’ll probably think I’m crazy, but I started to wonder: could this be a new business model for Google?

And before you call bullshit on me, just consider it for a moment. Google is lousy at UI. Just look at how they screwed up Gmail and Reader. Sparrow is beautiful. It’s one of the reasons I enjoyed using it so much. Google is also working their asses off to make Android a legit contender to iOS. Some will say that Android is already competitive with iOS. I’m not going to waste my time with this argument. Furthermore, Google’s Android OS has a huge piracy issue on their hands right now, so many devs aren’t interested in spending their time to watch their work be stolen by douchebags who won’t drop $3.99 for their year and a half of work.

So if you were one of the most wealthy companies on the planet and you were losing out due to lack of developer interest and a general lack of impacting talent in the UI department, why wouldn’t you bring excellent UI and mobile app developers in house to make kick ass products for your own OS? Why wouldn’t you start to cherry pick some of the exceptional talent that is out in the iOS space?

I think it is possible that this is the beginning of a few other talent grabs by Google. I wouldn’t be shocked if we saw the end of Reeder and possibly Instacast (although I prefer Downcast for podcast players but Instacast seems to get the lion’s share of mentions), and I hope that the Tapbot folk or the Day One guys don’t get caught in this as well. But by taking on this model, Google gets rid of the lousy Android piracy issue by giving these developers a massive hiring bonus and it gives them a regular paycheck for as long as they want to be Google employees. They never have to sell another app in a market again, but they get to keep building for the platform they love. I wouldn’t blame anyone for accepting something like that. Plus, if Google makes this a culture thing, who’s to say what else may change because of it?

If this really becomes a thing, it will be interesting to see what Apple does as a response.


Sparrow Press Release

We’re excited to announce that Sparrow has been acquired by Google!

We care a lot about how people communicate, and we did our best to provide you with the most intuitive and pleasurable mailing experience.

Now we’re joining the Gmail team to accomplish a bigger vision — one that we think we can better achieve with Google.

We’d like to extend a special thanks to all of our users who have supported us, advised us, given us priceless feedback and allowed us to build a better mail application. While we’ll be working on new things at Google, we will continue to make Sparrow available and provide support for our users.

We had an amazing ride and can’t thank you enough.

Full speed ahead!

Dom Leca CEO Sparrow

We also want to thank our advisors and investors — Loren Brichter, Dave Morin, John Maeda, Xavier Niel, Jérémie Berrebi — as well as our friends and family: Simon Istolainen, Jérémie Kanza, Sacha Cayre, Cedric Gepner, Laurent Merlinot, Didier Kuhn, Tariq Krim, Christophe Baillon, Laurent Cerveau, Christophe Giaume, Sebastien Maury, Manuel Colom, Bertrand Guiheneuf and all of you who have helped us along the way.

Why I Love Gweek

I am a podcast listener. I listen to them on my commute, when I exercise, when I walk from place to place and when I craft. Essentially I try to find reasons to do things that allow me to listen to podcasts. Now because I actually listen to podcasts I have a hard time listening to them while I program or write. This makes me sad sometimes, but I survive.

Because I love podcasts so much, and the time available to listen to them is so limited, I keep the shows I follow to a minimum. This means that if I add a show I typically need to drop a show.

There are problems with the medium of podcasting, but there are two that stand out for me. The first is production value. The second is time.

In the case of the production value, some podcasters simply don’t have the money to invest in a proper setup. This can mean that the show is either hard to listen to, or in some cases almost impossible. Post production work can only fix so many issues with the initial recording.

In the case of time, some podcast producers seem to think that time doesn’t matter. If they have two hours to interview someone, then the completed show gets released at two hours. Sometimes this works, but often it doesn’t. I’ve found that some podcast producers are afraid of killing their babies; in podcasting, the baby is often the random rambling of an overly long conversation. When this happens enough times I end up listening to the next few shows at double speed. If that still doesn’t remedy the tedium, I just stop listening. Sometimes I come back, sometimes I don’t. I can be fickle, but I’m also fair.

But enough about my love and issues with podcasts. Let’s talk about Gweek.

This is my favorite podcast.

Let me say that again.

This is my favorite podcast.

I’ve been a big Boing Boing fan for years. I have recently found a way to make Boing Boing a part of my weekly routine again, after a long period of time where I just caught it as I could, but I’ll discuss this more in another post. The reason I mention Boing Boing is because Gweek is hosted by Mark Frauenfelder, the co-founder and an editor of Boing Boing (along with being the editor of the awesome Make Magazine, which I was just given as a Christmas gift by my parents). It is usually co-hosted by the oft-late (because of Skyrim) Rob Beschizza, Managing Editor of Boing Boing, and Ruben Bolling, the mastermind behind Tom the Dancing Bug.

Before I became a regular listener, each time Gweek was mentioned on Boing Boing I told myself I should at least check out an episode. Finally I did. It was episode 26. The reason I decided on this one was that the show notes mentioned Mark’s addiction to Dungeon Raid. I suffer from the same affliction, so I had to hear how he avoided the intervention that I would typically require.

The show begins, “Gweek is where the editors and friends of Boing Boing talk about comic books, science fiction and fantasy, video games, board games, tools, gadgets, apps and other neat stuff.” Everything that I like!

When I had finished listening, it was clear that I had found a show that I would need to make room for in my playlists.

Joel Johnson was a guest/co-host of the show. I’ve been an admirer of Mark, Rob and Joel for some time. Joel did not disappoint, and the conversation was filled with humor and insight. It was great. I was especially pleased by the discussion of Minecraft, a game I recently started playing and enjoy; the Skylanders game, which I had heard of but knew almost nothing about; the book The Postmortal, which I immediately added to my Amazon wish list; and, of course, the iOS game Dungeon Raid.

It was the next day that the next episode, with Seth Godin, came out. I got to listen to some of it as I went for a short run. Seth was a super insightful guest and a great conversationalist. I got to learn more about his work on The Domino Project. I am late to this game, but I found it a fascinating experiment. Mark also gave us a heads-up on Collin’s Lab. After watching one of his seven-minute videos I know more about electronics schematics than I ever have before. I also added Handmade Music Factory to my Amazon wish list. This stuff is so cool.

And I couldn’t wait another week to hear the next one, so I started to go back to the earlier episodes. They’re all this good.

I listened to the episode with John Hodgman, and while I disagreed with the opinions regarding the DC Reboot (I actually am excited about being able to get into comic story lines that make sense), I am now reading Atomic Robo because of the conversation. It is really good. John continued the string of great guests.

I listened to the episode with David-Michel Davies, the executive director of the Webbies. I signed up for the Netted email list after that episode and had a half dozen sites that I needed to check out.

And the episode with Michael Kupperman led to my education on the comic strip Cul de Sac, Orchid and The Last Man Alive.

But the last two episodes, the one with Jon Ronson, author of The Men Who Stare at Goats, and the other with Maggie Koerth-Baker, the Boing Boing Science Editor, have just been off the charts good. Jon and Maggie were endlessly entertaining and interesting. By the end of Jon’s episode I had already purchased his ebook, which was great, and started to wonder who around me was a psychopath. By the end of Maggie’s episode, I had a new web comic to start reading (Oglaf, which is absolutely fantastic and totally NSFW), I couldn’t wait for Maggie’s new book to be released, and I wished I had the time to play Skyrim.

I’m now back as far as episode 22, and I like this show so much that I plan on going through the rest of the catalog.

But let’s come full circle here. My two big complaints about podcasts are production value and time. Well, Mark Frauenfelder keeps the time perfect; I don’t think there’s been a show that’s gone much beyond an hour. I really appreciate that. The production value does leave something to be desired. Mark occasionally remarks that his USB mic needs to be unplugged and replugged, and there are occasional challenges with static and background noise. But the quality of the conversation is so excellent that these details become insignificant. I would listen to this show while they sat on a steam-powered train.

So if you’re looking for a podcast that is filled with geeky, neat and thoughtful conversation, along with great recommendations of terrific media, you must listen to Gweek. I can’t get enough.


Follow Up

I’m currently listening to Gweek 30. Guest Barry McWilliams is terrific. I love hearing about the artist’s process, and this show definitely gets into that.

Also, Dungeon Quest, Book One has already been added to my Amazon Wish List.

I’ll probably end up finding a way to finish up the episode tonight after my oldest goes to sleep.

Follow Up Two

This has really tickled me. I wrote a short note to Mark Frauenfelder about how much I like the show and he posted it on Boing Boing. I think that’s pretty cool.

Great Tip on Apple IDs

But what if I share my Apple ID?

If you use your Apple ID on multiple devices to buy apps (say, if you have one central account for your, your spouse’s, and your children’s purchases), it’s best not to convert it into an iCloud account. Although your installed iCloud account is, by default, the one you’ll use to purchase music and apps with, you can still manually sign in and out of the App and iTunes Stores on your device. Instead, you should create an entirely separate account to use with iCloud.

Awesome tip from Macworld’s Getting started with iCloud, Apple’s new sync service article.

Creating a Service Using Automator for nvALT Notes Version Control

Introduction

I’m going to get pretty nerdy here for a moment.

So I dove in and I’m now using nvALT, Elements and Git for my note taking needs. Aside from a few minor hiccups, which I’ll address in a later post, this is really working nicely.

The one thing that I needed when I added nvALT and Elements to my note taking workflow was the ability to easily continue version control with Git. Before taking on this new process I was using Git in my notes directory and I wasn’t about to lose that option now.

But given the ease with which nvALT allows me to create new text files, and with nvALT built to work best with lots of smaller files, I needed some way to get my version control under control.[1]

Here’s what my workflow was:

  1. Work, notes, work, notes.
  2. Work, notes, work, notes.
  3. Look at the time.
  4. Damn! How long has it been since I last did a commit?
  5. cd to my notes directory.
  6. Commit my notes to my repository.

It wasn’t exactly a precise system. Plus it had the added detriment of pulling me out of whatever I was doing in order to commit my notes.

What I did[2] was create a Bash script, then use Automator to create a Service for it, and finally apply a keyboard shortcut to that Service.

If you’re still with me, here’s how I did it:

The Reveal

First[3], create your Bash script.

I keep all my notes in one directory. This is the way that nvALT and, it seems, Elements like to work. With some light taxonomy (à la Merlin Mann and Mac Power Users) I have a reasonably good system in place. All my notes are in a Dropbox subdirectory called “notes.”

To get this to work I created a Bash script named git_notes.sh and put this in it:

#!/bin/bash

# Move to the notes directory (absolute path); bail out if it isn't there.
cd /Users/username/Dropbox/notes/ || exit 1

# Stage new and modified files, then commit with a standard message.
git add .
git commit -m 'nvALT Service Commit'

# Append a human-readable timestamp line to a per-day log file.
echo "* $(date) nvALT Commit" >> "/Users/username/Dropbox/notes/noteCommits$(date '+%Y%m%d').md"

Now, you’ll see that this is a little redundant and probably a bit silly, but here’s what it does:

  1. It makes sure that we’re in my notes directory, using the absolute pathname.
  2. It stages new and modified files in the repository (but not deletions).[4]
  3. It commits those changes with a standard message for my Git log.
  4. The last line is where it gets a little silly: it appends a line to another file in the same Dropbox subdirectory with a message that includes a human-readable date/time. Why am I doing this? I don’t know; maybe someday I’ll set up something to parse it and get some analytics on when I do most of my notes commits.
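For what it’s worth, an appended line ends up looking something like this (the date here is made up, and the exact format depends on what date prints on your machine), in a file named noteCommits20120815.md:

* Wed Aug 15 09:12:43 PDT 2012 nvALT Commit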

Now that we have our Bash script, the rest is trivial.

First, we open Automator and select “Service” as our document type under the “Choose a type for your document:” prompt.

Next, we’re going to change the “Service receives” setting from the default “text” to “no input”. Leave the “in any application” part as it is.

Penultimately, we click “Utilities” under the left-side “Library” dropdown and then drag the “Run Shell Script” from the middle column over to the right side.

Lastly, we enter in the location of the script in the workflow. It’s a good idea to use the absolute pathname here. In my case it was /Users/username/Dropbox/notes/git_notes.sh.

Once you save, you’ll have a Service that’s available from any of your application menus. Click on the Service and it will do a Git commit of all the changes in that notes directory, as well as update the faux log file we’ve created for the day.

But I don’t like to use the mouse/trackpad that much. So the final touch to this is to create a keyboard shortcut in your System Preferences. Go to Applications > System Preferences > Keyboard. Then choose “Keyboard Shortcuts” and select “Services” from the left side. Your new Service should be at the end of the Services listings. Just click the blank space at the far end of the window and you should get a text input field. You can use anything you want for your shortcut, but I chose control-option-command-shift-s for mine to avoid any chance of a keyboard conflict.

Conclusion

So there you have it. A quick and dirty way to make sure that the notes you’re producing will be version controlled through Git. As a final remark, I’ll say that having version control has already paid off for me.

I use multiple computers, and I made the huge mistake of accidentally deleting a bunch of files when a prompt popped up and I didn’t read exactly what it said. All of a sudden 20-some-odd notes of mine were gone. But version control to the rescue! I knew I had committed to my repo just before I deleted the files, and I was able to pull them back from the brink of deletion hell. It was even easier than it could have been because I had been using git add . instead of git add -A. All it meant was that I needed to unstage the deleted files. It was great, and a perfect example of why doing something like this makes sense.

Post Script

I should add that the use of a common/standard commit message is bad practice. The reason I do this, instead of throwing a prompt so that I can enter in a more detailed message, is that this is supposed to create a workflow that won’t interrupt what you’re in the middle of but give you the peace of mind that you’ve got things in a version controlled environment. This does not prevent you from going to your notes directory and doing a proper Git commit with a detailed message on what you’ve done since your last commit. In fact, at the end of this paragraph I’ll be committing properly to say that this draft is finished. And when I finish my review of the draft I’ll commit again, and message that it is ready for posting. After that, I’ll probably do a name change to the file (this is the taxonomy thing I mentioned before) and then do another proper commit.

One last point: You may have noticed that you can easily change the Bash script to point to any directory you want. Once you have the Service in place, you can edit your .sh file whenever you please and have a temporary keyboard shortcut for Git repo commits. I think that’s kinda cool.
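For example, a slightly more general version of the script (hypothetical; the variable name is mine) keeps the target directory in one variable, so repointing the Service is a one-line edit:

#!/bin/bash

# Hypothetical tweak: repoint the Service by editing this one line.
REPO_DIR="/Users/username/Dropbox/notes"

cd "$REPO_DIR" || exit 1
git add .
git commit -m 'nvALT Service Commit'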


  1. For example, I have already committed this post three times.  ↩

  2. My solution was to produce a quick and dirty Bash script and then have it run through launchctl on a regular basis.  ↩

  3. This tutorial assumes you have a local Git repo in place. If you don’t have one and want to learn how, you can check out my post on the subject of Git.  ↩

  4. This is a key point because what I am not doing is staging any files for deletion, only modification. This means I won’t have to worry about any files going away without my knowing.  ↩

I originally posted this tutorial on my Tumblr: The Face of Geoff