Fixing Homebrew after upgrading to Mac OS X El Capitan


After upgrading to El Capitan, Homebrew commands stopped working with this error:

TASK: [Clean up and update] ***************************************************
failed: [] => {"changed": true, "cmd": "brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup", "delta": "0:00:00.466698", "end": "2015-10-16 10:49:57.562138", "rc": 1, "start": "2015-10-16 10:49:57.095440", "warnings": []}
stderr: Error: The /usr/local directory is not writable.
Even if this directory was writable when you installed Homebrew, other
software may change permissions on this directory. Some versions of the
"InstantOn" component of Airfoil are known to do this.

You should probably change the ownership and permissions of /usr/local
back to your user account.
  sudo chown -R $(whoami):admin /usr/local

FATAL: all hosts have already failed -- aborting


To fix this, just follow the instructions in the error message and change the ownership of /usr/local. Since I'm using Ansible, I added a simple task that checks whether the directory is owned by root and, if it is, changes ownership:

    - stat: path=/usr/local
      register: usrlocal
    - name: If root owns /usr/local, change it to current user.
      shell: sudo chown -R $(whoami):admin /usr/local
      when: usrlocal.stat.pw_name == "root"

Sidenote: I've chosen to run the shell command with sudo rather than have Ansible invoke the root user, so I don't have to remember to run ansible-playbook with -K.
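For reference, a sketch of the more idiomatic alternative would be to let Ansible handle the escalation itself, something like the task below (this is the route that requires -K when sudo needs a password; ansible_user_id comes from fact gathering):

    # Sketch only: uses Ansible's own privilege escalation instead of
    # calling sudo inside a shell command.
    - name: If root owns /usr/local, change it to current user.
      file: path=/usr/local state=directory owner={{ ansible_user_id }} group=admin recurse=yes
      sudo: yes
      when: usrlocal.stat.pw_name == "root"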

Cleaning up Ansible's .retry files from your home directory


By default, whenever a playbook fails, Ansible writes a .retry file into your home directory.

Personally, I never found these files useful, and the location was less than ideal.


You can either turn the feature off altogether with the Ansible config setting retry_files_enabled, or change the directory with retry_files_save_path.
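For example, a minimal ansible.cfg (or ~/.ansible.cfg) using these settings might look like this; the save path shown is just an illustration:

    [defaults]
    # Stop writing .retry files entirely:
    retry_files_enabled = False
    # Or keep them, but somewhere out of the way:
    # retry_files_save_path = ~/.ansible/retry-files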

When I was cleaning up my own settings I noticed these options were not actually documented when the feature was first introduced, so I added the documentation myself with this Ansible pull request on GitHub.

Changing Vagrant's Default SSH Port To Prevent Collisions When Resuming a Suspended Instance


When you are running multiple Vagrant instances, you will often find you are unable to resume a suspended VM instance because its SSH port forwarding clashes with another running Vagrant box.

Solution (Updated 04/09/2015)

Assign each Vagrant instance a unique port in the Vagrantfile:

    config.vm.network :private_network, ip: ''
    config.vm.network :forwarded_port, guest: 22, host: 12914, id: 'ssh'

Setting the id: to 'ssh' overrides the default mapping. In previous versions you had to explicitly disable the default port first, or else both would be created and you would still clash, but in current versions the cleaner single line above works as expected. The old workaround looked like this:

    config.vm.network :private_network, ip: ""
    config.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", disabled: true
    config.vm.network :forwarded_port, guest: 22, host: 64673, auto_correct: true
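For context, a minimal sketch of a two-machine Vagrantfile with a unique SSH port per box (the box name and machine names are just illustrations):

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      # Each machine gets its own host-side SSH port, so a suspended box
      # can be resumed while the other is still running.
      config.vm.define "web" do |web|
        web.vm.network :forwarded_port, guest: 22, host: 12914, id: "ssh"
      end
      config.vm.define "db" do |db|
        db.vm.network :forwarded_port, guest: 22, host: 12915, id: "ssh"
      end
    end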

There is discussion of this where I got the original solution.


Fixing Silex Doctrine 2 ODM Notice: Undefined index: embedOne Error


Using the Doctrine MongoDB Object Document Mapper (ODM), trying to persist a model throws the error:

Notice: Undefined index: embedOne in /vagrant/composer/doctrine/mongodb-odm/lib/Doctrine/ODM/MongoDB/Persisters/PersistenceBuilder.php on line 341


Thanks to Douglas Reith on the doctrine-user list, it became obvious that the embedOne data within my YAML file was indented two spaces too far; pulling it back one level, so it sat at the same indentation as fields, fixed the problem.
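To illustrate, here is roughly what the corrected mapping looks like; the document and field names are made up:

    # Hypothetical mapping: the key point is that embedOne sits at the
    # same indentation level as fields. Mine was one level deeper
    # (inside fields), which caused the undefined index notice.
    MyApp\Document\Order:
      fields:
        total:
          type: float
      embedOne:
        address:
          targetDocument: MyApp\Document\Address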