Fixing Homebrew after upgrading to Mac OS X El Capitan

Problem:

After upgrading to El Capitan, Homebrew commands stopped working with this error:

TASK: [Clean up and update] *************************************************** 
failed: [127.0.0.1] => {"changed": true, "cmd": "brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup", "delta": "0:00:00.466698", "end": "2015-10-16 10:49:57.562138", "rc": 1, "start": "2015-10-16 10:49:57.095440", "warnings": []}
stderr: Error: The /usr/local directory is not writable.
Even if this directory was writable when you installed Homebrew, other
software may change permissions on this directory. Some versions of the
"InstantOn" component of Airfoil are known to do this.

You should probably change the ownership and permissions of /usr/local
back to your user account.
  sudo chown -R $(whoami):admin /usr/local

FATAL: all hosts have already failed -- aborting

Solution:

To fix this, just follow the instructions in the error message and change the permissions on /usr/local. Since I'm using Ansible, I added a simple task to check if the directory is owned by root, and if it is, change the ownership:

  tasks:
    - stat: path=/usr/local
      register: usrlocal
    - name: If root owns /usr/local, change it to current user.
      shell: sudo chown -R $(whoami):admin /usr/local
      when: usrlocal.stat.pw_name == "root"

Sidenote: I've chosen to run the shell command with sudo rather than have Ansible escalate to root itself, so I don't have to remember to run Ansible with -K.

Cleaning up Ansible's .retry files from your home directory

Problem:

By default, whenever an Ansible run fails it will put a .retry file into your home directory.

Personally I never found this useful and the location was less than ideal.

Solution:

You can either turn the feature off altogether with the Ansible config setting retry_files_enabled, or change the directory with retry_files_save_path.
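For example, in your ansible.cfg (the directory below is just an illustrative choice):

[defaults]
# Stop Ansible writing *.retry files entirely...
retry_files_enabled = False
# ...or keep them, but move them somewhere out of the way:
# retry_files_save_path = ~/.ansible/retry-files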

When I was cleaning up my own settings I noticed these options were not actually documented when the feature was first introduced, so I added the documentation myself with this Ansible pull request on GitHub.

Ensure CodeShip works on the named feature branch, not a detached HEAD, so Composer aliases work

Context

If you use Composer branch aliases to make sure your dependencies resolve correctly while developing a feature branch, CodeShip checking out a detached HEAD means the branch can no longer be seen as expected during the composer install process.
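For reference, a Composer branch alias is usually declared in the package's composer.json along these lines (the branch and version numbers here are made up):

{
    "extra": {
        "branch-alias": {
            "dev-my-feature": "1.2.x-dev"
        }
    }
}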

Solution

CodeShip provides some useful default environment variables to help you set up your build process, and one of those is the branch name in use, so as part of your build tasks, before Composer runs, just add:

git checkout $CI_BRANCH

Then Composer can see which feature branch it is on again and the dependency resolution will work.

Prevent Ansible running an out of date version of a playbook

Context

You've got a collection of Ansible playbooks in a git repo shared between a team and want to make sure that someone doesn't accidentally run an out-of-date version of a playbook.

Solution

Create a role, "ensure-safe-to-run" or some such, and add it as the first role in any playbook's roles list.

- name: ensure local infrastructure repo is up to date
  local_action: shell git remote show origin
  register: command_result
  failed_when: "'local out of date' in command_result.stdout"
  sudo: no

This runs a local shell action, captures the result, and checks it for the text "local out of date". If it sees that in the output, it will fail and the rest of the playbook won't run, leaving you to do a git pull manually.

It doesn't check your own local changes, so you can add new playbooks and modify existing ones freely.
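Wiring it into a playbook then looks something like this (any role name other than ensure-safe-to-run is illustrative):

- hosts: all
  roles:
    - ensure-safe-to-run   # fails the run early if the local repo is behind origin
    - webservers           # illustrative: the roles that do the real work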

Easily Issue "vagrant ssh -c" Commands With a "vudo" Alias

Context

When you want to issue a command inside a running Vagrant instance without logging into a full-blown session, you can issue a single command with "vagrant ssh -c <some command>". However, it's a bit of a mouthful, it requires you to quote the command, and you'll be in the wrong directory when the command runs.

Solution

Add a "vudo" function to your terminal shell that wraps "vagrant ssh -c" and switches into the correct directory when it does so.

I'm using Oh My Zsh, so I updated my ~/.zshrc with a function:

function vudo() { eval "vagrant ssh -c \"cd /vagrant && $@\"" }

Now, when I want to run something inside the VM, such as a test suite, I just type "vudo phpunit" and it fires off from the /vagrant directory. This also works for long-running processes, so if I want to connect to MongoDB running in the Vagrant box I can just type "vudo mongo" and I'm in.
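In practice that looks like:

# run the test suite from /vagrant inside the VM
vudo phpunit

# open an interactive MongoDB shell inside the VM
vudo mongo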

Changing Vagrant's Default SSH Port To Prevent Collision When Resuming a Suspended Instance

Problem

When you are running multiple Vagrant instances, you often find you are unable to resume a suspended VM because its port forwarding clashes with another running Vagrant box.

Solution (Updated 04/09/2015)

Assign each Vagrant instance a unique port in the Vagrantfile.

config.vm.network :private_network, ip: '192.168.115.12'
config.vm.network :forwarded_port, guest: 22, host: 12914, id: 'ssh'

Setting the id: to 'ssh' overrides the default mapping. In previous versions of Vagrant you had to explicitly disable the default forwarded port first, otherwise both mappings would be created and you'd still get a clash, but in current versions the cleaner single line above works as expected. The old workaround looked like this:

config.vm.network :private_network, ip: "192.168.115.12"
config.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", disabled: true
config.vm.network :forwarded_port, guest: 22, host: 64673, auto_correct: true

There is discussion of this at https://github.com/mitchellh/vagrant/issues/3232 where I got the original solution.

Fixing Silex Doctrine 2 ODM Notice: Undefined index: embedOne Error

Problem

When using the Doctrine MongoDB Object Document Mapper (ODM), trying to persist a model throws the error:

Notice: Undefined index: embedOne in /vagrant/composer/doctrine/mongodb-odm/lib/Doctrine/ODM/MongoDB/Persisters/PersistenceBuilder.php on line 341

Solution

Thanks to Douglas Reith on the doctrine-user mailing list, it became obvious that the embedOne data within my YAML mapping file was indented two spaces too far; pulling it back a level so it sat at the same indentation as fields fixed the problem.
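To illustrate, the embedOne block has to sit at the same level as fields in the mapping, roughly like this sketch (the document and field names are made up, not from the original project):

# correct: embedOne aligned with fields, not nested underneath them
Acme\Document\Order:
  fields:
    id:
      id: true
    total:
      type: float
  embedOne:
    deliveryAddress:
      targetDocument: Acme\Document\Address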