
Installing Openshift Origin

1. Prepare your Fedora 19 server.

$ sudo bash
# yum update -y

# yum install -y puppet facter tar bind wget unzip

# puppet module install puppetlabs/stdlib
# puppet module install puppetlabs/ntp

# echo 'openshift.example.com' > /etc/hostname
# echo -e '127.0.1.1\topenshift.example.com openshift' >> /etc/hosts

# /usr/sbin/dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom -K/var/named example.com
# cat /var/named/Kexample.com.*.key | awk '{print $8}'
CNk+wjszKi9da9nL/1gkMY7H+GuUng==

# reboot

Make sure to change “example.com” to the domain you wish to use for your Openshift “cloud” domain, and keep the hostname consistent with the node_fqdn you set in step 3.

2. Download and install the puppet modules for Openshift.

In this instance I’ve fixed up an old version of the Openshift puppet module to work with Fedora 19. See the diff here.

$ sudo bash
# wget https://github.com/pfarmer/puppet-openshift_origin/archive/932c084203cb940c740e29ee7d5e6135516b9f51.zip
# unzip 932c084203cb940c740e29ee7d5e6135516b9f51.zip
# mv puppet-openshift_origin-932c084203cb940c740e29ee7d5e6135516b9f51 /etc/puppet/modules/openshift_origin

3. Set up the puppet manifest for the server, and run puppet.

# vi config.pp

class { 'openshift_origin' :
  #The DNS resolvable hostname of this host
  node_fqdn                  => 'openshift.example.com',

  #The domain under which applications should be created. Eg: <app>-<namespace>.example.com
  cloud_domain               => 'os.example.com',

  #Upstream DNS server.
  dns_servers                => ['8.8.8.8'],

  enable_network_services    => true,
  configure_firewall         => true,
  configure_ntp              => true,

  #Configure the required services
  configure_activemq         => true,
  configure_mongodb          => true,
  configure_named            => true,
  configure_avahi            => false,
  configure_broker           => true,
  configure_node             => true,

  #Enable development mode for more verbose logs
  development_mode           => true,

  #Update the nameserver on this host to point at Bind server
  update_network_dns_servers => true,

  #Use the nsupdate broker plugin to register applications
  broker_dns_plugin          => 'nsupdate',

  #If installing from a local build, specify the path for Origin RPMs
  #install_repo               => 'file:///root/origin-rpms',

  #If using BIND, let the broker know what TSIG key to use
  named_tsig_priv_key         => 'CNk+wjszKi9da9nL/1gkMY7H+GuUng==',

  #If using an external ethernet device other than eth0
  #eth_device                 => '<ethernet device name, eg: enp0s5>',

  #If using with GDM, or have users with UID 500 or greater, add to this list
  # os_unmanaged_users         => ['gdm'],

  #If using the stable version instead of the nightly 
  install_repo               => 'https://mirror.openshift.com/pub/openshift-origin/release/2/fedora-19/packages/x86_64/',
  dependencies_repo          => 'https://mirror.openshift.com/pub/openshift-origin/release/2/fedora-19/dependencies/x86_64/',
}

# puppet apply --verbose config.pp
# reboot
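
Once the host is back up, a quick smoke test is to hit the broker’s REST API (assuming the broker serves the standard Origin v2 entry point at /broker/rest/api; adjust if your install differs):

# curl -k https://localhost/broker/rest/api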

4. Set up a user. You shouldn’t really use the admin user to deploy your apps, so create yourself a new user.

$ oo-register-user -l admin -p admin --username pfarmer --userpass myPassW0rd

5. Change the admin password. See Changing the admin password in Openshift Origin, below.

Changing the admin password in Openshift Origin

I’ve just started playing with Openshift Origin, but found there were no instructions for changing the admin password. After scratching around on the Internet for a while I came across some instructions; these got me most of the way there, but the database structure outlined in that post was out of date, so the password didn’t actually change.

Here is a corrected version; there are a number of steps to follow:

1. Get the salt from the broker.conf file.

$ grep 'AUTH_SALT' /etc/openshift/broker.conf
AUTH_SALT=the_salt

2. Generate the new hash for your password, using the value of “AUTH_SALT”.

$ irb 
irb(main):001:0> require 'digest/md5'
=> true
irb(main):002:0> Digest::MD5.hexdigest(Digest::MD5.hexdigest("YOURPASSWORD") + "the_salt")
=> "8662bb04dd08f370d8e3022a265ad57a"

3. Update the password in the mongo database.

$ mongo -u openshift -p mooo openshift_broker_dev --eval \
'db.auth_user.update({"_id":"admin"}, {"_id":"admin", "user":"admin", "password_hash":"8662bb04dd08f370d8e3022a265ad57a"}, true)'
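
You can verify the document was updated (printjson is built into the mongo shell; credentials as above):

$ mongo -u openshift -p mooo openshift_broker_dev --eval \
'db.auth_user.find({"_id":"admin"}).forEach(printjson)'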

More on Openshift to follow.

Starting a new python project

When I start a new python project (no matter how small) I use virtualenv (actually virtualenvwrapper) to ensure a clean environment, git for version control, and generally Sublime Text 2 as my editor.

1. Install virtualenv and virtualenvwrapper.

$ sudo apt-get install python-pip
$ sudo pip install virtualenvwrapper

virtualenvwrapper adds a bunch of functionality to virtualenv. The following two commands are the most useful, and both are covered in this guide.

$ mkproject
$ workon

1a. Setting up virtualenvwrapper.

virtualenvwrapper requires two environment variables to be set up.

$ export WORKON_HOME="/home/pfarmer/py"
$ export PROJECT_HOME="/home/pfarmer/projects"
$ source /usr/local/bin/virtualenvwrapper.sh

WORKON_HOME is the directory in which the virtual environments will be created; PROJECT_HOME is the directory in which your project folders will be created. Sourcing virtualenvwrapper.sh is what gives you the mkproject and workon commands (pip installed it to /usr/local/bin here, though the path may differ on your system). It is advisable to put all three lines in your profile.

2. mkproject.

mkproject creates a virtual environment and a project folder, then moves into the project folder and activates the virtual environment.

$ mkproject testproject1
New python executable in testproject1/bin/python
Installing setuptools............done.
Installing pip...............done.
virtualenvwrapper.user_scripts creating /home/pfarmer/py/testproject1/bin/predeactivate
virtualenvwrapper.user_scripts creating /home/pfarmer/py/testproject1/bin/postdeactivate
virtualenvwrapper.user_scripts creating /home/pfarmer/py/testproject1/bin/preactivate
virtualenvwrapper.user_scripts creating /home/pfarmer/py/testproject1/bin/postactivate
virtualenvwrapper.user_scripts creating /home/pfarmer/py/testproject1/bin/get_env_details
Creating /home/pfarmer/projects/testproject1
Setting project for testproject1 to /home/pfarmer/projects/testproject1
(testproject1)$

Notice that pip and setuptools are installed by default, and that your command prompt is modified to indicate the project you are currently working on.

3. workon.

workon activates an already created project and moves to the project folder.

$ workon testproject1
(testproject1)$

3a. deactivate.

When you have finished working on a project, you can use the “deactivate” command to turn off the virtual environment.

(testproject1)$ deactivate
$

4. Installing packages into the virtual environment with pip.

So, we now have a project called testproject1; let’s install some python packages into it. In this case we’re going to install the web microframework Flask.

(testproject1)$ pip install flask
Downloading/unpacking flask
  Downloading Flask-0.8.tar.gz (494Kb): 494Kb downloaded
  Running setup.py egg_info for package flask

SNIPPED LOTS OF OUTPUT!!

Successfully installed flask Werkzeug Jinja2
Cleaning up...

Notice how pip actually installed three packages: flask, plus Werkzeug and Jinja2, which are dependencies of flask. Alternatively, you can add packages to a “requirements” file.

(testproject1)$ echo flask >> requirements.txt
(testproject1)$ pip install -r requirements.txt
Downloading/unpacking flask
  Downloading Flask-0.8.tar.gz (494Kb): 494Kb downloaded
  Running setup.py egg_info for package flask

SNIPPED LOTS OF OUTPUT!!

Successfully installed flask Werkzeug Jinja2
Cleaning up...

Using a requirements file makes your project much more portable: you can zip up your project folder, take it anywhere, and once it is unzipped in the new location run pip install -r requirements.txt to get every package you need.
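
If you want repeatable installs, pip freeze will pin the exact versions currently in the virtual environment (a sketch; the version numbers shown are illustrative):

(testproject1)$ pip freeze > requirements.txt
(testproject1)$ cat requirements.txt
Flask==0.8
Jinja2==2.6
Werkzeug==0.8.3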

5. Create a small application.

#!/usr/bin/env python
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()

Save this file as “app.py”, make it executable, then run “./app.py” and browse to http://localhost:5000/ to see your app in action.
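
A run might look something like this (Flask’s development server listens on port 5000 by default):

(testproject1)$ chmod +x app.py
(testproject1)$ ./app.py
 * Running on http://127.0.0.1:5000/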

6. Using git.

I can’t stress the importance of source control enough, even for small projects; the ability to roll back a change you have made to source code is invaluable! At the very least you should zip up your project before any change so you can revert, but I think you’ll find that using git to manage your source code is much better.

7. Create the git repository.

(testproject1)$ cdproject
(testproject1)$ git init .
(testproject1)$ git add -A
(testproject1)$ git commit -m "initial commit"

I keep my projects folder inside a Dropbox folder, this has several benefits for me:

  • All my repositories are automatically backed up.
  • My code stays in sync between several computers.

But you might find that a public repository on github is what you need, in which case do the following:

(testproject1)$ git remote add origin git@github.com:USERNAME/REPONAME.git
(testproject1)$ git push -u origin master

8. My git workflow.

As most of my projects have just me working on them, I use a fairly simple workflow. Generally my repositories consist of two branches:

$ git branch
* dev
  master

master is the standard branch, and this is where my “stable” code lives. dev is where I generally work; once I’m happy with my code in dev I merge into master, which is generally a fast-forward merge.

If I’m making major changes, then I will use a feature branch which is branched off dev. My commits might typically look like this:

master     a -------- d ------------------- h -------------- m
            \        /                     /                /
dev          b ---- c ---- e ---- f ---- g ---- i -------- l
                                                 \        /
feature x                                         j ---- k

In this example a is the initial commit; b and c are commits on the dev branch; d is the point at which b and c merge back into master; e, f and g are commits on dev; h is the merge into master; i is a commit on dev; j and k are commits on branch feature x based off i; l is the merge of feature x into dev; and m is the merge of l into master.

But my commit history could look like this:

master     a --------------------------- g ------------ l
            \           \               /             /
dev          b ---- cx   d ---- e ---- f ---- j ---- k
                                        \
feature x                                h ---- ix

In this example, b and cx are commits made to dev, but at commit cx I abandoned the changes; commit d is a fresh dev branch from commit a; h and ix are commits on the “feature x” branch, created from commit f on dev and abandoned at commit ix.

As you can see, using git like this gives me huge amounts of flexibility to write experimental code without worrying about being able to get back to a known good point in my code.

9. Commit early, commit often.

Using this workflow you can commit often, making small incremental changes to your code. At the point at which you want to merge code back into your dev or master branches, you can squash any number of commits, combining the changes and log messages, typically grouping squashed commits together based on the change they represent.
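
As a sketch, squashing the last few dev commits before merging them into master might look like this (the commit count is just an example):

$ git checkout dev
$ git rebase -i HEAD~4    # mark the later commits as "squash" to combine them
$ git checkout master
$ git merge dev           # fast-forwards if master hasn't moved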

Unit testing PATCH requests with Django’s test client

Whilst developing with Django and Tastypie I discovered that the Django test client doesn’t contain a method for testing PATCH requests. I have submitted patches to the Django project to correct this, but until those patches get accepted I need a way to test PATCH, so I extend the Client class in my tests.py file:

from django.test.client import Client, FakePayload, MULTIPART_CONTENT
from urlparse import urlparse


class Client2(Client):
    """
    Construct a second test client which can do PATCH requests.
    """
    def patch(self, path, data={}, content_type=MULTIPART_CONTENT, **extra):
        "Construct a PATCH request."
        patch_data = self._encode_data(data, content_type)

        parsed = urlparse(path)
        r = {
            'CONTENT_LENGTH': len(patch_data),
            'CONTENT_TYPE':   content_type,
            'PATH_INFO':      self._get_path(parsed),
            'QUERY_STRING':   parsed[4],
            'REQUEST_METHOD': 'PATCH',
            'wsgi.input':     FakePayload(patch_data),
        }
        r.update(extra)
        return self.request(**r)

I can then do the following:

from django.test import TestCase


class testpatch(TestCase):
    def setUp(self):
        self.client = Client2()

    def test_patch(self):
        # APIKEY is defined elsewhere in the test module
        response = self.client.patch(
            path='/api/v1/enterprisepod/1/',
            data='{"gateway": "10.10.0.1/24"}',
            content_type='application/json',
            **{'HTTP_AUTHORIZATION': APIKEY}
        )
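
You can then assert on the response as usual, by adding something like this inside test_patch (the exact status code depends on your API; Tastypie, for instance, normally answers a successful PATCH with 202 Accepted):

        self.assertEqual(response.status_code, 202)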

Using facebook comments with Jekyll

I wanted to implement comments on my blog, but as I don’t use blogging software like wordpress I needed something which used an external comments system. I’ve used disqus in the past, but don’t particularly like the system, so having seen some major blogs use facebook’s “Social Plugins” comments system I thought I’d give it a go. Here is how I implemented it.

1. Sign-up as a facebook developer:

Go to http://developer.facebook.com, sign in and create a new app for your website.

2. Include the required javascript:

Depending on the layout of your site you need to include the facebook javascript sdk; you could conditionally include it (see step 3). Make sure to set APPID to the correct number.

<body>
<div id="fb-root"></div>
<script>(function(d, s, id) {
  var js, fjs = d.getElementsByTagName(s)[0];
  if (d.getElementById(id)) return;
  js = d.createElement(s); js.id = id;
  js.src = "//connect.facebook.net/en_GB/all.js#xfbml=1&appId=APPID";
  fjs.parentNode.insertBefore(js, fjs);
}(document, "script", "facebook-jssdk"));</script>

3. Include the comments div in your post template:

Conditionally include the comments div in your post template, by conditionally including it, you can decide on a post by post basis whether you want to have comments.

{{ "{% if page.comments " }}%} 
<hr/>
<h2>Comments</h2>
<div class="fb-comments" data-href="{{ "{{ site.url " }}}}{{ "{{ page.url " }}}}" data-num-posts="4" data-width="706"></div>
{{ "{% endif"}}%}

4. On each entry where you want comments, add some YAML Front Matter:

comments: yes
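
In context, a post’s front matter might look something like this (the layout and title values are just illustrative):

---
layout: post
title: My new post
comments: yes
---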

For more information see: http://developers.facebook.com/docs/reference/plugins/comments/.

django logging

I frequently write small APIs using a python web framework called django, but when making test calls to the API with curl (or even urllib2), getting the massive DEBUG=True exception page back can be a pain. By using the logging framework you can set DEBUG to False and watch for your exceptions in a log instead. Here’s how:

1. Set DEBUG to False in your settings.py file:

DEBUG=False

2. Update the logging section in settings.py:

APPHOME="/home/pfarmer/projectname"

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
    },
    'handlers': {
        'default': {
            'level':'DEBUG',
            'class':'logging.handlers.RotatingFileHandler',
            'filename': '%s/logs/default.log' % APPHOME,
            'maxBytes': 1024*1024*5, # 5 MB
            'backupCount': 5,
            'formatter':'standard',
        },
        'request_handler': {
            'level':'DEBUG',
            'class':'logging.handlers.RotatingFileHandler',
            'filename': '%s/logs/django_request.log' % APPHOME,
            'maxBytes': 1024*1024*5, # 5 MB
            'backupCount': 5,
            'formatter':'standard',
        },
    },
    'loggers': {
        '': {
            'handlers': ['default'],
            'level': 'DEBUG',
            'propagate': True
        },
        'django.request': { # Keep request errors in their own log, out of the main logger
            'handlers': ['request_handler'],
            'level': 'DEBUG',
            'propagate': False
        },
    }
}
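
Note that RotatingFileHandler won’t create the logs directory for you, so make sure it exists before starting django:

$ mkdir -p /home/pfarmer/projectname/logs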

Now when some code causes an exception, the traceback should appear in django_request.log.

3. Make sure you have a 500.html template in place.

4. The added bonus….

The added bonus is that you can use the standard python logging module to log during view execution. Add the following to each of your views.py:

import logging
log = logging.getLogger(__name__)

5. Then use code like this to log messages:

log.debug("API login attempt for %s", request.GET['user'])

This will appear in the log like this:

2011-11-16 23:02:32,339 [DEBUG] api.views: API login attempt for pfarmer