Category Archives: Linux

Docker OS X / Homebrew quick start

This post is mostly for my own reference. I’m still in the very early stages of understanding and using docker.


# The docker cli client app
brew install docker

# docker-machine: tool for creating and managing a docker machine (the VM plus docker layer that holds the containers (the whale))
brew install docker-machine

# Create a local docker machine using virtualbox as the VM, call it 'dev'
# This seems to be where it gets clever: the same command can create docker machines locally or way off in the cloud
docker-machine create --driver virtualbox dev

# Nothing worked properly until I did this: it exports DOCKER_HOST and related environment variables so the docker client knows which machine to talk to
eval "$(docker-machine env dev)"

# Pull down the whalesay image
docker pull docker/whalesay

# Run the cowsay command in a container from the whalesay image, with the argument 'boo'
docker run docker/whalesay cowsay boo

Of course, this gets more interesting when you’re running redis, nginx, etc. I'm not sure about Dockerfiles yet (probably fairly simple), and really not sure about deployment.
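
For example, here's a minimal sketch of running redis in a container (the container name 'my-redis' and the port mapping are just illustrative choices):

# Pull the official redis image and run it detached, exposing redis on the host
docker pull redis
docker run -d --name my-redis -p 6379:6379 redis

# The containers live inside the VM, so connect to the machine's IP, not localhost
docker-machine ip dev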

Heartbleed – why you should be really worried

If you’re not already familiar with the Heartbleed bug, have a look at this website: http://heartbleed.com/. I don’t want to talk about what it is exactly, as lots of people have done that; just why you should be more worried than you are.

If you were (or still are?!) running an affected version of openssl:

  1. You almost certainly have NO way of telling whether someone attacked your server.
  2. You may have been attacked through any service using SSL.
  3. All passwords and usernames, root or otherwise, may have been logged by a remote attacker. These can be used unless changed.
  4. All keys, public and private, SSH, SSL or otherwise, may have been logged.
  5. If your server was attacked, you should consider ALL the contents of the server’s hard disk revealed to the attacker.
  6. Even the memory of the server may have been compromised: this could include things like credit card details and other stuff you wouldn’t dream of storing unencrypted anywhere.
  7. Your server may have been compromised ‘quietly’; you probably have no way of knowing this unless you run an IDS or something similar.

The worst bit is, you almost certainly won’t know if this stuff has happened. I’m not a fear monger, but this really is very, very bad.

If you’re just a person who used a website or service that used/uses an affected version of openssl:

Anything you did with that service is essentially in the hands of an attacker. For example, your usernames and passwords, your credit card information, your emails, your uploaded dropbox photos and work documents.

All in all, if you’ve used the Internet in the last two years, there’s a chance that your data has been stolen and you won’t know until it’s too late.

The only small amount of good news is that hopefully nobody has been exploiting this vulnerability. Hope is never a good security measure.

My biggest concern with the whole thing is not that my server or Gmail might get attacked in the future, but that it may have been attacked in the past without my knowing. There’s not much anyone can do about this. Personal data may well have been taken, and that’s something you can’t fix.

For now: update your passwords and keys, revoke your old SSL certificates, lock down your servers, secure your firewalls, set up an IDS. Nuke your server and start again? Hope nobody really did anything too nasty. This bug may prove much worse than first thought.
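
As a rough starting point on a Debian-style box, you can at least check whether the installed OpenSSL is in the affected range (1.0.1 through 1.0.1f; 1.0.1g is fixed). Distros often backport the fix without bumping the version string, so the package changelog is the more reliable check:

# Show the installed version; 1.0.1 to 1.0.1f are vulnerable unless the fix was backported
openssl version

# On Debian/Ubuntu, see whether the Heartbleed CVE appears in the package changelog
zgrep CVE-2014-0160 /usr/share/doc/openssl/changelog.Debian.gz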

PHP: Replacing short open tags with proper ones recursively in a big code base

We recently took on a horrible code base at work, with lots of short open tags in the code like this:

<? calculateVat(123..

As far as I know, this way of opening PHP code is deprecated and soon won’t be supported at all, so I thought I’d just use sed to fix it, but it wasn’t quite that simple.

Sed has no way of doing look-aheads in its regular expressions, meaning we can’t tell it not to turn <?php into <?<?php! So we have to use Perl (or something else that has ‘proper’ regexes):

# Convert <? (without a trailing space) to <?php (with a trailing space):

find . -name "*.php" -print0 | 
xargs -0 perl -pi -e 's/<\?(?!php|=|xml|mso| )/<\?php /g'

# Convert <? (with a trailing space) to <?php (retaining the trailing space):

find . -name "*.php" -print0 | xargs -0 perl -pi -e 's/<\? /<\?php /g'

Note: since we’re using find’s -print0 with xargs -0, spaces and funny characters in paths are actually handled safely; still, you could skip xargs entirely and use find’s exec command with the curly braces {}, as sketched below.
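
Something like this should do the same job with find’s -exec instead of xargs (an untested sketch using the same regex):

find . -name "*.php" -exec perl -pi -e 's/<\?(?!php|=|xml|mso| )/<\?php /g' {} +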

Anyway, this should fix up your entire codebase, but please CHECK the results afterwards. I only realised it was turning <?xml into <?php xml after checking.
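
For checking, GNU grep’s -P flag supports the same PCRE lookahead, so something like this should list any short tags left behind (the trailing-space exception is dropped so it also catches <? followed by a space):

grep -rPn --include='*.php' '<\?(?!php|=|xml|mso)' .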

Comments welcome 🙂

Recovering data from a WD Mybook Live 2TB / 3TB (or similar)

This article was originally written in 2013 and applies to a fairly old model of the WD Mybook Live. The procedure here may well not work for you, please just use it for ideas. Also, check the comments as a lot of other people have tips!

When the WD Mybook Live 3TB NAS was released, I went out and bought one and promptly put all my stuff on it. I have never kept anything *really* important on there as I didn’t have anything to back up all that data on to. Anyway, the NAS was destroyed in a thunderstorm one day but fortunately the hard disk still worked. Unfortunately the way WD formats these NAS hard disks is very strange indeed. Normal means of recovering data from them don’t work. Scouring google for tips on how to get your data back results in nothing useful.

I tried various hard disk enclosures: these have no chance, as pretty much all of them only support disks up to 2TB. I tried various ext2/ext3 Windows drivers: no good. I tried Linux machines with custom-built kernels: also no good.

There are basically three problems:

  1. The hard disk is big, and USB enclosures hate that.
  2. The hard disk uses a (newer) GPT partitioning scheme, which older versions of Linux will struggle with.
  3. The hard disk’s ext4 partition (the one with all your data on it) is formatted with a 64KB block size. This is the biggest hurdle, as your PC running Linux will not be able to mount it directly!

To recover your data:

A rough understanding of Linux is useful. In short, you’re going to need to get the hard disk out of the NAS enclosure, stick it into a PC running a recent(ish) version of Linux, and mount the partition with the most excellent fuseext2 package. The trick to mounting the 64KB-block disk is to avoid the kernel’s normal mount, which can’t handle block sizes larger than the page size, and let FUSE do the work in userspace instead. You’ll also need somewhere to put the recovered files. Maybe another WD NAS? Maybe not 🙂

Step by step:

  1. I recommend getting an old PC (with SATA ports inside) and an old hard disk for installing Xubuntu on (no need for ‘heavy’ Ubuntu). Don’t plug in your WD hard disk yet; you don’t want to accidentally format it!
  2. Once you’ve installed xubuntu or whatever you’re using, turn off the machine and plug in the WD hard disk. Boot it back up again.
  3. Start a terminal and type:

    sudo apt-get install fuseext2 parted
    sudo parted -l

  4. The parted -l command will show your hard disks and partitions, labelled /dev/sd-something. You will see both the hard disk you installed Linux on and the WD hard disk. The WD one will have a label such as Model: ATA WDC WD30EZRS-11J (scsi). Have a look down its list of partitions for the big ext4 one, like this:

    4      4624MB  3001GB  2996GB  ext4         primary

    Make a note of the disk (/dev/sdb) displayed underneath the hard disk model, and the partition number (in my case number 4). The path to the partition for me is /dev/sdb4 (it may be different for you).

  5. Now you’re ready to mount the disk. To make life easier for you non-terminal types, I’ve provided instructions on mounting it in your home directory:

    sudo mkdir -p ~/WD
    sudo fuseext2 -o ro -o sync_read /dev/sdb4 ~/WD 
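
Once the partition is mounted, something like rsync is a sensible way to copy the data off (the destination path is just an example); unmount with fusermount when you’re done:

    rsync -av ~/WD/ /mnt/backup/
    sudo fusermount -u ~/WD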

You may hit various hurdles along the way; I’m not entirely sure whether older PCs can support really big hard disks. If you’re using an earlier Mybook World or similar, I believe those used XFS and software RAID partitions, which this blog post isn’t really about.

Remember, always back up anything you care about!

Please let me know if you found this useful, and link to it so it helps others stuck in the same situation!!

More info: Mounting filesystems > 4Kb block sizes on Linux

How to prevent saslauthd sucking up memory

For about a year I noticed that very infrequently my VPS would run out of memory. At first I thought it was probably just a WordPress plugin, but after a while I discovered it was actually saslauthd. This is a known bug (though not very well known) with saslauthd on Debian. Anyway, here’s the fix. I’m not totally sure of the implications, so if you run a busy mail server I’d recommend you look into it a bit more before doing it:

In the file /etc/default/saslauthd, I changed this line:

OPTIONS="-c -m /var/run/saslauthd"

to:

OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

I think this basically disables threading in favour of forked processes (or something like that), which is what was responsible for the memory leak. The new socket path also matters if you run Postfix chrooted, as Debian does by default: the chrooted smtpd can only reach a socket that lives under /var/spool/postfix.
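
After changing it, restart saslauthd and keep an eye on its memory use; the RSS column is resident memory in kilobytes:

/etc/init.d/saslauthd restart
ps -o pid,rss,args -C saslauthd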

So if you’re running out of memory on your server, maybe give this a try 🙂

[Thanks to Djamu: http://www.howtoforge.com/forums/archive/index.php/t-52750.html ]

TWiT Live via MediaTomb for your WDTV Live (and probably ps3, whatever..)

I recently upgraded from a WDTV media player to a WDTV Live. The WDTV is just a simple set-top box that lets you play video on your TV, much like an Xbox 360 and a whole bunch of other devices. The WDTV Live can also play media over the network, which opens up a whole host of cool stuff if you use the MediaTomb DLNA server software available for Linux (google TVersity if you want something similar for Windows).

Anyway, I managed to get my WDTV Live to play the live TWiT (http://live.twit.tv) video stream, meaning I now have a real internet TV station right up on my TV!

This is a bit rough and ready, but hopefully I’ll improve it over time. You’ll probably need to have a rough idea of what you’re doing.

Make sure transcoding is set to yes in the config:

<transcoding enabled="yes">

In mime type profile mappings in the config:

<transcode mimetype="video/x-flv" using="ffmpeg"/>

So anyway, you’ll want to add this profile to your config:

<profile name="ffmpeg" enabled="yes" type="external">
  <mimetype>video/mpeg</mimetype>
  <accept-url>yes</accept-url>
  <first-resource>yes</first-resource>
  <accept-ogg-theora>yes</accept-ogg-theora>
  <agent command="/usr/local/bin/ffmpeg-tr.sh" arguments="%in %out"/>
  <!-- <buffer size="14400000" chunk-size="512000" fill-size="1024"/> -->
  <buffer size="5242880" chunk-size="102400" fill-size="1048576"/>
  <hide-original-resource>yes</hide-original-resource>
</profile>
And create this shell script (at /usr/local/bin/ffmpeg-tr.sh, to match the agent command above):

#!/bin/sh
exec ffmpeg -i "$1" -sameq -f mpeg -me_method zero -aspect 16:9 - > "$2"
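
Don’t forget to make the script executable, or MediaTomb won’t be able to run it:

chmod +x /usr/local/bin/ffmpeg-tr.sh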
The final piece of the puzzle is to create a link to it in the MediaTomb database via the web interface:

Type: External Link (URL)
URL: http://bglive-a.bitgravity.com/twit/live/high
Protocol: http-get
Class: object.item.videoItem
Mimetype: video/x-flv
Notes:
  • You might want to change the mimetype to a made-up one, video/x-twitlive or something, so it doesn’t conflict with other types.
  • You’ll probably want to turn off any other profiles you won’t be needing.
  • Play with the buffer sizes; the commented-out one seemed to go a bit funny for me.

A special thanks to aTc from #mediatomb on irc.freenode.net – without whom this wouldn’t exist.

Ubuntu Lucid (10.04) on Dell Studio 1555

Ok, so I decided to natively install Ubuntu 10.04 on my Studio 1555. Fairly impressed: almost everything works out of the box, which is a bit annoying.

The only issue I’ve had is that the included proprietary ATI driver fails when you try to suspend; however, this is apparently easily circumvented by using the most recent driver from the ATI site (it was a bug in their driver).

Using the open source driver results in poor power management, so I’d advise against doing that.

Trac quick start on Debian

Trac is pretty easy to set up on Debian; here’s a mini guide to what I did to get it working nice and quickly. You’ll probably want to configure users etc. afterwards, but this should be enough to get going.

Install:

apt-get install trac libapache2-mod-wsgi

Initiate the trac environment:

trac-admin /var/www/srdev/trac/ initenv

Set permissions:

chown -R www-data /var/www/srdev/trac/

Install the WSGI script and web resources. The argument before ‘deploy’ should match the installed environment (the path above). The argument following ‘deploy’ can probably go anywhere, but I just shoved it in the trac dir.


trac-admin /var/www/srdev/trac/ deploy /var/www/srdev/trac/www/

Next, we need to tell Apache how to call the WSGI script; we do this using an alias. You can use / if you just want it at the root of your virtual host, but in my case I wanted Trac in a subdirectory of the site (www.mysite.com/trac). Add this to your virtual host (/etc/apache2/sites-enabled/whatever).

The directory directive specifies some permissions for the script.

WSGIScriptAlias /trac /var/www/srdev/trac/www/cgi-bin/trac.wsgi
<Directory /var/www/srdev/trac/www/cgi-bin>
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>

Finally, enable the wsgi module, and then restart apache. Now it *should* work 🙂

a2enmod wsgi
/etc/init.d/apache2 restart
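
As a quick example of the user configuration mentioned at the start, granting someone full admin rights looks like this (‘myuser’ is a hypothetical username):

trac-admin /var/www/srdev/trac/ permission add myuser TRAC_ADMIN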