pantone2hex

I recently put together a small command-line tool to convert Pantone colors to their hexadecimal values.

$ pantone2hex 122C
#fed141

You can grab the code on GitHub.

Password storage, are you doing it wrong?

Here are a few simple questions to ask yourself to find out if you're storing passwords incorrectly.

Are you storing them as plaintext?

Really? Well, that's bad. Very bad. Whenever a website sends me my password in cleartext in an email, I delete my account: I know I can't trust their security. Whatever the size of your company, your database will eventually leak, so don't make it easily readable.

Are you hashing passwords with md5 or sha1?

That's a bit better, but almost as useless. md5 and sha1 map passwords to a limited (albeit very large) set of values. While you can't "un-md5" or "un-sha1" something, you can still precompute a table mapping hashes back to possible passwords (known as a rainbow table). Rainbow tables for md5 and sha1 can be downloaded and fit in a few hundred gigabytes nowadays. An attacker then just has to look up a hash in the table to get one of the possible original passwords.
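To see why, remember that a plain hash is deterministic: the same password always produces the same digest, for every user and every site, which is exactly what makes a precomputed table work. A quick illustration from the shell (hunter2 is just a placeholder password):

$ echo -n 'hunter2' | md5sum
2ab96390c7dbe3439de74d0c9b0b1767  -
$ echo -n 'hunter2' | md5sum
2ab96390c7dbe3439de74d0c9b0b1767  -

Anyone who hashed hunter2 before you already knows which digest to look for.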

Are you hashing with md5 or sha1, but with an app-wide salt?

Salting is a very good idea. Instead of hashing the password alone, you hash the password concatenated with a random string (known as the salt). That way, rainbow tables found online become useless, because they were not built with your salt. But chances are that if an attacker got your database, they also got the source code of your app, including the salt. It's only a matter of time before they build a rainbow table specific to your salt.
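In shell terms, an app-wide salt is nothing more than appending the same secret string to every password before hashing it (the salt value below is made up for the example):

$ SALT='s3cr3t-app-wide-salt'
$ echo -n "hunter2${SALT}" | sha1sum

The hash of hunter2 alone no longer appears anywhere in your database, so generic rainbow tables miss it.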

Are you hashing with md5 or sha1, but with a specific salt per user?

Now we're talking. That is a very effective way to slow down attackers. Even if they get their hands on your database, and the salt associated with each user, they will have to build as many custom rainbow tables as you have users in your database. This moves the attack from massive brute force to targeting specific users, and so diminishes the threat. The only drawback is that, thanks to Moore's law, computers are getting faster and faster, and in a few years generating hundreds or thousands of custom rainbow tables will be inexpensive.
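The idea is the same as before, except that the salt is generated randomly for each account and stored next to the hash. A minimal sketch (the openssl call is just one way of getting random bytes):

$ SALT=$(openssl rand -hex 16)
$ echo -n "hunter2${SALT}" | sha1sum

You then store both the salt and the resulting hash in the user's row, and redo the same concatenation when checking a login.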

Are you hashing with bcrypt?

For the most future-proof implementation, you should use bcrypt. Bcrypt acts like md5 or sha1 with a specific salt per user, except that it's designed to be very slow. And that's a good thing: if an attacker needs to build a rainbow table, it will take them forever. Best of all, you can adjust how much time the hashing should take, and increase it in a few years when computers get faster. The resulting bcrypt hash contains the salt and the cost factor used to generate it.
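If you just want to see what such a hash looks like, the htpasswd tool from apache2-utils can generate bcrypt hashes from the command line (the username, password and cost of 12 are placeholders for the example):

$ htpasswd -nbBC 12 alice 'hunter2'
alice:$2y$12$...

The $2y$ prefix identifies the bcrypt variant, 12 is the cost factor, and the salt is embedded in the characters that follow, so nothing else needs to be stored alongside it.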

Conclusion

You now have a good overview of what to do and not to do when storing passwords. Remember that the main goal is to make it as hard as possible for a potential attacker to read one of your users' passwords. And the best solution not only works today, but will still work tomorrow.

Automatically save pictures from phone to Dropbox folder

The Dropbox app on my Android phone offers to automatically save the pictures I take to my Dropbox account. That is a great feature, removing the pain of backing up pictures on a regular basis.

But it actually saves them in a special Dropbox folder named Camera Uploads, one that cannot be moved and is not synchronized with the desktop Dropbox client.

So I created a special ifttt recipe that copies any new picture added to this folder into a regular Dropbox folder. I simply chose Dropbox as the input, with /Camera Uploads as the folder to watch. Then I also chose Dropbox as the output, kept the picture's File URL and File name for the output fields, and chose one of my own folders as the Dropbox folder path.

Now, whenever I take a picture on my phone, it gets saved to my Dropbox account, then ifttt kicks in and copies it to another directory in my Dropbox, which in turn gets synced to my local Dropbox folder.

That's quite a convoluted way to simply get a picture from my phone to my computer, but it's still the easiest one I found.

Generate dummy images for testing file upload

Today I needed to test a file upload mechanism, and I needed a bunch of different files to check that the max file size, max/min image dimensions and image type were correctly enforced.

I asked my good friend the command line and came up with the following commands to generate the needed files.

$ dd if=/dev/urandom of=1mo.binary count=1024 bs=1024
1024+0 records in
1024+0 records out
1048576 bytes (1,0 MB) copied, 0,0684895 s, 15,3 MB/s
$ ls
total 1,1M
-rw-r--r-- 1 tca tca 1,0M nov.  27 12:00 1mo.binary

This created a binary file named 1mo.binary of exactly 1MB. That can be useful if you simply need to test size limits. But I also needed my files to be valid jpg files, so I used convert.

$ convert -size 640x640 xc:blue 640.jpg
$ ls
total 12K
-rw-r--r-- 1 tca tca 2,7K nov.  27 12:04 640.jpg

This created a valid blue jpg file of 640x640 px. But the file size was way too small, and I needed a bigger file size without bigger image dimensions. The best way to do that was to add junk metadata that simply inflates the file size. So I used /dev/urandom again to get random data.

$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 1048576 | head -n 1 > 1mo.txt
$ ls
total 1,1M
-rw-r--r-- 1 tca tca 1,1M nov.  27 12:04 1mo.txt
-rw-r--r-- 1 tca tca 2,7K nov.  27 12:04 640.jpg

This generated a 1mo.txt file full of random alphanumeric garbage. You can change the fold -w value to adjust the size of the generated file. The next step was to feed this content to our jpg file.

$ exiftool 640.jpg -comment\<=1mo.txt
$ ls
total 2,1M
-rw-r--r-- 1 tca tca 1,1M nov.  27 12:04 1mo.txt
-rw-r--r-- 1 tca tca 1,1M nov.  27 12:05 640.jpg

This updated the 640.jpg file by adding the content of 1mo.txt into the comment metadata. You need to use the <= syntax to feed it the content of the file, because your shell might not like a 1MB argument. Also, you need to escape the < or your shell will interpret it as a redirection.

Now you're ready to generate jpg files of any dimensions and any file size.
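Putting the three steps together, here is a small sketch of a script that produces one oversized jpg in one go (the dimensions, padding size and file names are just examples):

#!/bin/sh
# Generate a solid-color jpg of the wanted dimensions
convert -size 640x640 xc:blue 640.jpg
# Generate roughly 1MB of random alphanumeric padding
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 1048576 | head -n 1 > padding.txt
# Stuff the padding into the comment metadata to inflate the file size
exiftool 640.jpg -comment\<=padding.txt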

Pushing to production and GitHub in one command

I'm using git for all my workflows. I use either GitHub or BitBucket to store my code online. And for some tiny projects, I'm also using git directly to push to production.

Pushing to own remote

I have a few repositories that simply hold a bunch of html and css files to display a very simple page. Whenever I push some changes to those repositories, I want the changes to be directly reflected online.

For this, I created a new directory on my server, aptly named repo, and ran git init --bare inside it to create a bare repository. Then, from my local repository, I simply pointed the origin remote to this bare repository. Running git push now pushes my changes to this repo.
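Concretely, the setup looks something like this (the server address and paths are placeholders for my own). On the server:

$ mkdir repo && cd repo
$ git init --bare

Then on my machine, inside the local repository:

$ git remote set-url origin user@myserver:/path/to/repo
$ git push origin master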

Easy, I have my own repo on my own server to store my files.

Pushing to production

But that's only a bare repo, holding the history of changes but not exposing a working directory. For that, I cloned repo into another directory using git clone ./repo ./dist. This dist directory is the one actually served by nginx.

I added a hook to repo/hooks/post-receive with the following code:

#!/bin/sh
unset GIT_DIR
cd /path/to/my/dist/directory
git pull

This will run every time repo receives a new push. It will go to the dist folder and pull the changes from repo (as repo is the default origin for dist, since we cloned from it).

The unset GIT_DIR part is needed so that the hook runs correctly from a bare repo.

Now, every time I push my code, the hook runs and the dist repo gets updated. And as this directory is served by nginx, the changes are immediately visible to everyone.

Pushing to multiple remotes

But that's not all. I don't like having my code saved in only one place; I'd like to also have my sources available on GitHub. So I updated the post-receive hook by adding the following lines:

cd /path/to/my/repo/directory
git push

Of course, I also configured the origin remote of repo to point to GitHub, but you can make it any repository. This will automatically push the content to a secondary repo whenever the primary one receives new data.
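For reference, that configuration is just a matter of giving the bare repo a remote to push to; something along these lines, with the GitHub URL being a placeholder:

$ cd /path/to/my/repo/directory
$ git remote add origin git@github.com:user/project.git

Depending on your git configuration, you may also need to tell git push which branch to send, for instance by using git push origin master in the hook instead.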

Conclusion

With simple git hooks, I managed to push my code to production and save the sources in two different repositories whenever I git push. Fewer commands to type, more time to code something else.