Publishing my first npm package

I published my first npm package today. It's a micro-CSS framework, based on tachyons.css, but extended with Algolia-specific classes. What the package actually does is not the point of this post, though.

What I'd like to share here are the tricks and workarounds I had to get right to publish the final package on NPM.

Having a script ready for release

I've reused a release script we've been using at Algolia on one of our JavaScript projects. It already handles the whole dance of updating the develop branch, switching to master, merging, bumping the version number and publishing to npm.

I've extended it a bit to add an actual build step, as I have to compile SCSS files to final CSS (including a minified version).
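To give an idea of the shape of that step, here is a minimal sketch of what the build part could look like, assuming shelljs (which the release script already uses below) and node-sass, with hypothetical file paths:

const shell = require('shelljs');

// Compile the SCSS entry point to the final CSS, plus a minified version.
// The node-sass dependency and the paths are assumptions for illustration.
shell.exec('node-sass ./src/index.scss ./dist/styles.css');
shell.exec('node-sass --output-style compressed ./src/index.scss ./dist/styles.min.css');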

Not using yarn, but good old npm

The main problem I had was that I could run my ./scripts/release script from the command line and have everything published correctly, but if I ran the same thing through yarn run release, it would fail at the actual publish step.

After some digging I found that yarn cannot read the npm credentials file needed to publish. The solution is to run npm run instead of yarn run.

So I added a check at the start of my ./scripts/release script to stop execution if the script is not run from the context of npm. It turns out you can do that by reading the process.env._ variable, which contains the path to the binary that is executing the script.

const shell = require('shelljs');

// Note: `.error` presumably comes from a string-coloring helper (such as the colors package) configured elsewhere in the script
if (!process.env._.match(/npm$/)) {
  shell.echo('This script must be run with "npm run release"'.error);
  shell.echo('It will not correctly publish to NPM if used with yarn');
  process.exit(1);
}

That way, I won't forget that I need to run it through npm and not yarn.

Publishing needed files

Once published, I tried to install it as a dependency in another project and realized I was publishing unneeded files. I don't need to publish my ./scripts directory, for example.

My package is based on tachyons, so I import the tachyons SCSS sources in my own SCSS file to build the final CSS. But I don't want to publish all the tachyons source files when you pull my package as a dependency.

So I started fiddling with the files key of package.json. You're supposed to put there an array of all the filepaths (as globs) you'd like to include in the final published package. You can also define a .npmignore file (with the exact same syntax as a .gitignore file) that is used to exclude some files from being published.

The sneaky part here is that the files key takes precedence over the .npmignore file. And in my case I wanted to include the ./src directory, but exclude the ./src/vendors directory. No luck there for me.

In the end I had to completely ditch the files key and rely on .npmignore only. So instead of defining a list of files to include and then excluding some specific ones, I had to define a list of all the files and directories I wanted to exclude. Not as convenient, but it works, and I now have the built .css files as well as my source .scss files in the final package.
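As an illustration, the resulting .npmignore could look something like the following (the exact entries are hypothetical, based on the directories mentioned above, not the actual file):

scripts/
src/vendors/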

Fighting with the postinstall script

Now, the last bit was trickier. I use a postinstall script in my package.json to automatically copy the tachyons SCSS source files from my node_modules to ./src/vendors (the directory I excluded in the previous step).

That way I don't have to commit the tachyons files to my repo, but I still have the dependency referenced in package.json and pinned to a specific version. It keeps my dependencies explicit and my repo lightweight.
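As an illustration, the ./scripts/postinstall script could be as small as the following sketch (the package name and paths are assumptions, not the actual script):

#!/usr/bin/env node
// Copy the tachyons SCSS sources from node_modules to ./src/vendors so my
// own SCSS file can import them. The tachyons-sass name and the src/ path
// are illustrative assumptions.
const shell = require('shelljs');

shell.mkdir('-p', './src/vendors');
shell.cp('-R', './node_modules/tachyons-sass/src/', './src/vendors/tachyons/');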

The thing is, the postinstall hook is called both when you manually run yarn install locally and when your package is installed as a dependency by someone else. That was a big surprise for me. I couldn't imagine that was the case at first, given all the security implications it brings (anyone could run an arbitrary script as part of the postinstall hook of any deep dependency you have in your project).

Still, I didn't want that to happen in my case, because I had excluded the ./scripts/postinstall script from the published package, so the postinstall hook was failing, and with it the whole installation of my package.

At first I tried to check whether the postinstall hook was triggered by a local install or a dependency install, but I could not find any reliable way to tell the two apart. After more than 10 releases spent testing, I settled on a trivial solution.

Instead, I checked if the ./scripts/postinstall file was present, and only ran it if it was. Because the ./scripts folder is excluded from the published package, the whole hook is simply skipped when the package is installed as a dependency.

Here is the final postinstall hook I used (the || true is needed so the hook still succeeds, and the install completes, when the script is absent).

{
  "scripts": {
    "postinstall": "(test -f ./scripts/postinstall && ./scripts/postinstall) || true"
  }
}

Server backup using Dropbox

Last weekend I received an email from the company that hosts this very website. Their monitoring had detected an issue with my server. Two hours later they were able to tell me something was wrong with my hard drive. Twenty-four hours after that, they explained the procedure to get the hard drive replaced.

I'm using this dedicated server for hosting websites, but also as a backup for some personal data and private repositories. That was too much responsibility for one server. Having it down made me realize my offsite backup was not 100% reliable.

I decided to keep this server for hosting my websites. Having a dedicated server lets me tweak nginx and deploy with rsync. I'm deploying static websites, so the security risk is low.

But now I needed to move my backups somewhere else. This whole experience made me realize I don't want to have to maintain the backup myself. I want something that "just works™" and where I can push my data. I decided to go with the Dropbox Plus plan, as Dropbox is a service I'm already using. $10/month for 1TB is fine; I'm paying for the peace of mind it will bring me.

It also comes with features I could not have built on my dedicated server, like a web UI to browse pictures, or a way to share (and revoke) links with people. Those are not the main features I was looking for, but they are nice to have.

Now the question was: how do I move my 400GB+ of data to Dropbox? I don't even have that much space available on my local computer, and downloading it all only to re-upload it afterwards would take ages.

I have rescue access to my server, with a way to mount the hard drive and explore it. It's Debian-based, so I can install the Dropbox headless client.

Once it was installed, I also had to install their Python helper script and rename it to dropbox. This tool lets me control Dropbox from the command line. I started by excluding all existing directories from the synchronization: I didn't want this rescue box to download my whole current Dropbox.

Then I created a new directory in my Dropbox (through the web UI), and added inside it symbolic links to all the directories I wanted to back up. And it worked. I didn't expect it to work that easily, to be honest. I could see in the web UI directories being created and all my data being uploaded. Now all I have to do is wait a couple of hours while the backup happens in the background.

That's the most original way I've ever used Dropbox.

Meetup random user picker

As a co-organizer of two meetups in Paris (HumanTalks and TechLunch), I often give away prizes to attendees at the end of the sessions. They can be free tickets to conferences we partner with, or gifts from some of our sponsors.

To choose who is going to get the prize, we resort to randomness and we have a bunch of JavaScript scripts lying around to do that. To make the process easier and scalable to more meetups, I created an online random attendee picker.

Screencast of the tool in action

You enter the URL of your meetup.com event in the input field, and the tool fetches the page and picks one of the attendees at random. Because of CORS restrictions, I could not load and parse the remote meetup.com page directly from the website, so I used webtask.io to do that part instead.
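On the page itself, once the webtask described below has returned the list of attendees, the random pick boils down to something like this (a sketch; the attendees variable name is illustrative):

// Pick one attendee at random from the list returned by the webtask
const winner = attendees[Math.floor(Math.random() * attendees.length)];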

webtask.io is a mix between Gists and Heroku. You push a server-side snippet of JavaScript to their platform (or write it directly in their online editor), and it is automatically hosted. You then get a URL you can call to run your script and get its results, and it accepts query-string parameters as inputs.

The whole download, parse and format-as-JSON logic was moved into such a webtask script, and that's the script my webpage requests. You can find the code on GitHub.
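To give an idea of the shape of such a webtask, here is a rough sketch (not the actual script from the GitHub repo; the availability of the request module, the query parameter name and the parsing regex are all assumptions):

const request = require('request');

// A webtask exports a function receiving a context (carrying the query-string
// parameters) and a Node-style callback used to return the result as JSON.
module.exports = function (context, callback) {
  const url = context.data.url; // e.g. ?url=https://www.meetup.com/...
  request(url, function (error, response, body) {
    if (error) {
      return callback(error);
    }
    // Hypothetical parsing: the real meetup.com markup differs
    const attendees = (body.match(/data-attendee-name="([^"]+)"/g) || [])
      .map(function (attribute) {
        return attribute.replace(/^data-attendee-name="([^"]+)"$/, '$1');
      });
    callback(null, { attendees: attendees });
  });
};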

This kind of architecture is called serverless, and that, I believe, is the future of the web: static hosting through GitHub Pages, while still allowing for server-side scripting when you need it. All of that, for free.

Making of CSS Flags

If you're following me on Twitter, you might have seen that I released a crazy project called CSS Flags. People often ask me why I did such a thing. This blog post is an attempt at explaining the thought process that motivated me to build it.

Trip to Tours

Two years ago I was on holiday and visited the city of Tours, in France. It's a small medieval city where you can find the museum of "Compagnonnage". I had no idea what that meant before visiting, and learned everything about it during my visit.

Companions were craftsmen organized in guilds. They could be blacksmiths, carpenters, shoemakers, painters, etc. They gathered and shared their knowledge among themselves. Being recognized as a Companion was a mark of quality in your work.

They followed a strict hierarchy of disciples and masters. More importantly, they had a specific relationship with work. They wanted work to remain nothing more than a tool. They refused to let work drive their lives; they wanted to stay the masters of their tools and never be enslaved by them.

But when visiting the museum, what struck me the most was the set of internal rules they followed 200 or 300 years ago. They strove to teach their craft to newcomers, to never compromise on quality, to always search for a better way to do things. They were passionate and knew that what they were working on was a small piece of something much bigger.

And all this resonated with me. It felt a lot like what I experience every day as a software developer, with the principles of Software Craftsmanship and the Egoless Programmer. Reading all those rules on old parchment and realizing they still apply today was a surprising experience.

The road to the Chef d'œuvre

I felt close to those people, but there was one thing they did back then that we do not do today: one important part of the initiation process, the step that takes you from disciple to master, the "Chef d'œuvre", or masterpiece.

Every Companion had to build one single piece of fine craft to become a master. They spent an awful lot of time on it, between 10 and 15 years. Masterpieces were miniature versions of a bigger construction, in which they had to use every technique of their craft, and then some.

Working on a miniature version allowed them to mess up with no consequences; they could start over without anyone getting hurt. It started as a perfect way to learn the basics, perfecting them on a small element. Then they added another technique on top of the first, learning not only each technique independently, but also the potential synergies between them and the limitations of each. By combining different techniques, they would even discover new ways to build and see problems from new perspectives.

What started as a learning sandbox evolved into an experiment that pushed the boundaries of the craft. The whole masterpiece is a metaphor for the growth of the craftsman, from disciple to master. Once their masterpiece was validated by the existing masters, the disciples became masters themselves.

What about doing the same?

I strongly encourage you to go to Tours and visit the museum. There are hundreds of masterpieces from different crafts, each more impressive than the last. Even if you don't speak French, the beauty of it alone is worth the trip.

After leaving the museum I thought about building my own masterpiece (in all modesty of course). After all, if it worked for them, considering everything we had in common, it should work for me too.

I didn't want to spend 10 to 15 years on it, though. I also have no skill with wood, so I decided to do it in CSS. I think it is a powerful but often underestimated language. It's also a language I know well, but it evolves so quickly that I rarely get the chance to try the latest shiny features, because I need to keep backward compatibility with older browsers.

The perspective of having my own miniature sandbox where I could try anything I wanted was appealing.

The challenge I set myself was to reproduce all the flags of the world in pure CSS. But because even that sounded too easy, I added the rule that I should use no more than one div per flag. Everything had to be done in CSS, nothing in HTML except that single div.

Learning vexillology

This in turn forced me to learn a new word: vexillology, the study of flags. Some people are passionate about music and can talk about it for hours. Well, some people are passionate about flags and can talk about them for hours.

I discovered a whole new world with its own vocabulary for colors and shapes. I discovered that each flag has its own history. By reading the history of a flag, you learn about the history of the country behind it.

This part was not a bonus; it was core to what I was trying to achieve. CSS was a means to an end, but what I was building were flags. I needed to understand my subject if I was to build it correctly.

When I started the project, I already knew how to make the easiest flags. Some others I knew would be a challenge, but I had ideas to try. For most of them, though, I had no idea where to start. That did not stop me: I knew I would learn along the way; all I needed was to get started.

The gritty details

That's how I got the idea and the motivation for this project. Starting with the easy bits first gave me confidence that I could do it. Moving progressively to more and more complex flags, I faced one problem at a time. The solutions I found for my first flags could be re-used and combined in the later ones.

I'll write a more complete (and CSS-focused) list of tips in an upcoming article.

How to spot a bullshit hackathon

As a co-organizer of a meetup group, I often receive messages from companies asking us to promote their hackathon or conference. We always refuse to do that kind of advertising, unless we've already attended one of their events and appreciated it.

Last week, we received an email about what I call a "bullshit hackathon". I've translated it into English and replaced the name of the company with FooBar. Enjoy the read and stay with me until the end, because I'm going to tell you what is bullshit about it.

Hello,

We work for FooBar, a start-up organizing innovation contests and hackathons!

We wanted to tell you about an innovation challenge that would greatly interest you as well as your meetup members!

That's why we invite you to the FooBar Challenge, organized by IBM in cooperation with BRED, in partnership with FooBar and revolving around the customer experience revolution.

We're asking you to develop the prototype of a web or mobile application, to be used internally, that should allow BRED to know more about its customers. Participants will have access to the Spark and Bluemix tools to develop their projects.

More than €15.000 worth of prizes are to be won by the three best teams. A trip to San Francisco (worth €3000/person), MacBook and Oculus virtual reality headsets!

To participate, you just have to create a team and present a first PowerPoint document of 5 slides.

The FooBar, IBM and BRED teams.

Note: BRED is a French bank.

If you don't see what is wrong with that message, let me explain.

First of all, refrain from adding exclamation marks everywhere! It's annoying! It makes you look foolish! And trust me, you don't need that!

You "want to tell me about something that will greatly interest me and my meetup members"? I don't think so. You actually are desperate to have developers going to your hackathon and working for free. It seems like in your head hackathons are a cheap way to get developers to do your work for you. Feed them with pizza and beer, give them a Hipster MacBook and a geeky Oculus and they'll do whatever you ask them to do.

Because that's what you do. You are asking people to develop a prototype. That's not the spirit of a hackathon, where you build whatever you want. Here it looks more like a classic set of specifications for an average day-job project. And look at what kind of project: an internal tool for a bank to learn more about its customers. That's not anybody's dream project. That's more like the kind of app you'd build because it's your job and you have no other choice. I might be exaggerating a bit here, and some people may enjoy building this kind of app, but that's not the point. The point is that you do not tell people what to build in a hackathon.

Oh, and how kind of you to "let" the participants use Spark. You know, Spark is open source; no one needs your approval to start using it.

At that point in the email, I was already convinced that this thing, whatever it was, was a hackathon in name only. But the final selection based on a 5-slide PowerPoint was the nail in the coffin. First, you're filtering participants before the event has even begun, and then you're doing it based on a bullshit PowerPoint?

Haha, thanks but no thanks.

I hope that other "hackathon organizers" will read this and work a bit more on understanding what a hackathon actually is before sending those pathetic emails.