Sending data to an iframe with Vue.js

Communicating from a parent window to a child iframe is a known problem in JavaScript and has already been solved. How to do it in the context of a Vue.js application is slightly different but based on the same principles.

I was confronted with it yesterday and had to find an elegant way to send data (credentials) from the parent window of my Vue.js app to a child iframe. Here's how I did it.

Initial markup

Both the parent (main window) and child (iframe) need some specific markup for this to work:

<!-- App.vue template -->
iframe(
  src="./iframe.html",
  v-on:load="onLoadIframe",
  name="myIframe"
)
// App.vue script
import _ from 'lodash';

// Look up a child window by its name among all sub-windows
function findIframeByName(name) {
  return _.find(window.frames, frame => frame.name === name);
}

export default {
  methods: {
    onLoadIframe(event) {
      const iframe = findIframeByName(event.currentTarget.name);
      iframe.doSomething({
        appID: "SECRET_APP_ID",
        apiKey: "SECRET_API_KEY"
      });
    },
  },
};
<!-- iframe.html -->
<body>
  <script type="text/javascript">
    window.doSomething = function(parentData) {
      console.info(parentData);
    }
  </script>
</body>

How does this work?

When App.vue loads, the template is rendered and creates the iframe, which loads iframe.html in turn.

Because we added a v-on:load listener on the iframe, we know when it has finished loading. At that point, we can read the .currentTarget property of the fired event.

This gives us a reference to the iframe HTML element. We can't do much with the HTML element itself; what we actually need is the window object living inside of it.

But what we can do is read its name attribute (myIframe). Then, using the findIframeByName helper, we loop over all the window.frames entries until we find the one matching that name.

window.frames contains references to all the sub-windows, so we can now call any method defined in the global window namespace of the child iframe. In our case, that's the window.doSomething method.
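
As a side note, lodash is not strictly required for the lookup. window.frames is array-like, so a lodash-free sketch of the same helper could look like this:

// Hypothetical lodash-free variant of findIframeByName:
// window.frames is array-like, so it can be converted to a real array
function findIframeByName(name) {
  return Array.from(window.frames).find(frame => frame.name === name);
}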

tl;dr

The basic trick here is to wait for the iframe to load, get hold of a reference to its inner window object, and then call any globally available method defined on it.

Importing an iframe with Webpack and Vue.js

I spent hours on a webpack + Vue.js + iframe issue yesterday. As I don't want all those hours to be completely wasted, I'm going to document my issue and the final solution.

The problem

I'm working on a Vue.js application, using Webpack for building all the assets. One of the pages needs to include an iframe, loading a stand-alone keen-explorer.html file.

My problem was that the keen-explorer.html was not included in the final build and resulted in a 404.

Context: Why do I need to do that?

I need to instantiate a Keen.io explorer dashboard from inside my app. The keen-explorer JavaScript module cannot be imported from a script, as far as I can tell. It needs to be loaded inside the global window object, along with its dependencies (Bootstrap and jQuery).

I tried different ways to include it in my final Webpack build, but the iframe was the best solution I could find, as it isolates all the external dependencies from the rest of my app. Anyway, back to the problem at hand.

Naive approach and first 404

I'm using vue-loader to parse the content of my .vue files. Here is my template (using pug):

iframe(src="html/keen-explorer.html")

If I run webpack, it finishes without any error, but when loading the application, I get a 404 as html/keen-explorer.html cannot be found. Webpack did not include it in the final build and treated the src as any other attribute with no particular meaning.

Making vue-loader require iframe sources

It turns out that vue-loader does not follow iframe[src] values by default. You have to update your webpack config to tell it which attributes should be treated as imports:

module: {
  rules: [
    {
      test: /\.vue$/,
      loader: 'vue-loader',
      options: {
        transformToRequire: {
          iframe: 'src',
        },
      },
    },
  ],
}
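
Note that transformToRequire is the option name in vue-loader v14 and earlier; v15 renamed it, so on a newer setup the equivalent rule should presumably look like this:

module: {
  rules: [
    {
      test: /\.vue$/,
      loader: 'vue-loader',
      options: {
        // vue-loader v15+ renamed transformToRequire to transformAssetUrls
        transformAssetUrls: {
          iframe: 'src',
        },
      },
    },
  ],
}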

Now, if I rerun webpack, I get an error because webpack can't find html/keen-explorer.html. We're making progress; at least it tries to import it.

Fixing the filepath issues

vue-loader seems to treat all assets in the template as relative to the final output, while here I'm trying to reference a file that lives in my src/html directory.

The first thing to do was to define a resolve alias in webpack to tell it where to look for files that start with html/, using the following config:

// at the top of webpack.config.js
const path = require('path');

// in the config object
resolve: {
  alias: {
    html: path.resolve(__dirname, 'src', 'html'),
  },
},

Then, don't forget to prepend your src value with a ~ in the template, to tell vue-loader that it should be resolved as an import:

iframe(src="~html/keen-explorer.html")

Rerunning webpack results in no more filepath errors. The file is found and included in the final build.

Problem is... the filepath has been replaced by the content of the file, not by its path in the final build. That means my iframe is now trying to load <!doctype html>[...], which does not work at all.

Replacing the file with its built filepath

We're approaching the end; the solution is near. The only thing missing at that point is a way to replace the imported file with its built filepath in the final output.

Thankfully, that's exactly what file-loader is for:

module: {
  rules: [
    {
      test: /\.html$/,
      use: [
        {
          loader: 'file-loader',
          options: {
            name: '[hash].[ext]',
          },
        }
      ],
    },
  ],
},

With this, the iframe src is set to the relative filepath of the built HTML file.

Conclusion

It took me about 25 minutes to write this article, but more than 6 hours to hunt down and debug this issue from start to finish (not counting all the wrong leads, like trying to embed the iframe as base64).

To recap, if you want to load a standalone iframe from your Vue.js application using webpack, you need to follow the four steps below (a consolidated config sketch comes after the list):

  1. Configure vue-loader so it follows iframe[src] attributes
  2. Prepend ~ to your src values
  3. Configure a resolve alias so filepaths can be resolved by webpack
  4. Add a file-loader to process .html pages
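
Putting those four steps together, here is a consolidated webpack config sketch (it merges the snippets above; the surrounding module.exports boilerplate is assumed):

// webpack.config.js (sketch combining the snippets above)
const path = require('path');

module.exports = {
  resolve: {
    alias: {
      // Resolve ~html/... imports to the src/html directory
      html: path.resolve(__dirname, 'src', 'html'),
    },
  },
  module: {
    rules: [
      {
        // Make vue-loader follow iframe[src] attributes
        test: /\.vue$/,
        loader: 'vue-loader',
        options: {
          transformToRequire: {
            iframe: 'src',
          },
        },
      },
      {
        // Emit imported .html files and replace imports with their built path
        test: /\.html$/,
        use: [
          {
            loader: 'file-loader',
            options: {
              name: '[hash].[ext]',
            },
          },
        ],
      },
    ],
  },
};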

Publish my first npm package

I published my first npm package today. It's a micro CSS framework, based on tachyons.css, but extended with Algolia-specific classes. What the package actually does is not the point of this post, though.

What I'd like to share here are the tricks and workarounds I had to figure out to publish the final package on npm.

Having a script ready for release

I reused a release script we've been using at Algolia on one of our JavaScript projects. It already handles the whole dance of updating the develop branch, switching to master, merging, updating the version number, and publishing to npm.

I've extended it a bit to add an actual build step, as I have to compile SCSS files to final CSS (including a minified version).

Not using yarn, but good old npm

The main problem I had was that I could run my ./scripts/release script from the command line and have everything uploaded, but if I ran the same command through yarn run release, it would fail during the actual publish phase.

After some digging, I found that yarn cannot read the npm credentials file needed to publish. The solution is to run npm run instead of yarn run.

So I added a check at the start of my ./scripts/release script to stop execution if the script is not run from the context of npm. It turns out you can do that by reading the process.env._ variable, which contains the path to the binary that is executing the script.

#!/usr/bin/env node
const shell = require('shelljs');

// process.env._ contains the path of the binary running this script
if (!(process.env._ || '').match(/npm$/)) {
  shell.echo('This script must be run with "npm run release"');
  shell.echo('It will not correctly publish to npm if used with yarn');
  process.exit(1);
}

That way, I won't forget that I need to run it through npm and not yarn.

Publishing needed files

Once published, I tried to install it as a dependency in another project and realized I was publishing unneeded files. I don't need to publish my ./scripts directory, for example.

My package is also based on tachyons, so I include the tachyons SCSS source file in my own SCSS file to build the final CSS. But I don't want to publish all the tachyons source files when you pull my package as a dependency.

So I started fiddling with the files key of package.json. You're supposed to put there an array of all the filepaths (as globs) you'd like to include in your final published package. You can also define a .npmignore file (with the exact same syntax as a .gitignore file) to exclude some files from being published.

The sneaky part here is that the files key takes precedence over the .npmignore file. And in my case I wanted to include the ./src directory but exclude the ./src/vendors directory. No luck there for me.

In the end, I had to completely ditch the files key and use .npmignore alone. So instead of defining a list of files to include and then excluding some specific ones, I had to list all the files and directories I wanted to exclude. Not as easy, but it works, and I now have the built .css files as well as my source .scss files in the final package.
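
For reference, a hypothetical .npmignore along those lines could look like this (the exact entries depend on your repository layout):

# .npmignore (sketch): everything listed here is excluded from the package
scripts/
src/vendors/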

Fighting with the postinstall script

Now, the last bit was trickier. I use a postinstall script in my package.json to automatically copy the tachyons.scss source files from my node_modules to ./src/vendors (the directory I excluded in the previous step).

That way I don't have to commit the tachyons files to my repo, but I still have tachyons referenced in package.json and pinned to a specific version. It keeps my dependencies clear, and my repo lightweight.
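
A minimal sketch of what such a postinstall script could look like (the tachyons package layout and target paths here are assumptions):

#!/usr/bin/env node
// ./scripts/postinstall (sketch): copy the tachyons sources from
// node_modules to src/vendors so the SCSS build can import them
const shell = require('shelljs');

shell.mkdir('-p', './src/vendors/tachyons');
shell.cp('-r', './node_modules/tachyons/src/*', './src/vendors/tachyons/');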

The thing is, the postinstall hook is called both when you manually run yarn install locally, and when your package is installed as a dependency by someone else. That was a big surprise to me; I couldn't imagine that was the case at first, given all the security implications it carries (anyone could run an arbitrary script as part of the postinstall hook of any deep dependency in your project).

Still, I didn't want that to happen in my case, especially because I had excluded the ./scripts/postinstall script from the published package, so the postinstall hook was failing, and with it the whole installation of my package.

After more than 10 releases to test it, I settled on a trivial solution. At first, I tried to check whether the postinstall hook was triggered as part of a local install or a dependency install, but I could not find any reliable way to test it.

Instead, I checked if the ./scripts/postinstall file was present, and ran it only if so. Because I was excluding my ./scripts folder from the published package, the whole hook was skipped when the package was installed as a dependency.

Here is the final postinstall hook I used (the || true is needed so the hook still succeeds when the file is absent, letting the install complete):

{
  "scripts": {
    "postinstall": "(test -f ./scripts/postinstall && ./scripts/postinstall) || true"
  }
}

Server backup using Dropbox

Last week-end, I received an email from the company that hosts this very website. Their monitoring had detected an issue with my server. Two hours later, they were able to tell me something was wrong with my hard drive. Twenty-four hours after that, they explained the procedure to get the hard drive replaced.

I'm using this dedicated server for hosting websites, but also as a backup for some personal data and private repositories. That was too much responsibility for one server. Having it down made me realize my offsite backup was not 100% reliable.

I decided to keep this server for hosting my websites. Having a dedicated server lets me tweak nginx and deploy with rsync. I'm deploying static websites, so the security risk is low.

But now I needed to move my backup somewhere else. This whole experience made me realize I don't want to have to maintain the backup myself. I want something that "just works™" and where I can push my data. I decided to go with the Dropbox Plus plan, as Dropbox is a service I'm already using. $10/month for 1 TB is ok. I'm paying for the peace of mind it will bring me.

It also comes with features I couldn't have built on my dedicated server, like a web UI to browse pictures, or a way to share (and revoke) links with people. Those aren't the main features I was looking for, but they are nice to have, still.

Now the question is: how do I move my 400+ GB to Dropbox? I don't even have that much space available on my local computer, and downloading it all to re-upload it afterwards would take ages.

I have rescue access to my server, with a way to mount the hard drive and explore it. It's Debian-based, so I can install the Dropbox headless client.

Once installed, I also had to install their python helper and rename it to dropbox. With this tool, I can control Dropbox from the command line. I started by excluding all directories from the synchronization; I didn't want this rescue box to download my entire existing Dropbox.

Then I created a new directory in my Dropbox (through the web UI) and added, inside it, symbolic links to all the directories I wanted to back up. And it worked. I didn't expect it to work that easily, to be honest. I could see in the web UI directories being created and all my data being uploaded. Now all I have to do is wait a couple of hours while the backup happens in the background.
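
For reference, the commands involved looked roughly like this (the download URL is the documented one for the headless client; the backup paths are made up):

# Install the headless client, then start the daemon
cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf -
~/.dropbox-dist/dropboxd &

# Using the python helper (renamed to dropbox): exclude existing directories
dropbox exclude add ~/Dropbox/*

# Symlink the directories to back up into a new folder inside the Dropbox
mkdir ~/Dropbox/server-backup
ln -s /mnt/backup/photos ~/Dropbox/server-backup/photos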

That's the most original use of Dropbox I've ever made.

Meetup random user picker

Being a co-organizer of two meetups in Paris (HumanTalks and TechLunch), I often give away random prizes to the attendees at the end of the sessions. They can be free tickets to conferences we partner with, or gifts from some of our sponsors.

To choose who gets a prize, we resort to randomness, and we have a bunch of JavaScript scripts lying around for that. To make the process easier and scalable to more meetups, I created an online random attendee picker.

[Screencast of the tool in action]

You enter the URL of your event in the input field, and it fetches the page and picks one of the attendees at random. Because of CORS issues, I could not directly load and parse the remote meetup.com page from the website, so I used webtask.io to do that instead.

webtask.io is a mix between Gists and Heroku. You push a server-side snippet of JavaScript code to their platform (or write it directly in their online editor), and it automatically hosts it. You then get a URL you can use to run your script and get the results. It accepts query string parameters as input.
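
To give an idea of its shape, here is a hypothetical sketch of such a webtask (the context/callback signature is webtask's; the url parameter name, the scraping selector, and the availability of the request and cheerio modules are assumptions, not my actual code):

// Hypothetical webtask sketch: fetch a meetup.com event page server-side,
// extract the attendee names and return them (plus a random winner) as JSON
const request = require('request');
const cheerio = require('cheerio');

module.exports = function(context, done) {
  const url = context.data.url;
  request(url, (err, response, body) => {
    if (err) return done(err);
    const $ = cheerio.load(body);
    // Placeholder selector; the real meetup.com markup may differ
    const attendees = $('.attendee-name')
      .map((i, el) => $(el).text().trim())
      .get();
    const winner = attendees[Math.floor(Math.random() * attendees.length)];
    done(null, { attendees, winner });
  });
};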

The whole download, parse, and format-as-JSON logic was moved into this webtask script, and it's this script that my webpage requests. You can find the code on GitHub.

This kind of architecture is called serverless, and it is the future of the web: static hosting through GitHub Pages, while still allowing for server-side scripting when you need it. All of that, for free.