01 Nov 2017

I use Auth0 on my current project to authenticate my users. It prompts them with a clear modal UI asking them to authenticate using Google, GitHub, or other third parties. It's an easy way to handle authentication from the front-end without too much hassle.
But my app also uses Firebase as its main database, and I'm querying it from the front-end. I've set my Database rules to `auth != null`, meaning that only authenticated users can read or write my data.
The Firebase JavaScript SDK provides widgets to handle authentication with a GUI directly with Firebase, but as I'm already using Auth0 for the rest of the app, I don't want to have to ask my users to authenticate twice.
Auth0 also does not provide any integration with Firebase out of the box. It did in the past, but that integration seems to have been deprecated. This means I have to build my own plumbing between Auth0 and Firebase.
Because I had to handle Firebase authentication in another part of the app, I was already familiar with the fact that I needed a custom token. All I needed to add was a way to request it from the front-end.
What I did was create a Firebase function, publicly available at a specific URL, that returns a custom token. I then query this URL from my front-end to get the token and authenticate to Firebase with it.
But such a naive implementation would expose my custom token through a public endpoint that anyone could request. I had to secure it a bit more.
What I did was take the `access_token` obtained from Auth0 during authentication and send it to my Firebase function in its payload. The Firebase function then calls Auth0 with the `access_token` to get information about the user associated with it. If the call succeeds (and the user email matches the one initially sent), I can go forward and return the newly minted token. This token is then used by the front-end to authenticate to Firebase.
In the end, here is an overview of the complete token dance:
- Authenticate to Auth0, saving the `access_token` locally
- Call the Firebase function with this `access_token`
- The function in turn calls Auth0 with the `access_token` to validate it
- If it matches, the function returns a Firebase custom token
- Call Firebase to authenticate using the custom token obtained from the Firebase function
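The server-side half of this dance can be sketched roughly as below. This is a minimal sketch, not my exact function: `getUserInfo` and `createCustomToken` are injected stand-ins (assumptions) for the real Auth0 `/userinfo` call and `firebase-admin`'s `createCustomToken`, so the validation logic can be shown on its own:

```javascript
// Sketch of the Firebase function's check: validate the Auth0 access_token,
// compare emails, then mint a Firebase custom token for that user.
// getUserInfo and createCustomToken are injected stand-ins; the real ones
// would hit Auth0's /userinfo endpoint and firebase-admin respectively.
async function mintFirebaseToken({ accessToken, email }, { getUserInfo, createCustomToken }) {
  // Ask Auth0 which user this access_token belongs to
  const profile = await getUserInfo(accessToken);

  // Refuse to mint anything if the token doesn't match the claimed user
  if (!profile || profile.email !== email) {
    throw new Error('access_token does not match the claimed user');
  }

  // Mint a Firebase custom token identifying this user
  return createCustomToken(profile.email);
}
```

The front-end then passes the returned token straight to Firebase's sign-in method.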
28 Oct 2017

On one of the projects I'm working on, I'm using both Firebase functions and the Firebase Database. I call functions in reaction to specific events, and they save data in my Firebase Database.
I managed to have something running in development in a week. As I was still developing, I kept the default ACL of `read:true` and `write:true` on the database, meaning that anyone could read and write my data.
When time came to push to production, I followed Firebase's security best practices and set the rules to `auth != null`, meaning that only authenticated users could read or write data. I thought that "authenticated users" meant anyone with the API key.
Turns out that's not what it means. Identification (through an API key) and authentication (through a login/password) are two different things, and Firebase expects me to authenticate before accessing my data, even when I'm calling the db from a Firebase function.
Most Firebase documentation explains how to authenticate from a front-end application. The SDKs even provide GUI elements to make the integration smoother, with widgets to handle authentication through third parties such as Twitter, GitHub or Google.
Authentication using a custom token
That's useful, except when you're running your app from the backend and can't use those GUI elements. To authenticate from the backend, I had to use another method: authenticating through a custom token.
Basically, it means I'll give the Firebase authentication method a token that could only have been crafted by someone with admin access. I needed to create my own token, then use it to authenticate.
What was a bit strange to grasp at first was that I needed to instantiate both a `firebaseAdmin` instance (to create the token) and a regular `firebase` instance (to actually authenticate using this token).
Getting the custom token
The first step is to have an instance of `firebaseAdmin` running. I found all the information in the official Firebase documentation. It needs to be initialized with the `credential` option set to a valid certificate generated from my `serviceAccountKey.json` key. This part is crucial, as it is what allows my `firebaseAdmin` to mint (create) new tokens.
```js
import * as firebaseAdmin from 'firebase-admin';

firebaseAdmin.initializeApp({
  credential: firebaseAdmin.credential.cert(serviceAccountConfig),
  databaseURL: 'your_url'
});

let customToken = firebaseAdmin.auth().createCustomToken('backend');
```
The `'backend'` value can actually be any string; it identifies the user of this token (you can see it in your Firebase dashboard).
Authenticating using the custom token
Now that I had the token, I had to actually authenticate with it. To do so, I initialized my `firebase` instance with `initializeApp` as usual, then signed in with the custom token:
```js
import firebase from 'firebase';

firebase.initializeApp({...your_config});
firebase.auth().signInWithCustomToken(customToken);
```
Now my `firebase` instance can read and write data from my Firebase database.
Conclusion
Hope that helps. I got confused at first between identification and authentication, and it also took me a while to understand that I needed to mint the custom token with the admin instance and then authenticate using it.
26 Oct 2017

A picture is worth a thousand words; that's why I always try to add screencasts when describing an issue I'm facing. I find it useful to be able to record my screen when filing a GitHub issue about some UI or UX problem.
I have a method called `gif-record` in my command-line toolbox that lets me do that. It lets me draw a rectangle on screen, record what is happening inside it, and get a `.gif` as output to share as I please.
It seems pretty simple when explained like that, but it's actually some kind of Frankenstein's monster, plugging command-line tools together to get to the end result. In this article, I'll guide you through the pieces so you can build your own version for your own needs.
First of all, I'm using slop with `slop -f "%x %y %w %h"` to draw a rectangle on screen and get back the x,y coordinates, width and height. I then pass those coordinates to `ffmpeg -f x11grab`, using the `-s {width}x{height}` and `-i :0.0+{x},{y}` options to tell it to record the screen at those coordinates.
ffmpeg comes with a lot of option flags; the ones I'm using are `-y` to overwrite any existing file, `-r 25` for a recording at 25 FPS, and `-q 1` to get the best video quality possible.
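Putting the selection and the capture together, the command assembly looks roughly like this. It's a sketch: the geometry values are hardcoded for illustration, while in the real script they come from slop:

```shell
# Hypothetical slop output "x y w h" (hardcoded here for illustration;
# in practice it comes from: slop -f "%x %y %w %h")
geometry="100 200 640 480"
read -r x y w h <<< "$geometry"

# Build the ffmpeg capture command for that rectangle of the screen
cmd="ffmpeg -y -f x11grab -s ${w}x${h} -i :0.0+${x},${y} -r 25 -q 1 recording.mkv"
echo "$cmd"
```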
To stop the `ffmpeg` recording, you can either `Ctrl-C` the command you started, or kill it by its `pid`. In my script (see the link at the end of the article) I chose the second option, but I won't go into more detail about that here.
For the next step, I also use `ffmpeg`, but now that I have a video file, I convert it into a series of still frames in `.png` format. The command I'm using is `ffmpeg -i {input_video_file.mkv} -r 10 'frames/%04d.png'`.
The `-i` flag marks the input file, and `frames/%04d.png` defines the pattern of the output files (in this case, they will be saved in the `./frames` folder, with incrementing 4-digit names).
The `-r` flag is used once again to define the FPS. 10 is enough for my needs, as I record terminal output: it's smooth enough while keeping the file size small, but feel free to increase it. I keep my recording at 25 FPS to have the smoothest capture possible, then adjust the still-frame FPS depending on how smooth I want the end result.
Once I have all my still frames, I combine them into one `.gif` file. At this point, I recommend removing some of the first and last frames, as I always end up recording some garbage at the start and end. The number of files to delete is easy to calculate based on the FPS I defined: if I want to remove 2 seconds at the start with an FPS of 10, it means removing the first 20 frames.
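The frame-trimming arithmetic above, spelled out (the actual deletion command is left out, since it depends on how the frames are named):

```shell
fps=10
seconds_to_trim=2
# Number of leading frames to delete = seconds * FPS
frames_to_remove=$((seconds_to_trim * fps))
echo "$frames_to_remove"
# One (hypothetical) way to then delete them:
#   ls frames/*.png | head -n "$frames_to_remove" | xargs rm
```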
Converting `png` files into an animated `gif` can be done using `convert` (included in ImageMagick). The basic syntax is `convert ./frames/*.png output.gif`, but I also add the `-delay 10` option to the mix. The actual value to pass to `-delay` requires some basic math: it should be equal to 100 divided by the FPS defined earlier. For my previous example of an FPS of 10, the delay is 10; had I chosen an FPS of 25, the delay would be 4 (100/25 = 4).
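The FPS-to-`-delay` conversion is just this bit of arithmetic (`-delay` is expressed in hundredths of a second per frame):

```shell
fps=25
# ImageMagick's -delay unit is 1/100th of a second per frame,
# so the delay matching a given FPS is 100 / fps
delay=$((100 / fps))
echo "$delay"
```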
By default the generated gif plays once and then stops. I can control the number of times it loops with the `-loop` option; a value of `0` makes it loop indefinitely (my favorite).
At this stage I thought I was done, but the generated gif will most certainly be too heavy to upload to GitHub issues as it's not optimized at all.
Compressing a `gif` requires a tool called `gifsicle`; but not the official one, its giflossy fork. The original gifsicle does not have an option to compress files lossily, while `giflossy` (as the name suggests) does. Why are there two versions of the same tool in diverging branches? Well, OSS is hard.
Anyway, once the gifsicle fork is installed, I can use it with `gifsicle input.gif --lossy=80 -o output.gif`. The higher the value passed to `--lossy`, the more aggressive the compression. I also add `--colors 256` to force the conversion to a 256-color palette.
And that's it. By plugging all those tools together, I now have a way to record parts of my screen and share the output, directly from my terminal.
You can have a look at my full implementation, wrapped in a ruby script if you're interested. You should also have a look at gifify which is the tool that I was originally using for converting videos to gif files.
22 Oct 2017

To maximize the battery life of my new laptop, I wanted it to hibernate when I close the lid. I could see a "Hibernate" option in the settings, but it stayed greyed out and I could not select it.
One morning I sat at the table, determined to fix this issue. The first thing I tried was running `sudo pm-hibernate` to see if I could actually hibernate. The command did nothing except return an error code 1.
After some Googling, I understood it had to do with my BIOS configuration. I have Secure Boot enabled in my BIOS, which seems to prevent hibernating.
One reboot later, after disabling this option in the BIOS, I ran `sudo pm-hibernate` again, and this time I had much better results. My screen turned off for 2 seconds, then back on for 2 more seconds, then the laptop went to sleep.
Great, I was making progress! So I pressed the power button to turn it back on, but instead of coming back to my session, it initiated a whole reboot, going through the Lenovo splash screen and the Ubuntu cryptsetup prompt.
More Googling told me that I needed to configure GRUB to define what swap partition it should attempt to resume from. When you hibernate, the whole RAM is flushed to swap, and GRUB needs to know the UUID of the swap disk holding that data.
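On a setup without encrypted swap, that configuration would typically be a `resume` kernel parameter in `/etc/default/grub` (the UUID below is a placeholder), followed by `sudo update-grub`:

```shell
# /etc/default/grub (excerpt) — tell the kernel which swap to resume from
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```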
To get the UUID, I typed `sudo blkid | grep swap` to list all swap devices. In my case, I had two of them:

```
/dev/mapper/ubuntu--vg-swap_1: UUID="b0d3688c-e44a-4972-b18a-43e79ca3777c" TYPE="swap"
/dev/mapper/cryptswap1: UUID="78df939a-d7a9-46dc-9082-d46415cd6e0a" TYPE="swap"
```
OK, so which one is it? Because I'm using Ubuntu full-disk encryption, one of those two disks is actually the encrypted swap disk and the other is the "live" decrypted swap. But which is which?
`swapon -s` told me the name of the active swap: `/dev/dm-3`. OK, that's a good start. `sudo dmsetup info /dev/dm-3` yielded the final answer: it's `cryptswap1`.
So `cryptswap1` is the active swap, which means it's the decrypted swap, so `ubuntu--vg-swap_1` is the actual encrypted swap. That makes sense, as the `vg` in the name stands for "Volume Group", a term from LVM terminology.
My issue with this setup is that I cannot tell GRUB to resume from the decrypted swap, because the UUID of this swap is randomly assigned at each boot and, more importantly, it won't be decrypted yet when resuming.
But I cannot tell GRUB to resume from the encrypted swap disk either, as its content will look like random garbage from GRUB's point of view.
I was stuck. Other solutions online suggested flushing my RAM to an unencrypted disk so I could resume from it, but that defeats the purpose of encrypting my disks if I dump everything in RAM to a readable disk.
After hours and hours of Googling and trying, I was about to give up. That's when I decided to ask one of my coworkers who I knew had a similar setup: Linux on a Lenovo laptop, with encrypted disks.
His answer was all I needed:
over the years I just made my mind over the fact that hibernate was broken and I never even try to see if it's fixed, I just consider it as broken forever
Ok.
That's not just me then. Hibernating and having an encrypted drive are mutually exclusive. Too bad, I'd rather keep the encrypted drive.
29 Sep 2017

I used to do my JavaScript testing using a combination of `mocha`, `expect` and `sinon`, but Jest packages all those features into one cohesive package.
Transitioning to Jest has been smooth. The part that took me a while to figure out was how to properly mock methods, and that's what I'm going to detail here.
Mocking direct methods
Imagine a dummy component with two methods, `foo` and `bar`, with the following implementation:
```js
const component = {
  foo(input) {
    return {
      id: input,
      name: component.bar()
    }
  },
  bar() {
    const alpha = 'bar';
    const beta = 'baz';
    return `${alpha}-${beta}`;
  }
}
export default component;
```
Calling `foo(42)` will return `{ id: 42, name: 'bar-baz' }`.
The `bar` method is straightforward to test, as it does not have any dependencies: all we have to test is that the output is the one we expect. The code could be simplified, but I'm making it overly complicated here on purpose.
The point is that when we test `foo`, we don't want to deal with the internals of `bar`. We should be able to change the internals of `bar` and still have our `foo` tests work. Actually, we could even completely change the return value of `bar` and still have our `foo` tests pass.
To achieve that decoupling, the trick is to mock the `bar` method so we control its behavior during our test.
```js
import component from './component.js';

describe('component', () => {
  afterEach(() => {
    jest.restoreAllMocks();
  });
  it('should have the name set to the value of bar()', () => {
    // Given
    const input = 42;
    const expected = {
      id: input,
      name: 'my-mock-name'
    };

    // When
    jest
      .spyOn(component, 'bar')
      .mockReturnValue('my-mock-name');
    const actual = component.foo(input);

    // Then
    expect(actual).toEqual(expected);
  });
});
```
The first step is to call `jest.spyOn(component, 'bar')`. This replaces the `component.bar` method with a mock version. By default the mock version behaves like the original method. Spying on a method has other benefits (like being able to assert how many times a method is called), but those won't be discussed in this article.
Once we've replaced the original method with our spy, we can call `.mockReturnValue('my-mock-name')` on it, which changes the method so it now always returns `my-mock-name` when called.
The last step is to call `jest.restoreAllMocks()` in the `afterEach` hook. `afterEach` runs after each test, and `restoreAllMocks` restores all our spies to their original methods. If we didn't do that, all our `component.bar` calls in all our tests would keep returning `my-mock-name`.
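If it helps to demystify the machinery, the spy-and-restore pattern itself is small enough to hand-roll. The sketch below is purely illustrative (it is not Jest's actual implementation, which does far more), but it shows what the `spyOn` + `mockReturnValue` + restore cycle boils down to:

```javascript
// Minimal hand-rolled spy (illustrative only, not Jest's implementation)
function spyOn(obj, methodName) {
  const original = obj[methodName];
  const spy = {
    mockReturnValue(value) {
      obj[methodName] = () => value; // swap the method for a stub
      return spy;
    },
    restore() {
      obj[methodName] = original; // put the real method back
    }
  };
  return spy;
}

// Usage
const target = { bar: () => 'real-value' };
const handle = spyOn(target, 'bar').mockReturnValue('my-mock-name');
console.log(target.bar()); // 'my-mock-name'
handle.restore();
console.log(target.bar()); // 'real-value'
```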
Mocking dependency methods
This first part was about mocking the methods of the module under test itself. But how do you mock the methods of its dependencies? Let's update our component so it now uses one of its own dependencies:
```js
import dependency from 'dependency';

const component = {
  foo(input) {
    return {
      id: input,
      name: dependency.bar()
    }
  }
}
export default component;
```
There is no `component.bar` method anymore; `component` directly calls its `dependency.bar` method. To mock the dependency, we need a bit more plumbing.
```js
import component from './component.js';

jest.mock('dependency'); // <-- Here

describe('component', () => {
  afterEach(() => {
    jest.restoreAllMocks();
  });
  it('should have the name set to the value of bar()', () => {
    // Given
    const input = 42;
    const expected = {
      id: 42,
      name: 'my-mock-name'
    };

    // When
    const dependency = require('dependency'); // <-- Here
    jest
      .spyOn(dependency, 'bar') // <-- Here
      .mockReturnValue('my-mock-name');
    const actual = component.foo(input);

    // Then
    expect(actual).toEqual(expected);
  });
});
```
We've added `jest.mock('dependency')` to our test file. It tells Jest to replace every `require` and `import` of `dependency` with a mock object. This means that whenever we import `dependency` (either in `component.js` or in our tests), it will be replaced with a mock version.
As in the previous example, we hardcode the return value of the `bar` method in our test. This time, we first need to import the dependency (`const dependency = require('dependency')`) so we can spy on it and mock its return value.
Conclusion
Hope that overview can help. It took me some time to understand how mocking works in Jest, and I hope this will help others figure out all the pieces.
Tested with Jest v21.1.0