Writing faster: just speak

As a web and software developer, I often see coding as a creative way to solve problems. After years in the field, I take pride in writing clean, understandable, and maintainable code. The quality of my work gives me the kind of satisfaction an artisan gets from their craft. Recently, I have been diving into automation tools that help streamline repetitive tasks, enhancing my productivity.

My journey with automation tools started with IFTTT, which I found quite useful but limited in functionality. Zapier was a step up, but it’s pricey for what it offers. Then I stumbled upon Make. This platform provides many features at a lower cost, allowing me to work faster. Even with some quirks, the speed and ease of development I gain from Make give me a "wow effect" that would have taken years to achieve otherwise.

Integrating tools like ChatGPT and Airtable has changed how I manage information. ChatGPT, as an API, allows me to manipulate data easily. Airtable acts as my source of truth, offering a user-friendly graphic interface and flexible views for managing data. Notion complements this by providing a smooth note-taking experience, especially with its new API. Working with these tools is not just efficient; it feels empowering.

I also find myself wanting to share what I learn. As a public-speaking trainer, I hear many interesting topics and wish to help others communicate their ideas effectively. I often think about the time I spend solving problems; sometimes I learn concepts that, if I could explain simply, would benefit others. However, creating an article or preparing a talk takes time I often don’t have.

To bridge this gap, I've embarked on a project to automate how I turn raw ideas into polished content. Through voice notes, I can quickly express my thoughts, and I want to transform these unstructured ideas into useful formats like articles or conference talks. The thought is to create a system that organizes and refines my notes, making it easier to share knowledge.

This process of transforming spoken ideas into written content involves leveraging AI tools that can help refine and structure my raw notes. I imagine a comprehensive database where I store these ideas, making the content creation process easier and faster. The audio notes I take can serve as the seed from which various media, such as blog posts or presentations, can grow.

In fact, this blog post is an example of what I’m trying to achieve. It all began with a simple audio note I recorded on my phone. By analyzing and refining that five-minute monologue, I've created something that I hope is engaging and useful to you. This journey of integrating automation and AI into my workflow has only begun, and I look forward to sharing more about this exploration as it evolves.

Disclaimer: This blog post hasn't been written by me. I recorded an audio note of what I wanted to say, then built an automation process (using the tools I mention) to generate the final blog post. The ideas are mine; the way it's written is not. It's not good enough (I feel), but it's a good start.

Using multiple variable modifiers in zsh

zsh has a lot of variable modifiers, and sometimes trying to put more than one on the same variable either throws an error or does nothing.

To fix that, I've been assigning and re-assigning to the same variable, each time with a new modifier added.

Today, I found a syntax that allows chaining/embedding of modifiers.

local myArray=(../../up-the-tree ./here ../in-the-parent)

# To display this array from the closest to the furthest, here is what I used to do
myArray=(${(O)myArray})   # To sort it in reverse order
echo ${(F)myArray}        # To display it line by line

The following syntax also works:

echo ${(F)${(O)myArray}}
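
The nesting isn't limited to two levels; each expansion can carry its own modifier. A small sketch on the same myArray, assuming (U) to upper-case the entries and (j:, :) to join them with a comma:

echo ${(j:, :)${(O)${(U)myArray}}}
# e.g. ./HERE, ../IN-THE-PARENT, ../../UP-THE-TREE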

Padding a string with zsh

To pad a string to a fixed width with spaces in zsh, you can use ${(r(15)( ))variableName}.

  • r(15) pads on the right (use l for left padding)
  • The 15 defines the width to pad the string to (longer strings are truncated to that width)
  • ( ) defines the character to use for padding (here, a space)
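
For example, a quick sketch that uses it to align a (made-up) file name in a fixed-width column:

local name="notes.txt"
echo "${(r(15)( ))name}| 12kB"
# notes.txt      | 12kB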

Proxy command completion in zsh

Commands like sudo take other commands as input and run them in a specific context. I also have a similar command, called lazyloadNvm, that runs nvm use when needed.

I wanted zsh to suggest completion for the passed commands instead of completion for the initial command (sudo or lazyloadNvm).

The solution was to create a _nvm-lazyload compdef file like this:

#compdef lazyloadNvm

function _nvm-lazyload() {
  # Remove the first word (lazyloadNvm)
  shift words
  # Update which word is currently being focused for tab completion
  (( CURRENT-- ))
  # Re-run completion with the new input
  _normal
}
  • shift words removes the first element of the words array (sudo git status becomes git status)
  • (( CURRENT-- )) decreases the index of the word focused by tab completion. Updating words doesn't automatically update CURRENT
  • _normal re-runs the completion functions with the current words and CURRENT context.
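
For compinit to pick this file up, it has to live in a directory that is in $fpath before compinit runs; alternatively, the function can be bound by hand. A sketch (the ~/.zsh/completions directory is an assumption, adapt it to your setup):

# Option 1: let compinit autoload the file from $fpath
fpath=(~/.zsh/completions $fpath)
autoload -Uz compinit && compinit

# Option 2: register the function explicitly, after compinit has run
compdef _nvm-lazyload lazyloadNvm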

Debugging performance issues in zsh

If I add too much code to my .zshrc, zsh takes longer to load. As I use a lot of different terminal windows (splitting one main tmux window), I want those splits to happen quickly. This is why I'm trying very hard to keep the loading time of zsh under 150ms.

Hyperfine

One way to evaluate the current loading speed and track potential regressions and improvements is to use hyperfine:

hyperfine --warmup 3 "zsh -i -c exit"

This will benchmark how long it takes, on average, to load zsh. The -i makes it load in interactive mode (so, sourcing ~/.zshrc), and -c exit makes it execute the exit command.
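
hyperfine can also compare several commands in one run, which is handy to see how much your config adds on top of a bare shell. A sketch, assuming zsh -f to skip the startup files:

hyperfine --warmup 3 "zsh -i -c exit" "zsh -f -i -c exit"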

Note that if you have any commands running asynchronously in the background (like prompt optimization), they will, by design, not be included in the measured time.

zprof

The other debugging tool is zprof, the zsh profiler.

Include zmodload zsh/zprof at the top of your ~/.zshrc file, and zprof at the bottom.
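
Here is a minimal sketch of the placement (the sourced files in between are placeholders for whatever your own config loads):

zmodload zsh/zprof           # Start profiling as early as possible

source ~/.zsh/aliases.zsh    # Function calls made from here on are profiled
source ~/.zsh/prompt.zsh

zprof                        # Print the profiling table

Next time you open zsh, you'll see a table like this: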

num  calls                time                       self            name
-----------------------------------------------------------------------------------
 1)    1         240,22   240,22   65,76%    240,22   240,22   65,76%  oroshi_tools_pyenv
 2)    1          72,98    72,98   19,98%     72,98    72,98   19,98%  oroshi_theming_index
 3)    1          14,77    14,77    4,04%     14,77    14,77    4,04%  compinit
 4)    1          14,12    14,12    3,86%     14,12    14,12    3,86%  oroshi-completion-styling
 5)    1           5,85     5,85    1,60%      5,85     5,85    1,60%  oroshi_tools_fzf
 6)    1           4,95     4,95    1,35%      4,95     4,95    1,35%  _zsh_highlight_bind_widgets
 7)    1           4,56     4,56    1,25%      3,01     3,01    0,82%  oroshi_tools_z
 8)    1           2,27     2,27    0,62%      2,27     2,27    0,62%  _zsh_highlight_load_highlighters
 9)    2           2,17     1,09    0,60%      2,17     1,09    0,60%  promptinit
10)    1           2,10     2,10    0,58%      2,10     2,10    0,58%  oroshi_tools_ls
11)    7           1,51     0,22    0,41%      1,51     0,22    0,41%  add-zsh-hook
12)    7           0,64     0,09    0,17%      0,64     0,09    0,17%  compdef
13)    2           0,63     0,31    0,17%      0,63     0,31    0,17%  is-at-least
14)    1          14,85    14,85    4,07%      0,09     0,09    0,02%  oroshi_completion_compinit

This shows where most of the time is spent, the number of times a specific function is called, and if you scroll down, you'll see details of the stack trace of each sub-command.

In this example, it is clear that my oroshi_tools_pyenv function is too slow, and I need to optimize it.

Note that this only tracks the time spent running functions, not the time spent sourcing files, so if you need to benchmark a specific sourced file, wrap it in a function that you invoke immediately.
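
For example, a sketch of that trick with a hypothetical pyenv config file (the path is an assumption):

# Instead of: source ~/.zsh/tools/pyenv.zsh
# wrap the source in a named function so zprof can attribute its time
function oroshi_tools_pyenv() {
  source ~/.zsh/tools/pyenv.zsh
}
oroshi_tools_pyenv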