In the days of yore, every interview I ran was a delightful circus of chaos: concurrency, Linux kernel deep dives, gdb/strace (and later eBPF and flamegraphs), and, of course, the classic “so, can you actually use the tools you listed?” quiz, which often included Bash-related questions.
Ah, simpler times. Anyway, I asked those questions for a reason:
- It never failed to irritate so-called “senior YAML architects.”
- It separated the groovy cats from those who just memorized buzzwords.
- Admittedly, I was a far more insufferable young man than the refined gentleman I am today.
Also, deep down, I think I was trying to form a squad of superhero ninja turtles, only to watch them slog through soul-crushing configuration drudgery.
That experience truly helped me appreciate engineers who were just happy to configure Jenkins and install a Helm chart. Funny how life teaches humility!
I’ve learned my lesson and no longer ask those questions. However, I still get a strange satisfaction from a meticulously crafted script—especially since reproducing the same logic in another language would be much more challenging.
And that, dear reader, is precisely what I want to talk about.
The Bash
Look, I get it—really, I do. Bash is clunky, ancient even, designed when people were building pyramids (and some of its creators might already be buried in them). It’s not the sleekest language on the planet.
But here’s the thing: the people who created Bash were brilliant. They came up with clever and, more importantly, practical solutions.
And I genuinely believe that many modern engineers could benefit from learning Bash, at least enough to avoid reinventing the wheel every time they need something more advanced than ls | grep file.
The Task
```mermaid
---
config:
  theme: base
  look: classic
  layout: dagre
  themeVariables:
    lineColor: '#fff'
    borderRadius: 5
    primaryBorderColor: 'rgb(31, 31, 31)'
    fontFamily: 'JetBrains Mono'
---
flowchart LR
    list["List of Images"] --> build["docker build"]
    build --> worker1["push worker 1"] & worker2["push worker 2"]
    worker1 --> webhook["webhook"]
    worker2 --> webhook
    list@{ shape: procs }
    style list fill:#dd72cb
    style build fill:#5ec8df
    style worker1 fill:#e3e3f9
    style worker2 fill:#e3e3f9
    style webhook fill:#49a1f5
```
So it happened that I stepped into a pile of task (here’s the simplified version):
- Build a Docker image.
- Push that image to a few Docker registries.
- Call a webhook right after.
The naive solution would be:
```bash
#!/bin/bash -e
images=("image1" "image2" "image3")
for image in "${images[@]}"; do
  docker build -t "$image" .
  docker push "reg1.example/$image" && docker push "reg2.example/$image"  # placeholder registries
  curl -X POST "https://example.com/webhook?image=$image"                 # placeholder webhook
done
```
Simple. Elegant. Deadly. But not optimal:
- We can build image2 while pushing image1.
- We can push images in parallel.
Async ⊆ concurrency, and concurrency is almost always an optimization. And “premature optimization is the root of all evil” © Donald Duck Knuth.
Instead, naivety is often sufficient: the simpler the idea, the easier it is to read and maintain.
If the downsides of a naive approach aren’t critical, then always pick the naive path with minimal optimizations.
Remember: one day, your code might be read by a JavaScript developer… and they might not survive the experience.
Concurrency
I’ve met a lot of engineers who praise Go for its goroutines and channels, yet were unaware that you can run any command in the background using the & symbol:
```bash
# This task runs in the background
sleep 10 &

# So does this one
(
  sleep 10
  echo "subshell finished"
) &

# You can wait for a specific task using its PID
last_task_pid=$!
wait "$last_task_pid"

# Or wait for all background tasks to finish
wait
```
The simplest concurrent version of our script would look like this:
```bash
for image in image1 image2 image3; do
  (docker build -t "$image" . &&
   docker push "reg1.example/$image" &&
   docker push "reg2.example/$image" &&
   curl -X POST "https://example.com/webhook?image=$image") &
done
wait  # wait for all three pipelines to finish
```
Pipes
Now that we know how to run concurrent processes, it’s time to make them talk to each other. Enter pipes.
You probably already know that ls | grep sends all output from ls straight to grep. But how does that work?
A typical Linux process has at least three file descriptors (fds) open (unless it explicitly closes them):
- stdin (fd 0)
- stdout (fd 1)
- stderr (fd 2)
Pipes simply forward data between these file descriptors. In many cases, they are far more useful than files, especially since a process can detect when a pipe closes and exit gracefully.
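You can watch that graceful exit happen yourself. In the one-liner below, yes would happily print forever, but once head takes its three lines and closes the pipe, yes receives SIGPIPE and stops:

```bash
# `yes` writes "y" forever; `head` exits after 3 lines and closes the pipe,
# which terminates `yes` instead of letting it spin until the heat death of the universe
yes | head -n 3
```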
For a deeper understanding, I recommend reading the kernel code for pipes. But if that sounds a bit too hardcore for now, I’ve got you covered with a simple C program to illustrate the concept:
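(A minimal sketch: pipe(2) plus fork(2), where the child writes and the parent reads until the write end closes and read() returns zero.)

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2]; /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }

    if (pid == 0) { /* child: the writer */
        close(fds[0]); /* close the unused read end */
        const char *msg = "hello from the child\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]); /* the reader sees EOF after this */
        return 0;
    }

    /* parent: the reader */
    close(fds[1]); /* close the unused write end */
    char buf[128];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    close(fds[0]);
    wait(NULL);
    return 0;
}
```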
Here are a few examples of how we can write to two places from a single command:
```bash
(echo "image1 is ready" > /tmp/out1; echo "image1 is ready" > /tmp/out2)
```
or with tee
```bash
echo "image1 is ready" | tee /tmp/out1 | cat > /tmp/out2
```
Yet, it looks messy. I bet there was something involving names... Yep, they named pipes!
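Here’s the basic choreography with mkfifo (the file name is just for illustration):

```bash
mkfifo /tmp/reg1.fifo                     # create the named pipe
cat /tmp/reg1.fifo &                      # reader: blocks until a writer opens the pipe
echo "image1 is ready" > /tmp/reg1.fifo   # writer: unblocks the reader
wait                                      # let the reader finish
rm /tmp/reg1.fifo                         # a named pipe is a file; clean it up
```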
Later, you might stumble upon something like exec 3>/tmp/reg1.fifo. This command connects file descriptor 3 to a named pipe.
I didn’t add this to confuse you—it’s practical. The idea is to open the pipe just once and keep it open until we explicitly close it later.
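In isolation, the pattern looks like this (again, the file name is just for illustration):

```bash
mkfifo /tmp/reg1.fifo
cat /tmp/reg1.fifo &      # a reader must exist, or the open below blocks
exec 3>/tmp/reg1.fifo     # open the pipe once, for writing, as fd 3
echo "image1" >&3         # write to it as many times as we like...
echo "image2" >&3
exec 3>&-                 # ...then close fd 3; the reader sees EOF
wait
rm /tmp/reg1.fifo
```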
```bash
#!/usr/bin/env bash

# Let’s create some named pipes for our pushers
mkfifo /tmp/reg1.fifo /tmp/reg2.fifo

# Start the worker that builds images
(
  # Redirect file descriptors to the pipes
  exec 3>/tmp/reg1.fifo 4>/tmp/reg2.fifo

  # Build images
  for image in image1 image2 image3; do
    docker build -t "$image" .
    # Notify the pipes about our triumph
    echo "$image" >&3
    echo "$image" >&4
  done

  # Once all images are built, close the pipes
  exec 3>&- 4>&-

  # And clean up the mess
  rm -f /tmp/reg1.fifo /tmp/reg2.fifo
) & # The `&` means we're running this in the background

# Use xargs to avoid blocking the docker_build worker.
# Xargs buffers all stdin: that is how we don't block docker_build.
xargs -I{} docker push "reg1.example/{}" < /tmp/reg1.fifo &  # placeholder registries
xargs -I{} docker push "reg2.example/{}" < /tmp/reg2.fifo &

# Wait for all processes to complete
wait
```
If you think that ↑ was overengineering—close your eyes and wait for SIGTERM.
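And speaking of SIGTERM: a cleanup trap is the usual answer. Here’s a sketch of what that might look like for our script:

```bash
cleanup() {
  trap - TERM                            # avoid re-entering the handler
  rm -f /tmp/reg1.fifo /tmp/reg2.fifo    # remove the named pipes
  kill 0                                 # take down every process in our group
}
trap cleanup TERM INT
```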
Communication with the webhook
Just like there’s more than one way to skin a cat, there’s more than one way to sync processes: signals + trap, lock files, the wait command, or pipes.
As for me, pipes are the easiest to implement and understand. But hey, if you’re curious about the other methods, drop a comment, and we’ll dive in!
So, let's go piping (pipening?)
We know that if someone tries to read from a pipe that no one writes to, it’ll just hang. And vice versa. It’s like trying to shake hands with someone who forgot to show up.
```bash
#!/usr/bin/env bash
mkfifo /tmp/pipe
(
  echo "I'm waiting for the pipe."
  cat < /tmp/pipe > /dev/null
  echo "I'm done waiting for the pipe."
) &
echo "I'm going to write to the pipe."
echo "ping" > /tmp/pipe
echo "I'm done writing to the pipe."
wait
rm /tmp/pipe

# Result:
# I'm waiting for the pipe.
# I'm going to write to the pipe.
# I'm done waiting for the pipe.
# I'm done writing to the pipe.
```
See? The reader waits until the writer shows up, then they both move on with their lives.
With two processes, things get a bit trickier:
```bash
#!/usr/bin/env bash
mkfifo /tmp/pipe1 /tmp/pipe2
(
  cat < /tmp/pipe1   # blocks: nobody is writing to pipe1 yet...
  cat < /tmp/pipe2
) &
echo "ping" > /tmp/pipe2   # ...while we block: nobody is reading pipe2
echo "pong" > /tmp/pipe1
```
Both processes just sit there, stubbornly waiting for the other to make a move. Classic deadlock. Reminds me of my previous marriage.
Solution? Run pipe-waiting processes in separate forks and wait for them.
Here's how:
```bash
(
  cat < /tmp/pipe1 &
  pid1=$!
  cat < /tmp/pipe2 &
  pid2=$!
  wait "$pid1" "$pid2"
) &
echo "ping" > /tmp/pipe2
echo "pong" > /tmp/pipe1
wait
```
The result
If we combine all of this knowledge into a script, we will get something like this:
```bash
#!/usr/bin/env bash
LIST="image1 image2 image3"
mkfifo /tmp/reg1.fifo /tmp/reg2.fifo
for image in $LIST; do mkfifo "/tmp/$image.reg1.done" "/tmp/$image.reg2.done"; done

# Builder: build each image, announce it to both pushers, then fork a waiter
# that calls the webhook once both pushes are confirmed
(
  exec 3>/tmp/reg1.fifo 4>/tmp/reg2.fifo
  for image in $LIST; do
    docker build -t "$image" .
    echo "$image" >&3; echo "$image" >&4
    (
      cat < "/tmp/$image.reg1.done" > /dev/null &
      wait_for_reg1=$!
      cat < "/tmp/$image.reg2.done" > /dev/null &
      wait_for_reg2=$!
      wait "$wait_for_reg1" "$wait_for_reg2"
      curl -X POST "https://example.com/webhook?image=$image"  # placeholder URL
    ) &
  done
  exec 3>&- 4>&-
  wait  # don't exit before the waiters have fired their webhooks
) &

# Pushers: read image names, push, then confirm via the per-image "done" pipe
while read -r image; do docker push "reg1.example/$image"; echo > "/tmp/$image.reg1.done"; done < /tmp/reg1.fifo &
while read -r image; do docker push "reg2.example/$image"; echo > "/tmp/$image.reg2.done"; done < /tmp/reg2.fifo &

wait
```
And as a bonus, here’s Go code that does the same:
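(A sketch rather than a verbatim translation: channels play the role of our named pipes, and the registry names and webhook URL are placeholders.)

```go
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

// run executes a command and panics on failure (simplified error handling).
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
}

func main() {
	images := []string{"image1", "image2", "image3"}
	registries := []string{"reg1.example", "reg2.example"} // placeholder registries

	var wg sync.WaitGroup
	built := make(chan string) // plays the role of our named pipes

	// Builder goroutine: builds images one by one, announces each on the channel
	go func() {
		defer close(built)
		for _, image := range images {
			run("docker", "build", "-t", image, ".")
			built <- image
		}
	}()

	// For every built image: push to all registries in parallel, then call the webhook
	for image := range built {
		wg.Add(1)
		go func(image string) {
			defer wg.Done()
			var pushes sync.WaitGroup
			for _, reg := range registries {
				pushes.Add(1)
				go func(reg string) {
					defer pushes.Done()
					run("docker", "push", reg+"/"+image)
				}(reg)
			}
			pushes.Wait()
			run("curl", "-X", "POST", "https://example.com/webhook?image="+image) // placeholder URL
		}(image)
	}
	wg.Wait()
}
```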
This is my first post, so please leave a comment. And thank you for reading!