Bunster: Compile bash scripts to self contained executables

https://github.com/yassinebenaid/bunster

By thunderbong

shrx | 1 comment | 2 weeks ago
It should be possible to run bash scripts on any system supported by jart's cosmopolitan library [1], which provides a platform-agnostic bash executable [2].

[1] https://justine.lol/cosmo3/

[2] https://cosmo.zip/pub/cosmos/bin/

shakna | 2 comments | 2 weeks ago
Yes and no.

It does work - but there is some OS-specific stuff that can still pop up and explode on you. There are different guarantees around open/write on Windows and *nix, and cosmopolitan doesn't 100% paper over those gaping differences. It doesn't change the underlying file-locking behaviour of the file system, for example. You can run into thread-time guarantees and other streaming problems when piping from one thing to another.

jart | 1 comment | 2 weeks ago
What are these gaping differences? File locks are a horror show. No doubt. But I can't think of any truly terrible differences with the other things you mentioned.
shakna | 1 comment | 2 weeks ago
File locks are the thing that has been biting me the most, to be honest. The horror show of NTFS, and Windows Defender, where a 'close' can become a costly operation.
jart | 1 comment | 7 days ago
We could potentially create a background worker thread for closing handles.
shakna | 0 comments | 7 days ago
From memory, rustup ended up creating a worker for closing handles, to avoid dealing with Windows Defender. So it can be a valid approach.

Though, cosmo might then run into Windows not being happy with two things attempting to open the same file, if the programmer then attempts to reopen the file. You could do some logic that cancels closing in the thread, and re-hands the handle to the user if the handle still exists, but that might be a little fragile.

To be honest... There may not actually be a good answer that works 100% of the time. Working around the platform might introduce race conditions you absolutely don't want.

shrx | 1 comment | 2 weeks ago
Interesting, thanks for pointing this out. Can I read more about potential pitfalls somewhere, or is it mostly trial and error?
shakna | 0 comments | 2 weeks ago
Justine's pretty easy to talk to and fairly available. I tend to try something, and then hit them up if things start falling apart.
Phlogistique | 4 comments | 2 weeks ago
The README fails to address the elephant in the room, which is that shell scripts usually mainly call external commands; as far as I can tell there is no documentation of which built-ins are supported?

That said, in a similar vein, you could probably create a bundler that takes a shell script and bundles it with busybox to create a static program.

mkesper | 1 comment | 2 weeks ago
Busybox commands often don't support all the features you're used to, and can differ substantially if you depend on GNU additions. https://www.busybox.net/about.html
1vuio0pswjnm7 | 0 comments | 2 weeks ago
I have been keeping a running list of busybox/toybox deficiencies and differences. There are more than I would have expected.

An alternative is to use crunchgen from NetBSD (also included with some of the later BSDs), which crunches full-featured, source-tree versions of multiple utilities into a single, static binary. What busybox refers to as a "multi-call" binary.

It will be larger than busybox, of course. I get everything I need in a binary less than 5M.

nodesocket | 1 comment | 2 weeks ago
I wondered this as well. How is something like "cat file.json | jq '.filename' | grep out.txt" implemented in Go?
beepbooptheory | 2 comments | 2 weeks ago
I haven't looked at the code, but I assume this is just taking care of things like pipes, loops, variables, conditionals, etc., and leaving the actual binaries like jq as stubs assumed to be there. It's abstracting the shell, not the programs you run in the shell.
mananaysiempre | 2 comments | 2 weeks ago
Sure, but why is that an interesting goal? Historically, bash has had very good backwards compatibility, and it’s unlikely that you need new features anyway.
hnlmorg | 0 comments | 2 weeks ago
I have authored a shell in Go and while it doesn’t aim to replace coreutils, it does have a decent number of builtins as part of its application.

So in theory I could build a feature that allows you to ship a self contained executable like you’ve described.

If this is something you’re genuinely interested in and my shell has the right kind of ergonomics for you, then feel free to leave a feature request:

https://github.com/lmorg/murex

notnmeyer | 1 comment | 2 weeks ago
you can write bash but run the scripts on systems that may not have bash, is my first thought. packaging “shell” scripts into a scratch container or similar sounds pretty nice for certain use cases.
adamc | 0 comments | 2 weeks ago
If that's all you want, is compiling it all into go really better than just having a portable bash?
adamc | 0 comments | 2 weeks ago
Right, but wouldn't an app built around creating a container with all the dependencies make more sense?
zamalek | 1 comment | 2 weeks ago
I assume this is what they are talking about here:

> Standard library: we aim to add first-class support for a variety of frequently used/needed commands as builtins. you no longer need external programs to use them.

That's not going to be an easy task, and would basically entail porting those commands to Go.

hezag | 0 comments | 2 weeks ago
Disclaimer: the elephant in the room has nothing to do with ElePHPant, the PHP mascot.
mixedmath | 6 comments | 2 weeks ago
I'm confronted with a similar problem frequently. I have a bash script that's slowly growing in complexity. Once bash scripts become sufficiently long, I find editing them later to be very annoying.

So instead, at some point I change the language entirely and write a utility in python/lua/c/whatever other language I want.

As time goes on, my limit for "sufficient complexity" to justify leaving bash and using something like python has dropped radically. Now I follow the rule that as soon as I do something "nontrivial", it should be in a scripting language.

As a side-effect, my bash scripting skills are worse than they once were. And now the scope of what I consider "trivial" is shrinking!

ComputerGuru | 3 comments | 2 weeks ago
My problem with python is startup time and packaging complexity (either dependency hell or a full-blown venv with pipx/uv). I’ve been rewriting shell scripts as either Makefiles (crazy, but it works, it's rigorous, and you get free parallelism) or rust “scripts” [0], depending on their nature (number of outputs, number of command executions, etc.)

Also, using a better shell language can be a huge productivity (and maintenance and sanity) boon, making it much less “write once, read never”. Here’s a repo where I have a mix of fish-shell scripts with some converted to rust scripts [1].

[0]: https://neosmart.net/blog/self-compiling-rust-code/

[1]: https://github.com/mqudsi/ffutils

roelschroeven | 2 comments | 2 weeks ago
I've often read that people have a problem with Python's startup time, but that's not at all my experience.

Yes, if you're going to import numpy or pandas or other heavy packages, that can be annoyingly slow.

But we're talking using Python as a bash script alternative here. That means (at least to me) importing things like subprocess, pathlib. In my experience, that doesn't take long to start.

    $ cat helloworld.py
    #!/usr/bin/env python3
    import subprocess
    from pathlib import Path
    print("Hello, world!\n")

    $ time ./helloworld.py
    Hello, world!

    real    0m0.034s
    user    0m0.016s
    sys     0m0.016s

34 milliseconds doesn't seem a lot of time to me. If you're going to run it in a tight loop then yes, that's going to be annoying, but in interactive use I don't even notice delays as small as that.

As for packaging complexity: when using Python as a bash script alternative, I mostly can easily get by with using only stuff from the standard library. In that case, packaging is trivial. If I do need other packages then yes, that can be major nuisance.

BobbyTables2 | 0 comments | 2 weeks ago
Python startup time gets much worse on low-powered ARM systems executing from an SD card — before the first import even occurs!

It certainly takes more effort for this to be a problem on modern x86 systems.

drdrey | 1 comment | 2 weeks ago
once you start importing more packages, you easily end up with 100+ ms startup time
robertlagrant | 1 comment | 2 weeks ago
At that point you're far beyond a bash script alternative though, aren't you?
ComputerGuru | 1 comment | 2 weeks ago
Not necessarily. Shell scripts often embody the unix “do one thing and do it right” principle. To download a file in a bash script you wouldn’t (sanely) source a bash script that implements an http client; you would just shell out to curl or wget. Same for parsing a json file/response: you would just depend on and defer to jq. Whereas in python you could do the same, but most likely/idiomatically would pull in the imports to do that in python.

It’s what makes shell scripts so fast and easy for a lot of tasks.

robertlagrant | 0 comments | 2 weeks ago
Fair enough, although in your examples Python comes with JSON support, but not a (very usable) HTTP client, and Bash has the reverse problem.

I would like it if Python just had a sane nice HTTP client built in, but it can also just shell out to curl.

jasfi | 0 comments | 2 weeks ago
Take a look at Nim, it solves those problems and integrates well with existing Python code.
fieu | 1 comment | 2 weeks ago
I have exactly the same issue. I maintain a project called discord.sh which sends Discord webhooks via pure Bash (and a little bit of jq and curl). At some point I might switch over to Go or C.

https://github.com/fieu/discord.sh

wiether | 4 comments | 2 weeks ago
First of all, thank you for your work!

I've been using it daily for many years now, and it does exactly what I expect it to do.

Now I'm a little concerned by the end of your message because it could make its usage a bit trickier...

My main usecase is to curl the raw discord.sh file from GitHub in a Dockerfile and put it in /usr/local/bin, so then I can _discord.sh_ anytime I need it. Mostly used for CI images.

The only constraint is to install jq if it's not already installed on the base image.

Switching to Go or C would make the setup much harder I'm afraid

fieu | 1 comment | 2 weeks ago
Thank you for using the project!

On the concern that it would be harder to set up: I think it would in fact be easier. You would simply curl the statically generated Go or C binary to your path, which would alleviate the need for jq or curl to be installed alongside.

I think the reason I haven’t made the switch yet is I like Bash (even though my script is getting pretty big), and in a way it’s a testament to what’s possible in the language. Projects like https://github.com/acmesh-official/acme.sh really show the power of Bash.

That and I think the project would need a name change, and discord.sh as a name gets the point across better than anything I can think of.

wiether | 0 comments | 2 weeks ago
Sorry I misunderstood your message!

In that case yes, if it allows you to keep the project going, that's great!

Imustaskforhelp | 0 comments | 2 weeks ago
From what it seems, it's possible to run this thing without installing Go, Rust, or C itself

to quote from the page

With scriptisto you can build your binary in an automatically managed Docker container, without having compilers installed on host. If you build your binary statically, you will be able to run it on host. There are a lot of images that help you build static binaries, starting from alpine offering a MUSL toolchain, to more specialized images.

Find some docker-* templates via scriptisto new command.

Examples: C, Rust. No need to have anything but Docker installed!

Builds in Docker enabled by populating the docker_build config entry, defined as such:

Also I am watching the video again because I had viewed it a looong time ago!

Imustaskforhelp | 0 comments | 2 weeks ago
I suppose https://www.youtube.com/watch?v=eRHlFkomZJg

If you don't want to watch the video, then I can link the tool it uses https://github.com/igor-petruk/scriptisto/wiki/Writing-scrip...

benediktwerner | 1 comment | 2 weeks ago
Why would that make the setup harder? If they provide a statically-linked executable, you can just download and run it, without even the need to install jq or anything else. It's not like they'd provide Go code and ask you to compile it yourself. Go isn't Python.
xenophonf | 0 comments | 2 weeks ago
Even Python isn't Python.

https://pyinstaller.org/

Works great. I use this instead of pipx.

NoMoreNicksLeft | 0 comments | 2 weeks ago
Yesterday, I had a problem where wget alone could do 98% of what I wanted. I could restrict which links it followed, but the files I needed to retrieve were a url parameter passed in with a header redirect at the end. I spent an hour relearning all the obscure stuff in wget to get that far. The python script is 29 lines, and it turns out I can just target a url that responds with json and dig the final links out of that. Usually though, yeh, everything starts as a bash script.
maccard | 0 comments | 2 weeks ago
I agree. My limit is pretty much once you start branching or looping, it should be in another tool. If that seems low to you, that’s the point
bigstrat2003 | 0 comments | 2 weeks ago
I definitely agree. Bash is such an unpleasant language to work with, with so many footguns, that I reach for a language like Python as soon as I'm beyond 10 lines or so.
AtlasBarfed | 0 comments | 2 weeks ago
Isn't this perfect for LLM?

You know, assuming they transpile well, I haven't tried a solid one yet.

I wonder if kernel code rewrites in Rust with Llama (obviously reviewed) are up to snuff.

skulk | 8 comments | 2 weeks ago
If you want portable shell-scripts that come with their dependencies bundled, Nix also has a solution: writeShellApplication[0] (and simpler ones like writeShellScript).

    writeShellApplication {
      name = "show-nixos-org";

      runtimeInputs = [ curl w3m ];

      text = ''
        curl -s 'https://nixos.org' | w3m -dump -T text/html
      '';
    }
writeShellApplication will call shellcheck[1] on your script and fail to build if there are any issues reported, which I think is the only sane default.

[0]: https://nixos.org/manual/nixpkgs/stable/#trivial-builder-wri...

[1]: https://www.shellcheck.net/

samtheprogram | 1 comment | 2 weeks ago
So it compiles to a single executable that I can send to someone who isn’t on Nix?

Because if I wanted a portable shell script, I’d just write shell and check if something is executable in my path.

This just looks like Nix-only stuff that exists in an effort to be ultra declarative, and in order to use it you’d need to be on Nix.

skulk | 0 comments | 2 weeks ago
There is nix-bundle (which I admittedly have never had a reason to use)

https://github.com/nix-community/nix-bundle

azeirah | 0 comments | 2 weeks ago
Nix is the best.

If you're reading this and wondering how you can use this for yourself?

You don't need nixos at all. You can install nix on any linux-like system, including on MacOS

johnvaluk | 1 comment | 2 weeks ago
Is it possible to override shellcheck? It's a valuable tool that I use all the time, but it reports many false positives. It's not unusual for junior developers to introduce bugs in scripts because they blindly follow the output of shellcheck.
nerflad | 0 comments | 2 weeks ago
A comment before the problematic line can specify options to shellcheck, e.g.

# shellcheck disable=SC2086

which remain valid within that block.

Of course, disabling the linter should be done with deliberation...

gchamonlive | 2 comments | 2 weeks ago
I still haven't come around to using nix in my daily workflow. My concerns are the high entry bar, obscure errors, and breaking changes, but also excessive use of storage, either because that's how it works or because I won't know how to manage the store well.

How's nix these days? How long would you expect someone with years of Linux management experience (bash, ansible, terraform, you name it, either onprem or on cloud) to take to get comfortable with nix? And what would be the best roadmap to start introducing nix slowly into my workflow?

epic9x | 1 comment | 2 weeks ago
Start by using home-manager in your current environment. Once you can modularize your own config, start building other systems with it. It's a very deep rabbit hole, and starting off as a replacement for managing your own dotfile scripts and the like is a great way to try it out without having to replace whole systems.
rounce | 0 comments | 2 weeks ago
I'd say start even smaller by making a simple `flake.nix` with a devShell output within a project and using it to manage the project's dependencies; that way you're experiencing it within a fairly constrained, opt-in environment. Nix is simple when you 'get it', but it can be quite overwhelming for someone new to it. Home-Manager is pretty big and has regions of complexity, and while it might be a good candidate for daily-driving Nix without running NixOS, IMO it's best to start really small.
burgerrito | 0 comments | 2 weeks ago
I like Nix/NixOS, but the sheer lack of documentation makes me really angry sometimes, I'll admit
rounce | 1 comment | 2 weeks ago
Well you're still leaning on Nix to provide the dependencies. All `writeShellApplication` will do is prepend the `PATH` variable with the `bin` directories of the provided `runtimeInputs`, it still just spits out a bash script, not a binary that includes bash, the script, and the other dependencies. I reckon it's quite possible for someone to lean on Nix to implement producing an all-in-one binary though.
skulk | 1 comment | 2 weeks ago
mentioned in another comment, there are ways to bundle nix derivations into standalone run-on-any-linux binaries: https://github.com/nix-community/nix-bundle
rounce | 0 comments | 2 weeks ago
Thanks! I suspected something already existed like this but I didn't find anything from 30s of web search.
abathur | 0 comments | 2 weeks ago
If your shell scripts/libraries are a little more complex, resholve can also help package them a little more reliably.

(I'd say it's overkill for your example here, but it blocks on missing dependencies and can support tricky cases such as modular shell libraries that expect different implementations of the same command.)

randall | 0 comments | 2 weeks ago
omg i love nix so much.
sammnaser | 0 comments | 2 weeks ago
I don't see what problem this solves, especially in its current form only supporting Unix. Bash scripts are already portable enough across Unix environments; the headaches come from dependency versioning (e.g. Mac ships non-GNU awk, etc). Except with this, when something breaks, I don't even get to debug bash (which is bad enough), but rather a binary compiled from Go transpiled from bash.
nightowl_games | 1 comment | 2 weeks ago
One of the most critical elements of a shell script is that the source can be easily examined.

Bringing this into your system seems like a huge liability.

The syntax of shell scripts is terrible, but we write it to do simple things easily without needing more external tools.

git-bash on windows is generally good enough to do the kind of things most shell scripts do.

This tool feels like the worst of both worlds: bash syntax + external dependency.

BeetleB | 0 comments | 2 weeks ago
Oh. I was just about to comment that it may be easier to understand what it does by decompiling the binary than by looking at the actual unreadable Bash language ;-)
koolba | 1 comment | 2 weeks ago
Does it support eval?

Because then you could compile something like

    #!/usr/bin/env bash
    eval "$@"
And get a statically compiled bash!
Imustaskforhelp | 2 comments | 2 weeks ago
What does this do mate? (I tried to run it and it failed)
koolba | 0 comments | 2 weeks ago
It evaluates the arguments to the command as bash commands.

So if you save the file as foo.sh and add it to your PATH. You could run:

    $ foo.sh 'date ; ls ; foo=bar ; echo "Hello $foo"'
Or really anything you'd like as the argument is treated as a bash script.

NOTE: The original comment had the #! of the shebang backwards (as !#) due to a typo.

mbreese | 0 comments | 2 weeks ago
You’d need to pass in arguments…

All it does is evaluate the expression you pass in as arguments.

    ./evalme.sh echo hello world
The joke being that if you could transpile this evalme.sh script to a static binary, you’d effectively have a static version of bash itself (transpiled to Go).
epic9x | 0 comments | 2 weeks ago
Portability and other constraints I've discovered with the shell have always been a sign I need to reach for a different tool. Bash is so often a "glue" language where accessibility and readability are its primary features, right after the immediate utility of whatever it's automating. Writing POSIX-compatible scripts is probably safer and can be validated with projects like shellcheck.

That said - this is a neat project and I've seen plenty of "enterprise" use-cases where this kind of thing could be useful.

jonathaneunice | 0 comments | 2 weeks ago
Ambitious.

Given the great diversity of shell scripting needed (even if just bash) across different variants of Linux and Unix and different platform versions, debugging the resulting transpiled executables is not something I'd be keen to take on. You'd want to be an expert in the Go ecosystem at minimum, and probably already committed to moving your utility programming into Go.

gtsop | 0 comments | 2 weeks ago
It is a very interesting technical feat to be able to do that... but should you do it?

My gut feeling says no. Unless I am missing something.

berbec | 1 comment | 2 weeks ago
Seeing as how they just implemented the IF statement[0] two weeks ago, I'm going to hold off for a few more releases before testing.

[0]: https://github.com/yassinebenaid/bunster/pull/88

withinboredom | 0 comments | 2 weeks ago
I think you’d have to say more. It looks quite sane to me.
josephcsible | 1 comment | 2 weeks ago
> Password and Expiration Lock: Surprisingly, some people have asked for this feature. Basically, It allows you to choose an expiry date at build time. the generated program will not work after that date. Also you can choose to lock the script using a password. whenever you try to run it, it prompts for the password.

Support for that makes me sad. It's antithetical to everything FOSS is.

xyzzy_plugh | 1 comment | 2 weeks ago
This has nothing to do with FOSS. Self-detonating code is a great idea, something my peers and I often joke about but rarely actually implement (though I have done deprecations that are similar).

Here's some FOSS just for you:

   /* Copyright (c) 2025 xyzzy_plugh all rights reserved.
   
   Usage of the works is permitted provided that this instrument is retained with the works, so that any entity that uses the works is notified of this instrument.
   
   DISCLAIMER: THE WORKS ARE WITHOUT WARRANTY.
   */
   if(time(NULL) > 1767225600) exit(1);
josephcsible | 0 comments | 2 weeks ago
Yes, it's possible for FOSS programs to do those things. But the point I'm making is that since they're trivially removable from FOSS programs, I expect people who use those features for real to distribute closed-source binary-only programs.
rednafi | 1 comment | 2 weeks ago
Neat project. Can’t say I’ve ever been in a situation where I thought, “If only this shell script were a standalone binary.” By the time I get to that point, I’ve usually outgrown shell syntax and just jump straight to Go.

Still, I can see this being really handy for people who don’t speak Go or Rust but want to throw together a quick-and-dirty shell script and still need a standalone binary.

extraduder_ire | 0 comments | 2 weeks ago
I have. At one point I wanted to set a bash script to setuid/setgid.

By the time I read up on why that didn't work and how to "fix" it, I decided it was a bad idea and tried something else.

stabbles | 0 comments | 2 weeks ago
A big advantage of shell scripts is that they're scripts and you can peek in the sources or run with `-x` to see what it does.
ur-whale | 0 comments | 2 weeks ago
I'm not able to fathom the security implications of this but my gut tells me ... ugh.
vander_elst | 1 comment | 2 weeks ago
Are there performance drawbacks in particular with long pipelines (e.g. something like `cat | grep | sed | bc | paste | ...`)?
ComputerGuru | 0 comments | 2 weeks ago
To the contrary. They’re all run in parallel, and the (standard) output goes directly from one to the next without being buffered by the shell. Unix overhead for process creation is very low compared to other platforms; doing the same under Windows, for example, would be more expensive.

But if you have to run n processes, much better to run them in a single pipeline like that.

(Source: I’m a shell developer. Fish-shell ftw!)

IshKebab | 0 comments | 2 weeks ago
This is fucking dumb. Sorry but this is just a paragon of everything wrong with Unix.

The only reason to use shell in the first place is because I can't use a binary compiled from a sane language.

This... Wow. This is like not having your cake and not eating it.

The shitness of Bash combined with the non-portability of binaries! Sign me up!

It's the opposite of https://amber-lang.com/ which tries to (not sure it succeeds) provide a sane language with the portability of shell (ignoring Windows).

That's a sensible project. This is just... Why does this exist?

Alifatisk | 1 comment | 2 weeks ago
Very cool, but since this transpiles Shell to Go, what makes this difficult to port to Windows?
forgotpwd16 | 1 comment | 2 weeks ago
Seems one of the project's goals is to convert frequently used commands into builtins. So maybe it's because currently converted scripts still use external programs that are usually only available on Unix.
stackskipton | 0 comments | 2 weeks ago
Also, Windows is not as fragmented, so you don't tend to have different runtime environments where Ubuntu might include Y utility but Rocky doesn't.

Powershell written for Windows 2016 is likely to work fine on 2019/2022/2025.

kmclean | 0 comments | 2 weeks ago
Science has gone too far.