It reminds me too much of programming in BASIC with line numbers. You end up squeezing steps into stages.
Lol. True. I'm looking at builds through fixed stages as simple as setup, configure, build, test, install and package, plus an isolated 'clean' step. Setup is where you define what your project needs; configure is called after you're done with the UI-based (console/X-windows) selection of said configuration, and the other steps follow from there. There are cases where a pre_X or post_X would become necessary, but you can easily wrap those in neat functions and call them inside one of these callbacks. The 'clean' step need not even be defined, since generated artefacts would be known at runtime anyway. Sequencing graphs as part of your build script would likely make things faster (but no proof yet).
Absolutely. In fact, it's a pain to embed it. Tried once before for a customer and came close to checking into a mental asylum. Embedding Lua or JS would be trivial compared to embedding Python. The way I see it, the build system should be a single binary no more than 5-10 MiB that could be copy-pasted into some directory. So far, I'm assuming dependencies like PCRE, libarchive (+ all their friends), OpenSSL/mbedTLS, libgit2 and, most importantly, a modified version of Lua, all statically linked into the binary. The last I checked, this came to less than 10 MiB. Python, on the other hand, is a massive 150+ MiB of bloat that requires very specific file paths on Windows/Linux to even exist, and a butt-load of unnecessary libraries just to kick-start. Also, (personal favourite bashing point) syntax.
There are several JS implementations ranging from the ancient SpiderMonkey to Duktape and MuJS.. take your pick. All of these are so minimal that you can spend an afternoon with one and have a portable working instance cleanly embedded into your binary without side effects.
I've encountered features like that in the RPM build tools, but I'm not convinced it's the best solution in general.
I'm looking at it from a C/C++ perspective, where it's quite common in enterprise settings for people to bring in their favourite OSS project, apply patches around it and compile it. In fact, this is so common that tools like Quilt (http://savannah.nongnu.org/projects/quilt) exist very specifically to address these issues (though I personally find a temporary git repository does it a lot more cleanly). Projects like Yocto or OpenWRT use their own download-patch-compile sequence as part of their builds via complex Makefiles, though in all these cases the 'patch' step is optional. Note, I've seen several large-scale enterprise projects where people maintain an OSS project's tar.gz in SVN along with a group of patches. See a real-life example (names changed to protect the innocent):
This isn't the first time I've seen this in an org, and I'm sure it won't be the last either.
I reckon this per-project custom logic is the quickest way to alienate new contributors. Yet there are several valid, real reasons why they can't do it differently. Reasons range anywhere from disk space, to changes unimportant for mainstream but important for the project, to not wanting yet another repository because they're still old-school with SVN, to it just not being worth giving a damn about.
That said, all of this is still up in the air, as I'm still working on a design. Nothing concrete except for some stray C files flying around in multiple directories just to test theories. Once I settle on a basic design, I'll post it on GitLab and put a link here.