Do **not** be alarmed.
Be *very* alarmed :-)
```
[ui]
username = Pozorvlak <email@example.com>
merge = internal:merge

[pager]
pager = LESS='FSRX' less

[extensions]
rebase =
record =
histedit = ~/usr/etc/hg/hg_histedit.py
fetch =
shelve = ~/usr/etc/hg/hgshelve.py
pager =
mq =
color =
```
I've been running benchmarks again. The basic workflow is
Suppose I want to benchmark three different simulators with two different compilers for three iteration counts. That's 18 configurations. Now note that the problem found in stage 5 and fixed in stage 6 will probably not be unique to one configuration - if it affects the invocation of one of the compilers then I'll want to propagate that change to nine configurations, for instance. If it affects the benchmarks themselves or the benchmark-invocation harness, it will need to be propagated to all of them. Sounds like this is a job for version control, right? And, of course, I've been using version control to help me with this; immediately after step 1 I check everything into Git, and then use `git fetch` and `git merge` to move changes between repositories. But this is still unpleasantly tedious and manual.

For my last paper, I was comparing two different simulators with three iteration counts, and I organised this into three checkouts (x1, x10, x100), each with two branches (`simulator1` and `simulator2`). If I discovered a problem affecting `simulator1`, I'd fix it in, say, x1's `simulator1` branch, then `git pull` the change into x10 and x100. When I discovered a problem affecting every configuration, I checked out the root commit of x1, fixed the bug in a new branch, then `git merge`d that branch with the `simulator1` and `simulator2` branches, then `git pull`ed those merges into x10 and x100.
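That propagation dance is concrete enough to sketch with throwaway repos. The repo and branch names below match the post; the file contents and commit messages are invented placeholders.

```shell
#!/bin/sh
# Sketch of propagating a branch-specific fix across per-iteration-count
# checkouts, using disposable repos.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# One checkout per iteration count, sharing a root commit.
git init -q x1 && cd x1
git config user.email you@example.com && git config user.name Pozorvlak
echo 'benchmark harness' > run.sh
git add run.sh && git commit -qm 'root: benchmark harness'
git branch simulator1
git branch simulator2
cd ..
git clone -q x1 x10
git clone -q x1 x100

# A fix that only affects simulator1: commit it in x1's simulator1 branch...
cd x1 && git checkout -q simulator1
echo 'fixed simulator1 invocation' >> run.sh
git commit -qam 'Fix simulator1 invocation'
cd ..

# ...then pull it into the other checkouts.
for d in x10 x100; do
  (cd "$d" && git checkout -q simulator1 && git pull -q ../x1 simulator1)
done
```

Even in this toy form, the bookkeeping burden is visible: every fix needs the right branch checked out in every checkout before the pull.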
Keeping track of what I'd done and what I needed to do was frankly too cognitively demanding, and I was constantly bedevilled by the sense that there had to be a Better Way. I asked about this on Twitter, and Ganesh Sittampalam suggested "use Darcs" - and you know, I think he's right: Darcs' "bag of commuting patches" model is a better fit for what I'm trying to do than Git's "DAG of snapshots" model. The obvious way to handle this in Darcs would be to have six base repositories, called "everything", "x1", "x10", "x100", "simulator1" and "simulator2", and six working repositories, called "simulator1_x1", "simulator1_x10", "simulator1_x100", "simulator2_x1", "simulator2_x10" and "simulator2_x100". Then set up `update` scripts in each working repository, containing, for instance

```
#!/bin/sh
darcs pull ../base/everything
darcs pull ../base/simulator1
darcs pull ../base/x10
```

and every time you fix a bug, run `for i in working/*; do $i/update; done`.
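Writing those six near-identical update scripts by hand invites typos, so here's a sketch that generates them from the naming scheme above. The `base/` and `working/` directory names come from the post; the pull paths copy the post's example verbatim, and treating them as literally correct relative paths is my assumption.

```shell
#!/bin/sh
# Generate an update script in each of the six working repositories.
set -e
for sim in simulator1 simulator2; do
  for size in x1 x10 x100; do
    repo="working/${sim}_${size}"
    mkdir -p "$repo"
    cat > "$repo/update" <<EOF
#!/bin/sh
darcs pull ../base/everything
darcs pull ../base/$sim
darcs pull ../base/$size
EOF
    chmod +x "$repo/update"
  done
done
```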
But! It is extremely useful to be able to commit the output logs associated with a particular state of the build scripts, so you can say "wait, what went wrong when I used the `-static` flag? Oh yeah, that". I don't think Darcs handles that very well - or at least, it's not easy to retrieve any particular state of a Darcs repo. Git is great for that, but whenever I think about duplicating the setup described above in Git my mind recoils in horror before I can think through the details. Perhaps it shouldn't - would this work? Is there a Better Way that I'm not seeing?
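For what it's worth, the "retrieve the log from that old state" operation really is cheap in Git; here's a throwaway-repo demo. The file names, commit messages, and log contents are invented for illustration.

```shell
#!/bin/sh
# Commit build scripts together with the logs they produced, then
# recover an earlier state's log without touching the working tree.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name Pozorvlak
printf 'gcc -static bench.c\n' > build.sh
printf 'link error: cannot find -lc\n' > build.log
git add . && git commit -qm 'static build, with its log'
static_rev=$(git rev-parse HEAD)
printf 'gcc bench.c\n' > build.sh
printf 'ok\n' > build.log
git commit -qam 'dynamic build, with its log'
# "Wait, what went wrong when I used the -static flag?"
git show "$static_rev:build.log"    # prints the old log
```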
`strace` or some equivalent.
`stat` is slow on modern filesystems.
[Exercise for the reader: which build tools make which assumptions, and which compilers violate them?]