Over at the Comonad.Reader, Edward Kmett's been writing about so-called "abiding" pairs of binary operations: we say that * and + abide if (a+b)*(c+d) = (a*c)+(b*d) for all a, b, c and d. The term's a portmanteau of "above and beside", and comes from the following rather lovely notation:
    a|c                        a|c
    -|-  = (a*b)+(c*d)         ---  = (a+c)*(b+d)
    b|d                        b|d

We write * by stacking things vertically, and + by putting them side-by-side: then + and * abide if we can remove the lines without introducing ambiguity. By the way, + and * don't necessarily stand for ordinary addition and multiplication - in fact, they can't, because addition and multiplication don't abide! Just think of them as two machines: you feed two things into either of them, and get one thing out of the other end. We require the things we get back to satisfy the equation above (and yes, there are examples).
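To make the law concrete, here's a quick sanity check of my own (not from either post): ordinary addition abides with itself, but - as claimed above - addition and multiplication do not abide with each other.

```python
def abide(star, plus, samples):
    """Check (a+b)*(c+d) == (a*c)+(b*d) over a handful of sample values."""
    return all(
        star(plus(a, b), plus(c, d)) == plus(star(a, c), star(b, d))
        for a in samples for b in samples for c in samples for d in samples
    )

add = lambda x, y: x + y
mul = lambda x, y: x * y

print(abide(add, add, range(4)))  # True: addition abides with itself
print(abide(mul, add, range(4)))  # False: (1+1)*(1+1) = 4, but (1*1)+(1*1) = 2
```

Of course a finite check proves nothing in general, but it's enough to see the counterexample for * and +.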
Why would we want to consider such a thing? Well, suppose (A,+,0) is a monoid: then one of the conditions we need for a binary operation * to be a monoid homomorphism AxA -> A (where AxA carries the componentwise monoid structure) is that * and + abide. Generalise that example as far as you wish :-)
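Spelling that out (my unfolding, nothing deep):

```latex
% A x A carries the componentwise monoid structure:
%   (a,b) + (c,d) = (a+c, b+d),  with unit (0,0).
% For * : A x A -> A to be a monoid homomorphism we need
%   *((a,b) + (c,d)) = *(a,b) + *(c,d),
% which unfolds to
\[
  (a+c)*(b+d) \;=\; (a*b) + (c*d),
\]
% i.e. precisely the statement that * and + abide
% (plus the unit condition 0*0 = 0).
```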
Though neither Edward's post nor the thesis he links to say so, this notation is related to the rather beautiful Eckmann-Hilton argument. I tried to say as much in a comment, but the proof works best in two dimensions, and his comment form strips <pre> tags. Fortunately, however, I have no such restriction here :-)
Theorem (Eckmann-Hilton): Let +, * be a pair of abiding binary operations on a set A, with units 0 and 1 respectively (so a+0 = a = 0+a and a*1 = a = 1*a for all a). Then 0 = 1, + = *, and + (and hence *) is commutative.
(In each step we either insert or remove units, or use the abiding law to re-read a two-row block column-wise instead of row-wise.)

    0 = 0|0 = 1|0 = 1|0 = 1 = 1
              -|-   ---   -
              0|1   0|1   1

    x|y = 1|y = 1|y = y = y|1 = y|1 = y|x = 1|x = 1|x = x
          -|-   ---   -   ---   -|-         -|-   ---   -
          x|1   x|1   x   1|x   1|x         y|1   y|1   y

QED.
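For the curious, the whole argument also goes through formally. Here is a rendering in Lean 4 (a sketch of mine - all the names are made up, and the statement simply packages the three conclusions together):

```lean
-- Eckmann–Hilton, for two abiding operations `p` ("+") and `m` ("*")
-- with respective units `z` ("0") and `u` ("1").
theorem eckmann_hilton {A : Type} (p m : A → A → A) (z u : A)
    (pz : ∀ a, p a z = a) (zp : ∀ a, p z a = a)
    (mu : ∀ a, m a u = a) (um : ∀ a, m u a = a)
    (ab : ∀ a b c d, m (p a b) (p c d) = p (m a c) (m b d)) :
    z = u ∧ (∀ x y, p x y = m x y) ∧ (∀ x y, p x y = p y x) := by
  -- 0 = 1: evaluate the 2x2 block of units both ways
  have huz : u = z := by
    calc u = m u u := (mu u).symm
      _ = m (p u z) (p z u) := by rw [pz, zp]
      _ = p (m u z) (m z u) := ab u z z u
      _ = p z z := by rw [um, mu]
      _ = z := pz z
  -- hence 1 is also a unit for +
  have pu : ∀ a, p a u = a := fun a => by rw [huz, pz]
  have up : ∀ a, p u a = a := fun a => by rw [huz, zp]
  -- + = *
  have hpm : ∀ x y, p x y = m x y := fun x y => by
    calc p x y = p (m x u) (m u y) := by rw [mu, um]
      _ = m (p x u) (p u y) := (ab x u u y).symm
      _ = m x y := by rw [pu, up]
  -- commutativity
  have hcomm : ∀ x y, p x y = p y x := fun x y => by
    calc p x y = m x y := hpm x y
      _ = m (p u x) (p y u) := by rw [up, pu]
      _ = p (m u y) (m x u) := ab u x y u
      _ = p y x := by rw [um, mu]
  exact ⟨huz.symm, hpm, hcomm⟩
```

Note how each `calc` step is exactly one stage of the diagrammatic proof: the `rw` steps insert or remove units, and the appeals to `ab` switch between the two readings of a block.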
You'll note that this chain of equations can be continued so that it loops round back to x+y again. If you do that, and arrange the stages in the proof in a circle, you get the "Eckmann-Hilton clock".
This, by the way, is the kind of thing I mean when I say I do "higher-dimensional algebra": some arguments naturally live in more than one dimension. In 2-d we're OK, because we can draw pictures on paper, but 3-d and higher rapidly gets hard to think about.
The Eckmann-Hilton argument originally arose in algebraic topology: it's how you prove that the higher homotopy groups π2, π3, ... must be commutative (the notation now has a geometric interpretation: we can literally lay out homotopies and morph them around each other as in the proof). In category theory, the argument is widespread, and it's related to the Stabilization Hypothesis (scroll down until you see the table, then read the paragraph or two above for the gist). A further example is the theorem that a monoid object in the category of monoids is a commutative monoid (because, as mentioned above, the multiplication AxA -> A can only be a monoid homomorphism if it abides with the monoid's own operation).
I first encountered the Eckmann-Hilton argument in Joachim Kock's excellent book Frobenius algebras and 2D topological quantum field theories, which despite the title is a great introduction to the categorical way of thinking. It's aimed at beginning graduate students, is as easy to read as a maths textbook will ever be, and comes highly recommended. It's also interesting for the way the book's structured: it uses the bottom-up style currently in fashion in programming circles. The book introduces a series of "languages", each defined in terms of the previous one, which approach successively closer to the material in question. By the end of the book, we're able to replace literally pages of calculation with a few simple diagrams, and it's all perfectly rigorous. Better than that, the graphical language makes proofs extremely easy to construct. The main result of the book's pretty cute too :-)