Haskell doesn't need macros, eh? - Beware of the Train


Haskell doesn't need macros, eh? [Mar. 6th, 2008|11:25 am]

A comparison of the Haskell argmunger with its Arc equivalent (or even its Perl equivalent) should make something clear. It's been claimed that Haskell doesn't need macros, because most of the things that Lispers need macros for can be done in Haskell using lazy evaluation and so on. But the argmunger example makes it clear that there are things for which Lispers don't need macros and Haskellers do - if not Template Haskell macros, then SYB or DrIFT or some other macro-like facility.
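To make the claim concrete: an argmunger permutes a function's arguments. Any single permutation is trivial to write in plain Haskell - the sketch below, with the hypothetical name munge213, reorders the first two of three arguments - but every arity and permutation needs its own hand-written definition, and a fully general argmunge has no type expressible in plain Haskell, which is exactly where a macro-like facility earns its keep:

```haskell
-- Hypothetical sketch: one hand-written "argmunger" that reorders
-- the first two of a three-argument function's arguments.
-- A general argmunge, covering every arity and permutation, needs a
-- separate definition per case -- the boilerplate a macro would generate.
munge213 :: (a -> b -> c -> r) -> b -> a -> c -> r
munge213 f y x z = f x y z

main :: IO ()
main = putStrLn (munge213 (\x y z -> x ++ y ++ z) "b" "a" "c")
```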

Lispers often say that coding in other languages feels like manually generating macroexpansions. I'm nobody's idea of a Lisp hacker, but I often feel like this when I'm writing Haskell. Which is why I'm learning about Template Haskell even though I'm not yet au fait with things like do-notation and monad transformers, which most Haskell experts would probably consider more basic. Would you choose before or after you'd walked to Moscow to staunch the blood from your severed arm?

From: (Anonymous)
2008-03-06 03:17 pm (UTC)
Argmunger... sounds like the name of the next Minister for Magic.
From: ryani
2008-03-07 08:36 pm (UTC)

macros + strong typing = ?

I agree that a "real language" needs lispy macros. I'm not really happy with TH's approach, though, especially that you need to explicitly tell the compiler to run a macro with a splice; it should look like a regular function call.

But is there a good way to reconcile macros with strong typing? The simplest answer is just "run all the macros, then typecheck", but doesn't that give you error messages much later than you want? On the other hand, defining a type for "argmunge" is impossible in Haskell's type system.

This seems like a perfect application for dependent types; I'm seeing more and more uses for them, from "sprintf :: (s :: String) -> SprintfType s" to "argMunge :: (n :: Nat) -> (perm :: [Fin n]) -> ArgMungeType n perm".

Unfortunately, using dependent types seems to put you smack dab in the middle of writing proofs as opposed to writing programs. While they are the same thing (thank you Mr. Curry and Mr. Howard), often the "program" part is very compact and the proof part ends up pretty large.

Which is why dynamic typing seems to win for these sorts of problems; the proof for argmunge is pretty intuitive, so why write it down? (Although you did end up with an arity bug, so your first intuitive proof was clearly wrong!)

I guess TH puts a limited form of dependent types into Haskell; as long as the types can be fully instantiated into non-dependent types at compile time.
From: pozorvlak
2008-03-07 10:00 pm (UTC)

Re: macros + strong typing = ?

I can't make my mind up about the need for explicit splices. On the one hand, it makes macros considerably less useful for implementing DSLs - or rather, all your DSLs need to have lots of $ signs dotted about the place :-) On the other, macros and functions have different semantics, so maybe it's not such a bad thing to have them visually distinct. Hmmm, is that still true in a lazy language? Not sure. More experience needed.
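For readers who haven't met Template Haskell's splice syntax: a macro call must be marked with $(...), which is exactly the explicitness being debated here. A minimal sketch - the spliced expression is a closed quoted lambda, so it can live in a single module:

```haskell
{-# LANGUAGE TemplateHaskell #-}

-- The $( ... ) marker tells GHC to run the enclosed code at compile
-- time and splice the resulting expression in here. In Lisp the same
-- call would look like an ordinary function call.
double :: Int -> Int
double = $( [| \x -> x + x |] )

main :: IO ()
main = print (double 21)
```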

I definitely need to learn about dependent types some time. I wasn't aware of the proof drawback!

"I guess TH puts a limited form of dependent types into Haskell; as long as the types can be fully instantiated into non-dependent types at compile time."
I think you've hit the nail on the head here! How much of a limitation that is remains to be seen...
From: ryani
2008-03-08 04:08 am (UTC)
The proof drawback is pretty simple to explain.

Let's say you have a function

SprintfType :: String -> *
SprintfType ('%':'d':s) = Int -> SprintfType s
SprintfType (c:s) = SprintfType s
SprintfType [] = String

Now if you have a function

sprintf :: (s :: String) -> SprintfType s

it's really easy to use for static strings:

sprintf "hello" :: String
sprintf "%d %d" :: Int -> Int -> String

There are two problems to consider, however:

1) How do you write sprintf? You need to prove for each case that the type of function you are generating is equal to SprintfType s. This is not that hard; you just mirror the type level computation at the value level:

sprintf :: (s :: String) -> SprintfType s
sprintf = sprintf' id -- use CPS for the result

sprintf' :: (String -> String) -> (s :: String) -> SprintfType s
sprintf' k ('%':'d':rest) = \i -> sprintf' (\s -> k (show i ++ s)) rest
sprintf' k (c:rest) = sprintf' (\s -> k (c:s)) rest
sprintf' k [] = k []

I've noticed this as a common pattern in dependently typed programs; you write the same program twice, once at the type level and once at the value level. I wonder if there is a way to automate it?
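For comparison, plain Haskell can already approximate a typed sprintf without dependent types, using Danvy-style "functional unparsing": the format is a composition of combinators rather than a string, so its type is ordinary Haskell and no type-level string analysis is needed. A sketch, with combinator names of my own choosing:

```haskell
-- Danvy-style functional unparsing: each combinator extends a
-- continuation that accumulates the output string, and the type of
-- the whole format falls out of ordinary composition.
lit :: String -> (String -> a) -> String -> a
lit str k out = k (out ++ str)

int :: (String -> a) -> String -> Int -> a
int k out i = k (out ++ show i)

sprintf :: ((String -> String) -> String -> a) -> a
sprintf fmt = fmt id ""

main :: IO ()
main = do
  putStrLn (sprintf (lit "x=" . int) 42)        -- one Int argument
  putStrLn (sprintf (int . lit " " . int) 1 2)  -- two Int arguments
```

The trade-off is that the format is no longer a string at all, so the dynamic-string problem from point 2 can't even be posed - which is a rather different way of dodging the proof obligation.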

2) What if you don't want to pass in a static string? Then you need to provide a proof that the type created by the string is the one you think it is. For example, you can write a theorem that for all strings s that do not contain the "%" character, SprintfType (x ++ s) = SprintfType x = SprintfType (s ++ x). Then you can apply that theorem when calling sprintf.

But that's a lot of work. It's a lot easier to just say "I know this is true", pass in something you believe is safe, and be done with it.

Then somebody figures out a way to get an unquoted "%s" through your security and writes a buffer overrun exploit because you didn't prove your safety property :)