Why write type declarations in Haskell?

I am new to Haskell and I am trying to understand why one needs to write type declarations. Since Haskell has type inference, when do I need the first line (the type signature) at all? GHCi seems to infer the correct type when I use ':t'.

The only example I found so far that seems to need a declaration is the following.

maximum' :: (Ord a) => [a] -> a  
maximum' = foldr1 max

However, if I add the -XNoMonomorphismRestriction flag, the declaration is not needed anymore (see the sketch below). Are there specific situations where type inference does not work and one needs to specify types?
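
For reference, this is roughly the situation I mean (just a sketch; the module name MaxExample and the second definition are my own additions for illustration):

{-# LANGUAGE NoMonomorphismRestriction #-}
module MaxExample where

-- Without the pragma above (and without a type signature), GHC's
-- monomorphism restriction refuses to generalise this argument-less
-- binding, because the leftover Ord constraint cannot be defaulted:
maximum' = foldr1 max

-- Writing the argument explicitly sidesteps the restriction even
-- without a signature, since the restriction only applies to bindings
-- that look like plain variables:
maximum'' xs = foldr1 max xs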

Since I could have a bug in the type declaration and I see no direct benefit, I'd rather not write it. Again, I have just started learning Haskell, so please correct me if I am wrong, as I want to develop good habits.

EDIT: It turns out that the "Type inference is a double-edged sword" section of the Real World Haskell book has a nice discussion of this topic.

  1. Consider read "5". How can Haskell know the type of read "5"? It can't, because there is no way to determine the result type from the operation itself: read is defined as (Read a) => String -> a, and a does not depend on the contents of the string, so it must come from the surrounding context.

    However, the context often supplies only a class constraint such as Ord or Num, which still does not pin down a concrete type. This is not the monomorphism restriction but a separate case that can never be resolved without an annotation.

    Examples:

    Does not Work:

    read "0.5"
    putStrLn . show . read $ "0.5"
    

    Does Work:

    read "0.5" :: Float
    putStrLn . show . (read :: String -> Float) $ "0.5"
    

    These annotations are necessary because nothing else fixes the result type of read: the only constraints are Read (and Show in the second example), and the defaulting rules cannot pick a concrete type from those alone.
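
    A complete, compilable version of the working case might look like this (just a sketch showing both annotation styles):

    main :: IO ()
    main = do
      -- Annotating the result resolves the ambiguity for 'read':
      let x = read "0.5" :: Float
      print x                                  -- print = putStrLn . show

      -- Equivalently, annotate 'read' itself:
      print ((read :: String -> Float) "0.5")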

    • in big Haskell programs, type signatures often get you better error messages from the compiler
    • sometimes you can work out what a function does just from its name and its signature
    • a function is often much easier to understand with a type signature, e.g. when it makes use of currying
    • even writing programs gets easier: I often start with the type signatures and most functions declared as undefined. If everything compiles, I know that my idea seems to fit together reasonably well. Then I go on and replace each undefined with real code (see the sketch below).
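
    A minimal sketch of that last workflow (Config, AppState, parseConfig and applyConfig are all invented names):

    -- Signatures first, bodies left as 'undefined'; the module already
    -- type-checks, which is a cheap sanity check on the overall design.
    data Config   = Config
    data AppState = AppState

    parseConfig :: String -> Either String Config
    parseConfig = undefined

    applyConfig :: Config -> AppState -> AppState
    applyConfig = undefined

    -- Each 'undefined' is later replaced with a real implementation,
    -- and the compiler keeps checking every step against the signatures.
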
  2. It's usually because it makes the code easier to read and sometimes easier to write. In a strongly typed language such as Haskell, you will often find yourself writing functions that take values of certain types and produce a value of another type, relying on what those types are rather than on their names. Once you get used to how the type system works, signatures make it clearer what you intend to do, and the compiler can catch you if you've done something wrong.

    But this is a preference thing. If you’re used to working in dynamically typed languages, you may find specifying no types to be easier than specifying them. It’s just two different ways to use the strong type system that Haskell provides.

    Then there are times when type inference doesn't work, such as the "read" example that another answer gave. But those call for inline type annotations rather than a type signature on a function.

  3. Peace of mind. It’s nice sometimes to make sure that the compiler agrees with your perception of what a function’s type should be. If the inferred type doesn’t unify with your given type, then the compiler will yell at you. Once you become familiar with the type system, you will find that optional type signatures can be a great boon to your coding confidence.
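
    For example (a made-up definition, just to illustrate the mechanism):

    -- The signature states the intent; GHC checks that the body's
    -- inferred type unifies with it.
    double :: Int -> Int
    double x = x + x          -- fine

    -- With the signature  double :: String -> Int  instead, GHC would
    -- reject the very same body, because (+) needs a numeric type.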

  4. One important thing I haven't really seen covered in the other answers is that you will often write your type definitions and type signatures before you write any actual code. Once that "specification" is complete, your implementation is checked against it as you write it, which makes it easier to catch mistakes early, because the compiler verifies that your types match. If you know, for example, that something should have the signature Int -> Int -> [a] -> [a] but, when writing it, you accidentally bind only one parameter x and use it twice instead of binding two parameters x and y, the compiler will catch the mistake at the point where you define the function, rather than at the point where you later try to use it as intended.
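
    A concrete sketch of that scenario (the function name slice is invented):

    slice :: Int -> Int -> [a] -> [a]
    slice start len xs = take len (drop start xs)   -- matches the signature

    -- Had we accidentally bound only one Int and used it twice,
    --
    --   slice start xs = take start (drop start xs)
    --
    -- GHC would reject the definition right here, because the arguments
    -- and result no longer line up with the declared signature.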
