Posts

Showing posts with the label haskell

“Pattern-matching” types in Haskell

I want a function f that acts as an increment on Ints and as an identity on all other types. I tried to enable the TypeApplications extension and do it in the most straightforward way possible:

    f :: forall a. a -> a
    f @Int x = x + 1
    f @_   x = x

But the extension does not enable such patterns:

    <interactive>:6:1: error: Parse error in pattern: f @Int

Is there pattern-matching for types (or a similar mechanism) in Haskell?
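
One way this is commonly handled, sketched here under the assumption that adding a Typeable constraint is acceptable (this is not the asker's code, and it changes the signature, so it is a workaround rather than true type-level pattern matching), is to test the type at runtime with eqT and branch on the resulting proof:

    {-# LANGUAGE ScopedTypeVariables, TypeApplications #-}

    import Data.Typeable (Typeable, eqT)
    import Data.Type.Equality ((:~:) (Refl))

    -- A sketch: increment Ints, leave every other (Typeable) type untouched.
    f :: forall a. Typeable a => a -> a
    f x = case eqT @a @Int of
      Just Refl -> x + 1   -- in this branch the compiler knows a ~ Int
      Nothing   -> x

With this sketch, f (41 :: Int) gives 42, while f "hello" gives "hello" back unchanged.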

Merge multiple cases in Haskell

With a case ... of syntax like this:

    fun a b c = case (a, b, c) of
      (Just True, Just _, _) -> foo
      (Just True, _, Just _) -> foo
      _                      -> bar

Can I merge the first two alternatives and avoid repeating foo? Alternatively, is there any other (cleaner) way to express that I want to run foo if and only if a is Just True and either b or c is not Nothing?
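
One cleaner shape, sketched below with hypothetical String stand-ins for foo and bar so it compiles on its own, is to drop the tuple and use guards, which lets the two foo conditions collapse into a single alternative:

    import Data.Maybe (isJust)

    -- Hypothetical stand-ins so the sketch is self-contained.
    foo, bar :: String
    foo = "foo"
    bar = "bar"

    -- Guards: run foo iff a is Just True and either b or c is not Nothing.
    fun :: Maybe Bool -> Maybe b -> Maybe c -> String
    fun a b c
      | a == Just True, isJust b || isJust c = foo
      | otherwise                            = bar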

Why doesn't Haskell need Trampolining?

As a Scala developer learning the IO monad, and with it the technicalities of trampolining in general (necessary for recursion where tail-call optimization is not possible), I wonder how Haskell seems to avoid it natively. I get that Haskell is a lazy language, but I wonder if someone could elaborate a bit further. For instance, why doesn't ForeverM stack-overflow in Scala? Well, for the trampoline I can answer that, and I can find the actual code that does it in libraries and blog posts; I even implemented a basic trampoline myself to learn. How does it happen in Haskell? Is there a way to unpack the laziness a bit, give some pointers, and maybe documentation that would help in understanding it better?

    sealed trait IO[A] {
      .....
      def flatMap[B](f: A =...
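
As a minimal illustration (my own sketch, not from the question), the kind of non-tail-looking monadic recursion that forces a trampoline in untrampolined Scala simply runs in constant stack in GHC, because each step's continuation is an ordinary heap-allocated closure rather than a native stack frame:

    -- A sketch: deep monadic recursion with no explicit trampoline.
    countDown :: Int -> IO ()
    countDown 0 = putStrLn "done"
    countDown n = putStrLn (show n) >> countDown (n - 1)

    main :: IO ()
    main = countDown 1000000   -- completes without a stack overflow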

Is Haskell's `Const` Functor analogous to the constant functor from category theory?

I understand that many of the names in Haskell are inspired by category-theory terminology, and I'm trying to understand exactly where the analogy begins and ends.

The Category Hask

I already know that Hask is not (necessarily) a category due to some technical details about strictness/laziness and seq, but let's put that aside for now. For clarity: the objects of Hask are concrete types, that is, types of kind *. This includes function types like Int -> [Char], but not anything that requires a type parameter like Maybe :: * -> *. However, the concrete type Maybe Int :: * belongs to Hask. Type constructors / polymorphic functions are more like natural transformations (or other more general maps from Hask to itself), rather than morphis...
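
For reference, a minimal sketch of base's Const and the behaviour the question is comparing to the constant functor: for a fixed c, Const c wraps a value of type c at every index type a, and fmap ignores the supplied function entirely.

    import Data.Functor.Const (Const (..))

    example :: Const String Int
    example = Const "unchanged"

    -- fmap never touches the function; only the index type changes.
    mapped :: Const String Bool
    mapped = fmap undefined example   -- still Const "unchanged"

    main :: IO ()
    main = putStrLn (getConst mapped)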

Why doesn't this function bottom out

    xs = [1, 2] ++ undefined

    length $ take 2 $ take 4 xs

My brain is reading this as length (take 2 (take 4 xs)). If everything in the parentheses is evaluated first, then why does this not error out with Exception: Prelude.undefined? The answer given by the book is that take 2 only takes the first two elements, but shouldn't the take 4 take precedence here and get evaluated first?
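
A small runnable sketch of what is going on (my own illustration): take builds its result lazily, one element at a time, so length (take 2 (take 4 xs)) only ever demands two elements of take 4 xs, which in turn only demands the first two elements of xs, and the undefined tail is never reached.

    xs :: [Int]
    xs = [1, 2] ++ undefined

    main :: IO ()
    main = print (length (take 2 (take 4 xs)))   -- prints 2, no exception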

Can the composite pattern be used to generate HTML from a tree and handle the indenting as well, or this inherently not possible?

I watched this video on the composite pattern, where the main example is how to use the pattern as a means to generate HTML code from a tree structure describing a todo list in which each item can in turn be a todo list. It seems a convenient test-bed, so here is a target HTML:

    [ ] Main
    <ul>
      <li>[ ] 1.</li>
      <li>[ ] 2.
        <ul>
          <li>[ ] 2.1</li>
          <li>[ ] 2.2</li>
        </ul>
      </li>
      <li>[ ] 3.</li>
    </ul>

(Sorry if the top [ ] Main doesn't make sense, but I don't know HTML; furthermore, I believe that's irrelevant to my question.) I understand that design patterns are mostly an OOP "thing", however I'm often referring to the article Design P...
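
The question is language-agnostic, but a rough Haskell sketch (my own, not from the video) shows the usual composite shape: a todo item is a label plus a possibly-empty list of children, and the renderer threads an indentation depth down the tree.

    -- A sketch: a recursive todo tree rendered with explicit indentation.
    data Todo = Todo String [Todo]

    render :: Int -> Todo -> [String]
    render depth (Todo label children) =
      [pad depth ++ "<li>[ ] " ++ label ++ close] ++ sub
      where
        pad n = replicate (2 * n) ' '
        close = if null children then "</li>" else ""
        sub
          | null children = []
          | otherwise =
              [pad (depth + 1) ++ "<ul>"]
                ++ concatMap (render (depth + 2)) children
                ++ [pad (depth + 1) ++ "</ul>", pad depth ++ "</li>"]

    main :: IO ()
    main = mapM_ putStrLn (["<ul>"] ++ concatMap (render 1) items ++ ["</ul>"])
      where
        items =
          [ Todo "1." []
          , Todo "2." [Todo "2.1" [], Todo "2.2" []]
          , Todo "3." []
          ]

Running it prints markup nested like the target above, with each level indented two further spaces.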

What does seq actually do in Haskell?

From Real World Haskell I read:

    It operates as follows: when a seq expression is evaluated, it forces its first argument to be evaluated, then returns its second argument. It doesn't actually do anything with the first argument: seq exists solely as a way to force that value to be evaluated.

where I've emphasised the "then" because to me it implies an order in which the two things happen. From Hackage I read:

    The value of seq a b is bottom if a is bottom, and otherwise equal to b. In other words, it evaluates the first argument a to weak head normal form (WHNF). seq is usually introduced to improve performance by avoiding unneeded laziness. A note on evaluation order: the expression seq a b does not guarantee that a will be evaluated before b ...
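
A tiny sketch (mine, not from either source) that shows the WHNF-forcing part: the result depends on the first argument being forced, even though its value is never otherwise used. It deliberately does not claim anything about the relative order in which a and b are forced.

    import Debug.Trace (trace)

    example :: Int
    example = a `seq` b
      where
        a = trace "a evaluated" (1 :: Int)   -- forced only because of seq
        b = trace "b evaluated" (2 :: Int)

    main :: IO ()
    main = print example   -- both trace messages appear, then 2 is printed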

Why does Haskell use mergesort instead of quicksort?

In Wikibooks' Haskell, there is the following claim:

    Data.List offers a sort function for sorting lists. It does not use quicksort; rather, it uses an efficient implementation of an algorithm called mergesort.

What is the underlying reason in Haskell to use mergesort over quicksort? Quicksort usually has better practical performance, but maybe not in this case. I gather that the in-place benefits of quicksort are hard (impossible?) to do with Haskell lists. There was a related question on softwareengineering.SE, but it wasn't really about why mergesort is used. I implemented the two sorts myself for profiling. Mergesort was superior (around twice as fast for a list of 2^20 elements), but I'm not sure that my implementation of quicksort was optimal.

Edit: Here are my implementations of mergesort and quicksort:

    mergesort :: Ord a => [a] -> [a]
    mergesort [] = []
    mergesort [x] = [x]
    mergesort l = merge (mergesort left) (mergesort right)
      where size = div...
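
For reference, a minimal sketch of the list-based versions usually being compared (this is not the asker's code, which is truncated above): the "quicksort" on lists has to copy and concatenate sublists at every level, which is exactly the in-place advantage it loses relative to mutable arrays.

    msort :: Ord a => [a] -> [a]
    msort []  = []
    msort [x] = [x]
    msort xs  = merge (msort left) (msort right)
      where
        (left, right) = splitAt (length xs `div` 2) xs
        merge [] ys = ys
        merge xs' [] = xs'
        merge (x:xs') (y:ys)
          | x <= y    = x : merge xs' (y:ys)
          | otherwise = y : merge (x:xs') ys

    qsort :: Ord a => [a] -> [a]
    qsort []       = []
    qsort (p:rest) =
      qsort [x | x <- rest, x < p] ++ [p] ++ qsort [x | x <- rest, x >= p]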