
> Category theory ... has the potential to be a major cohesive force in the world, building rigorous bridges between disparate worlds, both theoretical and practical.

I got this same impression while delving into type theory [0], but even more so with category theory. It seems the chaos of the information explosion is being organized, and how we presently do software and data may go the way of the dinosaur "overnight".

Indeed, we (David Spivak et al) are (among other things) trying to apply those seven sketches in applied category theory to business problems and are soliciting proposals for up to $1.5M in venture funding from interested start-ups to do so: https://conexus.ai/ventures. Individuals interested in applying CT are also encouraged to apply to get matched with a venture.

My supervisor's opinion on this topic seems to be that we would not end at category theory, but that it is a step in the right direction of where to focus our attention.

EDIT: For example, my MSc was on the topic of a category equipped with a functor into it. This allows one to construct (parts of, e.g. I haven't done subobject classifiers) a self-dual set theory.

To be honest, I don't see the majority of developers taking an interest in category theory. We have to hand it to them on a silver platter, by solving some problems to show it is useful. It doesn't sound pragmatic enough for people to even start looking at it.

One idea I have is to extract "patterns" automatically from their code. That would make it possible to write code reviews, give advice and pointers, find duplicated patterns, and build a common vocabulary. I'm sure automatically structuring code from a CT point of view can be helpful.

Disclaimer:
A few years ago, I fell in love with category theory, and more precisely with sketches (from Ehresmann). I linked machine learning and category theory [1] by automatically mapping data structures and algorithm definitions together (inputs/outputs and the operations between them) in order to be able to run any algorithm on any set of data. Then I introduced a heuristic based on Kolmogorov complexity to find the best model (algorithm output) to summarize the input data. Loved it!

My mathematical training is mainly from the category theory school of thought (more specifically, the set-theory-inspired school of category theory), and I have to say that I find it difficult to move from category theory to functional programming. I think it's because mathematics is more about a way of thinking than about the tools.

For example, reading category theory introductions meant for programmers to me is confusing, even though I know the mathematics already! I still need to get around to Milewski's book which looks to be dually useful as "functional programming for category theorists".

I stopped for a moment to come back to this thread and say that this was extremely well-written. No words egregiously used or out of place, just the right conciseness and tone. Love it.

Exactly. Design patterns are a type of language... in fact Christopher Alexander, the architect who came up with the concept, talks about "pattern languages" (and that's the title of his first book). And category theory is a language... it's a language much more suitable for talking about the kinds of abstractions we have to deal with in software engineering (and maybe mathematics or at least parts of mathematics as well). Design patterns help, but they're too fuzzy... category theory lets you say the same things much more precisely.

I like this comparison, because one of Christopher Alexander's requirements for patterns is that you can draw a picture of them. And what is category theory but diagrams!

I got really into F# for a couple years, and so Mark Seemann was someone I would read and watch quite often. Something I've noticed is that F# has had very little impact as a language, but the small group of people involved with it seem really worth paying attention to. I feel like there is a chance that F#'s major contribution to programming will be its enthusiasts more than its project. For instance, check out this site by Scott Wlaschin. http://www.fsharpforfunandprofit.com

Can you expand on the connection between LINQ and monads? I have a friend that's a long-time .NET developer and I've been trying to explain some of the benefits of FP, but we lack a common language to discuss the concepts. Also, doesn't the lack of higher-kinded types preclude use of monads in C#?

If you implement your own type that includes the right functions with the right signatures, you can incorporate expressions with that type into LINQ expressions. Crucially, SelectMany (most notably, IEnumerable.SelectMany, but the compiler just needs a function of the right name and signature and does not care about the interface) is the monadic bind. I believe Eric Lippert expands on this in his blog series on monads: https://ericlippert.com/2013/02/21/monads-part-one/
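For a language-neutral sketch of that correspondence (in Python rather than C#, just for brevity), here is a hypothetical Maybe type whose select_many mirrors the shape of the monadic bind that SelectMany provides; the names Maybe and maybe_add_one are illustrative, not from any real library:

```python
from typing import Callable, Generic, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

class Maybe(Generic[A]):
    """A minimal Maybe/Option: either holds a value or holds nothing."""
    def __init__(self, value: Optional[A], has_value: bool):
        self._value, self._has_value = value, has_value

    @staticmethod
    def of(value: A) -> "Maybe[A]":
        return Maybe(value, True)

    @staticmethod
    def nothing() -> "Maybe[A]":
        return Maybe(None, False)

    # The monadic bind: the role SelectMany plays for LINQ.
    def select_many(self, f: Callable[[A], "Maybe[B]"]) -> "Maybe[B]":
        return f(self._value) if self._has_value else Maybe.nothing()

def maybe_add_one(x: int) -> "Maybe[int]":
    return Maybe.of(x + 1)

result = Maybe.of(1).select_many(maybe_add_one)       # holds 2
absent = Maybe.nothing().select_many(maybe_add_one)   # stays empty
```

The compiler's duck typing means any type shaped like this participates in query syntax; nothing here depends on a particular interface.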

Yes, the lack of higher-kinded polymorphism makes it harder to express these abstractions universally; you can only write concrete implementations with Select and SelectMany.

SelectMany is similar to a monadic bind, with an extra result-selector parameter in its signature. With it, you can write the following expression:

    from one in Maybe.Of(1)
    from two in MaybeAddOne(one)
    select two

The second from clause compiles to a SelectMany call, with the final select folded into its result selector.

Another thing: async/await is pretty much the same idea as LINQ expressions, but hardcoded to the Task type.
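That parallel can be sketched in Python's asyncio (assumed here only for illustration; the same shape applies to C#'s Task): await sequences a deferred value with the function that consumes it, exactly the role bind plays in the query examples above.

```python
import asyncio

async def fetch_one() -> int:
    # Stands in for any computation that will eventually produce a value.
    return 1

async def add_one(x: int) -> int:
    return x + 1

async def main() -> int:
    # Each 'await' plays the role of a monadic bind, specialised to
    # the Future/Task monad:
    one = await fetch_one()    # like: from one in FetchOne()
    two = await add_one(one)   # like: from two in AddOne(one)
    return two                 # like: select two

assert asyncio.run(main()) == 2
```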

> Yes, the lack of higher-kinded polymorphism makes it harder to express these abstractions universally

Here's a way of doing ad-hoc polymorphism to implement a more general monad in C#. It's limited in that the return type for Bind can be any monad (although it still type checks). There are a few other issues (not least it's ugly as hell). But, it does allow for building of very general monadic operations (along with other ad-hoc polymorphic types).

    // Typeclass (ish)
    public interface Monad<MA, A>
    {
        MA Return(A x);
        MA Fail(object error = null);
        MB Bind<MonadB, MB, B>(MA ma, Func<A, MB> f)
            where MonadB : struct, Monad<MB, B>;
    }

    // Class instance of Monad for Option
    public struct MOption<A> : Monad<Option<A>, A>
    {
        public MB Bind<MonadB, MB, B>(Option<A> ma, Func<A, MB> f)
            where MonadB : struct, Monad<MB, B> =>
                ma is Some<A> x
                    ? f(x.Value)
                    : default(MonadB).Fail();

        public Option<A> Return(A x) =>
            new Some<A>(x);

        public Option<A> Fail(object error = null) =>
            new None<A>();
    }

    // Option 'discriminated union'
    public interface Option<A>
    {
    }

    public class Some<A> : Option<A>
    {
        public readonly A Value;
        public Some(A value) => Value = value;
    }

    public class None<A> : Option<A>
    {
        public None() { }
    }

    // Testing adding any two M<int> types together. Doesn't need to work
    // only with ints; it's just a simple example.
    public class Test
    {
        public void Test1()
        {
            // Some 200
            var r1 = AddAnyMonads<MOption<int>, Option<int>>(new Some<int>(100), new Some<int>(100));

            // None
            var r2 = AddAnyMonads<MOption<int>, Option<int>>(new Some<int>(100), new None<int>());
        }

        public MInt AddAnyMonads<MonadInt, MInt>(MInt ma, MInt mb)
            where MonadInt : struct, Monad<MInt, int> =>
            default(MonadInt).Bind<MonadInt, MInt, int>(ma, x =>
            default(MonadInt).Bind<MonadInt, MInt, int>(mb, y =>
            default(MonadInt).Return(x + y)));
    }

I have a more complete example in language-ext [1] which unifies monads that take inputs (like Reader and State) and produce output (like Writer) with the more simple monads like Option.

LINQ is syntactic sugar for monads, just like do notation in Haskell. When you write 'from a in selector select a.x', that is translated to 'selector.then(a => return(a.x))', where 'then' is the C# equivalent of 'bind' and 'return' the same as Haskell's 'return'. That means that line is the same as 'do {a <- selector; pure (x a)}' in Haskell.

## 47 Comments:

I've been following this series since the author started it.

For context, when I first became aware of Mark Seemann, he was very deep into the object-oriented way of doing things. I discovered him because he was the author of (the excellent) Dependency Injection in .NET. A while after writing that book, he started getting into F#, and eventually Haskell, and blogged about it all the while. The upshot being, his blog archive now tells a fascinating story of the evolution of one person from being an extremely accomplished object-oriented programmer toward being a serious functional programming advocate. He's also done a very good job of explaining what FP has to offer to an object-oriented programmer, or at least one who likes a certain style of OOP. For example, see https://blog.ploeh.dk/2017/01/27/from-dependency-injection-t...

Been following this guy for a while now too. And you're absolutely right. It's been fun watching his evolution. Glad to see this series of articles posted here. I've been sharing this series with my coworkers and we've since gotten into Haskell, Elm, and just taking more functional approaches in general in our primary stack.

Same here. His book (there's now a second edition!) is excellent. I've found few authors that publish articles that resonate with me as much as Mark.

He's on a couple of episodes of the .NET Rocks podcast too, definitely worth a listen.

For the sake of throwing credit out there, my understanding is that the new edition[1] is very heavily revised, and most of the revision was done by the co-author, Steven van Deursen.

I think I remember seeing Seemann write on his blog that the new version is a much better reflection of his current thoughts on how to do DI, and object-oriented design in general, than the first edition was. In particular, the 2nd edition is much less focused on heavyweight tools, and takes an overall cleaner approach to things. Nowhere near as much Castle (the .NET equivalent of Spring) type stuff.

[1]: https://www.manning.com/books/dependency-injection-principle...

I've been saying for a few years that design patterns are algebras seen dimly. People have asked me for references for that, and I haven't had anything to point them to. At this point I would modify it as algebras plus invariants.

I would say category theory is the wrong title for this, though. Category theory is occupied with the algebras that emerge when you put algebras into the same space and look at mappings among them. It emerged in algebraic topology where you would construct a family of algebras that describe aspects of a geometric structure and then establish connections among them to talk about the effects of operations on those geometric structures. Categories gave you one algebra, functors gave you families of algebras, and natural transformations gave you operations on the underlying spaces.

Many of the algebras described here and others that I have found useful emerged earlier in abstract algebra or universal algebra (monoids, lattices) or in parallel in combinatorics (magmas).

The category theoretic view is often an inconvenient setting. Lattices, for example, are best dealt with simply as a set with some operations that obey specific axioms. You get theorems from the axioms, and if you need one of those theorems to apply to your system, you make sure you implement it in such a way that it obeys those axioms.

Or if you have an algebra near to what you need, you may be able to modify it in the abstract setting to keep properties that you want. I ended up doing this with lattices to describe tagging regions of genomes.
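As a tiny illustration of treating a lattice as "a set with some operations that obey specific axioms": the integers with min as meet and max as join form a lattice, and the lattice laws can be checked directly on a sample (a Python sketch, not anything specific to the genome-tagging application mentioned above):

```python
from itertools import product

# Treat (int, min, max) as a lattice: meet = min, join = max.
meet, join = min, max

elems = range(-3, 4)
for a, b, c in product(elems, repeat=3):
    # Commutativity
    assert meet(a, b) == meet(b, a) and join(a, b) == join(b, a)
    # Associativity
    assert meet(meet(a, b), c) == meet(a, meet(b, c))
    assert join(join(a, b), c) == join(a, join(b, c))
    # Absorption: a meet (a join b) == a, and dually
    assert meet(a, join(a, b)) == a
    assert join(a, meet(a, b)) == a
```

If your implementation satisfies these axioms, every theorem proved from them applies to your system for free.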

I half-agree with you.

The agreement part is that the algebraic structures that naturally arise in my programming experience are indeed ultra-basic: monoids, lattices, semirings, partial orders, and so on. (When I need elementary linear algebra things are getting fancy!)

The disagreement arises from the fact that the right home for these basic concepts is often surprisingly exotic. For example, it's often the case that what you need is not a monoid in the category of sets, but in the category of sets and relations or partial orders and monotone functions.

Another example is when you want to represent syntax trees with scoped binding structure. Now a syntax tree is the dumbest possible algebraic structure (it's a free algebra with all generators and no relations), but when you have binding you need a free algebra in the category of presheaves over finite sets and injections.

I am still learning this stuff (and have only had undergrad abstract algebra), so thank you for the context and motivation.

As it happens, today I was thinking about semirings, and how to apply them to discrete event systems[1][2], and was fascinated to see that both algebraic geometers (Alexander Grothendieck) and computer scientists (Robin Milner) were doing[3] some of the same[4] sorts of tricks to transfer algebraic structures (like rings) over to settings lacking an inverse (like semirings)--rather than take the direct approach (as you mentioned in your lattice example):

There are many times in mathematics where some class of objects has a nice additive structure, except that one can’t always invert elements. One approach is to say, “oh well,” and develop the theory of monoids; another, which we’ll discuss today, adds in formal inverses, in effect pretending that they exist, and uses them to recover something useful.

[1] https://www.cl.cam.ac.uk/%7Esd601/papers/semirings.pdf

[2] http://r6.ca/blog/20110808T035622Z.html

[3] https://web.ma.utexas.edu/users/a.debray/lecture_notes/sophe...

[4] https://old.reddit.com/r/haskell/comments/2d8q24/additivemul...
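The "add formal inverses" trick the quote describes is the Grothendieck group construction. A minimal Python sketch for the monoid (N, +): represent the formal difference a - b as the pair (a, b), with (a, b) ~ (c, d) exactly when a + d == c + b (the function names here are made up for the example):

```python
def g_add(x, y):
    """Add two formal differences (a, b) ~ a - b componentwise."""
    return (x[0] + y[0], x[1] + y[1])

def g_neg(x):
    """The formal inverse that N itself lacks: swap the components."""
    return (x[1], x[0])

def g_eq(x, y):
    """(a, b) ~ (c, d) iff a + d == c + b, i.e. a - b == c - d."""
    return x[0] + y[1] == y[0] + x[1]

three = (3, 0)       # represents 3
minus_two = (0, 2)   # represents -2, which doesn't exist in N
assert g_eq(g_add(three, minus_two), (1, 0))      # 3 + (-2) == 1
assert g_eq(g_add(three, g_neg(three)), (0, 0))   # x + (-x) == 0
```

Applied to (N, +) this recovers Z; the same completion underlies K-theory on one side and some of the process-calculus constructions on the other.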

The point of design patterns is not just to solve particular problems, but to solve larger problems by putting together solutions to smaller problems in a coherent way. Category theory is just the right language for talking about putting things together, even if the individual pieces you want to put together are less sophisticated than, say, a cartesian closed category or a universal property.

So I disagree, I think the title is right. If you don't go as far as category theory then you've stopped short of the biggest pay-off.

> The point of design patterns is not just to solve particular problems, but to solve larger problems by putting together solutions to smaller problems in a coherent way.

That is an interesting description of design patterns, and not one I've heard before.

I've written a few blog posts somewhat related to this at https://philipnilsson.github.io/Badness10k/, though whether they're any good as an introduction depends on the reader's level.

The author appears to be making the common functional programmer mistake of using "category theory" to mean "formal mathematical language", likely because they usually encounter category theory for the first time when they see Haskell's "monad", shortly before they see all the Haskell typeclasses named for other non-category-specific algebraic objects like monoids and morphisms.

Rest assured, you absolutely do not need any category theory to be an effective functional programmer. Everything is a category because category theory is the most general formulation of algebra. But you don't need to know categories for any of it. Not even for monads.

It's like trying to use the calculus of variations to find the area of a triangle. You can do it, but it's pointlessly overcomplicated overengineering. Chasing after ever-higher abstractions when you only have one concrete case to abstract over is the functional programmer's version of OO architecture astronautry.

Yeah, I often wonder if people interested in "category theory" from a programming perspective are really just interested in "algebra".

Certainly most of the "category theory for programmers" tutorials I've seen have the flavor of an undergrad algebra course: starting out with a long introduction about why you should care about abstract structures, then giving a few definitions.

The only difference is that for some reason they dive right into esoteric definitions like "monoids" and "magmas" and "monads" instead of the bedrock useful things from algebra like "groups", "rings", "fields", etc.

By the way, "monoid" and "magma" are no more categorical concepts than "groups" or "vector spaces" or anything else, reinforcing my suspicion that categories are not the fundamental thing people care about here.

> Notice that there's more than one way to combine numbers. You can add them together, but you can also multiply them. Could there be a common abstraction for that? What about objects that can somehow be combined, even if they aren't 'number-like'? The generalisation of such operations is a branch of mathematics called category theory,

You would think the author was about to introduce the concept of a "group" here. Literally if you replace "category theory" with "group theory" here, the entire paragraph still makes sense. What in the world is the point of talking about categories here? Again, when people say "category theory" do they really just mean "algebra"?

It's weird. I really don't get how this caught on, and to be honest it feels a bit like cargo culting. As someone who studied math, it seems ludicrous to me that someone should care about category theory without having studied algebra first. It's like memorizing the C++ standard without ever having written a real program. The whole point of categories is to generalize algebraic structures; how can you appreciate the point of them without actually having seen any concrete algebraic structures?

> The only difference is that for some reason they dive right into esoteric definitions like "monoids" and "magmas" and "monads" instead of the bedrock useful things from algebra like "groups", "rings", "fields", etc.

A remark: from a programming perspective, monoids are not esoteric, and are indeed more important than any of the intro abstract algebra structures. This is because free monoids are ordered lists, the most fundamental data structure in all of programming.
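To make "free monoids are ordered lists" concrete: lists under concatenation, with the empty list as identity, satisfy the monoid laws, and "free" means a map on the generators extends uniquely to a monoid homomorphism (a Python sketch; `hom` is a name invented for the example):

```python
from functools import reduce

# Lists form a monoid: associative concatenation with [] as identity.
xs, ys, zs = [1], [2, 3], [4]
assert (xs + ys) + zs == xs + (ys + zs)   # associativity
assert [] + xs == xs + [] == xs           # identity

# Freeness: choosing where single elements go fixes a unique monoid
# homomorphism -- here, from (lists, +, []) into (int, +, 0).
def hom(f, lst):
    return reduce(lambda acc, x: acc + f(x), lst, 0)

tens = lambda x: x * 10
assert hom(tens, [1, 2, 3]) == 60
# Homomorphism law: hom respects concatenation.
assert hom(tens, xs + ys) == hom(tens, xs) + hom(tens, ys)
```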

You can have a long and productive career as a C programmer knowing nothing but how to create and iterate over arrays! Monoids are so important that every single modern language (from C++ and Java all the way to Haskell and Scala) devotes a HUGE amount of language and standard library surface area to render data structures as sequences.

They are so important that even a language like Haskell is willing to accept utterly nonsensical semantics (hello, Traversable) to better support sequences.

> You can have a long and productive career as a C programmer knowing nothing but how to create and iterate over arrays!

True.

> Monoids are so important that every single modern language (from C++ and Java all the way to Haskell and Scala) devotes a HUGE amount of language and standard library surface area to render data structures as sequences.

Yes and no. Yes, they devote a huge amount of language and standard library surface area to sequences. No, nobody in C cares about that being a monoid, or is thinking about monoids when they use sequences, just like no NBA players think about general relativity when they're shooting a basketball. With C++... maybe a few think about it that way.

> This is because free monoids are ordered lists, the most fundamental data structure in all of programming.

Well, I think unsigned integers are even more fundamental than ordered lists, and those behave as the ring of integers mod 2^bit_width.
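The claim is easy to check with 8-bit arithmetic: addition and multiplication mod 2^8 satisfy the ring axioms, with wraparound supplying the additive inverses (a small Python sketch of the modular arithmetic, not of any particular language's integer type):

```python
M = 2 ** 8  # uint8 wraps modulo 256

add = lambda a, b: (a + b) % M
mul = lambda a, b: (a * b) % M
neg = lambda a: (-a) % M

assert add(200, 100) == 44   # 300 wraps to 44
assert add(7, neg(7)) == 0   # every element has an additive inverse
assert mul(16, 16) == 0      # 256 wraps to 0: zero divisors, so a ring but not a field
assert mul(3, add(5, 7)) == add(mul(3, 5), mul(3, 7))  # distributivity
```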

So aren't rings even more important in programming than monoids?

My answer would be that neither really is.

You're confusing a few examples of important things happening to be X with the concept of X being an important concept. When people use ordered lists, they are not thinking of them as monoids, or using any interesting theorems from monoid theory.

If mathematicians had never come up with the definition of monoids, C and C++ would have been designed in exactly the same way as they are now.
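The mod-2^bit_width behavior mentioned above can be demonstrated directly; a Python sketch simulating 8-bit unsigned arithmetic (helper names are illustrative):

```python
# Simulate 8-bit unsigned arithmetic: every result is reduced mod 2**8,
# which gives exactly the ring Z/256Z that uint8_t arithmetic inhabits.
MOD = 2 ** 8

def u8_add(a, b):
    return (a + b) % MOD

def u8_mul(a, b):
    return (a * b) % MOD

# Addition wraps around, as C's uint8_t does:
assert u8_add(250, 10) == 4  # 260 mod 256

# Ring axioms hold, e.g. distributivity of * over +:
a, b, c = 200, 77, 33
assert u8_mul(a, u8_add(b, c)) == u8_add(u8_mul(a, b), u8_mul(a, c))
```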

> They are so important that even a language like Haskell is willing to accept utterly nonsensical semantics (hello, Traversable) to better support sequences.

I've mostly ignored Haskell for the past decade. What happened with Traversable?

That's a very helpful comment. I wonder if part of the blame lies with the fact that mathematics doesn't have great terminology here: it's taught in US universities as "Abstract Algebra", but my impression is that actual mathematicians find that term silly. On the other hand, just calling it "algebra" is confusing for people with more humble mathematical backgrounds, since for them "algebra" means solving simultaneous equations and the like.

Indeed; in case it’s not clear from my comment, the above poster is right. I’m talking about abstract algebra, i.e. the study of algebraic structures, not about elementary algebra.

In French, the situation is even worse: number theory is also called “arithmetic”. According to legend, sometimes innumerate peasants wanting to learn how to do basic sums would pick up a book called “arithmetic” in a bookstore that turned out to be a dense theoretical monograph on number theory.

This. It's my biggest peeve that FP advocacy is now so wrapped up in Haskell and allusions to category-theory-based enlightenment. It sometimes feels a little like the old-school template shenanigans you could get up to with C++: ostensibly it was all so generic and reusable, but in reality all it usefully did was massage your ego by helping you feel really clever.

Ah, but you see, now it's a different set of people who get to feel really clever...

There doesn't seem to be any actionable content in this series. Definitions and examples are presented, but there's nothing to say "And now you can do $incredibly_useful_thing which you weren't able to do before, like this..."

Maybe that's going to be added later, but so far I'm not seeing any practical benefit.

Also, LtU has this at the moment, which is supposed to be a very approachable introduction to applied category theory:

http://lambda-the-ultimate.org/node/5571

> Category theory ... has the potential to be a major cohesive force in the world, building rigorous bridges between disparate worlds, both theoretical and practical.

I got this same impression while delving into type theory [0], but even more so with category theory. It seems the chaos of the information explosion is being organized, and how we presently do software and data may go the way of the dinosaur "overnight".

0: https://en.m.wikipedia.org/wiki/Type_theory

Indeed, we (David Spivak et al) are (among other things) trying to apply those seven sketches in applied category theory to business problems and are soliciting proposals for up to $1.5M in venture funding from interested start-ups to do so: https://conexus.ai/ventures. Individuals interested in applying CT are also encouraged to apply to get matched with a venture.

I missed seeing the individual option. Thanks for that. See you around.

My supervisor's opinion on this topic seems to be that we would not end at category theory, but that it is a step in the right direction of where to focus our attention.

EDIT: For example, my MSc was on the topic of a category equipped with a functor into it. This allows one to construct (parts of; e.g., I haven't done subobject classifiers) a self-dual set theory.

You can look at the HoTT program to see what a synthesis of (higher) category theory and type theory would look like (homotopytypetheory.org).

Thanks for the link. I read the intro and it seems very approachable.

To be honest, I don't see the majority of developers being interested in category theory. We have to hand it to them on a silver platter by solving some problems that show it is useful. It doesn't sound pragmatic enough for people to even start looking at it.

One way I'm thinking of is to extract "patterns" automatically from their code. That would enable code review, advice, pointers, finding pattern duplication, a common-ground vocabulary... I'm sure automatically structuring the code from a CT point of view can be helpful.

Disclaimer: a few years ago, I fell in love with category theory, and more precisely with sketches (from Ehresmann). I linked machine learning and category theory [1] by automatically mapping data structures and algorithm definitions together (inputs/outputs and the operations between them) in order to be able to run any algorithm on any set of data. Then I introduced a heuristic based on Kolmogorov complexity to find the best model (algorithm output) to summarize the input data. Loved it!

[1] https://link.springer.com/content/pdf/10.1007/978-3-540-7497...

My mathematical training is mainly from the category theory school of thought (more specifically, the set theory inspiring category theory school of thought), and I have to say that I find it difficult to move from category theory to functional programming. I think it's because mathematics is more about a way of thinking than about the tools.

For example, reading category theory introductions meant for programmers is confusing to me, even though I know the mathematics already! I still need to get around to Milewski's book, which looks to be dually useful as "functional programming for category theorists".

I stopped for a moment to come back to this thread and say that this was extremely well-written. No words egregiously used or out of place, just the right conciseness and tone. Love it.

See also: "Compiling to categories" Conal Elliott

http://conal.net/papers/compiling-to-categories/

This is a good real example of applied category theory.

Exactly. Design patterns are a type of language... in fact Christopher Alexander, the architect who came up with the concept, talks about "pattern languages" (and that's the title of his first book). And category theory is a language... it's a language much more suitable for talking about the kinds of abstractions we have to deal with in software engineering (and maybe mathematics or at least parts of mathematics as well). Design patterns help, but they're too fuzzy... category theory lets you say the same things much more precisely.

I like this comparison, because one of Christopher Alexander's requirements for patterns is that you can draw a picture of them. And what is category theory but diagrams!

Discussed at the time: https://news.ycombinator.com/item?id=15399787

Rúnar Bjarnason gave a presentation in 2015 in which he implemented the GoF patterns in a functional style.

https://www.youtube.com/watch?v=hmX2s3pe_qk

I got really into F# for a couple of years, and so Mark Seemann was someone I would read and watch quite often. Something I've noticed is that F# has had very little impact as a language, but the small group of people involved with it seem really worth paying attention to. I feel like there's a chance that F#'s major contribution to programming will be its enthusiasts more than the project itself. For instance, check out this site by Scott Wlaschin: http://www.fsharpforfunandprofit.com

I would highly recommend this talk by Scott Wlaschin https://www.youtube.com/watch?v=srQt1NAHYC0

This series is extremely well-written and well-organized.

Just wondering, since most of the examples are in C#: why no monads? It seems pretty straightforward to express the idea in LINQ expressions.

Can you expand on the connection between LINQ and monads? I have a friend who's a long-time .NET developer, and I've been trying to explain some of the benefits of FP, but we lack a common language to discuss the concepts. Also, doesn't the lack of higher-kinded types preclude the use of monads in C#?

Monads in C# [1]

Arr<A>, Lst<A>, Seq<A>, Set<A>, HashSet<A>, Que<A>, Stck<A>, Option<A>, OptionAsync<A>, Either<A>, EitherAsync<A>, Try<A>, TryAsync<A>, TryOption<A>, TryOptionAsync<A>, Reader<Env, A>, Resource<A>, Writer<MonoidW, W, T>, State<S, A>, Task<T> (extensions), Nullable<T> (extensions), Validation<Fail, Success>, Validation<MonoidFail, Fail, Success> + Monad transformer stack

[1] https://github.com/louthy/language-ext/

If you implement your own type that includes the right functions with the right signatures, you can incorporate expressions with that type into LINQ expressions. Crucially, SelectMany (most notably, IEnumerable.SelectMany, but the compiler just needs a function of the right name and signature and does not care about the interface) is the monadic bind. I believe Eric Lippert expands on this in his blog series on monads: https://ericlippert.com/2013/02/21/monads-part-one/
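The correspondence described here, SelectMany as monadic bind, can be sketched for the list monad; here in Python, with illustrative names (`select_many`, `unit`):

```python
# SelectMany on sequences is a flat-map: apply f to each element,
# then concatenate the resulting sequences. That is the monadic
# bind (>>=) for the list monad.
def select_many(source, f):
    return [y for x in source for y in f(x)]

# "return"/"unit" for the list monad wraps a value in a singleton list.
def unit(x):
    return [x]

# Example: pair each number with its negation.
result = select_many([1, 2, 3], lambda x: [x, -x])
assert result == [1, -1, 2, -2, 3, -3]

# Left identity law: binding unit(a) with f is just f(a).
f = lambda x: [x, x + 1]
assert select_many(unit(5), f) == f(5)
```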

Yes, the lack of higher-kinded polymorphism makes it harder to express these abstract types universally; you can only write concrete implementations with Select and SelectMany.

SelectMany is similar to a monadic bind, with an extra selector parameter in its signature. With it you can achieve the following kind of expression:

The first line will become a Select call and the second line will become a SelectMany call. Another thing: async/await is pretty much similar to LINQ expressions, but it's hardcoded to the type Task.
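As a hedged illustration of that Select vs. SelectMany split (the concrete query here is hypothetical): a single projection is a plain map, while a nested source is a flat map. Python comprehensions desugar the same way:

```python
# One generator clause is a plain map (C#'s Select):
squares = [x * x for x in [1, 2, 3]]
assert squares == [1, 4, 9]

# Two generator clauses flatten a nested source (C#'s SelectMany):
pairs = [(x, y) for x in [1, 2] for y in ['a', 'b']]
assert pairs == [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
```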

> Yes, lack of the higher kind polymorphism makes it harder to express these abstract types universally

Here's a way of doing ad-hoc polymorphism to implement a more general monad in C#. It's limited in that the return type for Bind can be any monad (although it still type checks). There are a few other issues (not least it's ugly as hell). But, it does allow for building of very general monadic operations (along with other ad-hoc polymorphic types).

I have a more complete example in language-ext [1] which unifies monads that take inputs (like Reader and State) and produce output (like Writer) with simpler monads like Option.

[1] https://github.com/louthy/language-ext/blob/master/LanguageE...

LINQ is syntactic sugar for monads, just like do notation in Haskell. When you write 'from a in selector select a.x', that is translated to 'selector.then(a => return(a.x))', where 'then' is the C# equivalent of 'bind' and 'return' is the same as Haskell's 'return'. That means that line is the same as 'do {a <- selector; pure (x a)}' in Haskell.
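That bind/return translation can be mimicked for an Option-style monad; a minimal Python sketch, where the names `then`, `unit`, and the `Point` record are all illustrative (None stands in for Nothing/absence):

```python
# A minimal Option/Maybe-style monad: None represents absence.
def unit(x):          # the monad's "return": wrap a plain value
    return x

def then(m, f):       # bind: short-circuit on None, otherwise apply f
    return None if m is None else f(m)

# 'from a in selector select a.x' corresponds to
# selector.then(a => unit(a.x)); a hypothetical record with an
# .x field stands in for the selector here.
class Point:
    def __init__(self, x):
        self.x = x

selector = Point(42)                                  # a successful computation
assert then(selector, lambda a: unit(a.x)) == 42

missing = None                                        # a failed computation
assert then(missing, lambda a: unit(a.x)) is None     # bind short-circuits
```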