Hacker News

87 Comments:
notacoward said 6 days ago:

Pretty neat. Three things that occurred to me while reading it.

(1) Makes it pretty clear why exceptions are even more of a control-flow nightmare than goto. What a messy graph.

(2) The 'defer' diagram is a big WTF. Couldn't even figure out which box is supposed to represent which statement in the example.

(3) I'd love to see a graph of the control flow when you throw in a few spurious lambdas like "advanced" C++ programmers tend to do (and similar in a few other languages). Might even make exceptions look good.

bibyte said 6 days ago:

My takeaway was the exact opposite of yours. For example the goto graph seems very intuitive and simple. But it is much more complex to reason about compared to exceptions. My takeaway was that simple graphs like these can't really explain the pros and cons of high level programming constructs.

beetwenty said 6 days ago:

The tradeoff in goto-vs-exception is that a goto needs an explicit label, while an exception allows the destination to be unnamed, constrained only by the callstack at the site where it's raised.

That makes exceptions fall more towards the "easy-to-write, hard-to-read" side of things; implied side-effects make your code slim in the present, treacherous as combinatorial elements increase. With error codes you pay a linear cost for every error, which implicitly discourages letting things get out of hand, but adds a hard restriction on flow. With goto, because so little is assumed, there are costs both ways: boilerplate to specify the destination, and unconstrained possibilities for flow.

Jumping backwards is the primary sin associated with goto, since it immediately makes the code's past and future behaviors interdependent. There are definitely cases where exceptions feel necessary, but I believe most uses could be replaced with a "only jump forward" restricted goto.
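
A sketch of that forward-only discipline, written in Go (Go's goto is not restricted to forward jumps, but it is used only that way here; the function and step names are mine, purely illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// run acquires resources step by step; on any failure it jumps
// *forward* to a single cleanup label, never backward, so earlier
// behavior never depends on later code.
func run(failAt int) (steps string, err error) {
	steps += "A;"
	if failAt == 1 {
		err = errors.New("step 1 failed")
		goto cleanup // forward jump only
	}
	steps += "B;"
	if failAt == 2 {
		err = errors.New("step 2 failed")
		goto cleanup
	}
	steps += "work;"
cleanup:
	steps += "release"
	return steps, err
}

func main() {
	fmt.Println(run(0))
	fmt.Println(run(1))
}
```

The boilerplate cost beetwenty mentions is visible: the destination must be named, but the possible flows stay easy to enumerate.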

perthmad said 6 days ago:

It's even clearer when you consider higher-order functions. These diagrams are only able to represent functions of order at most 2, i.e. what is called "dependency injection".

lwb said 6 days ago:

My impression was that the article made the exceptions graph much more complicated than it needed to be. Each function doesn’t need three boxes, and they don’t need to each throw exceptions twice. IMO exceptions are much cleaner in practice and are great for things like “kill this script” or “throw a 500 error”.

edflsafoiewq said 6 days ago:

Actually the exceptions graph is simplified. The code to perform unwinding is not depicted.

lwb said 6 days ago:

The graph isn’t about implementations — it also doesn’t show that you have to increment a counter to make a loop.

rovolo said 6 days ago:

RE (2): Defer. The diagram has an extra box in the control flow which is unnecessary. You can label the big boxes from top to bottom like so: A E B <unnecessary> D C.

A good use case for understanding the control flow of 'defer' is when cleaning up resources.

    mutex.lock();
    defer { mutex.unlock(); }

    val f = File.open(filename);
    defer { f.close(); }

    // write to file
It's a nice construct because it keeps construction and destruction close together instead of far apart:

    mutex.lock();
    val f = File.open(filename);

    // write to file

    f.close();
    mutex.unlock();
ken said 6 days ago:

If you think exceptions look messy as a flow chart, try conditions! It’d be like exceptions × come-from.

And yet in practice, it’s really not that confusing. Expressivity counts for more than the number of branch opcodes generated by the compiler.

DonHopkins said 6 days ago:

It all boils down to "flaggy code" -vs- "spaghetti code" (or vexillology -vs- pastafarianism).

https://wiki.c2.com/?GoTo

dabei said 6 days ago:

Exception is a nightmare only if you use it as a general purpose control flow mechanism.

notacoward said 6 days ago:

Unfortunately, many do. When that includes popular libraries and frameworks, "abuse" becomes idiomatic or even strictly necessary. When it's in the language itself, such as Python's use of KeyError or StopIteration, that's even more true. Since better alternatives exist for most exception use cases, I'm of the opinion that direct language support was a bad idea. A more general and explicit cousin of signal handlers would suffice for those few "stop the world" cases, and can be done via library functions. Optional types, multiple return values, or plain old error codes (I like Zig's "non-ignorable error code" approach) suffice for the rest.
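
For illustration, the multiple-return-value style mentioned here looks like this in Go (Go doesn't make errors non-ignorable the way Zig does, and the names are mine, not from the thread):

```go
package main

import (
	"errors"
	"fmt"
)

// parsePositive shows the error-as-value style: failure is an ordinary
// return value the caller must explicitly inspect or pass along.
func parsePositive(n int) (int, error) {
	if n <= 0 {
		return 0, errors.New("not positive")
	}
	return n, nil
}

func main() {
	if v, err := parsePositive(42); err == nil {
		fmt.Println("ok:", v)
	}
	if _, err := parsePositive(-1); err != nil {
		fmt.Println("error:", err)
	}
}
```

The control flow stays local: every flowline out of the call is visible at the call site.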

DonHopkins said 6 days ago:

That's the exception that proves the rule.

tel said 6 days ago:

This sort of formalism is bad for handling multiple returns and thus exceptions. For instance, they become very simple in continuation passing style whereas GOTO is difficult.

Flowcharts feel like a simple and universal way of talking about, well, control flow, but they imply a bias toward a kind of reasoning which isn't necessarily any better than code itself.

antpls said 5 days ago:

> (2) The 'defer' diagram is a big WTF. Couldn't even figure out which box is supposed to represent which statement in the example.

I found the defer diagram very accurate and understandable, but I already knew what "defer" does.

From the preceding examples, I understand that the little box is an instruction the developer didn't write, but that was added automatically. From the graph, I understand that the instructions are executed in a different order from how the developer wrote them (exactly in reverse order, in fact).

LessDmesg said 6 days ago:

(1) There's nothing messy about that graph. It's just two nested function calls. An exception by itself is just a goto to the catch block.

poiuyt098 said 6 days ago:

yep, the goto example is doing something completely different from, and less than, the exception code; of course the latter will look messier.

zozbot234 said 6 days ago:

There's actually a fairly natural way of representing Result-like types (including exceptions) in a flow chart. Draw multiple "exit" terminals for the function, one for "success" and one for each error case. Then a function invocation can have multiple flowlines leading from it, one for each case. And one can "rethrow" the exception quite naturally, by having a flowline for a "rethrown" exception lead to an exit terminal of its own. This also applies by analogy to the case of multiple entry terminals, which come up most naturally when desugaring async functions.

(It's actually even simpler than that when accounting for the functor and monad properties of Result<T, Err>, but the graphical representation for a "functor" or "monad" is a bit more complex; these would be drawn as labeled regions, and functor and monad properties would describe how such regions can be "merged" or "combined".)

avodonosov said 6 days ago:

exceptions are simple and convenient

DonHopkins said 6 days ago:

Except when they're not.

avodonosov said 6 days ago:

Don't blindly repeat what Go fans say; it's useful to use your own head sometimes.

DonHopkins said 6 days ago:

What are you even talking about? Are you replying to the wrong post? I don't even know how to play Go. I prefer Othello. It's simple and convenient. Anyway, didn't Dijkstra write a paper warning "Go Considered Harmful"?

jfkebwjsbx said 6 days ago:

> when you throw in a few spurious lambdas

What do you mean?

zabzonk said 6 days ago:

A brilliant explanation of why no-one uses flowcharts.

notacoward said 6 days ago:

...because if they did, they might realize that some of their favorite structures are a hot mess. Seeing things in a new way often leads to learning, which a good developer would welcome. Personally I can't remember the last time I used an actual flowchart, but I do use state machines. Sometimes I'll go through an exercise of re-casting complex logic as a state machine. Often, that leads to a realization that making the state machine explicit would yield far more testable and maintainable code than leaving it in its more common if/loop/nested-function form.
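
As a sketch of that recasting exercise (all state, event, and function names here are invented, not from the thread), an explicit transition table in Go might look like:

```go
package main

import "fmt"

// States and transitions are plain data, so they can be inspected
// and tested directly instead of living implicitly in nested
// ifs/loops/function calls.
type state int

const (
	idle state = iota
	running
	done
)

var transitions = map[state]map[string]state{
	idle:    {"start": running},
	running: {"finish": done, "abort": idle},
}

// step returns the next state and whether the event is legal here.
func step(s state, event string) (state, bool) {
	next, ok := transitions[s][event]
	return next, ok
}

func main() {
	s := idle
	for _, ev := range []string{"start", "finish"} {
		if next, ok := step(s, ev); ok {
			fmt.Printf("%d --%s--> %d\n", s, ev, next)
			s = next
		}
	}
}
```
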

AnimalMuppet said 6 days ago:

> ...because if they did, they might realize that some of their favorite structures are a hot mess.

They may only be a hot mess when expressed as a flowchart. But so what? I'm not programming in flowcharts. If exceptions can do what I want cleanly, why do I care if it's a mess considered as a flowchart? That's not my problem.

This argument is like saying that the library function I call is a mess of a flowchart. Why should I care? Can I express what I need to express cleanly by calling that function, or not?

Of all the arguments against exceptions, "it's got a mess of a flowchart" is absolutely the least relevant one.

throwaway17_17 said 6 days ago:

The flow charts in the link represent control flow. Compiler optimizations are, in part, constrained by their ability to map and statically determine control flow. So I think it is a grave mistake to say that something having a ‘mess of a flowchart’ is the least relevant concern about some PL construct. The implication that a flowchart representation offers nothing to a programmer amounts to saying that the performance characteristics of your code don't matter to the programmer. Sure, in some application domains (although I don’t think there are any) performance and efficiency may take a backseat; everywhere else, this information is useful and can very well illustrate the pitfalls and trade-offs of using certain constructs in code.

AnimalMuppet said 6 days ago:

I'm pretty much never worried about the performance of the "exception taken" case. The compiler can be really quite inefficient there before I have any reason to care.

There are places to worry about compiler efficiency. I don't think that this is one of them.

zozbot234 said 6 days ago:

Statically determining data flow is just as important, since it means being able to parallelize your code. Flowcharts are a very limited tool; other diagramming approaches (such as proof nets) can generalize them in a helpful way.

whatshisface said 6 days ago:

>they might realize that some of their favorite structures are a hot mess

Normal brain: Find the representation that best expresses the structure to discover the hidden simplicity in everything.

Galaxy brain: Use flowcharts to draw out how exceptions work, forget your previous understanding, and then realize that they are too complicated for anyone to use.

gugagore said 6 days ago:

I strongly agree that multiple ways to look at things is good.

Programming languages have different "things" built in. Like, if you have re-entrant function calls, then your language has a built-in data structure called "the stack". If you have some algorithm that needs a stack, you have the choice of using the built-in stack, or using an explicit stack.

If you have a programming language with some sort of type system, then you have tagged data built-in. You can model your domain in the type system, or you can model your domain using a tuple like (tag, datablob).

If you have object-oriented programming with dynamic (single) dispatch on the first argument, then your language somewhere has a built-in data structure called a table/dictionary/associative-array. If you want to implement some sort of switching behavior you can do a switch on the `tag`, or encode the tag in the type system, and use dynamic dispatch.

Some programming languages have support for asynchronous calls. That programming language has a built-in scheduler of sorts. You can use it, or build your own, (or use the OS!).

A state machine, I think, is a great example of the same idea. Your CPU has a bunch of state (e.g. the program counter), and your programming languages has a bunch more state (like the current stack frame, the line number). You can encode your state machine in that, or you can model it explicitly. How well you can do it "implicitly" depends on other language features, like whether you have coroutines.

I think it's challenging, without expertise that might come only from having solved an identical problem already, to know ahead of time when to use these built-in things or roll your own. When you mention testing, I think that's a great example of expertise you've acquired. If you would like to test certain invariants in your state machine, e.g. "a transition from A to B never happens", then you're going to have a hard time doing so if your state machine's state is encoded in the line number of your language interpreter or the program counter, or whatever.
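
When the machine is an explicit table, the invariant gugagore mentions ("a transition from A to B never happens") becomes a one-loop check. A toy Go sketch, with invented state and event names:

```go
package main

import "fmt"

// An explicit transition table: from-state -> event -> to-state.
var table = map[string]map[string]string{
	"A": {"go": "C"},
	"C": {"back": "A", "stop": "done"},
}

// hasEdge reports whether any event leads from `from` to `to`.
// The invariant "A never transitions to B" is just !hasEdge("A", "B").
func hasEdge(from, to string) bool {
	for _, next := range table[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("A -> B possible?", hasEdge("A", "B"))
}
```

If the same state were encoded in the program counter or the interpreter's line number, there would be nothing this direct to assert against.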

gnulinux said 6 days ago:

Exceptions are rather easy to reason about in code, but their flow chart looks like a hot mess. So your argument is invalid?

carapace said 6 days ago:

> Exceptions are rather easy to reason about in code

In some code, for some people.

> their flow chart looks like a hot mess

In code where the exceptions are "rather easy to reason about", the flowchart would also be easy to read.

In code where the exceptions hide a flowchart that's a hot mess the exception syntactic sugar hides the mess.

airstrike said 6 days ago:

One could argue hiding the mess is the whole point of the exception. It allows you to think about the problem with a much smaller cognitive load.

I'm not suggesting all code with exceptions is good. Just saying that not every exception is bad, and many are indeed great.

I can't imagine writing Python without try blocks...

carapace said 5 days ago:

Describing the mess in a not-messy way is the whole point of exceptions. If you have a mess in your code either clean it up or document it, don't hide it.

- - - -

Sometimes the problem you're solving really does require a gnarly flow-chart. If that's the case, you're almost certainly better off writing (and documenting!) a state machine instead.

If your control-flow graph is only a little gnarly then representing it with exception syntax is fine IMO (I write Python code too, with exceptions.)

The problem comes when you accidentally create a hidden gnarly graph (as the result of bad design up front, or "drift" over time as a piece of code gets reworked, maybe by multiple people who may not all be privy to the whole history of the design and code, or both) and then forget to do something crucial along some flow of control and you have a hard-to-debug bug.

notacoward said 6 days ago:

> I can't imagine writing Python without try blocks...

In other words, Python requires that model, but making something necessary isn't the same as it being good. If you go around breaking people's arms then splints and casts become necessary. Does that make arm-breaking OK?

> It allows you think about the problem with a much smaller cognitive load.

There's a very fine line between "smaller cognitive load" and "sweeping stuff under the rug". Since you mentioned Python, I'll point out that a lot of Python code only looks simple because it's incomplete and will fail catastrophically for what should be innocuous errors. Add the proper error handling and it's just as complex as code using an error-return or optional-type model. Often that means even more cognitive load because the happy path and the handlers are in multiple methods/classes/files (especially likely as multiple people hack on code over time).

chrisseaton said 6 days ago:

These graphs are how the compiler I work on represents your programs while it's compiling them!

throwaway17_17 said 6 days ago:

This is essentially the point I made in a comment above: these representations are valid considerations for a programmer to weigh when selecting abstractions and constructs to implement some functionality. But it seems like very few people agree.

tartoran said 6 days ago:

They can be useful though, like once in a blue moon in a whiteboarding session with non-technical users, explaining a bug or something like that

DonHopkins said 6 days ago:
tartoran said 6 days ago:

Yes, this is a great use. I wonder if there's any tool to automate this.

I just gave Modern Love a listen, thanks, this was great.

DonHopkins said 6 days ago:

Paul Lamere from The Echo Nest made an awesome music analysis demo called "The Infinite Jukebox" (for when your favorite song just isn't long enough), for automatically finding loopable points in music. And it can use that analysis to stretch a song out to any duration you want, by seamlessly skipping and looping parts of it to shorten or stretch it. It's interactive, so you can point and click on the arcs to control how it loops at any point.

https://en.wikipedia.org/wiki/The_Echo_Nest

http://infinitejukebox.playlistmachinery.com/

Infinite Jukebox

For when your favorite song just isn't long enough

This web app lets you upload a favorite MP3 and will then generate a never-ending and ever changing version of the song. Infinite Jukebox uses the Echo Nest analyzer to break the song into beats. It plays the song beat by beat, but at every beat there's a chance that it will jump to a different part of the song that happens to sound very similar to the current beat. For beat similarity it uses pitch, timbre, loudness, duration and the position of the beat within a bar. There's a nifty visualization that shows all the possible transitions that can occur at any beat. Built at Music Hack Day Boston 2012.

Billie Jean Forever:

http://infinitejukebox.playlistmachinery.com/?trid=TRSFLTX13...

Scatman (Ski-Ba-Bop-Ba-Dop-Bop) by Scatman John:

http://infinitejukebox.playlistmachinery.com/?trid=TRXKEZN13...

Lots more cool Echo Nest demos:

http://static.echonest.com/labs/

http://static.echonest.com/labs/demo.html

tartoran said 6 days ago:

Wow, I didn't know about this. Thank you, this is great !!

[0] This is a slightly related vaporwave loop which, I don't know, I find it mesmerizing.

[0]: https://invidio.us/watch?v=-RFunvF0mDw

anigbrowl said 6 days ago:

You can use them for just about anything once you get used to the idea of dynamic nesting. I'm particularly fond of this implementation:

http://www.dsprobotics.com/applications.html

theaeolist said 6 days ago:

Have a look at these diagrams for PL concepts, but done formally and with execution rules:

https://tnttodda.github.io/Spartan-Visualiser/

MadWombat said 6 days ago:

I don't know about everyone else, but to me these are confusing as fuck. In continue/break/early return diagrams, there is no second condition, so it is not clear when/why the dotted line logic path happens. The "no fall-through" has so many arrows it takes some zen meditation to figure out what it means. And the exception diagram is just amazing, does the catch block get executed no matter whether there was an exception or not? Because there is an arrow leading to it from the regular, no exception, path. I could go on, but it becomes progressively more and more bizarre.

keithb- said 6 days ago:

I agree. The semantics even within this set of flowcharts are confusing: blocks start out representing statements, but by the time you get to Dependency Injection they represent anything (library, service, etc.). Also, the dotted-line vs. solid-line convention doesn't help. With Goto or Exception, the dotted line is the process, not the alternative (which is solid). Those need to be inverted, because "normal" flow of control from statement to statement is altered and the "exceptional" flow is now active while other statements are ignored. Just waiting for someone to say "we tried this with UML already. please just stop."

martinfjohansen said 6 days ago:

Hi, I am the author of the article, thanks for noting an error in the exception and finally diagrams! I have fixed the error where controlflow always went to the catch block.

grawprog said 6 days ago:

>I don't know about everyone else, but to me these are confusing as fuck.

I was alright right up until it got to exceptions. Simple statements are fairly nicely represented in flowchart form and almost condense the information a bit. Then it hits exceptions and everything after that gets so convoluted it becomes pretty ridiculous. Imagine trying to write out even 100 lines of modern code like that. You'd need a page the size of the side of a barn just for that.

lwb said 6 days ago:

The point this article makes most clear is exactly to what extent C++ has the attitude of “screw it, let’s put it in”. There are only a handful of constructs listed that C++ doesn’t implement.

Also, TIL about “long jump”. Seems like a feature I’d never use in a million years, but maybe it has its applications.

TFortunato said 6 days ago:

LongJmp can be used as a way to implement exceptions at a low level, especially in languages like C, where you don't have them as a language construct.

You call SetJmp where you would want to put your try/catch blocks; then, no matter how deep down some call stack you go, if an error occurs you can immediately jump back to where you called SetJmp and handle/report the error, rather than having to bubble it up at every intermediate level, "returning" your way up the call stack.
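
Go's rough analogue of this pattern is panic/recover (not setjmp/longjmp itself): the deep frame bails straight back to the recover point without each intermediate frame checking an error. A hedged sketch, with invented function names:

```go
package main

import "fmt"

// deep recurses several frames down, then panics, playing the role
// of the longjmp from the bottom of the call stack.
func deep(n int) {
	if n == 0 {
		panic("error at the bottom")
	}
	deep(n - 1)
}

// run is the "setjmp" site: the deferred recover catches the
// non-local jump and turns it back into an ordinary value.
func run() (err string) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Sprint(r)
		}
	}()
	deep(5)
	return "no error"
}

func main() {
	fmt.Println(run())
}
```
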

0xff00ffee said 6 days ago:

Raise your hand if you at first thoroughly expected to mock this, and then giggled at "function" (vs. subroutine), patted yourself on the back for knowing everything up until ... ASPECT? DAFUQ IS ASPECT?

Just me? Ok, I'll show myself out.

Great way to visualize. I think Exceptions, Finally (and Promises) got a little messy, but would make a good poster.

antpls said 5 days ago:

The actual question is: did it allow you to quickly understand the "aspect" concept by using a common graphical representation? If yes, that's a win

0xff00ffee said 5 days ago:

No. The explanation isn't clear: why is g() inserted at one point and h() at another? This needs to be explained.

However, from what I observe, "aspects" look a lot like Common LISP Object System (CLOS) before/after/around methods.

quelltext said 5 days ago:

I'm sure you're not the only one, but yes AOP is something a lot of people interested in programming languages have likely heard of at some point.

said 5 days ago:
[deleted]
tabtab said 6 days ago:

The C-style switch/case construct is obsolete and awkward, and cleaner alternatives exist, such as VB.net's technique. There is no need for "Break".

The following could be added to C-style languages without overlapping with the existing syntax:

     select(a) {
        when 1,2,3 {...}
        when 4 {...}
        when 5,6 {...}
        ...
        otherwise {...}
     }
C# recently added a pattern-matching alternative, but it's still a bit verbose for most needs.
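
For what it's worth, this select/when sketch maps almost one-to-one onto Go's switch: value lists per case, no fallthrough, and default playing the role of otherwise. A runnable sketch (function name and labels are mine):

```go
package main

import "fmt"

// classify mirrors the select/when proposal: each case lists its
// matching values, nothing falls through, default is the otherwise.
func classify(a int) string {
	switch a {
	case 1, 2, 3:
		return "low"
	case 4:
		return "four"
	case 5, 6:
		return "high"
	default:
		return "other"
	}
}

func main() {
	fmt.Println(classify(2), classify(4), classify(9))
}
```
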
nneonneo said 6 days ago:

GCC has had case ranges as an extension for a long time now: https://gcc.gnu.org/onlinedocs/gcc/Case-Ranges.html

It’s a far cry from general pattern matching (a la Rust, Scheme) but it’s helpful in some circumstances. Otherwise, you can also stick multiple case labels together, i.e.

    switch(x) {
        case 1 ... 4:
        case 7:
        case 9 ... 16:
            ...
        default:
            ...
    }
I don’t see why you need to call the C-style construct obsolete and awkward, though. It works plenty well for enumerations, which is basically what it was designed for. If you need to do something more fancy, if/else if chains always work.
tabtab said 6 days ago:

Ranges are nice, but not a replacement for sets. (Allow both!) The main "fix" I propose is to get rid of the Break statement and the very need for it. If you have set-based lists, you don't need "fall through" behavior because you can have more than one item per matching expression.

The Break statement is error prone because it's easy to forget. Some languages "solved" that by making Break required; but if it's required, then it's superfluous: code-bloating eye clutter. If you've never used a language that didn't need "Break", you may not understand how annoying it is to go back.

My suggestion above allows C-ish languages to add a modernized version and yet keep the existing construct for backward compatibility. In other words, the old one would be "deprecated".

int_19h said 5 days ago:

> If you have set-based lists, you don't need "fall through" behavior because you can have more than one item per matching expression.

This is not entirely true. Ranges and sets allow you to run the same code for different inputs. Fallthrough allows you to run partially the same code for different inputs.

It's a rare case, so I don't think its utility warrants making fallthrough behavior the default. But it's nice to have it explicitly, like "goto case" in C#.

Anyway, the reason why it is the way it is in C, is because case labels are literally labels, and the switch statement itself is just a branched goto - which is why e.g. Duff's Device is a thing. And that, in turn, is probably because it's a descendant of "switch" in Algol-60, which was basically an array of labels.
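
Go lands where this comment suggests: no fallthrough by default, with an explicit fallthrough keyword for the rare case that wants to run partially the same code. A small sketch (names and strings are mine):

```go
package main

import "fmt"

// describe runs case 1's body and then, via the explicit
// fallthrough keyword, case 2's body as well.
func describe(n int) string {
	s := ""
	switch n {
	case 1:
		s += "one "
		fallthrough // opt in to running the next case's body too
	case 2:
		s += "at most two"
	default:
		s += "big"
	}
	return s
}

func main() {
	fmt.Println(describe(1)) // both case bodies run
	fmt.Println(describe(2))
}
```
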

tabtab said 2 days ago:

> Fallthrough allows you to run partially the same code for different inputs.

True, but it's a screwy way to manage such overlaps. The volume of errors and confusion exceeds the benefits in my opinion. There are other ways to do such. For non-trivial conditionals (such as overlaps), if-else is often the better tool anyhow. Case/Switch should only be used for simple mutually-exclusive value look-ups.

raslah said 6 days ago:

I actually really like flowcharts. Though they are somewhat inefficient for drawing out more complex structures, they are great for working out small sections of thorny logic. The only issue I sort of take with the ones in the article is that the physical flow is a little distracting when decisions don't branch laterally; having them continue top to bottom sort of throws me off. Perhaps it's just because I draw them with branches to the side.

dependenttypes said 5 days ago:

The break one does not make a lot of sense to me. Shouldn't they use another rhombus and a solid line instead? Same for continue and early return. Similar thing holds for goto.

inetknght said 6 days ago:

As someone who's self-taught everything, this is... awesome. It's exactly what I needed to be able to up my game in describing code to management using the flowcharts they've asked for.

zabzonk said 6 days ago:

In case you haven't gleaned it from the majority of comments here, flowcharts are a terrible way to describe code, particularly modern, multi-threaded code that uses exceptions. Don't go there.

inetknght said 6 days ago:

I quite agree. Unfortunately management wants single-threaded code and wants it documented with flowcharts. So right now there's lots (!) of clouds detailing "this calls into another API documented in another flowchart..."

AnimalMuppet said 6 days ago:

To me, that screams "put your resume out on the street".

I mean, I'm not opposed to documentation. It's important. But if your management thinks that flowcharts are the way to do it, I wonder what millennium they think it is.

inetknght said 6 days ago:

Indeed? Flowcharts aren't the only documentation. Swagger describes the API parameters, flowcharts describe the API code flow, and text provides a story, and comments in the code describe particulars.

Flowcharts are just where I've felt that I've significantly underperformed; where management has asked for more of them.

Putting my resume on the street is something I do often. Want to look at it?

If you think flowcharts aren't the way to describe code flow, then I'm curious what method you think is better.

AnimalMuppet said 6 days ago:

Ah. Flowcharts were all you mentioned, so I assumed (bad move) that they were all you did.

Depending on what exactly you are doing, parts of UML might be better. It lets you describe different aspects with different diagrams, so that, if one kind of diagram is sub-optimal for showing some aspect of what's going on, another kind might be better.

crimsonalucard said 6 days ago:

This is because flow charts are two dimensional. What you need is a third dimension to represent parallelism. These programs are better represented as 3D structures rather than 2D flow charts or even text.

AnimalMuppet said 6 days ago:

3D. I like it. I think I agree.

The closest I've seen to anything that actually does that is UML. But it does so only by having multiple different diagrams. That's like having a drawing with a front view and a side view - it's not a 3D diagram, it's two 2D diagrams that you can use to kind of see what's going on in 3D.

I don't know of any good system of notation that does what you're asking, nor any software for drawing or viewing such diagrams. But I agree that 2D flowcharts don't represent parallelism well, and that 3D looks like a better answer.

zozbot234 said 6 days ago:

UML is non-rigorous and has a whole lot of pointless incidental complexity. Intuitive representations of parallelism are fairly natural in data flow, but that is precisely dual to flowcharts; you can not represent both on the same 2D diagram without some very precise rules about what portions of the diagram are intended to represent each. Proof nets give you something not unlike this.

AnimalMuppet said 5 days ago:

Oh, I never said UML was ideal. I merely said that, if you squint, you could see it as a clumsy, inadequate, and probably unintentional half-step in the direction of crimsonalucard's idea.

anigbrowl said 6 days ago:

Obviously you don't like them, but flow based programming is pretty great actually.

throwaway17_17 said 6 days ago:

If you find these flowcharts instructive, and I think that you are well founded in doing so, good on you. A majority of other comments seem to be of the opinion that pointing out the complexity of control flow generated by the constructs in a language is a negative. It certainly seems like the prevailing attitude is ‘if I can’t see it, it doesn’t exist.’ But you said you were working on game code, so I will encourage you to keep thinking about the complexity of the choices you make in which constructs to use, and don’t buy into the thought that says performance implications are not a real concern.

inetknght said 6 days ago:

> you said you were working on game code

Actually, I didn't. I said I wanted to "up my game" which is a phrase commonly meaning "to get better". Said another way: I needed to get better at describing code to management.

> I will encourage you to keep thinking about the complexity of the choices you make in which constructs to use

Indeed, management generally wants simpler code even at the cost of performance. Simpler code generally translates to cheaper developers and cheaper maintenance.

> don’t buy into the thought that says performance implications are not a really concern

I actually write DNA analysis software. Performance implications definitely are a concern. :)

But performance is useless if I'm the only developer who understands how it all comes together (eg, high bus factor). So I've been writing a lot of documentation to that end, from file-level down to the systems-level and bit-level descriptions. I really would like better support for diagrams in markdown.

edflsafoiewq said 6 days ago:

Speaking of reasoning, I believe a Hoare proof of correctness for a flowchart is

* a choice of a precondition and postcondition, Pre(B) and Post(B), for every basic block B

* for every basic block B, a proof that {Pre(B)} B {Post(B)}

* for every arrow a : B1 -> B2, a proof that Post(B1) /\ Cond => Pre(B2) where Cond is the condition for the arrow to be taken
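
A toy instance of these three obligations (my own example, not from the thread): take a single block B with body x := x + 1 and a back-arrow a : B -> B taken while x < 10, over the integers.

```latex
% Choice of assertions for the one block B:
\mathrm{Pre}(B) \equiv x \le 9, \qquad \mathrm{Post}(B) \equiv x \le 10
% Block obligation:
\{\, x \le 9 \,\}\; x := x + 1 \;\{\, x \le 10 \,\}
% Arrow obligation for a : B \to B with \mathrm{Cond} \equiv x < 10:
x \le 10 \;\wedge\; x < 10 \;\Longrightarrow\; x \le 9
% which holds over the integers, since x < 10 already gives x \le 9.
```
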

proc0 said 6 days ago:

I love this! Visually seeing computation flow is great for understanding what the code is doing.

azhenley said 6 days ago:

Best article I have read in a while. I'm going to try to incorporate this into my PL course.

edflsafoiewq said 6 days ago:

The goto diagram contains an erroneous arrow and an extra block. It should look like

       o
       |
       v
    +--□
    |
    |  □<-+
    |  |  |
    |  v  |
    +->□--+
_0ffh said 6 days ago:

Pretty decent, but covers the pre-test loop as "loop" (unqualified), omitting the post-test loop.

emmanueloga_ said 6 days ago:

I think flow charts are a lot more useful when the granularity is more coarse: the boxes should represent functions, not the logic inside.

    compute thing -> predicate true ------> compute some other thing
        |         -> predicate false ->|
        \---------------loop-----------/
... etc ...
richdougherty said 5 days ago:

Pretty neat, but notably missing some of the more functional control flow concepts: tail calls, call with current continuation, delimited continuation, etc.
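A tail call in particular reduces to "goto with arguments", which is easy to draw as a flowchart edge. As a sketch of that idea (my own example, not from the article), a trampoline makes the flow explicit in a language without tail-call elimination:

```python
# A trampoline: tail calls are returned as thunks instead of being made
# directly, so a loop replaces the call stack entirely.
def trampoline(f, *args):
    result = f(*args)
    while callable(result):  # a returned thunk means "jump to the next call"
        result = result()
    return result

# Mutually recursive tail calls; note this simple scheme assumes the
# functions never legitimately return a callable as a final value.
def even(n):
    return True if n == 0 else (lambda: odd(n - 1))

def odd(n):
    return False if n == 0 else (lambda: even(n - 1))

print(trampoline(even, 100000))  # no stack overflow
```

As control flow, this is exactly the goto diagram: each thunk is an unconditional jump back to the top of the loop with new arguments.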

PacifyFish said 6 days ago:

What did the author use to make the charts?

martinfjohansen said 6 days ago:

I used "yEd live".

DonHopkins said 6 days ago:

The Nassi-Shneiderman diagram is one way of representing structured program flow by dividing up a two-dimensional area. It can't represent unstructured gotos and exceptions, though.

https://en.wikipedia.org/wiki/Nassi%E2%80%93Shneiderman_diag...

>A Nassi–Shneiderman diagram (NSD) in computer programming is a graphical design representation for structured programming. This type of diagram was developed in 1972 by Isaac Nassi and Ben Shneiderman who were both graduate students at Stony Brook University. These diagrams are also called structograms, as they show a program's structures.

>Overview: Following a top-down design, the problem at hand is reduced into smaller and smaller subproblems, until only simple statements and control flow constructs remain. Nassi–Shneiderman diagrams reflect this top-down decomposition in a straightforward way, using nested boxes to represent subproblems. Consistent with the philosophy of structured programming, Nassi–Shneiderman diagrams have no representation for a GOTO statement.

>Nassi–Shneiderman diagrams are only rarely used for formal programming. Their abstraction level is close to structured program code and modifications require the whole diagram to be redrawn, but graphic editors removed that limitation. They clarify algorithms and high-level designs, which make them useful in teaching. They were included in Microsoft Visio and dozens of other software tools, such as the German EasyCode.

>In Germany, Nassi–Shneiderman diagrams were standardised in 1985 as DIN 66261. They are still used in German introductions to programming, for example Böttcher and Kneißl's introduction to C, Baeumle-Courth and Schmidt's introduction to C and Kirch's introduction to C#.

>Nassi–Shneiderman diagrams can also be used in technical writing.

When Ben Shneiderman and Ike Nassi (who were grad students at the time) submitted their paper about them to Communications of the ACM, it was quickly rejected on October 4, 1972, with the most brutal rejection letter Ben Shneiderman has ever received.

But then they submitted it to the unrefereed ACM SIGPLAN Notices, where it was published in August 1973, and since then it has been cited many times, widely implemented by many tools, and used for educational and documentation purposes. It's a great example of the importance of persistence for people whose new ideas are brutally rejected by respected authorities:

Flowchart techniques for structured programming:

https://dl.acm.org/doi/10.1145/953349.953350

Scan of the brutal referee's report:

https://www.cs.umd.edu/hcil/members/bshneiderman/nsd/rejecti...

>My first thought was to write a referees report which would by its sarcastic nature be funny. For example, I thought of writing that it was a sound, useful theory, but it wasn't practical because it would be difficult to design flowcharting templates to be manufactured and sold.

>I guess, however, that it is best to come right out and say that I feel the best thing the authors could do is collect all copies of this technical report and burn them, before anybody reads them. My opinion is that it shows the inexperience and ignorance of the authors with respect to programming in general, and their misunderstanding of so-called structured programming.

>To say that [BEGIN body END] should be written as [NS diagram] is ridiculous. Even more ridiculous is having to write [DO i=1 TO N; DO J=1 TO N; S = 0; DO K=1 TO N; S=S+A(I,K)*B(K,J) END C(I,J)=S END END] as [NS diagram].

>The authors mention that "the ease with which a structured flow-chart can be translated into a structured program is pleasantly surprising". My retort is "yes, just erase those silly boxes!"

>Flowcharts are a crutch we have invented to try to understand programs written in a confusing style. This was due to our ignorance of the programming process and what was needed -- after all, programming is only 20-30 years old. So-called "structured programming" helps to limit us to, as Dijkstra calls them, "intellectually manageable" programs, in which case flowcharts are completely useless and in fact a hindrance to programming. They shouldn't be used.

>I shudder at the thought of further explorations revolving around the context-free nature of this [flowchart] language.

Ben's rejection letter story:

https://www.cs.umd.edu/hcil/members/bshneiderman/nsd/rejecti...

>Rejection letter from the Communications of the ACM

>Ben Shneiderman, June 12, 2003

>Our submission of the structured flowcharts to the Communications of the ACM was quickly rejected, on October 4, 1972, by the Programming Languages Editor David Gries of Cornell University. He included a single anonymous reference letter which is on paper that has a Cornell University watermark. I assume Gries gave our paper to one of his colleagues (you can play the guessing game too), who wrote the most brutal rejection letter I have ever gotten.

>The reviewer wrote: "I feel that the best thing the authors could do is collect all copies of this technical report and burn them, before anybody reads them." As graduate students, this stinging rejection shocked us, but we kept getting enthusiastic responses from people around us. We sent the paper to the unrefereed ACM SIGPLAN Notices, where it was published in August 1973. It didn't take long for others to produce extensions, software tools, and applications of structured flowcharts.

>The next problem was theft of the idea. I had sent a draft to respected colleagues, and soon others published slight variations. One of these respected colleagues was Ned Chapin, who greatly troubled us by publishing what he called 'Chapin Charts.' A friend of mine sent me his published paper with a note encouraging me to sue. For several years I feared that Chapin's reputation and his frequent professional seminars would wind up leaving the idea tied to his name, but as the decades passed, the ending has proved to be a happy one. We called the idea 'structured flowcharts', but they are widely known as Nassi-Shneiderman Diagrams.

>Another problem was the appearances of patents for variations on our idea, but these have not limited the widespread recognition we have gotten over the years.

>I wish every graduate student or young inventor would have the pleasure of seeing his/her ideas spread so far and influence the work of so many people. I also hope that the story of the bold rejection of our novel idea and its eventual international success, is an inspiration for anyone whose new ideas are rejected by some respected authorities.

More on Ben Shneiderman and NS diagrams:

https://news.ycombinator.com/item?id=11368430

https://news.ycombinator.com/item?id=11370099

https://news.ycombinator.com/item?id=22093072

Supermancho said 6 days ago:

This is a wonderful idea, but by the given numbering scheme, I can identify a slew of problems.

[1a. If] Note: This is the first time we see synchronous calls and decisioning, yet the representation makes no specific callout to the structure. The structure IS the case.

[4. Break] Inconsistency/Confusion: This is the first time we see a specific part of the flow called out as the structure (it isn't a structure). This is flow control, not a construct. This confusion plagues the rest of the document.

[7. Switch (No Fall-Through)] Inconsistency/Confusion: This is the same as if/elseif.../else, but the post goes with the oddly specific "switch". It would be more appropriate as [1c. if/elseif.../else] and [1d. Switch]. The different terminology (if vs. switch) does not change the fact that they are the same structure. Grouping switch-type constructs by name is a bad choice.

[9. Local Goto] Omission: The goto example has no exit node.

[11. Exceptions] Mistake: The exception travels the stack, meaning (from the perspective of the original parent function) it goes from the grandchild to the child, then to the parent's catch.
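The stack traversal this point describes is easy to observe directly. A minimal sketch (my own example) showing the exception unwinding through the child before reaching the parent's catch:

```python
# The exception raised in the grandchild unwinds through the child's frame
# (running its finally clause on the way) before the parent catches it.
order = []

def grandchild():
    raise ValueError("boom")

def child():
    try:
        grandchild()
    finally:
        order.append("child unwinds")  # runs while the exception passes through

def parent():
    try:
        child()
    except ValueError:
        order.append("parent catches")

parent()
print(order)  # ['child unwinds', 'parent catches']
```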

[12. Finally] Omission/Confusion: Finally should be its own block, since it is treated as such and has an implicit catch mechanism, even if it's not always required in the syntax.

[13. Blocking, Synchronous Calls] Confusion: This is rather pointless. All prior flows were blocking. Introducing another metaphor that retroactively changes past examples is inconsistent: synchronous (blocking) blocks are now represented by two different forms. It's unnecessary.

[14a. Non-Blocking, Asynchronous Call, Callback] Inconsistency/Confusion: There is a whole function being called, which should be represented by a starting and an ending node.

[14b. Device Input] Inconsistency/Confusion: This isn't even a different construct. This is event handling, and it is poorly represented, in the same way as 14a.

[15. Framework] Mistake/Confusion: This isn't a construct, nor is it an accurate representation. Frameworks are collections of idioms, which usually have little to do with constructs (as defined in the beginning). It feels like the author has gone off the rails here.

[18. Come-From, 19. Aspects & 21. Destructor] Confusion(?): The size of the block is supposed to indicate a specific syntax, which is inappropriate for describing logical constructs. They should be full-sized, optionally labeled blocks, since they are evaluated.

[20. Dependency Injection] Confusion: This is another "timing adds a new way to represent things" issue. Logically, there's no difference between an Aspect and DI.

[22. Defer] Confusion: Adding data (even a function) to a separate stack, which is called later, does not produce the diagram presented.
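The stack behavior behind defer is simple to demonstrate. A sketch using Python's `contextlib.ExitStack` as a stand-in for Go-style defer (my own example, not from the article):

```python
# Deferred calls are pushed onto a stack during execution and popped (LIFO)
# on the way out of the scope; that is the flow the diagram should show.
from contextlib import ExitStack

log = []
with ExitStack() as deferred:
    deferred.callback(log.append, "deferred #1")
    log.append("body runs")
    deferred.callback(log.append, "deferred #2")

print(log)  # ['body runs', 'deferred #2', 'deferred #1']
```

The last-registered callback fires first, which is why a correct diagram needs the exit path to visit the deferred blocks in reverse order of registration.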

All in all, this needs work. Even the grouping description at the bottom does nothing to help make this more useful.