I am really excited by Racket.
Unlike most minimalist LISP languages with a community of individual hackers who each have their own macros and packages, this one is batteries included while still being the most flexible with the #lang header. Seriously, it comes with awesome data structures (actually more than Python).
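A quick, hand-picked sample of what ships with #lang racket (the bindings below are all standard, but this is far from exhaustive):

```racket
#lang racket

;; A few of the data structures that come with Racket out of the box:
(define h (hash 'a 1 'b 2))   ; immutable hash table
(define s (set 1 2 3))        ; immutable set (racket/set, included in #lang racket)
(define v (vector 1 2 3))     ; fixed-size vector

(displayln (hash-ref h 'a))   ; prints 1
(displayln (set-member? s 2)) ; prints #t
(displayln (vector-ref v 0))  ; prints 1
```

There are also mutable variants of each, plus streams, sequences, and more in the standard library.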
The only thing that prevents me from using it as much as Python or C++ is the lack of tools for major editors. DrRacket is NOT ok, I would like a langserver and vscode extension so badly! When I have a bit of free time and have finished other projects with a higher priority, I'll definitely give it a try. (yes I'm aware there are some langservers but none of them is really production ready)
Emacs has a decent Racket mode with most of the functionality of DrRacket.
These days racket-mode for Emacs is more than decent!
Could wish for similar modes for SublimeText and VisualCode though.
Thanks for the pointer, I'll try and see whether I can port racket-mode as a vscode extension.
A Racket language server would be great. I wonder if some DrRacket internals could be spun off.
VS Code has a pretty good one
Hey, author here! The extension is only a syntax highlighting one atm (albeit a pretty good one, I hope), but I have bigger plans for it. The problem is, the development of Racket LSP has slowed a lot as of late, and I'm not skilled enough to make one myself.
Do Naughty Dog still use Racket? Anyone from there able to give some insight into what tools/dev environment they use there?
No they don't. AFAIK they moved away from Racket. I once contacted them for potential open positions I could list on racketjobs.com (I'm the creator).
Is the site down? I'm getting an empty page in Firefox currently.
Might be that non-www doesn't get redirected. Thanks for letting me know. Try this https://www.racketjobs.com.
And yet I'm still too stupid to understand the macro system. Oh well.
I also didn't know there was a Chez version. Racket already has a native AOT compiler, right? What would the advantage be to run on Chez, faster or smaller compiled code?
Is it the general Scheme syntax-rules macros that you're struggling with? Or something unique to Racket? (which I haven't used)
If it's the former, I found this little article helpful when I was new to Scheme:
For Racket-specific macro tooling, `syntax-parse` in combination with syntax classes is usually the suggested starting point: hygienic, with pattern matching and a bunch of other features for parsing s-expression grammars. As mentioned down-thread, Greg Hendershott's Fear of Macros is a good starting point for a number of folks. The curriculum for Racket School 2018 also has exercises that help introduce the syntax-parse tooling.
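As a small, hedged sketch of what syntax-parse buys you (the `swap!` macro here is a made-up example): the `a:id` annotations check at expansion time that both arguments are identifiers, so misuse gets a decent error message instead of a confusing expansion failure.

```racket
#lang racket
(require (for-syntax syntax/parse))

;; swap! exchanges the values of two variables.
;; The :id syntax-class annotations reject non-identifier arguments
;; at expansion time with a clear error message.
(define-syntax (swap! stx)
  (syntax-parse stx
    [(_ a:id b:id)
     #'(let ([tmp a])
         (set! a b)
         (set! b tmp))]))

(define x 1)
(define y 2)
(swap! x y)
(displayln (list x y)) ; prints (2 1)
```

Something like `(swap! x 5)` fails at compile time with "expected identifier", which is the kind of feedback plain syntax-rules can't easily give.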
This is a great tutorial. I used to be confused by macros in Scheme.
See http://blog.racket-lang.org/2019/01/racket-on-chez-status.ht... and scroll down to “Performance of Compiled Code.” That was back in January, so things may be even better now.
Try out this guide perhaps: http://www.greghendershott.com/fear-of-macros/
> And yet I'm still too stupid to understand the macro system. Oh well.
This talk by Robby Findler helped me a ton when I was first learning about macros in Racket. I've found that the best way to learn, though, is by doing: when you find a use case for a macro, figure out how to do that particular thing and, slowly but surely, things will start to click.
For macros, I'd recommend starting with very straightforward ones that use `define-syntax-rule`. For example:
As another comment said, you have to find a good opportunity to use a simple macro.
    #lang racket

    (define-syntax-rule (run3times code ...)
      (begin
        (begin code ...)
        (begin code ...)
        (begin code ...)))

    (define x 0)
    (run3times (set! x (add1 x))
               (display x)) ; shows ==> 123
(In this case, it is better to use the built-in `for`. Actually, `for` is defined as a macro; it is not part of the internal "secret" low-level language.)
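For comparison, a sketch of the same effect as `run3times` written with the built-in `for`:

```racket
#lang racket

;; Same behavior as the run3times example, using `for`:
(define x 0)
(for ([i (in-range 3)])
  (set! x (add1 x))
  (display x)) ; shows ==> 123
```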
Once you are confident and you grok the difference between a macro and a function, you can try the other ways to define macros. These other ways are more flexible and can make weirder macros, and have better support when the macro is used wrongly. There are good links in the sibling comments.
Another way to learn about all this stuff from the experts themselves is Racket School: https://school.racket-lang.org/
Getting rid of the C dependency and bootstrapping Racket in Scheme.
A few people exist who are genuinely able to use Racket macros, but they all have a PhD in CS.
As opposed to the larger group who think they understand them. I was once in that group, but now I've just given up.
Racket's macro transformers are about as easy as hygienic macros get, unfortunately. I can occasionally manage to make an er-macro-transformer based macro in CHICKEN work how I want.
defmacro was a lot easier, but I understand the aversion to it, even if it was rare that it was/is a problem.
syntax-rules based macros are cake, naturally, but they're also incredibly limited. That said, I have seen people implement massive, complex OO systems using a sort of ad-hoc state machine built in pages upon pages of syntax-rules rules.
I really wish there was a standard Scheme that compiled to native code for all desktop and mobile OSes and also supported multi-threading. Does anyone know of one?
Racket does tick some of those boxes:
* it has support for OS-level threads via places
* it can produce binary distributions via `raco distribute` by packing together the interpreter and your compiled code into a single executable
* while the current implementation is based on a bytecode-compiler and VM, I believe the Chez implementation actually compiles to native code (although the whole process is transparent to the user)
* it is possible to run Racket on ARM devices, in fact there was a recent thread about that on the mailing list
I realize this isn't 100% what you're looking for, but Racket does come with many nice features. Alternatively, you might also want to take a look at CHICKEN Scheme.
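For the places bullet above, a minimal sketch of OS-level parallelism (the worker and its one-number protocol are made up for illustration). Each place runs in its own OS thread with its own memory, and communicates over place channels:

```racket
#lang racket
(require racket/place)

;; Spawn a worker place that reads one number from its channel,
;; squares it, and sends the result back.
;; Note: a place body cannot close over surrounding bindings;
;; all communication goes through the channel.
(define (start-worker)
  (place ch
    (define n (place-channel-get ch))
    (place-channel-put ch (* n n))))

(module+ main
  (define p (start-worker))
  (place-channel-put p 7)
  (displayln (place-channel-get p)) ; prints 49
  (place-wait p))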
LambdaNative is a cross-platform development environment written in Scheme, supporting Android, iOS, BlackBerry 10, OS X, Linux, Windows, OpenBSD, NetBSD, FreeBSD and OpenWrt.
LambdaNative is a wrapper and abstraction over Gambit Scheme. It does not support multi-core threading.
Well, that was my best shot :(. I'm sorry I don't have an answer then.
You were asking for threads, and Gambit has lightweight threads, so I assumed there was a way to use them via LambdaNative. Multi-core threading is a different story though.
Yes, it's a bit of a pity that great functional languages like Scheme and even the veteran Common LISP are losing out to modern languages like Kotlin because of lack of support in these critical, functional areas. Especially when REPL-based development is so amazingly productive.
A few implementations have native threads
Kawa Scheme - compile to java bytecode - https://www.gnu.org/software/kawa/index.html
Chez Scheme - compile to native code - POSIX & Windows - https://cisco.github.io/ChezScheme/
Cyclone Scheme - compile to C - Linux only - https://justinethier.github.io/cyclone/index
Guile - VM + JIT compiler - POSIX & Windows - https://www.gnu.org/software/guile/
Bigloo - compile to C and java bytecode - POSIX, Windows, Mac, Android - https://www-sop.inria.fr/mimosa/fp/Bigloo/
Common Lisp has real threading with a standardized interface via Bordeaux-threads and, with some limitations, runs on most of the desktop and mobile operating systems one would expect (the limitation being that most workflows have you write the GUIs in a different language and then call into CL, although EQL is an exception to this rule: https://www.cliki.net/EQL).
Does EQL still depend on smoke like common-qt does? Because smoke is so deprecated now that I simply gave up trying to get it to build common-qt on a recent distro.
I don’t think so, it’s built on ECL, so the lisp is compiled to C and then linked to Qt. However, I’m not sure how all the details work.
And, there’s a qt5 version.
I know it's no Scheme, but Clojure seems to be doing OK (+ has solid multi core threading, REPL driven development, is a Lisp etc).
I guess at the end of the day it's very hard to compete with the JVM.
Indeed I miss CL's "live coding" every day since.
But do all applications need multi-core threading? Languages like Python get around this using the `multiprocessing` module, which is not the same as multi-core threading, but it is probably good enough for a majority of applications.
REPL-based development and interactive development are also features. Multi-core probably isn't the topmost thing on everybody's list.
Is typed racket still very slow? I'm interested in hearing how Racket's gradual typing works.
Typed racket isn't slow (as far as scripting languages go). Maybe you're thinking of Racket's "contract" system?
AFAIK typed racket does type-checking during compilation (via raco); the resulting code is no slower than normal Racket, although I'm not sure if it's faster either.
The contract system is different; it works on normal, untyped Racket code and performs checks at runtime; similar to using Python decorators to check the input and output of a function. This slows things down, so it's recommended to only check things at module boundaries. For example:
This module exposes the `factorial` function, which checks (at runtime) whether its argument and return value are non-negative integers (i.e. 0, 1, 2, ...). The actual calculation is done by the `fact` function, which is private and doesn't do any checks. If we don't separate the checks from the calculation, we would end up running the checks on every recursive call, which would slow things down a lot.
    #lang racket
    (require racket/contract)
    (require racket/match)

    (provide factorial)

    (define/contract (factorial n)
      (-> exact-nonnegative-integer? exact-nonnegative-integer?)
      (fact n))

    (define (fact n)
      (match n
        [0 1]
        [_ (* n (fact (- n 1)))]))
As far as I'm aware, Racket's gradual typing is done by contracts at the interface between typed and untyped code. Hence it's not that typed racket is slow, it's that passing untyped values into typed functions can be slow, if we want accurate blame information for errors.
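A hedged sketch of that typed/untyped boundary (the submodule name and `add-one` function here are made up for illustration): when untyped code imports from a Typed Racket module, the typed exports get wrapped in contracts derived from their types.

```racket
#lang racket

;; Hypothetical example: a typed submodule used from untyped code.
(module typed typed/racket
  (provide add-one)
  (: add-one (-> Integer Integer))
  (define (add-one n) (+ n 1)))

;; At this boundary Typed Racket inserts a contract, so a bad call
;; like (add-one "hi") raises a contract error blaming this untyped
;; module, instead of crashing inside the typed code.
(require 'typed)
(displayln (add-one 41)) ; prints 42
```

Those boundary checks are where the overhead (and the blame tracking) lives.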
I'm referring to Takikawa et al. (2016), who reported performance numbers 50-100 times worse when mixing typed and untyped Racket code together, in some cases 100 times slower execution. They used type annotations at the module level. Would be interesting to know if Racket developers have proven them wrong and, if so, how.
Just to make it clear, the authors of that paper (including myself) are all to some extent Racket developers. Some big improvements have been made but there are still pathological cases. For the latest published on this see this paper: http://users.cs.northwestern.edu/~robby/pubs/papers/oopsla20...
But also it's important to note that it's not Typed Racket or Racket in isolation that are slow, but the inter-mingling of the two due to contract overhead.
Yes, that fits with my own experience. I've not used Typed Racket itself, but I've used contracts in untyped Racket. I found them so slow that I ended up using a macro which discarded them unless it was a run of the test suite.
Figure 3 in that paper is enlightening: the fully typed version takes 0.7x as long as the untyped version, so Typed Racket is slightly faster than normal Racket. Most of the partially-typed versions take 50x to 100x as long, as you say, showing that it is indeed the contracts that slow everything down.
Takikawa is a core racket dev, I believe.
It depends upon what you mean by slow. Typed Racket code compiles more slowly than normal Racket code, but should run about as fast or faster (for a small set of specifically-typed code).
Font looks like shit in Firefox on macOS.
Maybe link to the blog post  instead.
For reasons I find difficult to understand, the text is between <pre> tags and uses a monospace font (Inconsolata). It is curious that a project that cites Matthew Butterick as a contributor should make such poor decisions about typography, especially when the rest of the documentation is above average in that area.
It's wrapped in <pre> tags because it is preformatted text of the type that would be used in email, etc., announcements (probably because it's the exact text from those media, without transformation), so not using <pre> would lose significant formatting.
Of course, the formatting is broken anyway, because two of the continuation lines of bullet-pointed lines are underindented by one space.
Making the font bold, in combination with the font choice, is what makes it look so bad in Firefox.
Matthew is a contributor, doesn't mean he is the Jony Ive of every single web page they publish.
No, but he is the “Jony Ive” of the website on which that announcement is published.