I've been working on various versions of Thinc for a long time --- it's always been the name for the machine learning code powering spaCy. But previous versions really weren't worth using for other developers, and so I never wanted to advertise the library as something people should use.
The new version's been mostly rewritten, and this time, my feelings are different :). There's honestly a lot here that DL people should find interesting, even if you're not really looking for new tools atm (which is very understandable).
If you first learned deep learning by using PyTorch and TensorFlow, you might find Thinc to be an interestingly different perspective. The backpropagation mechanics are much more exposed, but functional programming means managing the gradients is little trouble.
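To make that concrete, here's a toy sketch of the pattern in plain NumPy (not Thinc's actual API, just the shape of the idea): a layer's forward pass returns its output together with a callback that backpropagates the gradient.

```python
import numpy as np

def linear(W, b):
    """A toy linear layer in the (output, backprop-callback) style."""
    def forward(X):
        Y = X @ W + b
        def backprop(dY):
            # Gradients with respect to the parameters and the input.
            dW = X.T @ dY
            db = dY.sum(axis=0)
            dX = dY @ W.T
            return dW, db, dX
        return Y, backprop
    return forward

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))
b = np.zeros(2)
X = rng.standard_normal((4, 3))

Y, backprop = linear(W, b)(X)
dW, db, dX = backprop(np.ones_like(Y))
```

Because each layer hands you back its own backprop closure, composing layers is just function composition, and the gradient plumbing mostly takes care of itself.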
It's also a great way to try out Python's new type annotation features. We designed the library to use type annotations, and have pretty detailed support for numpy semantics.
Other nice features include the config system and the wrappers system: you can use models from external libraries as part of a network, and wire different frameworks together. The idea behind Thinc is to work in conjunction with current DL tooling. We want to avoid providing our own optimised engine, making the library much more lightweight and compatible.
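For a flavor of the config system (illustrative only — the section layout follows Thinc's docs, but I haven't double-checked these exact registered layer names), you declare the model as nested sections and the registry builds it:

```ini
[model]
@layers = "chain.v1"

[model.*.hidden]
@layers = "Relu.v1"
nO = 64

[model.*.output]
@layers = "Softmax.v1"
nO = 10
```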
I'm going to try this simply because spaCy is so great. It's my go-to for NLP in different languages (notably German) and a joy to work with.
I was just talking with some friends today about how productive you guys are! Thanks for releasing this; we'll be playing around with it at the office.
Gotta say though, amazing work on branding across the suite of initiatives.
Damn, spaCy and FastAPI, not to mention Prodigy, make up a huge percentage of our team's development stack. It took us a while to even realize that tiangolo and syllogism work together, but there was definitely a moment of slack-jawedness across the team when this became clear.
Having built spaCy from source, I'd dealt with Thinc in passing before. I needed to look over some of its source briefly for debugging, but never paid a whole lot of attention to it. This is basically a complete overhaul, and it's beautiful.
It blows my mind how productive this team is. ExplosionAI and the spaCy-compatible model implementations from HuggingFace are basically responsible for a _huge_ amount of practical progress in making modern NLP models quickly and easily accessible. Now a general development framework that frankly makes TF/Keras and PyTorch pale in comparison wrt simplicity is absolutely astounding. Congrats everyone on the release!
Now I need to go back to packing up my apartment and ponder how I'll ever approach this velocity and quality.
I'm entering a phase where this stack may play a big part. Have you written up any of your experiences? (Or are you open to emailed questions?)
I actually have a draft write-up of what I've tentatively termed a "default prototyping stack for data science", but it focuses less on spaCy/NLP and ML frameworks in general and more on effective methods to produce simple applications that consume your model output, partially via FastAPI actually.
I'm happy to answer questions to the extent that I'm able to though. Feel free to contact me via any of the methods available in the link on my profile.
FastAPI creator here.
If you've used FastAPI you'll probably like Thinc.
It uses Python type hints extensively and even includes a Mypy plugin. So, you'll get nice and clear errors right in your editor for many operations that before were Numpy/Tensor "black magic". All before even running your code.
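As a toy illustration of the idea (Thinc's real types live in `thinc.types` and are much richer than this), you can give tensors of different ranks distinct static types, so mypy flags rank mismatches in your editor before anything runs:

```python
from typing import cast
import numpy as np

# Toy stand-ins for Thinc's Floats1d/Floats2d. These only matter to
# the type checker; runtime behaviour is unchanged.
class Floats1d(np.ndarray): ...
class Floats2d(np.ndarray): ...

def mean_rows(X: Floats2d) -> Floats1d:
    """Reduce a rank-2 tensor to rank 1 along axis 0."""
    # mypy would flag a caller that passes a Floats1d here.
    return cast(Floats1d, X.mean(axis=0))

X = np.arange(6, dtype="f").reshape(2, 3)
mu = mean_rows(X)
```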
And you can wrap TensorFlow, PyTorch, and MXNet with it, mix them with Thinc layers, etc.
Just for clarification, is it like a Keras replacement?
But Keras does not support PyTorch or MXNet. I think this design of Model, Block, and Layer is very intuitive and shared among several frameworks. I wish it had multi-GPU/multi-node training capability (i.e. support for Horovod or Gloo).
I must be missing something but this looks more complicated and confusing than plain PyTorch.
Interesting project! How does the type system work with variadic types and arbitrary tensor manipulations? e.g. a function transforms tensor[A, B] -> tensor[A].
The best answer is, "with some difficulty". I've written a bit about the struggles here: https://thinc.ai/docs/usage-type-checking#model-types
There are basically two approaches: TypeVar and overload. You probably want TypeVar if you have exactly that sort of situation, but there's a lot about type vars I still don't understand, and the docs say almost nothing. The problem is that the responsibility is split in two places: the Python typing module doesn't really take any responsibility for usage, and mypy sees it as not mypy-specific, so neither describes it very well.
Our current solution is to avoid being fully variadic in the tensor type, and instead have a limited number of subclasses. This allows us to get by with overload, because we can enumerate out the set of argument-to-return mappings. This is especially important for the numpy API where you often have to overload on the types and values of other arguments, e.g. if you pass axis=(0, 1), you'll get a different rank of tensor back than if you had written axis=-1.
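A rough sketch of what that looks like with `typing.overload` (toy types, not Thinc's actual definitions): the checker picks the return rank from the form of the `axis` argument, while a single runtime implementation does the work.

```python
from typing import Tuple, overload
import numpy as np

# Toy rank-tagged types standing in for Thinc's subclasses.
class Floats1d(np.ndarray): ...
class Floats2d(np.ndarray): ...

@overload
def sum_along(X: Floats2d, axis: int) -> Floats1d: ...
@overload
def sum_along(X: Floats2d, axis: Tuple[int, int]) -> float: ...
def sum_along(X, axis):
    # One runtime implementation; the overloads above only tell
    # the type checker how the output rank depends on `axis`.
    return np.sum(X, axis=axis)

X = np.ones((2, 3), dtype="f")
row_sums = sum_along(X, axis=0)    # checker infers Floats1d
total = sum_along(X, axis=(0, 1))  # checker infers a scalar
```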
You can see the gory details of how I've done the numpy types here: https://github.com/explosion/thinc/blob/master/thinc/types.p...
Thanks for the detailed reply and the links!
How does it compare to TensorFlow, Keras, and PyTorch?
... and fast.ai?
So no custom CUDA kernels, just CuPy? Isn't that a performance issue? (Based on installation notes)
You can write custom CUDA kernels, and I've written a few to support operations over our ragged format. Actually, Thinc makes it pretty easy to optimise a specific bit of code with a custom op. CuPy's fuse decorator can also work well in some situations.
What you don't get is the compile-time auto-optimisation. For example, there's no asynchronous dispatch like you would get from PyTorch.
If you write a chunk of operations as just cupy maths, and then write the same thing in PyTorch and use the PyTorch wrapper, you can expect the PyTorch one to perform better. You would also need to write the backprop callback for the cupy maths you did. Sometimes you might find optimisations PyTorch doesn't though, especially around speed vs memory trade-offs.
Part of the philosophy and difference between this and other frameworks is that we do not do any compilation or trickery of any sort: what you write is what gets executed. Obviously this is slower a lot of the time, but it means we can play well with others --- we're not wrestling for control of more of the graph so we can make more optimisations, and we're not limiting the hand optimisations you can do for custom situations.
Here's an example of how I've wrapped CUDA kernels, using CuPy's RawKernel feature. Most people embed these as strings within Python source, but I find that super ugly. I like to keep the CUDA source in .cu files, and then read the file in to compile it.
* The CUDA kernels: https://github.com/explosion/thinc/blob/master/thinc/backend...
* The code that calls cupy.RawKernel and the wrapping functions: https://github.com/explosion/thinc/blob/master/thinc/backend...
* The wrappers are called by the CupyOps object: https://github.com/explosion/thinc/blob/master/thinc/backend... . This object has the same API across backends, with some functions redefined with backend-specific implementations. In the NumpyOps object, I instead call into custom Cython code.
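For readers who just want the shape of that pattern without digging through the links, here's a self-contained sketch (hypothetical kernel and names, not the real files linked above). The compile step needs CuPy and a CUDA device, so it's kept behind a function with a deferred import:

```python
import tempfile
from pathlib import Path

# A trivial CUDA kernel, written as it would appear in a .cu file.
CU_SOURCE = """
extern "C" __global__
void scale(float* x, float alpha, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        x[i] *= alpha;
    }
}
"""

# In a real project the .cu file lives in the repo; we write it out
# here so the sketch is self-contained.
cu_path = Path(tempfile.mkdtemp()) / "scale.cu"
cu_path.write_text(CU_SOURCE)

def compile_kernel(path: Path, name: str):
    """Read CUDA source from a .cu file and compile it with CuPy.

    Requires cupy and a CUDA device, so the import is deferred.
    """
    import cupy
    return cupy.RawKernel(path.read_text(), name)

# On a GPU machine you would then launch it, roughly:
#   scale = compile_kernel(cu_path, "scale")
#   scale((grid,), (block,), (x, cupy.float32(2.0), x.size))
```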
My CUDA skills aren't great, so I'm sure there are improvements that could be made. I'd welcome suggestions if anyone has them.
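The backend pattern described in that last bullet boils down to something like this (hypothetical method names, not the real Ops API): a shared base class holds backend-agnostic implementations written against an array module, and each backend subclass swaps in its own module or overrides hot paths.

```python
import numpy as np

class Ops:
    """Shared API; backend-agnostic methods are written against self.xp."""
    xp = np  # array module: numpy here, cupy in the GPU backend

    def relu(self, X):
        return self.xp.maximum(X, 0)

class NumpyOps(Ops):
    # Would redefine hot paths with custom Cython implementations.
    pass

class CupyOps(Ops):
    # Would set xp = cupy and call compiled RawKernels where needed.
    pass

ops = NumpyOps()
Y = ops.relu(np.array([-1.0, 2.0]))
```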