Cambridge scientists create living organism with redesigned DNA(theguardian.com)
The latest WIRED magazine had some truly mind-boggling articles in this vein.
This guy (a renowned smallpox researcher) was able to synthesize a cousin of smallpox (horsepox) using commercially/publicly-accessible tools and resources. He did this to try to create a new/better smallpox vaccine, because he believes that motivated actors will be able to synthesize smallpox in the next 20 years, and that the world needs to be ready with better vaccines. His team's resulting publication ("we made smallpox, this is the gist of how") was met with strongly negative responses.
The gist of these articles is that while synthesizing functioning viruses/microorganisms was possible in the past (TFA says the first "synthetic" organism was made in 2010), it's much easier/faster/cheaper to do so today, when CRISPR techniques/tools are more widely used and better understood.
"...because he believes that motivated actors will be able to synthesize smallpox in the next 20 years, and that the world needs to be ready..."
So the essence is "we are doing it first, because otherwise others will do it first"... So the race to be first continues... No (further) comment...
I'd be curious as to what the actual critics said. I would gather that it is (significantly) more nuanced than "you demonstrated that something dangerous can be created. You shouldn't be attempting to make dangerous things."
How is CRISPR used for genome synthesis? I thought it was all driven by synthesis, and overlapping homology of sticky ended fragments, and ligase, but maybe that's the old days.
It's not used for synthesis. It's very helpful for modifications of existing genomes, but it has no relevance in commercial DNA synthesis (which is a purely chemical process).
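For the curious, the "overlapping homology" assembly idea mentioned above is easy to sketch in code. This is a toy illustration with made-up fragments and exact-match overlaps; real assembly is an enzymatic process and far messier:

```python
# Toy sketch of overlap-based assembly: greedily merge fragments
# that share an exact suffix/prefix overlap. Fragment sequences
# here are invented purely for illustration.

def merge(a, b, min_overlap=4):
    """Merge b onto the end of a if a suffix of a matches a prefix of b."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

fragments = ["ATGGCTAGCT", "AGCTTACGGA", "ACGGATTTCA"]
assembly = fragments[0]
for frag in fragments[1:]:
    merged = merge(assembly, frag)
    if merged is None:
        raise ValueError("no overlap found")
    assembly = merged

print(assembly)  # ATGGCTAGCTTACGGATTTCA
```

The real process described in the article did this hierarchically: short synthesized pieces into large segments in one cell, then segments swapped into a recipient cell via their overlaps.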
What are the design tools that synthetic biology researchers use to create genomes? Is it some sort of EDA environment like Cello or Asimov, or is it done manually?
Mostly it's just manipulating text files with bash, R, and Python.
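To give a flavor of what "manipulating text files" means here: the Syn61 design boils down to recoding particular serine codons genome-wide. A minimal sketch (toy ORF, frame-aware string replacement; real pipelines work on annotated genomes, not raw strings):

```python
# Recode every in-frame TCG codon to its synonym AGC.
# The input sequence is a made-up ORF for illustration.

def recode(seq, old="TCG", new="AGC"):
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return "".join(new if c == old else c for c in codons)

orf = "ATGTCGAAATCGTAA"  # Met-Ser-Lys-Ser-Stop
print(recode(orf))       # ATGAGCAAAAGCTAA
```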
The tools used by this particular study:
That’s a good question!
I wouldn’t be surprised if the academic team in this case developed something relatively simple. Let’s try to find out how they dispatched the synthesis orders they made over the two years.
Was it excel or csv at the end of the day?
If they were an enterprise biotech I would bet they would have a much more elaborate in-house data & design toolchain plus a LIMS. But academic teams rarely have the resources or drive necessary to engineer digital tools approaching that level of sophistication.
Other software academics might use:
- Benchling
- TeselaGen
- maaaaybe antha-lang
- ???
Alternatively, perhaps the synthesis vendor supplied their own optimized design and inventory tool. Who did they buy from - gen9?
Plenty of excellent open and closed source tools are available. For example check out clc genomics workbench
> Known as Syn61, the bug is a little longer than normal, and grows more slowly, but survives nonetheless.
Why is this? If AGC is identical in function to TCG, and the same holds for the other replacements, shouldn't the new organism function exactly the same?
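For what it's worth, the swap is indeed synonymous at the protein level: under the standard genetic code, AGC and TCG both encode serine. Easy to check with a partial codon table (just the entries needed for a toy ORF):

```python
# Partial standard codon table: only the codons used below.
TABLE = {"ATG": "M", "TCG": "S", "AGC": "S", "AAA": "K", "TAA": "*"}

def translate(seq):
    return "".join(TABLE[seq[i:i + 3]] for i in range(0, len(seq), 3))

print(translate("ATGTCGAAATAA"))  # MSK*
print(translate("ATGAGCAAATAA"))  # MSK*  -- same protein
```

So the question is really why a protein-identical genome still behaves differently, which the replies below get into.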
Nothing in biology is exactly the same - or said another way, biology is optimized at many different scales, from basic (even quantum) physics all the way up to social adaptations. So screwing with codons could have very subtle effects that have nothing to do with the physical design of the organism, but nonetheless affect its size and growth rate. Off the top of my head, some random examples:

- the locations on the genome of two metabolic enzymes have been shifted by what amounts to a few nanometers, and that affects metabolic flux
- the DNA now has more purines in a row than before, causing some charge buildup that calls in DNA repair mechanisms slightly more often and disables transcription slightly more
- the genome when folded up is slightly more rounded than oblong because of strange sterics

...or any number of very weird optimizations all the way down the physical causal ladder.
The Ars article has some more details: https://arstechnica.com/science/2019/05/researchers-make-the...
> Unfortunately, there's a big gap between what a DNA synthesis machine can output and the multi-million-base-long genome. The group had to do an entire assembly process, stitching together small pieces into a large segment in one cell and then bringing that into a different cell that had an overlapping large segment. "Personally, my biggest surprise was really how well the assembly process worked," Schmied said. "The success rate at each stage was very high, meaning that we could do the majority of the work with standard bench techniques."
> During the process, there were a couple of spots where the synthetic genome ended up with problems—in at least one case, this was where two essential genes overlapped. But the researchers were able to tweak their version to get around the problems that they identified. The final genome also had a handful of errors that popped up during the assembly process, but none of these altered the three base codes that were targeted.
So it sounds to me like the process wasn't quite perfect. They also note that DNA "redundancy can also allow fine-tuning of gene activity, as some codes are translated into proteins more efficiently than others".
I wonder how many simple ordinary mutations (not necessarily codon-affecting errors) are in the final E. coli genome? Any process like this is bound to introduce some; probably quite a few, all of which would be sand in the gears.
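The first-order version of that question is just a mismatch count between the designed sequence and the sequenced result. A crude sketch (equal-length comparison only; real QC uses an aligner, since a single indel shifts every downstream base):

```python
# Count point mismatches between a designed sequence and the
# observed (sequenced) result. Toy sequences for illustration.

def count_mismatches(designed, observed):
    if len(designed) != len(observed):
        raise ValueError("lengths differ; use a real aligner")
    return sum(a != b for a, b in zip(designed, observed))

print(count_mismatches("ATGAGCAAA", "ATGAGCGAA"))  # 1
```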
The DNA/RNA that encodes the proteins can itself be structured in a way that might be disrupted by synonymous amino acid changes. In particular, recent work in the field has shown that changing codons near the start of the gene can disrupt transcriptional/translational machinery.
These effects are often minor, but this bacterium has undergone hundreds of millions of years of optimization via natural selection, and some researchers have come along and disrupted 18,000 sites. Probably the slower growth and length abnormalities just mean the bacterium is a little miscalibrated and displaying minor symptoms of malaise.
Not an expert, but for example maybe decoding the replacement AGC instruction (in the ribosome) is slower or uses more energy.
I doubt that's the only reason, but according to this table, AGC triplets are about 2x more prevalent than UCG (the RNA form of TCG) in that bacterium.
Edit: this is the frequency of that codon being used in the genome sequence. But the assumption could be that the matching tRNA is also preferentially produced, so the cell can translate its proteins efficiently.
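A codon-usage table like the one referenced is straightforward to compute: count in-frame codons over coding sequences and normalize. A sketch with a single invented CDS (not real E. coli data):

```python
# Compute relative codon usage from a list of coding sequences.
from collections import Counter

def codon_usage(cds_list):
    counts = Counter()
    for cds in cds_list:
        counts.update(cds[i:i + 3] for i in range(0, len(cds) - 2, 3))
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}

usage = codon_usage(["ATGAGCAGCTCGTAA"])  # toy CDS: AGC twice, TCG once
print(usage["AGC"] / usage["TCG"])        # 2.0
```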
It's more likely that they have introduced errors, or non-preferred patterns, which affect the number of replication forks and stall the replication machinery.
That is some scary stuff, actually. Got the same feeling as watching some of Boston Dynamics' videos.
What about Mycoplasma laboratorium?
It's partially synthetic according to Google