A visit from The Great Internet Migratory Box of Electronics Junk

tgimboej - The Great Internet Migratory Box of Electronics Junk
A mysterious package showed up on my doorstep today - box "INTJ-28", an instance of The Great Internet Migratory Box of Electronics Junk, also known as TGIMBOEJ, which is the invention of a group called Evil Mad Scientist Laboratories (note: I am not making this up). The concept is that someone sends a box of electronics junk to a recipient (e.g. me), the recipient takes some parts, adds some new parts, and sends it on to a new recipient. Each recipient documents the box. There are currently about 120 of these boxes roaming the world.
Inside the box of junk
How did I end up with this box? I signed up on the request list a bit over a year ago, and was recently chosen by Mr. INTJ to receive a box. In other words, some total stranger on the internet sends me a box of junk. In turn, I've picked another total stranger from the list to receive the updated box of junk.
The contents of the box of junk
The contents of the box were pretty interesting. At the top are speaker wire, a USB/PS2 adapter, network cable, and a firewire cable. The blue box is the puzzling "JBM Electronics Gateway Cellular Router C120+F". To the right of it is a box of 9 small speakers, which seems like more speakers than anyone would need (which may be why they are in this box). In the next row are a wall-wart, USB extension, firewire cable, gender changer, RCA cable, a very bright 9-LED module, an Intel PRO/1000 MT Gigabit adapter, binding posts, a little mystery board, a short Ethernet cable, and a USB cable. At the bottom are a blinking USB ecobutton, two small stepper motors, a large stepper motor, and an HP combination calculator / numeric pad (which seems to be broken). Under the stepper motor are a PIC 18f4431 microcontroller, designed for motor control, and a Basic Stamp 2p microcontroller module. I think the Basic Stamp is the prize of the box, but I'm leaving it for the next recipient since I'm unlikely to switch from Arduino to a new platform. (Click the above image for a larger version.)

I ended up taking the big stepper motor, the LED, a few cables, the binding posts, one of the speakers, and the ecobutton. What I added to the box will be a surprise to the next recipient, who I hope will post soon.

Car radio repair made difficult

My wife's car radio suddenly quit working, so I figured I'd take a look and see if I could fix it. The first problem was that it was mounted in the dash with 5-sided security fasteners, apparently to frustrate radio thieves who only have standard tools. Even my 100-piece security bit set let me down on this occasion, entirely lacking in the pentagonal category. Fortunately, eBay rapidly provided me with the appropriate tool, and I started removing the radio. The alarm light started flashing and it made some angry beeps, but I was able to get the radio out without the alarm going off. I opened up the radio and took a look inside.
inside the radio
The symptoms were that the radio lit up and the display worked fine, but you could only hear extremely faint sound even if you cranked it up all the way. Using my diagnostic powers, I figured the problem was probably in the amplifier, which would probably be near the back of the radio. Looking more closely, I noticed a capacitor oozing hideous brown gunk. Using my diagnostic powers again, I decided that this might be the problem. (It turns out that leaking electrolytic capacitors are a common problem, known as the capacitor plague.)
capacitor oozing gunk
I unsoldered the capacitor and removed the gunk as best I could. At least it wasn't as disgusting as the nest of ants that caused my previous electronic problem. The really annoying thing with the repair was that the radio's circuit board had big globs of sticky heat sink compound exactly where I grabbed the board every time I picked it up. You can see the white patches in the lower left of the first picture. There was originally much more compound, but after getting it on my fingers twenty times, there wasn't much left. I should have learned to be more careful, but no such luck...

I figured it would be easy to get a replacement capacitor, so I checked the parts supplier Digi-Key. The good news was that they had the exact capacitor listed. The bad news was that they didn't have it in stock, and it would take 6 months to get it from the factory. Checking other parts catalogs, I found that this capacitor wasn't the easy-to-find commodity part I had expected, but a special short-and-wide capacitor designed to fit into the tight space, which nobody carried in stock. Too impatient to wait 6 months for delivery, I got a standard capacitor, which was the wrong size to mount nicely. I'll get no points for style, but I did manage to wedge it in place by putting it at a crazy angle. (I also put in new heat sink compound to replace the compound that I got all over my fingers.)
New capacitor installed in the radio
After reinstalling the radio, it wouldn't do anything because it needed the security code. Through surprising foresight, I actually had the code, but after putting it in I found that the radio just gave me static. Oops, I had forgotten to connect the antennas. That was easy to fix, since I conveniently noticed before tightening up the security fasteners. With all the wires in place, I tried again and the radio seemed to work as well as ever. It's always nice when one of my crazy projects actually works.

Update: no radio happiness

Unfortunately, the radio quit working again after a couple of days. I don't know if it has some deeper problem that killed the new capacitor too, or if the gunk from the old capacitor damaged something, but it looks like it's time for a new radio. Oh well, I'll file this under less-successful-projects.

Getting the (literal) bugs out of a driveway gate controller

My driveway has a gate that automatically opens when you drive up, and then closes. Recently it stopped working and this blog post describes how I got the bugs out of the controller, reverse-engineered it, and got it working again.

The gate is controlled by a Stanley 24600 gate controller, which has the fairly simple task of running the motors to open the gate and close the gate as appropriate. Everything worked fine for years, until one recent day it just stopped working, which was rather inconvenient. When I opened up the controller box, I noticed a couple potential problems.
Gate controller box infested with ants
First, the circuit board had some disgusting cocoons on it, which were apparently shorting out the board and preventing it from working. I don't know what sort of insect made the cocoons, and I didn't want to wait around to find out.
Gate controller board infested with cocoons
In addition, see those light brown piles at the bottom of the controller box? An ant nest had decided to move in for some reason, and they had brought thousands of eggs with them. I don't think the ants were actually interfering with the function of the controller, but it's hard to debug a circuit when ants keep crawling on you. Here's a closeup of the pile of eggs:
Closeup of ants and eggs in the controller.
Getting rid of the ant nest was easy. Opening up the box scared the ants, and they scurried around and moved out in 15 minutes flat, taking the eggs with them. I replaced the weather stripping around the box to keep them out.

The hideous cocoons, on the other hand, were a bigger problem. I removed them from the board and scrubbed the circuit board clean with rubbing alcohol. After putting the circuit board back, the gate would start opening but would not stop opening, which was a change but not really an improvement.

Everything else checked out fine (power, fuses, sensors, motors), so I knew the problem had to be in the controller. Unfortunately the controller is obsolete and nobody makes replacements or has documentation, so I figured I better get it fixed. I started poking around the circuit boards to try to figure out how they worked and where the problem might be.

The logic provided by the controller is simple, but not trivial: it can be driven by a signal that opens the gate, by a signal that alternates between opening and closing, or by a signal that reverses the motion, and it also has a stop signal. A separate board automatically closes the gate after a delay.
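To make that concrete, the logic amounts to something like the following Python pseudocode (purely illustrative, with signal names I made up; the real board does all of this with gates, flip-flops, relays, and timers, as described below):

def next_state(state, signal):
    "Return the new gate state for an input signal (illustrative sketch only)."
    if signal == "stop":
        return "stopped"
    if signal == "open":
        return "opening"
    if signal == "open_close":     # a single input that alternates open and close
        return "closing" if state in ("open", "opening") else "opening"
    if signal == "reverse":        # swap direction while the gate is moving
        return {"opening": "closing", "closing": "opening"}.get(state, state)
    return state                   # otherwise, no change

# The separate auto-close board adds one more rule: after the gate has been
# open for an adjustable delay, start closing.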

The controller is implemented using some CMOS gates and flip flops for the logic, and a couple 555 timers to control how long to run the motors to open and close the gates. Three big relays and two giant capacitors control the motors, while a few small relays provide additional logic. The controller also has a handful of transistors for various functions, and a bunch of resistors, diodes, and capacitors. The board on the right is the main logic board, and the smaller board on the left provides the optional automatic-close function. Note the variable resistors (with long white shafts) to adjust the various timings. There's also an AC-to-DC circuit on the board to power the logic, and a triac to switch low-voltage AC on and off for reasons I never figured out.

Newer controllers, of course, replace all this discrete logic with a microcontroller. They also provide a lot more functionality and options. It's interesting, though, to see how these circuits were implemented in the "olden days".
Gate controller circuit boards

I spent a bunch of time following PCB traces around (both visually and with a continuity tester) trying to figure out how all the pieces work together. It was a slower process than I had expected, since many of the traces go under components and you can't see where they end up. I was getting frustrated and considered building my own controller out of an Arduino, since it would be about 10 lines of code. However, I figured interfacing with the sensors and AC relays would be a pain, and I didn't want to burn out the expensive motors with a programming error. Thus, I continued to analyze the existing controller.

I got most of the logic figured out when I happened to discover a trace that didn't have continuity from one end to the other. Apparently one of the traces that powers some of the transistors had cracked while I was cleaning off the cocoons. I put a jumper across the bad trace, and everything worked perfectly:
Jumper wire to fix controller board

I've had lots of computer problems due to bugs, but this is the first time my problem was real live insect bugs. (Obligatory Grace Hopper bug link.)

Update: Nature still hates my gate

Today, my gate would only close half-way. After some investigation, I discovered that a vine had somehow gotten caught on the gate, and it was strong enough to keep the gate from closing all the way. I was a bit surprised that the gate motors weren't strong enough to rip the vine off, but I guess as a safety feature they don't have a lot of extra force. This problem was a lot easier to fix than the previous one, since I just needed to remove the vine.

I conclude, however, that nature hates my gate and is trying to keep it from working, whether it takes insects or plants. What next? Ice storm? Tornado?
A vine attacks my gate

Using Arc to decode Venter's secret DNA watermark

Recently Craig Venter (who decoded the human genome) created a synthetic bacterium. The J. Craig Venter Institute (JCVI) took a bacterium's DNA sequence as a computer file, modified it, made physical DNA from this sequence, and stuck this DNA into a cell, which then reproduced under control of the new DNA to create a new bacterium. This is a really cool result, since it shows you can create the DNA of an organism entirely from scratch. (I wouldn't exactly call it synthetic life though, since it does require an existing cell to get going.) Although this project took 10 years to complete, I'm sure it's only a matter of time before you will be able to send a data file to some company and get the resulting cells sent back to you.

One interesting feature of this synthetic bacterium is that it includes four "watermarks": special sequences of DNA that prove this bacterium was created from the data file and is not natural. However, they didn't reveal how the watermarks were encoded. The DNA sequences were published (GTTCGAATATTT and so on), but how to get the meaning out of them was left as a puzzle. For detailed background on the watermarks, see Singularity Hub. I broke the code (as I described earlier) and found the names of the authors, some quotations, and the following hidden web page. This seems like science fiction, but it's true: there's actually a new bacterium that has a web page encoded in its DNA:
DNA web page
I contacted the JCVI using the link and found out I was the 31st person to break the code, which is a bit disappointing. I haven't seen details elsewhere on how the code works, so I'll provide the details. (Spoilers ahead if you want to break the code yourself.) I originally used Python to break the code, but since this is nominally an Arc blog, I'll show how it can be done using the Arc language.

The oversimplified introduction to DNA

DNA double helix
Perhaps I should give the crash course in DNA at this point. The genetic instructions for all organisms are contained in very, very long molecules called DNA. For our purposes, DNA can be described as a sequence of four different letters: C, G, A, and T. The human genome is 3 billion base pairs in 23 chromosome pairs, so your DNA can be described as a pair of sequences of 3 billion C's, G's, A's, and T's.

Watson and Crick won the Nobel Prize for figuring out the structure of DNA. Your body is made up of important things such as enzymes, which are made up of carefully-constructed proteins, which are made up of special sequences of 20 different amino acids. Your genes, which are parts of the DNA, specify these amino acid sequences. Each group of three DNA bases specifies a particular amino acid in the sequence. For instance, the DNA sequence CTATGG specifies the amino acids leucine and tryptophan, because CTA indicates leucine and TGG indicates tryptophan. The genetic code, which associates a particular amino acid with each of the 64 possible DNA triples, was worked out later by Nirenberg, Khorana, and others. I'm omitting lots of interesting details here, but the key point is that each group of three DNA symbols specifies one amino acid.

Encoding text in DNA

Now, the puzzle is to figure out how JCVI encoded text in the synthetic bacterium's DNA. The obvious extension is, instead of assigning an amino acid to each of the 64 DNA triples, to assign a character to each one. (I should admit that this was only obvious to me in retrospect, as I explained in my earlier blog posting.) Thus if the DNA sequence is GTTCGA, then GTT will be one letter and CGA will be another, and so on. Now we just need to figure out what letter goes with each DNA triple.

If we know some text and the DNA that goes with it, it is straightforward to crack the code that maps between the characters and the DNA triples. For instance, if we happen to know that the text "LIFE" corresponds to the DNA "AAC CTG GGC TAA", then we can conclude that AAC goes to L, CTG goes to I, GGC goes to F, and TAA goes to E. But how do we get started?

Conveniently, the Singularity Hub article gives the quotes that appear in the DNA, as reported by JCVI:

"TO LIVE, TO ERR, TO FALL, TO TRIUMPH, TO RECREATE LIFE OUT OF LIFE."
"SEE THINGS NOT AS THEY ARE, BUT AS THEY MIGHT BE."
"WHAT I CANNOT BUILD, I CANNOT UNDERSTAND."
We can try matching the quotes against each position in the DNA and see if there is any position that works. A match can fail in two ways. First, if the same DNA triple corresponds to two different letters, something must be wrong. For instance, if we try to match "LIFE" against "AAC CTG GGC AAC", we would conclude that AAC means both L and E. Second, if the same letter corresponds to two DNA triples, we can reject it. For instance, if we try to match "ERR" against "TAA CTA GTC", then both CTA and GTC would mean R. (Interestingly, the real genetic code has this second type of ambiguity. Because there are only 20 amino acids and 64 DNA triples, many DNA triples indicate the same amino acid.)

Using Arc to break the code

To break the code, first download the DNA sequences of the four watermarks and assign them to variables w1, w2, w3, w4. (Download full code here.)
(= w1 "GTTCGAATATTTCTATAGCTGTACA...")
(= w2 "CAACTGGCAGCATAAAACATATAGA...")
(= w3 "TTTAACCATATTTAAATATCATCCT...")
(= w4 "TTTCATTGCTGATCACTGTAGATAT...")
Next, let's break the four watermarks into triples by first converting each string to a list of characters and then taking groups of three, giving t1 through t4:
; Convert a string into a list of one-character strings.
(def strtolist (s) (accum accfn (each x s (accfn (string x)))))
; Split a string into a list of triples (three-character strings).
(def tripleize (s) (map string (tuples (strtolist s) 3)))

(= t1 (tripleize w1))
(= t2 (tripleize w2))
(= t3 (tripleize w3))
(= t4 (tripleize w4))
The next function is the most important. It takes triples and text, matches the triples against the text, and generates a table that maps from each triple to the corresponding character. If it runs into a problem (either two characters assigned to the same triple, or the same character assigned to two triples), it fails. Otherwise it returns the code table. It uses tail recursion to scan through the triples and the input text together.
; Match the characters in text against the triples.
; codetable is a table mapping triples to characters.
; offset is the offset into text
; Return codetable or nil if no match.
(def matchtext (triples text (o codetable (table)) (o offset 0))
  (if (>= offset (len text))
       codetable                          ; success: reached the end of the text
       (with (trip (car triples) ch (text offset))
         (if (and (mem ch (vals codetable))
                  (isnt (codetable trip) ch))
              nil                          ; ch already belongs to a different triple
             (empty (codetable trip))
              (do (= (codetable trip) ch)  ; new character: record it and continue
                  (matchtext (cdr triples) text codetable (+ 1 offset)))
             (isnt (codetable trip) ch)
              nil                          ; trip already means a different character
              (matchtext (cdr triples) text codetable (+ 1 offset))))))
Finally, the function findtext finds a match anywhere in the triple list. In other words, the previous function matchtext assumes the start of the triples corresponds to the start of the text. But findtext tries to find a match at any point in the triples. It calls matchtext, and if there's no match then it drops the first triple (using cdr) and calls itself recursively, until it runs out of triples.
(def findtext (triples text)
  (if (< (len triples) (len text)) nil
    (let result (matchtext triples text)
      (if result result
        (findtext (cdr triples) text)))))
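For comparison, here's a rough Python sketch of the same matching logic. Python is what I originally used to break the code, although this isn't the code I actually ran, and the function names are my own:

def triples(dna):
    "Split a DNA string into consecutive three-character triples."
    return [dna[i:i+3] for i in range(0, len(dna) - 2, 3)]

def match_text(trips, text):
    "Align text with the start of trips; return a triple->character table, or None."
    if len(trips) < len(text):
        return None
    code = {}
    for trip, ch in zip(trips, text):
        if code.get(trip, ch) != ch:          # triple already stands for another character
            return None
        if ch in code.values() and code.get(trip) != ch:   # character already has a triple
            return None
        code[trip] = ch
    return code

def find_text(trips, text):
    "Slide text along the triples until some alignment gives a consistent table."
    for start in range(len(trips) - len(text) + 1):
        code = match_text(trips[start:], text)
        if code:
            return code
    return None

# Usage: find_text(triples(watermark_string), "LIFE OUT OF LIFE")
# returns a dict mapping triples to characters, or None if there's no match.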
The Singularity Hub article said that the DNA contains the quote:
"TO LIVE, TO ERR, TO FALL, TO TRIUMPH, TO RECREATE LIFE OUT OF LIFE." - JAMES JOYCE 
Let's start by just looking for "LIFE OUT OF LIFE", since we're unlikely to get LIFE to match twice just by chance:
arc> (findtext t1 "LIFE OUT OF LIFE")
nil
No match in the first watermark, so let's try the second.
arc> (findtext t2 "LIFE OUT OF LIFE")
#hash(("CGT" . #\O) ("TGA" . #\T) ("CTG" . #\I) ("ATA" . #\space) ("TAA" . #\E) ("AAC" . #\L) ("TCC" . #\U) ("GGC" . #\F))
How about that! Looks like a match in watermark 2, with DNA "AAC" corresponding to L, "CTG" corresponding to I, "GGC" corresponding to F, and so on. (The Arc format for hash tables is pretty ugly, but hopefully you can see that in the output.) Let's try matching the full quote and store the table in code2:
arc> (= code2 (findtext t2 "TO LIVE, TO ERR, TO FALL, TO TRIUMPH, TO RECREATE LIFE OUT OF LIFE."))
#hash(("TCC" . #\U) ("TCA" . #\H) ("TTG" . #\V) ("CTG" . #\I) ("ACA" . #\P) ("CTA" . #\R) ("CGA" . #\.) ("ATA" . #\space) ("CGT" . #\O) ("TTT" . #\C) ("TGA" . #\T) ("TAA" . #\E) ("AAC" . #\L) ("CAA" . #\M) ("GTG" . #\,) ("TAG" . #\A) ("GGC" . #\F))
Likewise, we can try matching the other quotes:
arc> (= code3 (findtext t3 "SEE THINGS NOT AS THEY ARE, BUT AS THEY MIGHT BE."))
#hash(("CTA" . #\R) ("TCA" . #\H) ("CTG" . #\I) ("GCT" . #\S) ("CGA" . #\.) ("ATA" . #\space) ("CAT" . #\Y) ("CGT" . #\O) ("TCC" . #\U) ("AGT" . #\B) ("TGA" . #\T) ("TAA" . #\E) ("TGC" . #\N) ("TAC" . #\G) ("CAA" . #\M) ("GTG" . #\,) ("TAG" . #\A))
arc> (= code4 (findtext t4 "WHAT I CANNOT BUILD, I CANNOT UNDERSTAND"))
#hash(("TCC" . #\U) ("TCA" . #\H) ("ATT" . #\D) ("CTG" . #\I) ("GCT" . #\S) ("TTT" . #\C) ("ATA" . #\space) ("TAA" . #\E) ("CGT" . #\O) ("CTA" . #\R) ("TGA" . #\T) ("AGT" . #\B) ("AAC" . #\L) ("TGC" . #\N) ("GTG" . #\,) ("TAG" . #\A) ("GTC" . #\W))
Happily, they all decode the same triples to the same letters, or else we would have a problem. We can merge these three decode tables into one:
arc> (= code (listtab (join (tablist code2) (tablist code3) (tablist code4))))
#hash(("TCC" . #\U) ("TCA" . #\H) ("CTA" . #\R) ("CGT" . #\O) ("TGA" . #\T) ("CGA" . #\.) ("TAA" . #\E) ("AAC" . #\L) ("TAC" . #\G) ("TAG" . #\A) ("GGC" . #\F) ("ACA" . #\P) ("ATT" . #\D) ("TTG" . #\V) ("CTG" . #\I) ("GCT" . #\S) ("TTT" . #\C) ("CAT" . #\Y) ("ATA" . #\space) ("AGT" . #\B) ("GTG" . #\,) ("TGC" . #\N) ("CAA" . #\M) ("GTC" . #\W))
Now, let's make a decode function that will apply the code table to the unknown DNA and see what we get. (We'll leave unknown triples unconverted.)
(def decode (decodemap triples)
  (string (accum accfn (each triple triples (accfn
      (if (decodemap triple)
        (decodemap triple)
        (string "(" triple ")" )))))))
arc> (decode code t1)
"(GTT). CRAIG VENTER INSTITUTE (ACT)(TCT)(TCT)(GTA)(GGG)ABCDEFGHI(GTT)(GCA)LMNOP(TTA)RSTUVW(GGT)Y(TGG)(GGG) (TCT)(CTT)(ACT)(AAT)(AGA)(GCG)(GCC)(TAT)(CGC)(GTA)(TTC)(TCG)(CCG)(GAC)(CCC)(CCT)(CTC)(CCA)(CAC)(CAG)(CGG)(TGT)(AGC)(ATC)(ACC)(AAG)(AAA)(ATG)(AGG)(GGA)(ACG)(GAT)(GAG)(GAA).,(GGG)SYNTHETIC GENOMICS, INC.(GGG)(CGG)(GAG)DOCTYPE HTML(AGC)(CGG)HTML(AGC)(CGG)HEAD(AGC)(CGG)TITLE(AGC)GENOME TEAM(CGG)(CAC)TITLE(AGC)(CGG)(CAC)HEAD(AGC)(CGG)BODY(AGC)(CGG)A HREF(CCA)(GGA)HTTP(CAG)(CAC)(CAC)WWW.(GTT)CVI.ORG(CAC)(GGA)(AGC)THE (GTT)CVI(CGG)(CAC)A(AGC)(CGG)P(AGC)PROVE YOU(GAA)VE DECODED THIS WATERMAR(GCA) BY EMAILING US (CGG)A HREF(CCA)(GGA)MAILTO(CAG)MRO(TTA)STI(TGG)(TCG)(GTT)CVI.ORG(GGA)(AGC)HERE(GAG)(CGG)(CAC)A(AGC)(CGG)(CAC)P(AGC)(CGG)(CAC)BODY(AGC)(CGG)(CAC)HTML(AGC)"
It looks like we're doing pretty well with the decoding, as there's a lot of recognizable text and some HTML in there. Conveniently, the entire alphabet is also in there as a decoding aid. From this, we can fill in a lot of the gaps, e.g. GTT is J and GCA is K. From the HTML tags we can figure out angle brackets, quotes, and slash. We can guess that there are numbers in there and figure out that ACT TCT TCT GTA is 2009. These deductions can be added manually to the decode table:
arc> (= (code "GTT") #\J)
arc> (= (code "GCA") #\K)
arc> (= (code "ACT") #\2)
...
After a couple cycles of adding missing characters and looking at the decodings, we get the almost-complete decodings of the four watermarks:

The contents of the watermarks

J. CRAIG VENTER INSTITUTE 2009
ABCDEFGHIJKLMNOPQRSTUVWXYZ 0123456789(TTC)@(CCG)(GAC)-(CCT)(CTC)=/:<(TGT)>(ATC)(ACC)(AAG)(AAA)(ATG)(AGG)"(ACG)(GAT)!'.,
SYNTHETIC GENOMICS, INC.
<!DOCTYPE HTML><HTML><HEAD><TITLE>GENOME TEAM</TITLE></HEAD><BODY><A HREF="HTTP://WWW.JCVI.ORG/">THE JCVI</A><P>PROVE YOU'VE DECODED THIS WATERMARK BY EMAILING US <A HREF="MAILTO:[email protected]">HERE!</A></P></BODY></HTML>

MIKKEL ALGIRE, MICHAEL MONTAGUE, SANJAY VASHEE, CAROLE LARTIGUE, CHUCK MERRYMAN, NINA ALPEROVICH, NACYRA ASSAD-GARCIA, GWYN BENDERS, RAY-YUAN CHUANG, EVGENIA DENISOVA, DANIEL GIBSON, JOHN GLASS, ZHI-QING QI.
"TO LIVE, TO ERR, TO FALL, TO TRIUMPH, TO RECREATE LIFE OUT OF LIFE." - JAMES JOYCE

CLYDE HUTCHISON, ADRIANA JIGA, RADHA KRISHNAKUMAR, JAN MOY, MONZIA MOODIE, MARVIN FRAZIER, HOLLY BADEN-TILSON, JASON MITCHELL, DANA BUSAM, JUSTIN JOHNSON, LAKSHMI DEVI VISWANATHAN, JESSICA HOSTETLER, ROBERT FRIEDMAN, VLADIMIR NOSKOV, JAYSHREE ZAVERI.
"SEE THINGS NOT AS THEY ARE, BUT AS THEY MIGHT BE."

CYNTHIA ANDREWS-PFANNKOCH, QUANG PHAN, LI MA, HAMILTON SMITH, ADI RAMON, CHRISTIAN TAGWERKER, J CRAIG VENTER, EULA WILTURNER, LEI YOUNG, SHIBU YOOSEPH, PRABHA IYER, TIM STOCKWELL, DIANA RADUNE, BRIDGET SZCZYPINSKI, SCOTT DURKIN, NADIA FEDOROVA, JAVIER QUINONES, HANNA TEKLEAB.
"WHAT I CANNOT BUILD, I CANNOT UNDERSTAND." - RICHARD FEYNMAN

The first watermark consists of a copyright-like statement, a list of all the characters, and the hidden HTML page (which I showed above.) The second, third, and fourth watermarks consist of a list of the authors and three quotations.

Note that there are 14 undecoded triples that only appear once in the list of characters. They aren't in ASCII order, keyboard order, or any other order I can figure out, so I can't determine what they are, but I assume they are missing punctuation and special characters.

The DNA decoding table

The following table summarizes the 'secret' DNA to character code:
AAA ? AAC L AAG ? AAT 3
ACA P ACC ? ACG ? ACT 2
AGA 4 AGC > AGG ? AGT B
ATA space ATC ? ATG ? ATT D
CAA M CAC / CAG : CAT Y
CCA = CCC - CCG ? CCT ?
CGA . CGC 8 CGG < CGT O
CTA R CTC ? CTG I CTT 1
GAA ' GAC ? GAG ! GAT ?
GCA K GCC 6 GCG 5 GCT S
GGA " GGC F GGG newline GGT X
GTA 9 GTC W GTG , GTT J
TAA E TAC G TAG A TAT 7
TCA H TCC U TCG @ TCT 0
TGA T TGC N TGG Z TGT ?
TTA Q TTC ? TTG V TTT C
As far as I can tell, this table is in random order. I analyzed it in a bunch of ways, from character frequency and DNA frequency to ASCII order and keyboard order, and couldn't figure out any rhyme or reason to it. I was hoping to find either some structure or another coded message, but I couldn't find anything.

More on breaking the code

It was convenient that the JCVI said in advance what quotations were in the DNA, making it easier to break the code. But could the code still be broken if they hadn't?

One way to break the code is to look at statistics of how often different triples appear. We count the triples, convert the table to a list, and then sort the list. The sort uses a comparison function that compares the counts, so the list is sorted by count rather than by triple.

arc> (sort
  (fn ((k1 v1) (k2 v2)) (> v1 v2))
  (tablist (counts t2)))
(("ATA" 41) ("TAG" 27) ("TAA" 25) ("CTG" 18) ("TGC" 16) ("GTG" 16) ("CTA" 15) ("CGT" 14) ("AAC" 13) ("TGA" 10) ("TTT" 10) ("GCT" 10) ("TAC" 10) ("TCA" 8) ("CAA" 7) ("CAT" 7) ("TCC" 7) ("TTG" 5) ("GGC" 4) ("GTT" 4) ("ATT" 4) ("CCC" 4) ("GCA" 3) ("AGT" 2) ("TTA" 2) ("ACA" 2) ("CGA" 2) ("GGA" 2) ("GTC" 1) ("TGG" 1) ("GGG" 1))
This tells us that ATA appears 41 times in the second watermark, TAG 27 times, and so on. If we assume that this encodes English text, then the most common characters will be space, E, T, A, O, and so on. Then it's a matter of trial-and-error trying the high-frequency letters for the high-frequency triples until we find a combination that yields real words. (It's just like solving a newspaper cryptogram.) You'll notice that the high-frequency triples do turn out to match high-frequency letters, but not in the exact order. (I've written before on simple cryptanalysis with Arc.)
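For what it's worth, the same counting is a one-liner in Python using collections.Counter (reusing the triples helper from the Python sketch above, with w2 holding the second watermark string):

from collections import Counter

# Count how often each triple appears in watermark 2, most common first.
print(Counter(triples(w2)).most_common())
# [('ATA', 41), ('TAG', 27), ('TAA', 25), ('CTG', 18), ...]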

Another possible method is to guess that a phrase such as "CRAIG VENTER" appears in the solution, and try to match that against the triples. This turns out to be the case. It doesn't give a lot of letters to work with, but it's a start.

Arc: still bad for exploratory programming

A couple years ago I wrote that Arc is bad for exploratory programming, which turned out to be hugely controversial. I did this DNA exploration using both Python and Arc, and found Python to be much better for most of the reasons I described in my earlier article.
  • Libraries: Matching DNA triples was trivial in Python because I could use regular expressions. Arc doesn't have regular expressions (unless you use a nonstandard variant such as Anarki). Another issue arose when I wanted to display the watermark contents graphically; Arc has no graphics support.
  • Performance: For the sorts of exploratory programming I do, performance is important. For instance, one thing I did when trying to figure out the code was match all substrings of one watermark against another, to see if there were commonalities. This is O(N^3) and was tolerably fast in Python, but in Arc it would have been painfully slow.
  • Ease and speed of programming: A lot of the programming speed benefit is that I have more familiarity with Python, but while programming in Arc I would constantly get derailed by cryptic error messages or by trying to figure out how to do simple things like breaking a string into triples. It doesn't help that I wrote most of the Arc documentation myself.
To summarize, I started off in Arc, switched to Python when I realized it would take me way too long to figure out the DNA code using Arc, and then went back to Arc for this writeup after I figured out what I wanted to do. In other words, Python was much better for the exploratory part.

Thoughts on the DNA coding technique

Part of the gene map. Watermark 2b is shown in the DNA, with an arc gene below it
I see some potential ways that the DNA coding technique could be improved and extended. Since the full details of the DNA coding technique haven't been published by the JCVI yet, they may have already considered these ideas.

Being able to embed text in the DNA of a living organism is extremely cool. However, I'm not sure how practical it is. The data density in DNA is very high, maybe 500 times that of a hard drive (ref), but most of the DNA is devoted to keeping the bacterium alive and only about 1K of text is stored in the watermarks.

I received an email from JCVI saying that the coding mechanism also supports Java, math (LaTeX?), and Perl as well as HTML. Embedding Perl code in a living organism seems even crazier than embedding text or HTML.

If encoding text in DNA catches on, I'd expect that error correction would be necessary to handle mutations. An error-correction code like Reed-Solomon won't really work because DNA can suffer deletions and insertions as well as substitutions, so you'd need some sort of generalized deletion/insertion correcting code.

I just know that a 6-bit code isn't going to be enough. Sooner or later people will want lower-case, accented characters, Japanese, etc. So you might as well come up with a way to put Unicode into DNA, probably as UTF-8. And people will want arbitrary byte data. My approach would be to use four DNA bases per byte instead of three bases per character: simply assign A=00, C=01, G=10, and T=11 (for instance), and encode your bytes.
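Here's a minimal sketch of that idea in Python; the particular base assignment and the bit order within each byte are arbitrary choices of mine:

BASES = "ACGT"   # A=00, C=01, G=10, T=11 (an arbitrary assignment)

def bytes_to_dna(data):
    "Encode arbitrary bytes (e.g. UTF-8 text) as DNA, four bases per byte."
    dna = []
    for b in bytearray(data):                # bytearray gives integer byte values
        for shift in (6, 4, 2, 0):           # most significant bit pair first
            dna.append(BASES[(b >> shift) & 0b11])
    return "".join(dna)

def dna_to_bytes(dna):
    "Decode four-bases-per-byte DNA back into bytes."
    out = bytearray()
    for i in range(0, len(dna), 4):
        b = 0
        for base in dna[i:i+4]:
            b = (b << 2) | BASES.index(base)
        out.append(b)
    return bytes(out)

print(bytes_to_dna("Hello".encode("utf-8")))                                  # CAGACGCCCGTACGTACGTT
print(dna_to_bytes(bytes_to_dna("Hello".encode("utf-8"))).decode("utf-8"))    # Hello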

If I were writing an RFC for data storage in DNA, I'd suggest that most of the time you'd want to store the actual data on the web, and just store a link in the DNA. (Similar to how a QR code or tinyurl can link to a website.) The encoded link could be surrounded by a special DNA sequence that indicates a tiny DNA link code. This sequence could also be used to find the link code with PCR, so you don't need to sequence the organism's entire genome to find the watermark. (The existing watermarks are in fact surrounded by fixed sequences, which may be for that purpose.)
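Continuing the sketch above, a tiny DNA link might look something like this (the marker sequence and the URL are made up for illustration):

MARKER = "CCCCAAAATTTTGGGG"    # hypothetical fixed flanking sequence, chosen arbitrarily
link_dna = MARKER + bytes_to_dna("http://example.com/x7".encode("utf-8")) + MARKER
# PCR primers matching the marker would let you amplify and sequence just this
# region, instead of sequencing the organism's entire genome.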

I think there should also be some way to tag and structure data that is stored in DNA, to indicate what is identification, description, copyright, HTML, owners, and so forth. XML seems like an obvious choice, but embedding XML in living organisms makes me queasy.

Conclusions

Being able to fabricate a living organism from a DNA computer file is an amazing accomplishment of Craig Venter and the JCVI. Embedding the text of a web page in a living organism seems like science fiction, but it's actually happened. Of course the bacterium doesn't actually serve the web page... yet.

Decoding the secret DNA code in Venter's synthetic bacterium

Craig Venter recently created a synthetic bacterium with a secret message encoded in the DNA. This is described in more detail at singularityhub. (My article will make much more sense if you read the singularityhub article.)

I tried unsuccessfully to decode the secret message yesterday. I realized the problem was that I was thinking like a computer scientist, not a biologist. Once I started thinking like a biologist, the solution was obvious.

I won't spoil your fun by giving away the full answer (at least for now), but I'll mention a few things. [Update: I've written up the details here.] There are four watermarks. The first contains HTML. (That's right! They put actual HTML - complete with DOCTYPE - into the DNA of a living creature. That's just crazy!)
The HTML in the genome
I sent email to the link, and they told me I was the 31st person to break the code. I guess I'll need to be faster next time.

The second, third, and fourth watermarks each contain co-authors and an interesting scientific quote. The first watermark also contains the full character set in order, to help verify the decoding, although I was unable to decode a few of the special characters because they only appear once.

I found the following quotes (which match the quotes given by JCVI and in the singularityhub article):
"TO LIVE, TO ERR, TO FALL, TO TRIUMPH, TO RECREATE LIFE OUT OF LIFE." - JAMES JOYCE
"SEE THINGS NOT AS THEY ARE, BUT AS THEY MIGHT BE."
"WHAT I CANNOT BUILD, I CANNOT UNDERSTAND." - RICHARD FEYNMAN
Note that the quotes are one per watermark, not all in the fourth watermark as the singularityhub article claims.

If you want to try decoding the code, the DNA of the watermarks is at Science Magazine. The complete genome is at NCBI, in case you want it.

My loyal readers will be disappointed that I didn't use the Arc language to decode this, unlike my previous cryptanalysis and genome adventures. I started out with Arc, but my existing Arc cryptanalysis tools didn't do the job. I switched to Python since I'm faster in Python and I wanted to do some heavy-duty graphical analysis.

I tried a bunch of things that didn't work for analysis. I tried autocorrelation to try to determine the length of each code element, but didn't get much of a signal. I tried looking for common substrings between the watermarks both visually and symbolically, and I found some substantial strings that appeared multiple times, but that didn't really help. I discovered that the sequence "CGAT" never appears in the watermarks, but that was entirely useless. I tried applying the standard genetic code (mapping DNA triples to amino acids), since apparently Craig Venter used that in the past, but didn't come up with anything. I tried to make various estimates of how many bits of data were in the watermarks and how many bits of data would be required using various encodings. I briefly considered the possibility that the DNA encoded vectors that would draw out the answers (I think a Pickover book suggested analyzing DNA in this way), or that the DNA drew out text as a raster, but decided the bit density would be too low and didn't actually try this.

Then I asked myself: "How would I, myself, encode text in DNA?" I figured Unicode would be overkill, so probably it would be best to use either ASCII or a 6-bit encoding (maybe base-64). Then each pair of bits could be encoded with one of C, G, A, and T. Based on the singularityhub article I assumed the word "TRIUMPH" appeared, so I took "riumph" (to avoid possible capitalization issues with the first letter), encoded it as 6 bits or 8 bits per character, with any possible starting value for 'a', and the 24 possibilities for mapping C/G/A/T to 2 bits, and then tried big-endian and little-endian, to yield 15360 possible DNA encodings for the word. I was certain that one of them would have to appear. However, a brute force search found that none of them were in the DNA. At that point, I started trying even more implausible encodings, with no success, and started to worry that maybe the data was zipped first, which would make decoding almost impossible. I started to think about variable-length encodings (such as Huffman encodings), and then gave up for the night. I was wondering whether it would be worth a blog post on things that don't work for decrypting the code, or whether I should quit in silence.

In the middle of the night it occurred to me that the designers of the code were biologists, so I should think like a biologist not a computer scientist to figure out the code. With this perspective, it was obvious how to extend the genetic code to handle a larger character set. The problem was there were 64! possibilities. Or maybe 64^26 or 26 choose 64 or something like that; the exact number doesn't matter, but it's way too many to brute force like I did earlier. However, based on the quotes in the singularityhub article, I assumed that the substring " OUT O" appeared in the string, made a regular expression, and boom, it actually matched the watermark, confirming my theory. Then it was a quick matter of extending the technique to decode the full watermarks. As far as I can tell, the encoding used was arbitrary, and does not have any fundamental meaning. As I mentioned earlier, I'm leaving out the details for now, since I don't want to ruin anyone else's fun in decoding the message.
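To give a flavor of that last step: " OUT O" is six characters, so in triples it has the shape A B C D A B, with the first four triples all different. My search used a regular expression, but the check amounts to something like this Python sketch (the function name is mine):

def find_out_o(dna):
    "Look for six consecutive triples with the ' OUT O' shape: A B C D A B."
    # Assumes the triples start at the beginning of the watermark; in general
    # you would try all three reading frames.
    trips = [dna[i:i+3] for i in range(0, len(dna) - 2, 3)]
    for i in range(len(trips) - 5):
        t = trips[i:i+6]
        if t[0] == t[4] and t[1] == t[5] and len(set(t[:4])) == 4:
            print("possible ' OUT O' at triple", i, t)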

In conclusion, decoding the secret message was a fun challenge.

Your relationship: mathematically doomed or not?

I came across an interesting paper that uses a mathematical model of relationships to show that relationships are likely to fall apart unless you put more energy into them than you'd expect. To prove this, the paper, A Mathematical Model of Sentimental Dynamics Accounting for Marital Dissolution by José-Manuel Rey, uses optimal control theory to find the amount of effort to put into a relationship to maximize lifetime happiness. Unfortunately, due to a mathematical error, the most interesting conclusions of the paper are faulty. (Yes, this is wildly off topic for my blog.)

To briefly summarize the paper, it considers the feeling level of the relationship to be a function of time: x(t). The success of the relationship depends on the effort you put into the relationship, which is also a function of time: c(t). In the absence of effort, the feelings in the relationship will decay, but effort will boost the relationship. This is described with the catchphrase "The second thermodynamic law for sentimental interaction", which claims relationships deteriorate unless energy is added.
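In symbols, the feeling dynamics are something like the following (this is my reconstruction from the paper's description, so the notation may not match the paper exactly; r is the decay rate and a is the efficiency of effort):

\frac{dx}{dt} = -r\,x(t) + a\,c(t), \qquad r, a > 0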

Next, your overall happiness (i.e. utility) is based on two more functions. The first function U(x) indicates how much happiness you get out of the relationship. The better the relationship, the happier you are, but to a limit. The second function D(c) indicates how much it bothers you to work hard on the relationship. All else being equal, you want to put some amount of effort c* into the relationship. Putting more effort than that into the relationship drains you, and putting a lot more effort drains you a lot more.

To summarize the model so far, you need to put effort into the relationship or it will deteriorate. If you put more effort into the relationship, you'll be happier because the relationship is better, but unhappier because you're working so hard, so there's a tradeoff.

The heavy-duty mathematics comes into play now. You sum up your happiness over your entire life to obtain a single well-being number W. (A discount rate is used so what happens in the short term affects W more than what happens many years in the future.) Your total happiness is given by this equation:
Equation W
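Written out (again in my notation, not necessarily the paper's), W is the discounted integral of the happiness from the relationship minus the cost of the effort, with discount rate \rho > 0:

W = \int_0^{\infty} \left[\, U(x(t)) - D(c(t)) \,\right] e^{-\rho t}\, dt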
Your goal is to figure out how much effort to put into the relationship at every point in the future, to obtain the biggest value of W. (I.e. determine the function c(t).) By using optimal control theory, the "best" effort function will be a solution to the paper's Equation 2:
Equation 2
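I won't try to reproduce the paper's exact notation, but applying the standard optimal control machinery to the model above gives a pair of differential equations of roughly this shape (my reconstruction; the precise form doesn't matter for the argument that follows):

\dot{x} = a\,c - r\,x, \qquad \dot{c} = \frac{(\rho + r)\,D'(c) - a\,U'(x)}{D''(c)}

The first equation is just the feeling dynamics again; the second describes how the optimal effort must evolve over time.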
From this, you can compute how much effort to put into the relationship at every point in the future, and what the final destiny of the relationship will be. The paper determines that there is an optimal equilibrium point E for the relationship, and you should adjust the effort you put into the relationship in order to reach this point.

However, the paper has one major flaw at this point. It assumes that all trajectories satisfy Equation 2, not just the optimal one. (As the paper puts it, "The stable manifold is the only curve supporting trajectories leading to equilibrium.") From this, the paper reaches the erroneous conclusion that relationship dynamics are unstable, so a small perturbation will send the relationship spiraling off in another direction. In fact, a small perturbation will cause a small change in the relationship. The paper's second erroneous conclusion is that "effort inattentions" (small decreases in the effort put into the relationship) will cause the relationship to diverge down to zero, ending in breakup. In fact, the paper's model predicts that small decreases in the effort will cause small decreases in the relationship.

To make this clearer, I have annotated Figure 3 from the paper. My annotations are the "yellow notes":
Annotated Figure 3
The trajectories that the model obtains according to Equation 2 are mostly nonsensical, and exclude sensible trajectories, as I will explain.

Black line A' shows what happens if you start off putting "too much" effort into the relationship. You end up putting more and more effort into the relationship (upper right), which yields worse and worse well-being (W), turning the relationship into an obsession. Obviously this is neither sensible nor optimal.

The model claims that black line A'' explains relationship breakups, where the relationship effort drops to 0, followed by the collapse of the relationship below xmin. Again, this is a nonsensical result obtained by using Equation 2 where it does not apply. According to the model, effort c* is the easiest level of relationship effort to provide. Thus, a trajectory that drops below c* is mathematically forced to worsen the well-being function W, and doesn't make sense according to the paper's model.

The paper poses the "failure paradox", that relationships often fail even though people don't expect them to. This is explained as a consequence of "effort inattention", which drops your relationship from a good (equilibrium) trajectory to a deteriorating trajectory such as A''. I show above (in orange) two alternate trajectories that recover the relationship, rather than causing breakup. The horizontal orange line assumes that after some decline, you keep the relationship effort fixed, causing the relationship to reach the stable orange dot above Wu. The diagonal orange line shows that even if your relationship is on a downwards trajectory, you can reverse this by increasing the effort and reach the "best" point E. Note that both of these trajectories satisfy the basic model of the paper, and the trajectories achieve much higher wellbeing than trajectory A''. This proves that A'' is not an optimal trajectory. (I believe the optimal trajectory would actually be to jump back up to the yellow-green line as soon as possible and proceed to E.)

Another erroneous conclusion of the paper is that E is the unique equilibrium point and is an unstable equilibrium. In fact, any point along the upwards dotted diagonal line is a stable equilibrium point. Along this line, the effort c is the exact amount to preserve the relationship feeling level x at its current value. Any perturbation in the relationship feeling x will be exponentially damped out according to Equation 1. In other words, if the feeling level in the relationship gets shifted for some reason (and the effort is unchanged), it will move back to its original path. Alternatively, if the effort level changes by a small amount, it will cause a correspondingly small (but permanent) shift in the relationship feeling, moving along the diagonal line. The relationship will not suddenly shift to line A' or A'' and go crazy.
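To spell that out using my reconstruction of the dynamics above: if the feeling level is perturbed from x to x + \delta while the effort c is held fixed, the perturbation obeys \dot{\delta} = -r\,\delta, so \delta(t) = \delta(0)\,e^{-rt} and the relationship simply decays back to its original path.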

To summarize, the paper makes the faulty assumption that all relationship trajectories (and not just the optimal one) will follow Equation 2. As a result, the paper yields nonsensical conclusions: relationships can surge up to infinity (line A') or down to 0 (line A''). The paper also reaches the mistaken conclusions that relationships have a single equilibrium point, that this equilibrium point is unstable, and that temporary inattention can cause relationships to break up. These conclusions are all erroneous. When the paper's model is followed correctly, relationships are stable to a rather boring degree.

Overall, I found the mathematics much more convincing in Gottman's The Mathematics of Marriage: Dynamic Nonlinear Models which applies catastrophe theory (the hot math trend of the 70s) to marriage. To vastly oversimplify, once your relationship is in an unhappy state, it takes a great deal of effort to flip it back to a happy state. This is pretty much the exact opposite of the linear model that Rey uses.

Disclaimer: My original posting was rather hand-waving; I've significantly edited this article to fill in some of the mathematical details.

Credits: I came across this article via Andrew Sullivan. ("Hat tip" is just too precious a term for me to use.)

USB Panic Button with Linux and Python

This article describes how to use a USB Panic Button with Python. The panic button is a pushbutton that can be read over USB. Unfortunately, it only comes with drivers for Windows, so using it with Linux is a bit of a challenge. I found a Perl library that can read the Panic Button using low-level USB operations, so I ported that simple library to Python.

First, download and install PyUSB. (You may also need to install python-devel.)

Next, download my PanicButton library.

Finally, use the library. The API is very simple: create a button object with PanicButton(), and then call read() on the button to see if the button is pressed. For example, the following code will print "Pressed" when the button is pressed. (I know, a button like this should be more dramatic...)

import PanicButton
import time

button = PanicButton.PanicButton()

while 1:
  if button.read():
    print "Pressed"
  time.sleep(.5)
A couple caveats. First, you need to run the Python code as root in order to access the device. (Maybe you can get around this with udev magic, but I couldn't.) Second, the button actually triggers when it is released, not when it is pressed.

How the library works

By running lsusb, you can see that the Panic Button's USB id is 1130:0202. We use the PyUSB library to get a device object:
class PanicButton:
  def __init__(self):
    # Device is: ID 1130:0202 Tenx Technology, Inc. 
    self.dev = usb.core.find(idVendor=0x1130, idProduct=0x0202)
The Linux kernel grabs the device and makes it into a hidraw device, so we need to detach that kernel driver before using the device:
    try:
      self.dev.detach_kernel_driver(0)
    except Exception, e:
      pass # already unregistered
Reading the status of the device is done through a USB control transfer. All the magic numbers come from the PanicButton Perl library. See details (translated from German).
  def read(self):
    return self.dev.ctrl_transfer(bmRequestType=0xA1, bRequest=1, wValue=0x300, data_or_wLength=8, timeout=500)[0]
Hopefully this will be of help to anyone trying to interface the USB Panic Button through Python.