DNA-Based Computing of Strategic Assignment Problems

1. Introduction

In recent decades the word ‘‘computer’’ has become synonymous with the electronic computing machine, owing to the overwhelming success of that particular paradigm. Yet as we build ever faster electronic computers, we are beginning to reach physical limits beyond which our current technology cannot venture, and traditional silicon-based computers can no longer keep pace with user requirements. DNA computing is a comparatively new research area that is receiving increasing attention from both biologists and computer scientists.

DNA computing is a form of computing which uses DNA, biochemistry and molecular biology instead of traditional silicon-based computer technology. The field was initiated by Leonard Adleman of the University of Southern California in 1994 [1]. Adleman demonstrated a proof-of-concept use of DNA as a form of computation by solving the seven-point Hamiltonian path problem, a close relative of the traveling salesman problem (TSP), whose solution requires finding a path from start to end that passes through each point (city) exactly once. As the number of cities increases, the problem becomes more difficult until its solution is beyond analytical reach altogether, at which point it requires brute-force search methods. TSPs with a large number of cities quickly become computationally expensive, making them impractical to solve even on the latest supercomputers. Adleman's demonstration involved only seven cities, making it in some sense a trivial problem that can easily be solved by inspection. Nevertheless, his work is significant for a number of reasons. Since the initial Adleman experiments, advances have been made, and various Turing machines have been proven to be constructible [2, 3]. Adleman's experiment is regarded as the first example of true nanotechnology.

DNA computing is fundamentally similar to parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once [4]. In the cell, DNA is modified biochemically by a variety of enzymes, tiny protein machines that read and process DNA according to nature's design. There is a wide variety of these "operational" proteins, which manipulate DNA on the molecular level. For example, there are enzymes that cut DNA and enzymes that paste it back together; other enzymes function as copiers, and still others as repair units. Molecular biology, biochemistry, and biotechnology have developed techniques that allow us to perform many of these cellular functions in the test tube. It is this cellular machinery, along with some synthetic chemistry, that makes up the palette of operations available for computation. Just as a CPU has a basic suite of operations like addition, bit-shifting, and logical operators (AND, OR, NOT, NOR) that allow it to perform even the most complex calculations, DNA has cutting, copying, pasting, repairing, and many others. Note that in the test tube, enzymes do not function sequentially, working on one DNA molecule at a time; rather, many copies of an enzyme can work on many DNA molecules simultaneously. This is the power of DNA computing: it can work in a massively parallel fashion.

In 2002, researchers from the Weizmann Institute of Science in Rehovot, Israel, unveiled a programmable molecular computing machine composed of enzymes and DNA molecules instead of silicon microchips [5]. In January 2013, researchers were able to store a JPEG photograph, a set of Shakespearean sonnets, and an audio file of Martin Luther King, Jr.'s speech “I Have a Dream” on DNA digital data storage [6].

2. Conventional vs. DNA Computing

Unlike conventional computers, which operate linearly, taking on tasks one at a time, DNA computers perform many calculations in parallel. It is this parallelism that allows a DNA computer to solve complex mathematical problems in hours, whereas an electronic computer might take hundreds of years to complete them. Increasing the performance of silicon computing typically means faster clock cycles (and larger data paths), where the emphasis is on the speed of the CPU rather than the size of the memory; better performance cannot be achieved merely by doubling the clock speed or doubling the RAM of the system. In DNA-based computing, by contrast, performance improvements come from memory capacity and parallel processing.

Molecular computing is massively parallel computing that exploits the computational power of molecules, usually biological molecules. It is another way to surmount certain limitations of traditional silicon-based computing. We can sidestep the size limits of traditional computers by making processors as small as a molecule; this yields circuits thousands of times smaller than those of traditional silicon-based computers. Cells, the living parts of organisms, are the ingredients for computation: they provide the basic idea of computing, since these tiny parts are complete machines that perform all the processing for the organism's activities. Molecular computing can overcome two major limitations of silicon-based traditional computers: storage capacity and processing speed. In this technique, enzymes acting on DNA molecules are used to solve a particular problem.

DNA performs operations in parallel, while conventional, silicon-based computers typically handle operations sequentially. A modern CPU basically repeats the same fetch-and-execute cycle (fetch an instruction and the appropriate data from main memory, then execute it) over and over again, many times in a row, very fast. Typically, increasing the performance of silicon computing means faster clock cycles, placing the emphasis on the speed of the CPU rather than the size of the memory. By contrast, the power of DNA computing comes from its memory capacity and parallel processing. In other words, DNA loses its appeal if forced to behave sequentially.

DNA's key advantage is that it can make computers smaller than any computer that has come before, while at the same time holding more data. One pound of DNA has the capacity to store more information than all the electronic computers ever built, and the computing power of a teardrop-sized DNA computer, using DNA logic gates, would exceed that of the world's most powerful supercomputer. More than 10 trillion DNA molecules can fit into an area no larger than 1 cubic centimeter (0.06 cubic inches). With this small amount of DNA, a computer could hold 10 terabytes of data and perform 10 trillion calculations at a time; by adding more DNA, more calculations could be performed. DNA computers have the potential to take computing to new levels, picking up where Moore's Law leaves off. There are several advantages to using DNA instead of silicon:

•  DNA computers are many times smaller than today's computers.

•  The large supply of DNA makes it a cheap resource.

•  As long as there are cellular organisms, there will always be a supply of DNA.

•  Unlike the toxic materials used to make traditional microprocessors, DNA biochips can be made cleanly.

•  They are based on massively parallel data processing.

•  They offer very large data storage capacity.

•  They have low power consumption.

For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. Furthermore, particular mathematical computations have been demonstrated to work on a DNA computer. As an example, DNA molecules have been utilized to tackle the assignment problem [7].

With a conventional computer, one method would be to set up a search tree, measure each complete branch sequentially, and keep the shortest one. A very laborious alternative would be to generate all possible paths and then search the entire list. With this algorithm, generating the entire list of routes for a 20-city problem could theoretically take 45 million GBytes of memory, and a 100 MIPS (million instructions per second) computer would need two years just to generate all the paths (assuming one instruction cycle to generate each city in every path). Using DNA computing, however, this method becomes feasible: 10^15 molecules is just a nanomole of material, a relatively small quantity in biochemistry. Moreover, routes no longer have to be searched sequentially; operations can all be done in parallel (Adams).
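The scale of this brute-force search can be checked with a few lines of code (a rough sketch: the one-byte-per-city storage constant is an illustrative assumption, not the authors' exact memory model):

```python
import math

def route_count(n_cities):
    """Paths from a fixed start city visiting the remaining cities in any order."""
    return math.factorial(n_cities - 1)

def naive_storage_bytes(n_cities, bytes_per_city=1):
    """Memory needed to list every path explicitly, one byte per city visited."""
    return route_count(n_cities) * n_cities * bytes_per_city

# For 20 cities there are 19! orderings; even listing them is astronomically
# expensive, which is the point the text makes about brute-force search.
paths_20 = route_count(20)
bytes_20 = naive_storage_bytes(20)
```

Whatever constants one assumes, the factorial growth is what makes the sequential approach hopeless and the molecular, all-at-once approach attractive.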

3. DNA Sequencing

DNA sequencing is the process of determining the precise order of nucleotides within a DNA molecule. DNA is a long polymer made from repeating units called nucleotides [8, 9, 10]. Sequencing includes any method or technology used to determine the order of the four bases, adenine (A), guanine (G), cytosine (C), and thymine (T), in a strand of DNA. Figure 1 shows the DNA double helix. The double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases. DNA is well suited for biological information storage: the DNA backbone is resistant to cleavage, and both strands of the double-stranded structure store the same biological information, which is replicated as the two strands are separated.

DNA sequencing may be used to determine the sequence of individual genes, larger genetic regions (i.e. clusters of genes or operons), full chromosomes or entire genomes. Sequencing provides the order of individual nucleotides in DNA or RNA (commonly represented as A, C, G, T, and U) isolated from cells of animals, plants, bacteria, archaea, or virtually any other source of genetic information, and the advent of rapid DNA sequencing methods has greatly accelerated biological and medical research and discovery. To encode information using DNA, one can choose an encoding scheme mapping the original alphabet onto strings over {A, C, G, T}, and then synthesize the resulting information-encoding strings as DNA single strands. An A can chemically bind to an opposing T on another single strand, while C can similarly bind to G. Bases that can bind in this way are called Watson-Crick (W-C) complementary, and two DNA single strands with opposite orientation and with W-C-complementary bases at each position can bind to each other to form a DNA double strand in a process called base pairing. Knowledge of DNA sequences has become indispensable for basic biological research and for numerous applied fields such as diagnostics, biotechnology, forensic biology, virology and biological systematics. Each purine base is the Watson-Crick complement of a unique pyrimidine base (and vice versa): adenine and thymine form a complementary pair, as do guanine and cytosine, so the complement of A is T, of T is A, of C is G, and of G is C. The chemical bonds within the two W-C pairs differ: C and G pair through three hydrogen bonds, while A and T pair through two hydrogen bonds.

A DNA sequence that completely describes a processor on decoding is bound to be complicated. Such complicated sequences will be difficult to work with during the hybridization phase unless they are well formed and ordered; thus a good encoding scheme must be biologically realistic, yet well ordered. By placing related nodes near each other, we allow localization of reactions, wherein related characteristics are changed simultaneously. This bio-inspired node placement scheme provides a good platform on which to build simple hybridization rules.

A computation consists of a succession of bio-operations (Kari, 1997), such as cutting and pasting DNA strands, separating DNA sequences by length, extracting DNA sequences containing a given pattern, or making copies of DNA strands. The DNA strands representing the output of the computation can then be read out and decoded. The rapid speed attained with modern DNA sequencing technology has been instrumental in the sequencing of complete genomes of numerous types and species of life, including the human genome and the complete DNA sequences of many other animal, plant, and microbial species. DNA computing is based on the idea that molecular biology processes can be used to perform arithmetic and logic operations on information encoded as DNA strands.

For a collection of interacting entities, one measure commonly used to assess the system's properties is the free energy. The stability and form of a secondary configuration is usually governed by this energy, the general rule of thumb being that a secondary structure minimizes the free energy associated with a DNA sequence. The free energy of a secondary structure is determined by the energy of its constituent pairings and, consequently, its loops. Some simple dynamic programming techniques can be used to approximately determine the base pairings in the secondary structure of an oligonucleotide DNA sequence; among these, the Nussinov-Jacobson (NJ) folding algorithm is one of the simplest and most widely used.
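A minimal version of the Nussinov-Jacobson dynamic program can be sketched as follows. It maximizes the number of Watson-Crick pairs subject to a minimum loop size, a crude stand-in for the free-energy models used in practice:

```python
def nussinov(seq, min_loop=3):
    """Maximum number of Watson-Crick base pairs in a (pseudoknot-free)
    secondary structure of seq, scoring +1 per pair; loops must enclose
    at least min_loop unpaired bases."""
    pairs = {("A", "T"), ("T", "A"), ("C", "G"), ("G", "C")}
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):           # distance between i and j
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                   # case: j left unpaired
            for k in range(i, j - min_loop):      # case: pair position k with j
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

For a hairpin-like sequence such as GGGAAACCC the recursion finds the three G-C pairs closing an AAA loop, which is the kind of structure the free-energy rule of thumb above predicts.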

4. DNA-Based Computing Devices

DNA-based computing devices can be built in several well-known ways, each with its own distinctive features. Most of these build the basic logic gates (AND, OR, NOT) associated with digital logic from a DNA basis. The different bases include DNAzymes, deoxyoligonucleotides, enzymes, DNA tiling, and the polymerase chain reaction; in the deoxyribozyme-based approach, for example, the switches use oligonucleotides as both input and output.

4.1. Enzymes

Many enzymes contain highly active molecules that can serve as components of DNA-based computing. Enzyme-based DNA computers are usually of the form of a simple Turing machine: there is analogous hardware, in the form of an enzyme, and software, in the form of DNA [11]. Benenson, Shapiro and colleagues demonstrated a DNA computer using the FokI enzyme [12] and expanded on their work by going on to show automata that diagnose and react to prostate cancer: under-expression of the genes PPAP2B and GSTP1 and over-expression of PIM1 and HPN [13]. Their automata evaluated the expression of each gene, one gene at a time, and on a positive diagnosis released a single-stranded DNA molecule (ssDNA) that is an antisense for MDM2. MDM2 is a repressor of protein p53, which itself is a tumor suppressor [14]. On a negative diagnosis, it was decided to release a suppressor of the positive-diagnosis drug instead of doing nothing. A limitation of this implementation is that two separate automata are required, one to administer each drug. The entire process of evaluation until drug release took around an hour to complete. This method also requires transition molecules as well as the FokI enzyme to be present; the requirement for the FokI enzyme limits application in vivo, at least for use in "cells of higher organisms" [15]. It should also be pointed out that the 'software' molecules can be reused in this case.

4.2. DNAzymes

A DNAzyme, also known as a deoxyribozyme, is catalytic DNA that catalyzes a reaction when it interacts with the appropriate input, such as a matching oligonucleotide. These DNAzymes are used to build logic gates analogous to digital logic in silicon; however, DNAzymes are limited to 1-, 2-, and 3-input gates, with no current implementation for evaluating statements in series.

The DNAzyme logic gate changes its structure when it binds to a matching oligonucleotide and the fluorogenic substrate it is bonded to is cleaved free. While other materials can be used, most models use a fluorescence-based substrate because it is very easy to detect, even at the single molecule limit [16]. The amount of fluorescence can then be measured to tell whether or not a reaction took place. The DNAzyme that changes is then “used,” and cannot initiate any more reactions. Because of this, these reactions take place in a device such as a continuous stirred-tank reactor, where old product is removed and new molecules added.

Two commonly used DNAzymes are named E6 and 8-17. These are popular because they allow cleaving of a substrate in any arbitrary location [17]. Stojanovic and MacDonald have used the E6 DNAzymes to build the MAYA I [18] and MAYA II [19] machines, respectively; Stojanovic has also demonstrated logic gates using the 8-17 DNAzyme [20]. While these DNAzymes have been demonstrated to be useful for constructing logic gates, they are limited by the need for a metal cofactor to function, such as Zn2+ or Mn2+, and thus are not useful in vivo [16, 21].

A stem loop, consisting of a single strand of DNA with a loop at one end, is a dynamic structure that opens and closes when a piece of DNA bonds to the loop part. This effect has been exploited to create several logic gates, which in turn have been used to build the MAYA I and MAYA II computers that can play tic-tac-toe to some extent [22].

4.3. Toehold Exchange

DNA computers have also been constructed using the concept of toehold exchange. In this system, an input DNA strand binds to a sticky end, or toehold, on another DNA molecule, which allows it to displace another strand segment from the molecule. This allows the creation of modular logic components such as AND, OR, and NOT gates and signal amplifiers, which can be linked into arbitrarily large computers. This class of DNA computers does not require enzymes or any chemical capability of the DNA [23].
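The displacement step can be caricatured with strings (a toy model under strong simplifying assumptions: strand orientation, kinetics, free energy and partial binding are all ignored, so this only illustrates the toehold-then-displace logic, not the chemistry):

```python
def wc_complement(strand):
    # Watson-Crick complement; orientation is deliberately ignored in this model.
    return strand.translate(str.maketrans("ACGT", "TGCA"))

def displace(template, bound_output, input_strand, toehold_len=4):
    """Return the released output strand if input_strand can displace it.

    template: the long gate strand; its first toehold_len bases are exposed.
    bound_output: strand currently hybridized to the template past the toehold.
    An input complementary to the toehold plus the whole bound region wins the
    branch-migration race and frees the output; anything else does nothing.
    """
    target = template[:toehold_len + len(bound_output)]
    if input_strand == wc_complement(target):
        return bound_output          # branch migration completes: output freed
    return None                      # no toehold match: no displacement

template = "ACGTTTGGCCAATT"                  # first 4 bases act as the toehold
output = wc_complement(template[4:])         # strand bound beyond the toehold
released = displace(template, output, wc_complement(template))
```

Cascading such gates (the released strand serving as the input to the next gate) is what lets toehold-exchange components compose into larger circuits, as the paragraph above describes.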

5. Basic Components of a DNA-Based Computer

The entire DNA strand for a particular processing unit is split into two parts: the Active Component and the Passive Component. The Active Component participates in hybridization; it represents the architectural details as well as the instruction-set details. It is created in the encoding stage and expanded in the hybridization stage, i.e., basic architectural details such as a 128-bit instruction length are added to the DNA string in the encoding stage, while finer aspects such as the use of a Carry Save Adder (CSA) are added to the string during the recombination stage. The Passive Component is formed in the mutation stage. It consists of the Finite State Machine (FSM) description, the netlist and the basic placement details of the processor; the netlist defines the connectivity across the various components of an electronic design, while placement is the process of assigning exact locations to the various components in the chip's core area. The active components of different processors react with each other during hybridization, while the passive components are formed during mutation and do not take part actively in hybridization. However, the FSM details of the two processors are inherited by the offspring based on its instruction set, and these inherited FSMs are then used as guidelines to form the actual FSM of the offspring. Some major components of a DNA-based computer are:

5.1. DNA Field-Effect Transistor

A DNA field-effect transistor (DNAFET) is a field-effect transistor which uses the field effect due to the partial charges of DNA molecules to function as a biosensor. A biosensor is an analytical device, used for the detection of an analyte, that combines a biological component with a physicochemical detector. A biosensor typically consists of a bio-recognition component, a biotransducer component, and an electronic system which includes a signal amplifier, processor, and display. Transducers and electronics can be combined, e.g., in CMOS-based microsensor systems [24, 25]. Biosensors can be classified by their biotransducer type; the most common types used in biosensors are 1) electrochemical, 2) optical, 3) electronic, 4) piezoelectric, 5) gravimetric, and 6) pyroelectric biosensors.

An important step in building a biosensor is attaching the biological elements (small molecules/proteins/cells) to the surface of the sensor (be it metal, polymer or glass). The simplest way is to functionalize the surface in order to coat it with the biological elements; in the case of silicon chips/silica glass this can be done with polylysine, aminosilane, epoxysilane or nitrocellulose. Subsequently the bound biological agent may be fixed, for example by layer-by-layer deposition of alternately charged polymer coatings.

In electronics, a transistor controls the flow of electrons along a circuit. Similarly, in biology, a transcriptor (a DNA-based transistor) controls the flow of a specific protein, RNA polymerase, as it travels along a strand of DNA. Transcriptor-based gates alone do not constitute a computer, but they are the third and final component of a biological computer that could operate within individual living cells.

Arrays of DNAFETs can be used for detecting single-nucleotide polymorphisms (the cause of many hereditary diseases) and for DNA sequencing. Their main advantage compared to the optical detection methods in common use today is that they do not require labeling of molecules; furthermore, they work continuously and in (near) real time. DNAFETs are highly selective, since only specific binding modulates charge transport. The structure of DNAFETs is similar to that of MOSFETs, with the exception of the gate structure, which in DNAFETs is replaced by a layer of immobilized ssDNA (single-stranded DNA) molecules that act as surface receptors. When complementary DNA strands hybridize to the receptors, the charge distribution near the surface changes, which in turn modulates current transport through the semiconductor transducer. A transducer converts a signal in one form of energy into another form of energy; a sensor detects a parameter in one form and reports it in another, often as an electrical signal.

5.2. DNA-Based Storage System

In a DNA computer, data are stored in the base sequence of DNA, in digital form. DNA could be used to record data in a living cell, without the silicon chips used in conventional computers; this could allow us to track cell divisions in order to study cell processes such as development, ageing and the changes that occur in cancer. The technology uses artificial DNA, made with commercially available oligonucleotide synthesis machines, for storage, and DNA sequencing machines for retrieval. Early researchers used a simple code in which bits were mapped one-to-one onto bases, which had the shortcoming of producing long runs of the same base, whose sequencing is error-prone. These results showed that, besides its other functions, DNA can also serve as a storage medium in the way hard drives and magnetic tapes do.
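The shortcoming of the one-to-one bit-to-base code can be illustrated directly (the 0 -> A, 1 -> C mapping below is an arbitrary choice for illustration, not the code any particular group used):

```python
def naive_encode(bits, zero="A", one="C"):
    """One-to-one bit-to-base code: each bit becomes a single base."""
    return "".join(zero if b == "0" else one for b in bits)

def longest_run(strand):
    """Length of the longest run of a single repeated base (strand non-empty)."""
    best = run = 1
    for prev, cur in zip(strand, strand[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Eight identical bits in the input become an eight-base homopolymer,
# exactly the error-prone run the text warns about.
strand = naive_encode("00000000" + "1")
```

Later schemes (e.g. rotating codes that never repeat a base) were designed precisely to keep `longest_run` small.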

This type of storage system is more compact than current magnetic tape or hard drive storage systems due to the data density of DNA. It also has the capability for longevity, as long as the DNA is held in cold, dry and dark conditions, as is shown by the study of woolly mammoth DNA from up to 60,000 years ago, and for resistance to obsolescence, as DNA is a universal and fundamental data storage mechanism in biology. These features have led the researchers involved in its development to call this method of data storage "apocalypse-proof" because "after a hypothetical global disaster, future generations might eventually find the stores and be able to read them" [24]. It is, however, a slow process, as the DNA needs to be sequenced in order to retrieve the data, and so the method is intended for uses with a low access rate, such as long-term archival of large amounts of scientific data [26, 27].

DNA is directional, and a team led by Jerome Bonnet at Stanford University has found a way to encode information using enzymes that can flip small pieces of DNA between their normal direction, which acts as a 0, and the opposite direction, which acts as a 1. Each such piece can therefore store one bit of data, and the process used is reliable enough to write and rewrite information again and again.

On August 16, 2012, the journal Science published research by George Church and colleagues at Harvard University, in which DNA was encoded with digital information that included an HTML draft of a 53,400 word book written by the lead researcher, eleven JPG images and one JavaScript program. Multiple copies for redundancy were added and 5.5 petabits can be stored in each cubic millimeter of DNA [28].


5.3. DNA Microarray

DNA Microarrays are small, solid supports onto which the sequences from thousands of different genes are immobilized, or attached, at fixed locations. The supports themselves are usually glass microscope slides, the size of two side-by-side pinky fingers, but can also be silicon chips or nylon membranes. The DNA is printed, spotted, or actually synthesized directly onto the support. With the aid of a computer, the amount of mRNA bound to the spots on the microarray is precisely measured, generating a profile of gene expression in the cell. It is important that the gene sequences in a microarray are attached to their support in an orderly or fixed way, because a researcher uses the location of each spot in the array to identify a particular gene sequence. The spots themselves can be DNA, cDNA, or oligonucleotides.

When a gene is activated, cellular machinery begins to copy certain segments of that gene. The resulting product is known as messenger RNA (mRNA), which is the body's template for creating proteins. The mRNA produced by the cell is complementary to the original portion of the DNA strand from which it was copied, and therefore will bind to it. To determine which genes are turned on and which are turned off in a given cell, the messenger RNA molecules present in that cell are first collected. Each mRNA molecule is then labeled by using a reverse transcriptase (RT) enzyme that generates a complementary cDNA to the mRNA; during this process, fluorescent nucleotides are attached to the cDNA. The tumor and the normal samples are labeled with different fluorescent dyes. Next, the labeled cDNAs are placed onto a DNA microarray slide, as shown in Figure 2. The labeled cDNAs that represent mRNAs in the cell will then hybridize, or bind, to their synthetic complementary DNAs attached on the microarray slide, leaving their fluorescent tags. A special scanner is used to measure the fluorescent intensity of each spot on the microarray slide.

There are two major application forms for the DNA microarray technology:

1. Identification of sequence (gene / gene mutation)

2. Determination of expression level (abundance) of genes of one sample or comparing gene transcription in two or more different kinds of cells

6. The Adleman Experiment

There is no better way to understand how something works than by going through an example step by step. So let’s solve our own directed Hamiltonian Path problem, using the DNA methods demonstrated by Adleman. The concepts are the same but the example has been simplified to make it easier to follow and present.

Suppose that a person ‘X’ lives in Los Angeles (LA) and wants to visit four cities: Dallas, Chicago, Miami, and New York (NY), with New York being the final destination, as shown in Figure 3. The airline he takes has a specific set of connecting flights that restrict which routes he can take (e.g. there is a flight from Los Angeles to Chicago, but no flight from Miami to Chicago). What should his itinerary be if he wants to visit each city only once?

It should take you only a moment to see that there is only one route. Starting from Los Angeles you need to fly to Chicago, Dallas, Miami and then to New York. Any other choice of cities will force you to miss a destination, visit a city twice, or not make it to New York. For this example you obviously don’t need the help of a computer to find a solution. For six, seven, or even eight cities, the problem is still manageable. However, as the number of cities increases, the problem quickly gets out of hand. Assuming a random distribution of connecting routes, the number of itineraries you need to check increases exponentially. Pretty soon you will run out of pen and paper listing all the possible routes, and it becomes a problem for a computer.

The method Adleman used to solve this problem is basically the shotgun approach mentioned previously. He first generated all the possible itineraries and then selected the correct itinerary. This is the advantage of DNA. It’s small and there are combinatorial techniques that can quickly generate many different data strings. Since the enzymes work on many DNA molecules at once, the selection process is massively parallel. Specifically, the method based on Adleman’s experiment would be as follows:

•  Generate all possible routes.

•  Select itineraries that start with the proper city and end with the final city.

•  Select itineraries with the correct number of cities.

•  Select itineraries that contain each city only once.

All of the above steps can be accomplished with standard molecular biology techniques.

6.1. Technique for Generating Routes

Strategy: Encode city names in short DNA sequences. Encode itineraries by connecting the city sequences for which routes exist.

DNA can simply be treated as a string of data. For example, each city can be represented by a "word" of six bases:

Los Angeles: GCTACG
Chicago: CTAGTA
Dallas: TCGTAC
Miami: CTACGG
New York: ATGCCG

The entire itinerary can be encoded by simply stringing together the DNA sequences that represent specific cities. For example, the route LA -> Chicago -> Dallas -> Miami -> New York would simply be GCTACGCTAGTATCGTACCTACGGATGCCG, or equivalently it could be represented in double-stranded form with its complement sequence.

Synthesizing short single-stranded DNA is now a routine process, so encoding the city names is straightforward. The molecules can be made by a machine called a DNA synthesizer or even custom-ordered from a third party. Itineraries can then be produced from the city encodings by linking them together in the proper order. To accomplish this you can take advantage of the fact that DNA hybridizes with its complementary sequence. For example, you can encode the route between two cities as the complement of the second half (last three letters) of the departure city followed by the first half (first three letters) of the arrival city. The route between Miami (CTACGG) and NY (ATGCCG) can thus be made by taking the second half of the coding for Miami (CGG) and the first half of the coding for NY (ATG). This gives CGGATG. By taking the complement of this you get GCCTAC, which not only uniquely represents the route from Miami to NY, but will connect the DNA representing Miami and NY by hybridizing to the second half of the code representing Miami (...CGG) and the first half of the code representing NY (ATG...). Figure 4 shows the idea of route encoding using DNA sequences.
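The route-encoding rule just described can be written down directly, using the city words given in the text (Miami = CTACGG, NY = ATGCCG). As in the text's worked example, we take the plain Watson-Crick complement and leave strand orientation out of the picture:

```python
def wc_complement(strand):
    """Watson-Crick complement: A <-> T, C <-> G (orientation ignored)."""
    return strand.translate(str.maketrans("ACGT", "TGCA"))

def route_linker(depart, arrive):
    """Linker strand for a route: complement of (second half of the departure
    city word + first half of the arrival city word)."""
    half = len(depart) // 2
    return wc_complement(depart[half:] + arrive[:half])

miami, ny = "CTACGG", "ATGCCG"
linker = route_linker(miami, ny)   # reproduces the text's GCCTAC example
```

Because the linker overlaps the tail of one city word and the head of the next, mixing city strands and linker strands lets hybridization assemble whole itineraries spontaneously, which is the step the next paragraph describes.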

Random itineraries can be made by mixing city encodings with the route encodings, as shown in Figure 5. Finally, the DNA strands can be connected together by an enzyme called ligase. What we are left with are strands of DNA representing itineraries with a random number of cities and a random set of routes.

We can be confident that we have all possible combinations, including the correct one, by using an excess of DNA encodings, say 10^13 copies of each city and each route between cities. Remember, DNA is a highly compact data format, so the numbers are on our side.

6.2. Itinerary Selection: Start and End with the Correct Cities

Strategy: Selectively copy and amplify only the section of the DNA that starts with LA and ends with NY by using the Polymerase Chain Reaction.

After generating the routes, we now have a test tube full of various lengths of DNA that encode possible routes between cities. What we want are routes that start with LA and end with NY. To accomplish this we can use a technique called the Polymerase Chain Reaction (PCR), which allows you to produce many copies of a specific sequence of DNA. PCR is an iterative process that cycles through a series of copying events using an enzyme called polymerase. Polymerase will copy a section of single-stranded DNA starting at the position of a primer, a short piece of DNA complementary to one end of the section of DNA that you're interested in. By selecting primers that flank the section of DNA you want to amplify, the polymerase preferentially amplifies the DNA between these primers, doubling the amount of DNA containing this sequence. After many iterations of PCR, the DNA you're working on is amplified exponentially. So to selectively amplify the itineraries that start and stop with our cities of interest, we use primers that are complementary to LA and NY. What we end up with after PCR is a test tube full of double-stranded DNA of various lengths, encoding itineraries that start with LA and end with NY.
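In software, the net effect of this PCR selection (though of course not its chemistry or its exponential copying) amounts to a simple filter on the candidate pool; a toy sketch using the city codes quoted earlier in the text:

```python
# Toy stand-in for the PCR selection step: keep only itineraries that
# begin with the LA code and end with the NY code (codes from the text).
LA, NY = "GCTACG", "ATGCCG"

def pcr_select(strands):
    return [s for s in strands if s.startswith(LA) and s.endswith(NY)]

pool = [
    "GCTACGCTAGTATCGTACCTACGGATGCCG",  # LA ... NY (keep)
    "CTAGTATCGTACATGCCG",              # starts with Chicago (discard)
    "GCTACGCTAGTA",                    # starts with LA but ends mid-route (discard)
]
print(pcr_select(pool))
```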

6.3. Itineraries Selection: Based on Correct Number of Cities

Strategy: Sort the DNA by length and select the DNA whose length corresponds to 5 cities.

Our test tube is now filled with DNA encoded itineraries that start with LA and end with NY, where the number of cities in between LA and NY varies. We now want to select those itineraries that are five cities long. To accomplish this we can use a technique called Gel Electrophoresis as shown in Figure 6, which is a common procedure used to resolve the size of DNA. The basic principle behind Gel Electrophoresis is to force DNA through a gel matrix by using an electric field. DNA is a negatively charged molecule under most conditions, so if placed in an electric field it will be attracted to the positive potential. However since the charge density of DNA is constant (charge per length) long pieces of DNA move as fast as short pieces when suspended in a fluid. This is why you use a gel matrix. The gel is made up of a polymer that forms a meshwork of linked strands. The DNA now is forced to thread its way through the tiny spaces between these strands, which slows down the DNA at different rates depending on its length. What we typically end up with after running a gel is a series of DNA bands, with each band corresponding to a certain length. We can then simply cut out the band of interest to isolate DNA of a specific length. Since we know that each city is encoded with 6 base pairs of DNA, knowing the length of the itinerary gives us the number of cities. In this case we would isolate the DNA that was 30 base pairs long (5 cities times 6 base pairs).
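The gel-electrophoresis step can likewise be mimicked in software as a filter on strand length; a minimal sketch, assuming 6 bases per city as in the text:

```python
# Software analogue of the gel step: keep only strands whose length
# matches a five-city itinerary (6 bases per city, per the text).
def gel_select(strands, n_cities, code_len=6):
    return [s for s in strands if len(s) == n_cities * code_len]

pool = [
    "GCTACGCTAGTATCGTACCTACGGATGCCG",        # 30 bases: 5 cities (keep)
    "GCTACGCTAGTAATGCCG",                    # 18 bases: 3 cities (discard)
    "GCTACGCTAGTATCGTACCTACGGCTACGGATGCCG",  # 36 bases: 6 cities (discard)
]
print(gel_select(pool, n_cities=5))
```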

6.4. Itineraries Selection: Have a Complete Set of Cities

Strategy: Successively filter the DNA molecules by city, one city at a time. Since the DNA we start with contains five cities, we will be left with strands that encode each city once.

DNA containing a specific sequence can be purified from a sample of mixed DNA by a technique called affinity purification, as shown in Figure 7. This is accomplished by attaching the complement of the sequence in question to a substrate like a magnetic bead. The beads are then mixed with the DNA. DNA that contains the sequence you're after then hybridizes with the complement sequence on the beads. These beads can then be retrieved and the DNA isolated.

We now affinity-purify five times, using a different city complement for each run. For example, for the first run we use L.A.'-beads (where the ' indicates the complement strand) to fish out DNA sequences which contain the encoding for L.A. (which should be the entire DNA because of step 3), the next run we use Dallas'-beads, and then Chicago'-beads, Miami'-beads, and finally NY'-beads. The order isn't important. If an itinerary is missing a city, then it will not be "fished out" during one of the runs and will be removed from the candidate pool. What we are left with are the itineraries that start in LA, visit each city once, and end in NY. This is exactly what we are looking for. If the answer exists, we would retrieve it at this step.
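The successive bead passes reduce, in software terms, to a chain of containment filters; a toy sketch (substring matching stands in for hybridization, ignoring matches that straddle code boundaries):

```python
# Software analogue of affinity purification: retain only strands that
# contain every city code, one "bead" pass per city (order irrelevant).
def affinity_purify(strands, city_codes):
    for code in city_codes:
        strands = [s for s in strands if code in s]
    return strands

codes = ["GCTACG", "CTAGTA", "TCGTAC", "CTACGG", "ATGCCG"]  # LA..NY, from the text
pool = [
    "GCTACGCTAGTATCGTACCTACGGATGCCG",  # all five cities (keep)
    "GCTACGCTAGTACTAGTACTACGGATGCCG",  # Chicago twice, Dallas missing (discard)
]
print(affinity_purify(pool, codes))
```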

Adleman's experiment solved a seven city problem, but there are two major shortcomings preventing a large scaling up of his computation. The complexity of the traveling salesman problem simply doesn’t disappear when applying a different method of solution - it still increases exponentially. For Adleman’s method, what scales exponentially is not the computing time, but rather the amount of DNA. Unfortunately this places some hard restrictions on the number of cities that can be solved; after the Adleman article was published, more than a few people have pointed out that using his method to solve a 200 city HP problem would take an amount of DNA that weighed more than the earth. Another factor that places limits on his method is the error rate for each operation. Since these operations are not deterministic but stochastically driven (we are doing chemistry here), each step contains statistical errors, limiting the number of iterations you can do successively before the probability of producing an error becomes greater than producing the correct result. For example an error rate of 1% is fine for 10 iterations, giving less than 10% error, but after 100 iterations this error grows to 63%.

7. Nussinov-Jacobson(NJ) Algorithm

The NJ algorithm provides some guidelines for DNA code design. The NJ algorithm is based on the assumption that in a DNA sequence q1q2...qn, the energy of interaction, α(qi, qj), between the pair of bases (qi, qj) is independent of all other base pairs. The interaction energies α(qi, qj) are negative quantities whose values usually depend on the actual choice of the base pair (qi, qj). Let Ei,j denote the minimum free energy of the subsequence qi...qj. The independence assumption allows us to compute the minimum free energy of the sequence q1q2...qn through the recursion

Ei,j = min{ Ei+1,j−1 + α(qi, qj), min i≤k<j (Ei,k + Ek+1,j) }          (1)

where Ei,i = Ei,i−1 = 0 for i = 1, 2, ..., n. The value of E1,n is the minimum free energy of a secondary structure of q1q2...qn. Note that E1,n ≤ 0. A large negative value for the free energy, E1,n, of a sequence is a good indicator of the presence of a secondary structure in the physical DNA sequence.

The NJ algorithm can be described in terms of free-energy tables. In a free-energy table, the entry at position (i, j) (the top left position being (1,1)) contains the value of Ei,j. The table is filled out by initializing the entries on the main diagonal and on the first lower sub-diagonal of the matrix to zero, and calculating the energy levels according to the recursion in (1). The calculations proceed successively through the upper diagonals: entries at positions (1, 2), (2, 3), ..., (n−1, n) are calculated first, followed by entries at positions (1, 3), (2, 4), ..., (n−2, n), and so on. Note that the entry at (i, j), j > i, depends on α(qi, qj) and the entries at (i, l), l = i, ..., j−1, at (l, j), l = i+1, ..., j, and at (i+1, j−1). The complexity of the NJ algorithm is O(n³), since each of the O(n²) entries requires O(n) computations. The minimum-energy secondary structure itself can be found by a backtracking algorithm which retraces the steps of the NJ algorithm. From a DNA code design point of view, it would be of considerable interest to determine a set of amenable properties that oligonucleotide sequences should possess so as to either facilitate testing for secondary structure, or exhibit a very low probability of forming such a structure. The overall complexity of computing the free-energy tables of a DNA codeword q1q2...qn and all of its cyclic shifts is O(n³).
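A minimal Python implementation of recursion (1) illustrates how the table is filled diagonal by diagonal in O(n³) time. The interaction energy used here is a toy assumption (−1 for every Watson-Crick pair, 0 otherwise); real pair energies are sequence-dependent:

```python
# Sketch of the NJ free-energy recursion with a toy pair energy.
def alpha(a, b):
    """Assumed interaction energy: -1 for Watson-Crick pairs, 0 otherwise."""
    return -1 if (a, b) in {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")} else 0

def min_free_energy(q):
    n = len(q)
    # E[i][j] is E_{i+1,j+1} in the text's 1-based notation; entries with
    # j <= i (main diagonal and first lower sub-diagonal) stay zero.
    E = [[0] * n for _ in range(n)]
    for span in range(1, n):            # successive upper diagonals
        for i in range(n - span):
            j = i + span
            best = E[i + 1][j - 1] + alpha(q[i], q[j])   # pair q_i with q_j
            for k in range(i, j):                        # or split at k
                best = min(best, E[i][k] + E[k + 1][j])
            E[i][j] = best
    return E[0][n - 1]  # E_{1,n}: minimum free energy of q

print(min_free_energy("GCGCTTGCGC"))  # self-complementary ends fold into a hairpin
```

Note that the split term with k = i covers leaving qi unpaired (since Ei,i = 0), so the two-case recursion above is equivalent to the familiar four-case Nussinov form.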

8. Design Methodology

Conventional microprocessor design automation involves optimization at various phases to minimize gate count, power, chip area and to maximize performance. Many tools are available to automate these processes. Evolutionary algorithms are applied towards these optimization processes. This is due to the fact that evolution is an optimization process by which the phenotype of a population gets optimized over successive generations. The fact that evolution has produced complex organisms such as humans stands testimony to its success as an optimization process. Natural evolution involves DNA based processes. But modeling this natural evolution at the DNA level with realistic encoding and recombination processes has not been attempted to automate microprocessor design [29]. In fact, microprocessors, which are less complex than humans, can be evolved naturally without human intervention if their characteristics (phenotype) can be mapped onto the DNA domain through a biologically realistic encoding process. Then automating microprocessor design process would reduce to combining different microprocessor DNAs to produce an offspring and simulating their working environment thereby incorporating random variation and selection which form the two major steps in natural evolution.

DNA computing focuses on the use of massive parallelism, or the allocation of tiny portions of a computing task to many different processing elements. The structure of DNA allows the elements of the problem to be represented in a form that is analogous to the binary code structure. Trillions of unique strands of DNA are able to represent all of the possible solutions to the problem. Some scientists predict a future where our bodies are patrolled by tiny DNA computers that monitor our well-being and release the right drugs to repair damaged or unhealthy tissue. “The speed of any computer, biological or not, is determined by two factors: (i) how many parallel processes it has; (ii) how many steps each one can perform per unit time. The exciting point about biology is that the first of these factors can be very large: recall that a small amount of water contains about 10²² molecules. Thus, biological computations could potentially have vastly more parallelism than conventional ones.” DNA computers can perform 2×10¹⁹ (irreversible) operations per joule. Existing supercomputers aren't very energy-efficient, executing a maximum of 10⁹ operations per joule. Energy efficiency is likely to become ever more valuable, so this property of DNA computers may prove very important. DNA uses adenosine triphosphate (ATP) as fuel to allow ligation or as a means to heat the strand to cause dissociation.

Figure 8 shows the processes of the computation phase of a DNA computer. Parameter Trees are built by using DNA-based source input. The DNA sequences of the processors are then obtained using a specific, nature-inspired Encoding scheme. Hybridization between various sequences is driven by a certain Methodology. Certain components of a processor, like the Carry Save Adder, the Baugh-Wooley Multiplier, etc., are pre-encoded and stored in the DNA Pool. These components can then be added to the offspring processor based on the user specifications. Mutation involves the extraction of the overall netlist of the processor. Graphical Decoding is done to fix the layout-level map of the processor and incorporates Embryogenesis. An optimizer attempts to improve the time and space requirements of a program. There are many ways in which code can be optimized, but most are expensive in terms of time and space to implement.

8.1. Tree Formation

The first phase of the design methodology involves mapping the characteristics of a processor onto a Directed Acyclic Graph (parameter tree) where each node represents a characteristic and the edge weight between two nodes quantifies the dependency between the nodes. To build the parameter tree, various characteristics that define a processor must be obtained. Then the various relationships among these characteristics must be studied. This can be done by collecting a large number of processor specifications and observing the trend among various parameters. For example, if most of the processors show that increasing the cache levels increases the power consumption tremendously, a high edge weight can be associated between these two parameters. A simple data mining algorithm can automate this process. The Node Function, as shown in Figure 9, of a particular node is a mathematical relation connecting the Node Data of the node to the Node Data of its parents and children. Figure 8 shows a simple functional dependence. The child node Maximum Number of Instructions per Clock-cycle (I) depends on three of its parent nodes: the Number of Cores (C), the Number of Data Paths per Core (D) and the Number of Processors (P). “I” is just the direct product of these values. The node function is very important because it defines the actual mathematical relations between nodes. Thus node values can be fixed up and validated during hybridization. For example, if the user specifies an “I” of 18, then we can fix the Number of Cores, Number of Processors and Number of Data Paths per Core as any combination of 3, 3, 2 (3×3×2 = 18). This could be one of the methodologies used during hybridization. The edges in the figure have weights Wc, Wd and Wp. These weights signify the level of dependence between two nodes: the higher the weight, the higher the dependence. For example, Cache Power is most affected by Voltage. These weights play a vital role while traversing the tree.
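Fixing node values against a user specification, as in the I = C×D×P example above, can be sketched as a search over factorizations. The exhaustive search and the bound max_val are illustrative assumptions, not the paper's method:

```python
# Hypothetical node function from the text: I = C * D * P. Given a user
# specification for I, enumerate integer settings that satisfy it.
from itertools import product as cartesian

def settings_for(I, max_val=18):
    return [(c, d, p)
            for c, d, p in cartesian(range(1, max_val + 1), repeat=3)
            if c * d * p == I]

print((3, 3, 2) in settings_for(18))  # the combination cited in the text
```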

For optimal performance, we use supervised neural-network learning algorithms. By applying this process to non-connected nodes, inferences can be drawn about the way they interact indirectly. For example, instruction size and the type of full adder used in the ALU don't have any obvious relationship. But if hidden relations do exist between them, then they could be uncovered using this method. The node function can be stored in standard formats like post-fix notation with variables as pointers to other nodes. The node function is mostly architecture independent. But, sometimes, certain architectures may change the function to an extent. For example, the inclusion of Hyper-Threading may change the equation governing Level 1 Cache Size. In natural DNA sequences, related genes are placed close to each other. So a Depth First Traversal (DFT) is performed on the parameter tree to build a linear sequence of nodes. Since DFT visits the node that is most strongly related to the node being processed, it provides a list where related nodes are placed together. By specifying directions on the edges, the DFT is restricted to exploring only the children of the node being processed. When a node has multiple parents and has a stronger connection to one of its unexplored parents than to any of its children, it would obviously be better to place that parent node next to the node being processed.

8.2. Encoding Strategy

The process of creating the actual DNA strands is carried out in the encoding phase. A Variable Length Delimiter Based Encoding (VLDBE) system is used for encoding. A delimiter is a special sequence which serves as a flag that represents the beginning of a node, a field, a number, a character, a symbol or any other data entity. The delimiter is set as ATCG+ where + is a wild card that can be substituted by any of the symbols. Each substitution signifies a different delimiter. In particular, AA signifies No Delimiter. This is essential for identifying a data sequence which happens to contain the substring ‘ATCG’. For example, if the data contains the string GAGCTATCGCGA, then this sequence would be transformed and encoded as GAGCTATCGAACGA. The ‘AA’ string signifies that the ATCG sequence was a data element rather than part of a delimiter sequence. The ‘AA’ string is ignored when the sequence is decoded. Each symbol in a DNA sequence is known as a Dit, an acronym for DNA digit. Table 2 lists DNA sequences and their meanings during the encoding process.
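The 'AA' escaping rule can be sketched directly; this follows the text's example and is a simplification (it ignores the wild-card delimiter variants):

```python
# Sketch of the VLDBE 'no delimiter' escape: a literal ATCG inside data is
# tagged with AA so it is not mistaken for a delimiter.
DELIM = "ATCG"

def escape(data):
    return data.replace(DELIM, DELIM + "AA")

def unescape(encoded):
    return encoded.replace(DELIM + "AA", DELIM)

print(escape("GAGCTATCGCGA"))  # -> GAGCTATCGAACGA, as in the text
```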

The encoding process transforms information into the base-4 domain. The base-4 system allows us to define nature-inspired hybridization rules. Moreover, if characteristics of a species can be related to each parameter of the processor, then the DNA strands defining those traits can be used. By using this trait-based encoding and natural DNA recombination rules, evolution can be completely realistic. The encoding scheme for numbers involves a straightforward conversion to the base-4 system.
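A minimal sketch of the base-4 number conversion; the digit-to-nucleotide mapping 0→A, 1→C, 2→G, 3→T is an assumption here, since Table 2 is not reproduced in this text:

```python
# Base-4 encoding of an integer as a DNA string, under the assumed
# digit mapping 0->A, 1->C, 2->G, 3->T.
DIGITS = "ACGT"

def encode_number(n):
    if n == 0:
        return DIGITS[0]
    out = []
    while n:
        out.append(DIGITS[n % 4])
        n //= 4
    return "".join(reversed(out))

print(encode_number(18))  # 18 = 102 in base 4 -> CAG
```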


8.2.1. Coding Technique in DNA Computing

A DNA code is simply a set of (oriented) sequences over the alphabet Q = {A, C, G, T}. One design principle that arises out of a study of the NJ algorithm is that DNA codes should contain a cyclic structure. The key idea behind this principle is based on the observation that once the free-energy table, and consequently, the minimum free energy of a DNA sequence q have been computed, the corresponding computation for any cyclic shift of q becomes easy.

While testing DNA sequences for secondary structure is one aspect of the code design process, it is equally important to know how to design codewords that have a low tendency to form secondary structures. The obvious approach here would be to identify properties of the sequence of bases in an oligonucleotide that would encourage secondary structure formation, so that we could then try to construct codewords which do not have those properties. For example, it seems intuitively clear that if a sequence q has long, non-overlapping segments s1 and s2 such that s1 = s2^RC, then there is a good chance that q will fold to enable s1 to bind with s2, thus forming a stable structure. Actually, we can slightly strengthen the above condition for folding by requiring that s1 and s2 be spaced sufficiently far apart, since a DNA oligonucleotide usually does not make sharp turns, i.e., does not bend over small regions. In any case, the logic is that a sequence that avoids such a scenario should not fold.

For any pair of length-n words p = p1p2...pn and q = q1q2...qn over the alphabet Q, the Hamming distance dH(p, q) is defined as usual to be the number of positions i at which pi ≠ qi. We further define the reverse Hamming distance between the words p and q to be dRH(p, q) = dH(p, q^R). Similarly, their reverse-complement Hamming distance is defined to be dRCH(p, q) = dH(p, q^RC). For a DNA code C, we define its minimum (Hamming) distance, minimum reverse (Hamming) distance, and minimum reverse-complement (Hamming) distance in the obvious manner: for example, dH(C) = min dH(p, q), the minimum being taken over all distinct pairs p, q ∈ C, with dRH(C) and dRCH(C) defined analogously.
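The three distance measures translate directly into code; a small sketch:

```python
# Hamming, reverse Hamming, and reverse-complement Hamming distances
# for equal-length DNA words.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def d_h(p, q):
    """Hamming distance: number of positions where p and q differ."""
    return sum(a != b for a, b in zip(p, q))

def d_rh(p, q):
    """Reverse Hamming distance: d_H(p, q^R)."""
    return d_h(p, q[::-1])

def d_rch(p, q):
    """Reverse-complement Hamming distance: d_H(p, q^RC)."""
    return d_h(p, q[::-1].translate(COMPLEMENT))

print(d_h("ACGT", "ACGA"), d_rh("ACGT", "TGCA"), d_rch("ACGT", "ACGT"))  # -> 1 0 0
```

Note that ACGT is its own reverse complement, which is why the last distance is zero.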

This cyclic-shift property shows that the complexity of testing a DNA code with M length-n codewords for secondary structure is reduced from O(Mn³) to O(Mn²) if the code is cyclic. It is also worth pointing out that a cyclic code structure can also simplify the actual production of the DNA sequences that form the code.

Another important code design consideration linked to the process of oligonucleotide hybridization pertains to the GC-content of sequences in a DNA code. The GC-content, wGC(q), of a DNA sequence q = q1q2...qn is defined to be the number of indices i such that qi ∈ {G, C}. A DNA code in which all codewords have the same GC-content, w, is called a constant GC-content code. The constant GC-content requirement assures similar thermodynamic characteristics for all codewords, and is introduced in order to ensure that all hybridization operations take place in parallel, i.e., roughly at the same time. The GC-content is usually required to be in the range of 30–50% of the length of the code.

DNA code Construction Methodology:


8.2.1.1. Binary Mapping

The problem of constructing DNA codes with some of the properties desirable for DNA computing can be made into a binary code design problem by mapping the DNA alphabet onto the set of length-two binary words as shown in Table 3. The mapping is chosen so that the first bit of the binary image of a base uniquely determines the complementary pair to which it belongs.

Let q be a DNA sequence. The sequence b(q), obtained by applying coordinatewise to q the mapping given in Table 3, will be called the binary image of q. If b(q) = b0b1b2...b2n−1, then the subsequence e(q) = b0b2...b2n−2 will be referred to as the even subsequence of b(q), and o(q) = b1b3...b2n−1 will be called the odd subsequence of b(q). Given a DNA code C, we define its even component E(C) = {e(p) : p ∈ C}, and its odd component O(C) = {o(p) : p ∈ C}. It is clear from the choice of the binary mapping that the GC-content of a DNA sequence q is equal to the Hamming weight of the binary sequence e(q). Consequently, a DNA code C is a constant GC-content code if and only if its even component, E(C), is a constant-weight code.
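A small sketch can verify the GC-content/Hamming-weight correspondence. The concrete bit assignment used here (A→00, T→01, C→10, G→11) is an assumption standing in for Table 3, chosen so that the first bit of each pair flags membership in {G, C}:

```python
# Binary image of a DNA sequence under an assumed Table-3-style mapping:
# the first (even-position) bit is 1 exactly for G and C.
B = {"A": "00", "T": "01", "C": "10", "G": "11"}

def binary_image(q):
    return "".join(B[x] for x in q)

def even_subseq(bits):
    return bits[0::2]   # b0 b2 b4 ... : the even subsequence e(q)

def gc_content(q):
    return sum(x in "GC" for x in q)

q = "GATTCC"
e = even_subseq(binary_image(q))
print(gc_content(q) == e.count("1"))  # GC-content equals Hamming weight of e(q)
```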

Let B be a binary code consisting of M codewords of length n and minimum distance dmin, such that c ∈ B implies that the complement of c is also in B. For w > 0, consider the constant-weight subcode Bw = {u ∈ B : wH(u) = w}, where wH(·) denotes Hamming weight. Choose w > 0 such that n ≥ 2w + ⌊dmin/2⌋, and consider a DNA code, Cw, with the following choice for its even and odd components:

where <lex denotes lexicographic ordering. The condition a <lex b in the definition of O ensures that if ab^RC ∈ O, then ba^RC ∉ O, so that distinct codewords in O cannot be reverse-complements of each other.

Therefore, the DNA code C = ∪(w = dmin to wmax) Cw, with wmax = (n − ⌊dmin/2⌋)/2, has (1/2)M(M − 1)∑(w = dmin to wmax)|Aw|² codewords of length 2n, and satisfies dH(C) ≥ dmin and dRCH(C) ≥ dmin.


8.2.1.2. DNA Codes from Generalized Hadamard Matrices

A generalized Hadamard matrix H(n, m) is an n×n square matrix with entries taken from the set of mth roots of unity, ωm = {e^(−2πil/m) : l = 0, ..., m − 1}, that satisfies HH* = nI. Here, I denotes the identity matrix of order n, while * stands for complex conjugation combined with transposition. We will only concern ourselves with the case m = p for some prime p. A necessary condition for the existence of generalized Hadamard matrices H(n, p) is that p|n. The exponent matrix, E(n, Ƶp), of H(n, p) is the n×n matrix with entries in Ƶp = {0, 1, 2, ..., p−1}, obtained by replacing each entry e^(−2πil/p) in H(n, p) by the exponent l.

For any k ∈ Ƶ+, there exist DNA codes D with 3^k − 1 codewords of length 3^k − 1, with constant GC-content equal to 3^(k−1), dH(D) = 2·3^(k−1), and dRCH(D) ≥ 3^(k−1), in which each codeword is a cyclic shift of a fixed generator codeword g.

DNA codes with constant GC-content can obviously be constructed from constant-composition codes over Ƶp by mapping the symbols of Ƶp to the symbols of the DNA alphabet, Q = {A, C, G, T}. DNA codes can be obtained from such generators by mapping {0, 1, 2, 3} onto {A, G, C, T}. Although all such mappings yield codes with (essentially) the same parameters, the actual choice of mapping has a strong influence on the secondary structure of the codewords.

8.3. Hybridization

In a DNA-based computer, hybridization is an important phase. The DNA sequences are set to replicate and create trillions of new sequences based on the initial input sequences in a matter of seconds; this is called DNA hybridization. Hybridization depends on nature and function, as shown in Figure 10. Hybridization between various sequences is driven by certain heuristics. When the DNA pool updates, the new value is used for hybridization. The microprocessor DNA sequences are combined to form an offspring. The recombination algorithm relies on the principle of Localization of Reaction, according to which the probability of a node participating in recombination is determined by its proximity to a reacting node. The number of computational errors in a DNA system designed for solving an instance of a 3-SAT problem [31] was reduced by generating DNA sequences that avoid folding and undesired hybridization phenomena.


In computer science and mathematics, problems can be broadly categorized as hard or easy. This doesn't refer to the complexity of programming an algorithm to solve a problem, but rather to how the size of the problem influences the time it takes to find a solution. The existence of hard problems forms the very bedrock of computer security, so any paper that claims to be on the path to an efficient way of finding solutions to hard problems deserves some scrutiny.

Which is why a paper entitled "DNA-Based Computing of Strategic Assignment Problems" drew my skeptical eye. The paper implies (but does not actually state) that the strategic assignment problem falls into the category of hard, and claims that computing using DNA may efficiently solve such problems. Is that really so?

(Update: Since this article was published, I have learned that the problem that the researchers were trying to solve is most likely the linear assignment problem, which, in the terms used below, falls into the easy category. However, the text of the research paper tries to imply that the problem is more akin to the general assignment problem, which is hard. So the answer to the above question is "no.")

All math is hard, right?

Is the difference between hard and easy problems really so important? It is, and the difference allows us to engage in secure communications over the Internet (think online banking). Easy problems scale in a polynomial way, so if you have an eight-bit problem, then the time to solve it might scale as 8³. If you increase the problem to a 16-bit problem, then the time taken to solve it increases to 16³. Hard problems scale exponentially, so our eight-bit problem might be 3⁸, and increasing to 16 bits increases the time taken to solve the problem to 3¹⁶.
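The arithmetic behind these two growth rates is easy to check:

```python
# Polynomial (n**3) vs exponential (3**n) growth, using the article's numbers:
# doubling the problem size from 8 to 16 bits.
for bits in (8, 16):
    print(bits, bits ** 3, 3 ** bits)
# 8 bits:  512 vs 6,561
# 16 bits: 4,096 vs 43,046,721 -- the exponential term explodes
```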

To illustrate the difference between hard and easy problems, let's take a specific example. Imagine that finding the prime factors of an eight-bit number takes 512 seconds. If this is an easy problem, then we can estimate that going to 256 bits will extend the computation time to 17 million seconds on the same computer. That seems like a large increase, but, in fact, you're looking at around six months of computation time, ignoring any increase in computation speed. If it is a hard problem, however, the 256-bit problem will take 4.6×10⁸⁷ seconds.

So you can see how such scaling is problematic. The difficulty of tackling a hard problem is what makes public-private key systems, such as the ones used in Internet banking, so secure at the moment. Now, if someone were to claim that they could solve a hard problem efficiently—that is, take a problem that scales exponentially and show that their solution scales polynomially—then that would be very surprising, and quite worrying. 

The paper from Shu, Wang, and Yong, based at Nanyang Technological University in Singapore, doesn't make that explicit claim. But you would have to be unable to read between the lines to not get that idea from the paper.

The problem that Shu and colleagues have focused on is the strategic assignment problem, for which there is little information on the Internet—I suspect that it may have an alternative name that is more common, but I was unable to find it. They give a good (and historic) example in the paper. Imagine that you have three horses that you are going to race pair-wise against a competitor. Before the races begin, you observe that your competitor's fastest horse is faster than your fastest horse, and his second-fastest horse is faster than your second-fastest horse, and his slowest horse is faster than your slowest horse.

The problem, then, is how can you arrange them to win the majority of races? The answer is to race your fastest horse against your competitor's second-fastest horse, your second-fastest horse against their slowest, and your slowest horse against their fastest horse. Barring injury, you should win two out of three races. The problem, very simply put, boils down to matching between items in pairs of lists.

A DNA computer program?

Getting a solution to the problem via DNA turns out to be relatively easy. First, the researchers assigned a set of single-stranded DNA sequences to each item in each list. To control processing, they ensured that each assignment contains a sequence that is recognized by one of five different DNA-cutting restriction enzymes. They also assigned some additional sequences to denote stopping points. The researchers then generated all these sequences separately, and then mixed them together to allow them to become double stranded DNA. The DNA strands were then stitched together to create all possible sequences.

Most of these sequences are garbage, and the real work comes from finding the right bit of DNA. The first step is to only choose those sequences that start with the right list—the solution matches items from list A to items in list B, so if a sequence begins with list B, it should be thrown out. To do this, the researchers use a polymerase chain reaction (PCR) to amplify the DNA strands that started with one of the items in list A and ended at one of the stop sequences.

The next step is to choose only those strands that may have each sequence only once—the correct solution will have each list item only once. This is done by running the DNA through a gel that sorts the DNA strands by length, allowing the researchers to pick out the DNA band that corresponds to the right length. Unfortunately, the selected DNA may have repeated list items, and these must be eliminated.

This is what the restriction enzymes are for. The researchers add one of the restriction enzymes to the collection of DNA that was taken out of the gel. The enzyme breaks the DNA up wherever it finds a specific sequence. Any of the DNA strands that contain a specific list item more than once ends up being cut more than once, and thus is much shorter than the rest. 
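A toy model of this screen makes the length argument concrete. The EcoRI-like site GAATTC is just an illustrative choice, and the cut model is simplified (the recognition bases are dropped rather than cut within, and sticky ends are ignored):

```python
# Toy restriction digest: a strand carrying a recognition site twice is cut
# into three fragments, all shorter than those from a singly-cut strand.
def digest(strand, site):
    return strand.split(site)  # fragments between cut sites

SITE = "GAATTC"  # illustrative EcoRI-like recognition sequence
once = "AAAA" + SITE + "AAAA"          # site appears once
twice = "AA" + SITE + "AA" + SITE + "AA"  # repeated site: extra cut

print([len(f) for f in digest(once, SITE)])   # -> [4, 4]
print([len(f) for f in digest(twice, SITE)])  # -> [2, 2, 2]
```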

The solution containing these DNA fragments is then divided up and each solution has one of the remaining restriction enzymes added. These different restriction enzymes cut different stretches of DNA; ultimately the strands that had the right solutions in a specific order end up being cut down to a particular length. Now, in this particular case, the order of the matches doesn't matter, so it is enough to run another gel and separate out the DNA corresponding to 100 base pairs (in this example).

Sequence those 100 bases, and you have your solution. I should also note that figuring out the right length to look for is not difficult, since it is simply one less than the number of items in the list. I think that's pretty cool. It appears to be a smart, sophisticated, and elegant version of DNA computing. By itself, it warrants attention, but it probably isn't as general a result as the authors would like to think.

(Update: it has been pointed out that this is a particularly bad example of DNA computing. At the time of writing, I was more interested in the fact that the methodology produced an answer that didn't rely on statistical distributions of DNA lengths; I felt that by itself was worth reporting. Unfortunately, this probably wasn't that new anyway.)

And that is fast and scalable?

The authors claim this is fast and scalable. I am skeptical. As it stands, we have two gels and one PCR step to go through, as well as two digestion steps, to say nothing of generating the sequences and then sequencing the result. The nice thing is that most of these steps require a fixed amount of time, independent of the size of the problem. So in that respect, yes, at some point, DNA computing may well beat traditional computation.

But I worry about scalability in terms of the actual computing process. This was possibly the simplest version of their example of DNA computing. The choice of DNA restriction enzymes and the fact that the order of the matches doesn't matter were both critical to making this example work. I suspect that figuring out a sequence of DNA restriction enzymes to apply is itself a hard problem. If that is true, then this work simply transforms one hard problem into another hard problem with little gain. To make matters worse, if the order of the matched pairs was important, then I suspect that it's impossible to find a set of restriction enzymes that provide a solution.

So, the answer to the title of this post is: no, not now, and probably not ever. To be a little less flippant: the researchers now need to show that this method can, even in principle, be used for a wide range of problems—can it be turned into a universal computer? They also need to explain why the selection of restriction enzymes is an easy problem to solve. If they can show that, then the world of commerce will tremble at their feet.

Physical Review Letters, 2011, DOI: 10.1103/PhysRevLett.106.188702
