Biochemistry, polymers, and technology
Organic chemistry, of course, looks not only in the direction of physics and physical chemistry but also, and even more essentially, in the direction of biology. Biochemistry began with studies of substances derived from plants and animals. By about 1800 many such substances were known, and chemistry had begun to assist physiology in understanding biological function. The nature of the principal chemical categories of foods—proteins, lipids, and carbohydrates—began to be studied in the first half of the century. By the end of the century, the role of enzymes as organic catalysts was clarified, and amino acids were perceived as constituents of proteins. The brilliant German chemist Emil Fischer determined the nature and structure of many carbohydrates and proteins. The announcement of the discovery (1912) of vitamins, independently by the Polish-born American biochemist Casimir Funk and the British biochemist Frederick Hopkins, precipitated a revolution in both biochemistry and human nutrition. Gradually, the details of intermediary metabolism—the way the body uses nutrient substances for energy, growth, and tissue repair—were unraveled. Perhaps the most representative example of this kind of work was the German-born British biochemist Hans Krebs’s establishment of the tricarboxylic acid cycle, or Krebs cycle, in the 1930s.
But the most dramatic discovery in the history of 20th-century biochemistry was surely the structure of DNA (deoxyribonucleic acid), revealed by American geneticist James Watson and British biophysicist Francis Crick in 1953—the famous double helix. The new understanding of the molecule that incorporates the genetic code provided an essential link between chemistry and biology, a bridge over which much traffic continues to flow. The individual “letters” of the code—the four bases adenine, guanine, cytosine, and thymine—had been discovered about a century earlier, but only at the close of the 20th century could the sequence of these letters in the genes that make up DNA be determined en masse. In June 2000, representatives from the publicly funded U.S. Human Genome Project and from Celera Genomics, a private company in Rockville, Md., simultaneously announced the independent and nearly complete sequencing of the more than three billion nucleotides in the human genome. However, both groups emphasized that this monumental accomplishment was, in a broader perspective, only the end of a race to the starting line.
DNA is, of course, a macromolecule, and an understanding of this centrally important category of chemical compounds was a precondition for the events just described. Starch, cellulose, proteins, and rubber are other examples of natural macromolecules, or very large polymers. The word polymer (meaning “multiple parts”) was coined by Berzelius about 1830, but in the 19th century it was applied only to special cases such as ethylene (C2H4) versus butylene (C4H8). Only in the 1920s did the German chemist Hermann Staudinger definitively assert that complex carbohydrates and rubber had huge molecules. He coined the word macromolecule, viewing polymers as consisting of similar units joined head to tail by the hundreds and connected by ordinary chemical bonds.
Empirical work on polymers had long predated Staudinger’s contributions, though. Nitrocellulose was used in the production of smokeless gunpowder, and mixtures of nitrocellulose with other organic compounds led to the first commercial polymers: collodion, xylonite, and celluloid. The last of these was the earliest plastic. The first totally synthetic plastic was patented by Leo Baekeland in 1909 and named Bakelite. Many new plastics were introduced in the 1920s, ’30s, and ’40s, including polymerized versions of acrylic acid (a variety of carboxylic acid), vinyl chloride, styrene, ethylene, and many others. Wallace Carothers’s nylon excited extraordinary attention during the World War II years. Great effort was also devoted to developing artificial substitutes for rubber—a natural resource in especially short supply during wartime. Already by World War I, German chemists had developed substitute materials, though many were less than satisfactory. The first highly successful rubber substitutes were produced in the early 1930s and were of great importance in World War II.
During the interwar period, the leading role for chemistry shifted away from Germany. This was largely the result of the 1914–18 war, which alerted the Allied countries to the extent to which they had become dependent on the German chemical industries. Dyes, drugs, fertilizers, explosives, photochemicals, food chemicals (such as chemicals for food additives, food colouring, and food preservation), heavy chemicals, and strategic materiel of many kinds had been supplied internationally before the war largely by German chemical companies, and, when supplies of these vital materials were cut off in 1914, the Allies had to scramble to replace them. One particularly striking example of chemistry’s new military significance was the introduction of chlorine gas and other poisons, starting in 1915, as chemical warfare agents. In any case, after the war ended, chemistry was enthusiastically promoted in Britain, France, and the United States, and the interwar years saw the United States rise to the status of a world power in science, including chemistry.
All this makes clear why World War I is sometimes referred to as “the chemists’ war,” in the same way that World War II can be called “the physicists’ war” because of radar and nuclear weapons. But chemistry was an essential partner to physics in the development of nuclear science and technology. Indeed, the synthesis of transuranium elements (atomic numbers greater than 92) was a direct consequence of the research leading to (and during) the Manhattan Project in World War II. This is all part of the legacy of the dean of nuclear chemists, American Glenn Seaborg, discoverer or codiscoverer of 10 of the transuranium elements. In 1997, element 106 was named seaborgium in his honour.
The instrumental revolution
As far as the daily practice of chemical research is concerned, probably the most dramatic change during the 20th century was the revolution in methods of analysis. In 1930 chemists still used “wet-chemical,” or test-tube, methods that had changed little in the previous hundred years: reagent tests, titrations, determination of boiling and melting points, elemental combustion analysis, synthetic and analytic structural arguments, and so on. Starting with commercial laboratories to which routine analyses could be outsourced and with pH meters that displaced chemical indicators, chemists increasingly began to rely on physical instrumentation and specialist operators rather than on personally administered wet-chemical methods. Physical instrumentation provides the sharp “eyes” that can see to the atomic-molecular level.
In the 1910s J.J. Thomson and his assistant Francis Aston had developed the mass spectrograph to measure atomic and molecular weights with high accuracy. It was gradually improved, so that by the 1940s the mass spectrograph had been transformed into the mass spectrometer—no longer a machine for atomic weight research but rather an analytical instrument for the routine identification of complex unknown compounds (see mass spectrometry). Similarly, colorimetry had a long history, dating back well into the previous century. In the 1940s colorimetric principles were applied to sophisticated instrumentation to create a range of practical spectrophotometers for visible, infrared, ultraviolet, and Raman spectroscopy. The later addition of laser and computer technology to analytical spectrometers provided further sophistication and also offered important tools for studies of the kinetics and mechanisms of reactions.
Chromatography, used for generations to separate mixtures and identify the presence of a target substance, was ever more impressively automated, and gas chromatography (GC) in particular experienced vigorous development. Nuclear magnetic resonance (NMR), which uses the absorption of radio waves by atomic nuclei in a strong magnetic field to reveal the chemical environments of hydrogen atoms in a compound, was also developed after World War II. Early NMR machines were available in the 1950s; by the 1960s they were workhorses of organic chemical analysis. Also by this time, combined gas chromatograph–mass spectrometer (GC-MS) instruments were introduced, giving chemists an unexcelled ability to separate and analyze minute amounts of sample. In the 1980s NMR became well known to the general public, when the technique was applied to medicine—though the name of the application was altered to magnetic resonance imaging (MRI) to avoid the loaded word nuclear.
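The physical principle at work can be stated compactly. In the standard textbook formulation (a general relation, not tied to any particular historical instrument), a nucleus with magnetogyric ratio γ placed in a magnetic field of strength B0 absorbs radio waves at the resonance frequency

\[ \nu = \frac{\gamma B_{0}}{2\pi}. \]

For hydrogen nuclei γ/2π is about 42.6 MHz per tesla, so the 1.4-tesla magnets of the classic 1960s instruments gave a proton resonance near 60 MHz; the minute displacements of this frequency produced by different chemical environments, the “chemical shifts,” are what the chemist actually reads from the spectrum.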
Many other instrumental methods have seen vigorous development, such as electron paramagnetic resonance and X-ray diffraction. In sum, between 1930 and 1970 the analytical revolution in chemistry utterly transformed the practice of the science and enormously accelerated its progress. Nor did the pace of innovation in analytical chemistry diminish during the final third of the century.
Organic chemistry in the 20th century
No specialty was more affected by these changes than organic chemistry. The case of the American chemist Robert B. Woodward may be taken as illustrative. Woodward was the finest master of classical organic chemistry, but he was also a leader in aggressively exploiting new instrumentation, especially infrared, ultraviolet, and NMR spectrometry. His stock in trade was “total synthesis,” the creation of a (usually natural) organic substance in the laboratory, beginning with the simplest possible starting materials. Among the compounds that he and his collaborators synthesized were alkaloids such as quinine and strychnine, antibiotics such as tetracycline, and the extremely complex molecule chlorophyll. Woodward’s highest accomplishment in this field actually came six years after his receipt of the Nobel Prize for Chemistry in 1965: the synthesis of vitamin B12, a notable landmark in complexity. Progress continued apace after Woodward’s death. By 1994 a group at Harvard University had succeeded in synthesizing an extraordinarily challenging natural product, called palytoxin, that had more than 60 stereocentres.
These total syntheses have had both practical and scientific spin-offs. Before the “instrumental revolution,” syntheses were often or even usually done to prove molecular structures. Today they are a central element of the search for new drugs. They can also illuminate theory. Together with a young Polish-born American chemical theoretician named Roald Hoffmann, Woodward followed up hints from the B12 synthesis, work that resulted in the formulation of the orbital symmetry rules (now known as the Woodward–Hoffmann rules). These rules seemed to apply to all thermal or photochemical organic reactions that occur in a single step. The simplicity and accuracy of the predictions generated by the new rules, including highly specific stereochemical details of the product of the reaction, provided an invaluable tool for synthetic organic chemists.
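The flavour of these predictions can be conveyed by the standard textbook statement of the selection rule for one class of such reactions, electrocyclic ring closures (a schematic summary, not Woodward and Hoffmann’s original formulation):

\[ \text{thermal:}\quad 4n\ \pi\ \text{electrons} \Rightarrow \text{conrotatory closure}, \qquad 4n+2\ \pi\ \text{electrons} \Rightarrow \text{disrotatory closure}, \]

with the two rotational senses interchanged under photochemical conditions. Thus the thermal ring closure of a substituted butadiene (four π electrons) must proceed in conrotatory fashion, fixing the relative orientation of the substituents in the cyclobutene product: exactly the kind of stereochemical specificity described above.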
Stereochemistry, born toward the end of the 19th century, received steadily increasing attention throughout the 20th century. The three-dimensional details of molecular structure proved to be not only critical to chemical (and biochemical) function but also extraordinarily difficult to analyze and synthesize. Several Nobel Prizes in the second half of the century—those awarded to Derek Barton of Britain, John Cornforth of Australia, Vladimir Prelog of Switzerland, and others—were given partially or entirely to honour stereochemical advances. Also important in this regard was the American Elias J. Corey, awarded the Nobel Prize for Chemistry in 1990, who developed what he called retrosynthetic analysis, assisted increasingly by special interactive computer software. This approach transformed synthetic organic chemistry. Another important innovation was combinatorial chemistry, in which scores of compounds are simultaneously prepared—all permutations on a basic type—and then screened for physiological activity.
Chemistry in the 21st century
Two more innovations of the late 20th century deserve at least brief mention, especially as they are special focuses of the chemical industry in the 21st century. The phenomenon of superconductivity (the ability to conduct electricity with no resistance) was discovered in 1911 at temperatures very close to absolute zero (0 K, −273.15 °C, or −459.67 °F). In 1986 two physicists at the IBM research laboratory near Zürich, the German-born Georg Bednorz and the Swiss Karl Alexander Müller, discovered that lanthanum copper oxide doped with barium became superconducting at the “high” temperature of 35 K (−238 °C, or −397 °F). Since then, new superconducting materials have been discovered that operate well above the temperature of liquid nitrogen—77 K (−196 °C, or −321 °F). In addition to its purely scientific interest, much research focuses on practical applications of superconductivity.
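For readers who wish to check the quoted figures, the standard conversions between the kelvin, Celsius, and Fahrenheit scales are

\[ T_{^\circ\mathrm{C}} = T_{\mathrm{K}} - 273.15, \qquad T_{^\circ\mathrm{F}} = \tfrac{9}{5}\,T_{^\circ\mathrm{C}} + 32, \]

so that, for example, the 35 K transition corresponds to 35 − 273.15 = −238.15 °C, or about −238 °C, and (9/5)(−238.15) + 32 ≈ −397 °F.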
In 1985 Richard Smalley and Robert Curl at Rice University in Houston, Tex., collaborating with Harold Kroto of the University of Sussex in Brighton, Eng., discovered a fundamental new form of carbon, possessing molecules consisting solely of 60 carbon atoms. They named it buckminsterfullerene (later nicknamed “buckyball”), after Buckminster Fuller, the inventor of the geodesic dome. Research on fullerenes has accelerated since 1990, when a method was announced for producing buckyballs in large quantities and practical applications appeared likely. In 1991 Science magazine named buckminsterfullerene its “Molecule of the Year.”
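The geometry behind the name is easy to verify. A buckyball is a truncated icosahedron, the pattern of a standard soccer ball, with 12 pentagonal and 20 hexagonal faces, and the count satisfies Euler’s polyhedron formula:

\[ V - E + F = 60 - 90 + 32 = 2, \]

where the 60 vertices are the carbon atoms and the 90 edges are the carbon–carbon bonds (each atom bonding to exactly three neighbours).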
Two centuries ago, Lavoisier’s chemical revolution could still be questioned by the English émigré Joseph Priestley. A century ago, the physical reality of the atom was still doubted by some. Today, chemists can maneuver atoms one by one with a scanning tunneling microscope, and other techniques of what has become known as nanotechnology are in rapid development. The history of chemistry is an extraordinary story.
Alan J. Rocke