Texas Attorney General Ken Paxton has opened an investigation into whether Lululemon has misled the public with its claims about the safety and healthfulness of its products, specifically regarding the alleged presence of so-called forever chemicals and microplastics. According to USA Today, Paxton said, “I will not allow any corporation to sell harmful materials to consumers at a premium price under the guise of wellness and sustainability.” Lululemon sells upscale activewear under claims of health and wellbeing as well as sustainability. Naturally, Lululemon denies the allegations that its activewear contains harmful substances.
Lululemon maintains a website presenting its restricted substances list (RSL). Presumably this reminds vendors to avoid any of the many chemicals on the RSL. Just as importantly, it serves as a public document meant to deflect legal liability for the presence of any listed chemical that a vendor might try to pass off in its work.
How this plays out is unclear. Lululemon will have to fight the allegations vigorously, not only for itself but for all of the other apparel manufacturers who use per- and polyfluoroalkyl substances (PFAS), including PFOS, to provide water repellency.
What is a perfluorocarbon?
But first, what does ‘perfluorinated’ mean? The word is a verbal shortcut that makes these substances easier to talk about, and it breaks into two parts: ‘per’ and ‘fluorinated’. With chemical substances, ‘per’ indicates that the molecule in question has fluorine atoms attached at every position that would otherwise carry a hydrogen atom. Take butane, for example. Its chemical formula is C4H10: a simple hydrocarbon consisting of only two chemical elements, 4 carbon atoms and 10 hydrogen atoms. It is the fuel in your butane lighter.
Perfluorinated butane in this simple example would have the formula C4F10: every hydrogen in butane, C4H10, is replaced with fluorine. Fluorine is a gas much like chlorine but far more reactive. It attacks most substances except fluorinated materials like Teflon(TM). Carbon-based substances containing fluorine are also called fluorocarbons, a class that finds considerable use in refrigerants and fire-extinguishing foams.
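As a quick illustration of what the hydrogen-for-fluorine swap does to a molecule, here is a sketch (standard atomic masses, formulas from above) comparing the molar masses of butane and perfluorobutane:

```python
# Standard atomic masses (g/mol), rounded to three decimals:
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "F": 18.998}

def molar_mass(formula: dict) -> float:
    """Sum atomic masses weighted by the atom counts in the formula."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

butane = {"C": 4, "H": 10}            # C4H10
perfluorobutane = {"C": 4, "F": 10}   # C4F10, every H swapped for F

print(f"C4H10: {molar_mass(butane):.2f} g/mol")           # 58.12
print(f"C4F10: {molar_mass(perfluorobutane):.2f} g/mol")  # 238.02
```

Ten fluorines add roughly 180 g/mol, which is part of why perfluorocarbon fluids are so dense.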
Fluorine- The savage beast
Fluorine is commonly supplied as a 20% mixture of F2 in nitrogen, helium, or argon. Organic and inorganic substances are known to ignite vigorously on contact with fluorine gas. Interestingly, no lethal effects from inhalation of the dilute gas have been reported. This might be due in part to the extra care chemists use when handling it. Like chlorine and bromine, dilute fluorine can cause severe irritation of the nose and eyes.
Many drug molecules on the market contain one or several fluorine atoms connected to the structure. A fluorine atom is very similar in size to a hydrogen atom, so its contribution to drug activity is not based on steric bulk. Fluorine groups on a drug molecule can bring increased lipophilicity and increased resistance to metabolic degradation. The presence of fluorine atoms can help a drug molecule pass through the blood–brain barrier or raise the compatibility and binding strength with hydrophobic features on the target molecule.
It turns out that the features that suppress metabolic degradation in the body also suppress biodegradation of excreted fluorinated drugs in water treatment plants and in the environment. This leads to soil and water accumulation and potentially bioaccumulation in the food chain.
===============
National Research Council (US) Committee on Acute Exposure Guideline Levels. Acute Exposure Guideline Levels for Selected Airborne Chemicals: Volume 8. Washington (DC): National Academies Press (US); 2010. 5, Fluorine Acute Exposure Guideline Levels. Available from: https://www.ncbi.nlm.nih.gov/books/NBK220011/
I’m saddened to learn of the passing of Emeritus Professor of Chemistry Michael P. Doyle. Mike had recently retired from the University of Texas San Antonio. Those who knew Mike knew that he was a dynamo of research productivity, first at undergraduate institutions and later at a PhD-granting institution. In his career, Mike published 439 or more papers (Google) in prominent journals. One of his secrets to productive research at 4-year institutions was the use of multiple postdocs who guided his undergraduate researchers in their work. Together they performed synthetic organic and organometallic chemistry and produced a rich tapestry of rhodium-catalyzed carbene transformations.
Like research faculty everywhere, Mike’s success in grant writing was key to keeping up a productive lab and attracting excellent post-doctoral chemists. His publications and fascinating catalysis chemistry as well as his participation in the many national institutions of chemistry kept him near the leading edge of his field.
I was a postdoc in the Doyle lab from 1990 to 1992. My interest was in developing a career path similar to Mike’s, and his lab was the place to be. During my time with him at Trinity University, Mike’s interest was in asymmetric catalysis, and my part in that was asymmetric C–H insertion into secondary carbons with diazoacetates and a chiral dirhodium paddlewheel complex. We used chiral Pirkle-type GC columns to determine the enantiomeric excesses (% ee) of many of our products. The undergrads caught on to this right away and could run a reaction and collect GC and NMR results on their own.
During my time with Mike at Trinity, the university hosted a visit and a talk before the whole university community by Margaret ‘Maggie’ Thatcher, former Prime Minister of the United Kingdom. Of interest is the fact that Maggie was a chemist by training. That day she toured our labs to see what Mike’s research was all about.
He held positions as a professor at institutions including Hope College, the University of Arizona, Trinity University, and the University of Maryland. He was also a visiting professor at the University of Iowa (Google).
During his career Mike participated in many professional organizations and won numerous awards. He served on the executive committee of the ACS Division of Organic Chemistry for 23 years in many capacities: as a member, as a councilor, and as its chair. He served as chair of the Executive Board of Chemical & Engineering News (C&EN). He was a founder and the first president of the Council on Undergraduate Research and the first chairman of the National Conferences on Undergraduate Research. He was also president of the Research Corporation for Science Advancement.
Mike was immortalized with one of the highest recognitions of all, a named chemical reaction: the Doyle–Kirmse Reaction.
2013 – Fellow, National Academy of Inventors
2010 – Fellow of the American Association for the Advancement of Science (AAAS)
2009 – Fellow of the American Chemical Society
2003 – Member of the National Academy of Medicine (NAM)
1994 – Fellow of the American Association for the Advancement of Science (AAAS)
A list of citations in Google Scholar can be found here.
Probably the most prominent of the Doyle catalysts is Rh2(5S-MEPY)4, a dirhodium(II) core surrounded by four bridging chiral methyl 2-pyrrolidinone-4(S/R)-carboxylate (MEPY) ligands. In the lab it was just known as “MEPY”. It was a real workhorse, useful in several types of chemistry including cyclopropanation, cyclopropenation, Lewis acid catalysis, and C–H insertion.
The MEPY catalyst was prepared from commercial dirhodium tetraacetate and the ligand, with the exchange carried out in a Soxhlet extraction setup containing sodium carbonate to remove the acetic acid produced. The catalyst could be cleaned up by column chromatography.
The Doyle Rh2(5S-MEPY)4 catalyst showing the dirhodium core with one of the four chiral, enantiomerically pure methyl 2-pyrrolidinone-4-(S/R)-carboxylate ligands. The structure above was captured with acetonitrile ligands in the two axial positions.
Mike’s gift for academic research was built on his boundless energy and friendly nature. He loved working with undergraduate chemistry majors, and nearly all of them got their work published in prestigious journals. We postdocs took part in his research model and carried his methods into our future academic careers.
I had the opportunity to visit with Mike at his home in San Antonio two years ago on the occasion of his retirement from UTSA. He was in good spirits and spent time gabbing with each of us. Glad I went.
Editorial note: I fixed nomenclature of MEPY. Excuse me.
I’ve come around on this business of the atom being almost entirely empty space. It is an established bit of folklore in intro chemistry and physics, dating back to experiments by Hans Geiger and Ernest Marsden under Ernest Rutherford, in which most alpha particles sailed straight through thin gold foil while, infrequently, one would strike something hard and scatter. The striking thing about the results was just how infrequent the scattering was. The conclusion eventually drawn was that the atoms in the gold were mostly empty space.
But what if that space wasn’t quite empty? What if that space was a beehive of electrons moving at maybe half light-speed, mutually repelled by one another yet attracted to the nucleus? Each electron is a single point negative charge. The nucleus has a diameter 100,000 times smaller, with an equal but opposite charge. The strong positive nuclear charge field holds the electrons tightly, but only to the point where electron–electron repulsion is balanced in atoms with more than two electrons.
The electron is a point charge manifestation of the electromagnetic force, but with mass and angular momentum. It is a perturbation in the electric field. It doesn’t fly like a ball, it exists in the manner of a wave of chance. It has none of what humans think of as material substance, rather it is purely a quantum mechanical manifestation. It is shaped by 3-dimensional standing waves of probability density surrounding the nucleus. This probability density is defined by a spherical harmonic wave series. We chemists know this harmonic series as s, p, d and f “orbitals”. Electron probability density extends from the nucleus to the outer orbitals of the atom with s, p, d, and f orbitals occupying space defined by their unique wave equations.
Source: Wikipedia. The atomic orbital series for the hydrogen atom. The blue fringed shapes represent the space available in each atomic orbital. The orbitals have no reality as “objects” themselves. Instead, they define regions of space that an electron can inhabit. The hydrogen atom is used because there are no complications from electron–electron repulsion. The orbital structure of the hydrogen atom can be defined precisely as an equation. Atoms from helium on up cannot.
As a reminder, the shape of an orbital itself defines a region of space where an electron of a certain energy is most likely to be found. It is not necessary to be able to calculate the position of the electron moment to moment to understand its properties. Heisenberg’s Uncertainty Principle does not allow for high precision determination of both position and momentum simultaneously, so this is where the universe tells us that ‘ya can’t have everything’. However, energy levels and transitions between them can be measured precisely. Exact position of an electron is not necessary. Besides, the 3-body problem shows up very early in the periodic table and spoils the fun anyway.
The edges of orbitals are not sharp; they feather off into space and are pragmatically defined as the surface enclosing an overall 95% probability density.
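For hydrogen’s 1s orbital, that 95% boundary can actually be computed: the cumulative radial probability has a standard closed form, and a simple bisection search finds the enclosing radius. A minimal sketch:

```python
import math

# Cumulative probability of finding the hydrogen 1s electron within
# radius r, with x = r / a0 (a0 = Bohr radius); standard closed form:
#   P(x) = 1 - exp(-2x) * (1 + 2x + 2x^2)
def enclosed_probability(x: float) -> float:
    return 1.0 - math.exp(-2.0 * x) * (1.0 + 2.0 * x + 2.0 * x * x)

# Bisection for the radius that encloses 95% of the probability density
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if enclosed_probability(mid) < 0.95:
        lo = mid
    else:
        hi = mid

print(f"95% boundary: r = {lo:.2f} a0")   # about 3.15 Bohr radii
```

The answer lands near 3.15 Bohr radii, roughly 1.7 Å, for the 95% surface of the hydrogen ground state.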
What about the ’empty space’ view of the atom? As previously surmised, the filled, concentrically overlapping occupied orbitals of an atom define a region of electron probability density that is never empty, except in the case of the hydrogen cation, H+.
Recall that the mass of the electron is small, about 1/1836th the mass of the proton. This says that the space between the outer edge of the atom and the nucleus is occupied by the electrons, which are in constant motion constrained only by their individual 3-dimensional orbitals.
This forces us to think more clearly about what constitutes the ’empty space’ of the atom. That space is filled with a diffuse, low-mass-density swarm of negative charge. Only orbital nodes have zero electron density; all orbitals have some probability density throughout the interior of the atom.
Perhaps a better way to describe the space between electron and nucleus is to simply compare the dimensions of the atom and its nucleus in meters.
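For example, using round figures for hydrogen (the Bohr radius for the atom and the approximate proton charge radius for the nucleus; both values are illustrative, not precise):

```python
# Illustrative round numbers for a hydrogen atom (not precise values):
r_atom = 5.3e-11      # m, Bohr radius, taken as the atomic radius
r_nucleus = 8.4e-16   # m, approximate proton charge radius

ratio = r_atom / r_nucleus                    # how many times smaller the nucleus is
volume_fraction = (r_nucleus / r_atom) ** 3   # nuclear share of atomic volume

print(f"radius ratio:         {ratio:.1e}")           # ~6e4
print(f"nuclear volume share: {volume_fraction:.1e}") # ~4e-15
```

The nucleus occupies only a few parts in 10^15 of the atom’s volume; the electron swarm fills the rest.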
And in the Rutherford gold foil experiment, the diffuse electron density around the nucleus would pose little resistance to an alpha particle, with its far larger momentum, passing through, giving the illusion of empty space.
A gold foil of sufficient thickness will easily block all alpha particles. Alphas are stopped by losing their energy to ion formation as they pass through matter.
Back in my undergraduate days I remember finding the CRC Handbook of Laboratory Safety. One photograph really stuck with me. Years later I decided to replicate it and use the image in chemical safety training.
Picture by Arnold Ziffel.
The picture above shows what happens when a solution of soluble proteins in water is subjected to a large excursion in pH in either direction, highly acidic or highly basic (caustic). The take-home lesson is intended to be “wear your damned safety glasses or face shield.” The obvious comparison is between egg white and your corneas: both are transparent and both are made of protein.
In reality, there is no comparison between the composition of an egg white and that of a cornea. The human cornea is far, far more complex in composition and structure. But eggs are widely available and cheap. While we see many human corneas every day, they are attached to living people who would, no doubt, put up a tussle with anyone seeking to abscond with one. For a demo, egg white will have to do.
The major protein in egg white is the globular phosphoglycoprotein ovalbumin at 54 % abundance. According to Google it is “A major storage protein with phosphorylation properties.” Ovalbumin and human serum albumin share a name but little else. Ovalbumin serves as a storage protein source for developing chicks. Human serum albumin serves to maintain balance in the osmotic pressure of blood and to transport substances in the blood stream.
The egg white image is really about the effects of corrosives on protein. Ovalbumin exposed to strong acid will rearrange its globular structure in a way that renders it insoluble and causes agglomeration. Thus the opaque appearance. Trainees look at the image and, knowing they themselves are made of protein, can silently draw their own conclusions about the risk of getting a corrosive in their eyes.
This is just a short note to remind chemistry students to consider taking a class on polymer chemistry if it is available. While the endless collection of widgets made from synthetic polymers may seem to be tedious as hell, the chemistry and engineering of commodity polymers is really quite fascinating and maybe just a little artsy.
Polyolefin chemistry got very interesting with the development of metallocene catalysts in the 1990s, which were later refined into non-metallocene constrained-geometry catalysts. Do a keyword search for catalysts for olefin polymerization in Google Patents and see for yourself. You’ll find that Group IV metals are heavily represented.
In my industrial career as a PhD organic/organometallic chemist I was kept busy for about 10 years with in-house Reaction Calorimetry (RC), Accelerating Rate Calorimetry (ARC), Differential Scanning Calorimetry (DSC) as well as Thermogravimetric Analysis (TGA) for validating thermal process safety. To institutionalize this I was asked to start a process safety department and began standardizing experimental protocols and a database for the results. I was able to scour the internet for thermochemical papers, looking for mentions of energetic properties. As always, much can be learned by just looking around.
Thermal process safety refers to the safe operation of chemical manufacturing with regard to the generation of heat in a reaction mass and the hazards arising from it. The hazards of uncontrolled self-heating include acceleration of reaction kinetics, producing accelerating heat and pressure evolution. If the reaction enthalpy and the resulting temperature rise can theoretically exceed the boiling point of the solvent despite the cooling jacket and the chilled condenser, then self-heating can lead to a boil-up and uncontrolled ejection of the reaction mass. With insufficient cooling, the temperature will rise to the solvent bp and boil off the solvent first, carrying much heat away as heat of evaporation. Once most of the solvent has boiled away, and if the reaction mass continues to self-heat, the temperature will continue to rise and peak at some undesired level as the reactants are consumed. Further heating of the now hot, highly concentrated reaction mass can potentially lead to successive reactions that may or may not be exothermic.
When a liquid phase reaction mass self-heats faster than heat can be removed, the reactor pressure will begin to rise. As pressure builds, the boiling point of the reaction mass begins to rise, slowing down the boil-off. A sudden drop in pressure, as with the burst of a rupture disk, will cause a superheated solution to promptly boil throughout the reaction mass. This means that flash vaporization can lead to bubble formation throughout the volume of the reaction mass producing a foam. The severity will depend on the pressure drop and the bp of the solvent. If the headspace is sufficiently small, the foam can expand rapidly and begin to exit through the vent pipe. A properly engineered vent pipe has been sized to vent gas/vapor at specified conditions. Since a foam is part liquid and part gas/vapor, it lacks the overall compressibility of a gas/vapor so the resulting foam flow may be lower than calculated for a gas/vapor, slowing the rate of depressurization.
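The pressure dependence of the boiling point can be sketched with the Antoine equation; here water stands in for the solvent (the coefficients are the commonly tabulated 1–100 °C set, so the 2 atm value is a modest extrapolation):

```python
import math

# Antoine equation: log10(P_mmHg) = A - B / (C + T_degC), inverted here
# to give the boiling point at a given pressure. Coefficients are the
# commonly tabulated set for water over roughly 1-100 degC.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(p_mmHg: float) -> float:
    return B / (A - math.log10(p_mmHg)) - C

print(f"bp at 1 atm: {boiling_point_c(760):.1f} degC")   # ~100.0
print(f"bp at 2 atm: {boiling_point_c(1520):.1f} degC")  # ~120
```

Even a doubling of pressure pushes the boiling point up by roughly 20 °C, which is the superheat that flashes into foam when the rupture disk bursts.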
The distinction between a gas and a vapor is that a vapor may be condensable, as with most solvent vapors, but evolved gases like hydrogen, methane, or carbon dioxide will combine with a non-condensable blanket gas like nitrogen and resist condensation by the chiller. The point is that if one is relying on a chilled condenser to knock down non-condensable gases as a pressure management control, then a rude shock is headed your way, especially if the rupture disk bursting pressure is higher than it needs to be.
Flow into vent pipes that exit outdoors may discharge hot reaction mass onto the roof or wherever the vent terminates. If the vent terminates into a knockdown drum or other catch vessel, the hot reaction mass contacts whatever may be in those vessels.
Image: Mettler Toledo. The Mettler-Toledo RC1 rigged for dual feed and distillation or reflux. The two brown bottles (lower right) sit on balances and feed from two reagent bottles into the reactor (lower left). The feed of liquid reactants is pre-programmed and is controlled quite accurately. Reagents are fed into an agitating reaction mass (yellow) while the temperature and enthalpy (H or h) are monitored on the fly. The instrument monitors the jacket and reactor temperatures and with the help of heat capacities, Cp, can display the enthalpy of the reaction as it proceeds.
Fortunately, the thermal profile leading up to the above scenario can be modeled in properly conducted RC1 experiments. But exactly what can be done beforehand?
First, let’s realize that the total self-heating temperature rise can be measured. We add that ΔT (temperature rise) to the proposed reaction temperature Tr to get a maximum temperature of the synthetic reaction, MTSR. Once we have this, if the MTSR is greater than the bp of the solvent(s), then we know that an uncontained runaway is possible. What to do then?
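A back-of-the-envelope sketch of that MTSR check, with entirely made-up numbers (the heat of reaction per kilogram, heat capacity, and solvent bp below are illustrative assumptions, not data from any real process):

```python
# Hypothetical numbers for illustration only (not from any real process):
q_reaction = 150.0    # kJ released per kg of reaction mass
cp = 1.8              # kJ/(kg*K), heat capacity of the reaction mass
t_process = 25.0      # degC, intended reaction temperature Tr
t_boil = 66.0         # degC, solvent bp (THF is the assumed solvent here)

delta_t_ad = q_reaction / cp      # adiabatic temperature rise, ~83 K
mtsr = t_process + delta_t_ad     # Maximum Temperature of the Synthetic Reaction

print(f"dT_ad = {delta_t_ad:.0f} K, MTSR = {mtsr:.0f} degC")
if mtsr > t_boil:
    print("MTSR exceeds the solvent bp: an uncontained runaway is possible")
```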
R&D needs to justify using the problematic low-boiling solvent at the intended reaction temperature.
R&D needs to provide input on lowering the design reaction temperature.
Is a lower Tr for a longer reaction time feasible?
How sensitive is the reaction to a higher bp solvent substitution?
If the chosen solvent conveniently forces side or waste products to precipitate and be removed by filtration, then we have the conundrum of safety vs efficient processability.
The magnitude of the hazard in the minds of everyone involved may be quite different and will require a documented decision process. Engineering input here is invaluable.
Thermal runaway profile. Source. The linked article is well written.
Tp Process Temperature
ΔTad Adiabatic Temperature rise
MTSR Maximum Temperature of the Synthetic Reaction
TMRad Adiabatic Time to Maximum Rate
Tx Time of Cooling Loss
What if the design solvent is truly required for a feasible and economic process? This needs to be discussed first with chemists and engineers in the room and a decision rendered. If the chemists say no solvent change is desirable or economic, the engineers need to speak up as to whether there is an engineering workaround. If a ready engineering workaround is not feasible and the product is still important, then the chemists need to be challenged to find a procedure that denies the equipment a runaway condition.
If we’re lucky, a partial batch (semi-batch) method can be used, wherein the reactor is charged with solvent and most of the reactant compounds at the beginning of the run. The final reactant is slowly fed into the reactor, and the reaction temperature is controlled by the feed rate. Reaction calorimetry can be used to arrive at a plausible maximum feed rate that is fast but not too fast. A reaction calorimeter is basically a chemical reaction detector and can be used to look for an approximate reaction onset temperature. Remember that onset temperature is not a physical or chemical property; it depends on the detection equipment and the rate of heating.
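The feed-rate limit can be sketched with a simple steady-state heat balance: the feed must not release heat faster than the jacket can remove it. All numbers below (U, A, jacket ΔT, reaction enthalpy) are hypothetical:

```python
# Hypothetical cooling capacity and reaction enthalpy (illustration only):
U = 300.0           # W/(m^2*K), overall heat transfer coefficient
A = 5.0             # m^2, jacket heat transfer area
dT_jacket = 30.0    # K, reactor-to-jacket temperature difference
dH_rxn = 90_000.0   # J/mol released per mole of fed reagent

q_removal = U * A * dT_jacket     # W the jacket can remove at steady state
max_feed = q_removal / dH_rxn     # mol/s feed rate that the jacket can match

print(f"cooling capacity: {q_removal / 1000:.1f} kW")   # 45.0 kW
print(f"max feed rate:    {max_feed:.2f} mol/s")        # 0.50 mol/s
```

In practice a safety margin is applied below this ceiling, and the calorimetry data confirms whether the heat release really does track the feed.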
Like everything else, success can depend on first asking the right questions. On the graphic computer display of the RC1, you can determine the response to an aliquot of reagent addition. Does the heat production, q, rise promptly with addition or does it lag? If there is a lag or a latency, it means that over-charging by operators at scale can happen if they are looking for a prompt “heat kick” on addition of the feed.
The RC1 can also show the length of time needed to react away the feed, or what the total reaction time may be. If the reaction has a natural response lag, then a defined charge mass is called for. A response lag may also be due to the presence of water, which must first be quenched under the reaction conditions. The most insidious situation is when the feed reactant accumulates in the reactor over the course of the reaction. This is very difficult for the operators to judge. The feed may accumulate until the reaction suddenly begins and accelerates out of control. This is not uncommon.
Finally, a proper “batch reaction” is one in which all of the reactants are loaded into the reactor at once, the temperature is adjusted, and the reaction begins. It is critical that before a new batch reaction is allowed, the chemists show that it will not result in a runaway condition. This is where reaction calorimetry shines. The safety of a batch reaction is reproduced in the RC1 and its progress is monitored. The RC1 can also be used to explore various reaction conditions to see whether runaway potential can be easily blundered into. How narrow are the safe operating parameters? Many plant incidents happen at shift changes, where the continuity of watchfulness may diverge for a time, even with automation.
TMRad Adiabatic Time to Maximum Rate
A very informative piece of data to have is the TMR, the Time to Maximum Rate. This can be obtained with an Accelerating Rate Calorimeter, or ARC. The instrument consists of a furnace into which is placed a sample “can,” which can be made of metal or glass. The furnace raises the sample temperature gradually using a heat-wait-search (HWS) method, searching for an onset temperature.
Once an onset temperature is found, the HWS is automatically halted and the furnace keeps adjusting its temperature to match the rising internal sample temperature. If the internal sample temperature and the exterior furnace temperature are the same, then the sample is under adiabatic conditions and no heat flows in or out of the sample can. The sample temperature is driven by self heating only.
Knowing the sample mass and the best guess at Cp, the constant-pressure heat capacity, the reaction enthalpy can be determined. From the data, the Time to Maximum Rate (TMR) can be expressed as an equation: the time a self-reacting substance takes to reach its maximum rate of heat output, as a function of sample temperature. The instrument also records sample pressure. If the sample pressure does not return to ambient at room temperature, a non-condensable gas was evolved.
Image of the Phi-Tec II ARC system, from the H.E.L. company. My ARC experience is with this model.
A typical ARC experiment took me from 6 to 24 hours to complete the HWS routine.
What TMR data allows one to do is find a reaction temperature that reaches maximum rate in 24 hours or more. You plug in a temperature and you get a TMR. The temperature that produces a TMR of 24 hours is considered by many in industry to be the uppermost safe processing temperature. It also helps answer the question of the maximum temperature at which a solid can be dried before decomposition begins.
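One common way to formalize the 24-hour rule is the zero-order approximation TMRad(T) = Cp·R·T²/(q(T)·Ea), with the self-heat rate q(T) following Arrhenius behavior. The sketch below uses invented ARC-style parameters (q_ref, t_ref, Ea, and Cp are all assumptions) and bisects for the temperature where TMRad equals 24 hours, sometimes called the TD24:

```python
import math

R = 8.314              # J/(mol*K), gas constant
# Hypothetical ARC-derived parameters (illustration only):
cp = 1800.0            # J/(kg*K), specific heat of the sample
ea = 100_000.0         # J/mol, activation energy
q_ref = 10.0           # W/kg, self-heat rate measured at t_ref
t_ref = 423.15         # K (150 degC), reference temperature

def tmr_ad(t: float) -> float:
    """Adiabatic time to maximum rate at temperature t (K), in seconds."""
    q = q_ref * math.exp(-ea / R * (1.0 / t - 1.0 / t_ref))
    return cp * R * t**2 / (q * ea)

# Bisect for the temperature where TMRad = 24 h
lo, hi = 300.0, 423.15
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if tmr_ad(mid) > 24 * 3600:   # still slower than 24 h: can go hotter
        lo = mid
    else:
        hi = mid

print(f"TD24 = {lo - 273.15:.0f} degC")   # roughly 101 degC for these inputs
```

For these made-up inputs, the sample could not be processed or dried much above 100 °C without the runaway clock ticking faster than a day.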
To outsource safety testing or not
First and foremost, a commercial safety test lab understands and uses procedures that are agreed upon and standardized. Also, if a related event comes down the road, your response to criticism will be to refer to the test lab experts, not some ham-fisted employee monkeying around in the lab doing improvised experiments. Certain safety matters should be referred to the commercial lab experts for valid results and for CYA. This applies especially to energetic materials like nitroaromatics or nitrate esters.
Chemical manufacturing is conducted at many scales, from laboratory gram scale for R&D, through multi-kilogram kilo-lab batch processing, to the colossal commodity-scale continuous manufacturing of petrochemicals, agrichemicals, polymers, flavors & fragrances, and pharmaceuticals. Nearly all of these commodity chemicals and polymers are well known and have safety issues related only to flammability, exposure, and dose.
Outsourcing tests that can be done inhouse is a missed opportunity to accumulate more skills which is company treasure. I’m speaking of calorimetry. Calorimeters can be brought on-site and meshed in with research and development. Just learning how to interpret thermograms alone brings workers new insights into their chemistry.
What is best for your company? In-house safety testing or outsourced safety testing? Like nearly everything else in life, the answer depends on the situation. If you need to survey for explosive hazards for the first time, there are several competent commercial labs available that will use standard protocols. My experience is that they employ just engineers or a mix of chemists and engineers. They conduct standard testing protocols wherein a series of samples are exposed step-wise to a series of ever increasing stimuli intensity to find the boundary conditions of sensitivity to various stimuli, like heat, friction, impact, dust explosion parameters, burn tests, static charge lifetimes and minimum ignition energy (MIE) with electrostatic discharge.
Explosibility testing
Sensitivity to explosive behavior is tested in numerous ways to flesh out the sensitivity profile. Testing is performed in stages where the least intense stimuli are tried first to screen for highly sensitive substances. The results of any single test run are graded as ‘Go/No Go’ or ‘positive/negative’. The terms ‘Go’ and ‘positive’ mean that an explosive property was observed.
Part of explosives testing is finding out what kinds of stimuli lead to initiation of an explosion. The Bureau of Mines (BOM) drop-weight test looks for the maximum safe impact energy. There is a friction test, an electrostatic discharge test, and many others. If the sample does not give a Go result at the machine’s maximum impact or friction, then it is regarded as safe under those precise conditions. In the BOM test, the greater the drop distance of the 5 or 10 kg weight without a Go, the more stable the substance is to impact.
You get the testing data. Now what?
Now how do you take numerical test data and convert it into safer operations? This is where engineers can be most useful. Imagine a substance with a 34 inch BOM drop-weight result using a 10 kg weight. Will any process equipment mash down on the substance inadvertently? Put this ball in the engineers’ court and let them chew on it. This data moves workers closer to confidence in safety.
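For perspective, the drop-weight figure converts directly to impact energy (simple mechanics, E = mgh; the 34 inch and 10 kg numbers are taken from the hypothetical example above):

```python
# Convert a BOM drop-weight result to impact energy: E = m * g * h
m = 10.0            # kg, drop weight
h = 34 * 0.0254     # m, drop height (34 inches)
g = 9.81            # m/s^2

energy_j = m * g * h
print(f"impact energy: {energy_j:.0f} J")   # ~85 J
```

About 85 J, roughly a bowling ball dropped from waist height; that is the scale of mechanical insult the engineers need to rule out in the equipment.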
Outsourcing safety testing and explosive screening can lead to a conundrum. Outsourcing anything means that certain expertise may never be internalized by your company, the user or manufacturer. Commercial labs will absolutely not comment on how the material can be safely used, or whether it is too dangerous or nominally safe under your use conditions. Safe use is not an endorsement they will make; they will stand behind only their results from standard testing protocols. I’d do the same.
Before safety testing you were alone. Now, with safety data, you are still alone, but with numbers. Engineers and plant operators are invaluable in locating equipment that delivers impacts or friction. They can also help to identify ungrounded equipment that may generate or accumulate electrostatic charge. Always get the plant people involved.
It didn’t take long to realize that if we sent samples out to commercial labs for calorimetry testing, the samples were subjected to unfamiliar standard test methodology. Early on it was fascinating to see what kind of experimental setups were used and what the results looked like. Being a synthesis chemist I was unfamiliar with calorimetry. My earlier exposure to calorimetry was limited to what appeared in molecular dynamics and mechanics modeling. Acquiring actual data on reaction enthalpies and onset conditions myself awakened a fascination that carried me far into reaction calorimetry and thermochemistry.
What was not clear at the outset of receiving external calorimetric, electrostatic, and explosive test data was what to do with it. Using external hazard data to inform operational procedure was new to everyone. Yes, we could learn from an ARC experiment at what temperature the onset of a runaway condition begins, but how to use the measurements in practice wasn’t always obvious.
Incidents have three phases- initiation, propagation and termination. You have to ask this: if an incident initiates, what is the preferred propagation direction to termination? Yes, this can be controlled somewhat but only in advance. For instance, if an explosion happens, what is the least terrible direction for the blast to go? These matters should be considered in the design phase of construction of a chemical facility. If they weren’t, then decisions must be made despite the lack of preplanning.
As an example, a commercial explosives company I’m aware of built their manufacturing facility out in the European countryside. Explosive materials were prepared, stored and handled in small buildings distributed over a large area with distance, berms and trees separating them. If an explosion happened, the blast wave would be isolated from other assets and attenuated by distance, berms and forest. Here, the propagation phase was suppressed by distance and topography.
Another explosion highlights the folly of not segregating manufacturing operations. A plant manufacturing a hydroxylamine called HOBT suffered a catastrophic incident in which a reactor blew apart explosively during a process previously performed many times. The reactor was housed in a structure that had expanded over time by piecemeal addition of manufacturing space as needed. The result was a building that was a rabbit warren of rooms and hallways, even including admin space. The explosion did not happen without warning. The reactor began to overheat from accumulating heat of reaction and became unresponsive to the operator’s cooling efforts. As the operator turned to go get help, the reactor exploded, sending parts up and out of the building; the agitator landed on the roof of an adjacent business and on railroad tracks. Heat transfer oil ran out of the building and flowed into the nearby river. The operator was blown through a sheetrock wall but survived. The shock wave propagated into adjacent spaces and down hallways, blowing out windows and internal and external doors, including overhead doors.
The sad thing is that another plant suffered a devastating explosion 20 years earlier while making the same hydroxylamine product. Perhaps lessons were learned at that plant, but those lessons didn’t carry over to the other.
The lesson is clear. In chemical manufacture the R&D folks must be sure that all chemical properties are well understood and such knowledge is a part of accessible in-house expertise. If there is no R&D, meaning that a large scale procedure is simply written up and performed without the scrutiny of cold expert eyes evaluating it, then you are stepping onto a high wire without a net. Both plants making the hydroxylamine had experienced chemists on site and performed the procedure without incident many, many times. Even then, incidents happened but how many incidents were averted by expert judgement? We’ll never know.
Experience
Let’s talk about experience. Career chemists are like everyone else- they may have accumulated years of experience. Some of the learnings a person has accumulated are captured in writing and available to staff. Other learnings reside only in a person’s head and are perhaps regarded as ‘obvious’. Or the serious hazards are actually disclosed on the Safety Data Sheet, which was filed away without scrutiny. Knowledge of the explosibility of a particular substance can be too narrow, by virtue of time and obscurity, to serve as walking-around knowledge for many chemists. Some of us are accustomed to spotting explosive functional groups (explosophores) on a molecule, but many are not.
For some individuals, their 18 years of experience is better described as 6 years repeated twice, or worse. Years of experience should always imply years of continuous improvement.
The main reason that process safety was a separate department was to prevent production and R&D from having a vested interest in how test measurements were interpreted and used, or ignored. If calorimetric data suggest that a particular process reaction can run away, or that a reaction should be initiated and run at a lower temperature, managers personally responsible for productivity may object owing to increased plant time or lower processing yields. This is especially problematic if prior experience has never shown a hint of a hazard, yet. Or if incidents in the past were not taken seriously or properly understood. The phrase “we’ve always done it this way” can be a very difficult barrier to overcome. And even if it is overcome, practice can revert to the old ways over time.
This forces management to deal with safety margins and acceptable risk. They should automatically understand that zero risk is not possible. However, they may look back over the production history and not realize that they spent too much time near the edge of disaster.
Unknown risks
Imagine wearing a blindfold while standing 2 meters from the rim of the Grand Canyon. Someone turns you around a few times to scramble your senses. Now, even while not knowing the location of the rim, it is possible to walk around blindfolded and not go over the edge. You could do this for a short or a long time period and not fall in. Slowly you begin to doubt the hazard is real since you have not gone over the edge. Soon the risk is forgotten in the frenzy to reduce costs. Then one day you fall into the canyon and on the way down you muse about your own folly.
Not a single reader has asked about the photograph in the header of this blog, so I’ll save the many peoples of the world from having to ask. Mineral collecting has been a lifelong weakness of mine so there was no surprise when I bought the pink mineral in a rock shop in Leadville, Colorado. The pinkish mineral in the sample is rhodochrosite, the state mineral of Colorado. Like most samples, it comes from the now-closed Sweet Home Mine, a failed silver mine in Buckskin Gulch outside of Alma, CO, between Breckenridge and Fairplay. If you are ever in Denver with spare time on your hands, the mineral collection at the Denver Museum of Nature & Science has a stunning collection on display of rhodochrosite from the Sweet Home Mine.
Source: Google Maps. Location of Sweet Home Mine outside of Alma, Colorado.
To get to the site take gravel road 8 from Alma up Buckskin Gulch which eventually terminates at a trailhead near the base of several fourteeners in the Mosquito Range. We once tried to find the mine by driving up the gulch above Alma, but there were no signs identifying the mine.
Source: Google Earth. Location of Sweet Home Mine in Buckskin Gulch.
While we did not positively identify the mine on our trip, a photograph (below) was found later of a building associated with the mine. We did see it but sailed right on by. The mine is located on private property so wandering around the site is not permitted.
Source: Facebook. The famous Alma King rhodochrosite specimen with museum dudes for scale.
Source: personal specimen purchased at a rock shop in Leadville, CO. The rhodochrosite section is placed next to manganese on the periodic table just because it looked cool. The gold-colored bits on the specimen are likely chalcopyrite.
The mining district was discovered in the usual way- the search for placer metals like gold led miners up Buckskin Creek into the gulch looking for the source of the lode deposit. The original silver mining claim was made in 1873. The sporadic silver mining operation was abandoned in 1966. In 1991 the mine was bought out by Collector’s Edge Minerals, a consortium, and modernized. After a period of activity, the Sweet Home Mine was closed in 2004. However, another mine called the Detroit City Portal was begun by Collector’s Edge on nearby Mt. Bross in 2016. This new operation, which yielded many fine specimens, was finally closed in September of 2024.
Source: Mindat.org. Looking north towards the Sweet Home Mine and what appears to be Mt Democrat on horizon.
“Mineralization is generally in base metal-silver-rhodochrosite-fluorite veins predominately hosted by meta-igneous and metamorphic rocks, with minor mineralization in porphyritic dikes and pegmatites. There are five main veins in descending order of production: the Main, Tetrahedrite, Watercourse, Blaine and Blue Mud veins. The Blue Mud Vein is a barren post-mineralization fault-vein, and production from the Blaine Vein was minor. Overall, the planned extent of the mine is small (1000 feet x 400 feet) with about 5,000 feet of workings, and the overall hydrothermal alteration zone small, despite evidence of on-strike continuation of the veins in the collapsed Tanner Boy workings directly across Buckskin Gulch. And even within a vein, rhodochrosite finds were limited.”
“Three conditions were responsible for the formation of vugs: (1) changes in strike and dip of veins, (2) vein intersections, and (3) openings formed by fault bends controlled by host rock foliation. In general, the 2nd condition was responsible for major pockets, and the 3rd for most smaller pockets. Exploration focused on fault/vein intersections. Fluid inclusion studies suggest that the hottest fluid flow produced the gemmiest ruby-red rhodochrosites.” Mindat.org
Deposits found in the mine result from mineral-saturated hydrothermal fluids moving from the mineral source-rock into faults and fractures in the formation that were cooler, leading to precipitation of the minerals. The large size of the rhodochrosite crystals in the museum collection suggests that the precipitation was gradual.
According to Mindat.org, after the buyout of the Sweet Home Mine by Collector’s Edge Minerals and subsequent modernization, ground-penetrating radar was used to survey for vugs. According to the AI overview by Google in a search for “vugs”-
Vugs are- “small to medium-sized hollow spaces or cavities within rocks, often lined with beautiful, well-formed crystals like quartz or calcite, formed by mineral-rich fluids filling natural voids left by dissolution, tectonic shifts, or gas bubbles in volcanic rocks, prized by collectors for their exposed crystal formations.”
Only makes sense, right? Voids in the rock filled with mineral-rich liquid give crystals space to grow into. Vugs are associated with faults and fractures, which can be filled with hydrothermal fluids within a formation. Lode gold, silver, lead, etc., as well as quartz, may line or even fill the vug. This is why some of the best mineral crystals are found only in mines, and this certainly applies to rhodochrosite. Rhodochrosite contains manganese(II), which is oxidizable to a higher, more positive oxidation state, so protection from atmospheric oxygen deep within a rock formation prevents decomposition of the mineral.
Crystallographic structures of rhodochrosite are shown below-
Source: Mindat.org. A view of the crystal structure rotated to see the planar arrangement of Manganese (2+) in purple and carbonate anions (2-) in grey and red.
Source: Mindat.org. In this view the alternating layers of carbonate anions (CO3 2-) are delineated, showing the carbon and oxygen atoms. The trigonal shape of carbonate can be seen.
Below is a representation of the unit cell with atom labels. Clear images are tricky with crystal structures. Overlapping features are hard to avoid.
Source: Mindat.org. The labeled unit cell of rhodochrosite. Partial carbonate structures can be seen contributing to the unit cell.
Rhodochrosite is manganese(II) carbonate, MnCO3. It is insoluble in water, but as a metal carbonate it is acid sensitive and therefore subject to hydrolysis and to chemical or microbial oxidation to Mn(III) or Mn(IV). Like a great many common ionic substances, it is not regarded as suitable for jewelry applications because it is not composed of the silicate or aluminosilicate subunits common in semiprecious and more robust minerals like sapphire, beryl or garnet. The structure is composed of MnO6 octahedra connected by trigonal carbonate units. The large buff-colored balls are manganese atoms, and the smaller, bluish-colored balls connected directly to the manganese atoms are oxygen atoms. The middle-sized darker balls not connected directly to the manganese atoms are the carbon atoms of carbonate.
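The acid sensitivity mentioned above is just the standard carbonate reaction; written generically (an illustration, not drawn from any particular weathering study):

```latex
\mathrm{MnCO_3(s) + 2\,H^+(aq) \longrightarrow Mn^{2+}(aq) + H_2O(l) + CO_2(g)\uparrow}
```

This is why even mildly acidic solutions, sweat included, will slowly dull a polished rhodochrosite surface.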
Manganese is not uncommon in the Colorado Rockies. A mining geologist once complained to me that there was so much manganese in their gold mine tailings that it was a regulatory problem for them. For a time pyrolusite, or manganese dioxide (MnO2), was mined in Colorado, near Salida. Never a large operation, pyrolusite could be used in the extraction of gold from its ore.
Crushed pyrolusite was placed, along with sodium chloride, below a wooden container holding the gold ore. To this mixture was added concentrated sulfuric acid. This generated gaseous hydrogen chloride, which was then oxidized by the manganese dioxide in the pyrolusite to chlorine gas. The chlorine flowed up through the container of gold ore, reacting with the gold to form gold chloride. The water-soluble gold chloride was leached out with water, and scrap iron was dumped into the resulting pregnant solution. The iron reduced the gold chloride, and finely divided gold precipitated out. This was a pretty danged clever method for use in the field, as it required only water, NaCl, H2SO4 and pyrolusite mineral, which could have been mined in Colorado.
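The sequence above can be sketched as a chain of standard reactions. These are balanced here as an illustration; the field process would have been far less tidy, and gold(III) chloride in excess chloride is better written as chloroauric acid:

```latex
\begin{aligned}
\mathrm{NaCl + H_2SO_4} &\longrightarrow \mathrm{NaHSO_4 + HCl\uparrow} \\
\mathrm{MnO_2 + 4\,HCl} &\longrightarrow \mathrm{MnCl_2 + Cl_2\uparrow + 2\,H_2O} \\
\mathrm{2\,Au + 3\,Cl_2} &\longrightarrow \mathrm{2\,AuCl_3} \\
\mathrm{2\,AuCl_3 + 3\,Fe} &\longrightarrow \mathrm{2\,Au\downarrow + 3\,FeCl_2}
\end{aligned}
```

Note the electron bookkeeping in the last step: three Fe(0) atoms give up six electrons, exactly what two Au(III) ions need to drop out as metal.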
Oh, BTW. You might know that a way to generate a stream of fairly dry HCl gas (in a lab fume hood!!!) is to place granular NaCl into a vented flask and slowly drip conc H2SO4 from an addition funnel on it. A stream of nitrogen is used to force a flow of HCl out of the flask and through a sparge tube into your reaction flask.
And, speaking of metals ...
Nearby the Sweet Home Mine, a hop, skip and a jump across the ridge to the NW, is the Climax Molybdenum Mine on Fremont Pass, just up the road west of the Copper Mountain Ski Resort. This major mining operation is owned and operated by Climax Molybdenum Company, a subsidiary of Freeport-McMoRan. If you look at the image for a minute, perhaps you can see that most of Bartlett Mountain is gone. Just imagine laboring in a frigid mine at over 11,000 ft of altitude. I’d be dead by noon the first day …
Source: Google Earth. Just a few miles NNW of the Sweet Home Mine is the Climax Molybdenum Mine on Fremont Pass.
The mineral of interest at the Climax is molybdenite, or molybdenum disulfide, MoS2. The deposit was discovered in 1879 by prospector Charles Senter, who was actually prospecting for gold or silver. By 1895 Senter had found a chemist who determined that the mineral contained molybdenum. At that time, however, there was no market for the moly. Within a few years steelmakers discovered that molybdenum had application in steelmaking and, with the onset of WWI, the mine went into full production after it was discovered that the Germans were using molybdenum to strengthen the steel in their tanks and weapons.
The National Mining Hall of Fame and Museum down the road in Leadville has a large collection of interesting artifacts from early mining efforts at Climax. If you have been in many mines, you’ll know that they are mostly hallways that have been blasted out of solid rock. When mining activity stops, they are eerily quiet.
Image source: National Mining Hall of Fame and Museum in Leadville, CO. Colorized photo of lunch time in the mine.
Molybdenum sulfide is also valued as a dry lubricant for use in the temperature extremes and vacuum of space. Dry, low vapor pressure lubricants are used to prevent evaporation and contamination of optical surfaces on a satellite.
I received an email from Academia.org stating that they could turn a research paper which they suggested into a cartoon. Well, what could I do but give it a try?
Cartoon based on a paper of mine in JOC (long ago) on the facile synthesis of molecules with chiral, enantiomerically pure quaternary carbons. It was a synthetic methodology paper.
Considering that the cartoon has only 4 panels to it, this isn’t so terrible. The title of the paper did have some ordinary vocabulary in it like: the, of, and pure. Isn’t that enough for everyone? Crimony!
In truth this “service” is meant to tickle my funny bone enough to lower my cheapskate defenses, hopefully causing me to subscribe to their service. It didn’t work, this time.
The “facile synthesis machine” is up there with the “Wayback Machine” in terms of wishful thinking.
Summary: The point of this essay is to remind people that, while the works of Charles Darwin and Jean-Baptiste Lamarck were obviously profound in the understanding of many aspects of the biology of life on earth and its adaptation to the environment, their work is very much a product of the mid-19th century. This was before the atomic and molecular theories of matter were developed. Since that time the fields of biochemistry and molecular genetics have grown to a high level of sophistication and provided many mechanistic details on how evolution can occur at the level of molecules. With the advancement of biochemistry and molecular genetics, evolution is recognized as a molecular phenomenon using chemical mechanisms not unfamiliar to chemists. It seems likely that if Darwin, Lamarck and others had not made their early contributions to evolutionary theory, biochemists and biologists of the 20th century would certainly have proposed evolution as an inevitable consequence of the mutability of life.
………………..
The frame of reference in this essay is that of an organic chemist’s mechanistic view of the fundamentals of chemical change in biochemistry or molecular biology. Let’s just call it chemistry.
Of the many features of popular science content, one annoyance to the writer stands out: articles on evolution remain fixated on Charles Darwin’s mid-19th century magnum opus, “On the Origin of Species“. Darwin’s survey expeditions on the Beagle from 1831 to 1836 as a gentleman companion and naturalist resulted in sharp observations, sample collection, notes, books and years of scholarly lectures.
The question of biological change from evolution dredged up considerable controversy early on, most prominently from the religious communities, and lasting to this very day. Much later, after chemistry based on atomic theory was well established, creationists began to sermonize on the statistical problems with the right atoms coming together in the correct order to produce a person. The mantra was that creation implies a Creator. Within the context of the Abrahamic religions, the creation of life was clearly stated in religious texts. To assert otherwise was simple heresy. Eventually, the more literate opponents of evolution latched onto the physical principle of entropy.
Entropy is a concept that creationists love to unsheathe and swing around. They will say that the 2nd Law of Thermodynamics opens up an apparent contradiction. The crux of their argument is based on equating entropy with ‘disorder’. Life itself is comprised of many kinds of highly ordered matter, but the universe is supposed to be getting more disordered. How can this be?
What doesn’t get mentioned is the considerable disorder produced from the life and growth of organisms.
/*BeginEditorial Comment*/
The term “disorder” is the Disneyland word for entropy. It is a highly over-simplified cartoon word meant to describe entropy, which is a thermodynamic state variable. Entropy has the physical units of energy per kelvin (or, for molar entropy, energy per kelvin per mole). It refers to irreversibly dispersed energy in a system. In my opinion, the word “disorder” is too loosey-goosey even to serve as a loose definition.
/*End Editorial Comment*/
A protein molecule doesn’t appear to be “ordered” to the untrained eye. However, a protein molecule has 3 levels of structure in its final construction. First is the specific sequence of amino acids in the protein chain. Second, the chain contains chemical bonds that are free to rotate and chemical bonds, such as the peptide bond, that are not. This rotational freedom allows a protein chain to rotate about many of its bonds and come to rest in a conformation where two features may have a reversible mutual attraction, or simply relax into the configuration with the least strain.
A third form of protein structure comes from the attraction of individual proteins to one another. Proteins often align with one another to form large complexes, frequently embedded in cell membranes. These protein structures can contain a channel where ions may pass between the interior and exterior of the cell. The channel can be opened or closed in response to external stimulation.
As a protein chain is assembled, it has amino acid features that can form hydrogen bonds, which allow particular stretches of the chain to reach around and weakly, reversibly connect with itself. An amino acid that has a thiol (or sulfhydryl, -S-H) group can react with another to form a disulfide linkage (-S-S-). The disulfide linkage is covalent and thus fairly stable, though it is subject to reductive or oxidative cleavage.
A length of protein can form a helical secondary structure, a somewhat flattened secondary structure called a beta-pleated sheet, or an unstructured sequence of amino acids.
A few words about entropy, S
One of the ideas frequently cited in creationism is entropy. It is cited because creationists take evolution to be contrary to entropy and the Second Law of Thermodynamics. According to Google, the thermodynamic definition of entropy is given as-
Unavailable Energy: In thermodynamics, entropy quantifies the portion of a system’s thermal energy that cannot be converted into useful work. The more disordered a system, the less energy is available for work.
A more expanded definition is-
Entropy is fundamentally linked to the second law of thermodynamics, which states that the total entropy of an isolated system always tends to increase over time. This means that systems naturally move towards a state of greater disorder and less available energy for work.
The usual interpretation is that the total entropy of the universe is always increasing. This gets interpreted as the world becoming more ‘disorderly’. Unfortunately, the word ‘disorderliness’ invites cognitive bias. The natural meter-scale world we reside in provides many examples of orderliness, which is often just a value judgement by people seeking tidiness.
Creationists often portray abiogenesis and evolutionary change as highly improbable, suggesting the ultra-minuscule chance that the necessary atoms could connect perfectly to create life. If this process were truly random, I might concur. However, the formation of molecules from atoms and the subsequent reactions leading to further changes are not entirely random. Any two atoms or molecules colliding are subject to random motions, true enough. However, what happens during and after a collision and subsequent reaction is far from random. At Earth surface temperatures allowing liquid water, atoms and molecules engage in water-compatible reactions, yielding a limited array of possible outcomes and sometimes even a sole outcome. Given certain conditions such as temperature and chemical surroundings, each atom or molecule is restricted to a fairly small number of reaction channels or pathways. Life did not spontaneously arise or evolve from a purely haphazard broth of atoms.
Careless assertions about entropy and the order/disorder of matter can lead to specious conclusions.
The better definition of entropy comes from statistical mechanics. Entropy describes how energy is distributed among the microscopic states of a system. It describes how many ways the system can be arranged at the microscopic level while still appearing the same macroscopically. [From ChatGPT]
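In symbols, this is the standard Boltzmann relation of statistical mechanics (textbook material, not taken from any of the definitions quoted above):

```latex
S = k_B \ln \Omega
```

where $k_B$ is Boltzmann’s constant and $\Omega$ is the number of microstates consistent with the observed macroscopic state. More microstates means more ways for energy to be dispersed; no appeal to tidiness is required.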
Entropy as disorder is perhaps a cul-de-sac rather than the road to understanding.
The Creationist’s respect for the 2nd Law is quaint, but it is more a matter of picking up their opponent’s club and beating them with it in error. When finished, they set it back down and walk away satisfied that they have used science to beat science.
Dipoles- Nature’s Sticky Spots
At the atomic level of matter, during a reaction atoms or molecules may undergo a rearrangement of charge, leading to +/- ionic species bearing a single pole, or to a dipole.
Graphic by Arnold Ziffel.
Image by Arnold Ziffel.
When atoms and molecules undergo electronic change the surrounding solvent environment may help or hinder a given transformation. If during the course of a chemical reaction a transient charge is produced, the solvent ‘bag’ enclosing the reacting molecules can promote or hinder the reaction transformation.
My personal policy is to limit the word ‘entropy’ to subjects related to the atomic scale or to heat engines. A loose pile of bricks should rather be described as in disarray.
Molecules and even neutral noble gas atoms can form transient dipoles, causing small, short-lived attractive forces between atoms. These are called van der Waals forces. (Graphics by Arnold Ziffel.) The ease with which a reaction mechanism proceeds may be subject to solvation effects. Formation of a dipole requires that negative charge be pulled away from positive charge. This takes work against the natural attractive force between positive and negative charges; it takes energy to electronically alter a molecule to produce the charge separation that forms a dipole. A shell of dipolar solvent molecules around a reacting dipolar molecule can stabilize accumulating polarity sufficiently to aid the transformation.
Chemical reactions proceed mechanistically in a stepwise manner, and with over 150 years of extensive and peer reviewed chemical research and development, much chemistry has become quite predictable across a wide range of substances. A crucial aspect of modern organic chemistry is the understanding of reaction mechanisms. Biochemists focus on the mechanisms of reaction in aqueous environments, while classical organic chemists commonly avoid water in lab work. However, the principles of physical chemistry support both fields. Indeed, physical chemistry is the cornerstone of the chemical sciences.
On mutation
My greatest hangup about language commonly used to describe evolution is when someone says “The _______ evolved the ability to _______ in order to survive.” True, but in the minds of many this may suggest that a specific genetic change was purposely triggered to achieve the ‘goal’ of enhanced survivability. If a genetic change occurs that improves survivability, it begins randomly. It has rightly been said that evolution is blind going forward. The DNA of an organism struggling for survival will not automatically give rise to offspring that have resistance to a given threat. Rather, with each successive daughter cell there is a chance that a beneficial mutation has occurred. But mutations can happen anywhere in the genome. Some mutations may be beneficial, and others may be problematic in terms of survival. There is a chance that the mutation may never be expressed from dormant genes. In order for a mutation to pass forward, it must happen before or during reproduction. Mutations to the parent organism after it has reproduced end right there, beneficial or not.
Radiation-induced mutations to the DNA are more likely to occur during mitosis in cell division when copied DNA strands are being pulled apart and into the daughter cell. The DNA strands are not yet wound with the histones and are more accessible to external influences like radiation or chemical insult.
/* Anecdote: DNA Breakage */
In my second go-around with radiation for prostate cancer in spring of 2024, I learned that the practice now is to deliver approximately the same overall radiation dose as before, but in fewer and larger doses. The idea is to cause breakage in both DNA strands of the double helix rather than just a single strand in the cancer cells. I had 4 sessions of 8 gray in 2024 as opposed to 21 sessions of ~1.5 gray in 2014. I cannot account for why the total dosages are not equal, but there were specific cancerous tissues like the prostate and the seminal vesicles to hit the first time around.
/* End Anecdote */
Today there is a growing understanding that there are processes involving the DNA polymer-histone structure that do not involve changes in the genetic sequence. This is called epigenetics. The total human DNA double helix stretched out is approximately 2 meters in length, yet must be contained within a cell. The way the DNA double helix does this is to wrap around a series of individual proteins called histones for compaction into a smaller structure called chromatin. Finally, the chromatin folds into the familiar chromosome structure.
Source: Wikipedia. Public domain image produced by the National Institutes of Health.
But is that adaptation by the existing genome composition? Genetic evolution is blind going forward. If a species evolves with a survival advantage of some kind, there can be no “foresight” involved. If it is truly genetic evolution, the end result is because of heritable genetic changes at the level of molecules. If a changing environment causes altered expression of an existing gene in response, say a gene that is otherwise dormant but is suddenly “awakened” by the new environment somehow, then perhaps this is a form of “adaptation within the existing genome” rather than evolution by editing of the genome. This is where epigenetics operates.
Charlie Darwin
The naturalist, geologist and biologist Charles Darwin‘s claim to fame is substantial and well deserved. His book “On the Origin of Species” is the work that is cited by many but read by few. Over his lifetime he had published considerable work before Origin. What may be less known is that the notion of evolutionary change wasn’t something that he alone scraped together. Others had previously speculated, out loud and in print, about changes in species over time. His grandfather, the physician Erasmus Darwin, produced a volume titled Zoonomia that anticipated some of the work of Lamarck, which in turn foreshadowed the concept of evolution.
Charles Darwin drawing by Samuel Laurence, 1853. Source: Wikipedia.
In 1831, Charles Darwin embarked on what was originally planned as a two-year voyage of discovery aboard the H.M.S. Beagle, but which ultimately spanned five years. The expedition’s primary goal was exploration, and Darwin, recommended for his scientific interests, joined Captain Robert Fitzroy as a gentleman naturalist rather than as a mere specimen collector. Fitzroy, a scientist himself and later a Vice-Admiral in the Royal Navy, led the journey. During the voyage Darwin dispatched bones, fossils, seeds, illustrations and writings back to England, garnering significant interest from geological and natural history circles. Of interest, prior to setting sail Darwin had acquired skills in taxidermy.
After Darwin’s return to England, he spent many years speaking, writing, rewriting and publishing his accounts of the voyage. He is buried in London’s Westminster Abbey just a few meters from the grave of Sir Isaac Newton.
Our acknowledgement of the value of Charles Darwin’s work and methodology is fully legitimate. Darwin’s theory of evolution was a major step change in how we think about biology, speciation and introduced us to natural selection. What is missing from Darwin’s work, however, is the physicochemical mechanism of how evolution works. This is understandable simply because biochemistry was unknown at that time. Inheritance at the molecular level was a mystery until the early-mid 20th century when the molecular biology of the gene began to come together. DNA and RNA had to be isolated and characterized as well as observations made of their x-ray structures. The connection of DNA and RNA polymers to protein formation and composition had to be arduously worked out. Accounts of this are easily found on the internet.
One point of this essay is to argue that while Darwin and others began to coalesce the varied observations of macroscale adaptation and speciation found around the world into a grand theory, the mechanism of evolution lay at the Ångstrom to nanometer scale of the molecule. I find it impossible to deny that if Darwin and others had not come up with evolution at the macroscale, biochemists would have discovered molecular evolution and it would have been used as a basis for the evolution of species.
Sidebar. Rosalind Franklin (25 July, 1920 to 16 April, 1958)
Much has been made over the alleged snubbing of physical chemist Rosalind Franklin in the selection of the 1962 Nobel Prize winners in Physiology or Medicine for the discovery of the structure of DNA. A recent paper in the 25 April, 2023, issue of Nature brings together some little-known details of the cold shoulder given Franklin as a co-discoverer of the double helix structure of DNA.
The disqualifying event for the 1962 Nobel Prize occurred in 1958 with Franklin’s death. At the time, posthumous awards were accepted only if the death occurred between nomination and the award date, which was not the case for Franklin.
The story of the discovery of the double helix structure of DNA involves 4 central characters: James Watson and Francis Crick from the University of Cambridge, UK; Rosalind Franklin and Maurice Wilkins at King’s College, London.
Before any of this started, the involvement of deoxyribonucleic acids in heredity had been suggested by Oswald Avery in 1944 (below). Watson and Crick, Franklin and Wilkins did not discover DNA. They did, however, use x-ray diffraction of crystalline DNA fibers and model building to deduce the chemical structure of B DNA.
At King’s College at the time of this story, the biophysics group was led by John Randall, whose deputy was the New Zealand-born biophysicist Maurice Wilkins. In 1951 Franklin joined the department on a 3-year fellowship, having come from Paris where she used x-ray diffraction to study the structure of coal. By this time Wilkins had been working on DNA since 1948. A personality clash arose between the more assertive Franklin and the less confrontational Wilkins, so Randall divided certain DNA samples between them. Franklin received a highly purified calf thymus-derived sample from the Swiss chemist Rudolf Signer at the University of Bern, and Wilkins received a “poorer sample” from Erwin Chargaff at Columbia University in New York City.
Sidebar to the sidebar.
Chargaff had become interested in DNA after Oswald Avery at the Rockefeller Institute published a paper in 1944 concluding “The evidence presented supports the belief that a nucleic acid of the desoxyribose type is the fundamental unit of the transforming principle of Pneumococcus Type III“. Avery and colleagues had developed the first immune serum for a strain of pneumococcus from the blood of horses. Along with his colleague Michael Heidelberger, Avery found that the polysaccharides isolated from the water-soluble spherical capsules around this strain of cocci are antigens, a finding that later led Heidelberger to discover that antibodies are proteins. These are now fundamental facts in molecular biology and immunology.
Back to the double helix
Wilkins had earlier discovered that there were two forms of the DNA in solution – the crystalline A form and the paracrystalline B form. Franklin took the A form and Wilkins the B form. Franklin discovered that the A form will convert to the B form in higher humidity and revert to the A form in lower humidity.
Photo 51. The x-ray diffraction pattern of the “B” form of DNA, taken by Raymond Gosling while working under Wilkins. Source: Wikipedia
Unfortunately for Franklin, who had taken the A form, it is Wilkins’ B form that is found in the cell.
From a biochemical standpoint, mutation of DNA sequences makes chemical sense and today is routinely observed in DNA assays. An increasing number of diseases or characteristics are linked to distinct mutations in DNA. Deoxyribonucleic (or desoxyribonucleic) acid (DNA) is a chemical substance that, like all chemicals, is susceptible to its chemical environment and whatever particular substances happen to be nearby or to ultraviolet or ionizing radiation or to highly reactive chemical species like free radicals.
Left to right: A, B and Z DNA structures. Image from Wikipedia. Note the difference between Franklin’s A-DNA and Wilkins’ B-DNA. They differ by the extent of hydration.
Biochemical Evolution
Charles Darwin is renowned for his well-articulated, evidence-based argument on the evolution of species by natural selection. He courageously introduced a detailed new theory within the conservative British scientific community. His ideas fascinated many leading naturalists of the time, who adopted and furthered the theory. Truly, it marked a considerable progression for that period.
“But but but, it’s just a theory!” This is a common objection by creationists and religious zealots thinking they have found the weak underbelly of evolution. They claim it is “just” a theory as though a theory were merely a fanciful excursion of the imagination where all opinions are of equal validity.
A theory is an overarching explanation or model subject to improvement over time with which arguments are made in support of or against a core concept. This core concept is initially built on a pedestal of clay. As better analysis and experimental data come in, the pedestal is strengthened or weakened. Furthermore, the theory may be unequally affected across its breadth with some aspects perhaps tossed out and others supported. Theories themselves evolve and strengthen with evidence. Scientists are naturally anxious to contribute to sorting out the truth of a theory.
Another objection to evolution is the previously stated notion that “creation implies the existence of a creator.” A common argument is that a watch is such an unlikely collection of highly refined components that there must be a watchmaker behind it. The human hand or eye are the anatomical examples often cited. There is a bit of vocabulary that muddles these arguments. The use of the word “creation” presupposes that the universe is something that was assembled by a creator. If you see the world as something that had to have been created, then the idea of evolution may be difficult to swallow.
The question of life on Earth has two important aspects to it. One is the evolution or change that species undergo over time. The other is the initiation, or abiogenesis, of life from non-living matter. Of the two areas, evolution is the more developed concept.
Biochemists and molecular biologists have taken Darwin’s evolution from a mid-19th century macroscopic theory supported by the fossil record, geological observations and the gross anatomy of animal species from around the world to the submicroscopic machinations of molecules. This has been a gigantic leap forward in understanding not just in the chemistry of all life but also the evolutionary physicochemical mechanisms of life. Life is one of the things that chemicals can do given opportunity and time.
Christian and other churches reacted negatively to evolution in Darwin’s time, as most do today. Darwin and many geologists concluded that the Earth was far, far older than did scholars within the church. So, what is this about? Are church leaders skeptical or just stubborn? Is this even a good question?
Types of thinking
I would offer that people can be spread between two bookends in regard to thinking: devotional thinkers and analytical thinkers. Devotional thinkers have a core doctrine supporting their beliefs, and they think and behave in the way that belief guides them. Devotional thinkers study their doctrines in an effort to be in better alignment with them. It is not uncommon for devotional thinkers to limit their exposure to things not aligned with their devotion. Devotional thinkers are sometimes labeled faithful. Their goal is to study supernatural doctrine and align their personal behavior with it.
Analytical thinkers will naturally adopt a baseline worldview that comports with their education, observations and logical sensibilities. But when presented with new data or just a compelling idea, analytical thinkers may be persuaded to open new vistas in their thinking or at least set the idea aside as new thinking under consideration. Analytical thinkers are sometimes labeled as skeptics.
It is impractical to approach each new circumstance one encounters ab initio. In order to explain how an airplane flies, it is not necessary to first independently derive Newton’s laws of motion and Prandtl’s fluid dynamics so as to set the stage for lift and drag forces. Everyone has a practical baseline picture of the world that serves as a conceptual starting point for some kind of conclusions on reality. The discipline of science is highly vertical, with old knowledge built upon or revised by new knowledge. The requirement for accuracy is practiced by the investigator and checked by peer reviewers. Nobody wants to be that scientist who has published a paper with faulty science requiring a retraction in the Retracta Acta.
To be skeptical of the evolution of the species is on one hand to require supporting evidence and compelling arguments. On the other hand, many people dismiss evolution altogether as being contrary to their faith-based notions while posturing as an “evolution skeptic.” However, when physical evidence or collected data are thrown on the table for all to examine and when that evidence is part of a trail of evidence logically or mechanistically interconnected, then to dismiss the logic or measurements is to go beyond skepticism. Apologists would claim that they are keeping their faith against adverse influence or even resisting evil. But standing against evidence could really be considered simple stubbornness for fear of perceived divine consequences or discomfort.
DNA and RNA are polymeric substances, each composed of four major subunits. Three of the bases – adenine, guanine and cytosine – are shared by both DNA and RNA, while the fourth differs: thymine in DNA and uracil in RNA. Arriving at the correct structures and chemical mechanisms took some time.
Graphic by Arnold Ziffel.
Chemicals interact by particular mechanisms depending on what is present and on physical conditions like temperature, pressure or interfering substances. These mechanisms are a built-in, reliable feature of matter in our universe. When multiple mechanisms are possible, the fastest one tends to prevail, channeling matter down that pathway; the fastest channel is the one with the lowest energy barrier to cross. If the reverse reaction is not possible, matter preferentially accumulates on the far side of that lowest barrier and we say the reaction is under kinetic control. If the reaction can go both ways, a balance is struck between the two reservoirs of substances on either side of the energy barrier. This is the basis for thermodynamic equilibrium, and the outcome is under thermodynamic control.
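To make the kinetic-control point concrete, here is a minimal sketch using the Arrhenius equation, k = A·exp(−Ea/RT). The barrier heights and pre-exponential factor below are invented for illustration; the point is how strongly a modest barrier difference favors one channel.

```python
import math

def rate_constant(A, Ea_kJ, T=298.15):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    R = 8.314e-3  # gas constant in kJ/(mol*K)
    return A * math.exp(-Ea_kJ / (R * T))

# Two hypothetical channels competing for the same reactant,
# identical pre-exponential factors, barriers differing by 10 kJ/mol.
k_fast = rate_constant(A=1e13, Ea_kJ=50.0)
k_slow = rate_constant(A=1e13, Ea_kJ=60.0)

print(f"fast/slow rate ratio: {k_fast / k_slow:.0f}")
```

A barrier difference of only 10 kJ/mol makes the lower-barrier channel roughly fifty times faster at room temperature, which is why the fastest channel consumes the shared reactant and its product dominates the mixture.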
One argument offered by Creationists is that the probability of all the atoms in a human coming together to form that human is 1 in 10 raised to some stupidly large power. In other words, they say, highly improbable within the age of the universe. And if that were how it works, then I’d agree. But it is definitely not how chemistry and evolution work.
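A toy program makes the difference between one-shot assembly and cumulative selection vivid. This is a variant of Richard Dawkins’ well-known “weasel” demonstration, not a model of real biochemistry: hitting a 28-character string in a single random draw has odds of 1 in 27^28 (roughly 10^40), yet selection that simply keeps the best mutant each generation finds it almost immediately.

```python
import random

random.seed(1)  # reproducible toy run
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(parent, rate=0.05):
    """Copy the parent, randomizing each character with small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(candidate):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

# Start from a random string; each generation keep the best of the
# parent plus 100 mutant offspring.
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while current != TARGET:
    generation += 1
    current = max([current] + [mutate(current) for _ in range(100)], key=score)

print(generation)  # converges in well under a thousand generations
```

The mutation rate, the population of 100 and the target string are arbitrary choices for the demonstration; the lesson is that selection acting on each small step collapses an astronomically improbable outcome into a quick one.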
Evolution happens by a biochemical ensemble of mechanisms in solvent water, constrained by the boundaries of chemistry and physics and specifically by what is possible in aqueous media. Liquid water is necessary, rather than solid or crystalline phase water, because for bio- or any chemistry to operate, molecules have to diffuse around and collide in order to react. Biochemistry, and therefore evolution, occurs at temperatures between roughly −10 °C and 45 °C, plus or minus a bit, and at midrange pH levels.
Life is substantially based on carbon because carbon forms stable chemical bonds with nitrogen, oxygen, sulfur, hydrogen and especially with itself. Phosphorus appears as phosphate. Carbon can form chains of indefinite length and 3, 4, 5, 6, 7 and 8-membered rings or larger, with 5 and 6 being the most common ring sizes. Tens of millions of different chemical structures are possible within this group of elements. Nature is crammed with ring systems in natural products.
A line drawing and 3D rendering of Taxol or Paclitaxel. Silicon does not do this. Image: Wikipedia.
Biochemistry is not based on silicon, even though silicon has certain chemical similarities to carbon. Silicon does not easily form chains greater than 2 silicon atoms in length and it has a strong affinity to oxygen. This affinity is very much thermodynamic in nature and is difficult to overcome chemically at biological temperatures and pH. Silicon-nitrogen bonds are hydrolytically unstable at low to moderate pH. All of this adds up to poor utility for silicon in biomolecules.
Each of these carbon-nitrogen, carbon-oxygen, carbon-phosphate, carbon-sulfur, carbon-hydrogen and carbon-carbon bond combinations, as well as the various combinations of N, O, P, S and H atoms, has its own variations as well. Other atoms like iron, calcium, sodium, potassium, magnesium, chromium, selenium, iodine, and a few others generally serve purposes other than forming molecular skeletons.
The point of citing all of these combinations of atoms is to emphasize that each has unique chemical properties and unique reactivities. The slapdash Creationist assertion is that evolution merely brings atoms together to form an organism, with no consideration of reactivity at all. While molecules in solution are more or less randomly banging around, their entry points for successful chemical interactions are far from completely random. In fact, a given molecule will react in only a few ways depending on what it collides with. A complex molecule like glucose has several reactive sites, but it still has a limited menu of reaction types available under physiological conditions.
Molecules are tiny objects with even smaller features where changes can happen. These features, called ‘functional groups’, can undergo only certain kinds of chemical reactions; each is restricted to a small set of reaction mechanisms, often only one.
Think of each of these limited types of reactions as a channel. Overall, biochemical transformations happen through these particular channels. There may be numerous channels possible on a molecule affording diverse reactive outcomes. Even among the possible transformations, a few channels will react faster and thus dominate. The point is that there are not an infinite number of ways that molecules can exhibit reactivity. This means that evolutionary change through biochemical mechanisms does not have an infinite set of likely chemical pathways. There can be many, to be sure. But the entire ensemble of biochemical mechanisms operating in an organism do not have to change to allow a given evolutionary change.
Evolutionary changes can occur in very subtle ways. A biochemical modification may result from a misreading of the normal genetic code or from some other off-normal situation, but this is not an evolutionary change. A genetic change can result from an alteration in the sequence of the genetic code itself. A change in the sequence of the DNA may lead to a heritable mutation if the change is survivable. A mutation in an unused stretch of DNA may occur and lead to no effect. If the genetic change is fatal to the new cell, cell death can occur, and the mutation will not be passed forward.
Cosmic ray showers. Image: NASA. High energy cosmic rays impinge on the upper atmosphere and collide with air molecules, causing nuclear reactions that result in showers of nuclear particles like muons. Most of us do not realize that we have muons in our lives, but there we are.
Energetic cosmic radiation from outer space and the sun is constantly showering the Earth’s upper atmosphere. When a particle or a photon from space impacts a person, it penetrates to some depth, dumping kinetic energy into tissues that can break chemical bonds to form ion pairs or radical species. Ion pairs can reconnect as before or with other species to form new substances. Radicals form when the electrons of a lone pair or covalent bond are split evenly into two species, each electrically neutral but with an unfilled octet. Radicals tend to quench themselves by abstracting a hydrogen atom from a nearby molecule or by recombining with the radical from which they were originally separated.
Radiation exposure of living tissue or other material objects produces ‘stochastic’ damage because the kinetic energy of the radiation particle or photon far exceeds the energy needed to cause bond breakage or general scrambling of biomolecules. Stochastic radiation damage is fairly unselective so, as the radiation passes through materials, the energetic particle dumps some or all of its energy into the material directly along its path of movement.
One measure of the potency of a given particle or photon of radiation is the number of ion pairs produced per inch or centimeter. The three major types of radiation are alpha, beta and gamma. Alpha particles produce the most ion pairs because they dump their considerable kinetic energy along a very short path. Beta particles can travel a bit farther (several inches) but require less shielding than gamma rays. While gamma rays are quite penetrating, their ion pair production is very low.
Any given human exposure to radiation can result in no observable effect or tissue damage. Not every exposure to radiation will result in cancer. Because radiation damage to tissues is stochastic, a survivable mutation of DNA is random.
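Because the damage is stochastic, a simple Poisson model captures the “not every exposure causes harm” point. The mean event counts below are made-up illustration values, not dosimetry:

```python
import math

def p_at_least_one_hit(mean_hits):
    """Poisson model: P(at least one damaging event) = 1 - exp(-mean)."""
    return 1.0 - math.exp(-mean_hits)

# Assumed mean numbers of damaging events per cell, for illustration only.
for mean in (0.01, 0.1, 1.0):
    print(mean, round(p_at_least_one_hit(mean), 4))
```

With a mean of 0.01 damaging events per cell, about 99% of cells escape untouched; even with a mean of 1, about 37% still do. Whether any given exposure produces a mutation is, in this sense, a roll of the dice.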
Enzymes are proteins that act as catalysts, or enablers, of chemical transformations. Along their exterior, enzymes have clefts, ridges and valleys where other molecules or coenzymes can collide and interact; the region where catalysis occurs is called the active site. An enzyme may be continuously active, catalyzing a transformation whenever the right substrate molecule jostles along, or it may require ‘activation’ in order to function. Activation can come from phosphorylation of the substrate to be acted upon, from phosphorylation of the enzyme itself, or from an external molecule binding to a particular spot and altering the overall shape of the enzyme. This alteration in the enzyme’s shape can open the active site and allow the intimate contact necessary for a particular molecule to diffuse in, bind and undergo a catalyzed transformation. By the same kind of mechanism, an enzyme can be deactivated as well.
Much new drug discovery and design is based on toggling an enzyme “off or on” with a suitable substrate. The substrate can be constructed so as to be highly specific, to be detachable, or to sacrifice itself by covalently connecting to the enzyme and preventing it from further functioning. This last category is sometimes referred to as a suicide substrate inhibitor. Penicillin is an example of a suicide inhibitor that covalently combines with an enzyme, shutting it down permanently. Penicillin and its many analogs contain a strained 4-membered ring called a beta lactam, which can relieve the strain by ring-opening to a straight chain as it connects to a feature on the enzyme. This is irreversible, though bacteria have developed enzymes that react with penicillin and eliminate its ability to deactivate the target enzyme.
Enzymes can be very sensitive to a change of one amino acid, which could lead to little change, or it could cause the enzyme to operate a few percent faster or slower. Or it could change the reaction rate or specificity by a great deal. It might even allow different substrates to be acted upon by the enzyme. Let’s say that this change in the operating rate of the mutated enzyme causes a chain of successive biochemical reactions to operate faster, say by increasing the efficiency in the use of energy. If this increases the survival rate of the organism even slightly, it may impart a survival advantage. If the alteration of the enzyme reduces the survival rate, then the organism may or may not continue to survive.
But hold on. Evolution is blind going forward. Not all changes register as an advantage or even show an effect. If the mutation happens after reproduction, then it does not get inherited by succeeding generations. If the mutation is fatal, then the cell dies and the genetic change halts. If the process of evolution is so iffy, how does anything happen?
We should reflect on how fast chemical reactions can happen. At room temperature, water molecules are undergoing collisions at a rate of ~10^10 per second. Now imagine 1 mole of water, 18 grams, with all molecules undergoing ~10^10 collisions per second. One mole of water contains 6.02 x 10^23 water molecules. Simple-mindedly, that adds up to 6.02 x 10^33 collisions per second in those 18 grams, or 1 thirsty swig of water. But that’s not all. These water molecules are also vibrating, rotating and translating, at perhaps 10^12 vibrations per second. In general, each collision will carry a particular probability of a bond breaking or bond forming event. Water is a boring example, but we can see that a reactive biomolecule is also undergoing a very large number of collisions per second, each with a certain chance of participating in a reaction. Even though a given reaction may be of low probability per collision, a great many collisions raise the odds of fruitful interaction.
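The arithmetic above is easy to verify. A short snippet, using the same rough order-of-magnitude inputs as the text plus an assumed per-collision reaction probability, shows how a great many attempts overwhelm a tiny per-attempt probability:

```python
avogadro = 6.02e23               # molecules in one mole (18 g) of water
collisions_per_molecule = 1e10   # collisions per second, rough order of magnitude

total_collisions = avogadro * collisions_per_molecule
print(f"{total_collisions:.2e} collisions per second")  # ~6.02e33

# Even a one-in-10^20 chance per collision yields an enormous event rate.
p_react = 1e-20  # assumed reaction probability per collision, for illustration
print(f"{total_collisions * p_react:.2e} reactive events per second")  # ~6e13
```

The chosen per-collision probability is arbitrary; the point is that multiplying a tiny probability by ~10^33 attempts per second still leaves an astronomical number of productive events.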
Rapid molecular collisions in combination with a limited range of reaction channels means that the molecules will sort themselves out by way of finding the lowest energy-barrier, fastest reaction channel to follow. This is far from completely random. The fastest reaction channel can consume its inputs the fastest and the product from this fast channel will predominate.
Why would there be DNA, protein or other biomolecules that are fragile enough to suffer mutation in the first place? Why hasn’t DNA evolved into a sturdier structure free of mistranslation, mutation and other errors in its functions? There are certainly substances that are more robust than DNA or RNA like hydrocarbon polymers, silicates, urethanes, urea linkers and other polymers that are much more stable to chemical insult. The DNA double helix is, after all, held together by low energy hydrogen bonds.
A key requirement of life as we know it is that something has to prompt DNA to unravel and split the strands of deoxyribonucleic acid apart. In order to unravel, the structure holding it safely in the double helix form must be capable of assembling and coming apart when prompted. The nucleic acid chains of the double helix are joined along their backbones by phosphate linkages. Because phosphoric acid is a weak mineral acid, it can lose one, two or three acid protons (mono-, di- or tribasic) under physiological conditions.
The phosphate linkage in DNA works very well for life. It allows free rotation about the linkage and is quite polar, for good compatibility with water. Phosphates can also form bonds between themselves: one, two or three phosphates can link, leading to short chains of phosphate anhydrides. Because phosphate is relatively stable under physiological conditions yet is able to function, it is nearly ideal for its purpose. Phosphate is phosphorus (V) with 4 oxygens bonded in a tetrahedral fashion. Three of the oxygen atoms have single P-O bonds along with one P=O double bond. When connected as anhydrides, mono-, di- and triphosphates may form. The terminal phosphate group can be displaced from the chain and added to another substrate. This is called phosphorylation and is critical in biochemistry.
Abiogenesis
Abiogenesis is the big puzzle at this point in history. Evolution is not the same as abiogenesis. How did life begin? As we look around at the Earth today, we see an overprinting of billions of years of planetary, geologic, oceanic, and atmospheric transformations. The present world at the surface is nowhere near that world at the time when life began to flicker into existence. One of the primary differences is the chemistry in play. Before oxygen began to accumulate in the atmosphere, many elements may have been exposed at a reduced oxidation state, that is to say, electron rich. The chemistry of an atom greatly depends on the state of its valence electrons – so much so that the atom loses its identity when charged or in molecular form. For instance, H⁺ (protium or hydrogen cation) is chemically different from H⁻ (hydride ion), which is different from H• (atomic hydrogen), and all are different from molecular H₂. Referring to H⁺ simply as hydrogen is incorrect; it is properly referred to as “hydrogen ion” or “protium ion”. The ions are distinct chemical species. This range of possibilities in the state of reactive atoms (other than the noble gases) somewhat complicates the chemistry of prebiotic Earth.
At this point I’ll refer the reader to the InterWebs for deeper insight into abiogenesis.
In a recent YouTube interview, Roger Penrose commented that one of his beefs with quantum mechanics (QM) was that it depends on consciousness collapsing a wave function. He said much more, but this struck me deeply. I have been struggling for years trying to verbalize my own brain-stem level suspicions. (To be clear, a Venn diagram of Penrose and myself would overlap only insofar as we are both English-speaking bipedal mammals.)
He also commented that by “wrong” he means “incomplete”, and recalled that both Einstein and Schrödinger agreed on this; to Penrose, “incomplete” constitutes “wrong”. In another interview he comments that Schrödinger’s Cat was meant to illustrate that the superposition of a cat being both dead and alive was a problem that Schrödinger himself recognized. A cat cannot really be dead and alive simultaneously.
He said that while QM was about more than the evolution of a quantum state, it is also about measurement, but the measurement problem violates the quantum equation. QM gives the probability of a given state which is a superposition of probabilities.
QM is not my specialty, however, I have coursework in it like all chemists have had. In grad school I had quantum chemistry, again like all chem grad students have had, but it nearly did me in. Not having had a semester of diff eq, I was at a distinct disadvantage. Grad school QM goes well beyond the particle in a 1-dimensional box. The course consisted of mathematical derivations of the theory, but not much about the meaning in English. We were supposed to see the equations in their abstract purity and extrapolate to some kind of comfort level with notions our brains could grasp. It was based on the Copenhagen Interpretation of QM. Philosophically, the notion that the wave function would collapse on inspection by a brain was an idea that even today I cannot get past.
Penrose had a similar beef except that he is Roger Penrose and I’m some lesser ape gawping in from up the holler. Still, though I’m doomed to go to my grave with only very rudimentary understanding of QM, the concept of probability density all by itself as well as the spherical harmonics defining atomic orbitals has been a major benefit for my thinking. And for that I’m grateful.
Below is a cut & paste copy of text from Wikipedia outlining Penrose’s ideas-
Penrose’s idea is a type of objective collapse theory. For these theories, the wavefunction is a physical wave, which experiences wave function collapse as a physical process, with observers not having any special role. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. He gives an approximate value for this difference: a Planck mass worth of matter, which he calls the “‘one-graviton’ level”.[1] He then hypothesizes that this energy difference causes the wave function to collapse to a single state, with a probability based on its amplitude in the original wave function, a procedure derived from standard quantum mechanics. Penrose’s “‘one-graviton’ level” criterion forms the basis of his prediction, providing an objective criterion for wave function collapse.[1] Despite the difficulties of specifying this in a rigorous way, he proposes that the basis states into which the collapse takes place are mathematically described by the stationary solutions of the Schrödinger–Newton equation.[4][5] Recent theoretical work indicates an increasingly deep inter-relation between quantum mechanics and gravitation.[6][7]Wikipedia.
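For readers who want the equation behind that last sentence, the Schrödinger–Newton equation referenced in the excerpt couples the usual Schrödinger evolution to a Newtonian gravitational potential sourced by the particle’s own probability density (written here for a single particle of mass m):

```latex
i\hbar \frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi + m\Phi\,\psi ,
\qquad
\nabla^{2}\Phi = 4\pi G\, m\, |\psi|^{2}
```

The self-gravity term mΦψ makes the equation nonlinear in ψ, which is the feature Penrose leans on for an objective, observer-free collapse.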
I just donated to Wikipedia so I don’t feel too bad about this cut & paste. Please donate to Wikipedia. We want to avoid a paywall being put up in front of it.