Category Archives: Chemical Industry

Re-thinking start-up opportunities

It is interesting how one’s perception of opportunities in the world depends on one’s context. I have academic colleagues who are in nanotechnology, for instance. When I have spoken of the apparent dearth of entrepreneurialism in chemistry, the sincere feedback I get is that there are nanotech-related startups out there. You know, I don’t doubt this.

What I was unable to articulate to my friends was that we need people willing to start companies for the manufacture of starting materials and intermediates for less cutting edge applications. I’m afraid that the word “start-up” has come to mean “bleeding edge technology”.

Have you ever tried to source specialty silanes or halogenated hydrocarbons, for instance? The choice of manufacturers in North America is very slim. There are companies in the USA and Canada that manufacture pharma-related materials. But believe it or not, not everyone needs costly cGMP-manufactured feedstocks. You can find suppliers of thousands of varieties of boronic acids, esters, and difluorides. But what if you want an alkyl chloride? In my experience there has been a mass extinction of North American halogenators in the last 10 years.

In the previous five decades, US taxpayers have heavily subsidized US industry by establishing a university research complex residing at many dozens of public and private universities. Several generations of faculty at these institutions have written and been awarded a large number of grants over the decades, and that funding has produced the scientific talent. Some of the graduates have been the children of those whose combined support, via taxed income, paid for the complex. Others, in the form of foreign undergraduates, graduate students, and post-docs, have been invited to come to the US and take advantage of this rich resource.

I, for one, am in support of sharing the scientific knowledge that has been so expensive in time and money. But what we find is that over the decades, the unstoppable advance of civilization has applied the inventions of technology to increase industrial efficiency by reducing the need for labor. Thus, as technology has advanced, the man-hours needed for any given item of commerce have generally declined.

When you combine this natural consequence of invention with a cultural inclination to export industrial production, what you get is a post-industrial civilization that becomes unable to support its previous level of comfort.

The US has been exporting its industrial magic faster than it can adapt to deindustrialization. Whereas in previous times whole cities grew around manufacturing plants, today we have whole cities substantially abandoned and blighted (like parts of Detroit). Public corporation shareholders who have taken full advantage of the rich infrastructure of the USA have pulled up stakes and moved to Mexico or Asia. This article in Forbes is telling.

The combination of automation and overseas outsourcing with the absentee-landlord management of public corporations has triggered a basic instability in our culture. No one really knows how this will play out.

This is what leads me to urge my colleagues out there to consider starting out on your own. It will be hellishly difficult and will consume 5-15 years of your life. I have been a part of several failed startups myself. It is really hard to do. But let me say this: Avoid starting with a one-act pony, and find a way to have something to sell right away. Not all start-ups have to bring a single-item, new technology on stream. Find a niche selling high-value-added, low-volume products. Don’t be intimidated by environmental complications and zoning. You have to put your head down and plow through it. Showing up and some hard-headed persistence count for a lot.

Mercury Processing at the New Idria Smelter

A few Andreas Feininger photos found at the Library of Congress are shown below. The New Idria mine was a productive mine and smelting operation in central California. Note the fellow at the tilted sorting table, physically agitating the mercury from the solid soot and allowing it to run down the table for collection. This is a gravity sorting process. Hard to know what kind of occupational exposure the poor fellow is subject to.

Worker collecting mercury from soot from smelter at New Idria mine, ca 1942. Library of Congress.

Since the early days of the Spanish mercury trade, mercury has been packaged in iron flasks. According to my sources, the 76 lb sizing of the flask was based on what laborers and pack animals could plausibly carry all day. In the picture below, a flask is being filled with mercury at the New Idria smelter.

Mercury Filling Station at New Idria Mercury Smelter, 1942. Photo by Andreas Feininger, Library of Congress.

Cinnabar ore was crushed and then roasted in a rotary kiln. This process not only released the sulfur from the cinnabar (HgS), but also decomposed any oxide and volatilized the mercury. The mercury vapor was knocked down from the exhaust gas in condensers.
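In simplified form, the roasting chemistry amounts to burning off the sulfur as SO2, with the mercury leaving the hot kiln as vapor:

$$\mathrm{HgS + O_2 \;\rightarrow\; Hg\,(g) + SO_2}$$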

Rotary Kiln at the New Idria Mine and Smelter, 1942. Photo by Andreas Feininger, Library of Congress.

Process development with calorimetry

I’ve turned my attention to reaction calorimetry recently. A reaction calorimeter (e.g., the Mettler-Toledo RC1) is an apparatus constructed to allow the reaction of chemical substances while measuring the heat flux evolved. Reaction masses may absorb heat energy from the surroundings (endothermic) or may evolve heat energy into the surroundings (exothermic).
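In simplified form (neglecting dosing and loss terms; the symbols are the generic ones for heat-flow calorimetry, not quoted from Mettler-Toledo documentation), the heat balance behind the measurement is

$$q_{rx} \approx U A\,(T_r - T_j) + m_r\, c_p\, \frac{dT_r}{dt}$$

where $UA$ is the jacket heat transfer capability, $T_r$ and $T_j$ are the reactor and jacket temperatures, and the second term accounts for heat accumulating in the reaction mass itself.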

Calorimetry has been around for a very long time. What is relatively recent is the development of instrumentation, sensor, and automation packages that are sufficiently user friendly that RC can plausibly be used by people like me: chemists who are assigned to implement a technique new to the organization. What I mean by “user friendly” is not an instrument that requires the full-time attention of a specialist to operate and maintain it.

A user friendly instrument is one engineered and automated to the extent that as many adjustments as possible are performed by the automation, and the resulting system is robust enough that operational errors and conflicting settings are flagged prior to commencing a run. A dandy graphical user interface is nice too. Click-and-drag has become a normal expectation of users.

An instrument that can be operated on demand by existing staff is an instrument that nullifies the need for specialists. Not good for the employment of chemists, but normal in the eternal march of progress. My impression is that RC is largely performed by dedicated staff in safety departments. What the MT RC1 facilitates is the possibility for R&D groups to absorb this function and bring the chemists closer to the thermal reality of their processes. Administratively, it might make more sense for an outside group to focus on process safety, however.

In industrial chemical manufacture the imperative is the same as for other capitalistic ventures: manufacture the goods with minimal cost inputs at acceptable quality. Reactions that are highly exothermic or are prone to initiation difficulties may pose operational hazards stemming from the release of hazardous energy. A highly exothermic reaction that initiates with difficulty, or at temperatures that shrink the margin of safe control, is a reaction that should be closely studied by RC, ARC, and DSC.

It is generally desirable for a reaction to initiate and propagate under positive administrative and engineering controls. Equally, it is desirable for a reaction to be halted by the application of such controls. Halting or slowing a reaction by adjustment of feed rate or temperature is a common approach. For second order reactions, the careful metering of one reactant into the other (semi-batch) is the most common approach to control of heat evolution.

For first order reactions, control of heat evolution is achieved by control of the concentration of unreacted compound or by brute force management of heating and cooling.
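As a back-of-the-envelope illustration (a minimal sketch with made-up numbers, not data from any real process), compare the peak heat load of an all-in batch charge obeying first-order kinetics with the steady heat load of a dosing-controlled semi-batch feed:

```python
# Minimal sketch, illustrative numbers only (not from any real process):
# peak heat load of an all-in batch charge with first-order kinetics vs.
# the steady heat load of a dosing-controlled semi-batch addition.

import math

dH = -150e3   # J/mol, assumed reaction enthalpy (exothermic)
k  = 5e-4     # 1/s, assumed first-order rate constant at process temperature
n0 = 100.0    # mol, total reactant charge

def q_batch(t):
    """Instantaneous heat output (W) for an all-in charge: |dH|*k*n0*exp(-k*t)."""
    return abs(dH) * k * n0 * math.exp(-k * t)

def q_semibatch(feed_rate):
    """Heat output (W) when dosing-controlled: each mole fed reacts promptly."""
    return abs(dH) * feed_rate

print(f"batch peak (t=0):     {q_batch(0.0)/1e3:.1f} kW")       # 7.5 kW
print(f"semi-batch, 2 h feed: {q_semibatch(n0/7200.0)/1e3:.1f} kW")  # 2.1 kW
```

The batch charge dumps its maximum heat load at t=0, when accumulation is total; the semi-batch feed spreads the same energy over the dosing period.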

Safe operation of chemical processing is about controlling the accumulated energy in the reactor. The accumulated energy is the result of accumulated unreacted compounds. Some reactions can be safely conducted in batch form, meaning that all of the reactants are charged to the reactor at once. At t=0, the accumulation of energy is 100 %. A reliable and properly designed heat exchange system is required for safe operation (see CSB report on T2). In light of T2, a backup cooling system or properly designed venting is advised.

The issue I take with the designers of the process performed at T2 is this: they chose to concentrate the accumulated energy by running the reaction as a batch process. This is a philosophical choice. The reaction could have been run as a semi-batch process by feeding the MeCp to the Na with a condenser on the vessel. Control of the exotherm could have been achieved through the feed rate and clever use of the evaporative endotherm. A properly sized vent with rupture disc should always be used. These are three layers of protection.

Instead, they settled on a batchwise process relying on a now obviously inadequate pressure relief system and on the proper functioning of the cooling water supply to the jacket.

No doubt the operators of the facility were under price and schedule pressure. The MeCp manganese carbonyl compound they were making is an anti-knock additive for automotive fuels and therefore a commodity product. I suspect that their margins were thin and that the resources may not have been there to properly engineer the process. This process has “expedient” written all over it, in my view.

Reactions that have a latent period prior to noticeable reaction are especially tricky. Often such reactions can be rendered more reliable by operation at higher temperatures. Running exothermic reactions at elevated temperatures is somewhat counter-intuitive, but it can solve the problem of accumulation.

Disclaimer: The opinions expressed by Th’ Gaussling are his own and do not necessarily represent those of employers past or present (or future).

Power Generation with Mercury Turbine

The South Meadow generating station was operated by the Hartford Electric Company in Hartford, CT. The unit described in the 1931 Pop Sci article used 90 tons of mercury in the boiler. The article states that the South Meadow generator produced as much as 143 kWh from 100 lbs of coal, as opposed to an average of 59 kWh from conventional coal-fired plants and 112 kWh from exceptionally efficient coal-fired plants. The article also describes an incident at the plant in which an explosion in the mercury vapor system breached containment, releasing mercury and exposing workers to mercury vapor.
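As a rough sanity check on those figures (assuming a bituminous coal heating value of about 13,000 BTU/lb, my assumption rather than a number from the article):

$$100\ \text{lb} \times 13{,}000\ \tfrac{\text{BTU}}{\text{lb}} \approx 1.3\times 10^{6}\ \text{BTU} \approx 380\ \text{kWh (thermal)}$$

$$\eta_{\text{Hg}} \approx \tfrac{143}{380} \approx 38\%, \qquad \eta_{\text{avg}} \approx \tfrac{59}{380} \approx 16\%$$

A combined mercury/steam cycle pushing toward 40% thermal efficiency in 1931 would have been remarkable indeed.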

The Schiller Mercury Power Station in Portsmouth, NH, is described in this link.

Mr. Thiel Speaks

When you look for science news at news aggregation sites like Google News or popular publications like, well, any given magazine or newspaper, or (yawn) any given non-fiction television program, what you are likely to find are fluff pieces on topics related to medicine, automobiles, and telecommunications. To people in the news business, scientific progress means new kinds of medicines, better cars, and the latest (n+1)G cell phone or iPad.

It is possible for even successful people to apply pop-culture metrics to economic theory. For instance, the co-founder of PayPal, Peter Thiel, has written an essay for National Review in which he questions the motives of scientists as well as their ability to maintain the growth of scientific progress.

The state of true science is the key to knowing whether something is truly rotten in the United States. But any such assessment encounters an immediate and almost insuperable challenge. Who can speak about the true health of the ever-expanding universe of human knowledge, given how complex, esoteric, and specialized the many scientific and technological fields have become? When any given field takes half a lifetime of study to master, who can compare and contrast and properly weight the rate of progress in nanotechnology and cryptography and superstring theory and 610 other disciplines? Indeed, how do we even know whether the so-called scientists are not just lawmakers and politicians in disguise (italics mine), as some conservatives suspect in fields as disparate as climate change, evolutionary biology, and embryonic-stem-cell research, and as I have come to suspect in almost all fields?

The article goes on to paint a picture of failure on the part of the scientific community for not coming up with a Moore’s law style of continuous bounty for the consumer.

Here is where I greatly disagree with Thiel. He cites the stagnation of wages as an indicator of weak economic progress which, in turn, he takes as an indicator of tepid technological progress.

Let us now try to tackle this very thorny measurement problem from a very different angle. If meaningful scientific and technological progress occurs, then we reasonably would expect greater economic prosperity (though this may be offset by other factors). And also in reverse: If economic gains, as measured by certain key indicators, have been limited or nonexistent, then perhaps so has scientific and technological progress. Therefore, to the extent that economic growth is easier to quantify than scientific or technological progress, economic numbers will contain indirect but important clues to our larger investigation.

… Taken at face value, the economic numbers suggest that the notion of breathtaking and across-the-board progress is far from the mark. If one believes the economic data, then one must reject the optimism of the scientific establishment (italics mine).  Peter Thiel, National Review.

This is where Thiel drives into the weeds. He conflates stagnant wages in the post-Vietnam era with a failure of science and technology to produce the kinds of advances he would recognize as worthy.

What is lost on Thiel is the fact that stagnant wages are a kind of benefit to employers and investors as a result of technology. Over this so-called period of stagnation in wages there has been a complementary increase in productivity. If anything, the improvements in technology unseen by Thiel and his ilk have been applied to render human labor obsolete, thereby sustaining profits. China hasn’t gotten all the American jobs. Machines have taken over much of the work.

The fact that Thiel scans the horizon from his perch and fails to see this is indicative of a kind of blindness of prosperity. In his world, technology is the internet. Apparently, people like Thiel only register scientific progress as a stream of shiny new consumer electronics, supersonic transport, or brain transplants. The advances in science and technology from the last 20 years are everywhere, not necessarily just in internet technology, cell phones, and Viagra.

Semiconductor technology is now well below the micron scale and heading to the tens of nanometers.  Bits of data are heading toward tens to hundreds of electrons per bit.  Lithographic fabrication at this scale allows for rules of thumb like Moore’s Law.  Growth in component density can multiply parabolically or more as greater  acreage of chip surface is consumed in 3 dimensions. Many doublings are possible in this domain.

But parabolic growth in aircraft or land vehicle speed is limited by other physics. A dynamic range of only a few factors of ten in vehicle speed is economically feasible. Fossil fuels are fantastically well suited for use in transportation owing to their high energy density, low cost per kilojoule, and ability to flow through pipes. Fundamentally new forms of energy storage are hard to find and are expensive. All energy usage is consumption. Science can only go so far in facilitating better forms of consumption for the profligate. Doing work against gravity also consumes lots of energy, so the world of George Jetson never became feasible.

Ordinary automobiles that comprise a part of the stagnancy Thiel bemoans are coated in highly advanced polymer coatings made from specialty monomers, catalysts, and initiators. The polymeric mechanical assemblies are highly engineered, as is the robotic assembly of the vehicle. The implementation of automation in the manufacture of plain old cars is just a part of the overall issue of low job growth. In this case, technological advancement => stagnant growth in wages and employment.

Octopole and Quadrupole

Busy week learning to use the new ICPMS. Pretty flippin’ amazing instrument. Reaffirms my admiration for Bill and Dave. A lot of nuances and software to learn, but do-able.  Agricola and Biringuccio could’ve used one of these. Of course, they’d have needed 208 VAC single phase power …

Interesting approach to polyatomic ion interferences: run the beam through a He collision chamber to slow down the large cross-section ions, and use the octopole to steer the beam into the off-axis chamber exit and on into the quadrupole mass filter. Clever monkeys.

There is No Slam Dunk

Every day I’m reminded that there is no slam dunk in business. Everything is hard work and perseverance. Even apparently simple things are fraught with complications and layers of nuance.  The great appeal of gambling that some find so convincing is that complexity and vexing details have been somehow suspended and a path is clear for the slam dunk. Slam dunks do happen I suppose, but over time the slams outnumber the dunks.

In chemical manufacturing, there are no trivial operations. Every step in the manufacturing sequence requires thought and infrastructure. Even filling drums with water and shipping them out has complications: quality control, portion control, container quality, inventory control, purchasing, pallets and dunnage. Then there is the matter of receiving & shipping, accounts payable and receivable, auditing, taxes, sales and marketing, and all of the other overhead that goes with operating an above-the-board business operation. Then there is the matter of managing a staff and all of the HR delights that go along with that.

Now imagine if you were manufacturing hazardous or controlled substances. Suddenly, your staff are partitioned into those who work with hazardous materials and those who do not. Those who do need a steady supply of personal protective equipment (PPE) as well as lots of documented training programs to operate in hazardous environments. They’ll need physical exams, coats, gloves, boots, eye protection, and respirators with annual training. A smart employer will have the piss wagon come by now and then looking for drug use.

Let’s say that you want to replace a process solvent. You want to replace ether with toluene. In order to do this, you’ll have to validate the process in R&D for scale-up. The process change will have to go through some kind of stage-gate process to validate the benefit of the change. Some process changes must also be approved by the customer. Woe to him who wants to make such a change in the cGMP or military chemicals world. Developing a perpetual motion machine may be easier.

Process changes will alter the material streams in your facility. This may trigger PSM protocols that will have to play out on their own schedule. Or it may trigger environmental permits or LVE limits under TSCA.

Process changes may also alter quality or safety margins that you had been relying on without knowing it. This often occurs when a company tries to intensify a process. Suddenly the process is generating more watts per kg of reaction mass than before. Or all of a sudden the reaction mass doesn’t filter well, or the pot residence time during distillation is deleterious at the higher concentration or with the higher-boiling composition. All changes have a downside. These are some of them. There is no slam dunk.

Gold Rush Alaska. Getting the pay out of paydirt.

So I’ll come clean. I am a fan of Gold Rush Alaska on the Discovery Channel. The new season has started with some serious twists. What I like about the show is the technical side. The miners are struggling with serious mechanical problems and difficult issues with unit operations in placer mining. This is what made life precarious for the gold rush miners of the 19th century and is certainly what caused many to return home empty handed. 

Getting to the pay streak, conveying the ore from the pit, moving it to the sluicing equipment, and getting the fines to run over the riffles of the sluice properly require a great deal of energy input. The remoteness of the site, the high cost of heavy equipment, and wrestling with faulty equipment all contribute to the difficulty of getting the pay out of paydirt. The mining season is about 100 days in duration. That is 2400 hours. Every hour must be used to maximum effect.

This season there is a bad guy. This guy, Dakota Fred, tips over the apple cart. So, the boys are heading to the Klondike.  But first they need a claim to work in a time of record gold prices and intense activity in the mines. I love the vicarious life.

On the release of hazardous energy

What should you do if a raw material for a process is explosive? Good question. The fact that a material has explosive properties does not automatically disqualify it for use. To use it safely you must accumulate some information on the type and magnitude of stimulus required to give a hazardous release of energy.

But first, some comments on the release of hazardous energy. Hazardous energy is energy which, if released in an uncontrolled way, can result in harm to people or equipment. This energy may be stored in the form of mechanical strain of the sort found in a compressed spring, a tank of compressed gas, the unstable chemical bonds of an explosive material, or an explosive mixture of air and fuel. A good old-fashioned pool fire is a release of hazardous energy as well. The radiant energy from a pool fire can easily and rapidly heat nearby materials past their ignition point.

Accumulating and applying energy in large quantities is common and actually necessary in many essential activities. In chemical processing, heat energy may be applied to chemical reactions. Commonly, heat is also released from chemical reactions at some level ranging from minimal to large. The rate of heat evolution in common chemical condensation or metathesis reactions can be simply and reliably managed by controlling the rate of addition of reactants where two reactants are necessary.

There are explosive materials and there are explosive conditions. If one places the components of the fire triangle into a confined space, what may have been conditions for simple flammability in open air are now the makings of an explosion. Heat and increasing pressure will apply PV work to the containment. In confinement, the initiation of combustion may accelerate to deflagration or detonation. The outcome will, at minimum, be an overpressure with containment failure. If the contents are capable of accelerating from deflagration to detonation, then loss of containment may involve catastrophic failure of mechanical components.

Rate control of substances that autodecompose or otherwise break into multiple fragments is a bit more tricky. This is the reaction realm of explosives. The energy output is governed by the mathematics of first order kinetics, at least to some level of approximation. In first order kinetics, the rate of reaction depends on the rate constant and the concentration of a single reactant. Regarding the control of reactions that are approximately first order in nature, some thought should be given to limiting the reaction mass size to that which is controllable with the available reactor utilities. A determination of the adiabatic ΔT will tell you whether the reaction can self-heat past the bp of your solvent system.
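A minimal sketch of that arithmetic, with illustrative numbers only: the adiabatic temperature rise follows from the heat of reaction, the charge concentration, and the heat capacity of the mass, and the resulting maximum temperature can be compared to the solvent boiling point:

```python
# Minimal sketch, assumed numbers only: estimate the adiabatic temperature
# rise (dT_ad) and the maximum temperature of the synthesis reaction (MTSR),
# then compare against the solvent boiling point.

dH_rxn = -150e3   # J/mol, assumed heat of reaction
conc   = 1.5      # mol per kg of reaction mass, assumed charge concentration
cp     = 1.8e3    # J/(kg*K), assumed specific heat of the reaction mass

dT_ad = abs(dH_rxn) * conc / cp   # K, adiabatic temperature rise

T_proc, T_bp = 25.0, 110.0        # degC, assumed process temp and solvent bp
mtsr = T_proc + dT_ad

print(f"adiabatic dT: {dT_ad:.0f} K")            # 125 K
print(f"MTSR: {mtsr:.0f} C vs solvent bp {T_bp:.0f} C")
# If MTSR exceeds the solvent bp, a runaway drives the mass to reflux (or to
# pressure buildup in a closed vessel) before any decomposition even starts.
```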

There is a particular type of explosive behavior called detonation. Detonation is a variety of explosive behavior that is characterized by the generation and propagation of a high velocity shock through a material. A shock is a high velocity compression wave which begins at the point of initiation and propagates throughout the bulk mass.  Because it is a wave, it can be manipulated somewhat. This is the basis for explosive lensing and shaped charges.

Detonable materials may be subject to geometry constraints that limit the propagation of the shock. A cylinder of explosive material may or may not propagate a detonation wave depending on its diameter. Some materials are relatively insensitive to shape and thickness. A film of nitroglycerin will easily propagate a detonation, as will a slender filling of PETN in detcord. But these compounds are for munitions makers, not custom or fine chemical manufacturers. The point is that explosibility and detonability are rather more complex than you might realize. Therefore, it is important to do a variety of tests on a material suspected of explosibility.

A characteristic of high order explosives is the ability to propagate a shock across the bulk of the explosive material. However, this ability may depend upon the geometry of the material, the shock velocity, and the purity of the explosive itself. There are other parameters as well. Marginally detonable materials may lose critical energy if the shape of the charge provides enough surface area for energy loss. The point is that “explosion” and “detonation” are not quite synonymous, and care must be exercised in their use. The word “detonation” confers attributes that are unique to that phenomenon.

Explosive substances have functional groups that are the locus of their explosibility. A functional group related to the onset of explosive behavior, called an explosiphore (or explosaphore), is needed to give a molecule explosibility beyond the fuel-air variety. Obvious explosiphores include azide, nitro, nitroesters, nitrate salts, perchlorates, fulminates, diazo compounds, peroxides, picrates and styphnates, and hydrazine moieties. Another explosiphore is the hydroxylamino group. HOBt (hydroxybenzotriazole) has injured people, destroyed reactors, and caused serious damage to facilities. Hydroxylamine has been the source of a few plant explosions as well. It is possible to run a process for years and never cross the line to runaway.

Let’s go back to the original question of this essay. What do you do if you find that a raw material or a product is explosive? The first thing to do is collect all available information on the properties of the substance. In a business organization, upper management must be engaged immediately, since handling such materials involves the assumption of a risk profile beyond what is normally expected.

At this point, an evaluation must be made of the value of the product in your business model vs the magnitude of the risk. Dow’s Fire and Explosion Index is one place to start. This methodology attempts to quantify and weight the risks of a particular scenario. A range of numbers is possible, and a ranking of risk magnitude can be obtained thereby. It is then possible to compare the risk ranking to a risk policy schedule generated beforehand by management. The intent is to quantify the risk against a scale already settled upon, for easier decision making.
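In skeletal form, the index is a material factor scaled by process hazard factors. The sketch below follows the general published structure of the F&EI; the penalty values are placeholders of my own, not figures from the Dow guide:

```python
# Skeletal sketch of the Dow F&EI bookkeeping (structure per the published
# method; the penalty values below are illustrative placeholders, not
# numbers from the actual Dow guide).

MF = 24.0   # material factor for the substance, taken from the Dow tables

general_penalties = [0.30, 0.50]   # e.g., exothermic reaction, enclosed unit
special_penalties = [0.35, 0.20]   # e.g., quantity in process, pressure

F1 = 1.0 + sum(general_penalties)  # general process hazards factor
F2 = 1.0 + sum(special_penalties)  # special process hazards factor

FEI = MF * F1 * F2
print(f"F&EI = {FEI:.0f}")  # compare against the management risk schedule
```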

But even before such a risk ranking can be made, it is necessary to understand the type and magnitude of stimulus needed to elicit a release of hazardous energy. A good place to start is with a DSC thermogram and a TGA profile. These are easy and relatively inexpensive. A DSC thermogram will indicate onset temperature and energy release data as a first pass. Low onset temperature and high energy release is least desirable. High onset temperature and low exothermicity is most desirable.

What is more difficult to come to a decision point on is the scenario where there is relatively high temperature onset and high exothermicity.  Inevitably, the argument will be made that operating temperatures will be far below the onset temp and that a hazardous condition may be avoided by simply putting controls on processing temperatures. While there is some truth to this, here is where we find that simple DSC data is inadequate for validating safe operating conditions.

Onset temperatures are not inherent physical properties. Onset temperatures are kinetic epiphenomena that are dependent on sample quality, the Cp of both the sample and the crucible, and the rate of temperature rise. What is needed, once high energy release is indicated by the DSC, is a time to maximum rate (TMR) determination. TMR data may be calculated with special techniques in the DSC (e.g., AKTS software) from 4 DSC scans at different rates, or it may be determined from Accelerating Rate Calorimetry, or ARC testing. ARC testing gives time, temperature, and pressure profiles that DSC cannot give and, in my mind, is the more information-rich choice of the two approaches. ARC also gives a useful indication of non-classical liquid/vapor behavior. ARC testing can indicate the generation of non-condensable gases in the decomposition profile, which is good to know.
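For the multi-rate DSC route, a common starting point is a Kissinger-type analysis: the slope of ln(β/Tp²) versus 1/Tp across scans at several heating rates yields the activation energy. A minimal sketch with invented peak temperatures (not data from any real compound):

```python
# Minimal sketch, invented data: extract an activation energy from DSC peak
# temperatures at several heating rates via the Kissinger relation,
#   ln(beta / Tp**2) = const - Ea/(R*Tp)

import math

R = 8.314  # J/(mol*K)

# assumed (heating rate in K/min, exotherm peak temperature in K) pairs
scans = [(2.0, 468.0), (5.0, 478.0), (10.0, 486.0), (20.0, 494.0)]

xs = [1.0 / Tp for _, Tp in scans]
ys = [math.log((beta / 60.0) / Tp**2) for beta, Tp in scans]  # beta in K/s

# least-squares slope of ys vs xs
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)

Ea = -slope * R
print(f"Ea ~ {Ea/1e3:.0f} kJ/mol")   # roughly 160 kJ/mol for this fake data
```

With Ea in hand, the time to maximum rate under adiabatic conditions can be estimated from expressions of the Townsend-Tou form, TMRad ≈ cp·R·T²/(q·Ea), where q is the specific heat release rate at temperature T.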

Another set of tests that indicate sensitivity to stimulus is the standard test protocol for DOT classification. Several companies do this testing and rating. There are levels of testing applied based on what the lower series tests show. Series 1 and 2 are the minimum that can be done to flesh out the effects of basic stimuli. What you get from the results of Series 1, 2, and 3 is a general indication of explosibility and detonability, as well as sensitivity to impact and friction. In addition, tests for sensitivity to electric discharge and dust explosibility should be performed as well.

The gap test, Koenen test, and time-pressure test will give a good picture of the ability to detonate, and whether or not any explosibility requires confinement. The Koenen test indicates whether or not extreme heating can cause decomposition to accelerate into an explosion sufficient to fragment a container with a hole in it.

BOM or BAM impact testing will indicate sensitivity to impact stimulus. Friction testing gives threshold data for friction sensitivity.

ESD sensitivity testing gives threshold data for visible effects of static discharge on the test material. Positive results include discoloration, smoking, flame, explosive report, etc.

Once the data is in hand, it is necessary to sift through it and make some determinations. There is rarely a clear line on the ground to indicate what to do. The real question for the company is whether or not the risk of processing the material is worth the reward. Everyone will have an opinion.

The key activity is to consider where in the process an unsafe stimulus may be applied to the material. If it is thermally sensitive in the range of heating utilities, then layers of protection guarding against overheating must be put in place. Layers of protection should include multiple engineering and administrative layers.  Every layer is like a piece of Swiss cheese. The idea is to prevent the holes in the cheese from aligning.

If the material is impact or friction sensitive, then measures to guard against these stimuli must be put in place. For solids handling, this can be problematic. It might be that preparing the material as a solution is needed for minimum solids handling.

If the material is detonable, then all forms of stimulus must be guarded against unless you have specific knowledge that indicates otherwise. Furthermore, a safety study on storage should be performed. Segregation of explosible or detonable materials in storage will work towards decoupling of energy transfer during an incident. By segregating such materials, it is possible to minimize the adverse effects of fire and explosion on the rest of the facility.

With explosive materials, electrostatic safety is very important. All handling of explosible solids should involve provisions for suppression of static energy. A discharge of static energy in bulk solid material is a good way to initiate runaway decomposition in an energetic material. This is how a material with a high decomposition temperature by DSC can find sufficient stimulus for an explosion.

Safe practices involving energetic materials require an understanding of the cause and effect of stimulus on the materials themselves. This is of necessity a data- and knowledge-driven activity. Along with ESD energy, handwaving arguments should also be suppressed.

Refractory Problem

Here is an interesting problem. How do you analyze refractory materials? What if you are making materials that could be used as a crucible raw material? How do you digest refractory materials down to homogeneous solutions that themselves need to be contained in something even more refractory?

Obviously, it is done all of the time. Methods like AA, ICP-OES, ICP-MS, GDMS, etc., are all useful for quantitating the elemental makeup of materials. Of the above list, only GDMS can be applied to solid samples. The AA and ICP methodologies require homogeneous solutions. This can be problematic.

X-ray techniques like XRF and XRD are useful for solids characterization as well. Of these two, only XRF is useful in the absence of distinct crystal phases. XRD detects crystal phases and can be used to good end with the crystallographic database that is available for the identification of solid substances. In contrast, XRF (X-ray fluorescence) detects elements easily down to sodium, and lighter elements with a bit more difficulty. Handheld XRF units are available, for the price of a low-end BMW, that will allow the user to point the business end of the unit at a material and identify the elements present.

A useful company to get to know in this arena is Inorganic Ventures. These folks are extremely knowledgeable and supply stock and custom standards for flame and ICP methods. The trick to the analysis and characterization of refractory metal oxides in the categories of RO2, R2O3, and RO is to have reliable standards on hand as well as a choice of fluxes. Fluxing at high temperature is often critical to the digestion of refractory oxides. Fluxes are inorganic salts that may be acidic or basic and may or may not be oxidizing.

If you started out as an organikker like me, there will be a period of slight adjustment to the notions of what are regarded as acids and bases at 1000 °C. A flux is a substance that melts and dissolves an inorganic solid, usually through chemical digestion of the material in question. A melt is produced inside a crucible within a muffle furnace. This melt can be poured into a mold to produce a button, or the material may be allowed to solidify in the crucible, followed by aqueous acid dissolution.
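A classic example of an acidic flux at work (cited from memory, so treat it as illustrative rather than authoritative): molten potassium pyrosulfate converts refractory titania into acid-soluble titanyl sulfate,

$$\mathrm{TiO_2 + K_2S_2O_7 \;\longrightarrow\; TiOSO_4 + K_2SO_4}$$

after which the solidified melt leaches readily into dilute sulfuric acid for ICP work.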

In addition to acidic and basic fluxes, there is the matter of melting temperature and the need for a eutectic mixture. A variety of compositions can be prepared to provide a melt temperature suitable for a particular need.  Volatility may be a problem, requiring adjustment of conditions.