Friday, January 25, 2008

Design Space for a Synthesis Reaction

When applying Design Space and QbD, we need to determine i) which process parameters are critical and ii) how to ensure operation within their combined (multi-dimensional) acceptable ranges (the design space). In effect we are looking for the conditions that will give similar results at different scales by maintaining similar values of certain critical variables. This presents development challenges and opportunities that I will illustrate using a synthesis reaction as an example.

Statistical design of experiments might seem the natural approach here, but casual selection of factors to vary in a DOE study can produce a design space in which it is undesirable or even impossible to operate. With that approach the desired scale-up similarity is hoped for rather than designed for. Knowledgeable practitioners take a mechanistic approach; in addition to the genuine process understanding this creates, the corresponding experimental program is more focused and compact. When, in addition, hypotheses to explain the data are tested using a mechanistic model alongside experiments, the experimental program condenses further.

This new way of working, in which trial and error and mystery are replaced by faster, more systematic approaches and deeper process understanding, places new demands on information systems, which will be the subject of other posts in this blog.

Multi-phase reactions make up the majority of synthesis reactions, especially those that are problematic on scale. As an example I will use a slurry-phase hydrogenation, one of the most common steps in API synthesis routes.

As a first step in thinking about this reaction, a process scheme can be used to visualize the roles of chemical kinetics, mixing / mass transfer and heat transfer; in multi-phase systems (here gas-liquid-solid) these rates interact in ways that determine the overall progress of the hydrogenation. Each rate has a rate constant that is a function of other process variables:

- kinetic rate constant k(T) for each reaction
- mass transfer rate constant kLa(N, V), with fixed vessel geometry
- heat transfer rate constant Ua(N, V), with fixed geometry and jacket conditions

where N is agitator speed and V is the volume of the reaction mixture.

In this case the chemistry is a nitrile reduction (requiring 2 moles of H2 per mole of substrate) to make an amine, with a side-reaction between the substrate and the imine intermediate to give an undesired impurity. We see these types of reaction routinely, in which unfavourable conditions lead to excessive impurity levels.

If the impurity level is our CQA for the design space, we want our CPPs to be defined narrowly enough to achieve the target level or better, yet broadly enough to give latitude in how we run the process. Reaction pressure (P) and temperature (T) may, for example, be defined tightly, but for operational flexibility (e.g. limited available vessel sizes) we may prefer not to fix substrate concentration and catalyst loading (a significant cost factor). Note that higher values of any of these variables will make the chemistry faster, and to ensure quality we will need mass transfer (kLa) to keep pace; the acceptable range for kLa therefore depends on our choice of the other operating conditions, and vice versa.
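To make that interdependence concrete, here is a minimal sketch of how the required kLa might be estimated from the other operating choices; the rate law, Henry coefficient and all parameter values are hypothetical placeholders, not fitted values for any real system:

```python
# Minimal sketch: how the required kLa depends on the other operating choices.
# Rate law and all parameter values below are illustrative assumptions,
# not fitted values for any real nitrile hydrogenation.

import numpy as np

R = 8.314  # J/(mol K)

def k_T(T, k_ref=0.05, T_ref=298.15, Ea=50e3):
    """Assumed Arrhenius kinetic constant (hypothetical values)."""
    return k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

def required_kLa(P_bar, T, c_sub, c_cat, h2_fraction=0.8, H=800.0):
    """kLa (1/s) needed to hold dissolved H2 at h2_fraction of saturation.

    Pseudo-steady state: kLa * (c_sat - c_H2) = r_H2 (supply = demand).
    H is an assumed Henry coefficient, bar L/mol (hypothetical value).
    """
    c_sat = P_bar / H                      # mol/L, Henry's law
    c_h2 = h2_fraction * c_sat
    r_h2 = k_T(T) * c_cat * c_sub * c_h2   # assumed rate law, mol/(L s)
    return r_h2 / (c_sat - c_h2)

# Faster chemistry (higher T, catalyst or substrate loading) demands higher kLa:
for T in (298.15, 318.15):
    print(f"T = {T:.0f} K -> kLa >= {required_kLa(5.0, T, 0.5, 0.01):.4f} 1/s")
```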

Finally, heat transfer needs to keep pace with the reaction rate; the acceptable range for Ua depends on all of the previous variables and on the available temperature difference between the reactor and the jacket. Heat transfer is rarely an issue in small-scale lab reactors, where the ratio of surface area to volume ('a' in Ua) is very high; it becomes an issue in larger vessels. To simulate this behaviour, the delta T (between reactor and jacket) at lab scale can be limited to mimic larger-scale cooling rates.
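A rough scale-down calculation shows why: for geometrically similar vessels the jacket area per unit volume falls roughly as V^(-1/3), so the jacket delta T needed to remove the same volumetric heat duty grows with scale. The geometry and numbers below are illustrative assumptions:

```python
# Minimal sketch of why lab cooling must be artificially limited to mimic
# the plant. Geometry and all numbers are illustrative assumptions.

def wetted_area_per_volume(V_m3, a_ref=30.0, V_ref=0.001):
    """Jacket area per liquid volume, m2/m3. For geometrically similar
    vessels a scales as V^(-1/3); a_ref is an assumed 30 m2/m3 at 1 L."""
    return a_ref * (V_ref / V_m3) ** (1.0 / 3.0)

U = 300.0       # W/(m2 K), assumed overall heat transfer coefficient
q_rxn = 20e3    # W/m3, assumed peak heat release of the hydrogenation

for V, label in ((0.001, "1 L lab"), (4.0, "4 m3 plant")):
    a = wetted_area_per_volume(V)
    dT_needed = q_rxn / (U * a)   # jacket-to-contents dT to remove q_rxn
    print(f"{label}: a = {a:.1f} m2/m3, dT needed = {dT_needed:.1f} K")

# With these numbers the lab removes the duty with ~2 K of driving force,
# while the plant needs ~35 K; a scale-representative lab experiment
# therefore caps the jacket dT near the plant-achievable value.
```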

The above mechanistic considerations provide a sound basis for achieving similarity between scales and definition of the required design space.

Without the process scheme and a clear mechanistic approach to this reaction, likely choices of factors in a statistical experimental design are P, T, substrate concentration, catalyst amount, agitator speed (N) and time. Misleading conclusions about the effect of agitator speed can easily be obtained, especially when it is varied over an insensitive range; agitator speed is also an unsuitable CPP as it does not sufficiently characterise agitation. Time is also irrelevant as a factor in this sense: what matters are the relative rates of kinetics, mass transfer and heat transfer. The dissimilarity of lab and plant cooling may not even be considered. A DOE study will frequently vary more than one parameter at a time between experiments, making individual mechanistic effects harder to distinguish. Finally, as the focus is typically on end-point results, too few samples will ordinarily be taken to allow the reaction to be properly followed.

A mechanistically based experimental program will vary P, T, substrate concentration, catalyst amount, and kLa over the expected ranges. In many cases, only one variable setting will change between experiments, so that individual effects, e.g. the temperature dependence, can be isolated and understood. This approach implies that the scientist is aware of effects like mass transfer and has characterized the equipment with respect to its mass transfer capability. The jacket temperature will be limited and monitored for heat transfer scale-up purposes. Samples will be taken that allow the reaction progress to be followed. Analysis of the data will also be on a mechanistic basis, and CQAs are likely to be correlated rationally with CPPs. Analysis occurs in parallel with the experimental program, enabling the experiments to be redirected as new information comes to light. The dependence of the required kLa on each of the other conditions is likely to emerge, and that relationship can be captured in the definition of the design space.
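As a sketch of what such a program might look like, the snippet below generates a one-factor-at-a-time plan around a base case; the factor names, base values and ranges are illustrative assumptions only:

```python
# Sketch of a mechanistically focused (one-factor-at-a-time) program around
# a base case; factor names and ranges are illustrative assumptions.

base = {"P_bar": 5.0, "T_K": 308.0, "c_sub_M": 0.5,
        "cat_wt_pct": 1.0, "kLa_1_s": 0.05}
ranges = {
    "P_bar": [2.0, 8.0],
    "T_K": [298.0, 318.0],
    "c_sub_M": [0.25, 1.0],
    "cat_wt_pct": [0.5, 2.0],
    "kLa_1_s": [0.01, 0.2],
}

plan = [dict(base)]  # experiment 1: the base case, sampled over time
for factor, levels in ranges.items():
    for level in levels:
        run = dict(base)
        run[factor] = level   # vary one setting; all others stay at base
        plan.append(run)

for i, run in enumerate(plan, 1):
    print(f"run {i:2d}: {run}")
```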

A third approach is enabled when the scientific method is applied, i.e. a hypothesis to explain the experimental data is incorporated in a mechanistic model (based on the process scheme) and model parameters are fitted to the data. This iterative procedure causes the user to rethink their model several times, until it fits their data as well as possible. The result is a higher level of understanding, documented in a model together with the data, and a tool that can be used to reduce the need for experimentation. In fact the model can be used to predict, for a given P, T, substrate loading, catalyst level, kLa and Ua, what the impurity level will be, and to explore any combination of those factors that meets the target CQA. In that sense the design space is whatever the model says meets the CQA, giving wider latitude in selecting operating conditions.
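A minimal sketch of such a model is shown below, assuming simple power-law rate expressions and placeholder parameter values (the heat balance and Ua are omitted for brevity); a real model would be fitted to lab data before being used to map the design space:

```python
# Minimal sketch of a mechanistic model for the hydrogenation: nitrile A is
# reduced via imine B to amine C, with the side reaction A + B -> impurity D.
# All rate laws and parameter values are illustrative assumptions, not
# fitted constants; in practice they would be regressed against lab data.

from scipy.integrate import solve_ivp

def rhs(t, y, k1, k2, k3, kLa, c_sat, cat):
    A, B, C, D, H2 = y
    r1 = k1 * cat * A * H2                 # A + H2 -> B
    r2 = k2 * cat * B * H2                 # B + H2 -> C
    r3 = k3 * A * B                        # A + B  -> D (impurity route)
    return [-r1 - r3,
            r1 - r2 - r3,
            r2,
            r3,
            kLa * (c_sat - H2) - r1 - r2]  # dissolved H2: supply minus demand

def simulate(P_bar, kLa, c_sub0=0.5, cat=0.01, H=800.0,
             k1=2000.0, k2=4000.0, k3=0.002, t_end=2.0e4):
    c_sat = P_bar / H                      # mol/L, Henry's law (H assumed)
    sol = solve_ivp(rhs, (0.0, t_end), [c_sub0, 0.0, 0.0, 0.0, c_sat],
                    args=(k1, k2, k3, kLa, c_sat, cat),
                    method="LSODA", rtol=1e-6, atol=1e-9)
    A, B, C, D, H2 = sol.y[:, -1]
    return 1.0 - A / c_sub0, D / c_sub0    # conversion, impurity fraction

# Explore the design space: which kLa values meet the impurity CQA at 5 bar?
for kLa in (0.02, 0.1, 0.5):
    conv, imp = simulate(5.0, kLa)
    print(f"kLa = {kLa:4.2f} 1/s -> conversion {conv:.3f}, impurity {imp:.3f}")
```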

Another consequence is that different experiments are now called for: one set to characterise the chemical rate constants, k, and another set to characterize the targeted scale-up equipment. Lab experiments no longer need to mimic plant-scale operation; they can focus on determining intrinsic chemical kinetics in association with the model. Spiked experiments that deliberately heighten the impurity level can allow its formation kinetics to be better determined. This approach is increasingly applied in Pharma API development and has been standard in other parts of the chemical process industries for many years.

All of which leaves the practical question of how to achieve the required equipment rate constants kLa and Ua in a given vessel; otherwise the process could operate outside the design space, producing off-spec material. Success here relies on good equipment characterization using solvent tests and chemical engineering (especially agitation) calculations, supported by a process-engineering-oriented equipment database in which accumulated equipment performance knowledge is stored and can be reused when required. Here, a great advantage of the pharma industry is that the same multi-purpose equipment is reused from product to product, meaning that useful data for characterization is being collected all the time, even during cleanouts.
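As an example of the agitation calculations involved, the sketch below estimates kLa from the agitator power draw using the van't Riet (1979) correlation; the correlation constants apply to coalescing, water-like systems, and the vessel numbers are assumed, so any real vessel should still be characterized by its own solvent tests:

```python
# Sketch: estimating kLa from agitation calculations, using the van't Riet
# (1979) correlation for coalescing (water-like) systems. Constants are
# system-specific; real vessels should be characterized by their own tests.

def power_input(Po, rho, N, D):
    """Agitator power, W. P = Po * rho * N^3 * D^5 (turbulent regime)."""
    return Po * rho * N**3 * D**5

def kla_vant_riet(P_W, V_m3, v_gas):
    """kLa (1/s) ~ 0.026 (P/V)^0.4 * vs^0.5 for coalescing systems."""
    return 0.026 * (P_W / V_m3) ** 0.4 * v_gas**0.5

# Illustrative 4 m3 vessel: Rushton turbine Po ~ 5, D = 0.5 m, N = 2 rps,
# superficial gas velocity 0.005 m/s (all assumed numbers).
P = power_input(5.0, 1000.0, 2.0, 0.5)
print(f"P = {P:.0f} W, P/V = {P/4.0:.0f} W/m3, "
      f"kLa = {kla_vant_riet(P, 4.0, 0.005):.4f} 1/s")
```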

Tuesday, January 15, 2008

Visual techniques in QbD: the Process Scheme

The QbD initiative is leading to increased use of tools like Ishikawa / fishbone diagrams in pharma, to brainstorm and visualize the impacts of process parameters on product quality attributes. See for example the ICH Guidance document at http://www.ich.org/LOB/media/MEDIA4349.pdf.

Our customers have found another visualization tool, the 'process scheme', very powerful; it was developed originally in the Zeneca Agrochemicals Process Studies Group to facilitate sharing of information between chemists and engineers at the unit operation level. Zeneca observed that chemists and engineers approach and discuss synthesis steps in different ways and lack a common language and visual representation to support subsequent process development and scale-up. The process scheme is at first unfamiliar to both (few chemical structures, if any; very little equipment detail) but is built using a simple procedure that each can understand, and leads to a compact representation of their current process understanding.

You can find examples anywhere that DynoChem is presented, see our case studies page for some quick links. A process scheme for a hydrogenation reaction is shown here, but all operations, including work-up and isolation, lend themselves to this approach.


The process scheme summarizes the phases (gas, liquid, etc.) and rate processes (reaction, phase transfer, hydrogen supply, heat removal) necessary for the operation to proceed; not all of these are always appreciated prior to the discussion. The scheme indicates certain process parameters likely to affect critical quality attributes (like impurity levels) and provides a rational basis for a systematic program to quantify their impact and how those parameters will vary with scale. Later, as more development data become available, the process scheme may be revised to reflect the current state of process understanding.

The procedure for creating the process scheme is:
  1. Assemble a small group that is familiar with the project, ideally including both the process development chemist and a chemical engineer. An analytical chemist can often provide invaluable additional input.
  2. Internal phases: draw each of the phases which you think are present:
    - For the continuous (normally liquid) phase, use a large rectangle
    - For solid phases, use a triangle at the lower left edge of the liquid phase rectangle
    - For dispersed gas phases, use a rectangle above the liquid, indicating the headspace
    - For sparged gas phases (gas introduced via a dip-leg), use a single bubble near the top of the liquid phase rectangle
    - For a dispersed liquid phase, use a single droplet at the edge of the liquid phase rectangle
  3. Internal rate processes:
    - Write down (or propose) a working reaction scheme for the major reactions taking place in each phase
    - Write down the major components in each phase, on each phase
    - Represent any transport processes including mass or heat transfer between phases using a pair of single arrows, one pointing in each direction. Next to each transfer, mark the type of rate process (e.g. for mass transfer mark kLa, for heat transfer mark Ua)
  4. External rate processes:
    - Using arrows that cross the model boundary, mark on the picture any process streams (i.e. flows) entering or leaving the system for each phase. These arrows should enter or leave the appropriate phase(s). For fed batch systems, show the feed tank as a separate phase connected to the main liquid phase by a flow. Beside each arrow, list the major components in each stream
    - Represent any vessel heating or cooling via a loop entering and leaving the liquid phase rectangle; mark with a pair of single arrows and Ua.
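The resulting scheme is also easy to capture as a simple structured record, so it can be shared and versioned alongside the project. The sketch below encodes the hydrogenation example this way; the field names are our own invention, not any standard:

```python
# A process scheme captured as a minimal structured record (field names are
# illustrative, not a standard), encoding the hydrogenation example above.

from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str            # e.g. "liquid", "headspace gas", "solid catalyst"
    components: list
    reactions: list = field(default_factory=list)

@dataclass
class RateProcess:
    kind: str            # "mass transfer" | "heat transfer" | "flow"
    between: tuple       # the two phases (or a phase and the surroundings)
    constant: str        # e.g. "kLa", "Ua"

liquid = Phase("liquid", ["substrate", "imine", "amine", "impurity", "H2(aq)"],
               ["substrate + 2 H2 -> amine", "substrate + imine -> impurity"])
gas = Phase("headspace gas", ["H2"])
solid = Phase("solid catalyst", ["Pd/C"])

scheme = [
    RateProcess("mass transfer", ("headspace gas", "liquid"), "kLa"),
    RateProcess("heat transfer", ("liquid", "jacket"), "Ua"),
]

for ph in (liquid, gas, solid):
    print(f"phase: {ph.name}: {', '.join(ph.components)}")
for p in scheme:
    print(f"{p.kind}: {p.between[0]} <-> {p.between[1]} ({p.constant})")
```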

Monday, January 14, 2008

Potential of open, standards-based, data access and sharing

Also at AIChE in November:

Alistair Gillanders, a member of the ISA88 committee, gave a useful perspective on how existing open standards such as S88 can facilitate the sharing of data to support faster and more efficient process development and the adoption of design space and QbD approaches.

The full paper is available at: http://www.scale-up.com/pubs.html.

Anjali Kataria related some results of a CRADA study undertaken with FDA and a cross-section of pharmaceutical companies, underlining the problems that exist in this area:

  • Data traceability and visibility were issues for both large and small companies.
  • Fewer than 5% of respondents use structured databases to store drug development data.
  • Drug development professionals spend, on average, five hours per week looking for data.
  • Some respondents spend eight hours or more on data retrieval each week.
  • Roughly two-thirds of respondents could not find the data they needed 10 to 20% of the time, triggering rework and duplication of tests and procedures.

More details at http://conformia.com/partners/industry.php

Tuesday, January 8, 2008

CFD limitations for stirred vessel applications

The widening application of computational fluid dynamics by academics working on pharma problems, as evidenced in presentations at the AIChE Annual Meeting in Salt Lake City in November, prompted this note on the limitations of that approach for design space and QbD work.

CFD results can have a high impact in a presentation or paper, but unfortunately this can mask the invalidity of the underlying calculations. Our company was a leading exponent of CFD before facing the fact that its limitations prevented us from addressing many customer problems, especially those involving stirred tanks.

Validating CFD:
No CFD results for stirred vessels / reactors should ever be presented or relied upon without quantifying their validity by comparison with established data:
  1. A quick overall validation check is the torque on the agitator, obtained by integrating the momentum flux on a surface around the impeller(s), which can be verified using known power numbers (Po).
  2. Another important check is integrating epsilon (W/kg, the rate of turbulent energy dissipation per unit mass) over the fluid volume; in a turbulent system this should nearly equal the power input calculated using a known Po (a sketch of checks 1 and 2 follows this list).
  3. The effects of grid size on any CFD results should always be checked; it can be very difficult to obtain a 'grid-independent' solution.
  4. More detailed validation tests include flow visualization (requires a lab set-up and tracer test techniques or similar) or careful velocity measurements.
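As an illustration of checks 1 and 2, the sketch below compares the power implied by a known power number with the power recovered from a hypothetical CFD torque and from integrating epsilon over the fluid mass; the arrays stand in for real exported CFD data:

```python
# Sketch of validation checks 1 and 2 above: compare CFD-derived torque and
# integrated epsilon against the power implied by a known power number.
# All numbers below are hypothetical stand-ins for real CFD output.

import numpy as np

rho, N, D, Po = 1000.0, 2.0, 0.5, 5.0      # water, 2 rps, 0.5 m Rushton (assumed)
P_expected = Po * rho * N**3 * D**5        # W, from the known power number

# Check 1: torque on the agitator (a value the CFD post-processor would
# report by integrating momentum flux on a surface around the impeller).
torque_cfd = 95.0                          # N m, hypothetical CFD output
P_from_torque = 2.0 * np.pi * N * torque_cfd

# Check 2: integrate epsilon (W/kg) over the fluid mass, cell by cell.
eps_cells = np.array([0.8, 1.5, 0.2, 0.05])            # W/kg, hypothetical
mass_cells = np.array([400.0, 150.0, 800.0, 2650.0])   # kg, hypothetical
P_from_eps = float(np.sum(eps_cells * mass_cells))

for label, P in [("power number", P_expected),
                 ("CFD torque", P_from_torque),
                 ("integrated epsilon", P_from_eps)]:
    print(f"{label:20s}: {P:8.1f} W")
# Large discrepancies (epsilon totals are often low by 2x or more on coarse
# grids) signal that the solution cannot be relied on quantitatively.
```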

CFD and Design Space / QbD:
For QbD purposes, CFD can shed light on elements of a problem or behaviours of a vessel under certain limited conditions, e.g. in single-phase flow (no particles, drops, bubbles), at steady state (is there really a steady state?) and without chemistry occurring, if you have the expertise, time and resources to set up a significant project to produce these results. However, much the same level of insight can be obtained using a five-minute calculation and an engineering correlation :)
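For example, a blend time estimate that might take a CFD project days to produce follows in a few lines from the Grenville correlation for turbulent baffled vessels; the correlation constant is approximate and the vessel numbers below are assumed:

```python
# The "five minute calculation": Grenville blend time correlation for
# turbulent baffled stirred tanks, N * t95 ~ 5.2 * (T/D)^2 / Po^(1/3).
# Vessel numbers are illustrative assumptions.

def blend_time_95(N, Po, T, D, const=5.2):
    """95% blend time, s, for a turbulent baffled stirred tank."""
    return const * (T / D) ** 2 / (Po ** (1.0 / 3.0) * N)

# 1.5 m diameter tank, 0.5 m Rushton turbine (Po ~ 5) at 2 rps (assumed):
print(f"t95 ~ {blend_time_95(2.0, 5.0, 1.5, 0.5):.1f} s")
```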

For Design Space exploration, multiple parametric calculations would need to be performed, increasing the project duration by orders of magnitude, producing results that potentially cannot be relied upon.

Too negative? No, just realistic. CFD excels in automotive / airflow / aerospace and generally in single-phase non-reacting systems with well-defined turbulence and a steady state. Those situations are rarely of interest for Pharma API development and scale-up.

Friday, January 4, 2008

First post

Thought it was about time to start blogging about the mechanistic approach to process development and scale-up and how it underpins quality by design (QbD). It's an area we take seriously and where DynoChem already has a big impact.
