Sunday, May 3, 2026

Glass-Ceramics

Glass-ceramics are one of those materials that seem obvious after you take a few materials engineering courses. You start with a glass, which is useful because it can be melted, shaped, and processed. Then instead of avoiding crystallization, you force the glass to crystallize in a controlled way.

That is the key distinction. In normal glass processing, crystallization is usually a defect. It means the glass devitrified. In glass-ceramics, crystallization is the product. So you want something like:

$$ \text{glass} \xrightarrow{\text{controlled heat treatment}} \text{glass-ceramic} $$

Typically, the more useful state is a glass-ceramic with one or more crystalline phases embedded in a residual glassy matrix. The parent glass composition, nucleation process, growth process, crystal size, and residual glass are all essential aspects [1].


The thing I like about glass-ceramics is that they are a very clean process-structure-property example where kinetics matter. You are not just choosing a chemistry. You are choosing a chemistry that can first form a vitreous state, and then later crystallize into the phases you actually want. The workflow is then something like:

graph LR
  A[Composition] --> B[Glass]
  B --> C[Crystallization Path]
  C --> D[Microstructure]
  D --> E[Properties]

This controlled process drives structure-property relationships that give glass-ceramics applications in cookware, cooktops, dental restorations, optical mirror substrates, sealants, and bioactive materials. These applications look unrelated, but the trick is the same: controlled crystallization gives access to property combinations that are hard to get from either ordinary glass or conventional sintered ceramics [2].

Figure 1. Corning Pyroceram glass-ceramic. Wikimedia Commons, public domain.

CorningWare (see Figure 1) was a glass-ceramic that turned out to have outstanding thermal shock resistance and hence was used for cookware. Many silica-rich glasses (for example soda-lime window glass) tolerate rapid temperature changes poorly; borosilicate glass is a common exception. The low-expansion glass-ceramic handles thermal shock much better because the crystallized microstructure changes the effective thermal expansion response.


The classical glass-ceramic idea is tied to S. Donald Stookey's work on catalyzed crystallization of glass [3]. The important point is not simply that glasses can crystallize; every vitreous material tends toward the thermodynamically stable state, which is usually the crystalline phase. As stated above, the important point is that the crystallization process is very carefully controlled. The way to think of this is that there are two events: (1) crystallite nucleation and (2) growth/distribution of the crystallites. If you assume two rates, one for nucleation and one for growth, what you try to control is something like,

$$ \begin{equation} I(T) \not\equiv U(T) \label{eq:nucleation_growth} \end{equation} $$

where $I(T)$ is the nucleation rate and $U(T)$ is the crystal growth rate (different physical quantities with different dimensions; $\not\equiv$ means their $T$-dependences differ). The expression is illustrative, not empirical. In practice you need the right density of nuclei and the right amount of crystal growth. If there are too few nuclei, the crystals can grow too large. If the crystals grow too much, the material can lose the desired properties. Ideally one finds the right crystal size and number density to get useful glass-ceramic properties.
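To make the two-rate picture above concrete, here is a small illustrative sketch, not a physical model: the Gaussian shapes, peak temperatures, and widths are all hypothetical, chosen only so that the $I(T)$ maximum sits below the $U(T)$ maximum, as in the classic two-stage heat treatment.

```python
import numpy as np

def rate(T, T_peak, width):
    """Illustrative bell-shaped rate curve centered at T_peak (not empirical)."""
    return np.exp(-((T - T_peak) / width) ** 2)

# Hypothetical parameters: the nucleation rate I(T) peaks at a lower
# temperature than the growth rate U(T).
T = np.linspace(700.0, 1300.0, 601)      # temperature grid (K)
I = rate(T, T_peak=850.0, width=60.0)    # nucleation rate (arb. units)
U = rate(T, T_peak=1100.0, width=80.0)   # growth rate (arb. units)

# A two-stage treatment holds first near the I(T) maximum to set the
# nuclei density, then near the U(T) maximum to grow the crystallites.
T_nucleate = T[np.argmax(I)]             # 850 K for these parameters
T_grow = T[np.argmax(U)]                 # 1100 K for these parameters
print(T_nucleate, T_grow)
```

The separation between the two holds is the practical knob: it lets you set the number density of crystallites independently of their final size.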

The other thing to think about is the actual glass-ceramic microstructure. The useful properties of glass-ceramics come from details like:

  • the crystalline phase,
  • the crystal volume fraction,
  • the crystal size,
  • the crystal shape,
  • the residual glass composition,
  • the elastic and thermal mismatch between phases.

For low thermal expansion glass-ceramics, the idea is often that the glassy phase has positive thermal expansion and the crystalline phase has negative thermal expansion. If the balance is right, those contributions nearly cancel. So you have some linear mixing like:

$$ \begin{equation} \alpha_{\text{eff}} \approx V_g \alpha_g + V_c \alpha_c \nonumber \label{eq:cte_mixture} \end{equation} $$

where $V_g$ and $V_c$ are the glass and crystal volume fractions, and $\alpha_g$ and $\alpha_c$ are the thermal expansion coefficients of the glassy and crystalline phases. This is obviously too simple because real microstructures have elastic constraint and anisotropy.
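As a quick numerical sketch of the mixing rule above (the coefficient values are hypothetical round numbers, not data for any particular glass-ceramic):

```python
# Rule-of-mixtures estimate of the effective CTE. Illustrative only:
# real microstructures have elastic constraint and anisotropy, and these
# coefficients are made up for the example.
alpha_g = 4.0e-6    # residual glass CTE, 1/K (positive)
alpha_c = -2.0e-6   # crystalline phase CTE, 1/K (negative)

def alpha_eff(V_c):
    """Linear mixing V_g*alpha_g + V_c*alpha_c with V_g = 1 - V_c."""
    return (1.0 - V_c) * alpha_g + V_c * alpha_c

# Crystal volume fraction at which the two contributions cancel:
V_c_zero = alpha_g / (alpha_g - alpha_c)
print(V_c_zero, alpha_eff(V_c_zero))  # 2/3 and ~0 for these numbers
```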

One example is ZERODUR. It is a lithium aluminosilicate glass-ceramic used in precision optical applications because it can have extremely low thermal expansion. The low expansion comes from balancing the positive expansion of the residual glass against the negative expansion of the crystalline phase [4].

Figure 2. SCHOTT CERAN glass-ceramics. Wikimedia Commons, CC BY-SA 4.0

Cooktops (see Figure 2) are the less exotic version of the same basic idea. You want a smooth surface, thermal shock resistance, chemical durability, and enough mechanical robustness for normal use. Dental materials are another good example because a dental restoration needs strength, chemical durability, wear resistance, processability, and translucency. A glass might look good but not be strong enough, while a polycrystalline ceramic is strong but may be too opaque or difficult to process. A glass-ceramic gives the right balance, as in lithium disilicate dental glass-ceramics. They contain interlocked lithium disilicate crystals in a glassy matrix, which improves mechanical properties while preserving the required optical properties [5].

References

[1] J. Deubener et al., Updated Definition of Glass-Ceramics, Journal of Non-Crystalline Solids. 501 (2018) 3--10. https://doi.org/10.1016/j.jnoncrysol.2018.01.033.
[2] M.J. Davis, E.D. Zanotto, Glass-Ceramics and Realization of the Unobtainable: Property Combinations That Push the Envelope, MRS Bulletin. 42 (2017) 195--199. https://doi.org/10.1557/mrs.2017.27.
[3] S.D. Stookey, Catalyzed Crystallization of Glass in Theory and Practice, Industrial & Engineering Chemistry. 51 (1959) 805--808. https://doi.org/10.1021/ie50595a022.
[4] P. Hartmann, R. Jedamzik, A. Carre, J. Krieg, T. Westerhoff, Glass Ceramic ZERODUR: Even Closer to Zero Thermal Expansion: A Review, Journal of Astronomical Telescopes, Instruments, and Systems. 7 (2021) 020902. https://doi.org/10.1117/1.JATIS.7.2.020902.
[5] L. Fu, H. Engqvist, W. Xia, Glass--Ceramics in Dentistry: A Review, Materials. 13 (2020) 1049. https://doi.org/10.3390/ma13051049.

Reuse and Attribution

Sunday, March 15, 2026

The Spectral Energy Density Method Digest

This post has taken me two years to put together, which is kind of ridiculous given the original Spectral Energy Density (SED) paper was published in 2010. The thing is, I never could get it compact enough while still feeling it was informative enough to be useful to others. I've decided that I'm just going to post it as is, which is pretty complete. I will note that I did have Gemini look at it, refine some language, and check things.


When you think about calculating thermal properties of crystalline materials, the first things you might encounter are the phonon dispersion and density of states (DOS) plots. But you will immediately ask yourself: how do these connect to actual heat transport? The short answer is ... they don't directly. Dispersion and DOS tell you all about the allowed channels for lattice/phonon vibrations. They tell you what the coherent, non-dissipative modes are, but the actual thermal conductivity is governed by how those modes scatter via phonon-phonon interactions, defects, and boundaries. Furthermore, standard phonon bandstructures are a ground-state property, so they assume 0 K and harmonic/quasi-harmonic1 forces. To get temperature-dependent, anharmonic phonon frequencies and lifetimes (how long a mode lasts), you need finite-temperature lattice simulations. One practical route to this information is the spectral energy density (SED) method, introduced and refined by McGaughey and collaborators [1,2].

My aim with this post is to put together a guide on what the SED is, the main equations, what every term means, and how you get them from practical atomistic simulations. I'll stick to the formulation that is formally derived from lattice dynamics (the $\Phi$ method); an alternative $\Phi'$ was shown to misrepresent the total spectral energy and fail for lifetimes and thermal conductivity [3].

tl;dr: the recipe

Get harmonic phonon eigenvectors and $\omega_\nu(\mathbf{q})$ from the dynamical matrix; run MD and Fourier-transform mass-weighted velocities in space and time to get $\Phi(\mathbf{q},\omega)$; plot it as a heatmap for a bandstructure and fit peaks to Lorentzians to get lifetimes $\tau_\nu = 2/\Gamma_\nu$; then sum mode heat capacity $\times$ group velocity$^2$ $\times$ $\tau$ over all modes for thermal conductivity. Symbols are defined in the Notation legend at the end.

Spectral Energy Density

The SED is the distribution of kinetic energy over a wavevector $\mathbf{q}$ and the frequency (angular) $\omega$ in a crystal. To remind you the wavevector represents the reciprocal space directions in a crystal and is akin to the wave momentum. The peaks in $\Phi(\mathbf{q},\omega)$ correspond to phonon modes and their positions give the dispersion $\omega_\nu(\mathbf{q})$. The widths of the peaks correspond to the inverse lifetime (broadening) of the phonon mode at $\mathbf{q}$.

Unit Cell Phonon Eigenvectors

You need the harmonic phonon modes of the primitive (or conventional) unit cell at a set of wavevectors $\mathbf{q}$. That comes from lattice dynamics calculations where you build the dynamical matrix, diagonalize it, get eigenvalues $\omega_\nu^2(\mathbf{q})$ and eigenvectors $\mathbf{e}_{s\alpha,\nu}(\mathbf{q})$. You can do this with packages like Phonopy and GULP.

The dynamical matrix (harmonic force constants in $\mathbf{q}$-space) is given by:

$$ \begin{equation} D_{ss'}^{\alpha\beta}(\mathbf{q}) = \frac{1}{\sqrt{m_s m_{s'}}} \sum_{\ell} \Phi_{ss'}^{\alpha\beta}(0,\ell) \, e^{i\mathbf{q}\cdot(\mathbf{R}_{s'}(\ell) - \mathbf{R}_s(0))} \label{eq:dyn_matrix} \end{equation} $$

where:

  • $s,s'$: basis site indices in the primitive unit cell.
  • $\alpha,\beta$: Cartesian directions.
  • $m_s$: mass of atom $s$.
  • $\Phi_{ss'}^{\alpha\beta}(0,\ell)$: force constant2, i.e., force on atom $(0,s)$ in direction $\alpha$ when atom $(\ell,s')$ is displaced in direction $\beta$.
  • $\mathbf{R}_s(\ell)$: equilibrium position of atom $s$ in unit cell $\ell$.

This then becomes an eigenvalue problem which can be solved to get the eigenvalues $\omega_\nu^2(\mathbf{q})$ and eigenvectors $\mathbf{e}_{s\alpha,\nu}(\mathbf{q})$,

$$ \begin{equation} \sum_{s'\beta} D_{ss'}^{\alpha\beta}(\mathbf{q}) \, e_{s'\beta,\nu}(\mathbf{q}) = \omega_\nu^2(\mathbf{q}) \, e_{s\alpha,\nu}(\mathbf{q}) \label{eq:eigen} \end{equation} $$

Which gives:

  • $\nu$: branch index (mode label).
  • $\omega_\nu(\mathbf{q})$: harmonic frequency of mode $\nu$ at $\mathbf{q}$ (rad/s).
  • $\mathbf{e}_{s\alpha,\nu}(\mathbf{q})$: polarization eigenvector for atom $s$, direction $\alpha$, mode $\nu$. Usually normalized.

In practice you use a code like Phonopy or GULP with your relaxed structure and force constants to get $\omega_\nu(\mathbf{q})$ and $\mathbf{e}_{s\alpha,\nu}(\mathbf{q})$ on a grid of $\mathbf{q}$-points or along high-symmetry paths in reciprocal space. The thing is that the same primitive cell and $\mathbf{q}$-points must be consistent with the supercell you use in your MD simulations.
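To make eqs. \ref{eq:dyn_matrix} and \ref{eq:eigen} concrete, here is a minimal sketch for a 1D diatomic chain with a single nearest-neighbor spring constant. All parameters are made up; a real workflow would take force constants from Phonopy or GULP instead of writing them by hand.

```python
import numpy as np

k, m1, m2, a = 1.0, 1.0, 2.0, 1.0   # spring constant, masses, lattice constant (assumed)

def dynamical_matrix(q):
    """2x2 mass-weighted dynamical matrix D(q) for a 1D diatomic chain."""
    off = -k * (1.0 + np.exp(-1j * q * a)) / np.sqrt(m1 * m2)
    return np.array([[2.0 * k / m1, off],
                     [np.conj(off), 2.0 * k / m2]])

def branches(q):
    """Solve the eigenproblem: omega_nu^2(q) eigenvalues, e_nu(q) eigenvectors."""
    w2, e = np.linalg.eigh(dynamical_matrix(q))   # Hermitian eigenproblem
    return np.sqrt(np.abs(w2)), e                 # ascending: acoustic, optical

w_gamma, _ = branches(0.0)
# Acoustic branch -> 0 at q = 0; optical branch -> sqrt(2k(1/m1 + 1/m2)).
print(w_gamma)   # approx [0, 1.732] for these parameters
```

The same pattern (build $D(\mathbf{q})$ from force constants, diagonalize with a Hermitian solver) is what Phonopy/GULP do internally for 3D crystals with many basis atoms.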

Computing the Power Spectrum

The power spectrum is the Fourier transform of the kinetic energy in $(\mathbf{q},\omega)$ space; in other words, it shows where the thermal energy is distributed in momentum space. To get this you'll typically configure your atomistic simulation to run in the canonical (e.g. NVT) or microcanonical (NVE) ensemble at the temperature of interest. For this step you need to run the simulation for long enough, and use a supercell such that the wavevectors you want are commensurate with the supercell (no spurious aliasing). From the simulation you output positions and velocities at some sampling time interval [1,2].

Then to get the SED you need to compute the power spectrum of the kinetic energy in $(\mathbf{q},\omega)$ space and optionally project onto the eigenvectors. Let $n$ index unit cells in the supercell and $i$ the atoms within a cell (or a combined index over all atoms). The velocity of atom $i$ in cell $n$ at time $t$ is $\mathbf{v}_{ni}(t)$, then we define the mass-weighted velocity in $\mathbf{q}$-space, which is the spatial Fourier transform at each time:

$$ \begin{equation} \tilde{\mathbf{v}}_i(\mathbf{q}, t) = \sum_n \sqrt{m_i}\, \mathbf{v}_{ni}(t) \, e^{i\mathbf{q}\cdot\mathbf{R}_{ni}}, \label{eq:v_q} \end{equation} $$

here, we have

  • $\mathbf{R}_{ni}$: equilibrium position of that atom (instantaneous positions can also be used, but equilibrium positions seem to be more typical for the SED [1,2]).
  • The sum runs over unit cells $n$ in the supercell, with $i$ running over basis atoms, so each $ni$ is a unique atom.

Again, the SED $\Phi(\mathbf{q},\omega)$ is the temporal power spectrum of the kinetic energy in $\mathbf{q}$-space [1,2,4], and so we compute it as:

$$ \begin{equation} \Phi(\mathbf{q},\omega) = \sum_i \frac{1}{2} \bigl| \tilde{v}_i(\mathbf{q},\omega) \bigr|^2 \label{eq:sed_total} \end{equation} $$

where $\tilde{v}_i(\mathbf{q},\omega)$ is the Fourier transform of $\tilde{\mathbf{v}}_i(\mathbf{q},t)$ with respect to time (e.g. an FFT over the trajectory). With the usual normalization by the trajectory length, $\Phi$ is an energy per unit angular frequency (e.g. $\text{J} \cdot \text{s}/\text{rad}$).

At thermal equilibrium two things hold. First, the SED $\Phi(\mathbf{q},\omega)$ is concentrated near $\omega = \omega_\nu(\mathbf{q})$: at each wavevector $\mathbf{q}$ the lattice motion is mostly a sum of the harmonic modes, so the kinetic energy in $(\mathbf{q},\omega)$-space shows peaks at those mode frequencies instead of being spread over all frequencies. Second, if you integrate $\Phi(\mathbf{q},\omega)$ over $\omega$ at fixed $\mathbf{q}$, you get the total kinetic energy in that $\mathbf{q}$-channel; when normalized correctly this integral is on the order of $\frac{1}{2} n_\text{bands} k_B T$, in other words the equipartition across the $n_\text{bands}$ branches at that $\mathbf{q}$ [4].
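Here is a minimal sketch of eqs. \ref{eq:v_q} and \ref{eq:sed_total} on synthetic data: a 1D chain carrying a single traveling mode at a known frequency, so the SED peak lands where it should. Everything here (chain size, mode frequency, time step) is made up; a real calculation would read $\mathbf{v}_{ni}(t)$ from an MD trajectory.

```python
import numpy as np

# Hypothetical 1D chain: N atoms of unit mass carrying one plane-wave
# mode at commensurate wavevector q0 with angular frequency w0.
N, m, a = 32, 1.0, 1.0
nsteps, dt = 4096, 0.05
q0 = 2.0 * np.pi / (N * a) * 4          # commensurate with the supercell
w0 = 1.3                                 # mode angular frequency (assumed)

R = np.arange(N) * a                     # equilibrium positions
t = np.arange(nsteps) * dt
# v_n(t) = cos(w0 t - q0 R_n): every atom oscillates with a q0 phase.
v = np.cos(w0 * t[:, None] - q0 * R[None, :])

def sed(v, q):
    """Phi(q, w): spatial FT of mass-weighted velocities, then |time FFT|^2."""
    vq = (np.sqrt(m) * v * np.exp(1j * q * R)).sum(axis=1)   # eq. (v_q)
    vw = np.fft.fft(vq)                                      # time FFT
    return 0.5 * np.abs(vw) ** 2 / nsteps                    # eq. (sed_total), up to normalization

omega = 2.0 * np.pi * np.fft.fftfreq(nsteps, d=dt)
phi = sed(v, q0)
print(omega[np.argmax(phi)])   # peak within one frequency bin of w0 = 1.3
```

With MD velocities instead of the synthetic plane wave, the same function gives the thermally broadened peaks discussed below.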

The most interesting, in my opinion, is the mode-resolved SED (projection onto eigenvectors), which allows you to assign energy to a specific branch $\nu$ and hence get mode-specific thermal conductivity [1,2]. To do this you project the mass-weighted velocities onto the harmonic eigenvector $\mathbf{e}_{s\alpha,\nu}(\mathbf{q})$ before taking the time Fourier transform. For atom index $i$ mapping to basis $s$ and Cartesian $\alpha$, the mode amplitude is

$$ \begin{equation} Q_{\mathbf{q}\nu}(t) \propto \sum_{ni} \sqrt{m_i}\, v_{ni,\alpha}(t) \, e_{s\alpha,\nu}(\mathbf{q}) \, e^{-i\mathbf{q}\cdot\mathbf{R}_{ni}} \label{eq:mode_proj} \end{equation} $$

also doing so with the proper normalization and phase convention. Then the mode-resolved SED $\Phi_\nu(\mathbf{q},\omega)$ is the power spectrum of $Q_{\mathbf{q}\nu}(t)$. In practice you can compute $\Phi(\mathbf{q},\omega)$ first and then decompose by $\nu$ using the eigenvectors, or compute $\Phi_\nu$ directly; both approaches are used in the literature [1,2,4]. You then need the following pieces:

  • $\mathbf{v}_{ni}(t)$: from your MD trajectory with velocities.
  • $\mathbf{q}$: set of wavevectors commensurate with the supercell (e.g. $\mathbf{q} = \sum_i (n_i/N_i)\mathbf{b}_i$).
  • $\mathbf{e}_{s\alpha,\nu}(\mathbf{q})$: eigenvectors from earlier.
  • $\omega_\nu(\mathbf{q})$: frequencies from earlier.
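A minimal sketch of the projection in eq. \ref{eq:mode_proj}, with random velocities standing in for an MD trajectory and a made-up normalized eigenvector. In practice $\mathbf{e}$ comes from the lattice dynamics step, and some conventions use its complex conjugate in the projection.

```python
import numpy as np

rng = np.random.default_rng(1)
ncells, nbasis, nsteps = 8, 2, 256
masses = np.array([1.0, 2.0])                       # per basis atom (assumed)
R = np.arange(ncells) * 1.0                         # unit-cell origins, a = 1
q = 2.0 * np.pi / 8.0 * 3                           # commensurate wavevector
v = rng.standard_normal((nsteps, ncells, nbasis))   # stand-in for v_{ni,alpha}(t)

# Made-up normalized eigenvector e_{s alpha, nu}(q) for one branch.
e = np.array([1.0, 1.0j]) / np.sqrt(2.0)

# Q_{q nu}(t) = sum_{n,s} sqrt(m_s) v_{ns}(t) e_{s,nu}(q) exp(-i q R_n)
phase = np.exp(-1j * q * R)
Q = np.einsum('tns,s,s,n->t', v, np.sqrt(masses), e, phase)

# Mode-resolved SED: the power spectrum of the mode amplitude.
phi_nu = np.abs(np.fft.fft(Q)) ** 2 / nsteps
print(phi_nu.shape)   # one value per frequency bin
```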

SED Bandstructure and Mode Lifetimes

A natural way to visualize the SED is like a phonon bandstructure, i.e., plot $\Phi(\mathbf{q},\omega)$ or $\Phi_\nu(\mathbf{q},\omega)$ along a path in the Brillouin zone (e.g. $\Gamma \to X \to K \to \Gamma$) on one axis and frequency $\omega$ (e.g. in THz) on the other. What you get looks like the phonon bandstructure but with bright ridges following the branches. The heatmap intensity comes from the spectral energy density and the spread is related to the mode lifetimes, so what we see is that at finite temperature (and with anharmonicity) the branches can shift and broaden compared to the standard ground-state quasi-harmonic phonon dispersion. An example is shown in Figure 1.

In our SED bandstructure, each peak in $\Phi_\nu(\mathbf{q},\omega)$ at a given $\mathbf{q}$ can be modeled as a damped harmonic oscillator. As a result, in the frequency domain the power spectrum has a Lorentzian-like form [4,5]:

$$ \begin{equation} S_\nu(\omega) \propto \frac{\Gamma_\nu \, \omega_\nu^2}{(\omega^2 - \omega_\nu^2)^2 + (\Gamma_\nu \omega)^2} \label{eq:lorentzian} \end{equation} $$

where:

  • $\omega_\nu$: peak position (central frequency) of the mode.
  • $\Gamma_\nu$: full width at half maximum (FWHM).

With this we can then compute the phonon lifetime using the following equation:

$$ \begin{equation} \tau_\nu = \frac{2}{\Gamma_\nu} \label{eq:lifetime} \end{equation} $$

which corresponds to the decay of the oscillator envelope going like $e^{-t/\tau_\nu}$.

To get the lifetime, we fit the SED peak for mode $\nu$ at each $\mathbf{q}$ to a Lorentzian to extract $\omega_\nu$ and $\Gamma_\nu$, and then compute eq. $\ref{eq:lifetime}$. These fits are typically done per branch and per $\mathbf{q}$, so it becomes challenging with overlapping branches or when there are multiple modes at the same frequency.
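A minimal sketch of the fit, using a synthetic peak with hypothetical $\omega_\nu$ and $\Gamma_\nu$ plus some noise; in a real workflow the data would be an SED slice $\Phi_\nu(\mathbf{q},\omega)$ at fixed $\mathbf{q}$.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, A, w0, gamma):
    """Damped harmonic oscillator power spectrum, eq. (lorentzian)."""
    return A * gamma * w0**2 / ((w**2 - w0**2)**2 + (gamma * w)**2)

# Synthetic "SED peak": hypothetical mode at w0 = 10 with FWHM 0.4,
# with a little noise to mimic a finite MD trajectory.
rng = np.random.default_rng(0)
w = np.linspace(5.0, 15.0, 400)
data = lorentzian(w, 1.0, 10.0, 0.4)
data += 0.02 * data.max() * rng.standard_normal(w.size)

popt, _ = curve_fit(lorentzian, w, data, p0=[1.0, 9.5, 1.0])
A_fit, w0_fit, gamma_fit = popt
tau = 2.0 / gamma_fit                      # eq. (lifetime)
print(w0_fit, gamma_fit, tau)              # near 10, 0.4, and 5
```

The initial guess `p0` matters in practice: with overlapping branches, seeding the fit from the harmonic $\omega_\nu(\mathbf{q})$ of the branch you are after helps it lock onto the right peak.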

Figure 1. SED Bandstructure from [1]

Mode-Specific Thermal Conductivity

Within the relaxation-time approximation of the phonon Boltzmann transport equation, the lattice thermal conductivity tensor can be written as a sum over modes $\mathbf{q},\nu$ [6]:

$$ \begin{equation} \kappa^{\alpha\beta} = \frac{1}{V} \sum_{\mathbf{q}\nu} C_{\mathbf{q}\nu} \, v_{\mathbf{q}\nu}^\alpha \, v_{\mathbf{q}\nu}^\beta \, \tau_{\mathbf{q}\nu} \label{eq:kappa_tensor} \end{equation} $$

Let's break down the terms in $\ref{eq:kappa_tensor}$:

  • $V$: volume of the crystal (supercell or primitive cell).
  • $C_{\mathbf{q}\nu}$: mode heat capacity (J/K per mode); assuming the harmonic approximation, $C_{\mathbf{q}\nu} = k_B \, x^2 e^x / (e^x - 1)^2$ with $x = \hbar\omega_{\mathbf{q}\nu}/(k_B T)$ (Einstein heat capacity per mode).
  • $v_{\mathbf{q}\nu}^\alpha = \partial\omega_{\mathbf{q}\nu}/\partial q_\alpha$: group velocity (m/s) of branch $\nu$ at $\mathbf{q}$. From the harmonic dispersion $\omega_\nu(\mathbf{q})$ (Step 1) or from the peak positions of the SED at neighboring $\mathbf{q}$-points.
  • $\tau_{\mathbf{q}\nu}$: relaxation time (s) from the Lorentzian fit (Step 3), $\tau_{\mathbf{q}\nu} = 2/\Gamma_{\mathbf{q}\nu}$.

For an isotropic material you often report a scalar $\kappa$ and use $\frac{1}{3}\operatorname{tr}(\kappa)$ or one diagonal component:

$$ \begin{equation} \kappa = \frac{1}{3V} \sum_{\mathbf{q}\nu} C_{\mathbf{q}\nu} \, v_{\mathbf{q}\nu}^2 \, \tau_{\mathbf{q}\nu} \label{eq:kappa_scalar} \end{equation} $$

with $v_{\mathbf{q}\nu}^2 = |\mathbf{v}_{\mathbf{q}\nu}|^2$. So: you take harmonic $C_{\mathbf{q}\nu}$ and $\mathbf{v}_{\mathbf{q}\nu}$ (or SED-derived frequencies), and anharmonic $\tau_{\mathbf{q}\nu}$ from the SED. That combination is the usual SED-based route to $\kappa$ [12].
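Putting the pieces together, here is a minimal sketch of eq. \ref{eq:kappa_scalar} with a few made-up modes; the frequencies, group velocities, lifetimes, and volume below are hypothetical placeholders for the harmonic and SED-derived quantities.

```python
import numpy as np

kB, hbar = 1.380649e-23, 1.054571817e-34   # SI constants

def mode_heat_capacity(omega, T):
    """Einstein heat capacity per mode: kB x^2 e^x / (e^x - 1)^2."""
    x = hbar * omega / (kB * T)
    return kB * x**2 * np.exp(x) / np.expm1(x)**2

T = 300.0                                   # K
V = 1.0e-26                                 # m^3 (hypothetical supercell volume)
omega = np.array([2.0e13, 4.0e13, 6.0e13])  # rad/s (from harmonic calc or SED peaks)
v_g = np.array([5000.0, 3000.0, 1000.0])    # m/s, |group velocity| per mode
tau = np.array([20e-12, 10e-12, 5e-12])     # s, from the SED Lorentzian fits

# kappa = (1 / 3V) * sum_modes C * v^2 * tau
kappa = (mode_heat_capacity(omega, T) * v_g**2 * tau).sum() / (3.0 * V)
print(kappa)   # approx 0.27 W/(m K) for these made-up modes
```

A real calculation sums over the full commensurate $\mathbf{q}$-grid and all branches, but the arithmetic per mode is exactly this.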

Pseudocode Summary

I do have Python code that achieves all of this in conjunction with LAMMPS and Phonopy, but it is old and messy at the moment. In addition, the package dynasor appears to be well maintained and has good tutorials on how to use it; pySED is a Python package that implements the SED method to compute kinetic-energy-weighted phonon dispersion and extract phonon lifetimes from MD trajectories [7]. What I'll share is the basic recap of the algorithm:

  1. Primitive cell + force constants $\to$ dynamical matrix $D(\mathbf{q})$ for $\mathbf{q}$ on desired grid/path.
  2. Diagonalize $D(\mathbf{q})$ $\to$ $\omega_\nu(\mathbf{q})$, $\mathbf{e}_{s,\nu}(\mathbf{q})$ (eqs. \ref{eq:dyn_matrix}, \ref{eq:eigen}).
  3. Build supercell (integer repetition); run NVT or NVE MD at target $T$; dump positions and velocities every $\Delta t$.
  4. For each commensurate $\mathbf{q}$:
    • a. Compute $\tilde{\mathbf{v}}_i(\mathbf{q},t) = \sum_n \sqrt{m_i}\, \mathbf{v}_{ni}(t)\, e^{i\mathbf{q}\cdot\mathbf{R}_{ni}}$ (eq. \ref{eq:v_q}).
    • b. FFT in time $\to$ $\tilde{v}_i(\mathbf{q},\omega)$.
    • c. $\Phi(\mathbf{q},\omega) = \sum_i \frac{1}{2} |\tilde{v}_i(\mathbf{q},\omega)|^2$ (eq. \ref{eq:sed_total}).
    • (Optional: project onto $\mathbf{e}_\nu(\mathbf{q})$ $\to$ $\Phi_\nu(\mathbf{q},\omega)$.)
  5. Plot $\Phi(\mathbf{q},\omega)$ along BZ path $\to$ heatmap bandstructure.
  6. For each $(\mathbf{q},\nu)$: fit SED peak to Lorentzian $\to$ $\omega_\nu$, $\Gamma_\nu$; $\tau_\nu = 2/\Gamma_\nu$ (eq. \ref{eq:lifetime}).
  7. From harmonic calc: $C_{\mathbf{q}\nu}$, $\mathbf{v}_{\mathbf{q}\nu}$. From SED: $\tau_{\mathbf{q}\nu}$. (eq. \ref{eq:kappa_scalar})
  8. $\kappa = \frac{1}{3V} \sum_{\mathbf{q}\nu} C_{\mathbf{q}\nu}\, v_{\mathbf{q}\nu}^2\, \tau_{\mathbf{q}\nu}$ (eq. \ref{eq:kappa_scalar}).

Practical Caveats

There are some practical caveats to be aware of with calculating the SED:

  • Commensurability: $\mathbf{q}$ must belong to the reciprocal lattice of the supercell; otherwise the SED is polluted by aliasing [4].
  • Sampling: Long trajectories and fine $\Delta t$ improve frequency resolution and the maximum $\omega$; run length should be much longer than $\tau_\nu$ for reliable lifetime extraction.
  • Temperature: SED reflects the ensemble you simulate (NVT/NVE at $T$) so anharmonic shifts and broadenings are included implicitly in the MD.

Summary

The SED method bridges harmonic lattice dynamics and MD to give you mode-resolved thermal conductivity. You use harmonic eigenvectors and dispersions to define modes and group velocities, and then MD trajectories to get the power spectrum in $(\mathbf{q},\omega)$. From that you read off anharmonic frequencies and lifetimes via Lorentzian fits, and plug them into the relaxation-time formula for mode-resolved thermal conductivity. All of this can now be done with standard codes (e.g. Phonopy, LAMMPS, dynasor, pySED [7], or in-house scripts), with the main equations \ref{eq:dyn_matrix} -- \ref{eq:kappa_scalar} driving things.

Notation legend

Indices Meaning
$s,s'$ Basis (atom) indices in primitive unit cell
$\ell$, $n$ Unit-cell index (primitive / supercell)
$i$ Atom index within cell (or combined $ni$)
$\alpha,\beta$ Cartesian directions
$\nu$ Phonon branch (mode) index
Space & structure Meaning
$\mathbf{q}$ Wavevector in reciprocal space
$\mathbf{R}_s(\ell)$, $\mathbf{R}_{ni}$ Equilibrium position of atom $s$ in cell $\ell$, or atom $i$ in cell $n$
$m_s$, $m_i$ Mass of atom $s$ or $i$
Harmonic lattice dynamics Meaning
$D_{ss'}^{\alpha\beta}(\mathbf{q})$ Dynamical matrix (force constants in $\mathbf{q}$-space)
$\Phi_{ss'}^{\alpha\beta}(0,\ell)$ Force constant: force on $(0,s)$ along $\alpha$ when $(\ell,s')$ displaced along $\beta$
$\omega_\nu(\mathbf{q})$ Harmonic angular frequency of mode $\nu$ at $\mathbf{q}$ (rad/s)
$\mathbf{e}_{s\alpha,\nu}(\mathbf{q})$ Polarization eigenvector (atom $s$, direction $\alpha$, mode $\nu$)
MD & SED Meaning
$\mathbf{v}_{ni}(t)$ Velocity of atom $ni$ at time $t$
$\tilde{\mathbf{v}}_i(\mathbf{q},t)$, $\tilde{v}_i(\mathbf{q},\omega)$ Mass-weighted velocity in $\mathbf{q}$-space; its time FT
$\Phi(\mathbf{q},\omega)$ Spectral energy density (total kinetic energy in $(\mathbf{q},\omega)$)
$\Phi_\nu(\mathbf{q},\omega)$ Mode-resolved SED for branch $\nu$
$Q_{\mathbf{q}\nu}(t)$ Mode amplitude (projection onto $\mathbf{e}_{\nu}(\mathbf{q})$)
Lifetimes Meaning
$\Gamma_\nu$, $\Gamma_{\mathbf{q}\nu}$ FWHM of SED peak (angular frequency, rad/s)
$\tau_\nu$, $\tau_{\mathbf{q}\nu}$ Phonon lifetime $\tau = 2/\Gamma$ (s)
$S_\nu(\omega)$ Lorentzian power spectrum of mode $\nu$
Thermal conductivity Meaning
$V$ Crystal volume
$C_{\mathbf{q}\nu}$ Mode heat capacity (J/K per mode $(\mathbf{q},\nu)$)
$v_{\mathbf{q}\nu}^\alpha$, $\mathbf{v}_{\mathbf{q}\nu}$ Group velocity $\partial\omega_{\mathbf{q}\nu}/\partial q_\alpha$
$\kappa^{\alpha\beta}$, $\kappa$ Thermal conductivity tensor; scalar $\kappa = \frac{1}{3}\operatorname{tr}(\kappa)$
$n_\text{bands}$ Number of phonon branches

Footnotes


  1. In the harmonic approximation the potential is quadratic in atomic displacements and force constants are fixed at the equilibrium structure, which is typically 0 K. Quasi-harmonic keeps the potential quadratic at each volume but lets force constants and hence phonon frequencies depend on volume (e.g. via thermal expansion), giving temperature-dependent dispersions without full anharmonicity (no cubic or higher-order terms in the potential). 

  2. The force constants can be obtained from a supercell using frozen-phonon or density-functional perturbation theory (DFPT), or from a fitted force-constant model.

References

[1] J.A. Thomas, J.E. Turney, R.M. Iutzi, C.H. Amon, A.J.H. McGaughey, Predicting phonon dispersion relations and lifetimes from the spectral energy density, Phys. Rev. B. 81 (2010) 081411(R). https://doi.org/10.1103/PhysRevB.81.081411.
[2] J.E. Turney, E.S. Landry, A.J.H. McGaughey, C.H. Amon, Predicting phonon properties and thermal conductivity from anharmonic lattice dynamics calculations and molecular dynamics simulations, Phys. Rev. B. 79 (2009) 064301. https://doi.org/10.1103/PhysRevB.79.064301.
[3] J.A. Thomas, J.E. Turney, R.M. Iutzi, C.H. Amon, A.J.H. McGaughey, Comparing and evaluating spectral energy methods for predicting phonon properties, J. Comput. Theor. Nanosci. 11 (2014) 249--256. https://doi.org/10.1166/jctn.2014.3383.
[4] dynasor developers, Spectral energy density and theoretical background, (n.d.).
[5] J.P. Boon, S. Yip, Molecular Hydrodynamics, Dover, 1991.
[6] J.M. Ziman, Electrons and Phonons: The Theory of Transport Phenomena in Solids, Oxford University Press, Oxford, 1960.
[7] T. Liang, W. Jiang, K. Xu, H. Bu, Z. Fan, W. Ouyang, J. Xu, PYSED: A tool for extracting kinetic-energy-weighted phonon dispersion and lifetime from molecular dynamics simulations, J. Appl. Phys. 138 (2025) 075101. https://doi.org/10.1063/5.0278798.


Sunday, February 15, 2026

Dirac's my Prof, but von Neumann, my favorite?

When you read articles or books on the history of physics you always find some passage that mentions the name von Neumann. This man, John von Neumann, appears to be on another level and probably is the person you have in mind when someone says "oh, he's an Einstein!"1. For more on his life and work, see the MacTutor biography and the Institute for Advanced Study page. So I spent the last hour reading up on him and just want to quickly capture what this guy was able to do.

Figure 1. John von Neumann (public domain)

Childhood

He was born in 1903 in Budapest, Hungary; the MacTutor biography has all his family details as well, very interesting. At an early age it was clear he was a prodigy. He could divide 8-digit numbers in his head and memorize books by age 6, and by age 8 he was fluent in ancient Greek. He was a sponge for knowledge, and because his parents were wealthy they always had tutors to aid his talents. His profile says that by age 10 he was doing college-level math and basically was a genius.

College

I don't know much about the universities in Hungary, but at age 17 he goes to study at the University of Budapest and then continues on at universities in Berlin and Zurich. He gets his PhD in mathematics at 22 and then also gets a degree in engineering because he can. Then he starts to work in areas of quantum mechanics and basically provides the mathematical foundations for what Heisenberg and Schrödinger had proposed; he makes QM a rigorous theory [1]. Okay, he is 23 now and just getting started.

Out in the wild

Gets interested in mathematical models of strategy and invents the new field of game theory. He proves the minimax theorem [2], a fundamental result that shows how to make optimal decisions when opponents are trying to beat you. Turns out that things like poker, war, economics, all the same math. With Oskar Morgenstern he later writes the book that launches the field [3]. Von Neumann publishes it casually.

Gets invited to America to work at the Institute for Advanced Study (IAS) in Princeton. Becomes the youngest professor at IAS. Has a nice neighbor called Albert Einstein, who is considered the smartest person alive, but I'd wager von Neumann was superior in raw brain-compute, as many who knew both seem to have indicated.

John von Neumann becomes the man that people bring their unsolved problem to, the one they have been working on for months. He then proceeds to solve it while they are still explaining the problem and what they have tried. He was the kind of guy who would be completing their proofs before they finished describing the paper they were working on. Mathematicians start collecting "von Neumann stories" and everyone has one; a legend is born and he keeps pumping out impactful content.

Starts to work on ergodic theory, operator theory, and lattice theory. Publishes dozens of foundational papers that would be a career for a normal person in my view. But for von Neumann it's just another working day at IAS.

Interestingly, he was not like our Prof. Dirac; he was no hermit and was known for his social skills and ability to party. He is a legend in many ways. He was also known for liking to drink and tell impolite jokes, and he was not a good driver, with many car crashes. Everyone, even the best, has flaws. In general von Neumann was a well-liked man.

The war years

John von Neumann becomes a U.S. citizen and then, as one would expect given his intellect and the looming threat of war, is recruited to join the Manhattan Project. He helps design the implosion mechanism for the atomic bomb and calculates the explosive lenses needed. His math helps build Fat Man, the bomb that destroys Nagasaki. Not sure how this made von Neumann feel or how he thought about the devastation of the bomb.

He continues with work on the hydrogen bomb, which turns out to be even more destructive. To him, it seems, it was all just math. He decides computing and computers are going to be the future and works on the ENIAC and EDVAC computers. He writes the "First Draft of a Report on the EDVAC" [4], which describes the von Neumann architecture that is still used in computers today. Basically, von Neumann invented the modern design for every laptop, phone, and server that exists today. Thank you, Prof. von Neumann.

Post-war years

He starts to think about self-replicating machines and cellular automata [5], basically the precursor of Conway's Game of Life. He starts to think about the future and predicts nanotechnology and AI decades early.

Sadly, in his early fifties he is diagnosed with bone cancer, probably from radiation exposure at nuclear tests. He dies at 53. So what did he achieve in only 53 years?

  • Quantum mechanics foundations
  • Game theory
  • Von Neumann architecture (i.e., computers)
  • Cellular automata
  • Nuclear power
  • Theories in economics, meteorology, biology

From reading about von Neumann, the impression is that Einstein had deeper intuition and Gödel had purer logic, but nobody was faster than von Neumann. The smartest people of the 20th century clearly saw him as uniquely different.

Footnotes


  1. There's no doubt Einstein was a very brilliant thinker and one of a kind, but he probably was so influential for his creative and abstract thought rather than his strong foundational knowledge and analytical ability, although he obviously is at the top of the distribution for that as well. 

References

[1] J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton University Press, 1955.
[2] J. von Neumann, Zur Theorie der Gesellschaftsspiele, Math. Ann. 100 (1928) 295--320. https://doi.org/10.1007/BF01448847.
[3] J. von Neumann, O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, 1944.
[4] J. von Neumann, First Draft of a Report on the EDVAC, IEEE Ann. Hist. Comput. 15 (1993) 27--75. https://doi.org/10.1109/85.238389.
[5] J. von Neumann, A.W. Burks, Theory of Self-Reproducing Automata, University of Illinois Press, Urbana, 1966.
