MA Emilio Schmidt: Difference between revisions

From Arbeitsgruppe Kuiper


The following figure shows the corresponding numerically determined spectrum for \((m,\bar{k}) = (0,1)\). The dashed horizontal line marks the threshold \(-\bar{k}^2\). The inset shows the last ten eigenvalues of the spectrum and illustrates that only a single eigenvalue lies above this threshold, while all remaining eigenvalues stay below it.
[[File:MAES-SpectrumM0K1Xmax25N500.png|400px|thumb|center|Spectrum of the numerically determined eigenvalues \(\bar{\sigma}_n^2\) for the fixed pair \((m,\bar{k}) = (0,1)\), on a computational domain with \(x_{\max} = 25\) and \(N = 250\) grid points. The dashed horizontal line marks the threshold \(-\bar{k}^2\). The inset shows the last ten eigenvalues.]]



Revision as of 15:56, 10 April 2026

Introduction

This master's thesis investigates the dynamics and stability of filamentary gas structures using numerical simulations. These filaments are of physical interest because they are common sites of star formation.

As a theoretical reference, this work builds on the analytically derived magnetohydrodynamic models of Fiege and Pudritz (2000a). They solved the stationary magnetohydrodynamic equations for cylindrical, self-gravitating filaments with helical magnetic fields and obtained equilibrium solutions. Building on these equilibrium solutions, Fiege and Pudritz (2000b) performed a linear stability analysis to investigate the instabilities that occur, their growth rates, and their characteristic wavelengths.

The aim of this work is to numerically reproduce the equilibrium solutions described by Fiege and Pudritz (2000a). These equilibria are then perturbed in order to investigate the temporal evolution of the perturbations and to compare the results with those of the linear stability analysis by Fiege and Pudritz (2000b).

Theoretical Framework

The isothermal self-gravitating cylinder

We seek the stationary density profile \(\rho\) of an infinitely long, isothermal, self-gravitating gas cylinder. To find it, the hydrostatic Euler equation must be solved, which is given by:

\begin{align} \boldsymbol{\nabla} P = \boldsymbol{f}_\text{ext} \end{align}

Here, \(P\) is the pressure field and \(\boldsymbol{f}_\text{ext}\) is the force density acting on the gas. In the considered case, this is the gas's own gravity. Since the gravitational field is a conservative force field, it can be expressed by a potential \(\phi_\text{ext}\):

\begin{align} \boldsymbol{f}_\text{ext} = - \rho \boldsymbol{\nabla} \phi_\text{ext} \end{align}

Inserting yields:

\begin{align} \boldsymbol{\nabla} P = - \rho \boldsymbol{\nabla} \phi_\text{ext} \end{align}

The pressure field of the gas under consideration can be described by the equation of state of an isothermal gas, which is given by:

\begin{align} P = \frac{R T}{M} \rho \end{align}

Here, \(R\) denotes the universal gas constant, \(T\) the spatially homogeneous temperature distribution, and \(M\) the molar mass of the gas. Inserting into the hydrostatic Euler equation yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{M} \boldsymbol{\nabla} \rho = - \rho \boldsymbol{\nabla} \phi_\text{ext} \\[6pt] \Leftrightarrow\;& \frac{1}{\rho} \boldsymbol{\nabla} \rho = - \frac{M}{R T} \boldsymbol{\nabla} \phi_\text{ext} \\[6pt] \Leftrightarrow\;& \boldsymbol{\nabla} \ln(\rho) = - \frac{M}{R T} \boldsymbol{\nabla} \phi_\text{ext} \\[6pt] \Rightarrow\;& \Delta \ln(\rho) = - \frac{M}{R T} \Delta \phi_\text{ext} \end{align}

It proves useful to use the Poisson equation of Newtonian gravity, which is given by:

\begin{align} \Delta \phi_\text{ext} = 4 \pi G \rho \end{align}

Inserting yields:

\begin{align} \phantom{\Rightarrow}\;& \Delta \ln(\rho) = - \frac{4 \pi G M}{R T} \rho \, , \quad \text{def.:} \, a := \frac{4 \pi G M}{R T} \\[6pt] \Leftrightarrow\;& \Delta \ln(\rho) = - a \rho \end{align}

For an infinitely long, straight, self-gravitating gas cylinder without an externally imposed azimuthal or axial structure, the configuration is symmetric under rotations about the cylinder axis and translations along it. The underlying equations share this symmetry, so the corresponding derivatives vanish in the equation under consideration:

\begin{align} \phantom{\Rightarrow}\;& \frac{1}{r} \partial_r \big(r \partial_r \ln(\rho)\big) = - a \rho \\[6pt] \Leftrightarrow\;& \partial_r \bigg(r \frac{\partial_r \rho}{\rho}\bigg) = - a r \rho \\[6pt] \Leftrightarrow\;& \frac{\partial_r \rho}{\rho} - r \bigg(\frac{\partial_r \rho}{\rho}\bigg)^2 + r \frac{\partial_r^2 \rho}{\rho} = - a r \rho \end{align}

Since the density is positive by definition, the following substitution can be made without restriction:

\begin{align} u(r) = \ln\left(\frac{\rho(r)}{\rho_0}\right). \end{align}

Here, \(\rho_0 = \text{const.} > 0\) is a constant reference density. Its specific value is irrelevant for further consideration. It only serves to ensure that the argument of the logarithm is dimensionless. The first two derivatives of the new quantity \(u\) are then given by:

\begin{align} \partial_r u = \frac{\partial_r \rho}{\rho} \quad \text{and} \quad \partial_r^2 u = \frac{\partial_r^2 \rho}{\rho} - \left(\frac{\partial_r \rho}{\rho}\right)^2 \end{align}

Inserting this into the differential equation under consideration yields:

\begin{align} \frac{\partial_r u}{r} + \partial_r^2 u = - a \rho_0 \exp(u) \end{align}

To simplify this equation further, another substitution is performed:

\begin{align} z = u + \sqrt{2} t \, , \quad \text{with} \, t = \sqrt{2} \ln\left(\sqrt{a \rho_0} r\right) \end{align}

With \(\partial_r t = \frac{\sqrt{2}}{r}\), the following applies for the first two derivatives of \(u\) with respect to \(r\):

\begin{align} \partial_r u = \frac{\sqrt{2}}{r} \partial_t u \quad \text{and} \quad \partial_r^2 u = - \frac{\sqrt{2}}{r^2} \partial_t u + \frac{2}{r^2} \partial_t^2 u \end{align}

Inserting yields:

\begin{align} \phantom{\Rightarrow}\;& 2 \partial_t^2 u = - a \rho_0 r^2 \exp(u) \\[6pt] \Leftrightarrow\;& 2 \partial_t^2 u = - \exp(z) \end{align}

Furthermore, the following applies:

\begin{align} \phantom{\Rightarrow}\;& \partial_t^2 z = \partial_t^2 u + \sqrt{2} \partial_t^2 t \\[6pt] \Leftrightarrow\;& \partial_t^2 z = \partial_t^2 u \end{align}

Thus, it follows that:

\begin{align} \phantom{\Rightarrow}\;& 2 \partial_t^2 z = - \exp(z) \\[6pt] \Rightarrow\;& 2 \left(\partial_t z\right) \left(\partial_t^2 z\right) = - \left(\partial_t z\right) \exp(z) \\[6pt] \Leftrightarrow\;& \partial_t\left(\left(\partial_t z\right)^2 + \exp(z)\right) = 0 \\[6pt] \Rightarrow\;& \left(\partial_t z\right)^2 + \exp(z) = c_1 \, , \quad c_1 \in \mathbb{R} \\[6pt] \Leftrightarrow\;& \partial_t z = \pm \sqrt{c_1 - \exp(z)} \end{align}

At first, it might seem that both signs need to be treated separately. However, it can actually be shown that both equations lead to the same solutions[1]. In the following, therefore, only the equation with a positive sign will be considered. To solve this, the following substitution is used:

\begin{align} z = \ln(y) \end{align}

This results in the following for the differential equation under consideration:

\begin{align} \frac{\partial_t y}{y} = \sqrt{c_1 - y} \end{align}

Separation of variables yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{\text{d}y}{y\sqrt{c_1 - y}} = \text{d}t \\[6pt] \Rightarrow\;& \frac{1}{\sqrt{c_1}} \ln\left(\bigg|\frac{\sqrt{c_1 - y} - \sqrt{c_1}}{\sqrt{c_1 - y} + \sqrt{c_1}}\bigg|\right) = t + C_2 \\[6pt] \Leftrightarrow\;& \bigg|\frac{\sqrt{c_1 - y} - \sqrt{c_1}}{\sqrt{c_1 - y} + \sqrt{c_1}}\bigg| = \exp\left(\sqrt{c_1} t + \sqrt{c_1} C_2\right) \, , \quad \text{with} \, c_2 = \exp\left(\sqrt{c_1} C_2\right) \\[6pt] \Leftrightarrow\;& \bigg|\frac{\sqrt{c_1 - y} - \sqrt{c_1}}{\sqrt{c_1 - y} + \sqrt{c_1}}\bigg| = c_2 \exp\left(\sqrt{c_1} t\right) \, , \quad \text{with} \, A = c_2 \exp\left(\sqrt{c_1} t\right) \\[6pt] \Leftrightarrow\;& \bigg|\frac{\sqrt{c_1 - y} - \sqrt{c_1}}{\sqrt{c_1 - y} + \sqrt{c_1}}\bigg| = A \, , \quad \text{with} \, y = \exp(z) > 0 \Rightarrow \sqrt{c_1 - y} < \sqrt{c_1} \; \forall y \\[6pt] \Leftrightarrow\;& \frac{\sqrt{c_1} - \sqrt{c_1 - y}}{\sqrt{c_1} + \sqrt{c_1 - y}} = A \\[6pt] \Leftrightarrow\;& y = 4 c_1 \frac{A}{(1 + A)^2} \\[6pt] \Leftrightarrow\;& \exp(u) = \frac{4 c_1}{a \rho_0 r^2} \frac{A}{(1 + A)^2} \\[6pt] \Leftrightarrow\;& \rho = \frac{4 c_1 \rho_0}{a \rho_0 r^2} \frac{A}{(1 + A)^2} \end{align}

The quantity \(A\) can be written as follows:

\begin{align} \phantom{\Rightarrow}\;& A = c_2 \exp\left(\sqrt{c_1} t\right) \\[6pt] \Leftrightarrow\;& A = c_2 \exp\left(\sqrt{2 c_1} \ln(\sqrt{a \rho_0} r)\right) \\[6pt] \Leftrightarrow\;& A = c_2 (\sqrt{a \rho_0} r)^{\sqrt{2 c_1}} \end{align}

Define additionally:

\begin{align} b := \sqrt{2 c_1} \quad \text{and} \quad r_0 := \left(c_2^\frac{1}{b} \sqrt{a \rho_0}\right)^{-1} \end{align}

This yields the following general solution for the differential equation under consideration:

\begin{align} \phantom{\Rightarrow}\;& \rho(r) = \frac{2 b^2}{a r^2} \frac{\left(\frac{r}{r_0}\right)^b} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \\[6pt] \Leftrightarrow\;& \rho(r) = \frac{2 b^2}{a r_0^2} \frac{\left(\frac{r}{r_0}\right)^{b-2}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \end{align}
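As a consistency check, this general solution can be substituted back into the cylindrical equation \(\frac{1}{r} \partial_r \big(r \partial_r \ln(\rho)\big) = - a \rho\). The following is a minimal sympy sketch (not part of the thesis code; symbol names and the parameter values used for the spot-check are chosen purely for illustration):

```python
import sympy as sp

# symbols of the derivation: radius r, constant a, exponent b, scale radius r0
r, a, b, r0 = sp.symbols('r a b r0', positive=True)

# general solution rho(r) = (2 b^2 / (a r0^2)) (r/r0)^(b-2) / (1 + (r/r0)^b)^2
rho = 2*b**2/(a*r0**2) * (r/r0)**(b - 2) / (1 + (r/r0)**b)**2

# residual of (1/r) d/dr ( r d/dr ln(rho) ) + a*rho, which should vanish
residual = sp.diff(r*sp.diff(sp.log(rho), r), r)/r + a*rho

# exact spot-check at arbitrary rational parameter values
value = sp.simplify(residual.subs({a: 1, b: 3, r0: 2, r: sp.Rational(5, 7)}))
print(value)
```

Since the substitution uses exact rational values, the residual evaluates to exactly zero, independent of the chosen parameters.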

The integration constants \(r_0\) and \(b\) must then be determined using appropriate boundary conditions. For the problem considered, it makes sense to specify boundary conditions for the center of the cylinder, i.e., for \(r = 0\). In the considered case, the density at the center of the cylinder must be finite and nonzero, since both a vanishing and a diverging density at the origin would lead to unphysical behavior:

\begin{align} \rho(r=0) = \rho_c \, , \quad \text{with} \quad 0 < \rho_c < \infty \end{align}

Furthermore, rotational symmetry and the requirement for a smooth, regular solution imply that the density profile at the origin should not have a gradient. Consequently, \(\rho(r)\) has an extremum at \(r=0\), and the following applies:

\begin{align} \partial_r \rho(r) \big|_{r = 0} = 0 \end{align}

It should be noted that the substitution \(t=\sqrt{2}\ln\big(\sqrt{a\rho_0}r\big)\) is only defined for \(r>0\). The differential equation was thus initially solved on the domain \(r \in \mathbb{R}_{>0}\). In order to formulate boundary conditions at \(r=0\), the solution must be extended continuously to \(r=0\). Accordingly, the central values are understood as right-hand limits:

\begin{align} \rho(r = 0) = \lim\limits_{r \rightarrow 0} \rho(r) \quad \text{and} \quad \partial_r \rho(r) \big|_{r = 0} = \lim\limits_{r \rightarrow 0} \partial_r \rho(r) \end{align}

With the first boundary condition, the following then applies:

\begin{align} \phantom{\Rightarrow}\;& \lim\limits_{r \rightarrow 0} \rho(r) = \rho_c \\[6pt] \Leftrightarrow\;& \lim\limits_{r \rightarrow 0} \frac{2 b^2}{a r_0^2} \frac{\left(\frac{r}{r_0}\right)^{b-2}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} = \rho_c \end{align}

Since the denominator converges to \(1\) in the limit case \(r\rightarrow 0\), the local behavior of the density in the immediate vicinity of the origin is determined exclusively by the numerator. This, together with the fact that the density at the origin must not vanish or diverge, directly implies that the following must hold:

\begin{align} b = 2 \end{align}

Inserting into the solution for the density profile provides:

\begin{align} \rho(r) = \frac{8}{a r_0^2} \frac{1} {\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2} \end{align}

Furthermore, the first boundary condition implies that:

\begin{align} \phantom{\Rightarrow}\;& \rho_c = \frac{8}{a r_0^2} \\[6pt] \Leftrightarrow\;& r_0^2 = \frac{8}{a \rho_c} \end{align}

Inserting this gives the stationary density profile of an infinitely long, isothermal, self-gravitating gas cylinder:

\begin{align} \rho(r) = \frac{\rho_c}{\left(1 + \frac{1}{8} a \rho_c r^2\right)^2} \end{align}

It is noticeable that the second boundary condition does not enter explicitly into the determination of the integration constants \(r_0\) and \(b\). The reason is that the requirement of a finite, nonzero central density already restricts the solution so strongly that only the case \(b=2\) remains. The density profile then takes the above form and automatically satisfies the second boundary condition, which is therefore not an independent requirement but is already implicitly contained in the regularity and symmetry assumptions.

In this context, the total mass per unit length \(\lambda_{\text{total}}\) of an infinitely long, isothermal gas cylinder held together by its own gravity can be introduced:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\text{total}} = 2 \pi \int_{0}^{\infty} \rho(r) r \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{total}} = \frac{8 \pi}{a} \\[6pt] \Leftrightarrow\;& \lambda_{\text{total}} = \frac{2 R T}{G M} \end{align}
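The value of \(\lambda_{\text{total}}\) can be confirmed by direct numerical quadrature of the density profile. A short sketch with scipy; the parameter choice \(a = r_0 = 1\) is arbitrary and serves only the check:

```python
import numpy as np
from scipy.integrate import quad

a, r0 = 1.0, 1.0                      # arbitrary parameter choice for the check
rho_c = 8.0 / (a * r0**2)             # central density for these parameters

# density profile rho(r) = rho_c / (1 + (r/r0)^2)^2
rho = lambda r: rho_c / (1.0 + (r / r0)**2)**2

# mass per unit length: 2*pi * integral of rho(r) r dr from 0 to infinity
lam_total, _ = quad(lambda r: 2.0 * np.pi * rho(r) * r, 0.0, np.inf)

print(lam_total, 8.0 * np.pi / a)     # both values should agree
```

The quadrature result matches \(8\pi/a\) to within the integration tolerance.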

The solution for the density profile can therefore also be written as follows:

\begin{align} \rho(r) = \frac{\lambda_{\text{total}}}{\pi r_0^2} \frac{1}{\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2} \end{align}

With the determined density distribution, the gravitational potential induced by the density distribution can now be determined using Poisson's equation. The Poisson equation, using the given symmetry, is given by:

\begin{align} \phantom{\Rightarrow}\;& \frac{1}{r} \partial_r \left(r \partial_r \phi_\text{ext}(r)\right) = 4 \pi G \rho(r) \\[6pt] \Leftrightarrow\;& \partial_r \left(r \partial_r \phi_\text{ext}(r)\right) = 4 \pi G \rho(r) r \end{align}

Integration yields:

\begin{align} \phantom{\Rightarrow}\;& r \partial_r \phi_\text{ext}(r) = 4 \pi G \int_{0}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime \\[6pt] \Leftrightarrow\;& r \partial_r \phi_\text{ext}(r) = \frac{32 \pi G}{a r_0^2} \int_{0}^{r} \frac{r^\prime}{\left(1 + \left(\frac{r^\prime}{r_0}\right)^2\right)^2} \text{d} r^\prime \\[6pt] \Leftrightarrow\;& r \partial_r \phi_\text{ext}(r) = \frac{16 \pi G}{a} \frac{\left(\frac{r}{r_0}\right)^2} {1 + \left(\frac{r}{r_0}\right)^2} \\[6pt] \Leftrightarrow\;& \partial_r \phi_\text{ext}(r) = \frac{16 \pi G}{a r_0} \frac{\frac{r}{r_0}}{1 + \left(\frac{r}{r_0}\right)^2} \end{align}

Another integration yields the gravitational potential of an infinitely long, isothermal, self-gravitating gas cylinder:

\begin{align} \phantom{\Rightarrow}\;& \phi_\text{ext}(r) = \frac{16 \pi G}{a r_0} \int_{0}^{r} \frac{\frac{r^\prime}{r_0}}{1 + \left(\frac{r^\prime}{r_0}\right)^2} \text{d} r^\prime \\[6pt] \Leftrightarrow\;& \phi_\text{ext}(r) = \frac{8 \pi G}{a} \ln\left(1 + \left(\frac{r}{r_0}\right)^2\right) \\[6pt] \Leftrightarrow\;& \phi_\text{ext}(r) = G \lambda_{\text{total}} \ln\left(1 + \left(\frac{r}{r_0}\right)^2\right) \end{align}
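One can verify symbolically that this potential satisfies the cylindrical Poisson equation together with the density profile derived above. A brief sympy sketch (symbol names are illustrative):

```python
import sympy as sp

r, a, r0, G = sp.symbols('r a r0 G', positive=True)

rho = (8/(a*r0**2)) / (1 + (r/r0)**2)**2          # density profile (b = 2)
phi = (8*sp.pi*G/a) * sp.log(1 + (r/r0)**2)       # derived potential

# cylindrical Poisson equation: (1/r) d/dr ( r d/dr phi ) = 4 pi G rho
lhs = sp.diff(r*sp.diff(phi, r), r)/r
residual = sp.simplify(lhs - 4*sp.pi*G*rho)       # should be 0
print(residual)
```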

  1. Switching the sign merely flips the sign of \(b\), and the final \(\rho\) is invariant under \(b \to -b\).

The case with a core

We now consider an infinitely long cylindrical core with radius \(R_{\text{min}}\), surrounded by an isothermal gas that is held together partly by the gravity of the core and partly by its own self-gravity. It can be shown that this system obeys the same differential equation for the density distribution as the case without a core. The starting point is again the hydrostatic Euler equation. For this, the external force \(\text{d} \boldsymbol{F}_{\text{ext}}\) exerted by the gas and the core on a mass element \(\rho(r) \text{d} V\) is required, which is given by:

\begin{align} \text{d} \boldsymbol{F}_{\text{ext}} = - \frac{2 G \lambda(r) \rho(r) \text{d} V}{r} \boldsymbol{e}_r \, , \quad \text{with} \quad r > R_{\text{min}} \end{align}

Here, \(\lambda(r)\) is the mass per unit length enclosed up to radius \(r\). It is given by:

\begin{align} \phantom{\Rightarrow}\;& \lambda(r) = 2 \pi \int_{0}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime \\[6pt] \Leftrightarrow\;& \lambda(r) = \lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime \end{align}

Here, \(\lambda_{\text{core}}\) is the mass per unit length of the core. Substituting this into the hydrostatic Euler equation gives:

\begin{align} \phantom{\Rightarrow}\;& \boldsymbol{\nabla} P = - \frac{2 G \rho}{r} \left(\lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime\right) \boldsymbol{e}_r \\[6pt] \Leftrightarrow\;& \frac{R T}{M} \boldsymbol{\nabla} \rho = - \frac{2 G \rho}{r} \left(\lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime\right) \boldsymbol{e}_r \end{align}

Due to the given cylinder symmetry, the following then applies:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{M} \partial_r \rho = - \frac{2 G \rho}{r} \left(\lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime\right) \\[6pt] \Leftrightarrow\;& \frac{R T}{2 G M} \frac{r \partial_r \rho}{\rho} = - \left(\lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime\right) \end{align}

Differentiating with respect to \(r\) then yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{2 G M} \partial_r \left(\frac{r \partial_r \rho}{\rho}\right) = - 2 \pi \rho r \\[6pt] \Leftrightarrow\;& \frac{\partial_r \rho}{\rho} - r \left(\frac{\partial_r \rho}{\rho}\right)^2 + r \frac{\partial_r^2 \rho}{\rho} = - a r \rho \end{align}

Therefore, the general solution of the system under consideration is given by the general solution of the system without the core:

\begin{align} \phantom{\Rightarrow}\;& \rho(r) = \frac{2 b^2}{a r_0^2} \frac{\left(\frac{r}{r_0}\right)^{b-2}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \\[6pt] \Leftrightarrow\;& \rho(r) = \frac{b^2 \lambda_{\text{total}}}{4 \pi r_0^2} \frac{\left(\frac{r}{r_0}\right)^{b-2}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \end{align}

A condition for the existence of physically meaningful density profiles can be derived from Euler's hydrostatic equation. Under the assumed symmetry, the following then applies:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{M} \text{d}_r \rho = - \frac{2 G \lambda \rho}{r} \\[6pt] \Leftrightarrow\;& \frac{R T}{M} \text{d}_r \ln(\rho) = - \frac{2 G \lambda}{r} \end{align}

Inserting the general solution and carrying out the derivatives yields:

\begin{align} \frac{R T}{M} \left(\frac{b - 2}{r} - \frac{2 b}{r} \frac{\left(\frac{r}{r_0}\right)^b}{1 + \left(\frac{r}{r_0}\right)^b}\right) = - \frac{2 G \lambda}{r} \end{align}

Since this equation holds for all \(r \geq R_{\text{min}}\), it must be satisfied in particular on the surface of the core. With \(\lambda(r = R_{\text{min}}) = \lambda_{\text{core}}\), it then follows that:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{M} \left(\frac{b - 2}{R_{\text{min}}} - \frac{2 b}{R_{\text{min}}} \frac{\left(\frac{R_{\text{min}}}{r_0}\right)^b}{1 + \left(\frac{R_{\text{min}}}{r_0}\right)^b}\right) = - \frac{2 G \lambda_{\text{core}}}{R_{\text{min}}} \\[6pt] \Leftrightarrow\;& \frac{r_0}{R_{\text{min}}} = \left(\frac{b + 2 - 4 \frac{\lambda_{\text{core}}}{\lambda_{\text{total}}}} {b - 2 + 4 \frac{\lambda_{\text{core}}}{\lambda_{\text{total}}}}\right)^{\frac{1}{b}} \end{align}

Since the left-hand side of the equation is positive by definition, the expression inside the root must be positive as well. For the exponent \(b\), this implies:

\begin{align} b > \biggl|4 \frac{\lambda_{\text{core}}}{\lambda_{\text{total}}} - 2\biggr| \end{align}

The gas mass per unit length \(\lambda_{\text{gas}}\) can then be expressed in terms of the mass per unit length of the core and the exponent. The gas mass per unit length is generally given by:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\text{gas}} = 2 \pi \int_{R_{\text{min}}}^{\infty} \rho(r) r \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{gas}} = \frac{b^2 \lambda_{\text{total}}}{2 r_0^b} \int_{R_{\text{min}}}^{\infty} \frac{r^{b-1}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{gas}} = \frac{b \lambda_{\text{total}}}{2} \frac{1}{1 + \left(\frac{R_{\text{min}}}{r_0}\right)^b} \end{align}

Using the expression for \(\frac{r_0}{R_{\text{min}}}\) derived above, it follows:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\text{gas}} = \frac{b + 2}{4} \lambda_{\text{total}} - \lambda_{\text{core}} \\[6pt] \Leftrightarrow\;& \frac{\lambda_{\text{gas}} + \lambda_{\text{core}}}{\lambda_{\text{total}}} = \frac{b + 2}{4} \end{align}
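The closed form \(\lambda_{\text{gas}} = \frac{b \lambda_{\text{total}}}{2} \big(1 + (R_{\text{min}}/r_0)^b\big)^{-1}\) can be checked against direct quadrature of the general profile for some \(b \neq 2\). A small scipy sketch; all parameter values are arbitrary choices for the check:

```python
import numpy as np
from scipy.integrate import quad

lam_total, r0, b, R_min = 1.0, 1.0, 3.0, 0.5   # arbitrary test parameters

# general density profile rho(r) for exponent b
rho = lambda r: (b**2 * lam_total / (4*np.pi * r0**2)
                * (r/r0)**(b - 2) / (1 + (r/r0)**b)**2)

# gas mass per unit length by direct integration from R_min outwards
lam_gas, _ = quad(lambda r: 2*np.pi * rho(r) * r, R_min, np.inf)

# closed-form result: (b lam_total / 2) / (1 + (R_min/r0)^b)
lam_gas_formula = 0.5 * b * lam_total / (1 + (R_min/r0)**b)
print(lam_gas, lam_gas_formula)
```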

If the core is merely a numerical necessity, it only serves to emulate the gas inside the core radius. In this situation, the case \(b = 2\) continues to apply, and for a given core radius the core mass per unit length must be chosen as:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\text{core}} = 2 \pi \int_{0}^{R_{\text{min}}} \rho(r,b=2) r \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{core}} = \frac{2 \lambda_{\text{total}}}{r_0^2} \int_{0}^{R_{\text{min}}} \frac{r}{\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2} \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{core}} = \frac{R_{\text{min}}^2} {R_{\text{min}}^2 + r_0^2}\lambda_{\text{total}} \end{align}

This results in the following for the gas mass per unit length in the system:

\begin{align} \lambda_{\text{gas}} = \frac{r_0^2} {R_{\text{min}}^2 + r_0^2}\lambda_{\text{total}} \end{align}

The case with a core (Lothar)

The solution to the ODE reads

\begin{align} \rho &= \frac{\lambda b^2(r/r_0)^{b-2}}{4\pi r_0^2(1+(r/r_0)^b)^2}\\[2ex] \text{with}\quad \lambda &= \frac{2k_\text{B}T}{Gm} = \int_0^\infty 2\pi r\rho(r,r_0,b{=}2)\,\text dr \end{align} being the only possible total mass/length in the case of gas only, where \(b=2\) is necessary for zero force at \(r=0\). Notably, \(\lambda\) does not depend on \(r_0\), i.e. every width of \(\rho\) is equally allowed. The reason is that no length scale can be constructed from the input parameters \(k_\text{B}T\), \(G\) and \(m\).

This changes in the presence of a core with mass/length \(\lambda_\text{c}\) and radius \(R\). Vanishing force at \(r=0\) is replaced by force balance at \(r=R\), which leads to the condition \begin{align} \frac{r_0}{R} &= \left(\frac{b+2-4\lambda_\text{c}/\lambda}{b-2+4\lambda_\text{c}/\lambda}\right)^{1/b} ~, \end{align} which in turn introduces the constraint \(b>\vert 4\lambda_\text{c}/\lambda-2\vert\).

The resulting gas mass is \begin{gather} \int_R^\infty 2\pi r\rho(r,r_0,b)\,\text dr = \lambda\frac{b+2}{4}-\lambda_\text{c}\\ \Leftrightarrow\quad \frac{\lambda_\text{c}+\lambda_\text{gas}}{\lambda}=\frac{b+2}{4} ~. \end{gather} That is, the amount of gas can be tuned via \(b\). Note that for \(\lambda_\text{c}<\lambda/2\), it cannot be chosen arbitrarily small.

If the core is actually a numerical necessity and only meant to emulate the gas contained inside, it's \(b=2\) again, and the most convenient choice is \(\lambda_\text{c}=\lambda_\text{gas}=\lambda/2\) which yields \(r_0=R\).

Free-fall time of an infinitely long gas cylinder

In the following, the free fall time \(t_{\text{ff}}\) of an infinitely long gas cylinder is calculated for the previously derived density profile, with the cylinder initially at rest. The calculation is performed in the pressureless limit \(P \rightarrow 0\). The dynamics of the system are then determined solely by its own gravity.

To determine the free fall time, an infinitesimal cylindrical shell with radius \(r_{\text{i}}\) and mass \(\rho(r_i) \text{d} V\) is considered, whose gas, according to Newton's shell theorem, is gravitationally accelerated radially inwards by the mass enclosed by the cylindrical shell. Due to symmetry, the problem is reduced to a one-dimensional, purely radial motion. The equation of motion to be solved is therefore:

\begin{align} \phantom{\Rightarrow}\;& \rho(r_{\text{i}}) \text{d} V \frac{\text{d}^2 r}{\text{d} t^2} = - \frac{2 G \lambda(r_{\text{i}}) \rho(r_{\text{i}}) \text{d} V}{r} \\[6pt] \Leftrightarrow\;& \frac{\text{d}^2 r}{\text{d} t^2} = - \frac{2 G \lambda(r_{\text{i}})}{r} \end{align}

Here, \(\lambda(r_{\text{i}})\) is the mass enclosed by the cylinder shell per unit length, which is given by the following equation for the given density distribution:

\begin{align} \phantom{\Rightarrow}\;& \lambda(r_{\text{i}}) = 2 \pi \int_{0}^{r_{\text{i}}} \rho(r) r \text{d} r \\[6pt] \Leftrightarrow\;& \lambda(r_{\text{i}}) = \frac{2 \lambda_{\text{total}}}{r_0^2} \int_{0}^{r_{\text{i}}} \frac{r}{\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2} \text{d} r \\[6pt] \Leftrightarrow\;& \lambda(r_{\text{i}}) = \frac{r_{\text{i}}^2} {r_{\text{i}}^2 + r_0^2}\lambda_{\text{total}} \end{align}

Inserting yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{\text{d}^2 r}{\text{d} t^2} = - \frac{2 G \lambda_{\text{total}}}{r} \frac{r_{\text{i}}^2}{r_{\text{i}}^2 + r_0^2} \, , \quad \text{def.:} \, K := 2 G \lambda_{\text{total}} \frac{r_{\text{i}}^2}{r_{\text{i}}^2 + r_0^2} \\[6pt] \Leftrightarrow\;& \frac{\text{d}^2 r}{\text{d} t^2} = - \frac{K}{r} \, , \quad \text{with} \ v = \frac{\text{d} r}{\text{d} t} \, , \, \frac{\text{d} v}{\text{d} t} = v \frac{\text{d} v}{\text{d} r} \\[6pt] \Leftrightarrow\;& v \frac{\text{d} v}{\text{d} r} = - \frac{K}{r} \\[6pt] \Leftrightarrow\;& v \text{d} v = - \frac{K}{r} \text{d} r \\[6pt] \Rightarrow\;& \frac{1}{2}v^2 = - K \ln(r) + C \end{align}

From the initial conditions \(r(t=0) = r_{\text{i}}\) and \(v(t=0) = 0\), the following applies to the integration constant:

\begin{align} C = K \ln(r_i) \end{align}

Thus follows:

\begin{align} \phantom{\Rightarrow}\;& \frac{1}{2}v^2 = - K \ln\left(\frac{r_{\text{i}}}{r}\right) \\[6pt] \Leftrightarrow\;& v = \pm \sqrt{2 K \ln\left(\frac{r_{\text{i}}}{r}\right)} \end{align}

Since the collapse is directed inwards, only the negative branch of the solution for the velocity is relevant, which is why only this branch will be considered in the following:

\begin{align} \phantom{\Rightarrow}\;& v = - \sqrt{2 K \ln\left(\frac{r_{\text{i}}}{r}\right)} \\[6pt] \Leftrightarrow\;& \frac{\text{d} r}{\text{d} t} = - \sqrt{2 K \ln\left(\frac{r_{\text{i}}}{r}\right)} \end{align}

By integrating this differential equation, the time \(t(r, r_i)\) required for a particle to fall from its starting position \(r_i\) to a radius \(r\) is obtained:

\begin{align} \phantom{\Rightarrow}\;& \int_{0}^{t} \text{d} t^\prime = - \int_{r_{\text{i}}}^{r} \frac{1}{\sqrt{2 K \ln\left(\frac{r_{\text{i}}}{r^\prime}\right)}} \text{d} r^\prime \\[6pt] \Leftrightarrow\;& t(r,r_i) = \sqrt{\frac{\pi}{2 K}} r_{\text{i}} \operatorname{erf}\left(\sqrt{\ln\left(\frac{r_i}{r}\right)}\right) \\[6pt] \Leftrightarrow\;& t(r,r_i) = \frac{1}{2} \sqrt{\frac{\pi\left(r_{\text{i}}^2 + r_0^2\right)}{G \lambda_{\text{total}}}} \operatorname{erf}\left(\sqrt{\ln\left(\frac{r_i}{r}\right)}\right) \end{align}

Here, \(\operatorname{erf}(x)\) is the Gaussian error function, which is defined as follows:

\begin{align} \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \text{d} t \end{align}
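As a cross-check, the equation of motion can be integrated numerically and compared with the closed-form expression for \(t(r, r_i)\). The following sketch uses scipy's solve_ivp with the illustrative choice \(K = r_i = 1\) (not values from the thesis) and compares at \(r = r_i/10\):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erf

K, r_i = 1.0, 1.0                        # arbitrary parameters for the check
r_stop = r_i / 10.0                      # compare at r = r_i / 10

# equation of motion: dr/dt = v, dv/dt = -K/r
def rhs(t, y):
    r, v = y
    return [v, -K / r]

# stop the integration when the shell reaches r_stop
hit = lambda t, y: y[0] - r_stop
hit.terminal = True

sol = solve_ivp(rhs, (0.0, 10.0), [r_i, 0.0], events=hit,
                rtol=1e-10, atol=1e-12)
t_numeric = sol.t_events[0][0]

# analytic result: t = sqrt(pi/(2K)) * r_i * erf( sqrt(ln(r_i/r)) )
t_formula = np.sqrt(np.pi/(2.0*K)) * r_i * erf(np.sqrt(np.log(r_i/r_stop)))
print(t_numeric, t_formula)
```

The two values agree to within the integration tolerance.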

The limit \(r \rightarrow 0\) then yields the free-fall time \(t_{\text{ff}}(r_i)\) of a particle that starts from rest at \(r_i\) at time \(t = 0\):

\begin{align} \phantom{\Rightarrow}\;& t_{\text{ff}}(r_i) = \lim\limits_{r \rightarrow 0} t(r,r_i) \, , \quad \text{with} \; \lim\limits_{x \rightarrow \infty} \operatorname{erf}(x) = 1 \\[6pt] \Leftrightarrow\;& t_{\text{ff}}(r_i) = \frac{1}{2} \sqrt{\frac{\pi\left(r_{\text{i}}^2 + r_0^2\right)}{G \lambda_{\text{total}}}} \end{align}

It follows that the free fall time does not approach zero when the starting position approaches zero, but converges to a finite limit value:

\begin{align} \lim_{r_i \rightarrow 0} t_{\text{ff}}(r_i) = \frac{r_0}{2}\sqrt{\frac{\pi}{G\lambda_{\text{total}}}} \end{align}

For arbitrarily small but finite starting positions \(r_i > 0\), the free-fall time therefore cannot become arbitrarily small; it approaches this finite value.

Linear stability analysis

The aim of this section is to investigate how the hydrostatic equilibrium solution of an infinitely long, isothermal, self-gravitating gas cylinder behaves under perturbations. In the following, all equilibrium variables are denoted by the index \(0\):

\begin{align} \rho_0(r) = \frac{\lambda_{\text{total}}}{\pi r_0^2}\,\frac{1}{\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2}\,, \quad \boldsymbol{v}_0 = \boldsymbol{0}\,, \quad \phi_{\text{ext},0}(r) = G\,\lambda_{\text{total}}\,\ln\!\left(1 + \left(\frac{r}{r_0}\right)^2\right) \end{align}

Here, \(\boldsymbol{v}_0\) denotes the velocity field of the equilibrium solution, which is exactly zero in hydrostatic equilibrium due to the stationary configuration.

Linearisation of the hydrodynamic equations

We now consider a perturbation of the equilibrium solution. To this end, a dimensionless perturbation parameter \(\varepsilon\) is introduced, in which every hydrodynamic variable \(\mathcal{Q}\) with \(\mathcal{Q} \in \{\rho, \boldsymbol{v}, \phi_{\text{ext}}\}\) can be expanded:

\begin{align} \mathcal{Q}(r,\varphi,z,t) = \mathcal{Q}_0(r) + \sum_{j=1}^{\infty} \varepsilon^j \mathcal{Q}_j(r,\varphi,z,t) \end{align}

Here, \(\mathcal{Q}_0(r)\) denotes the respective equilibrium solution, and the terms \(\mathcal{Q}_j(r,\varphi,z,t)\) represent the \(j\)-th-order perturbations. In the following, only small perturbations are considered, i.e. \(\varepsilon \ll 1\). In this case, terms of higher than linear order in \(\varepsilon\) can be neglected, so that in linear approximation the following applies:

\begin{align} \mathcal{Q}(r,\varphi,z,t) = \mathcal{Q}_0(r) + \varepsilon \mathcal{Q}_1(r,\varphi,z,t) \end{align}


To investigate how these disturbances evolve over time, the perturbed variables are substituted into the underlying hydrodynamic equations of the system. This results in higher-order contributions in \(\varepsilon\) arising from the products of perturbed terms. Since \(\varepsilon \ll 1\) is assumed, these non-linear contributions can be neglected. The equations are thus linearised.

Substituting the perturbed solution into the continuity equation yields:

\begin{align} \phantom{\Rightarrow}\;& \partial_t \rho + \boldsymbol{\nabla} \cdot \left(\rho \boldsymbol{v}\right) = 0 \\[6pt] \Leftrightarrow\;& \partial_t \left(\rho_0 + \varepsilon \rho_1\right) + \boldsymbol{\nabla} \cdot \left(\left(\rho_0 + \varepsilon \rho_1\right) \varepsilon \boldsymbol{v}_1\right) = 0 \end{align}

At this point, the fact that the perturbation parameter \(\varepsilon\) is a constant scalar and that the equilibrium solution is time-independent can be used. Furthermore, the flux term contains products of perturbation variables that give rise to second-order contributions in \(\varepsilon\). These terms can be neglected. It therefore follows that:

\begin{align} \phantom{\Rightarrow}\;& \varepsilon \partial_t \rho_1 + \varepsilon \boldsymbol{\nabla} \cdot \left(\rho_0 \boldsymbol{v}_1\right) = 0 \\[6pt] \Leftrightarrow\;& \partial_t \rho_1 + \boldsymbol{\nabla} \cdot \left(\rho_0 \boldsymbol{v}_1\right) = 0 \end{align}

The same steps are performed for the Euler equation:

\begin{align} \phantom{\Rightarrow}\;& \partial_t \boldsymbol{v} + \left(\boldsymbol{v} \cdot \boldsymbol{\nabla}\right) \boldsymbol{v} = - \frac{1}{\rho} \boldsymbol{\nabla} P - \boldsymbol{\nabla} \phi_{\text{ext}} \, , \quad \text{with} \; P = c_s^2 \rho \\[6pt] \Leftrightarrow\;& \partial_t \boldsymbol{v} + \left(\boldsymbol{v} \cdot \boldsymbol{\nabla}\right) \boldsymbol{v} = - \frac{c_s^2}{\rho} \boldsymbol{\nabla} \rho - \boldsymbol{\nabla} \phi_{\text{ext}} \\[6pt] \Leftrightarrow\;& \varepsilon \partial_t \boldsymbol{v}_1 = - \frac{c_s^2}{\rho_0 + \varepsilon \rho_1} \boldsymbol{\nabla} \left(\rho_0 + \varepsilon \rho_1\right) - \boldsymbol{\nabla} \left(\phi_{\text{ext},0} + \varepsilon \phi_{\text{ext},1}\right) \\[6pt] \Leftrightarrow\;& \varepsilon \partial_t \boldsymbol{v}_1 = - \frac{c_s^2}{\rho_0} \frac{1}{1 + \varepsilon \frac{\rho_1}{\rho_0}} \boldsymbol{\nabla} \left(\rho_0 + \varepsilon \rho_1\right) - \boldsymbol{\nabla} \left(\phi_{\text{ext},0} + \varepsilon \phi_{\text{ext},1}\right) \, , \quad \text{with} \; \frac{1}{1 + x} = 1 -x + \mathcal{O} \left(x^2\right) \\[6pt] \Rightarrow\;& \varepsilon \partial_t \boldsymbol{v}_1 = - \frac{c_s^2}{\rho_0}\left(1 - \varepsilon \frac{\rho_1}{\rho_0}\right) \boldsymbol{\nabla} \left(\rho_0 + \varepsilon \rho_1\right) - \boldsymbol{\nabla} \left(\phi_{\text{ext},0} + \varepsilon \phi_{\text{ext},1}\right) \\[6pt] \Rightarrow\;& \varepsilon \partial_t \boldsymbol{v}_1 = - \frac{c_s^2}{\rho_0}\left(\boldsymbol{\nabla} \rho_0 + \varepsilon \boldsymbol{\nabla} \rho_1 - \varepsilon \frac{\rho_1}{\rho_0} \boldsymbol{\nabla} \rho_0\right) - \boldsymbol{\nabla} \phi_{\text{ext},0} - \varepsilon \boldsymbol{\nabla} \phi_{\text{ext},1} \end{align}

Here, it can now be used that the following holds for the equilibrium solution:

\begin{align} \frac{c_s^2}{\rho_0} \boldsymbol{\nabla} \rho_0 = - \boldsymbol{\nabla} \phi_{\text{ext},0} \end{align}

This gives the linearised Euler equation:

\begin{align} \partial_t \boldsymbol{v}_1 = - c_s^2 \boldsymbol{\nabla} \left(\frac{\rho_1}{\rho_0}\right) - \boldsymbol{\nabla} \phi_{\text{ext},1} \end{align}
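The first-order pressure term can be cross-checked symbolically. The following SymPy sketch (illustrative; a one-dimensional stand-in for the radial direction) expands the pressure term to first order in \(\varepsilon\) and compares it with the gradient form above:

```python
import sympy as sp

x, eps, cs = sp.symbols('x varepsilon c_s', positive=True)
rho0 = sp.Function('rho_0')(x)
rho1 = sp.Function('rho_1')(x)

# full pressure term -c_s^2 / rho * d rho / dx with rho = rho_0 + eps * rho_1
pressure = -cs**2 / (rho0 + eps * rho1) * sp.diff(rho0 + eps * rho1, x)

# first-order coefficient of the expansion in eps
first_order = pressure.series(eps, 0, 2).removeO().coeff(eps, 1)

# should agree with the gradient form -c_s^2 d/dx (rho_1 / rho_0)
assert sp.simplify(first_order - (-cs**2 * sp.diff(rho1 / rho0, x))) == 0
```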

Substituting the perturbed solution into the Poisson equation yields:

\begin{align} \phantom{\Rightarrow}\;& \Delta \phi_{\text{ext}} = 4 \pi G \rho \\[6pt] \Leftrightarrow\;& \Delta \left(\phi_{\text{ext},0} + \varepsilon \phi_{\text{ext},1}\right) = 4 \pi G \left(\rho_0 + \varepsilon \rho_1\right) \end{align}

Here, too, it can be used that the following holds for the equilibrium solution:

\begin{align} \Delta \phi_{\text{ext},0} = 4 \pi G \rho_0 \end{align}

Thus, it follows for the linearised Poisson equation:

\begin{align} \Delta \phi_{\text{ext},1} = 4 \pi G \rho_1 \end{align}

Due to the cylindrical geometry of the problem, the resulting linearised equations can be further simplified. The continuity equation thus becomes:

\begin{align} \phantom{\Rightarrow}\;& \partial_t \rho_1 + \frac{1}{r} \partial_r \left(r \rho_0 v_{1,r}\right) + \frac{1}{r} \partial_\varphi \left(\rho_0 v_{1,\varphi}\right) + \partial_z \left(\rho_0 v_{1,z}\right) = 0 \\[6pt] \Leftrightarrow\;& \partial_t \rho_1 + \frac{1}{r} \partial_r \left(r \rho_0 v_{1,r}\right) + \frac{\rho_0}{r} \partial_\varphi v_{1,\varphi} + \rho_0 \partial_z v_{1,z} = 0 \end{align}

For the Euler equation, it is convenient to write out each component separately. The components are given by:

\begin{align} & \partial_t v_{1,r} + c_s^2 \partial_r \left(\frac{\rho_1}{\rho_0}\right) + \partial_r \phi_{\text{ext},1} = 0 \\[6pt] & \partial_t v_{1,\varphi} + \frac{c_s^2}{r \rho_0} \partial_\varphi \rho_1 + \frac{1}{r} \partial_\varphi \phi_{\text{ext},1} = 0 \\[6pt] & \partial_t v_{1,z} + \frac{c_s^2}{\rho_0} \partial_z \rho_1 + \partial_z \phi_{\text{ext},1} = 0 \end{align}

For the Poisson equation, it follows that:

\begin{align} \frac{1}{r} \partial_r \left(r \partial_r \phi_{\text{ext},1}\right) + \frac{1}{r^2} \partial_\varphi^2 \phi_{\text{ext},1} + \partial_z^2 \phi_{\text{ext},1} = 4 \pi G \rho_1 \end{align}

Separation ansatz and radial boundary value problem

After linearization, the following separation ansatz can be chosen for the perturbation quantities:

\begin{align} \mathcal{Q}_1(r,\varphi,z,t) = \hat{\mathcal{Q}}(r)\exp\left(i m \varphi\right) \exp\left(i k z\right)\exp\left(\sigma t\right) \end{align}

This ansatz can be motivated as follows. To this end, it is convenient to collect the perturbation quantities into a vector \(\boldsymbol{w}\):

\begin{align} \boldsymbol{w} = \left(\rho_1, v_{1,r}, v_{1,\varphi}, v_{1,z}, \phi_{\text{ext},1}\right)^{\mathsf{T}} \end{align}

The coupled system of differential equations to be solved can then be written in compact form as follows:

\begin{align} \mathbf{M} \partial_t \boldsymbol{w} = \mathbf{L} \boldsymbol{w} \end{align}

Here, \(\mathbf{L}\) is the matrix representation of the linear operator that results from the linearized governing equations and contains derivatives with respect to \(r\), \(\varphi\), and \(z\):

\begin{align} \mathbf{L} = \begin{pmatrix} 0 & -\frac{1}{r} \partial_r r \rho_0 & -\frac{\rho_0}{r} \partial_\varphi & -\rho_0 \partial_z & 0\\ -c_s^2 \partial_r \frac{1}{\rho_0} & 0 & 0 & 0 & -\partial_r\\ -\frac{c_s^2}{r \rho_0} \partial_\varphi & 0 & 0 & 0 & -\frac{1}{r} \partial_\varphi\\ -\frac{c_s^2}{\rho_0} \partial_z & 0 & 0 & 0 & -\partial_z\\ 4 \pi G & 0 & 0 & 0 & - \frac{1}{r} \partial_r r \partial_r - \frac{1}{r^2} \partial_\varphi^2 - \partial_z^2 \end{pmatrix} \end{align}

Intuitively, \(\mathbf{L}\) collects the contributions from the continuity equation, the Euler equation, and the Poisson equation.

The matrix \(\mathbf{M}\) is constant and encodes which equations contain time derivatives, for example:

\begin{align} \mathbf{M} = \operatorname{diag}(1,1,1,1,0) \end{align}

In this representation, the Poisson equation corresponds to the fifth row of the system of equations, the row whose diagonal entry in \(\mathbf{M}\) vanishes. This assignment is not unique, however, but rather a matter of the chosen ordering of the variables in \(\boldsymbol{w}\) or of the equations in the operator \(\mathbf{L}\). The only essential point is that the assignment is kept consistent.

Since the linear operator has no explicit dependence on \(\varphi\), the linearized equations are rotationally invariant in \(\varphi\). This means that a rotation of a solution about the \(z\)-axis is again a solution of the same linear system. To show this, the rotation operator \(\mathcal{T}_{\varphi_0}\) is introduced, which is defined by:

\begin{align} \mathcal{T}_{\varphi_0} f(r,\varphi,z,t) := f(r,\varphi+\varphi_0,z,t) \end{align}

The linearized equations are therefore rotationally invariant in \(\varphi\) precisely when both the time derivative and the linear operator commute with the rotation operator \(\mathcal{T}_{\varphi_0}\).

The time derivative commutes trivially with \(\mathcal{T}_{\varphi_0}\), since \(\partial_t\) does not act on the \(\varphi\)-coordinate and \(\mathcal{T}_{\varphi_0}\) does not change time. Moreover, for sufficiently smooth functions \(f\), the following relations hold:

\begin{align} \partial_\varphi\left(\mathcal{T}_{\varphi_0} f\right) = \mathcal{T}_{\varphi_0}\left(\partial_\varphi f\right) \qquad \text{and} \qquad \chi(r,z)\left(\mathcal{T}_{\varphi_0} f\right) = \mathcal{T}_{\varphi_0}\left(\chi(r,z) f\right) \end{align}

Here, \(\chi(r,z)\) is an arbitrary function that depends on \(r\) and \(z\). Since the entries of the linear operator consist of sums and products of such building blocks, and since each of them commutes with \(\mathcal{T}_{\varphi_0}\), it follows that \(\mathbf{L}\) as a whole also commutes with the rotation operator.

It turns out that Fourier modes \(\exp(i m \varphi)\) are a convenient choice for the \(\varphi\)-dependence, since these functions are eigenfunctions of the rotation operator:

\begin{align} \mathcal{T}_{\varphi_0}\exp(im\varphi) = \exp\left(im(\varphi+\varphi_0)\right) = \exp(im\varphi_0)\exp(im\varphi) \end{align}

Each mode \(\exp(i m \varphi)\) is therefore solely multiplied by a constant phase factor \(\exp(i m \varphi_0)\) under a rotation about the \(z\)-axis.

Since both the time derivative and the linear operator commute with the rotation operator, they possess a common set of eigenfunctions. As a result, the \(\varphi\)-dependence of the perturbation quantities can be expanded in Fourier modes. Moreover, it can be shown that different azimuthal mode numbers \(m\) do not couple to one another, so that each Fourier mode evolves independently. For the stability analysis, it is therefore sufficient to consider a single Fourier mode with fixed mode number \(m\).

From the \(2\pi\)-periodicity in \(\varphi\), it follows that the perturbation must be single-valued after one full rotation. Hence, it must hold that:

\begin{align} \mathcal{Q}_1(r,\varphi+2\pi,z,t) = \mathcal{Q}_1(r,\varphi,z,t) \end{align}

For the \(\varphi\)-dependence in the separation ansatz, this condition is satisfied if and only if the following holds:

\begin{align} \exp(im2\pi) = 1 \end{align}

It follows immediately that \(m \in \mathbb{Z}\) must hold.

Analogously to the Fourier decomposition in \(\varphi\), the \(z\)-dependence can also be decomposed into eigenfunctions of the translation operator along the \(z\)-axis. The \(z\)-dependence of the perturbation is therefore given by:

\begin{align} \mathcal{Q}_1(z) \propto \exp\left(ikz\right) \, , \quad \text{with} \; k \in \mathbb{R} \end{align}

Here, \(k\) is the axial wavenumber. In contrast to \(\varphi\), there is generally no periodicity condition in the \(z\)-direction. Therefore, the axial wavenumber is not restricted to discrete values, but can take any real value.

Analogously, the time dependence can also be decomposed into eigenfunctions of the time-evolution operator. The temporal evolution of the perturbation is therefore given by:

\begin{align} \mathcal{Q}_1(t) \propto \exp\left(\sigma t\right)\, , \quad \text{with} \; \sigma \in \mathbb{C} \end{align}

Here, \(\sigma\) is generally complex. The real part \(\Re(\sigma)\) describes the growth or damping rate of the perturbation. For \(\Re(\sigma) > 0\), the perturbation grows exponentially, and the system is unstable in this case. For \(\Re(\sigma) < 0\), the perturbation decays exponentially, and the system is therefore stable. The imaginary part \(\Im(\sigma)\) corresponds to an oscillation frequency, that is, for \(\Im(\sigma) \neq 0\), the evolution is additionally oscillatory. In the special case \(\Re(\sigma) = 0\), the amplitude remains constant, and the solution is purely oscillatory.

The radial part of the linearized equations does not have constant coefficients. As a result, the equations are not translationally invariant in \(r\). Consequently, plane waves in \(r\) are not eigenfunctions of the radial operator, and the radial structure of the perturbations must be determined explicitly.

By inserting the following separation ansatz for the perturbation quantities, one obtains a coupled system of ordinary differential equations in \(r\):

\begin{align} & \rho_1(r,\varphi,z,t) = \hat{\rho}(r)\exp\left(im\varphi\right)\exp\left(ikz\right)\exp\left(\sigma t\right) \\[6pt] & v_{1,r}(r,\varphi,z,t) = \hat{v}_r(r)\exp\left(im\varphi\right)\exp\left(ikz\right)\exp\left(\sigma t\right) \\[6pt] & v_{1,\varphi}(r,\varphi,z,t) = \hat{v}_\varphi(r)\exp\left(im\varphi\right)\exp\left(ikz\right)\exp\left(\sigma t\right) \\[6pt] & v_{1,z}(r,\varphi,z,t) = \hat{v}_z(r)\exp\left(im\varphi\right)\exp\left(ikz\right)\exp\left(\sigma t\right) \\[6pt] & \phi_{\text{ext},1}(r,\varphi,z,t) = \hat{\phi}_{\text{ext}}(r)\exp\left(im\varphi\right)\exp\left(ikz\right) \exp\left(\sigma t\right) \end{align}

Inserting the ansatz into the linearized continuity equation yields:

\begin{align} \sigma \hat{\rho} + \frac{1}{r} \partial_r \left(r \rho_0 \hat{v}_r\right) + \frac{i m \rho_0}{r} \hat{v}_\varphi + i k \rho_0 \hat{v}_z = 0 \end{align}

For the components of the Euler equation, inserting the ansatz yields:

\begin{align} & \sigma \hat{v}_r + c_s^2 \partial_r \left(\frac{\hat{\rho}}{\rho_0}\right) + \partial_r \hat{\phi}_{\text{ext}} = 0 \\[6pt] & \sigma \hat{v}_\varphi + \frac{i m c_s^2}{r} \frac{\hat{\rho}}{\rho_0} + \frac{i m}{r} \hat{\phi}_{\text{ext}} = 0 \\[6pt] & \sigma \hat{v}_z + i k c_s^2 \frac{\hat{\rho}}{\rho_0} + i k \hat{\phi}_{\text{ext}} = 0 \end{align}

Solving these component equations for the velocity amplitudes yields:

\begin{align} & \hat{v}_r = - \frac{1}{\sigma} \left(c_s^2 \partial_r \left(\frac{\hat{\rho}}{\rho_0}\right) + \partial_r \hat{\phi}_{\text{ext}}\right) \\[6pt] & \hat{v}_\varphi = - \frac{im}{\sigma r} \left(c_s^2 \frac{\hat{\rho}}{\rho_0} + \hat{\phi}_{\text{ext}}\right) \\[6pt] & \hat{v}_z = - \frac{ik}{\sigma} \left(c_s^2 \frac{\hat{\rho}}{\rho_0} + \hat{\phi}_{\text{ext}}\right) \end{align}

Finally, inserting the ansatz into the Poisson equation yields:

\begin{align} \frac{1}{r} \partial_r \left(r \partial_r \hat{\phi}_\text{ext}\right) - \left(\frac{m^2}{r^2} + k^2\right) \hat{\phi}_\text{ext} = 4 \pi G \hat{\rho} \end{align}

By substituting the velocity components obtained from the Euler equation into the continuity equation, the original system of five coupled differential equations can be reduced to a system of two coupled differential equations for \(\hat{\rho}(r)\) and \(\hat{\phi}_{\text{ext}}(r)\). The continuity equation then takes the form:

\begin{align} \frac{1}{r} \partial_r \left(r \rho_0 \left(c_s^2 \partial_r \left(\frac{\hat{\rho}}{\rho_0}\right) + \partial_r \hat{\phi}_\text{ext}\right)\right) = \sigma^2 \hat{\rho} + \rho_0 \left(\frac{m^2}{r^2} + k^2\right) \left(c_s^2 \frac{\hat{\rho}}{\rho_0} + \hat{\phi}_{\text{ext}}\right) \end{align}

To solve this system of coupled differential equations, suitable boundary conditions are required. Since the relevant dynamics are essentially confined to the vicinity of the cylinder and the equilibrium solution is localized within a finite radial range around the characteristic radius \(r_0\), the perturbation is required to vanish at large radial distances. In the limit \(r \to \infty\), the perturbation quantities must therefore tend to zero:

\begin{align} \lim\limits_{r\to\infty}\hat{\rho}(r) = 0 = \lim\limits_{r\to\infty}\hat{\phi}_\text{ext}(r) \end{align}

The second boundary condition requires a distinction between two cases. In the case of an axisymmetric perturbation about the cylinder axis, that is, for \(m = 0\), rotational symmetry together with the requirement of a smooth and regular solution implies that the perturbation quantities must have no gradient at the origin, analogously to the derivation of the analytical equilibrium solution of an infinitely long, isothermal, self-gravitating gas cylinder. The radial derivatives must therefore vanish at the center:

\begin{align} \partial_r \hat{\rho}(r) \big|_{r = 0} = 0 = \partial_r \hat{\phi}_\text{ext}(r) \big|_{r = 0} \end{align}

For non-axisymmetric azimuthal perturbations, that is, for \(m \neq 0\), the perturbation quantities, by contrast, exhibit an explicit angular dependence. On the cylinder axis, however, the angular coordinate loses its geometric meaning, since all angular directions correspond to the same point there. If the amplitudes \(\hat{\rho}(0)\) and \(\hat{\phi}_{\text{ext}}(0)\) were nonzero, the perturbation quantities would formally depend on \(\varphi\) on the axis and would therefore no longer represent a uniquely defined function. To ensure the uniqueness of the solution on the cylinder axis, the amplitudes of non-axisymmetric perturbations must vanish there:

\begin{align} \hat{\rho}(r=0) = 0 = \hat{\phi}_\text{ext}(r=0) \end{align}

It is convenient to introduce some dimensionless quantities that further simplify the system of differential equations. A natural choice is to nondimensionalize the perturbation quantities as follows:

\begin{align} u := \frac{\hat{\rho}}{\rho_c} \quad \text{and} \quad \psi := \frac{\hat{\phi}_\text{ext}}{c_s^2} \end{align}

Furthermore, by introducing the dimensionless radial coordinate \(x := \frac{r}{r_0}\), the density profile of the equilibrium solution can be written in compact form as:

\begin{align} \rho_0(x) = \rho_c q(x) \, , \quad \text{with} \; q(x) := \frac{1}{\left(1 + x^2\right)^2} \end{align}

Moreover, the dimensionless parameters \(\bar{k}\) and \(\bar{\sigma}\) can be introduced:

\begin{align} \bar{k} := k r_0 \quad \text{and} \quad \bar{\sigma} := \sigma \frac{r_0}{c_s} \end{align}

Substituting into the continuity equation yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{\rho_c c_s^2}{r_0^2} \frac{1}{x} \partial_x \left(x q \left(\partial_x \left(\frac{u}{q}\right) + \partial_x \psi\right)\right) = \sigma^2 \rho_c u + \frac{\rho_c c_s^2}{r_0^2} q \left(\frac{m^2}{x^2} + k^2 r_0^2\right) \left(\frac{u}{q} + \psi\right) \\[6pt] \Leftrightarrow\;& \frac{1}{x} \partial_x \left(x q \left(\partial_x \left(\frac{u}{q}\right) + \partial_x \psi\right)\right) = \bar{\sigma}^2 u + q \left(\frac{m^2}{x^2} + \bar{k}^2\right) \left(\frac{u}{q} + \psi\right) \\[6pt] \Leftrightarrow\;& \frac{1}{x} \partial_x \left(x \left(\partial_x u - \frac{\partial_x q}{q} u + q \partial_x \psi\right)\right) = \bar{\sigma}^2 u + \left(\frac{m^2}{x^2} + \bar{k}^2\right) \left(u + q \psi\right) \end{align}

An analogous procedure yields for the Poisson equation:

\begin{align} \phantom{\Rightarrow}\;& \frac{c_s^2}{r_0^2} \frac{1}{x} \partial_x \left(x \partial_x \psi\right) - \frac{c_s^2}{r_0^2} \left(\frac{m^2}{x^2} + k^2 r_0^2\right) \psi = 4 \pi G \rho_c u \\[6pt] \Leftrightarrow\;& \frac{1}{x} \partial_x \left(x \partial_x \psi\right) - \left(\frac{m^2}{x^2} + \bar{k}^2\right) \psi = \frac{4 \pi G \rho_c r_0^2}{c_s^2} u \end{align}

Using the relations introduced above for \(r_0\) and \(\rho_c\), it follows that:

\begin{align} \frac{1}{x} \partial_x \left(x \partial_x \psi\right) - \left(\frac{m^2}{x^2} + \bar{k}^2\right) \psi = 8 u \end{align}
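The factor \(8\) can be cross-checked symbolically: combining the two equilibrium relations \(c_s^2 \boldsymbol{\nabla} \ln \rho_0 = -\boldsymbol{\nabla} \phi_{\text{ext},0}\) and \(\Delta \phi_{\text{ext},0} = 4 \pi G \rho_0\) in dimensionless form gives \(\frac{1}{x} \partial_x \left(x \partial_x \ln q\right) = -8 q\). A short SymPy sketch (illustrative) verifies that the profile \(q(x)\) satisfies this identity:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
q = 1 / (1 + x**2)**2   # dimensionless equilibrium density profile

# radial Laplacian of ln q in cylindrical symmetry: (1/x) d/dx (x d/dx ln q)
lap_ln_q = sp.simplify(sp.diff(x * sp.diff(sp.log(q), x), x) / x)

assert sp.simplify(lap_ln_q + 8 * q) == 0   # equals -8 q, fixing the factor 8
```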

Since \(u\) and \(\psi\) are essentially rescaled forms of the density and potential perturbations, respectively, the boundary conditions for the perturbation quantities carry over directly to \(u\) and \(\psi\). The same boundary conditions therefore apply. For \(x \to \infty\), it thus follows that:

\begin{align} \lim\limits_{x\to\infty}u(x) = 0 = \lim\limits_{x\to\infty}\psi(x) \end{align}

On the cylinder axis, that is, for \(x = 0\), one must still distinguish between the cases \(m = 0\) and \(m \neq 0\). Accordingly, the following holds there:

\begin{align} &\partial_x u(x) \big|_{x = 0} = 0 = \partial_x \psi(x) \big|_{x = 0} \qquad \text{for } m=0 \\[6pt] &u(0)=0=\psi(0) \qquad \text{for } m\neq 0 \end{align}

It turns out that this system of coupled differential equations cannot, in general, be solved analytically, so a numerical approach is used in the following. For the later implementation, note that \(\bar{\sigma}^2 \in \mathbb{R}\) must hold. This follows from the fact that the perturbation quantities under consideration must be real for physical reasons, which in the present form of the system of differential equations is guaranteed only if \(\bar{\sigma}^2\) is real. In the course of the numerical solution, however, complex values \(\bar{\sigma}^2 \in \mathbb{C}\) may nevertheless arise due to the mathematical structure of the equations to be solved. Such solutions must be regarded as unphysical and are therefore discarded.

Numerical formulation as a generalized eigenvalue problem

For the numerical treatment of the problem, it is convenient to introduce the quantity \(\alpha(x)\), defined by:

\begin{align} \alpha(x) := - \frac{\partial_x q(x)}{q(x)} \end{align}

Using the explicit form of \(q(x)\), this immediately yields:

\begin{align} \alpha(x) = \frac{4x}{1 + x^2} \end{align}
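As a quick illustrative cross-check (not part of the derivation), SymPy confirms this expression for \(\alpha(x)\):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
q = 1 / (1 + x**2)**2                    # equilibrium profile q(x)
alpha = sp.simplify(-sp.diff(q, x) / q)  # alpha(x) := -q'(x)/q(x)

assert sp.simplify(alpha - 4 * x / (1 + x**2)) == 0
```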

Moreover, it is convenient to introduce the quantity \(P(x)\), defined by:

\begin{align} P(x) := \frac{m^2}{x^2} + \bar{k}^2 \end{align}

Furthermore, it is convenient to introduce the following quantities:

\begin{align} &F(x) = \partial_x u(x) + \alpha(x) u(x) + q(x)\,\partial_x \psi(x) \\[6pt] &G(x) = \partial_x \psi(x) \end{align}

The equations to be solved then take the form:

\begin{align} &\frac{1}{x} \partial_x \left(x F\right) - P \left(u + q \psi\right) = \bar{\sigma}^2 u \\[6pt] &\frac{1}{x} \partial_x \left(x G\right) - P \psi = 8u \end{align}

To solve the problem numerically, the equations must be discretized. For this purpose, the interval \(\left[0, x_{\max}\right]\) is divided into \(N\) equidistant subintervals. The corresponding \(N+1\) grid points are given by:

\begin{align} x_j = j\,\Delta x \, , \quad \text{with} \quad \Delta x = \frac{x_{\max}}{N} \, , \quad j \in \left\{0,\ldots,N\right\} \end{align}

In the following, the quantities under consideration, evaluated at the respective grid points, are written as follows:

\begin{align} u_j = u\left(x_j\right) \, , \quad \psi_j = \psi\left(x_j\right) \, , \quad q_j = q\left(x_j\right) \, , \quad \alpha_j = \alpha(x_j) \, , \quad P_j = P(x_j) \end{align}

For the interior grid points \(j \in \{1,\ldots,N-1\}\), the discretized equations follow directly from the continuous equations. The discretized continuity equation is given by:

\begin{align} \frac{1}{x_j} \left(\partial_x \left(x F\right)\right)_j - P_j \left(u_j + q_j \psi_j\right) = \bar{\sigma}^2 u_j \end{align}

The discretized Poisson equation is given by:

\begin{align} \frac{1}{x_j} \left(\partial_x \left(x G\right)\right)_j - P_j \psi_j = 8 u_j \end{align}

Since the problem reduces to a purely radial one after the exponential ansatz and operators in divergence form appear in cylindrical coordinates, a finite-volume discretization is a natural choice. For a function \(f(x)\), the derivative at the \(j\)-th grid point can be written in flux form in terms of the cell faces at \(x_{j\pm\frac12}\) as follows:

\begin{align} \left(\partial_x f\left(x\right)\right)_j = \frac{f_{j + \frac{1}{2}} - f_{j - \frac{1}{2}}}{\Delta x} \end{align}

This yields the following expression for the derivative term in the discretized continuity equation:

\begin{align} \left(\partial_x \left(xF\right)\right)_j = \frac{1}{\Delta x}\left(x_{j+\frac12} F_{j+\frac12} - x_{j-\frac12} F_{j-\frac12}\right) \end{align}

In particular, the following holds:

\begin{align} &F_{j+\frac12} = \left(\partial_x u\right)_{j+\frac12} + \alpha_{j+\frac12} u_{j+\frac12} + q_{j+\frac12}\left(\partial_x \psi\right)_{j+\frac12} \\[6pt] &F_{j-\frac12} = \left(\partial_x u\right)_{j-\frac12} + \alpha_{j-\frac12} u_{j-\frac12} + q_{j-\frac12}\left(\partial_x \psi\right)_{j-\frac12} \end{align}

The quantities at the indices \(j \pm \frac{1}{2}\) are obtained from averages of the neighboring grid points. For equidistant grid points, this therefore gives:

\begin{align} f_{j\pm\frac12} = \frac{f_j + f_{j\pm1}}{2} \end{align}

The derivatives at the indices \(j \pm \frac{1}{2}\) are given by the corresponding difference quotients:

\begin{align} \left(\partial_x f\right)_{j+\frac12} = \frac{f_{j+1}-f_j}{\Delta x} \qquad \text{and} \qquad \left(\partial_x f\right)_{j-\frac12} = \frac{f_j-f_{j-1}}{\Delta x} \end{align}
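As an illustrative aside (not part of the thesis derivation), the second-order accuracy of this flux-form discretization can be verified numerically on a smooth test function, here \(\psi(x) = \exp(-x^2)\), whose exact radial Laplacian is \((4x^2 - 4)\exp(-x^2)\):

```python
import numpy as np

def radial_laplacian(psi, x, dx):
    """Flux-form discretization of (1/x) d/dx (x d/dx psi) on an equidistant grid."""
    xp, xm = x + dx / 2, x - dx / 2        # cell faces x_{j+1/2}, x_{j-1/2}
    Gp = (np.roll(psi, -1) - psi) / dx     # derivative at the face j+1/2
    Gm = (psi - np.roll(psi, 1)) / dx      # derivative at the face j-1/2
    return (xp * Gp - xm * Gm) / (dx * x)

def max_error(N):
    x = np.linspace(0.5, 5.0, N)           # test interval away from the axis
    dx = x[1] - x[0]
    psi = np.exp(-x**2)
    exact = (4 * x**2 - 4) * np.exp(-x**2)
    num = radial_laplacian(psi, x, dx)
    return np.max(np.abs(num - exact)[2:-2])   # ignore wrap-around edge points

# halving the step size should reduce the error by roughly a factor of four
ratio = max_error(200) / max_error(400)
```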

Substituting these definitions into \(F_{j\pm\frac12}\) and collecting terms yields:

\begin{align} &F_{j+\frac12} = \left(\frac{\alpha_{j+\frac12}}{2} + \frac{1}{\Delta x}\right) u_{j+1} + \left(\frac{\alpha_{j+\frac12}}{2} - \frac{1}{\Delta x}\right) u_j + \frac{q_{j+\frac12}}{\Delta x}\psi_{j+1} - \frac{q_{j+\frac12}}{\Delta x}\psi_j \\[6pt] &F_{j-\frac12} = \left(\frac{\alpha_{j-\frac12}}{2} + \frac{1}{\Delta x}\right) u_j + \left(\frac{\alpha_{j-\frac12}}{2} - \frac{1}{\Delta x}\right) u_{j-1} + \frac{q_{j-\frac12}}{\Delta x}\psi_j - \frac{q_{j-\frac12}}{\Delta x}\psi_{j-1} \end{align}

The discretized continuity equation can thus be written in the following form:

\begin{align} C_{j,-}^{(C)} u_{j-1} + C_{j,0}^{(C)} u_j + C_{j,+}^{(C)} u_{j+1} + D_{j,-}^{(C)} \psi_{j-1} + D_{j,0}^{(C)} \psi_j + D_{j,+}^{(C)} \psi_{j+1} = \bar{\sigma}^2 u_j \end{align}

The coefficients are given by:

\begin{align} C_{j,-}^{(C)} &:= \frac{x_{j-\frac12}}{\Delta x\, x_j}\left(\frac{1}{\Delta x} - \frac{\alpha_{j-\frac12}}{2}\right) \\[6pt] C_{j,0}^{(C)} &:= \frac{x_{j+\frac12}}{\Delta x\, x_j}\left(-\frac{1}{\Delta x} + \frac{\alpha_{j+\frac12}}{2}\right) - \frac{x_{j-\frac12}}{\Delta x\, x_j}\left(\frac{1}{\Delta x} + \frac{\alpha_{j-\frac12}}{2}\right) - P_j \\[6pt] C_{j,+}^{(C)} &:= \frac{x_{j+\frac12}}{\Delta x\, x_j}\left(\frac{1}{\Delta x} + \frac{\alpha_{j+\frac12}}{2}\right) \\[6pt] D_{j,-}^{(C)} &:= \frac{x_{j-\frac12} q_{j-\frac12}}{\Delta x^2 x_j} \\[6pt] D_{j,0}^{(C)} &:= -\frac{x_{j+\frac12} q_{j+\frac12}}{\Delta x^2 x_j} - \frac{x_{j-\frac12} q_{j-\frac12}}{\Delta x^2 x_j} - P_j q_j \\[6pt] D_{j,+}^{(C)} &:= \frac{x_{j+\frac12} q_{j+\frac12}}{\Delta x^2 x_j} \end{align}

Analogously, the following expression is obtained for the derivative term in the discretized Poisson equation:

\begin{align} \left(\partial_x \left(xG\right)\right)_j = \frac{1}{\Delta x}\left(x_{j+\frac12} G_{j+\frac12} - x_{j-\frac12} G_{j-\frac12}\right) \end{align}

After collecting terms, \(G_{j\pm\frac12}\) is given by:

\begin{align} &G_{j+\frac12} = \frac{\psi_{j+1}}{\Delta x} - \frac{\psi_{j}}{\Delta x} \\[6pt] &G_{j-\frac12} = \frac{\psi_{j}}{\Delta x} - \frac{\psi_{j-1}}{\Delta x} \end{align}

The discretized Poisson equation can thus be written in the following form:

\begin{align} C_{j,0}^{(P)} u_j + D_{j,-}^{(P)} \psi_{j-1} + D_{j,0}^{(P)} \psi_j + D_{j,+}^{(P)} \psi_{j+1} = 0 \end{align}

The coefficients are then given by:

\begin{align} C_{j,0}^{(P)} &:= -8 \\[6pt] D_{j,-}^{(P)} &:= \frac{x_{j-\frac12}}{\Delta x^2 x_j} \\[6pt] D_{j,0}^{(P)} &:= -\frac{x_{j+\frac12} + x_{j-\frac12}}{\Delta x^2 x_j} - P_j \\[6pt] D_{j,+}^{(P)} &:= \frac{x_{j+\frac12}}{\Delta x^2 x_j} \end{align}

The grid points at the boundaries are determined by the boundary conditions. At the outer boundary, \(x = x_{\max}\), the discretized boundary condition is given by:

\begin{align} u_N = 0 = \psi_N \end{align}

At the inner boundary, \(x = 0\), a distinction must be made between the cases \(m = 0\) and \(m \neq 0\). For \(m = 0\), the following holds:

\begin{align} \left(\partial_x u\right)_0 = 0 = \left(\partial_x \psi\right)_0 \end{align}

By definition, the derivative at the grid point with index \(j = 0\) is given by:

\begin{align} \phantom{\Rightarrow}\;& \left(\partial_x u\right)_0 = \frac{u_{\frac12} - u_{-\frac12}}{\Delta x} \\[6pt] \Leftrightarrow\;& \left(\partial_x u\right)_0 = \frac{u_1 - u_{-1}}{2 \Delta x} \end{align}

At this point, a term \(u_{-1}\) appears. However, a grid point with index \(j = -1\) does not exist on the grid under consideration. The same problem arises analogously for the dimensionless potential perturbation. Therefore, the derivative at the boundary grid point \(j = 0\) must be treated separately.

Since the approximation of the derivative used here has an error of order \(\mathcal{O}(\Delta x^2)\), it is convenient to replace the derivative at the boundary by an approximation of the same order. In the present case, this can only be achieved by means of a second-order forward difference approximation, since no grid point with index \(j = -1\) is available. The second-order forward difference approximation is given by:

\begin{align} \left(\partial_x u\right)_0 = \frac{-3 u_0 + 4 u_1 - u_2}{2 \Delta x} \end{align}

Thus, for the case \(m = 0\), the boundary condition on the cylinder axis can be approximated in the discretization as follows:

\begin{align} -3 u_0 + 4 u_1 - u_2 = 0 = -3 \psi_0 + 4 \psi_1 - \psi_2 \end{align}
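A quick numerical check (illustrative) confirms that this one-sided stencil is indeed second-order accurate, with the error dropping by roughly a factor of four when the step size is halved:

```python
import numpy as np

def forward_diff(f, dx):
    # one-sided second-order approximation of f'(0)
    return (-3 * f(0.0) + 4 * f(dx) - f(2 * dx)) / (2 * dx)

err = lambda dx: abs(forward_diff(np.exp, dx) - 1.0)  # exact derivative of exp at 0 is 1
ratio = err(0.1) / err(0.05)                          # ~4 for a second-order scheme
```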

For the case \(m \neq 0\), by contrast, the discretized boundary condition on the cylinder axis is given by:

\begin{align} u_0 = 0 = \psi_0 \end{align}

The problem can thus be formulated as a coupled system with originally \(2N + 2\) unknowns. However, since the boundary conditions at the outer boundary are already prescribed, namely \(u_N = 0\) and \(\psi_N = 0\), these two unknowns no longer need to be solved for. The resulting system is therefore reduced to \(2N\) unknowns, two equations for the boundary conditions at the origin, and \(2N - 2\) equations for the interior grid points \(j \in \{1,\ldots,N-1\}\).

This system can be written in matrix form as a generalized eigenvalue problem:

\begin{align} \mathbf{A}(\bar{k})\boldsymbol{y} = \bar{\sigma}^2\mathbf{B}\boldsymbol{y} \end{align}

Here, \(\mathbf{A}, \mathbf{B} \in \mathbb{R}^{2N \times 2N}\). The matrix \(\mathbf{A}\) contains all terms that are independent of \(\bar{\sigma}^2\), whereas \(\mathbf{B}\) contains all contributions multiplied by \(\bar{\sigma}^2\). The vector \(\boldsymbol{y}\) contains all unknowns and is given by:

\begin{align} \boldsymbol{y} = \left(u_0, u_1, \ldots, u_{N-1}, \psi_0, \psi_1, \ldots, \psi_{N-1}\right)^\mathsf{T} \in \mathbb{R}^{2N} \end{align}

The equations are arranged such that the first two rows describe the boundary conditions at the origin, while the subsequent rows contain the discretized equations at the interior grid points. Accordingly, all entries in the first two rows of the matrix \(\mathbf{A}\) are zero, except for those acting on \(u_0\), \(u_1\), and \(u_2\), and on \(\psi_0\), \(\psi_1\), and \(\psi_2\), respectively. These matrix elements encode the corresponding boundary conditions.

For \(m = 0\), the corresponding nonzero matrix elements are given by

\begin{align} & A_{0,0} = -3 \, , \qquad A_{0,1} = 4 \, , \qquad A_{0,2} = -1 \\[6pt] & A_{1,N} = -3 \, , \qquad A_{1,N+1} = 4 \, , \qquad A_{1,N+2} = -1 \end{align}

For the case \(m \neq 0\), by contrast, only those matrix elements acting on \(u_0\) and \(\psi_0\), respectively, are nonzero. In that case, the following holds:

\begin{align} A_{0,0} = 1 \quad \text{and} \quad A_{1,N} = 1 \end{align}

The rows \(r = j + 1\) with \(j \in \{1,\ldots,N-1\}\) are chosen such that they represent the discretized dimensionless continuity equation. The nonzero entries in these rows acting on the \(u\)-block are given by:

\begin{align} A_{j+1,j-1} = C_{j,-}^{(C)} \, , \quad A_{j+1,j} = C_{j,0}^{(C)} \, , \quad A_{j+1,j+1} = C_{j,+}^{(C)} \end{align}

This form formally applies to all \(j \in \{1,\ldots,N-1\}\). However, for the boundary case \(j = N-1\), the index \(j+1 = N\) would appear, which would refer to \(u_N\). Since \(u_N\), just like \(\psi_N\), has already been eliminated by the boundary condition and is therefore not contained in \(\boldsymbol{y}\), this contribution is not treated as an unknown and must accordingly be handled separately.

Analogously, the nonzero entries that act on the \(\psi\)-block in the continuity equation are given by:

\begin{align} A_{j+1,N+j-1} = D_{j,-}^{(C)} \, , \quad A_{j+1,N+j} = D_{j,0}^{(C)} \, , \quad A_{j+1,N+j+1} = D_{j,+}^{(C)} \end{align}

Here as well, the boundary case \(j = N-1\) must be treated separately, since \(N + j + 1 = 2N\) would refer to \(\psi_N\), which has already been eliminated by the boundary condition.

The rows \(r = N + j\) with \(j \in \{1,\ldots,N-1\}\) describe the discretized Poisson equation. The nonzero entries acting on the \(u\)-block are the following:

\begin{align} A_{N+j,j} = C_{j,0}^{(P)} \end{align}

The nonzero entries acting on the \(\psi\)-block are given by:

\begin{align} A_{N+j,N+j-1} = D_{j,-}^{(P)} \, , \quad A_{N+j,N+j} = D_{j,0}^{(P)} \, , \quad A_{N+j,N+j+1} = D_{j,+}^{(P)} \end{align}

In the Poisson equation as well, the boundary case \(j = N-1\) must be treated analogously to the continuity equation.

In the matrix \(\mathbf{B}\), the entries in the first two rows are zero. This is because \(\mathbf{B}\) contains only terms multiplied by \(\bar{\sigma}^2\), while the boundary conditions at the origin do not contain any contributions involving \(\bar{\sigma}^2\).

For the rows \(r = j+1\) with \(j \in \{1,\ldots,N-1\}\), which represent the dimensionless continuity equation, the only nonzero entries are given by:

\begin{align} B_{j+1,j} = 1 \end{align}

Since the dimensionless Poisson equation does not contain any terms depending on \(\bar{\sigma}^2\), all entries in the rows \(r = N + j\) representing the Poisson equation are zero.

This generalized eigenvalue problem can now be solved numerically. In the present work, the Python library SciPy was used for this purpose, as it provides routines for solving generalized eigenvalue problems. Of particular interest are the eigenvalues, since they determine the linear temporal evolution of the perturbations.
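As an illustration of this step, the following sketch solves a small stand-in pencil \((\mathbf{A},\mathbf{B})\) with scipy.linalg.eig; in the thesis, the assembled \(2N \times 2N\) matrices take the place of the toy matrices. The zero row in \(\mathbf{B}\) mimics the identically vanishing rows of \(\mathbf{B}\) and produces a non-finite generalized eigenvalue, which must be discarded:

```python
import numpy as np
from scipy.linalg import eig

# Stand-in pencil for the generalized problem A y = sigma^2 B y.
# The zero row in B plays the role of the boundary and Poisson rows,
# which contribute non-finite eigenvalues that are filtered out.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0]])
B = np.diag([1.0, 1.0, 0.0])

sigma2, vecs = eig(A, B)                            # generalized eigenvalue problem
sigma2 = np.sort(sigma2[np.isfinite(sigma2)].real)  # keep the finite part
print(sigma2)
```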

Structure of the numerical spectrum

For a fixed pair \((m,\bar{k})\), the linear stability analysis reduces to an eigenvalue problem in the dimensionless radial coordinate \(x\). In general, this problem possesses several eigenvalues \(\bar{\sigma}_n^2\) with corresponding eigenfunctions \(u_n(x)\) and \(\psi_n(x)\), respectively. Before interpreting individual parts of this spectrum physically or comparing different pairs \((m,\bar{k})\) with one another, it is first convenient to examine the structure of the entire numerically obtained spectrum for a specific example.

In the following, the spectrum of eigenvalues for the fixed pair \((m,\bar{k}) = (0,1)\) is considered as an illustrative example. The main purpose is to investigate the qualitative structure of the numerically determined spectrum. For this purpose, the numerical solution of the given equations is first carried out on a computational domain with \(x_{\max} = 25\) using \(N = 500\) grid points.

For the illustrative pair \((m,\bar{k}) = (0,1)\), the numerical solution of the generalized eigenvalue problem yields a discrete spectrum of eigenvalues \(\bar{\sigma}_n^2\). A characteristic structure of the spectrum becomes apparent. There exists exactly one eigenvalue above the threshold \(-\bar{k}^2\), that is, satisfying the condition \(\bar{\sigma}^2 > -\bar{k}^2\). All remaining numerically determined eigenvalues, by contrast, lie below this threshold, that is, at \(\bar{\sigma}^2 < -\bar{k}^2\).

The following figure shows the corresponding numerically determined spectrum for \((m,\bar{k}) = (0,1)\). The dashed horizontal line marks the threshold \(-\bar{k}^2\). The inset shows the last ten eigenvalues of the spectrum and illustrates that only a single eigenvalue lies above this threshold, while all remaining eigenvalues stay below it.

[[File:MAES-SpectrumM0K1Xmax25N500.png|400px|thumb|center|Spectrum of the numerically determined eigenvalues \(\bar{\sigma}_n^2\) for the fixed pair \((m,\bar{k}) = (0,1)\), on a computational domain with \(x_{\max} = 25\) and \(N = 500\) grid points. The dashed horizontal line marks the threshold \(-\bar{k}^2\). The inset shows the last ten eigenvalues.]]

This observation is not restricted to the pair \((m,\bar{k}) = (0,1)\), but is also found for the other numerically investigated combinations of azimuthal mode number \(m\) and dimensionless wavenumber \(\bar{k}\). Numerically, it is further observed that the eigenvalues below the threshold approach \(-\bar{k}^2\) from below. Whether this threshold actually constitutes an accumulation point of the spectrum is not proved here, however. For the arguments and observations presented in the following, this is also not necessary.

This numerically observed structure of the spectrum should not be understood as a mere numerical finding, but can be explained from the structure of the underlying boundary value problem. The generalized eigenvalue problem is formulated on the semi-infinite interval \(x \in [0,\infty)\) and is therefore determined to a large extent by the behavior of the solutions in the two boundary regions. A physically admissible solution must satisfy both the boundary condition at \(x = 0\) and the one at \(x \to \infty\). It is therefore natural to examine the coupled differential equations more closely precisely in these two limiting regions. In particular, it turns out that the threshold \(-\bar{k}^2\) observed in the numerical spectrum arises naturally from the behavior of the equations at large radii.

Asymptotic behavior of the solutions

In the following, the local behavior of the solutions at \(x = 0\) is first examined, followed by the asymptotic behavior for \(x \to \infty\). On this basis, it then becomes possible to understand why the numerically determined spectrum for a fixed pair \((m,\bar{k})\) exhibits the characteristic structure observed above.

To investigate the local behavior of the solutions at \(x = 0\), it is convenient to consider the continuity equation in the following form:

\begin{align} \frac{1}{x}\partial_x \left( x q \left(\partial_x \left(\frac{u}{q}\right) + \partial_x \psi \right)\right) = \bar{\sigma}^2 u + q\left(\frac{m^2}{x^2} + \bar{k}^2\right)\left(\frac{u}{q} + \psi\right) \end{align}

For small \(x\), \(q(x)\) can be expanded about \(x = 0\). This gives:

\begin{align} q(x)=1+\mathcal{O}(x^2) \end{align}

In particular, this implies the following asymptotic behavior as \(x \to 0\):

\begin{align} q(x)\sim 1 \end{align}

Thus, for small \(x\), the continuity equation asymptotically reduces to:

\begin{align} \frac{1}{x}\partial_x \left( x \left(\partial_x u + \partial_x \psi \right)\right) = \bar{\sigma}^2 u + \left(\frac{m^2}{x^2}+\bar{k}^2\right)(u+\psi) \end{align}

Since the coupled differential equations contain terms of the form \(\frac{1}{x}\) and \(\frac{1}{x^2}\) at the origin, \(x = 0\) is a singular point of the system. To determine the local behavior of the solutions near this point, a Frobenius ansatz is used in the following. To this end, the local solutions near \(x = 0\) are assumed to be of the form:

\begin{align} u(x)=x^\nu \sum_{n=0}^{\infty} a_n x^n \quad \text{and} \quad \psi(x)=x^\mu \sum_{n=0}^{\infty} b_n x^n \end{align}

Here, \(a_0 \neq 0\) and \(b_0 \neq 0\), while the exponents \(\nu\) and \(\mu\), respectively, are to be determined from the differential equations.

Since the asymptotic behavior as \(x \to 0\) is of interest here, it is sufficient to consider only the dominant term of each ansatz. Thus, asymptotically,

\begin{align} u(x)\sim a_0 x^\nu \quad \text{and} \quad \psi(x)\sim b_0 x^\mu \end{align}

Substituting into the continuity equation and subsequently collecting terms according to powers of \(x\) yields:

\begin{align} a_0\left(\nu^2-m^2\right)x^{\nu-2} + b_0\left(\mu^2-m^2\right)x^{\mu-2} - a_0\left(\bar{\sigma}^2+\bar{k}^2\right)x^\nu - b_0\bar{k}^2 x^\mu =0 \end{align}

As \(x \to 0\), the terms with the smaller exponents dominate over those with the larger exponents. Therefore, in the leading-order behavior, the coefficients of the dominant terms must vanish. It follows that:

\begin{align} \nu=\pm |m| \quad \text{and} \quad \mu=\pm |m| \end{align}

Since the underlying differential equations are linear, the general local solution is given by a superposition of the corresponding branches. Thus, asymptotically as \(x \to 0\),

\begin{align} u(x)\sim a_{0,+}x^{|m|}+a_{0,-}x^{-|m|} \quad \text{and} \quad \psi(x)\sim b_{0,+}x^{|m|}+b_{0,-}x^{-|m|} \end{align}

The branches with negative exponents diverge as \(x \to 0\) and are therefore incompatible with the boundary conditions at the origin. Consequently, the coefficients of the diverging solutions must vanish:

\begin{align} a_{0,-} = 0 = b_{0,-} \end{align}

This yields the asymptotic solutions near \(x = 0\):

\begin{align} u(x)\sim a_{0,+}x^{|m|} \quad \text{and} \quad \psi(x)\sim b_{0,+}x^{|m|} \end{align}

To investigate the asymptotic behavior of the solutions as \(x \to \infty\), it is convenient to first consider the Poisson equation. This is an inhomogeneous linear differential equation, so its general solution \(\psi(x)\) is the sum of the general solution \(\psi_{\text{h}}(x)\) of the associated homogeneous equation and a particular solution \(\psi_{\text{p}}(x)\) of the inhomogeneous equation:

\begin{align} \psi(x) = \psi_{\text{h}}(x) + \psi_{\text{p}}(x) \end{align}

The associated homogeneous Poisson equation is given by:

\begin{align} \phantom{\Leftrightarrow}\;& \frac{1}{x} \partial_x \left(x \partial_x \psi_{\text{h}}\right) - \left(\frac{m^2}{x^2} + \bar{k}^2\right) \psi_{\text{h}} = 0 \\[6pt] \Leftrightarrow\;& \partial_x^2 \psi_{\text{h}} + \frac{1}{x} \partial_x \psi_{\text{h}} - \left(\frac{m^2}{x^2} + \bar{k}^2\right) \psi_{\text{h}} = 0 \end{align}

With the substitution \(z = \bar{k}x\), this equation can be transformed into the form of the modified Bessel equation:

\begin{align} z^2 \partial_z^2 \psi_{\text{h}} + z \partial_z \psi_{\text{h}} - \left(m^2 + z^2\right)\psi_{\text{h}} = 0 \end{align}

The general solution of this equation is given by the modified Bessel functions:

\begin{align} \psi_{\text{h}}(x) = b_{\text{h},+} I_m(\bar{k}x) + b_{\text{h},-} K_m(\bar{k}x) \end{align}

For \(x \to \infty\), this yields the following asymptotic behavior:

\begin{align} \psi_{\text{h}}(x) \sim b_{\text{h},+} x^{-\frac12} \exp(\bar{k}x) + b_{\text{h},-} x^{-\frac12} \exp(-\bar{k}x) \end{align}
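These large-argument forms can be checked numerically against the implementations of the modified Bessel functions in scipy.special (a quick consistency check, independent of the thesis pipeline):

```python
import numpy as np
from scipy.special import iv, kv

# Large-argument asymptotics of the modified Bessel functions:
#   I_m(z) ~ exp(z) / sqrt(2 pi z),   K_m(z) ~ sqrt(pi / (2 z)) exp(-z).
m, z = 0, 30.0
ratio_i = iv(m, z) / (np.exp(z) / np.sqrt(2.0 * np.pi * z))
ratio_k = kv(m, z) / (np.sqrt(np.pi / (2.0 * z)) * np.exp(-z))
print(ratio_i, ratio_k)  # both ratios approach 1 as z grows
```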

Since the perturbations must vanish in the limit \(x \to \infty\), the exponentially growing branch of the homogeneous solution is not admissible. Consequently, the following must hold:

\begin{align} b_{\text{h},+}=0 \end{align}

The asymptotic behavior of the homogeneous solution of the Poisson equation is therefore given by:

\begin{align} \psi_{\text{h}}(x)\sim x^{-\frac12} \exp(-\bar{k}x) \end{align}

To determine the asymptotic behavior of the particular solution \(\psi_{\text{p}}(x)\), the asymptotic behavior of \(u(x)\) is also required, since, up to a constant prefactor, the inhomogeneity of the Poisson equation is precisely given by \(u(x)\). To this end, it is first necessary to examine the asymptotic behavior of the \(x\)-dependent prefactors in the differential equations. For large \(x\), these can be expanded as follows:

\begin{align} q(x)=x^{-4}+\mathcal{O}(x^{-6}) \qquad \text{and} \qquad \frac{\partial_x q(x)}{q(x)}=-4x^{-1}+\mathcal{O}(x^{-3}) \end{align}

In particular, this implies the following asymptotic behavior as \(x\to\infty\):

\begin{align} q(x)\sim x^{-4} \qquad \text{and} \qquad \frac{\partial_x q(x)}{q(x)}\sim -4x^{-1} \end{align}

Thus, the continuity equation is asymptotically given by:

\begin{align} \phantom{\Leftrightarrow}\;& \frac{1}{x}\partial_x\left(x\left(\partial_x u+\frac{4}{x}u+\frac{1}{x^4}\partial_x\psi\right)\right) = \bar{\sigma}^2 u+\left(\frac{m^2}{x^2}+\bar{k}^2\right)\left(u+\frac{\psi}{x^4}\right) \\[6pt] \Leftrightarrow\;& \partial_x^2 u+\frac{5}{x}\partial_x u-\left(\bar{\sigma}^2+\frac{m^2}{x^2}+\bar{k}^2\right)u = \frac{1}{x^4}\partial_x^2\psi-\frac{3}{x^5}\partial_x\psi+\left(\frac{m^2}{x^6}+\frac{\bar{k}^2}{x^4}\right)\psi. \end{align}

Formally, the \(\psi\)-terms appearing on the right-hand side can be regarded as an inhomogeneity of the differential equation for \(u(x)\). Since \(\psi(x)\), however, is itself an unknown function coupled to \(u(x)\), these terms are more precisely interpreted as coupling terms. If the \(\psi\)-terms are formally regarded as an inhomogeneity, the associated homogeneous continuity equation is given by:

\begin{align} \partial_x^2 u_{\text{h}}+\frac{5}{x}\partial_x u_{\text{h}}-\left(\bar{\sigma}^2+\frac{m^2}{x^2}+\bar{k}^2\right)u_{\text{h}}=0 \end{align}

Since for \(\bar{\sigma}^2 \neq -\bar{k}^2\) the equation contains a nonvanishing constant coefficient in addition to terms proportional to \(x^{-1}\) and \(x^{-2}\), it is natural in this case to use the following ansatz for the asymptotic solution of the differential equation:

\begin{align} u_{\text{h}}(x) \sim x^\beta \exp(\lambda x) \end{align}

Substituting into the homogeneous continuity equation and subsequently collecting terms according to powers of \(x\) yields:

\begin{align} \left( \lambda^2-(\bar{\sigma}^2+\bar{k}^2) +\frac{(2\beta+5)\lambda}{x} +\frac{\beta^2+4\beta-m^2}{x^2} \right)u_{\text{h}}=0 \end{align}
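This collected form can be verified symbolically, for instance with sympy (a sketch; the symbol names are arbitrary):

```python
import sympy as sp

# Substitute u_h = x**beta * exp(lam * x) into the homogeneous continuity
# equation and divide by u_h; the result should reproduce the prefactor
#   lam^2 - (sigma2 + k^2) + (2*beta + 5)*lam/x + (beta^2 + 4*beta - m^2)/x^2.
x = sp.symbols('x', positive=True)
beta, lam, sigma2, k, m = sp.symbols('beta lam sigma2 k m')
u = x ** beta * sp.exp(lam * x)
lhs = sp.diff(u, x, 2) + 5 / x * sp.diff(u, x) - (sigma2 + m ** 2 / x ** 2 + k ** 2) * u
prefactor = sp.simplify(lhs / u)
expected = lam ** 2 - (sigma2 + k ** 2) + (2 * beta + 5) * lam / x \
    + (beta ** 2 + 4 * beta - m ** 2) / x ** 2
print(sp.simplify(prefactor - expected))  # reduces to zero
```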

Since the equation under consideration describes the asymptotic behavior of the solution for \(x\to\infty\), the prefactor in parentheses must asymptotically tend to zero. It follows that:

\begin{align} \lambda^2-(\bar{\sigma}^2+\bar{k}^2) +\frac{(2\beta+5)\lambda}{x} +\frac{\beta^2+4\beta-m^2}{x^2} \to 0 \qquad \text{for } x\to\infty. \end{align}

As \(x\to\infty\), the constant term dominates over the terms proportional to \(x^{-1}\) and \(x^{-2}\). Therefore, the following must first hold:

\begin{align} \phantom{\Leftrightarrow}\;& \lambda^2 = \bar{\sigma}^2+\bar{k}^2 =: \Lambda^2 \\[6pt] \Leftrightarrow\;& \lambda = \pm \Lambda \end{align}

If \(\lambda\neq 0\), that is, if \(\bar{\sigma}^2\neq -\bar{k}^2\), then after the constant term has vanished, the term proportional to \(x^{-1}\) is the next-to-leading contribution. In order for this term to vanish as well, the following condition must hold:

\begin{align} (2\beta+5)\lambda=0 \end{align}

Since \(\lambda\neq 0\), this immediately implies:

\begin{align} \beta=-\frac{5}{2} \end{align}

Thus, for \(\bar{\sigma}^2\neq -\bar{k}^2\), the asymptotic behavior of the homogeneous solution is given by:

\begin{align} u_{\text{h}}(x)\sim x^{-5/2}\left(a_{\text{h},+}\exp(\Lambda x)+a_{\text{h},-}\exp(-\Lambda x)\right) \end{align}

Here as well, the perturbation must vanish at infinity, so that the exponentially growing branch is not admissible. Hence, \(a_{\text{h},+}=0\) must hold. The asymptotic behavior of the homogeneous solution of the continuity equation for \(\bar{\sigma}^2\neq -\bar{k}^2\) is therefore given by:

\begin{align} u_{\text{h}}(x)\sim x^{-5/2}\exp(-\Lambda x) \end{align}

In the limiting case \(\bar{\sigma}^2=-\bar{k}^2\), the term with constant prefactor vanishes. In this case, the homogeneous continuity equation is given by:

\begin{align} \partial_x^2 u_{\mathrm{h}}+\frac{5}{x}\partial_x u_{\mathrm{h}}-\frac{m^2}{x^2}u_{\mathrm{h}}=0 \end{align}

Since no constant prefactors appear in this equation any longer, it is natural to use a power-law ansatz of the following form for the asymptotic solution:

\begin{align} u_{\mathrm{h}}(x)\sim x^\beta \end{align}

Substituting into the homogeneous continuity equation yields:

\begin{align} \frac{\beta^2+4\beta-m^2}{x^2}u_{\mathrm{h}}=0 \end{align}

Here as well, the prefactor must asymptotically tend to zero. This implies the following for the exponent \(\beta\):

\begin{align} \beta=-2\pm\sqrt{4+m^2} \end{align}

Thus, for \(\bar{\sigma}^2=-\bar{k}^2\), the asymptotic behavior of the homogeneous solution is given by:

\begin{align} u_{\mathrm{h}}(x)\sim x^{-2}\left(a_{\mathrm{h},+}x^{\sqrt{4+m^2}}+a_{\mathrm{h},-}x^{-\sqrt{4+m^2}}\right) \end{align}

Here as well, the perturbation must vanish at infinity, so that the algebraically growing branch, or, in the special case \(m=0\), the constant branch, is not admissible. Hence, \(a_{\mathrm{h},+}=0\) must hold. The asymptotic behavior of the homogeneous solution of the continuity equation for \(\bar{\sigma}^2=-\bar{k}^2\) is therefore given by:

\begin{align} u_{\mathrm{h}}(x) \sim x^{-2-\sqrt{4+m^2}} \end{align}

After determining the homogeneous asymptotic solution branches, it must first be clarified which of these contributions asymptotically dominates as \(x\to\infty\). This is crucial for the further construction of the particular solutions, since the coupling terms in the two differential equations are generated by the other perturbation quantity in each case. Strictly speaking, the general solution of the other equation would have to be inserted into the respective inhomogeneity. Since this is not directly possible because of the coupling, the asymptotically most slowly decaying homogeneous solution branch is used first in the following in order to determine the particular contribution generated by it. Since the actual asymptotic structure of the coupled system is additionally influenced by the induced particular contributions, it must then be checked whether these remain subdominant with respect to the homogeneous contributions or become asymptotically dominant.

In the following, a distinction must therefore be made according to whether \(u_{\text{h}}(x)\) or \(\psi_{\text{h}}(x)\) decays more slowly asymptotically as \(x\to\infty\).

For the case \(\bar{\sigma}^2 \geq 0\), the relation \(\Lambda \geq \bar{k}\) holds. Therefore, the homogeneous solution of \(\psi\) decays more slowly than the homogeneous solution of \(u\) and is thus asymptotically dominant. Consequently, the \(\psi\)-terms in the continuity equation are asymptotically not subdominant. Since \(\partial_x^2 \psi_{\text{h}} \sim \bar{k}^2 \psi_{\text{h}}\), the coupling terms \(\frac{1}{x^4}\partial_x^2\psi_{\text{h}}\) and \(\frac{\bar{k}^2}{x^4}\psi_{\text{h}}\) are of the same order and together dominate the right-hand side. Since \(\bar{k}\) is a parameter independent of \(x\), the dominant coupling terms asymptotically scale as:

\begin{align} \frac{\bar{k}^2}{x^4} \psi_{\text{h}}(x) \sim x^{-9/2} \exp(-\bar{k}x) \end{align}

Therefore, for the particular solution of \(u\), an asymptotic ansatz with the same exponential rate is natural. Since derivatives of the exponential factor merely produce constant prefactors, whereas derivatives of the algebraic factor only yield additional inverse powers of \(x\), which makes them asymptotically subdominant, the following asymptotic behavior is obtained for the particular solution of \(u\):

\begin{align} u_{\text{p}}(x) \sim x^{-9/2} \exp(-\bar{k}x) \end{align}

Since the relation \(\Lambda \geq \bar{k}\) holds for \(\bar{\sigma}^2 \geq 0\), the particular solution decays more slowly than the homogeneous solution. Thus, the particular solution is asymptotically dominant. The general solution of \(u\) therefore exhibits the following asymptotic behavior:

\begin{align} u(x) \sim x^{-9/2} \exp(-\bar{k}x) \end{align}

For the particular solution of \(\psi\), the inhomogeneity decays with the same exponential rate \(\exp(-\bar{k}x)\) as the homogeneous solutions of the Poisson equation. In this resonant situation, the particular solution does not simply mirror the inhomogeneity: inserting the ansatz \(\psi_{\text{p}} \sim x^{\gamma}\exp(-\bar{k}x)\) into the Poisson equation shows that the leading exponential contributions cancel, so that the left-hand side scales as \(x^{\gamma-1}\exp(-\bar{k}x)\). Matching to the inhomogeneity \(\propto x^{-9/2}\exp(-\bar{k}x)\) yields \(\gamma = -7/2\). Hence,

\begin{align} \psi_{\text{p}}(x) \sim x^{-7/2} \exp(-\bar{k}x) \end{align}

Although the particular solution of \(\psi\) decays with the same exponential rate as the homogeneous solution, it exhibits a stronger algebraic decay and is therefore asymptotically subdominant relative to the homogeneous solution. The asymptotic solution of \(\psi\) for the case \(\bar{\sigma}^2 \geq 0\) is thus given by:

\begin{align} \psi(x) \sim x^{-1/2} \exp(-\bar{k}x) \end{align}

For the case \(-\bar{k}^2<\bar{\sigma}^2<0\), the relation \(\Lambda<\bar{k}\) holds. Therefore, the homogeneous solution of \(u\) decays more slowly than the homogeneous solution of \(\psi\) and is thus asymptotically dominant. Consequently, the particular solution of \(\psi\) is determined first. By analogy with the previous argument, it must asymptotically have the same form as the inhomogeneity. Since the inhomogeneity of the Poisson equation is given, up to a constant prefactor, by the homogeneous solution of \(u\), the following asymptotic behavior is obtained for the particular solution of \(\psi\):

\begin{align} \psi_{\mathrm{p}}(x) \sim x^{-5/2}\exp(-\Lambda x) \end{align}

Since the relation \(\Lambda<\bar{k}\) holds for \(-\bar{k}^2<\bar{\sigma}^2<0\), the particular solution \(\psi_{\mathrm{p}}(x)\) decays more slowly than the homogeneous solution \(\psi_{\mathrm{h}}(x)\). Thus, the particular solution is asymptotically dominant, and the following asymptotic behavior is obtained for the general solution of \(\psi\):

\begin{align} \psi(x) \sim x^{-5/2}\exp(-\Lambda x) \end{align}

The \(\psi\)-terms appearing on the right-hand side of the asymptotic continuity equation all carry additional inverse powers of \(x\). Therefore, these coupling terms, both with and without derivatives, are asymptotically subdominant relative to the homogeneous solution of \(u\). Consequently, the particular solution of \(u\) induced by them also remains subdominant relative to the homogeneous solution. The general solution of \(u\) is therefore asymptotically determined by the homogeneous solution:

\begin{align} u(x) \sim x^{-5/2}\exp(-\Lambda x) \end{align}

For the case \(\bar{\sigma}^2=-\bar{k}^2\), the homogeneous solution of \(u\) decays purely algebraically. Therefore, it decays asymptotically more slowly than the homogeneous solution of \(\psi\) and is thus asymptotically dominant. Consequently, the particular solution of \(\psi\) is determined first. By analogy with the previous argument, the following asymptotic behavior is obtained:

\begin{align} \psi_{\mathrm{p}}(x)\sim x^{-2-\sqrt{4+m^2}} \end{align}

Since the homogeneous solution of \(\psi\) decays exponentially, it decays, in particular, faster than the particular solution. Thus, the particular solution is asymptotically dominant, and the following asymptotic behavior is obtained for the general solution of \(\psi\) in the case \(\bar{\sigma}^2=-\bar{k}^2\):

\begin{align} \psi(x)\sim x^{-2-\sqrt{4+m^2}} \end{align}

By analogy with the previous argument, the terms on the right-hand side of the continuity equation carry additional inverse powers of \(x\) and are therefore asymptotically subdominant relative to the homogeneous solution of \(u\). Consequently, the induced particular solution of \(u\) also remains subdominant relative to the homogeneous solution. The general solution of \(u\) is therefore asymptotically determined by the homogeneous solution:

\begin{align} u(x)\sim x^{-2-\sqrt{4+m^2}} \end{align}

In the case \(\bar{\sigma}^2<-\bar{k}^2\), \(\Lambda\) becomes complex. It is therefore convenient to introduce the following real positive quantity:

\begin{align} \mu := \sqrt{|\bar{\sigma}^2|-\bar{k}^2} \end{align}

This gives:

\begin{align} \Lambda=i\mu \end{align}

In this case, the \(\exp(+\Lambda x)\) branch of the homogeneous solution of \(u\) must not be discarded. The reason is that a purely imaginary \(\Lambda\) does not lead to exponential growth or decay, but to oscillatory behavior. Thus, for the case \(\bar{\sigma}^2<-\bar{k}^2\), the homogeneous solution of \(u\) is asymptotically given by:

\begin{align} u_{\mathrm{h}}(x)\sim x^{-5/2} \left( a_{\mathrm{h},+}\exp(i\mu x)+a_{\mathrm{h},-}\exp(-i\mu x) \right) \end{align}

Thus, the homogeneous solution of \(u\) is asymptotically algebraically damped and oscillatory. In particular, it decays more slowly than the homogeneous solution of \(\psi\), which remains exponentially damped. Therefore, in this case as well, the particular solution of \(\psi\) is determined first. By analogy with the previous argument, the following asymptotic behavior is obtained:

\begin{align} \psi_{\mathrm{p}}(x)\sim x^{-5/2} \left( b_{\mathrm{p},+}\exp(i\mu x)+b_{\mathrm{p},-}\exp(-i\mu x) \right) \end{align}

Since the homogeneous solution of \(\psi\) decays exponentially, it decays, in particular, faster than the particular solution. Thus, the particular solution is asymptotically dominant, and the following asymptotic behavior is obtained for the general solution of \(\psi\) in the case \(\bar{\sigma}^2<-\bar{k}^2\):

\begin{align} \psi(x)\sim x^{-5/2} \left( b_{\mathrm{p},+}\exp(i\mu x)+b_{\mathrm{p},-}\exp(-i\mu x) \right) \end{align}

By analogy with the previous argument, the terms on the right-hand side of the continuity equation carry additional inverse powers of \(x\) and are therefore asymptotically subdominant relative to the homogeneous solution of \(u\). Consequently, the induced particular solution of \(u\) also remains subdominant relative to the homogeneous solution. The general solution of \(u\) is therefore asymptotically determined by the homogeneous solution:

\begin{align} u(x)\sim x^{-5/2} \left( a_{\mathrm{h},+}\exp(i\mu x)+a_{\mathrm{h},-}\exp(-i\mu x) \right) \end{align}

The analysis thus shows that the asymptotic behavior of the perturbation quantities \(u\) and \(\psi\) depends crucially on the value of \(\bar\sigma^2+\bar k^2\). For \(x \to \infty\), the following behavior is obtained in the different cases:

\begin{align} & \text{(i)} \quad \bar\sigma^2 \ge 0: \quad u(x) \sim x^{-9/2}\exp(-\bar k x) \, , \quad \psi(x) \sim x^{-1/2}\exp(-\bar k x) \\[6pt] & \text{(ii)} \quad -\bar k^2 < \bar\sigma^2 < 0: \quad u(x) \sim x^{-5/2}\exp(-\Lambda x) \, , \quad \psi(x) \sim x^{-5/2}\exp(-\Lambda x) \\[6pt] & \text{(iii)} \quad \bar\sigma^2 = -\bar k^2: \quad u(x) \sim x^{-2-\sqrt{4+m^2}} \, , \quad \psi(x) \sim x^{-2-\sqrt{4+m^2}} \\[6pt] & \text{(iv)} \quad \bar\sigma^2 < -\bar k^2: \quad u(x) \sim x^{-5/2}\left(a_+\exp(i\mu x)+a_-\exp(-i\mu x)\right) \, , \quad \psi(x) \sim x^{-5/2}\left(b_+\exp(i\mu x)+b_-\exp(-i\mu x)\right) \end{align}

Here,

\begin{align} \Lambda = \sqrt{\bar\sigma^2+\bar k^2}\in\mathbb{R}_{>0} \, , \quad \mu = \sqrt{|\bar\sigma^2|-\bar k^2}\in\mathbb{R}_{>0} \end{align}

The asymptotic analysis therefore shows that the value \(-\bar k^2\) plays a distinguished role. For \(\bar\sigma^2 > -\bar k^2\), the solutions exhibit exponentially decaying behavior as \(x \to \infty\). In the limiting case \(\bar\sigma^2 = -\bar k^2\), this exponential behavior is lost, and the decay becomes purely algebraic. For \(\bar\sigma^2 < -\bar k^2\), the solutions finally exhibit asymptotically oscillatory behavior with an algebraically decaying envelope. The threshold at \(-\bar k^2\) observed numerically is therefore not accidental, but follows directly from the asymptotic structure of the underlying differential equations.

Interpretation of the spectral structure

For the further discussion of the spectrum, the region \(\bar\sigma^2 < -\bar k^2\) is considered first. As the asymptotic analysis shows, the solutions in this case no longer exhibit exponentially or purely algebraically decaying asymptotic behavior. Instead, they oscillate as \(x \to \infty\), with an algebraically decaying envelope. This fundamentally distinguishes this region from the case \(\bar\sigma^2 \ge -\bar k^2\).

This distinction is crucial for the structure of the spectrum. In the case of exponentially or purely algebraically decaying solutions, the growing solution branch is excluded by the boundary condition at \(x \to \infty\), so that asymptotically only a single admissible branch remains. For \(\bar\sigma^2 < -\bar k^2\), such a selection no longer occurs. The asymptotic solutions do not diverge there, but oscillate with an algebraically decaying envelope. The boundary condition at infinity therefore constrains the solution in this region much more weakly.

This implies that for \(\bar\sigma^2 < -\bar k^2\), isolated discrete eigenvalues are generally not to be expected, but rather a continuous spectral region. Numerically, this region is approximated on the finite computational domain \([0,x_{\max}]\) by a sequence of discrete eigenvalues and therefore appears as a quasi-continuous spectrum. This explains why many eigenvalues occur below the threshold \(-\bar k^2\) in the numerically determined spectrum.

The region \(\bar\sigma^2 \ge -\bar k^2\) is now considered. As the asymptotic analysis shows, the solutions in this case exhibit asymptotically exponentially or purely algebraically decaying behavior. As a result, the solution at infinity is constrained much more strongly than in the case \(\bar\sigma^2 < -\bar k^2\), where oscillatory solutions with an algebraically decaying envelope occur. This is the reason why no quasi-continuous spectral structure is present above the threshold, but only isolated eigenvalues are possible.

More precisely, for \(\bar\sigma^2 \ge -\bar k^2\), only a single admissible branch remains asymptotically after the growing solution branches have been excluded. This is decisive for the structure of the spectrum. The coupled system of differential equations for \(u\) and \(\psi\) is of second order in both quantities and therefore initially possesses a total of four free integration constants. The regularity conditions at the origin reduce this number to two. Because of the linearity of the problem, only a single free relative parameter then remains in essence, characterizing the remaining family of regular solutions. The reason is that multiplying \(u\) and \(\psi\) simultaneously by an arbitrary nonzero constant again yields a solution of the same linear system, so that the absolute overall amplitude plays no role for the question of the existence of an eigenfunction. For a fixed value of \(\bar\sigma^2\), this generally yields a regular solution that contains both a decaying and a growing exponential contribution in the far field. The boundary condition at \(x \to \infty\) then additionally requires that the coefficient of the growing branch vanish. This provides an additional matching condition, which is generally satisfied only for isolated values of \(\bar\sigma^2\). Above the threshold \(-\bar k^2\), a discrete spectrum is therefore to be expected.

This matching condition can be made immediately visible by means of a shooting method. For fixed \((m,\bar k)\) and fixed \(\bar\sigma^2\), the solution that is regular at the origin is integrated outward. Since the origin is a singular point of the system of differential equations, the numerical integration is not started directly at \(x = 0\), but at a small value \(x = x_{\min} > 0\). The corresponding initial values are obtained from the local behavior of the regular solution derived above. After a suitable normalization of the overall amplitude, only the relative amplitude of the initial values of the two perturbation quantities remains as a free parameter. In the following, this free parameter is denoted by \(b\) and is defined by:

\begin{align} b := \frac{\psi( x_{\min})}{u( x_{\min})} \end{align}

For fixed \(\bar\sigma^2\), the condition \(u(x_{\max}) = 0\) determines a value \(b_u(\bar\sigma^2)\), while the condition \(\psi(x_{\max}) = 0\) determines a value \(b_{\psi}(\bar\sigma^2)\). An eigenfunction exists if and only if both conditions are satisfied simultaneously. This is the case precisely when the following holds:

\begin{align} b_u(\bar\sigma^2) = b_{\psi}(\bar\sigma^2) \end{align}

It is therefore convenient to introduce the following difference as a diagnostic quantity:

\begin{align} \Delta b(\bar\sigma^2) := b_u(\bar\sigma^2) - b_{\psi}(\bar\sigma^2) \end{align}

The eigenvalues then correspond exactly to the zeros of \(\Delta b\).
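The \(\Delta b\) construction can be sketched as follows. Reproducing the full dimensionless equations here would go beyond an illustration, so a regular stand-in system with the same structure is used: two coupled second-order equations, a single free relative amplitude \(b\), and two base integrations combined by linearity. The stand-in system \(u'' = \bar\sigma^2 u + \psi\), \(\psi'' = \bar\sigma^2 \psi + u\) has the known eigenvalues \(\bar\sigma^2 = 1 - ((n+\tfrac12)\pi/L)^2\) on \([0,L]\), which makes the machinery verifiable; for the actual problem, the thesis' equations and the singular-point initial data at \(x_{\min}\) would be substituted:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting diagnostic Delta b(sigma^2) for a regular stand-in system:
#   u'' = sigma2*u + psi,  psi'' = sigma2*psi + u  on [0, L],
# with u'(0) = psi'(0) = 0 and outer conditions u(L) = psi(L) = 0.
L = 5.0

def rhs(x, y, sigma2):
    u, du, psi, dpsi = y
    return [du, sigma2 * u + psi, dpsi, sigma2 * psi + u]

def endpoint(y0, sigma2):
    sol = solve_ivp(rhs, (0.0, L), y0, args=(sigma2,), rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[2, -1]                 # u(L), psi(L)

def delta_b(sigma2):
    # By linearity, any outward solution is a combination of two base runs.
    u1, p1 = endpoint([1.0, 0.0, 0.0, 0.0], sigma2)   # u(0) = 1, psi(0) = 0
    u2, p2 = endpoint([0.0, 0.0, 1.0, 0.0], sigma2)   # u(0) = 0, psi(0) = 1
    b_u = -u1 / u2       # value of b enforcing u(L) = 0
    b_psi = -p1 / p2     # value of b enforcing psi(L) = 0
    return b_u - b_psi

# Delta b changes sign across an eigenvalue; bracket it and refine with brentq.
root = brentq(delta_b, 0.0, 0.3)
print(root)  # close to 1 - (3*pi/10)**2 ≈ 0.11174
```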

In the following figure, \(\Delta b(\bar\sigma^2)\) is shown as a function of \(\bar\sigma^2\) for \((m,\bar k) = (0,1)\). The zeros of \(\Delta b(\bar\sigma^2)\) correspond exactly to the eigenvalues.

\colorbox{yellow}{Plot of \(\Delta b(\bar\sigma^2)\), caption: Shooting analysis for \((m,\bar k)=(0,1)\): on the left, \(\Delta b(\bar\sigma^2)\); on the right, \(|\Delta b(\bar\sigma^2)|\) on a logarithmic scale. The red points mark the zeros of \(\Delta b\), which correspond exactly to the eigenvalues. The dashed line indicates the threshold \(-\bar k^2=-1\).}

The numerical evaluation of the shooting method shows that \(\Delta b(\bar\sigma^2)\) has several zeros below the threshold \(-\bar k^2\), whereas only a single zero occurs above this threshold. This behavior is consistent with the spectrum discussed above: below the threshold, the zeros sample the quasi-continuous part of the spectrum, whereas above the threshold only isolated eigenvalues are possible. For the case considered here, \((m,\bar k) = (0,1)\), exactly one eigenvalue above \(-\bar k^2\) is obtained on the finite computational domain.

To further support this interpretation, it is helpful to consider not only the zero condition \(\Delta b(\bar\sigma^2)=0\) itself, but also the behavior of the corresponding solution at the outer boundary. In particular, the quantity \(|\psi(x_{\max})|\) for different values of \(\bar\sigma^2\) provides additional insight into whether the asymptotically growing part of the solution actually vanishes or merely remains small on a finite computational domain.

In the following figure, \(|\psi(x_{\max})|\) is shown as a function of \(x_{\max}\) for several values of \(\bar\sigma^2\) in the vicinity of the zero found above the threshold. For each value of \(\bar\sigma^2\), the free parameter was set to \(b = b_u\), so that \(u(x_{\max})=0\) is satisfied. According to the argument above, only for the actual eigenvalue should \(\psi(x_{\max})=0\) additionally hold, whereas for neighboring values of \(\bar\sigma^2\), \(u(x_{\max})\) vanishes by construction, but \(\psi\) still contains a small, nonvanishing contribution of the asymptotically growing branch.

It turns out that \(|\psi(x_{\max})|\) remains very small as \(x_{\max}\) increases only for one distinguished value of \(\bar\sigma^2\), which indicates that the prefactor of the growing asymptotic branch vanishes there. By contrast, even small deviations from the actual eigenvalue lead to a strong growth of \(|\psi(x_{\max})|\).

The dip visible in the figure at \(x_{\max}=12\) for \(\bar\sigma^2=0.875625\) is not to be interpreted as an independent physical effect, but results from the fact that the shooting method was carried out exactly for \(x_{\max}=12\). The eigenvalue determined in this way is therefore, strictly speaking, adapted to this finite computational domain. For other values of \(x_{\max}\), the exact eigenvalue would shift slightly. This is irrelevant, however, for the qualitative statement of interest here.

What matters instead is that \(|\psi(x_{\max})|\) remains small only for the eigenvalue, whereas for neighboring values of \(\bar\sigma^2\), a small but nonvanishing contribution of the asymptotically exponentially growing branch is present. This contribution may initially remain inconspicuous on a finite interval, but becomes visible as \(x_{\max}\) increases and eventually dominates the behavior of the solution. The figure therefore argues against the possibility that a whole interval of admissible eigenvalues emerges above the threshold \(-\bar k^2\) as \(x_{\max}\to\infty\). Instead, the observed behavior is consistent with a discrete spectrum above \(-\bar k^2\).

Overall, the results of the shooting method support the interpretation of the spectrum obtained above from the asymptotic analysis. Below the threshold \(-\bar k^2\), there is a quasi-continuous spectral region, whereas the spectrum above this threshold is discrete. This should not be understood as a mathematical proof, but as a numerical finding that supports the interpretation of the spectrum. As already mentioned above, the numerical solution of the boundary value problem also shows that the spectrum has the same qualitative structure for other combinations of \((m,\bar k)\), and that in each case only one eigenvalue occurs above the threshold.

Dominant mode and characteristic fragment spacing

Now that the qualitative structure of the spectrum has been understood for a fixed pair \((m,\bar{k})\), the discussion can turn to the physical interpretation of general perturbations. Of particular interest is the question of which mode asymptotically dominates the linear evolution of a general initial perturbation.

As described above, a general perturbation can be represented as a superposition of Fourier modes. For a fixed pair \((m,\bar{k})\), the linear stability analysis therefore reduces to an eigenvalue problem in the dimensionless radial coordinate. In general, this problem possesses several eigenfunctions \(u_n(x)\) and \(\psi_n(x)\), respectively, with corresponding eigenvalues \(\bar{\sigma}_n^2(m,\bar{k})\). Because of the linearity of the problem, a general perturbation for fixed \((m,\bar{k})\) can be represented as a linear combination of these eigenfunctions.

Because of the exponential time evolution of the individual modes, the mode with the largest growth rate always dominates for sufficiently large times, provided that unstable modes exist. For a fixed pair \((m,\bar{k})\), this yields:

\begin{align} \bar{\sigma}_{\max}(m,\bar{k}) = \max_n \bar{\sigma}_n(m,\bar{k}) \end{align}

A general perturbation is, in general, a superposition of Fourier modes with different mode numbers and dimensionless wavenumbers. Accordingly, analogously to the previous argument, the asymptotic temporal behavior is determined by the overall fastest-growing mode. Its growth rate \(\bar{\sigma}_{\ast}\) is obtained as the maximum over all mode numbers and dimensionless wavenumbers:

\begin{align} \phantom{\Leftrightarrow}\;& \bar{\sigma}_{\ast} = \max_{m,\bar{k}} \bar{\sigma}_{\max}(m,\bar{k}) \\[6pt] \Leftrightarrow\;& \bar{\sigma}_{\ast} = \max_{m,\bar{k}} \max_n \bar{\sigma}_n(m,\bar{k}) \end{align}

The corresponding pair \((m_\ast,\bar{k}_\ast)\) is accordingly given by:

\begin{align} \phantom{\Leftrightarrow}\;& \left(m_{\ast},\bar{k}_{\ast}\right) = \operatorname*{arg\,max}_{m,\bar{k}} \bar{\sigma}_{\max}(m,\bar{k}) \\[6pt] \Leftrightarrow\;& \left(m_{\ast},\bar{k}_{\ast}\right) = \operatorname*{arg\,max}_{m,\bar{k}} \max_n \bar{\sigma}_n(m,\bar{k}) \end{align}

For the numerical solution of the present problem, a sufficiently large computational domain \([0,x_{\max}]\) with \(x_{\max}=25\) and a sufficiently large number of grid points \(N=500\) are used. It turns out that the results discussed in the following are not changed significantly either by a further enlargement of the computational domain or by increasing the number of grid points. With respect to these two parameters, the numerical result can therefore be regarded as converged.
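The convergence statement can be made concrete by a standard refinement test. The sketch below demonstrates the pattern on a simple model operator with an illustrative placeholder potential, not the actual perturbation operator; only the grid parameters mirror those quoted above.

```python
import numpy as np

# Model operator -d²/dx² + V(x) with Dirichlet boundaries, discretized by
# central finite differences. V is an illustrative placeholder, not the
# actual perturbation operator.
def spectrum(x_max, N, kbar=1.0, x_min=0.05):
    x = np.linspace(x_min, x_max, N)
    h = x[1] - x[0]
    V = -kbar**2 - 2.0 * np.exp(-x)
    A = (np.diag(2.0 / h**2 + V)
         + np.diag(-np.ones(N - 1) / h**2, 1)
         + np.diag(-np.ones(N - 1) / h**2, -1))
    return np.sort(np.linalg.eigvalsh(A))

# Refinement test: the low-lying eigenvalues should change only marginally
# when the number of grid points is doubled.
e1 = spectrum(25.0, 500)[:5]
e2 = spectrum(25.0, 1000)[:5]
print(np.max(np.abs(e1 - e2)))
```

The same comparison would be repeated with an enlarged \(x_{\max}\) to confirm convergence with respect to the domain size.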

Furthermore, the problem is considered only for nonnegative mode numbers and dimensionless wavenumbers, that is, for \(m \geq 0\) and \(\bar{k} \geq 0\). This is justified by the fact that \(m\) and \(\bar{k}\) appear in the underlying equations only in squared form, namely as \(m^2\) and \(\bar{k}^2\), respectively. The cases \(m\) and \(-m\), as well as \(\bar{k}\) and \(-\bar{k}\), are therefore mathematically equivalent and yield identical eigenvalues. Without loss of generality, the analysis can thus be restricted to \(m \geq 0\) and \(\bar{k} \geq 0\).

For the further analysis, it is first of interest to determine for which pairs \((m,\bar{k})\) positive and negative squared growth rates occur. Of particular relevance are the regions where \(\bar{\sigma}_{\max}^2(m,\bar{k})\) is positive, since these correspond to unstable modes. For this purpose, the problem was first investigated for mode numbers in the range \(m\in\left\{0,1,2\right\}\) and dimensionless wavenumbers in the interval \(\bar{k}\in[0,2]\). The following figure shows the squared maximal growth rates \(\bar{\sigma}_{\max}^2(m,\bar{k})\) for fixed pairs \((m,\bar{k})\).

\colorbox{yellow}{Plot of the growth rates for fixed pairs \((m,\bar{k})\)}

The figure shows that positive squared growth rates occur exclusively for the axisymmetric mode \(m=0\). For \(m=1\) and \(m=2\), \(\bar{\sigma}_{\max}^2(m,\bar{k})\) is negative throughout the entire wavenumber range considered. Moreover, the consideration of larger dimensionless wavenumbers and mode numbers also shows that no further unstable modes occur. These modes are therefore linearly stable and do not contribute to the asymptotically dominant instability.

It is also possible to understand physically why perturbations with \(m \neq 0\) are stable. Although the perturbation leads to a modulation of the density along the \(z\)-axis, the azimuthal structure of the perturbation implies that for \(m \neq 0\) there is no modulation of the mass integrated over a slice of thickness \(\Delta z\). For the mass \(M(z,\Delta z)\) of such a slice, one finds:

\begin{align} \phantom{\Leftrightarrow}\;& M(z,\Delta z) = \int_{z}^{z+\Delta z} \int_{0}^{2\pi} \int_{0}^{\infty} \rho(r,\varphi,z^\prime)\, r\,\mathrm{d}r\,\mathrm{d}\varphi\,\mathrm{d}z^\prime \\[6pt] \Leftrightarrow\;& M(z,\Delta z) = \int_{z}^{z+\Delta z} \int_{0}^{2\pi} \int_{0}^{\infty} \rho_0(r)\, r\,\mathrm{d}r\,\mathrm{d}\varphi\,\mathrm{d}z^\prime + \int_{z}^{z+\Delta z} \int_{0}^{2\pi} \int_{0}^{\infty} \rho_1(r,\varphi,z^\prime)\, r\,\mathrm{d}r\,\mathrm{d}\varphi\,\mathrm{d}z^\prime \end{align}

The first integral corresponds to the equilibrium mass \(M_0(\Delta z)\) of a slice of the filament with thickness \(\Delta z\), whereas the second integral describes the perturbation contribution \(M_1(z,\Delta z)\). For the chosen perturbation ansatz, the azimuthal part satisfies:

\begin{align} \int_{0}^{2\pi} \exp(im\varphi)\,\mathrm{d}\varphi = 0 \qquad \text{for} \qquad m \neq 0 \end{align}

Hence, the entire perturbation contribution vanishes:

\begin{align} M_1(z,\Delta z) = 0 \end{align}

Thus, it follows:

\begin{align} M(z,\Delta z) = M_0(\Delta z) \end{align}

Accordingly, there is no modulation along the \(z\)-axis of the total mass contained in a slice. In this way, no gravitational instability can arise that would lead to a preferred contraction of individual regions along the cylinder axis. For the axisymmetric mode \(m=0\), by contrast, no azimuthal structure of the perturbation is present, so that the corresponding contribution does not generally vanish and can therefore favor gravitational instabilities.
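The azimuthal cancellation used above can be verified directly: on a uniform grid over one full period, the discrete sum of \(\exp(im\varphi)\) is a finite geometric sum that vanishes for \(m \neq 0\) (up to rounding), while \(m = 0\) gives \(2\pi\).

```python
import numpy as np

# Uniform grid over one full period [0, 2π); the left Riemann sum of
# exp(i m φ) is a geometric sum that vanishes exactly for m ≠ 0.
N = 4096
phi = 2.0 * np.pi * np.arange(N) / N
h = 2.0 * np.pi / N

for m in (0, 1, 2, 3):
    integral = np.sum(np.exp(1j * m * phi)) * h
    print(m, integral)   # m = 0 gives 2π ≈ 6.2832; m ≠ 0 gives ~0
```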

At the same time, it is apparent that \(\bar{\sigma}_{\max}^2(m=0,\bar{k})\) does not depend monotonically on \(\bar{k}\), but instead possesses a maximum at a finite dimensionless wavenumber. This can be seen in the following figure, where the maximal squared growth rate for \(m=0\) is shown as a function of the dimensionless wavenumber.

\colorbox{yellow}{Plot of the growth rate for \(m=0\) as a function of \(\bar{k}\)}

The figure shows that the growth rate of the fastest-growing unstable mode is given by:

\begin{align} \bar{\sigma}_{\ast} = 0.9666 \end{align}

The corresponding dimensionless wavenumber of this mode is given by:

\begin{align} \bar{k}_{\ast} = 0.8081 \end{align}

Since the fastest-growing Fourier mode dominates the linear evolution for sufficiently large times, periodic overdensities form along the \(z\)-axis. The corresponding wavelength is given by:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\ast} = \frac{2\pi}{k_{\ast}} \\[6pt] \Leftrightarrow\;& \lambda_{\ast} = \frac{2\pi r_0}{\bar{k}_{\ast}} \\[6pt] \Leftrightarrow\;& \lambda_{\ast} = \frac{2\pi}{0.8081} r_0 \end{align}

As soon as the amplitudes of the overdensities become so large that the system enters the nonlinear regime, self-gravity leads to enhanced contraction in the overdense regions. In this regime, the gas cylinder fragments. The fragments that form later arise preferentially at the locations of the overdensities that were already imprinted in the linear regime by the dominant unstable mode. The characteristic fragment spacing \(\lambda_{\text{frag}}\) therefore corresponds, to a good approximation, to the wavelength of the unstable mode with the largest growth rate:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\mathrm{frag}} = \lambda_{\ast} \\[6pt] \Leftrightarrow\;& \lambda_{\mathrm{frag}} = \frac{2\pi}{0.8081} r_0 \end{align}

Thus, the fragment spacing scales linearly with the characteristic radius. In this model, the characteristic radius therefore directly sets the characteristic length scale on which overdensities develop along the \(z\)-axis and from which fragments emerge in the subsequent nonlinear evolution.
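In numbers, the ratio of fragment spacing to characteristic radius follows directly from the dominant wavenumber determined above; the conversion to physical units uses the value \(r_0 = 0.033\,\text{pc}\) adopted later for the numerical setup.

```python
import numpy as np

# Fragment spacing in units of the characteristic radius, from the
# dominant dimensionless wavenumber determined above.
kbar_star = 0.8081
lam_over_r0 = 2.0 * np.pi / kbar_star
print(lam_over_r0)             # ≈ 7.78

# With r0 = 0.033 pc (the value adopted later in the numerical setup),
# this corresponds to a fragment spacing of roughly 0.26 pc.
print(lam_over_r0 * 0.033)
```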

Work Plan and Approach

The following section presents a rough work plan for this master's thesis, which aims to numerically reproduce the solutions developed by Fiege and Pudritz. The initial goal is to simulate the purely hydrostatic reference case, the infinitely long, isothermal, self-gravitating cylinder without magnetic fields, in PLUTO/belt. Due to the cylindrical symmetry, the first tests can be carried out in a one-dimensional setup in cylindrical coordinates.

Afterwards, the numerical setup will be gradually expanded to two and then three dimensions. Once the three-dimensional setup in cylindrical coordinates is stable and works reliably, the plan is to switch to a three-dimensional setup in Cartesian coordinates. Cartesian coordinates are needed for the considered problem because the instabilities to be investigated later are expected to break the cylindrical symmetry.

Once the three-dimensional Cartesian setup works reliably, the next step will be to extend it to magnetohydrodynamic calculations. The aim here is to implement the equilibrium solutions of Fiege and Pudritz (2000a) in PLUTO/belt. Finally, the equilibrium configuration should be perturbed, analogously to Fiege and Pudritz (2000b), in order to investigate the temporal evolution of the perturbations and compare it with the predictions of their linear stability analysis.

1D Hydrodynamic Reference Test: Setup and Convergence

As explained above, the study starts with purely hydrodynamic, one-dimensional, isothermal simulations including self-gravity. The simulation setup is initialised according to an infinitely long, isothermal, self-gravitating cylinder with a core. Specifically, the density profile is set according to the previously derived density distribution, the velocity field is initialised to zero at the start, and the core mass is selected according to the relation given above. The temperature \(T\) and the characteristic radius \(r_0\) are selected based on typical values for observed filaments (cf. Seifried and Walch, 2015):

\begin{align} T = 10 \, \text{K} \quad \text{and} \quad r_0 = 0.033 \, \text{pc} \end{align}

For the initial simulations, a grid extending from \(R_{\min} = 10^{-4}\,\text{pc}\) to \(R_{\max} = 1\,\text{pc}\) is used. This ensures that the numerical domain completely encompasses the filament and that the outer region is sufficiently large to minimise boundary artefacts.

The grid is designed in such a way that the cells become exponentially larger towards the outside, according to:

\begin{align} \Delta r_i = r_{i-\frac{1}{2}} \left(10^{\Delta \xi} - 1\right) \, , \quad \text{with} \quad \Delta \xi = \frac{1}{N_r} \log_{10}\left(\frac{R_{\max}}{R_{\min}}\right) \end{align}

Here, \(\Delta r_i\) is the length of the \(i\)-th cell, \( r_{i-\frac{1}{2}}\) is the radius of the inner or left cell wall of the \(i\)-th cell, and \(N_r\) is the number of cells in the domain. This exponential growth of the cell size ensures high resolution in the area of interest, i.e. in and around the filament, especially near the core, while a coarser resolution suffices further out, where in the case under consideration there are typically only very low densities or the filament transitions into an almost-vacuum state. In this way, a good balance is achieved between accuracy in the relevant region and the highest possible numerical efficiency.
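The stretched grid defined above can be constructed in a few lines; the parameter values are those quoted in the text. Since the cell edges are spaced uniformly in \(\log_{10} r\), each cell is a fixed factor \(10^{\Delta\xi}\) wider than its inner neighbour.

```python
import numpy as np

# Logarithmically stretched radial grid: edges uniform in log10(r).
R_min, R_max, N_r = 1e-4, 1.0, 1000          # radii in pc
dxi = np.log10(R_max / R_min) / N_r
edges = R_min * 10.0 ** (dxi * np.arange(N_r + 1))
dr = np.diff(edges)                          # cell widths Δr_i

# Δr_i = r_{i-1/2} (10**dxi − 1), as in the formula above:
assert np.allclose(dr, edges[:-1] * (10.0 ** dxi - 1.0))
print(edges[0], edges[-1], dr[0], dr[-1])
```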

For the hydrodynamic variables, reflective boundary conditions are used at the inner edge of the domain. This is because the inner edge represents an impenetrable core and the reflective boundary condition prevents mass flow across the boundary surface. For the outer boundary, outflow–no-inflow boundary conditions are selected in order to model the domain as an open system. This allows matter, perturbations and waves to leave the domain, while preventing inflow from outside.

For the gravitational potential of the gas in the domain, a zero-gradient boundary condition is used at the inner edge. The reason for this is that the gravitational attraction of the core is considered separately in PLUTO/belt. Since gravitational potentials can be superimposed due to the linearity of Poisson's equation, the potential generated by the gas can then be linearly added to the potential generated by the core. At the outer boundary, the potential is specified by the analytical potential solution for an infinitely long, isothermal, self-gravitating cylinder without a core. This boundary condition is permissible in this setup because the core serves only as a numerical aid to emulate the case without a core.

The first simulation with this setup was performed with \(N_r = 1000\) cells in the radial direction. As a result, it can be seen that the maximum density in the system, which corresponds to the density in the innermost cell at the core due to the selected density profile, initially undergoes a damped oscillation. The maximum density then stabilises at a value of approximately \(\rho_{\max} = 3.36 \cdot 10^{-19}\,\frac{\text{g}}{\text{cm}^3}\). This corresponds to an upward deviation of about \(3\,\%\) from the initial and thus analytically expected value. This behaviour is illustrated in the following figure:

Time evolution of the maximum density \(\rho_{\max}\) for \(N_r = 1000\).

As a consistency check, both the radial density distribution \(\rho(r)\) and the gravitational potential \(\phi_{\text{ext}}(r)\) of the converged simulation are now considered and compared with the analytically expected solution. Both quantities are shown in the following figure, with the density distribution on the left and the corresponding gravitational potential on the right:

Simulation for \(N_r = 1000\): Radial density profile \(\rho(r)\) of the converged simulation compared to the analytical reference solution (left) and gravitational potential \(\phi_{\mathrm{ext}}(r)\) compared to the analytically expected potential solution (right).

Due to conservation of mass, the increase in maximum density causes the density distribution to contract slightly. This is evident in the fact that the density peak increases while the characteristic radius \(r_0\) decreases. As a result, the profile becomes narrower overall and more concentrated towards the centre. The gravitational potential agrees very well with the analytically expected solution over the entire radial range. Minor differences can be explained by the fact that the gravitational potential is only uniquely determined up to an additive constant. A possible offset therefore corresponds merely to a choice of zero level and is physically uncritical. Overall, the comparison confirms that the numerically calculated potential resulting from the gas is consistent with the analytical reference solution.

The contraction of the density distribution turns out to be significantly weaker at higher resolutions. This is illustrated in the following figure, which shows the density in the innermost cell, i.e. the first cell at the core, as a function of the number of cells \(N_r\) used:

Density in the innermost cell as a function of radial resolution \(N_r\). As the number of cells increases, \(\rho\) converges towards the theoretically expected value.

Here, it can be seen that the density in the innermost cell converges towards the theoretically expected value as the number of cells increases. For \(N_r = 8000\), the maximum density deviates upwards by only about \(0.05\,\%\). In addition, the temporal evolution of the maximum density shows a largely monotonic increase with a single overshoot, followed by relaxation to the steady-state value without further significant oscillations. This behaviour is illustrated in the following figure. The left plot shows the density in the innermost cell as a function of time, and the middle plot shows the radial density distribution of the converged simulation as well as the theoretically expected density distribution used as the initial condition. For the sake of completeness, the right plot also shows the gravitational potential of the converged simulation together with the analytically expected potential solution. However, there is no qualitative or quantitative change in the potential compared to the case with \(N_r = 1000\) cells.

Simulation for \(N_r = 8000\): Time evolution of the maximum density \(\rho_{\max}(t)\) in the innermost cell (left), radial density profile \(\rho(r)\) of the converged simulation compared to the analytical reference solution (centre) and gravitational potential \(\phi_{\mathrm{ext}}(r)\) compared to the analytically expected potential solution (right).

Free-fall time validation

Another consistency check is to verify whether the setup can reproduce the free-fall time \(t_{\mathrm{ff}}\) in the two limiting cases \(r_i \rightarrow R_{\min}\) and \(r_i \rightarrow R_{\max}\). The analytical formula derived above yields the following expected values for these two limiting cases:

\begin{align} t_{\text{ff}}\left(r_i \rightarrow R_{\min}\right) = 1.07 \cdot 10^5\,\text{yr} \quad \text{and} \quad t_{\text{ff}}\left(r_i \rightarrow R_{\max}\right) = 3.26 \cdot 10^6\,\text{yr} \end{align}

For this test, the pressure must be switched off so that the dynamics of the system are governed solely by self-gravity. This is realised by considering the limiting case \(T \to 0\), for which the isothermal equation of state yields \(P \to 0\). In the following simulations, a very low temperature of \(T = 10^{-3}\,\text{K}\) is used for this purpose, so that pressure forces are negligible compared to gravity. The total mass in the domain remains unchanged from the previous study.

In the following simulations, a grid with \(N_r = 8000\) cells is used to keep numerical resolution effects as small as possible. The only change compared to the previous study concerns the boundary condition at the inner boundary for the hydrodynamic variables. Outflow-no-inflow boundary conditions are used there instead of reflective ones. This is necessary because in this test, the inner boundary is supposed to act as a sink into which gas can flow, and not as an impenetrable wall as before. Inflow into the domain is prevented because such an inflow does not exist physically and otherwise mass could be artificially introduced into the domain due to numerical effects. All other boundary conditions for the hydrodynamic variables and for the gravitational potential remain unchanged.

In order to determine the free-fall time in the two limiting cases from the simulation, suitable measurement variables are required that can be used to clearly identify the corresponding points in time. For this purpose, it is useful to consider the temporal evolution of the accretion rate across the inner edge \(\dot{M}_{R_{\min}}(t) = \frac{\mathrm{d} M_{R_{\min}}}{\mathrm{d} t}(t)\). This quantity describes the mass flow across the inner edge at time \(t\), i.e. the mass flowing out of the domain across the inner edge per unit time.

It turns out that the accretion rate is particularly sensitive to the beginning of gravitational collapse. As long as the matter falling inwards from the domain has not yet reached the inner edge, the accretion rate remains negligible, except for numerical noise. As soon as the first mass elements reach the inner edge, it should increase sharply and show a pronounced first maximum. Towards the end of the outflow, the accretion rate should first continue to decline and then suddenly drop by several orders of magnitude, leaving only numerically driven contributions.

Since very small, non-physical contributions can occur early on in the simulation, e.g. due to numerical diffusion or rounding errors, a detection threshold \(\dot{M}_{\text{thr}}\) is introduced. In the following, \(\dot{M}_{\text{thr}} = 10^{-11}\,M_\odot\,\text{yr}^{-1}\) is used to robustly separate the physically relevant part of the curve from numerical noise.

On this basis, the inner limit case \(r_i \rightarrow R_{\min}\) is defined by the time of the first significant peak in the accretion rate, i.e. by the first pronounced maximum after it has exceeded the detection threshold. This point in time marks the beginning of efficient accretion and thus corresponds approximately to the earliest possible arrival of matter at the inner edge. The outer limit case \(r_i \rightarrow R_{\max}\), on the other hand, is determined by the last point in time before the abrupt drop in the accretion rate. For this purpose, the characteristic kink in the accretion rate is identified, at which point the rate drops sharply after a phase of slower decay. The detection threshold continues to serve as an auxiliary criterion in order to consider only the physically relevant part of the curve.
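The two detection criteria can be sketched as follows. The accretion curve used here is synthetic and purely illustrative, shaped only to mimic the described behaviour (noise floor, sharp rise to a first maximum, slow decay, abrupt drop); the function name and all numbers in the test signal are assumptions, not simulation output.

```python
import numpy as np

def freefall_times(t, mdot, mdot_thr=1e-11):
    """Inner limit: first pronounced maximum after the threshold is
    exceeded. Outer limit: last sample before the abrupt drop below
    the threshold."""
    idx = np.where(mdot > mdot_thr)[0]
    if idx.size == 0:
        return None, None
    i_peak = idx[0]
    # climb to the first local maximum after the threshold crossing
    while i_peak + 1 < len(mdot) and mdot[i_peak + 1] >= mdot[i_peak]:
        i_peak += 1
    return t[i_peak], t[idx[-1]]

# Synthetic test signal (illustrative numbers): noise floor, sharp rise
# at ~1e5 yr, slow decay, abrupt drop back to the noise floor at 3.26e6 yr.
t = np.linspace(0.0, 4.0e6, 4000)                     # time in yr
mdot = np.full_like(t, 1e-14)                         # numerical noise floor
rise = (t >= 1.0e5) & (t < 1.1e5)
mdot[rise] = 1e-6 * (t[rise] - 1.0e5) / 1.0e4         # sharp rise
decay = (t >= 1.1e5) & (t <= 3.26e6)
mdot[decay] = 1e-6 * np.exp(-(t[decay] - 1.1e5) / 2.0e6)  # slow decay
t_in, t_out = freefall_times(t, mdot)
print(t_in, t_out)
```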

The following figure shows the accretion rate on a logarithmic scale, together with the detection threshold used and the two free-fall times determined in this way for the two limiting cases.

Temporal evolution of the accretion rate across the inner edge \(\dot{M}_{R_{\min}}(t)\). The horizontal dashed line marks the detection threshold \(\dot{M}_{\mathrm{thr}} = 10^{-11}\,M_\odot\,\mathrm{yr}^{-1}\). The vertical lines mark the determined free-fall times of the boundary cases. \(t_{\mathrm{ff}}(r_i \rightarrow R_{\min})\) as the time of the first significant peak and \(t_{\mathrm{ff}}(r_i \rightarrow R_{\max})\) as the last time immediately before the abrupt drop in the accretion rate.

The figure shows that the free-fall times determined from the simulation for the two limits agree very well with the analytically expected times. For the limit \(r_i \rightarrow R_{\min}\), the simulation yields \(t_{\text{ff}}\left(r_i \rightarrow R_{\min}\right) = 1.1 \cdot 10^5\,\text{yr}\), which corresponds to a deviation of approximately \(2.5\,\%\) from the analytical solution. For the limit \(r_i \rightarrow R_{\max}\), the simulation yields \(t_{\text{ff}}\!\left(r_i \rightarrow R_{\max}\right) = 3.27 \cdot 10^6\,\text{yr}\), which corresponds to a deviation of about \(0.4\,\%\) from the analytical value.

In summary, the results of this section, in combination with those of the previous study, show that the hydrodynamic setup, including self-gravity, is numerically stable with the boundary conditions used. With sufficiently high resolution, both the stationary profiles and the analytically expected free-fall times in the limiting cases considered are reliably reproduced. This provides a reliable reference that serves as a starting point for the subsequent extension to two dimensions.

2D Hydrodynamic Reference Test: Stability

Next, the one-dimensional setup is expanded to a two-dimensional one. Since the previous study showed that the one-dimensional setup reliably reproduces the analytical solution at sufficiently high resolution, the coarsest resolution at which the one-dimensional simulation still runs stably and does not produce obviously unphysical results is deliberately chosen here. This increases the numerical efficiency of the following tests. The aim of this study is not to reproduce the analytical solution again with maximum accuracy, but to verify the extension to two dimensions. In particular, the aim is to check whether the setup is fundamentally stable in 2D and which numerical or implementation-related problems, if any, occur during the transition from one to two dimensions.

It appears that the coarsest grid at which the one-dimensional simulation still runs stably and does not produce any obviously unphysical results has \(N_r = 250\) cells in the radial direction. The figure below shows the density in the innermost cell as a function of time on the left. The middle plot shows the radial density distribution of the converged simulation and the theoretically expected density distribution. The right-hand plot additionally shows the gravitational potential of the converged simulation together with the analytically expected potential solution.

2D reference run (\(N_r=250\)): Time evolution of the maximum density \(\rho_{\max}(t)\) (left), radial density profile \(\rho(r)\) with fit and analytical solution (centre) and gravitational potential \(\phi_{\mathrm{ext}}(r)\) with fit and analytical solution (right).

It can be seen that the simulation with \(N_r = 250\) cells in the radial direction converges faster in simulated physical time than the simulation with \(N_r = 1000\) cells. However, this is due to numerical effects. In this case, too, an increase in central density and, due to mass conservation, a decrease in the characteristic radius can be observed. As expected, this contraction is more pronounced than in the case of \(N_r = 1000\). In addition, the coarse resolution shows a slight deviation in the gravitational potential. For small radii, the numerical potential is slightly below the analytical solution. This is consistent with the contracted density distribution, as it means that more mass is located at small radii and the potential in the centre is correspondingly deeper.

For the two-dimensional simulations, a domain is considered that is identical to the previous study in the radial direction and extends in the azimuthal direction from \(\varphi_{\min}=0\) to \(\varphi_{\max}=2\pi\). The domain is discretised by an \(N_r \times N_{\varphi}\) grid. In the radial direction, as in the previous study, a grid with \(N_r = 250\) cells is used, with the cells increasing exponentially in size towards the outside. The number of cells in the azimuthal direction \(N_{\varphi}\) is chosen so that the cells are approximately square. This achieves a comparable resolution in the radial and azimuthal directions, which reduces anisotropic numerical errors such as artificially stronger dissipation in one direction. For locally square cells, the following must hold approximately:

\begin{align} r_i \Delta \varphi_i = \Delta r_i \end{align}

Here, \(r_i\) is the radius of the \(i\)-th cell centre, \(\Delta r_i\) is the radial width of the \(i\)-th cell, and \(\Delta \varphi_i\) is the azimuthal width of the \(i\)-th cell. A uniform cell size is used in the azimuthal direction, so the following applies:

\begin{align} \phantom{\Rightarrow}\;& \Delta \varphi_i = \Delta \varphi = \frac{\varphi_{\max} - \varphi_{\min}}{N_{\varphi}} \\[6pt] \Leftrightarrow\;& \Delta \varphi = \frac{2 \pi}{N_{\varphi}} \end{align}

Inserting yields:

\begin{align} \frac{2 \pi r_i}{N_{\varphi}} = r_{i-\frac{1}{2}} \left(10^{\Delta \xi} - 1\right) \end{align}

Since \(r_i\) describes the radius of the cell centre and \(r_{i-\frac12}\) describes the radius of the inner cell edge, the following applies:

\begin{align} r_i = r_{i-\frac12} + \frac{\Delta r_i}{2}\, . \end{align}

If the grid has a sufficiently fine resolution, i.e. \(\Delta r_i \ll r_i\), then the difference between \(r_i\) and \(r_{i-\frac12}\) is only a small fraction of \(r_i\). As a first approximation, it can therefore be assumed that:

\begin{align} r_{i-\frac12} \approx r_i \, . \end{align}

It thus follows:

\begin{align} \phantom{\Rightarrow}\;& \frac{2 \pi}{N_{\varphi}} = 10^{\Delta \xi} - 1 \\[6pt] \Leftrightarrow\;& N_{\varphi} = \frac{2 \pi}{10^{\Delta \xi} - 1} \end{align}

With \(N_r = 250\), \(R_{\min} = 10^{-4}\,\text{pc}\) and \(R_{\max} = 1\,\text{pc}\), the following applies:

\begin{align} N_{\varphi} = \frac{2 \pi}{10^{\frac{2}{125}} - 1} \approx 167 \end{align}
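For reference, the evaluation of \(N_{\varphi}\) with the grid parameters from the text:

```python
import numpy as np

# Azimuthal cell count for approximately square cells, following the
# derivation above, with the grid parameters quoted in the text.
R_min, R_max, N_r = 1e-4, 1.0, 250
dxi = np.log10(R_max / R_min) / N_r          # = 2/125
N_phi = 2.0 * np.pi / (10.0 ** dxi - 1.0)
print(round(N_phi))   # → 167
```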

Therefore, a \(250 \times 167\) grid will be used in the following.

For the hydrodynamic quantities and the gravitational potential, the same boundary conditions are used in the radial direction as in the previous study. In the azimuthal direction, periodic boundary conditions are chosen for both the hydrodynamic quantities and the gravitational potential. This is justified by the angular geometry, since \(\varphi=0\) and \(\varphi=2\pi\) describe the same physical location.

It can be seen that the simulation runs stably, without any new numerical or implementation-related problems occurring. The following figure shows the time evolution of the maximum density of the two-dimensional simulation in the domain. For comparison, the time evolution of the one-dimensional simulation with \(N_r = 250\) cells is also shown.

Time evolution of the maximum density \(\rho_{\max}(t)\) for the 2D reference run (solid line) compared to the corresponding 1D simulation with \(N_r=250\) (dashed line).

The figure shows that the temporal evolution of the maximum density in the two-dimensional simulation is practically identical to that in the one-dimensional simulation. In particular, it initially shows a damped oscillation and then approaches a steady-state value. From about \(t \sim 3 \cdot 10^8\,\text{yr}\) onwards, the oscillation is completely damped out.
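The statement that the oscillation has damped out can be quantified with a simple convergence diagnostic. The sketch below (function name, tolerance, and the synthetic usage data are assumptions, not the thesis code) finds the first time after which \(\rho_{\max}(t)\) stays within a small relative band around its final value:

```python
import numpy as np

def damped_out_time(t, rho_max, rtol=1e-3):
    """Return the earliest time after which rho_max(t) stays within a
    relative band rtol around its final value, or None if it never does."""
    final = rho_max[-1]
    within = np.abs(rho_max - final) <= rtol * np.abs(final)
    # earliest index from which all later samples remain inside the band
    for i in range(len(within)):
        if within[i:].all():
            return t[i]
    return None

# Usage with a synthetic damped oscillation (illustrative data only)
t = np.linspace(0.0, 20.0, 2001)
rho = 1.0 + 0.5 * np.exp(-t / 2) * np.cos(5 * t)
t_conv = damped_out_time(t, rho)
```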

The following figure shows both the density and the gravitational potential of the converged simulation as a function of the Cartesian coordinates \(x = r \sin(\varphi)\) and \(y = r \cos(\varphi)\). The density plot zooms further into the central region of the domain than the potential plot, since the density is essentially concentrated in the filament and no longer exhibits any relevant structure at larger radii.

Contour plots of the converged 2D simulation (\(N_r \times N_\varphi = 250 \times 167\)): density distribution \(\rho(x,y)\) (left) and gravitational potential \(\phi_{\mathrm{ext}}(x,y)\) (right).

The figure confirms that the two-dimensional simulation maintains the expected cylindrical symmetry. Both the density and the gravitational potential are purely radially dependent and show no systematic \(\varphi\)-dependence. In particular, the density peak remains centred and the isolines run circularly, indicating that the periodic boundary conditions are consistently implemented in the azimuthal direction and no artefacts arise at \(\varphi = 0\) or \(\varphi = 2\pi\).

This observation can also be confirmed by the following figure. On the left-hand side, the radial profiles are shown for five different azimuth angles. The density is shown at the top and the gravitational potential at the bottom. On the right-hand side, the corresponding profiles are shown for an azimuth angle close to \(\varphi \approx 2\pi\), together with a fit and the analytically expected solution, with the density distribution at the top and the gravitational potential at the bottom.

Radial profiles of the converged 2D simulation for different azimuthal angles: density \(\rho(r,\varphi)\) (top left) and gravitational potential \(\phi_{\mathrm{ext}}(r,\varphi)\) (bottom left). On the right, the profiles are shown exemplarily for an angle close to \(\varphi \approx 2\pi\), together with a fit and the analytical reference solution (top: \(\rho\), bottom: \(\phi_{\mathrm{ext}}\)).

The figure confirms that the converged two-dimensional simulation remains cylindrically symmetric. On the left, it can be seen that the radial profiles for the different azimuth angles completely overlap, both for density and for gravitational potential. This means that there are no systematic \(\varphi\)-dependencies, and in particular, even close to \(\varphi \approx 2\pi\), there are no signs of artefacts caused by the periodic boundary conditions.
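The visual check that the radial profiles overlap for all azimuthal angles can be backed by a simple quantitative diagnostic. The sketch below (the function name is hypothetical) measures, for each radius, the relative spread of a 2D field across the azimuthal direction; a value near zero indicates cylindrical symmetry:

```python
import numpy as np

def max_azimuthal_asymmetry(field):
    """For a 2D field field[r, phi], return the maximum over all radii of
    the relative azimuthal spread (max over phi minus min over phi,
    divided by the azimuthal mean)."""
    mean = field.mean(axis=1)
    spread = field.max(axis=1) - field.min(axis=1)
    return np.max(spread / np.abs(mean))
```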

As in the one-dimensional case with coarse resolution, the density profile continues to show a contraction. The potential agrees very well with the analytical expectation at large radii. The small deviations in the inner region are consistent with the changed mass distribution. It should also be noted that the potential is determined only up to an additive constant, so a possible offset is physically irrelevant. Overall, both the temporal evolution of the maximum density and the fit parameters agree with the corresponding one-dimensional reference run at the same radial resolution, so the extension to two dimensions does not introduce any additional systematic effects.

Overall, this two-dimensional reference test shows that the hydrodynamic setup, including self-gravity and periodic boundary conditions in the \(\varphi\) direction, is stable and consistently reproduces the corresponding one-dimensional case, thus providing the basis for subsequent extension to higher dimensions.