MA Emilio Schmidt

From Arbeitsgruppe Kuiper
As a theoretical reference, this work builds on the analytically derived magnetohydrodynamic models of [https://arxiv.org/pdf/astro-ph/9901096 Fiege and Pudritz (2000a)]. They solved the stationary magnetohydrodynamic equations for cylindrical, self-gravitating filaments with helical magnetic fields and obtained equilibrium solutions from them. Building on these equilibrium solutions, they performed in [https://arxiv.org/pdf/astro-ph/9902385 Fiege and Pudritz (2000b)] a linear stability analysis to investigate instabilities that occur, associated growth rates and characteristic wavelengths.


The aim of this work is to numerically reproduce the equilibrium solutions described by [https://arxiv.org/pdf/astro-ph/9901096 Fiege and Pudritz (2000a)]. These equilibria are then perturbed in order to investigate the temporal development of the perturbations and compare the results with those of the linear stability analysis by [https://arxiv.org/pdf/astro-ph/9902385 Fiege and Pudritz (2000b)].


= Theoretical Framework =
== Linear stability analysis ==


The aim of this section is to investigate how the hydrostatic equilibrium solution of an infinitely long, isothermal, self-gravitating gas cylinder behaves under perturbations along the \(z\)-axis. In the following, all equilibrium variables are labelled with the index \(0\):


\begin{align}
\rho_0 = \rho_0(r) \, , \quad \boldsymbol{v}_0 = \boldsymbol{0} \, , \quad \phi_{\text{ext},0} = \phi_{\text{ext},0}(r)
\end{align}
Here, \(\boldsymbol{v}_0\) denotes the velocity field of the equilibrium solution, which is identically zero in hydrostatic equilibrium due to the stationary configuration.


A perturbation of the equilibrium solution along the \(z\)-axis is now considered. For this purpose, a dimensionless perturbation parameter \(\varepsilon\) is introduced, in powers of which each hydrodynamic quantity \(q\) with \(q \in \{\rho, \boldsymbol{v}, \phi_{\text{ext}}\}\) can be expanded:


\begin{align}
q = q_0 + \varepsilon q_1
\end{align}


In order to investigate how these perturbations develop over time, the perturbed variables are inserted into the hydrodynamic equations underlying the system. As a result, products of perturbation terms give rise to higher-order contributions in \(\varepsilon\). Since \(\varepsilon \ll 1\) is assumed in the following, these nonlinear contributions can be neglected. The equations are thus linearised around the equilibrium solution.
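This linearisation step can be reproduced symbolically for the one-dimensional continuity equation. The following sympy sketch is purely illustrative (the symbol names are ours, and it is not part of the thesis code): substituting the expanded quantities and collecting powers of \(\varepsilon\) shows that the only nonlinear contribution is the second-order product of perturbations.

```python
import sympy as sp

t, z, eps = sp.symbols('t z varepsilon')
rho0 = sp.Function('rho_0')(z)      # equilibrium density, static in time
rho1 = sp.Function('rho_1')(z, t)   # density perturbation
v1 = sp.Function('v_1')(z, t)       # velocity perturbation (v_0 = 0)

# Perturbed 1D continuity equation: d_t rho + d_z(rho v) = 0
continuity = sp.diff(rho0 + eps * rho1, t) + sp.diff((rho0 + eps * rho1) * eps * v1, z)

# Collecting orders of eps: the O(eps) part is the linearised equation;
# the only other contribution is the O(eps^2) product of perturbations.
first_order = continuity.expand().coeff(eps, 1)
target = sp.diff(rho1, t) + sp.diff(rho0 * v1, z)
print(sp.simplify(first_order - target))   # -> 0
```

The check confirms that the first-order part of the expanded equation coincides with the linearised continuity equation.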


Substituting the perturbed solution into the continuity equation yields:


\begin{align}
\partial_t (\rho_0 + \varepsilon \rho_1) + \boldsymbol{\nabla} \cdot \big( (\rho_0 + \varepsilon \rho_1) \, \varepsilon \boldsymbol{v}_1 \big) = 0
\end{align}


At this point, one can use the fact that the perturbation parameter \(\varepsilon\) is a constant prefactor and that the equilibrium solution is constant in time. In the flow term, on the other hand, products of perturbation variables appear, which contribute second-order terms in \(\varepsilon\). These terms can be neglected in the following. This leads to:


\begin{align}
\partial_t \rho_1 + \boldsymbol{\nabla} \cdot ( \rho_0 \boldsymbol{v}_1 ) = 0
\end{align}


Substituting the perturbed solution into Poisson's equation yields:


\begin{align}
\phantom{\Rightarrow}\;& \Delta (\phi_{\text{ext},0} + \varepsilon \phi_{\text{ext},1}) = 4 \pi G (\rho_0 + \varepsilon \rho_1) \\[6pt]
\Rightarrow\;& \Delta \phi_{\text{ext},1} = 4 \pi G \rho_1
\end{align}


The linearised equations obtained can be further simplified by assuming an axisymmetric perturbation. For the continuity equation, this results in:


\begin{align}
\partial_t \rho_1 + \frac{1}{r} \partial_r \left( r \rho_0 v_{1,r} \right) + \rho_0 \, \partial_z v_{1,z} = 0
\end{align}


For Euler's equation, it is useful to work with a component-wise representation in the following. Due to the assumed axial symmetry of the perturbation, the \(\varphi\) component drops out. For the remaining \(r\) and \(z\) components, it follows that:


\begin{align}
\rho_0 \, \partial_t v_{1,r} &= - \frac{R T}{M} \partial_r \rho_1 - \rho_1 \partial_r \phi_{\text{ext},0} - \rho_0 \partial_r \phi_{\text{ext},1} \\[6pt]
\rho_0 \, \partial_t v_{1,z} &= - \frac{R T}{M} \partial_z \rho_1 - \rho_0 \partial_z \phi_{\text{ext},1}
\end{align}


After the linearisation, the following separation approach can be selected for the perturbation variables:


\begin{align}
q_1(r, z, t) = \hat{q}_1(r) \, \exp(ikz) \, \exp(\sigma t)
\end{align}


This approach can be justified as follows. To this end, it is useful to combine the perturbation variables in a vector \(\boldsymbol{w}\):


\begin{align}
\boldsymbol{w} = \left( \rho_1, \, v_{1,r}, \, v_{1,z}, \, \phi_{\text{ext},1} \right)^{\mathsf{T}}
\end{align}
Each mode \(\exp(ikz)\) is thus multiplied by a constant phase factor \(\exp(ikz_0)\) through a translation along the \(z\)-axis.
Since both the time derivative and the linear operator commute with the translation operator, they have a common set of eigenfunctions. It is therefore consistent to expand the \(z\)-dependence of the perturbation variables in Fourier modes. Furthermore, it can be shown that different wave numbers \(k\) do not couple with each other, so that each Fourier mode evolves independently. For the stability analysis, it is therefore sufficient to consider a single Fourier mode with a fixed wave number \(k\).
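The phase-factor property used in this argument can be illustrated numerically: translating a periodic field along \(z\) multiplies each of its Fourier coefficients by \(\exp(ikz_0)\). The following self-contained numpy sketch uses an arbitrary smooth test field and shift of our own choosing:

```python
import numpy as np

Nz, Lz = 256, 2 * np.pi
z = np.linspace(0.0, Lz, Nz, endpoint=False)
z0 = 0.7                                    # translation along the z-axis

f = np.exp(-np.cos(z))                      # arbitrary smooth periodic field
f_shifted = np.exp(-np.cos(z + z0))         # the same field translated by z0

k = 2 * np.pi * np.fft.fftfreq(Nz, d=Lz / Nz)    # angular wave numbers
lhs = np.fft.fft(f_shifted)                 # coefficients of the shifted field
rhs = np.fft.fft(f) * np.exp(1j * k * z0)   # coefficients times phase factor

print(np.max(np.abs(lhs - rhs)))            # agrees to machine precision
```

Each mode picks up only the constant phase \(\exp(ikz_0)\); no power is exchanged between different wave numbers.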
      
      
Analogous to the Fourier decomposition in \(z\), the time dependence can also be decomposed into eigenfunctions of the time evolution operator. Thus, the time evolution of the perturbation is given by:


\begin{align}
\boldsymbol{w}(r, z, t) = \hat{\boldsymbol{w}}(r, z) \, \exp(\sigma t)
\end{align}


Here, \(\sigma\) is generally complex. The real part \(\Re(\sigma)\) describes the growth or damping rate of the perturbation. For \(\Re(\sigma)>0\), the perturbation grows exponentially, and the system is unstable in this case. For \(\Re(\sigma)<0\), the perturbation decays exponentially, and the system is stable. The imaginary part \(\Im(\sigma)\) corresponds to an oscillation frequency, i.e. for \(\Im(\sigma)\neq 0\), the development is additionally oscillatory. In the special case \(\Re(\sigma)=0\), the amplitude remains constant, resulting in a purely oscillatory solution.  
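The three cases can be made concrete with a few illustrative values of \(\sigma\) (the numbers below are ours, chosen only to exhibit growth, decay, and pure oscillation):

```python
import numpy as np

t = 2.0
for sigma in (0.5 + 1j, -0.5 + 1j, 0.0 + 1j):
    # The modulus of exp(sigma t) is exp(Re(sigma) t): the imaginary part
    # only rotates the phase (oscillation), the real part sets the amplitude.
    amplitude = abs(np.exp(sigma * t))
    print(sigma, amplitude)   # growth > 1, decay < 1, oscillation = 1
```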


The radial part of the linearised equations has no constant coefficients. As a result, the equations are not translation-invariant in \(r\). Consequently, plane waves in \(r\) are not eigenfunctions of the radial operator, and the radial structure of the perturbations must be determined explicitly.


Using the following separation Ansatz for the perturbation variables results in a coupled system of ordinary differential equations in \(r\):


\begin{align}
\rho_1 &= \hat{\rho}(r) \, \exp(ikz) \, \exp(\sigma t) \\[6pt]
\boldsymbol{v}_1 &= \hat{\boldsymbol{v}}(r) \, \exp(ikz) \, \exp(\sigma t) \\[6pt]
\phi_{\text{ext},1} &= \hat{\phi}(r) \, \exp(ikz) \, \exp(\sigma t)
\end{align}


To solve this system of coupled differential equations, suitable boundary conditions are required. Since the relevant dynamics are essentially determined by the interior of the cylinder and the equilibrium solution is localised to a finite radial region around the characteristic radius \(r_0\), the perturbation is required to vanish at large radial distances. In the limit \(r \to \infty\), the perturbation variables must therefore approach zero:


\begin{align}
\lim\limits_{r \rightarrow \infty} q_1 = 0
\end{align}


For the second boundary condition, analogous to the derivation of the analytical equilibrium solution of an infinitely long, isothermal, self-gravitating gas cylinder, the rotational symmetry about the cylinder axis and the requirement for a smooth and regular solution imply that the perturbation variables at the origin must not have any gradients. In particular, the radial derivatives must vanish at the centre:


\begin{align}
\partial_r q_1 \big|_{r = 0} = 0
\end{align}


It proves useful to introduce some dimensionless quantities that further simplify the system of differential equations. It is reasonable to remove the dimensions from the perturbation variables as follows:


\begin{align}
x = \frac{r}{r_0} \, , \quad u = \frac{\rho_1}{\rho_c} \, , \quad \psi = \frac{M}{R T} \, \phi_{\text{ext},1}
\end{align}


Since \(u\) and \(\psi\) essentially represent the correspondingly rescaled density and potential perturbations, the boundary conditions for the perturbation variables are directly transferred to \(u\) and \(\psi\). The same boundary conditions therefore apply:


\begin{align}
\lim\limits_{x \rightarrow \infty} u = \lim\limits_{x \rightarrow \infty} \psi = 0 \, , \quad \partial_x u \big|_{x = 0} = \partial_x \psi \big|_{x = 0} = 0
\end{align}


The perturbation along the \(z\)-axis is a superposition of Fourier modes with different wave numbers. Accordingly, the asymptotic behaviour over time, analogous to the previous argument, is determined by the mode with the fastest overall growth, i.e. by the maximum over all dimensionless wave numbers:
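That the mode with the largest growth rate eventually dominates any superposition, regardless of the initial amplitudes, can be seen in a toy example (all growth rates and seed amplitudes below are illustrative, not values from the analysis):

```python
import numpy as np

growth_rates = np.array([0.10, 0.50, 0.30])   # Re(sigma) of three modes
seeds = np.array([1.0, 1e-3, 1.0])            # fastest mode starts tiny

for t in (0.0, 20.0, 60.0):
    amplitudes = seeds * np.exp(growth_rates * t)
    print(t, np.argmax(amplitudes))           # index of the dominant mode
```

Even though the fastest-growing mode starts three orders of magnitude below the others, it dominates for sufficiently large \(t\).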


\begin{align}
\sigma_{\max} = \max\limits_{K} \Re\big(\sigma(K)\big)
\end{align}
Here, a sufficiently large computational domain \([0,\,x_{\max}]\) with \(x_{\max}=25\) and a sufficient number of grid points \(N=500\) were used. Neither a further enlargement of the computational domain nor an increase in the number of grid points leads to a significant change in the dimensionless wave number of the fastest growing mode, so that the result can be considered converged with respect to these two parameters.


Since the fastest growing Fourier mode dominates the perturbation pattern for sufficiently long times, periodic overdensities form along the \(z\)-axis. The corresponding wavelength is given by:


\begin{align}
\lambda_{\max} = \frac{2 \pi}{k_{\max}}
\end{align}
Afterwards, the numerical setup will be gradually expanded to two and then three dimensions. Once the three-dimensional setup in cylindrical coordinates is stable and works reliably, the plan is to switch to a three-dimensional setup in Cartesian coordinates. Cartesian coordinates are needed for the considered problem because the instabilities to be investigated later are expected to break the cylindrical symmetry.


Once the three-dimensional setup is in Cartesian coordinates, the next step will be to extend it to magnetohydrodynamic calculations. The aim here is to implement the equilibrium solutions of [https://arxiv.org/pdf/astro-ph/9901096 Fiege and Pudritz (2000a)] in PLUTO/belt. Finally, the equilibrium configuration should be perturbed, analogous to [https://arxiv.org/pdf/astro-ph/9902385 Fiege and Pudritz (2000b)], in order to investigate the temporal development of the perturbations and compare them with the predictions of their linear stability analysis.


== 1D Hydrodynamic Reference Test: Setup and Convergence ==
Here, \(\Delta r_i\) is the length of the \(i\)-th cell, \( r_{i-\frac{1}{2}}\) is the radius of the inner (left) wall of the \(i\)-th cell, and \(N_r\) is the number of cells in the domain. This exponential growth of the cell widths ensures high resolution in the area of interest, i.e. in and around the filament, especially near the core, while a coarser resolution suffices further out, where in the case under consideration there are typically only very low densities or the filament transitions into an almost-vacuum state. In this way, a good balance is achieved between accuracy in the relevant region and numerical efficiency.
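One common way to realise such a grid is a constant cell-to-cell growth factor, so that the cell widths form a geometric series. The sketch below is illustrative only; the function name, parameter values and growth factor are ours, not the thesis setup:

```python
import numpy as np

def stretched_grid(r_in, r_out, n_cells, q=1.02):
    """Cell-wall radii on [r_in, r_out] where each cell is q times
    longer than the one before (exponentially growing cell widths)."""
    # Width of the first cell so that the geometric series fills the domain.
    dr0 = (r_out - r_in) * (q - 1.0) / (q**n_cells - 1.0)
    widths = dr0 * q ** np.arange(n_cells)
    return r_in + np.concatenate(([0.0], np.cumsum(widths)))

walls = stretched_grid(0.01, 10.0, 128)
print(walls[0], walls[-1], len(walls))   # spans the domain with 129 walls
```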


For the hydrodynamic variables, reflective boundary conditions are used at the inner edge of the domain. This is because the inner edge represents an impenetrable core and the reflective boundary condition prevents mass flow across the boundary surface. For the outer boundary, outflow–no-inflow boundary conditions are selected in order to model the domain as an open system. This allows matter, perturbations and waves to leave the domain, while preventing inflow from outside.


For the gravitational potential of the gas in the domain, a zero-gradient boundary condition is used at the inner edge. The reason for this is that the gravitational attraction of the core is considered separately in PLUTO/belt. Since gravitational potentials can be superimposed due to the linearity of Poisson's equation, the potential generated by the gas can then be linearly added to the potential generated by the core. At the outer boundary, the potential is specified by the analytical potential solution for an infinitely long, isothermal, self-gravitating cylinder without a core. This boundary condition is permissible in this setup because the core serves only as a numerical aid to emulate the case without a core.

Latest revision as of 10:47, 4 March 2026

= Introduction =

This master's thesis investigates the dynamics and stability of filamentary gas structures using numerical simulations. These filaments are of physical interest because they are common sites of star formation.



= Theoretical Framework =

== The isothermal self-gravitating cylinder ==

The stationary density profile \(\rho\) of an infinitely long, isothermal, self-gravitating gas cylinder is sought. To this end, the hydrostatic Euler equation must be solved, which is given by:

\begin{align} \boldsymbol{\nabla} P = \boldsymbol{f}_\text{ext} \end{align}

Here, \(P\) is the pressure field and \(\boldsymbol{f}_\text{ext}\) is the force density acting on the gas. In the considered case, this is the gas's own gravity. Since the gravitational field is a conservative force field, it can be expressed by a potential \(\phi_\text{ext}\):

\begin{align} \boldsymbol{f}_\text{ext} = - \rho \boldsymbol{\nabla} \phi_\text{ext} \end{align}

Inserting yields:

\begin{align} \boldsymbol{\nabla} P = - \rho \boldsymbol{\nabla} \phi_\text{ext} \end{align}

The pressure field of the gas under consideration can be described by the equation of state of an isothermal gas, which is given by:

\begin{align} P = \frac{R T}{M} \rho \end{align}

Here, \(R\) denotes the universal gas constant, \(T\) the spatially homogeneous temperature distribution, and \(M\) the molar mass of the gas. Inserting into the hydrostatic Euler equation yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{M} \boldsymbol{\nabla} \rho = - \rho \boldsymbol{\nabla} \phi_\text{ext} \\[6pt] \Leftrightarrow\;& \frac{1}{\rho} \boldsymbol{\nabla} \rho = - \frac{M}{R T} \boldsymbol{\nabla} \phi_\text{ext} \\[6pt] \Leftrightarrow\;& \boldsymbol{\nabla} \ln(\rho) = - \frac{M}{R T} \boldsymbol{\nabla} \phi_\text{ext} \\[6pt] \Rightarrow\;& \Delta \ln(\rho) = - \frac{M}{R T} \Delta \phi_\text{ext} \end{align}

It proves useful to use the Poisson equation of Newtonian gravity, which is given by:

\begin{align} \Delta \phi_\text{ext} = 4 \pi G \rho \end{align}

Inserting yields:

\begin{align} \phantom{\Rightarrow}\;& \Delta \ln(\rho) = - \frac{4 \pi G M}{R T} \rho \, , \quad \text{def.:} \, a := \frac{4 \pi G M}{R T} \\[6pt] \Leftrightarrow\;& \Delta \ln(\rho) = - a \rho \end{align}

For an infinitely long, straight, self-gravitating gas cylinder without an externally imposed azimuthal or axial structure, there is symmetry with respect to rotation about the cylinder axis and translation along the axis. The underlying equations share this symmetry, which is why corresponding derivatives disappear in the considered equation:

\begin{align} \phantom{\Rightarrow}\;& \frac{1}{r} \partial_r \big(r \partial_r \ln(\rho)\big) = - a \rho \\[6pt] \Leftrightarrow\;& \partial_r \bigg(r \frac{\partial_r \rho}{\rho}\bigg) = - a r \rho \\[6pt] \Leftrightarrow\;& \frac{\partial_r \rho}{\rho} - r \bigg(\frac{\partial_r \rho}{\rho}\bigg)^2 + r \frac{\partial_r^2 \rho}{\rho} = - a r \rho \end{align}

Since the density is positive by definition, the following substitution can be made without restriction:

\begin{align} u(r) = \ln\left(\frac{\rho(r)}{\rho_0}\right). \end{align}

Here, \(\rho_0 = \text{const.} > 0\) is a constant reference density. Its specific value is irrelevant for further consideration. It only serves to ensure that the argument of the logarithm is dimensionless. The first two derivatives of the new quantity \(u\) are then given by:

\begin{align} \partial_r u = \frac{\partial_r \rho}{\rho} \quad \text{and} \quad \partial_r^2 u = \frac{\partial_r^2 \rho}{\rho} - \left(\frac{\partial_r \rho}{\rho}\right)^2 \end{align}

Inserting this into the differential equation under consideration yields:

\begin{align} \frac{\partial_r u}{r} + \partial_r^2 u = - a \rho_0 \exp(u) \end{align}

To simplify this equation further, another substitution is performed:

\begin{align} z = u + \sqrt{2} t \, , \quad \text{with} \, t = \sqrt{2} \ln\left(\sqrt{a \rho_0} r\right) \end{align}

With \(\partial_r t = \frac{\sqrt{2}}{r}\), the following applies for the first two derivatives of \(u\) with respect to \(r\):

\begin{align} \partial_r u = \frac{\sqrt{2}}{r} \partial_t u \quad \text{and} \quad \partial_r^2 u = - \frac{\sqrt{2}}{r^2} \partial_t u + \frac{2}{r^2} \partial_t^2 u \end{align}

Inserting yields:

\begin{align} \phantom{\Rightarrow}\;& 2 \partial_t^2 u = - a \rho_0 r^2 \exp(u) \\[6pt] \Leftrightarrow\;& 2 \partial_t^2 u = - \exp(z) \end{align}

Furthermore, the following applies:

\begin{align} \phantom{\Rightarrow}\;& \partial_t^2 z = \partial_t^2 u + \sqrt{2} \partial_t^2 t \\[6pt] \Leftrightarrow\;& \partial_t^2 z = \partial_t^2 u \end{align}

Thus, it follows that:

\begin{align} \phantom{\Rightarrow}\;& 2 \partial_t^2 z = - \exp(z) \\[6pt] \Rightarrow\;& 2 \left(\partial_t z\right) \left(\partial_t^2 z\right) = - \left(\partial_t z\right) \exp(z) \\[6pt] \Leftrightarrow\;& \partial_t\left(\left(\partial_t z\right)^2 + \exp(z)\right) = 0 \\[6pt] \Rightarrow\;& \left(\partial_t z\right)^2 + \exp(z) = c_1 \, , \quad c_1 \in \mathbb{R} \\[6pt] \Leftrightarrow\;& \partial_t z = \pm \sqrt{c_1 - \exp(z)} \end{align}
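The conserved quantity found in this step can be checked symbolically: along solutions of \(2\partial_t^2 z = -\exp(z)\), the time derivative of \((\partial_t z)^2 + \exp(z)\) vanishes. A short sympy check (illustrative only):

```python
import sympy as sp

t = sp.symbols('t')
z = sp.Function('z')(t)

conserved = sp.diff(z, t)**2 + sp.exp(z)

# Substitute the equation of motion z'' = -exp(z)/2 into d/dt of the invariant.
dconserved = sp.diff(conserved, t).subs(sp.Derivative(z, (t, 2)), -sp.exp(z) / 2)
print(sp.simplify(dconserved))   # -> 0
```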

At first, it might seem that both signs need to be treated separately. However, it can actually be shown that both equations lead to the same solutions[1]. In the following, therefore, only the equation with a positive sign will be considered. To solve this, the following substitution is used:

\begin{align} z = \ln(y) \end{align}

This results in the following for the differential equation under consideration:

\begin{align} \frac{\partial_t y}{y} = \sqrt{c_1 - y} \end{align}

Separation of variables yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{\text{d}y}{y\sqrt{c_1 - y}} = \text{d}t \\[6pt] \Rightarrow\;& \frac{1}{\sqrt{c_1}} \ln\left(\bigg|\frac{\sqrt{c_1 - y} - \sqrt{c_1}}{\sqrt{c_1 - y} + \sqrt{c_1}}\bigg|\right) = t + C_2 \\[6pt] \Leftrightarrow\;& \bigg|\frac{\sqrt{c_1 - y} - \sqrt{c_1}}{\sqrt{c_1 - y} + \sqrt{c_1}}\bigg| = \exp\left(\sqrt{c_1} t + \sqrt{c_1} C_2\right) \, , \quad \text{with} \, c_2 = \exp\left(\sqrt{c_1} C_2\right) \\[6pt] \Leftrightarrow\;& \bigg|\frac{\sqrt{c_1 - y} - \sqrt{c_1}}{\sqrt{c_1 - y} + \sqrt{c_1}}\bigg| = c_2 \exp\left(\sqrt{c_1} t\right) \, , \quad \text{with} \, A = c_2 \exp\left(\sqrt{c_1} t\right) \\[6pt] \Leftrightarrow\;& \bigg|\frac{\sqrt{c_1 - y} - \sqrt{c_1}}{\sqrt{c_1 - y} + \sqrt{c_1}}\bigg| = A \, , \quad \text{with} \, y = \exp(z) > 0 \Rightarrow \sqrt{c_1 - y} < \sqrt{c_1} \; \forall y \\[6pt] \Leftrightarrow\;& \frac{\sqrt{c_1} - \sqrt{c_1 - y}}{\sqrt{c_1} + \sqrt{c_1 - y}} = A \\[6pt] \Leftrightarrow\;& y = 4 c_1 \frac{A}{(1 + A)^2} \\[6pt] \Leftrightarrow\;& \exp(u) = \frac{4 c_1}{a \rho_0 r^2} \frac{A}{(1 + A)^2} \\[6pt] \Leftrightarrow\;& \rho = \frac{4 c_1 \rho_0}{a \rho_0 r^2} \frac{A}{(1 + A)^2} \end{align}

The quantity \(A\) can be written as follows:

\begin{align} \phantom{\Rightarrow}\;& A = c_2 \exp\left(\sqrt{c_1} t\right) \\[6pt] \Leftrightarrow\;& A = c_2 \exp\left(\sqrt{2 c_1} \ln(\sqrt{a \rho_0} r)\right) \\[6pt] \Leftrightarrow\;& A = c_2 (\sqrt{a \rho_0} r)^{\sqrt{2 c_1}} \end{align}

Define additionally:

\begin{align} b := \sqrt{2 c_1} \quad \text{and} \quad r_0 := \left(c_2^\frac{1}{b} \sqrt{a \rho_0}\right)^{-1} \end{align}

This yields the following general solution for the differential equation under consideration:

\begin{align} \phantom{\Rightarrow}\;& \rho(r) = \frac{2 b^2}{a r^2} \frac{\left(\frac{r}{r_0}\right)^b} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \\[6pt] \Leftrightarrow\;& \rho(r) = \frac{2 b^2}{a r_0^2} \frac{\left(\frac{r}{r_0}\right)^{b-2}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \end{align}

The integration constants \(r_0\) and \(b\) must then be determined using appropriate boundary conditions. For the problem considered, it makes sense to specify boundary conditions for the center of the cylinder, i.e., for \(r = 0\). In the considered case, the density at the center of the cylinder must be finite and nonzero, since both a vanishing and a diverging density at the origin would lead to unphysical behavior:

\begin{align} \rho(r=0) = \rho_c \, , \quad \text{with} \quad 0 < \rho_c < \infty \end{align}

Furthermore, rotational symmetry and the requirement for a smooth, regular solution imply that the density profile at the origin should not have a gradient. Consequently, \(\rho(r)\) has an extremum at \(r=0\), and the following applies:

\begin{align} \partial_r \rho(r) \big|_{r = 0} = 0 \end{align}

It should be noted that the substitution \(t=\sqrt{2}\ln\big(\sqrt{a\rho_0}r\big)\) is only defined for \(r>0\). The differential equation was thus initially solved in the domain \(r > 0\). In order to formulate boundary conditions at \(r=0\), the solution must be extended continuously to \(r=0\). Accordingly, the central values are understood as right-sided limits:

\begin{align} \rho(r = 0) = \lim\limits_{r \rightarrow 0} \rho(r) \quad \text{and} \quad \partial_r \rho(r) \big|_{r = 0} = \lim\limits_{r \rightarrow 0} \partial_r \rho(r) \end{align}

With the first boundary condition, the following then applies:

\begin{align} \phantom{\Rightarrow}\;& \lim\limits_{r \rightarrow 0} \rho(r) = \rho_c \\[6pt] \Leftrightarrow\;& \lim\limits_{r \rightarrow 0} \frac{2 b^2}{a r_0^2} \frac{\left(\frac{r}{r_0}\right)^{b-2}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} = \rho_c \end{align}

Since the denominator converges to \(1\) in the limit case \(r\rightarrow 0\), the local behavior of the density in the immediate vicinity of the origin is determined exclusively by the numerator. This, together with the fact that the density at the origin must not vanish or diverge, directly implies that the following must hold:

\begin{align} b = 2 \end{align}

Inserting \(b = 2\) into the general solution for the density profile gives:

\begin{align} \rho(r) = \frac{8}{a r_0^2} \frac{1} {\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2} \end{align}

Furthermore, the first boundary condition implies that:

\begin{align} \phantom{\Rightarrow}\;& \rho_c = \frac{8}{a r_0^2} \\[6pt] \Leftrightarrow\;& r_0^2 = \frac{8}{a \rho_c} \end{align}

Inserting this gives the stationary density profile of an infinitely long, isothermal, self-gravitating gas cylinder:

\begin{align} \rho(r) = \frac{\rho_c}{\left(1 + \frac{1}{8} a \rho_c r^2\right)^2} \end{align}
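As a quick consistency check, this profile can be verified symbolically. The following minimal SymPy sketch confirms that \(\rho(r)\) satisfies the compact form of the cylinder equation, \(\partial_r\left(r \partial_r \rho / \rho\right) = - a r \rho\), which follows from the hydrostatic Euler equation:

```python
import sympy as sp

r, a, rho_c = sp.symbols("r a rho_c", positive=True)

# Stationary density profile of the isothermal, self-gravitating cylinder
rho = rho_c / (1 + sp.Rational(1, 8) * a * rho_c * r**2) ** 2

# Residual of the ODE  d_r( r d_r(rho) / rho ) = -a r rho
residual = sp.simplify(sp.diff(r * sp.diff(rho, r) / rho, r) + a * r * rho)
print(residual)  # -> 0
```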

It is noticeable that the second boundary condition is not explicitly included in the determination of the integration constants \(r_0\) and \(b\). The reason for this is that the requirement for a finite central density different from zero already restricts the solution so much that only the case \(b=2\) remains. Thus, the density profile takes the above form and automatically satisfies the second boundary condition. It is therefore not redundant, but is already implicitly contained in the regularity and symmetry requirements.

In this context, the total mass per unit length \(\lambda_{\text{total}}\) of an infinitely long, isothermal gas cylinder held together by its own gravity can be introduced:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\text{total}} = 2 \pi \int_{0}^{\infty} \rho(r) r \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{total}} = \frac{8 \pi}{a} \\[6pt] \Leftrightarrow\;& \lambda_{\text{total}} = \frac{2 R T}{G M} \end{align}
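The line mass can likewise be checked symbolically; a short SymPy sketch evaluates \(2\pi\int_0^\infty \rho(r)\, r\, \text{d}r\) for the profile derived above:

```python
import sympy as sp

r, a, r0 = sp.symbols("r a r_0", positive=True)

# Density profile with central density rho_c = 8/(a r0^2)
rho = 8 / (a * r0**2) / (1 + (r / r0) ** 2) ** 2

# Total mass per unit length
lam_total = sp.integrate(2 * sp.pi * rho * r, (r, 0, sp.oo))
print(sp.simplify(lam_total))  # -> 8*pi/a
```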

The solution for the density profile can therefore also be written as follows:

\begin{align} \rho(r) = \frac{\lambda_{\text{total}}}{\pi r_0^2} \frac{1}{\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2} \end{align}

With the density distribution determined, the gravitational potential it induces can now be obtained from Poisson's equation. Using the given symmetry, the Poisson equation reads:

\begin{align} \phantom{\Rightarrow}\;& \frac{1}{r} \partial_r \left(r \partial_r \phi_\text{ext}(r)\right) = 4 \pi G \rho(r) \\[6pt] \Leftrightarrow\;& \partial_r \left(r \partial_r \phi_\text{ext}(r)\right) = 4 \pi G \rho(r) r \end{align}

Integration yields:

\begin{align} \phantom{\Rightarrow}\;& r \partial_r \phi_\text{ext}(r) = 4 \pi G \int_{0}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime \\[6pt] \Leftrightarrow\;& r \partial_r \phi_\text{ext}(r) = \frac{32 \pi G}{a r_0^2} \int_{0}^{r} \frac{r^\prime}{\left(1 + \left(\frac{r^\prime}{r_0}\right)^2\right)^2} \text{d} r^\prime \\[6pt] \Leftrightarrow\;& r \partial_r \phi_\text{ext}(r) = \frac{16 \pi G}{a} \frac{\left(\frac{r}{r_0}\right)^2} {1 + \left(\frac{r}{r_0}\right)^2} \\[6pt] \Leftrightarrow\;& \partial_r \phi_\text{ext}(r) = \frac{16 \pi G}{a r_0} \frac{\frac{r}{r_0}}{1 + \left(\frac{r}{r_0}\right)^2} \end{align}

Another integration yields the gravitational potential of an infinitely long, isothermal, self-gravitating gas cylinder:

\begin{align} \phantom{\Rightarrow}\;& \phi_\text{ext}(r) = \frac{16 \pi G}{a r_0} \int_{0}^{r} \frac{\frac{r^\prime}{r_0}}{1 + \left(\frac{r^\prime}{r_0}\right)^2} \text{d} r^\prime \\[6pt] \Leftrightarrow\;& \phi_\text{ext}(r) = \frac{8 \pi G}{a} \ln\left(1 + \left(\frac{r}{r_0}\right)^2\right) \\[6pt] \Leftrightarrow\;& \phi_\text{ext}(r) = G \lambda_{\text{total}} \ln\left(1 + \left(\frac{r}{r_0}\right)^2\right) \end{align}
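The potential can be checked directly against the Poisson equation; the following SymPy sketch verifies that \(\frac{1}{r}\partial_r\left(r \partial_r \phi_\text{ext}\right) = 4 \pi G \rho\) holds for the derived expressions:

```python
import sympy as sp

r, a, r0, G = sp.symbols("r a r_0 G", positive=True)

rho = 8 / (a * r0**2) / (1 + (r / r0) ** 2) ** 2
phi = 8 * sp.pi * G / a * sp.log(1 + (r / r0) ** 2)

# Residual of the cylindrical Poisson equation
residual = sp.simplify(sp.diff(r * sp.diff(phi, r), r) / r - 4 * sp.pi * G * rho)
print(residual)  # -> 0
```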

  1. Choosing the opposite sign of the square root merely flips the sign of \(b\), and the final expression for \(\rho\) is invariant under \(b \to -b\).

The case with a core

We now consider an infinitely long cylindrical core with radius \(R_{\text{min}}\), surrounded by an isothermal gas that is held together by the gravity of the core on the one hand and by its self-gravity on the other. It can be shown that this system obeys the same differential equation for the density distribution as the case without a core. The starting point here is again the hydrostatic Euler equation. For this, the external force \(\text{d} \boldsymbol{F}_{\text{ext}}\) exerted by the gas and the core on a mass element \(\rho(r) \text{d} V\) is required, which is given by:

\begin{align} \text{d} \boldsymbol{F}_{\text{ext}} = - \frac{2 G \lambda(r) \rho(r) \text{d} V}{r} \boldsymbol{e}_r \, , \quad \text{with} \quad r > R_{\text{min}} \end{align}

Here, \(\lambda(r)\) is the mass per unit length enclosed up to radius \(r\). It is given by:

\begin{align} \phantom{\Rightarrow}\;& \lambda(r) = 2 \pi \int_{0}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime \\[6pt] \Leftrightarrow\;& \lambda(r) = \lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime \end{align}

Here, \(\lambda_{\text{core}}\) is the mass per unit length of the core. Substituting this into the hydrostatic Euler equation gives:

\begin{align} \phantom{\Rightarrow}\;& \boldsymbol{\nabla} P = - \frac{2 G \rho}{r} \left(\lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime\right) \boldsymbol{e}_r \\[6pt] \Leftrightarrow\;& \frac{R T}{M} \boldsymbol{\nabla} \rho = - \frac{2 G \rho}{r} \left(\lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime\right) \boldsymbol{e}_r \end{align}

Due to the given cylinder symmetry, the following then applies:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{M} \partial_r \rho = - \frac{2 G \rho}{r} \left(\lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime\right) \\[6pt] \Leftrightarrow\;& \frac{R T}{2 G M} \frac{r \partial_r \rho}{\rho} = - \left(\lambda_{\text{core}} + 2 \pi \int_{R_{\text{min}}}^{r} \rho(r^\prime) r^\prime \text{d} r^\prime\right) \end{align}

Differentiating with respect to \(r\) then yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{2 G M} \partial_r \left(\frac{r \partial_r \rho}{\rho}\right) = - 2 \pi \rho r \\[6pt] \Leftrightarrow\;& \frac{\partial_r \rho}{\rho} - r \left(\frac{\partial_r \rho}{\rho}\right)^2 + r \frac{\partial_r^2 \rho}{\rho} = - a r \rho \end{align}

Therefore, the general solution of the system under consideration is given by the general solution of the system without the core:

\begin{align} \phantom{\Rightarrow}\;& \rho(r) = \frac{2 b^2}{a r_0^2} \frac{\left(\frac{r}{r_0}\right)^{b-2}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \\[6pt] \Leftrightarrow\;& \rho(r) = \frac{b^2 \lambda_{\text{total}}}{4 \pi r_0^2} \frac{\left(\frac{r}{r_0}\right)^{b-2}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \end{align}

A condition for the existence of physically meaningful density profiles can be derived from Euler's hydrostatic equation. Under the assumed symmetry, the following then applies:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{M} \text{d}_r \rho = - \frac{2 G \lambda \rho}{r} \\[6pt] \Leftrightarrow\;& \frac{R T}{M} \text{d}_r \ln(\rho) = - \frac{2 G \lambda}{r} \end{align}

Inserting the general solution and carrying out the differentiation yields:

\begin{align} \frac{R T}{M} \left(\frac{b - 2}{r} - \frac{2 b}{r} \frac{\left(\frac{r}{r_0}\right)^b}{1 + \left(\frac{r}{r_0}\right)^b}\right) = - \frac{2 G \lambda}{r} \end{align}

Since this equation holds for all \(r > R_{\text{min}}\), it must in particular be satisfied on the surface of the core. With \(\lambda(r = R_{\text{min}}) = \lambda_{\text{core}}\), the following then follows:

\begin{align} \phantom{\Rightarrow}\;& \frac{R T}{M} \left(\frac{b - 2}{R_{\text{min}}} - \frac{2 b}{R_{\text{min}}} \frac{\left(\frac{R_{\text{min}}}{r_0}\right)^b}{1 + \left(\frac{R_{\text{min}}}{r_0}\right)^b}\right) = - \frac{2 G \lambda_{\text{core}}}{R_{\text{min}}} \\[6pt] \Leftrightarrow\;& \frac{r_0}{R_{\text{min}}} = \left(\frac{b + 2 - 4 \frac{\lambda_{\text{core}}}{\lambda_{\text{total}}}} {b - 2 + 4 \frac{\lambda_{\text{core}}}{\lambda_{\text{total}}}}\right)^{\frac{1}{b}} \end{align}

Since the left-hand side of the equation is positive by definition, the argument of the root must be positive as well, which requires for the exponent \(b\):

\begin{align} b > \biggl|4 \frac{\lambda_{\text{core}}}{\lambda_{\text{total}}} - 2\biggr| \end{align}

The gas mass per unit length \(\lambda_{\text{gas}}\) can then be expressed in terms of the mass per unit length of the core and the exponent. The gas mass per unit length is generally given by:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\text{gas}} = 2 \pi \int_{R_{\text{min}}}^{\infty} \rho(r) r \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{gas}} = \frac{b^2 \lambda_{\text{total}}}{2 r_0^b} \int_{R_{\text{min}}}^{\infty} \frac{r^{b-1}} {\left(1 + \left(\frac{r}{r_0}\right)^b\right)^2} \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{gas}} = \frac{b \lambda_{\text{total}}}{2} \frac{1}{1 + \left(\frac{R_{\text{min}}}{r_0}\right)^b} \end{align}

Using the expression for \(\frac{r_0}{R_{\text{min}}}\) derived above, it follows:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\text{gas}} = \frac{b + 2}{4} \lambda_{\text{total}} - \lambda_{\text{core}} \\[6pt] \Leftrightarrow\;& \frac{\lambda_{\text{gas}} + \lambda_{\text{core}}}{\lambda_{\text{total}}} = \frac{b + 2}{4} \end{align}
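This relation between the line masses can be confirmed symbolically. The following SymPy sketch inserts \(\left(R_{\text{min}}/r_0\right)^b\) from the force-balance condition into the expression for \(\lambda_{\text{gas}}\):

```python
import sympy as sp

b, lam, lam_c = sp.symbols("b lambda lambda_c", positive=True)

beta = 4 * lam_c / lam                 # shorthand: 4 lambda_core / lambda_total
Xb = (b - 2 + beta) / (b + 2 - beta)   # (R_min / r_0)^b from the force balance
lam_gas = b * lam / 2 / (1 + Xb)       # gas mass per unit length

residual = sp.simplify(lam_gas - ((b + 2) * lam / 4 - lam_c))
print(residual)  # -> 0
```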

If the core is merely a numerical necessity, introduced only to emulate the gas within the core radius, the case \(b = 2\) is retained. For a given core radius, the core mass per unit length must then be chosen as follows:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\text{core}} = 2 \pi \int_{0}^{R_{\text{min}}} \rho(r,b=2) r \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{core}} = \frac{2 \lambda_{\text{total}}}{r_0^2} \int_{0}^{R_{\text{min}}} \frac{r}{\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2} \text{d} r \\[6pt] \Leftrightarrow\;& \lambda_{\text{core}} = \frac{R_{\text{min}}^2} {R_{\text{min}}^2 + r_0^2}\lambda_{\text{total}} \end{align}

This results in the following for the gas mass per unit length in the system:

\begin{align} \lambda_{\text{gas}} = \frac{r_0^2} {R_{\text{min}}^2 + r_0^2}\lambda_{\text{total}} \end{align}
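The expression for \(\lambda_{\text{core}}\), and thus for \(\lambda_{\text{gas}}\), can be verified by carrying out the integral symbolically:

```python
import sympy as sp

r, r0, Rmin, lam = sp.symbols("r r_0 R_min lambda", positive=True)

# b = 2 equilibrium profile, written in terms of lambda_total
rho = lam / (sp.pi * r0**2) / (1 + (r / r0) ** 2) ** 2

lam_core = sp.integrate(2 * sp.pi * rho * r, (r, 0, Rmin))
residual = sp.simplify(lam_core - lam * Rmin**2 / (Rmin**2 + r0**2))
print(residual)  # -> 0
```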

The case with a core (Lothar)

The solution to the ODE reads

\begin{align} \rho &= \frac{\lambda b^2(r/r_0)^{b-2}}{4\pi r_0^2(1+(r/r_0)^b)^2}\\[2ex] \text{with}\quad \lambda &= \frac{2k_\text{B}T}{Gm} = \int_0^\infty 2\pi r\rho(r,r_0,b{=}2)\,\text dr \end{align} being the only possible total mass/length in the case of gas only, where \(b=2\) is necessary for zero force at \(r=0\). Notably, \(\lambda\) does not depend on \(r_0\), i.e. every width of \(\rho\) is equally allowed. The reason is that no length scale can be constructed from the input parameters \(k_\text{B}T\), \(G\) and \(m\).

This changes in the presence of a core with mass/length \(\lambda_\text{c}\) and radius \(R\). Vanishing force at \(r=0\) is replaced by force balance at \(r=R\), which leads to the condition \begin{align} \frac{r_0}{R} &= \left(\frac{b+2-4\lambda_\text{c}/\lambda}{b-2+4\lambda_\text{c}/\lambda}\right)^{1/b} ~, \end{align} which in turn introduces the constraint \(b>\vert 4\lambda_\text{c}/\lambda-2\vert\).

The resulting gas mass is \begin{gather} \int_R^\infty 2\pi r\rho(r,r_0,b)\,\text dr = \lambda\frac{b+2}{4}-\lambda_\text{c}\\ \Leftrightarrow\quad \frac{\lambda_\text{c}+\lambda_\text{gas}}{\lambda}=\frac{b+2}{4} ~. \end{gather} That is, the amount of gas can be tuned via \(b\). Note that for \(\lambda_\text{c}<\lambda/2\), it cannot be chosen arbitrarily small.

If the core is actually a numerical necessity and only meant to emulate the gas contained inside, \(b=2\) applies again, and the most convenient choice is \(\lambda_\text{c}=\lambda_\text{gas}=\lambda/2\), which yields \(r_0=R\).

Free-fall time of an infinitely long gas cylinder

In the following, the free fall time \(t_{\text{ff}}\) of an infinitely long gas cylinder is calculated for the previously derived density profile, with the cylinder initially at rest. The calculation is performed in the pressureless limit \(P \rightarrow 0\). The dynamics of the system are then determined solely by its own gravity.

To determine the free fall time, an infinitesimal cylindrical shell with radius \(r_{\text{i}}\) and mass \(\rho(r_i) \text{d} V\) is considered, whose gas, according to Newton's shell theorem, is gravitationally accelerated radially inwards by the mass enclosed by the cylindrical shell. Due to symmetry, the problem is reduced to a one-dimensional, purely radial motion. The equation of motion to be solved is therefore:

\begin{align} \phantom{\Rightarrow}\;& \rho(r_{\text{i}}) \text{d} V \frac{\text{d}^2 r}{\text{d} t^2} = - \frac{2 G \lambda(r_{\text{i}}) \rho(r_{\text{i}}) \text{d} V}{r} \\[6pt] \Leftrightarrow\;& \frac{\text{d}^2 r}{\text{d} t^2} = - \frac{2 G \lambda(r_{\text{i}})}{r} \end{align}

Here, \(\lambda(r_{\text{i}})\) is the mass enclosed by the cylinder shell per unit length, which is given by the following equation for the given density distribution:

\begin{align} \phantom{\Rightarrow}\;& \lambda(r_{\text{i}}) = 2 \pi \int_{0}^{r_{\text{i}}} \rho(r) r \text{d} r \\[6pt] \Leftrightarrow\;& \lambda(r_{\text{i}}) = \frac{2 \lambda_{\text{total}}}{r_0^2} \int_{0}^{r_{\text{i}}} \frac{r}{\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2} \text{d} r \\[6pt] \Leftrightarrow\;& \lambda(r_{\text{i}}) = \frac{r_{\text{i}}^2} {r_{\text{i}}^2 + r_0^2}\lambda_{\text{total}} \end{align}

Inserting yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{\text{d}^2 r}{\text{d} t^2} = - \frac{2 G \lambda_{\text{total}}}{r} \frac{r_{\text{i}}^2}{r_{\text{i}}^2 + r_0^2} \, , \quad \text{def.:} \, K := 2 G \lambda_{\text{total}} \frac{r_{\text{i}}^2}{r_{\text{i}}^2 + r_0^2} \\[6pt] \Leftrightarrow\;& \frac{\text{d}^2 r}{\text{d} t^2} = - \frac{K}{r} \, , \quad \text{with} \ v = \frac{\text{d} r}{\text{d} t} \, , \, \frac{\text{d} v}{\text{d} t} = v \frac{\text{d} v}{\text{d} r} \\[6pt] \Leftrightarrow\;& v \frac{\text{d} v}{\text{d} r} = - \frac{K}{r} \\[6pt] \Leftrightarrow\;& v \text{d} v = - \frac{K}{r} \text{d} r \\[6pt] \Rightarrow\;& \frac{1}{2}v^2 = - K \ln(r) + C \end{align}

From the initial conditions \(r(t=0) = r_{\text{i}}\) and \(v(t=0) = 0\), the following applies to the integration constant:

\begin{align} C = K \ln(r_i) \end{align}

Thus follows:

\begin{align} \phantom{\Rightarrow}\;& \frac{1}{2}v^2 = - K \ln\left(\frac{r_{\text{i}}}{r}\right) \\[6pt] \Leftrightarrow\;& v = \pm \sqrt{2 K \ln\left(\frac{r_{\text{i}}}{r}\right)} \end{align}

Since the collapse is directed inwards, only the negative branch of the solution for the velocity is relevant, which is why only this branch will be considered in the following:

\begin{align} \phantom{\Rightarrow}\;& v = - \sqrt{2 K \ln\left(\frac{r_{\text{i}}}{r}\right)} \\[6pt] \Leftrightarrow\;& \frac{\text{d} r}{\text{d} t} = - \sqrt{2 K \ln\left(\frac{r_{\text{i}}}{r}\right)} \end{align}

By integrating this differential equation, the time \(t(r, r_i)\) required for a particle to fall from its starting position \(r_i\) to a radius \(r\) is obtained:

\begin{align} \phantom{\Rightarrow}\;& \int_{0}^{t} \text{d} t^\prime = - \int_{r_{\text{i}}}^{r} \frac{1}{\sqrt{2 K \ln\left(\frac{r_{\text{i}}}{r^\prime}\right)}} \text{d} r^\prime \\[6pt] \Leftrightarrow\;& t(r,r_i) = \sqrt{\frac{\pi}{2 K}} r_{\text{i}} \operatorname{erf}\left(\sqrt{\ln\left(\frac{r_i}{r}\right)}\right) \\[6pt] \Leftrightarrow\;& t(r,r_i) = \frac{1}{2} \sqrt{\frac{\pi\left(r_{\text{i}}^2 + r_0^2\right)}{G \lambda_{\text{total}}}} \operatorname{erf}\left(\sqrt{\ln\left(\frac{r_i}{r}\right)}\right) \end{align}

Here, \(\operatorname{erf}(x)\) is the Gaussian error function, which is defined as follows:

\begin{align} \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \text{d} t \end{align}

The limit \(r \rightarrow 0\) then yields the free-fall time \(t_{\text{ff}}(r_i)\) of a particle starting at rest from position \(r_i\) at time \(t = 0\):

\begin{align} \phantom{\Rightarrow}\;& t_{\text{ff}}(r_i) = \lim\limits_{r \rightarrow 0} t(r,r_i) \, , \quad \text{with} \; \lim\limits_{x \rightarrow \infty} \operatorname{erf}(x) = 1 \\[6pt] \Leftrightarrow\;& t_{\text{ff}}(r_i) = \frac{1}{2} \sqrt{\frac{\pi\left(r_{\text{i}}^2 + r_0^2\right)}{G \lambda_{\text{total}}}} \end{align}

It follows that the free fall time does not approach zero when the starting position approaches zero, but converges to a finite limit value:

\begin{align} \lim_{r_i \rightarrow 0} t_{\text{ff}}(r_i) = \frac{r_0}{2}\sqrt{\frac{\pi}{G\lambda_{\text{total}}}} \end{align}

For arbitrarily small but finite starting positions \(r_i>0\), the free fall time cannot become arbitrarily small, but approaches this finite value.
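The free-fall time can also be checked numerically. The following sketch works in units \(G = \lambda_{\text{total}} = r_0 = 1\) and uses mpmath's quadrature, which handles the integrable singularity at \(r = r_i\); it compares the direct integral of the equation of motion with the analytic result:

```python
import mpmath as mp

# Units: G = lambda_total = r0 = 1; shell starting at rest at r_i
G = lam = r0 = 1.0
r_i = 1.0
K = 2 * G * lam * r_i**2 / (r_i**2 + r0**2)

# t_ff = int_0^{r_i} dr / sqrt(2 K ln(r_i / r))
t_num = mp.quad(lambda r: 1 / mp.sqrt(2 * K * mp.log(r_i / r)), [0, r_i])

# Analytic free-fall time: (1/2) sqrt(pi (r_i^2 + r0^2) / (G lambda_total))
t_ana = mp.sqrt(mp.pi * (r_i**2 + r0**2) / (G * lam)) / 2

print(t_num, t_ana)  # both approximately 1.2533
```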

Linear stability analysis

The aim of this section is to investigate how the hydrostatic equilibrium solution of an infinitely long, isothermal, self-gravitating gas cylinder behaves under perturbations along the \(z\)-axis. In the following, all equilibrium variables are labelled with the index \(0\):

\begin{align} \rho_0(r) = \frac{\lambda_{\text{total}}}{\pi r_0^2}\,\frac{1}{\left(1 + \left(\frac{r}{r_0}\right)^2\right)^2}\,, \quad \boldsymbol{v}_0 = \boldsymbol{0}\,, \quad \phi_{\text{ext},0}(r) = G\,\lambda_{\text{total}}\,\ln\!\left(1 + \left(\frac{r}{r_0}\right)^2\right) \end{align}

Here, \(\boldsymbol{v}_0\) denotes the velocity field of the equilibrium solution, which is identically zero in hydrostatic equilibrium due to the stationary configuration.

A perturbation of the equilibrium solution along the \(z\)-axis is now considered. For this purpose, a dimensionless perturbation parameter \(\varepsilon\) is introduced, in which each hydrodynamic quantity \(q\) with \(q \in \{\rho, \boldsymbol{v}, \phi_{\text{ext}}\}\) can be expanded:

\begin{align} q(r,z,t) = q_0(r) + \sum_{j=1}^{\infty} \varepsilon^j q_j(r,z,t) \end{align}

Here, \(q_0(r)\) denotes the respective equilibrium solution, and the terms \(q_j(r,z,t)\) represent the \(j\)-th order perturbations. In the following, only small perturbations are considered, i.e. \(\varepsilon \ll 1\). In this case, all terms of higher than first order in \(\varepsilon\) can be neglected, resulting in the linear approximation:

\begin{align} q(r,z,t) = q_0(r) + \varepsilon q_1(r,z,t) \end{align}

In order to investigate how these perturbations develop over time, the perturbed variables are inserted into the hydrodynamic equations underlying the system. As a result, products of perturbation terms give rise to higher-order contributions in \(\varepsilon\). Since \(\varepsilon \ll 1\) is assumed in the following, these nonlinear contributions can be neglected. The equations are thus linearised around the equilibrium solution.

Substituting the perturbed solution into the continuity equation yields:

\begin{align} \phantom{\Rightarrow}\;& \partial_t \rho + \boldsymbol{\nabla} \cdot \left(\rho \boldsymbol{v}\right) = 0 \\[6pt] \Leftrightarrow\;& \partial_t \left(\rho_0 + \varepsilon \rho_1\right) + \boldsymbol{\nabla} \cdot \left(\left(\rho_0 + \varepsilon \rho_1\right) \varepsilon \boldsymbol{v}_1\right) = 0 \end{align}

At this point, one can use that the perturbation parameter \(\varepsilon\) is a constant prefactor and that the equilibrium solution is constant in time. In addition, the flux term contains products of perturbation variables, which contribute only at second order in \(\varepsilon\) and can therefore be neglected. This leads to:

\begin{align} \phantom{\Rightarrow}\;& \varepsilon \partial_t \rho_1 + \varepsilon \boldsymbol{\nabla} \cdot \left(\rho_0 \boldsymbol{v}_1\right) = 0 \\[6pt] \Leftrightarrow\;& \partial_t \rho_1 + \boldsymbol{\nabla} \cdot \left(\rho_0 \boldsymbol{v}_1\right) = 0 \end{align}

The same procedure is used for the Euler equation:

\begin{align} \phantom{\Rightarrow}\;& \partial_t \boldsymbol{v} + \left(\boldsymbol{v} \cdot \boldsymbol{\nabla}\right) \boldsymbol{v} = - \frac{1}{\rho} \boldsymbol{\nabla} P - \boldsymbol{\nabla} \phi_{\text{ext}} \, , \quad \text{with} \; P = c_s^2 \rho \\[6pt] \Leftrightarrow\;& \partial_t \boldsymbol{v} + \left(\boldsymbol{v} \cdot \boldsymbol{\nabla}\right) \boldsymbol{v} = - \frac{c_s^2}{\rho} \boldsymbol{\nabla} \rho - \boldsymbol{\nabla} \phi_{\text{ext}} \\[6pt] \Leftrightarrow\;& \varepsilon \partial_t \boldsymbol{v}_1 = - \frac{c_s^2}{\rho_0 + \varepsilon \rho_1} \boldsymbol{\nabla} \left(\rho_0 + \varepsilon \rho_1\right) - \boldsymbol{\nabla} \left(\phi_{\text{ext},0} + \varepsilon \phi_{\text{ext},1}\right) \\[6pt] \Leftrightarrow\;& \varepsilon \partial_t \boldsymbol{v}_1 = - \frac{c_s^2}{\rho_0} \frac{1}{1 + \varepsilon \frac{\rho_1}{\rho_0}} \boldsymbol{\nabla} \left(\rho_0 + \varepsilon \rho_1\right) - \boldsymbol{\nabla} \left(\phi_{\text{ext},0} + \varepsilon \phi_{\text{ext},1}\right) \, , \quad \text{with} \; \frac{1}{1 + x} = 1 -x + \mathcal{O} \left(x^2\right) \\[6pt] \Rightarrow\;& \varepsilon \partial_t \boldsymbol{v}_1 = - \frac{c_s^2}{\rho_0}\left(1 - \varepsilon \frac{\rho_1}{\rho_0}\right) \boldsymbol{\nabla} \left(\rho_0 + \varepsilon \rho_1\right) - \boldsymbol{\nabla} \left(\phi_{\text{ext},0} + \varepsilon \phi_{\text{ext},1}\right) \\[6pt] \Rightarrow\;& \varepsilon \partial_t \boldsymbol{v}_1 = - \frac{c_s^2}{\rho_0}\left(\boldsymbol{\nabla} \rho_0 + \varepsilon \boldsymbol{\nabla} \rho_1 - \varepsilon \frac{\rho_1}{\rho_0} \boldsymbol{\nabla} \rho_0\right) - \boldsymbol{\nabla} \phi_{\text{ext},0} - \varepsilon \boldsymbol{\nabla} \phi_{\text{ext},1} \end{align}

It can now be utilised that the following applies to the equilibrium solution:

\begin{align} \frac{c_s^2}{\rho_0} \boldsymbol{\nabla} \rho_0 = - \boldsymbol{\nabla} \phi_{\text{ext},0} \end{align}

This leads to the linearised Euler equation:

\begin{align} \partial_t \boldsymbol{v}_1 = - c_s^2 \boldsymbol{\nabla} \left(\frac{\rho_1}{\rho_0}\right) - \boldsymbol{\nabla} \phi_{\text{ext},1} \end{align}

Substituting the perturbed solution into Poisson's equation yields:

\begin{align} \phantom{\Rightarrow}\;& \Delta \phi_{\text{ext}} = 4 \pi G \rho \\[6pt] \Leftrightarrow\;& \Delta \left(\phi_{\text{ext},0} + \varepsilon \phi_{\text{ext},1}\right) = 4 \pi G \left(\rho_0 + \varepsilon \rho_1\right) \end{align}

Here, too, it can be used that the following applies to the equilibrium solution:

\begin{align} \Delta \phi_{\text{ext},0} = 4 \pi G \rho_0 \end{align}

This results in the following for the linearised Poisson equation:

\begin{align} \Delta \phi_{\text{ext},1} = 4 \pi G \rho_1 \end{align}

The linearised equations obtained can be further simplified by assuming an axisymmetric perturbation. For the continuity equation, this results in:

\begin{align} \phantom{\Rightarrow}\;& \partial_t \rho_1 + \frac{1}{r} \partial_r \left(r \rho_0 v_{1,r}\right) + \partial_z \left(\rho_0 v_{1,z}\right) = 0 \\[6pt] \Leftrightarrow\;& \partial_t \rho_1 + \frac{1}{r} \partial_r \left(r \rho_0 v_{1,r}\right) + \rho_0 \partial_z \left(v_{1,z}\right) = 0 \end{align}

For Euler's equation, it is useful to use a component-wise representation in the following. Due to the assumed axial symmetry of the perturbation, the \(\varphi\) component is omitted. For the remaining \(r\) and \(z\) components, it follows:

\begin{align} & \partial_t v_{1,r} + c_s^2 \partial_r \left(\frac{\rho_1}{\rho_0}\right) + \partial_r \phi_{\text{ext},1} = 0 \\[6pt] & \partial_t v_{1,z} + c_s^2 \partial_z \left(\frac{\rho_1}{\rho_0}\right) + \partial_z \phi_{\text{ext},1} = 0 \end{align}

Using axial symmetry, the following applies to Poisson's equation:

\begin{align} \frac{1}{r} \partial_r \left(r \partial_r \phi_{\text{ext},1}\right) + \partial_z^2 \phi_{\text{ext},1} = 4 \pi G \rho_1 \end{align}

After the linearisation, the following separation approach can be selected for the perturbation variables:

\begin{align} q_1(r,z,t) = \hat{q}(r)\exp\left(i k z\right)\exp\left(\sigma t\right) \end{align}

This approach can be justified as follows. To this end, it is useful to combine the perturbation variables in a vector \(\boldsymbol{w}\):

\begin{align} \boldsymbol{w} = \left(\rho_1, v_{1,r}, v_{1,z}, \phi_{\text{ext},1}\right)^{\mathsf{T}} \end{align}

The coupled differential equation system to be solved can then be written in compact form as follows:

\begin{align} \mathbf{M} \partial_t \boldsymbol{w} + \mathbf{L} \boldsymbol{w} = 0 \end{align}

Here, \(\mathbf{L}\) is the matrix representation of the linear operator resulting from the linearised equations and containing the derivatives with respect to \(r\) and \(z\):

\begin{align} \mathbf{L} = \mathbf{L}\left(r,\partial_r,\partial_z\right) = \begin{pmatrix} 0 & \frac{1}{r} \partial_r r \rho_0 & \rho_0 \partial_z & 0\\ c_s^2 \partial_r \frac{1}{\rho_0} & 0 & 0 & \partial_r\\ \frac{c_s^2}{\rho_0} \partial_z & 0 & 0 & \partial_z\\ 4 \pi G & 0 & 0 & - \frac{1}{r} \partial_r r \partial_r - \partial_z^2 \end{pmatrix} \end{align}

The matrix \(\mathbf{M}\) is constant and encodes which equations contain time derivatives, e.g.:

\begin{align} \mathbf{M} = \operatorname{diag}(1,1,1,0) \end{align}

In this representation, the Poisson equation would correspond to the fourth line of the system of equations. However, this assignment is not unique, but rather a question of the chosen order of the variables in \(\boldsymbol{w}\) or the equations in the operator \(\mathbf{L}\). The only crucial factor is that the assignment is maintained consistently.

Since the linear operator does not exhibit any explicit \(z\)-dependence, the linearised equations are translation-invariant in \(z\). This means that shifting a solution along the \(z\)-axis again yields a solution of the same linear system. To demonstrate this, the translation operator \(\mathcal{T}_{z_0}\) is introduced, which is defined by:

\begin{align} \mathcal{T}_{z_0} f(r,z,t) := f(r,z+z_0,t) \end{align}

As mentioned above, the linearised equations are translation-invariant in \(z\) if a shift of a solution along the \(z\)-axis again yields a solution of the same linear system. This is equivalent to both the time derivative and the linear operator commuting with the translation operator \(\mathcal{T}_{z_0}\).

The time derivative commutes trivially with \(\mathcal{T}_{z_0}\), since \(\partial_t\) does not act on the \(z\)-coordinate and \(\mathcal{T}_{z_0}\) does not change time. Furthermore, the following relations apply for sufficiently smooth functions \(f\):

\begin{align} \partial_z\left(\mathcal{T}_{z_0} f\right) = \mathcal{T}_{z_0}\left(\partial_z f\right) \qquad \text{and} \qquad \alpha(r)\left(\mathcal{T}_{z_0} f\right) = \mathcal{T}_{z_0}\left(\alpha(r) f\right) \end{align}

Here, \(\alpha(r)\) is any function that depends only on \(r\). Since the entries of the linear operator consist of sums and products of such building blocks, and these commute with \(\mathcal{T}_{z_0}\), it follows that \(\mathbf{L}\) also commutes with the translation operator.

It turns out that Fourier modes \(\exp(ikz)\) are useful for the \(z\)-dependence. The reason is that these functions are eigenfunctions of the translation operator:

\begin{align} \mathcal{T}_{z_0}\exp(ikz) = \exp\left(ik(z+z_0)\right) = \exp(ikz_0)\exp(ikz) \end{align}

Each mode \(\exp(ikz)\) is thus multiplied by a constant phase factor \(\exp(ikz_0)\) through a translation along the \(z\)-axis.

Since both the time derivative and the linear operator commute with the translation operator, they have a common set of eigenfunctions. This makes it consistent to expand the \(z\)-dependence of the perturbation variables in Fourier modes. Furthermore, it can be shown that different wave numbers \(k\) do not couple with each other, so that each Fourier mode evolves independently. For the stability analysis, it is therefore sufficient to consider a single Fourier mode with a fixed wave number \(k\).

Analogous to the Fourier decomposition in \(z\), the time dependence can also be decomposed into eigenfunctions of the time evolution operator. Thus, the time evolution of the perturbation is given by:

\begin{align} q_1(t) \propto \exp\left(\sigma t\right)\, , \quad \text{with} \; \sigma \in \mathbb{C} \end{align}

Here, \(\sigma\) is generally complex. The real part \(\Re(\sigma)\) describes the growth or damping rate of the perturbation. For \(\Re(\sigma)>0\), the perturbation grows exponentially, and the system is unstable in this case. For \(\Re(\sigma)<0\), the perturbation decays exponentially, and the system is stable. The imaginary part \(\Im(\sigma)\) corresponds to an oscillation frequency, i.e. for \(\Im(\sigma)\neq 0\), the development is additionally oscillatory. In the special case \(\Re(\sigma)=0\), the amplitude remains constant, resulting in a purely oscillatory solution.

The radial part of the linearised equations has no constant coefficients. As a result, the equations are not translation-invariant in \(r\). Consequently, plane waves in \(r\) are not eigenfunctions of the radial operator, and the radial structure of the perturbations must be determined explicitly.

Using the following separation Ansatz for the perturbation variables results in a coupled system of ordinary differential equations in \(r\):

\begin{align} & \rho_1(r,z,t) = \hat{\rho}(r)\exp\left(ikz\right)\exp\left(\sigma t\right) \\[6pt] & v_{1,r}(r,z,t) = \hat{v}_r(r)\exp\left(ikz\right)\exp\left(\sigma t\right) \\[6pt] & v_{1,z}(r,z,t) = \hat{v}_z(r)\exp\left(ikz\right)\exp\left(\sigma t\right) \\[6pt] & \phi_{\text{ext},1}(r,z,t) = \hat{\phi}_{\text{ext}}(r)\exp\left(ikz\right)\exp\left(\sigma t\right). \end{align}

This yields for the linearised continuity equation:

\begin{align} \sigma \hat{\rho} + \frac{1}{r} \partial_r \left(r \rho_0 \hat{v}_r\right) + i k \rho_0 \hat{v}_z = 0 \end{align}

For the two remaining components of Euler's equation, the following applies:

\begin{align} & \sigma \hat{v}_r + c_s^2 \partial_r \left(\frac{\hat{\rho}}{\rho_0}\right) + \partial_r \hat{\phi}_{\text{ext}} = 0 \\[6pt] & \sigma \hat{v}_z + i k c_s^2 \frac{\hat{\rho}}{\rho_0} + i k \hat{\phi}_{\text{ext}} = 0 \end{align}

These equations can be solved immediately for the corresponding velocity components:

\begin{align} & \hat{v}_r = - \frac{1}{\sigma} \left(c_s^2 \partial_r \left(\frac{\hat{\rho}}{\rho_0}\right) + \partial_r \hat{\phi}_{\text{ext}}\right) \\[6pt] & \hat{v}_z = - \frac{ik}{\sigma} \left(c_s^2 \frac{\hat{\rho}}{\rho_0} + \hat{\phi}_{\text{ext}}\right) \end{align}

Finally, substituting into the Poisson equation gives:

\begin{align} \frac{1}{r} \partial_r \left(r \partial_r \hat{\phi}_\text{ext}\right) - k^2 \hat{\phi}_\text{ext} = 4 \pi G \hat{\rho} \end{align}

By inserting the velocity components \(\hat{v}_r\) and \(\hat{v}_z\) determined from Euler's equation into the continuity equation, the original system of four coupled differential equations can be reduced to a system of two coupled differential equations for \(\hat{\rho}(r)\) and \(\hat{\phi}_ {\text{ext}}(r)\). The continuity equation is thus given by:

\begin{align} \phantom{\Rightarrow}\;& \sigma \hat{\rho} + \frac{1}{r} \partial_r \left(r \rho_0 \left(- \frac{1}{\sigma} \left(c_s^2 \partial_r \left(\frac{\hat{\rho}}{\rho_0}\right) + \partial_r \hat{\phi}_{\text{ext}}\right)\right)\right) + \frac{k^2}{\sigma} \rho_0 \left(c_s^2 \frac{\hat{\rho}}{\rho_0} + \hat{\phi}_{\text{ext}}\right) = 0 \\[6pt] \Leftrightarrow\;& \frac{1}{r} \partial_r \left(r \rho_0 \left(c_s^2 \partial_r \left(\frac{\hat{\rho}}{\rho_0}\right) + \partial_r \hat{\phi}_\text{ext}\right)\right) = \sigma^2 \hat{\rho} + k^2 \rho_0 \left(c_s^2 \frac{\hat{\rho}}{\rho_0} + \hat{\phi}_{\text{ext}}\right) \end{align}

To solve this system of coupled differential equations, suitable boundary conditions are required. Since the relevant dynamics are essentially determined by the interior of the cylinder and the equilibrium solution is localised within a finite radial region around the characteristic radius \(r_0\), the perturbation is required to vanish at large radial distances. In the limit \(r \to \infty\), the perturbation variables must therefore approach zero:

\begin{align} \lim\limits_{r\to\infty}\hat{\rho}(r) = 0 = \lim\limits_{r\to\infty}\hat{\phi}_\text{ext}(r) \end{align}

For the second boundary condition, analogous to the derivation of the analytical equilibrium solution of an infinitely long, isothermal, self-gravitating gas cylinder, the rotational symmetry about the cylinder axis and the requirement of a smooth, regular solution imply that the perturbation variables must have no gradient at the origin. In particular, the radial derivatives must vanish at the centre:

\begin{align} \partial_r \hat{\rho}(r) \big|_{r = 0} = 0 = \partial_r \hat{\phi}_\text{ext}(r) \big|_{r = 0} \end{align}

It proves useful to introduce some dimensionless quantities that further simplify the system of differential equations. It is reasonable to nondimensionalise the perturbation variables as follows:

\begin{align} u := \frac{\hat{\rho}}{\rho_0} \quad \text{and} \quad \psi := \frac{\hat{\phi}_\text{ext}}{c_s^2} \end{align}

Furthermore, the density distribution of the equilibrium solution can be written in compact form by introducing the dimensionless radial coordinate \(x := \frac{r}{r_0}\) as:

\begin{align} \rho_0(x) = \rho_c q(x) \, , \quad \text{with} \; q(x) := \frac{1}{\left(1 + x^2\right)^2} \end{align}

In addition, the dimensionless parameters \(\bar{k}\) and \(\bar{\sigma}\) can be introduced:

\begin{align} \bar{k} := k r_0 \quad \text{and} \quad \bar{\sigma} := \sigma \frac{r_0}{c_s} \end{align}

Substituting into the continuity equation yields:

\begin{align} \phantom{\Rightarrow}\;& \frac{\rho_c c_s^2}{r_0^2} \frac{1}{x} \partial_x \left(x q \left(\partial_x u + \partial_x \psi\right)\right) = \rho_c q \left(\sigma^2 u + k^2 c_s^2 \left(u + \psi\right)\right) \\[6pt] \Leftrightarrow\;& \frac{1}{x} \partial_x \left(x q \left(\partial_x u + \partial_x \psi\right)\right) = q \left(\bar{\sigma}^2 u + \bar{k}^2 \left(u + \psi\right)\right) \end{align}

Similarly, for the Poisson equation, this gives:

\begin{align} \phantom{\Rightarrow}\;& \frac{c_s^2}{r_0^2} \frac{1}{x} \partial_x \left(x \partial_x \psi\right) - k^2 c_s^2 \psi = 4 \pi G \rho_c q u \\[6pt] \Leftrightarrow\;& \frac{1}{x} \partial_x \left(x \partial_x \psi\right) - \bar{k}^2 \psi = \frac{4 \pi G \rho_c r_0^2}{c_s^2} q u \end{align}

With the relations for \(a\) and \(\rho_c\) introduced above, the following applies:

\begin{align} \frac{1}{x} \partial_x \left(x \partial_x \psi\right) - \bar{k}^2 \psi = 8 q u \end{align}

Since \(u\) and \(\psi\) essentially represent the correspondingly rescaled density and potential perturbations, the boundary conditions for the perturbation variables are directly transferred to \(u\) and \(\psi\). The same boundary conditions therefore apply:

\begin{align} & \lim\limits_{x\to\infty}u(x) = 0 = \lim\limits_{x\to\infty}\psi(x) \\[6pt] & \partial_x u(x) \big|_{x = 0} = 0 = \partial_x \psi(x) \big|_{x = 0} \end{align}

However, it turns out that this system of coupled differential equations cannot, in general, be solved analytically, so a numerical approach must be used. For the later implementation, it is important to note that \(\bar{\sigma}^2 \in \mathbb{R}\) holds. Due to finite discretisation and rounding errors, however, complex values \(\bar{\sigma}^2 \in \mathbb{C}\) may also occur in the numerical solution. Such contributions are usually numerical artefacts and must be discarded in the physical interpretation.

The fact that \(\bar{\sigma}^2 \in \mathbb{R}\) holds can be justified as follows. The parameter \(\bar{\sigma}^2\) appears only in the dimensionless continuity equation. This can therefore be interpreted as an eigenvalue equation, with \(\bar{\sigma}^2\) representing the corresponding eigenvalues. For this purpose, the functional \(w[u]\) is introduced, which combines the rescaled density perturbation and the associated rescaled potential perturbation:

\begin{align} w[u] := u + \psi[u] \end{align}

Furthermore, an operator \(\operatorname{H}\) can be introduced, which is given by:

\begin{align} \operatorname{H} := \operatorname{D}_q - q\bar{k}^2 \, , \qquad \text{with} \qquad \operatorname{D}_q := \frac{1}{x}\partial_x \left(x q \partial_x\right) \end{align}

This allows the continuity equation to be expressed in the following compact form:

\begin{align} \operatorname{H} w[u] = q \bar{\sigma}^2 u \end{align}

An obvious way to demonstrate the reality of the eigenvalues would be to reduce the problem to a standard eigenvalue problem of the following form:

\begin{align} \tilde{\operatorname{H}} u = \bar{\sigma}^2 u \end{align}

In this representation, it would be sufficient to prove that \(\tilde{\operatorname{H}}\) is Hermitian with respect to a suitable scalar product and the boundary conditions, since Hermitian operators have real eigenvalues. In the present formulation, however, such a transformation is technically involved, especially because \(w[u]\) implicitly contains the potential perturbation.

Instead, it is more appropriate to consider the problem in weak form. To do this, the equation is multiplied by a permissible test function that is sufficiently smooth and satisfies the same boundary conditions as the perturbation, and integrated over the radial domain. In this form, symmetries and Hermiticity can be formulated as properties of suitable sesquilinear forms without having to explicitly construct a Hermitian total operator.

This requires a suitable scalar product. Since the problem is reduced to a radial equation, the natural integration measure in cylindrical coordinates is \(r \mathrm{d} r\), or in dimensionless form \(x \mathrm{d} x\). The scalar product can therefore be defined as follows:

\begin{align} \langle f, g \rangle := \int_{0}^{\infty} f^*(x) g(x) x \text{d} x \, , \qquad f,g \in \mathcal{F} \end{align}

Here, \(\mathcal{F}\) denotes the space of admissible complex functions that satisfy the required regularity and boundary conditions.

The transition from the strong, pointwise form to the weak, integral form is equivalent if the functions under consideration are sufficiently smooth. This smoothness can be assumed here, since the rescaled perturbations must be smooth for physical reasons and also satisfy the required boundary conditions. This equivalence follows, among other things, from the fundamental lemma of the calculus of variations, in conjunction with integration by parts and the boundary conditions.

Taking the scalar product of the dimensionless continuity equation in eigenvalue form with an admissible test function \(w[v] \in \mathcal{F}\) yields the weak formulation:

\begin{align} \phantom{\Rightarrow}\;& \langle w[v], \operatorname{H} w[u] \rangle = \langle w[v], q \bar{\sigma}^2 u \rangle \\[6pt] \Leftrightarrow\;& \langle w[v], \operatorname{H} w[u] \rangle = \bar{\sigma}^2 \langle w[v], q u \rangle \end{align}

This is the eigenvalue equation in weak form. It is now useful to define the two associated sesquilinear forms as follows:

\begin{align} A(v,u) := \langle w[v], \operatorname{H} w[u] \rangle \quad \text{and} \quad M(v,u) := \langle w[v], q\,u \rangle \end{align}

This results in the following weak form of the eigenvalue equation:

\begin{align} A(v,u) = \bar{\sigma}^2 M(v,u) \end{align}

To show that the eigenvalues are real, it suffices to prove that the two sesquilinear forms introduced are Hermitian. From this it follows, as will be shown later, that the corresponding eigenvalues are real.

A Hermitian sesquilinear form is defined as follows. Let \(V\) be a complex vector space. A mapping \(a: V \times V \to \mathbb{C}\) is called a Hermitian sesquilinear form if it is semilinear in one argument and linear in the other argument:

\begin{align} & a(\alpha \xi_1 + \beta \xi_2,\, \eta) = \alpha^*\, a(\xi_1, \eta) + \beta^*\, a(\xi_2, \eta) \, , \\[6pt] & a(\xi,\, \alpha \eta_1 + \beta \eta_2) = \alpha\, a(\xi, \eta_1) + \beta\, a(\xi, \eta_2) \end{align}

This must hold for all \(\xi,\xi_1,\xi_2,\eta,\eta_1,\eta_2 \in V\) and all \(\alpha,\beta \in \mathbb{C}\).

In addition, Hermitian symmetry must hold:

\begin{align} a(\xi,\eta) = a(\eta,\xi)^* \end{align}

In particular, this directly implies that:

\begin{align} a(\xi,\xi) \in \mathbb{R} \qquad \forall\, \xi \in V \end{align}

First, it is shown that \(A(v,u)\) is a Hermitian sesquilinear form. To do this, the complex conjugate is examined:

\begin{align} \phantom{\Rightarrow}\;& A^*(v,u) = \langle w[v], \operatorname{H} w[u] \rangle^* \\[6pt] \Leftrightarrow\;& A^*(v,u) = \langle \operatorname{H} w[u], w[v] \rangle \end{align}

If it can now be shown that the operator \(\operatorname{H}\) is Hermitian, it follows that \(A(v,u)\) is a Hermitian sesquilinear form. Since \(\operatorname{H}\) is a linear combination of the operators \(\operatorname{D}_q\) and \(\,q \bar{k}^2\), it suffices to show that both operators are Hermitian. First, we consider \(\operatorname{D}_q\) and, in particular, the following scalar product for admissible test functions \(\xi,\eta \in \mathcal{F}\):

\begin{align} \langle \operatorname{D}_q \xi, \eta \rangle = \int_{0}^{\infty} \left(\operatorname{D}_q \xi\right)^* \eta x \text{d} x \end{align}

Since \(q(x) \in \mathbb{R}\) holds for all \(x \in \mathbb{R}\), the complex conjugation acts exclusively on the test function \(\xi\):

\begin{align} \phantom{\Rightarrow}\;& \langle \operatorname{D}_q \xi, \eta \rangle = \int_{0}^{\infty} \left(\operatorname{D}_q \xi^*\right) \eta x \text{d} x \\[6pt] \Leftrightarrow\;& \langle \operatorname{D}_q \xi, \eta \rangle = \int_{0}^{\infty} \partial_x \left(x q \partial_x \xi^*\right) \eta \text{d} x \end{align}

Integration by parts yields:

\begin{align} \langle \operatorname{D}_q \xi, \eta \rangle = \left[\left(x q \partial_x \xi^*\right) \eta\right]_0^\infty - \int_{0}^{\infty} \left(x q \partial_x \xi^*\right) \partial_x \eta \text{d} x \end{align}

Since \(\xi,\eta \in \mathcal{F}\), the test functions satisfy the boundary conditions that they vanish at infinity and that their radial derivatives vanish at the origin. Thus, the boundary term vanishes and it follows that:

\begin{align} \langle \operatorname{D}_q \xi, \eta \rangle = - \int_{0}^{\infty} x q \partial_x \xi^* \partial_x \eta \text{d} x \end{align}

By appropriately inserting a vanishing boundary term, using the same boundary conditions that the test functions and perturbations satisfy, it follows that:

\begin{align} \langle \operatorname{D}_q \xi, \eta \rangle = \left[\xi^* \left(x q \partial_x \eta\right)\right]_0^\infty - \int_{0}^{\infty} \partial_x \xi^* \left(x q \partial_x \eta\right) \text{d} x \end{align}

Undoing the integration by parts then yields:

\begin{align} \phantom{\Rightarrow}\;& \langle \operatorname{D}_q \xi, \eta \rangle = \int_{0}^{\infty} \xi^* \partial_x \left(x q \partial_x \eta\right) \text{d} x \\[6pt] \Leftrightarrow\;& \langle \operatorname{D}_q \xi, \eta \rangle = \int_{0}^{\infty} \xi^* \left(\operatorname{D}_q \eta\right) x \text{d} x \\[6pt] \Leftrightarrow\;& \langle \operatorname{D}_q \xi, \eta \rangle = \langle \xi, \operatorname{D}_q \eta \rangle \end{align}

This shows that \(\operatorname{D}_q\) is Hermitian with respect to the given scalar product.
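This Hermiticity can also be checked numerically. The following sketch (an illustration, not part of the proof) applies \(\operatorname{D}_q\) with finite differences on a cell-centred grid and compares \(\langle \operatorname{D}_q \xi, \eta \rangle\) with \(\langle \xi, \operatorname{D}_q \eta \rangle\) for two admissible test functions; grid parameters and test functions are arbitrary choices.

```python
import numpy as np

# Illustrative numerical check that D_q = (1/x) d/dx (x q d/dx) is Hermitian
# with respect to <f, g> = int_0^inf f* g x dx for functions satisfying the
# boundary conditions. Grid size and test functions are arbitrary choices.

N, x_max = 8000, 25.0
dx = x_max / N
x = (np.arange(N) + 0.5) * dx            # cell-centred grid avoids x = 0

q = 1.0 / (1.0 + x**2)**2                # smooth real weight (any such q works)

def apply_D_q(f):
    """Apply (1/x) d/dx (x q df/dx) with finite differences."""
    return np.gradient(x * q * np.gradient(f, dx), dx) / x

def inner(f, g):
    """Scalar product with measure x dx (midpoint rule)."""
    return np.sum(np.conj(f) * g * x) * dx

xi = np.exp(-x**2)                       # flat at the origin, decays rapidly
eta = (1.0 + x**2) * np.exp(-x**2 / 2)

diff = abs(inner(apply_D_q(xi), eta) - inner(xi, apply_D_q(eta)))
print(diff)  # small: the two scalar products agree up to discretisation error
```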

Since both \(q(x)\in\mathbb{R}\) for all \(x\in\mathbb{R}\) and \(\bar{k}\in\mathbb{R}\) hold, it follows that the operator \(q\,\bar{k}^2\) is Hermitian with respect to the introduced scalar product for admissible test functions \(\xi,\eta \in \mathcal{F}\):

\begin{align} \langle q \bar{k}^2 \xi, \eta \rangle = \langle \xi, q \bar{k}^2 \eta \rangle \end{align}

Together with the previously shown result that \(\operatorname{D}_q\) is Hermitian, it follows that the operator \(\operatorname{H}\) is also Hermitian with respect to the given scalar product. It follows that \(A(v,u)\) is a Hermitian sesquilinear form:

\begin{align} \phantom{\Rightarrow}\;& A^*(v,u) = \langle \operatorname{H} w[u], w[v] \rangle \\[6pt] \Leftrightarrow\;& A^*(v,u) = \langle w[u], \operatorname{H} w[v] \rangle \\[6pt] \Leftrightarrow\;& A^*(v,u) = A(u,v) \end{align}

Now we will show that \(M(v,u)\) is a Hermitian sesquilinear form. Analogous to the procedure for \(A(v,u)\), the complex conjugate is examined:

\begin{align} \phantom{\Rightarrow}\;& M^*(v,u) = \langle w[v], q u \rangle^* \\[6pt] \Leftrightarrow\;& M^*(v,u) = \langle q u, w[v] \rangle \end{align}

It is now sufficient to show that the functional \(w[\cdot]\) can be moved to the other side of the scalar product in a suitable manner, from which Hermitian symmetry follows. For this purpose, an operator \(\operatorname{P}\) is introduced, motivated by the dimensionless Poisson equation:

\begin{align} \operatorname{P} := \frac{1}{x} \partial_x \left(x \partial_x\right) - \bar{k}^2 \end{align}

In the following, it will be shown that \(\operatorname{P}\) is Hermitian with respect to the introduced scalar product. To this end, the following scalar product for admissible test functions \(\xi,\eta \in \mathcal{F}\) will be considered:

\begin{align} \langle \operatorname{P} \xi, \eta \rangle = \int_{0}^{\infty} \left(\operatorname{P} \xi\right)^* \eta x \text{d} x \end{align}

Since \(\bar{k}\in\mathbb{R}\) and the operator contains only real coefficients, the complex conjugation again acts exclusively on the test function \(\xi\). It follows that:

\begin{align} \phantom{\Rightarrow}\;& \langle \operatorname{P} \xi, \eta \rangle = \int_{0}^{\infty} \left(\operatorname{P} \xi^*\right) \eta x \text{d} x \\[6pt] \Leftrightarrow\;& \langle \operatorname{P} \xi, \eta \rangle = \int_{0}^{\infty} \partial_x \left(x \partial_x \xi^*\right) \eta \text{d} x - \bar{k}^2 \int_{0}^{\infty} \xi^* \eta x \text{d} x \end{align}

Integration by parts gives:

\begin{align} \phantom{\Rightarrow}\;& \langle \operatorname{P} \xi, \eta \rangle = \left[\left(x \partial_x \xi^*\right) \eta\right]_0^\infty - \int_{0}^{\infty} \left(x \partial_x \xi^*\right) \partial_x \eta \text{d} x - \bar{k}^2 \int_{0}^{\infty} \xi^* \eta x \text{d} x \\[6pt] \Leftrightarrow\;& \langle \operatorname{P} \xi, \eta \rangle = - \int_{0}^{\infty} x \partial_x \xi^* \partial_x \eta \text{d} x - \bar{k}^2 \int_{0}^{\infty} \xi^* \eta x \text{d} x \end{align}

Here, the boundary term vanishes because \(\xi,\eta\in\mathcal{F}\) vanish at infinity and their derivatives vanish at the origin. By appropriately inserting a vanishing boundary term, the following equivalent expression is obtained:

\begin{align} \phantom{\Rightarrow}\;& \langle \operatorname{P} \xi, \eta \rangle = \left[\xi^* \left(x \partial_x \eta\right)\right]_0^\infty - \int_{0}^{\infty} \partial_x \xi^* \left(x \partial_x \eta\right) \text{d} x - \bar{k}^2 \int_{0}^{\infty} \xi^* \eta x \text{d} x \\[6pt] \Leftrightarrow\;& \langle \operatorname{P} \xi, \eta \rangle = \int_{0}^{\infty} \xi^* \partial_x \left(x \partial_x \eta\right) \text{d} x - \bar{k}^2 \int_{0}^{\infty} \xi^* \eta x \text{d} x \\[6pt] \Leftrightarrow\;& \langle \operatorname{P} \xi, \eta \rangle = \int_{0}^{\infty} \xi^* \left(\operatorname{P} \eta\right) x \text{d} x \\[6pt] \Leftrightarrow\;& \langle \operatorname{P} \xi, \eta \rangle = \langle \xi, \operatorname{P} \eta \rangle \end{align}

This shows that the operator \(\operatorname{P}\) is Hermitian with respect to the given scalar product. In particular, substituting \(\xi=\psi[u]\) and \(\eta=\psi[v]\) yields:

\begin{align} \langle \operatorname{P} \psi[u], \psi[v] \rangle = \langle \psi[u], \operatorname{P} \psi[v] \rangle \end{align}

Now, the dimensionless Poisson equation \(\operatorname{P}\psi[u] = 8 q u\) is also used. This results in:

\begin{align} \phantom{\Rightarrow}\;& \langle 8 q u, \psi[v] \rangle = \langle \psi[u], 8 q v \rangle \\[6pt] \Leftrightarrow\;& \langle q u, \psi[v] \rangle = \langle \psi[u], q v \rangle \end{align}

With this intermediate result, it is now possible to prove the Hermitian symmetry of \(M(v,u)\):

\begin{align} \phantom{\Rightarrow}\;& M^*(v,u) = \langle q u, w[v] \rangle \\[6pt] \Leftrightarrow\;& M^*(v,u) = \langle q u, v \rangle + \langle q u, \psi[v] \rangle \\[6pt] \Leftrightarrow\;& M^*(v,u) = \langle u, q v \rangle + \langle \psi[u], q v \rangle \\[6pt] \Leftrightarrow\;& M^*(v,u) = \langle w[u], q v \rangle \\[6pt] \Leftrightarrow\;& M^*(v,u) = M(u,v) \end{align}

Since \(A(v,u)\) and \(M(v,u)\) are Hermitian sesquilinear forms and the weak eigenvalue equation holds for all admissible test functions \(v\in\mathcal{F}\), the eigenvalues are independent of the choice of test function and depend only on the corresponding eigenfunction. The special case \(v=u\) can therefore be chosen without restriction. This choice is permissible because the eigenfunctions satisfy the boundary conditions and thus lie in the space of admissible test functions by construction. Therefore, the following holds:

\begin{align} A(u,u) = \bar{\sigma}^2 M(u,u) \end{align}

Since Hermitian sesquilinear forms take real values whenever both arguments coincide, in particular \(A(u,u)\in\mathbb{R}\) and \(M(u,u)\in\mathbb{R}\) hold. Provided that \(M(u,u)\neq 0\), i.e. the eigenfunction is not the trivial zero solution, this yields:

\begin{align} \bar{\sigma}^2 = \frac{A(u,u)}{M(u,u)} \in \mathbb{R} \end{align}

This shows that the eigenvalues can only assume real values, which must then be taken into account when solving the given equations numerically.

To solve the given problem numerically, the equations must be discretised. To do this, the interval \(\left[0, x_{\max}\right]\) is divided into \(N\) equidistant subintervals and the corresponding \(N+1\) grid points are defined as follows:

\begin{align} x_j = j\,\Delta x \, , \quad \text{with} \quad \Delta x = \frac{x_{\max}}{N} \, , \quad j\in\left\{0,\ldots,N\right\} \end{align}

The variables considered, evaluated at the respective grid points, are then written as follows:

\begin{align} u_j = u\left(x_j\right) \, , \quad \psi_j = \psi\left(x_j\right) \, , \quad q_j = q\left(x_j\right) \end{align}

To discretise the equations to be solved, the fact that both equations contain a differential operator of the following form is used:

\begin{align} \operatorname{D}_\mu := \frac{1}{x} \partial_x \left(x \mu(x) \partial_x\right) \end{align}

Here, in the case of the dimensionless continuity equation, \(\mu(x)=q(x)\), while in the case of the dimensionless Poisson equation, \(\mu(x)=1\).

Since the problem reduces to a purely radial one after the exponential ansatz, and the operators appear in divergence form in cylindrical coordinates, a finite-volume discretisation is a suitable choice. For a function \(f(x)\), the derivative at the \(j\)-th grid point can be written in flux form over the cell interfaces at \(x_{j\pm\frac12}\) as follows:

\begin{align} \left(\partial_x f\left(x\right)\right)_j = \frac{f_{j + \frac{1}{2}} - f_{j - \frac{1}{2}}}{\Delta x} \end{align}

For the operator \(\operatorname{D}_\mu\) applied to any function \(f(x)\), the discrete representation is given as follows:

\begin{align} \left(\operatorname{D}_\mu f\right)_j = \frac{1}{\Delta x x_j} \left(x_{j+\frac{1}{2}} \mu_{j+\frac{1}{2}} \left(\partial_x f\left(x\right)\right)_{j+\frac{1}{2}} - x_{j-\frac{1}{2}} \mu_{j-\frac{1}{2}} \left(\partial_x f\left(x\right)\right)_{j-\frac{1}{2}}\right) \end{align}

The values of \(x\) and \(\mu\) at the half-indices \(j\pm\frac12\) are obtained as mean values of the neighbouring grid points. For equidistant grid points, the following therefore applies:

\begin{align} x_{j\pm\frac12} = \frac{x_j + x_{j\pm 1}}{2} \quad \text{and} \quad \mu_{j\pm\frac12} = \frac{\mu_j + \mu_{j\pm 1}}{2} \end{align}

The derivatives for the indices \(j\pm\frac12\) are defined as follows:

\begin{align} \left(\partial_x f\left(x\right)\right)_{j + \frac{1}{2}} = \frac{f_{j+1}-f_j}{\Delta x} \quad \text{and} \quad \left(\partial_x f\right)_{j-\frac12} = \frac{f_{j}-f_{j-1}}{\Delta x} \end{align}

This allows the discretised representation of the operator \(\operatorname{D}_\mu\), applied to any function \(f(x)\), to be further simplified. This results in the tridiagonal form:

\begin{align} \left(\operatorname{D}_\mu f\right)_j = \frac{x_{j-\frac12} \mu_{j-\frac12}}{\Delta x^2 x_j} f_{j-1} - \left(\frac{x_{j-\frac12} \mu_{j-\frac12}}{\Delta x^2 x_j} + \frac{x_{j+\frac12} \mu_{j+\frac12}}{\Delta x^2 x_j}\right) f_j + \frac{x_{j+\frac12} \mu_{j+\frac12}}{\Delta x^2 x_j} f_{j+1} \end{align}

This can now be written even more compactly by introducing the following coefficients:

\begin{align} c_{j,-}^{(\mu)} := \frac{x_{j-\frac12} \mu_{j-\frac12}}{\Delta x^2 x_j} \, , \quad c_{j,+}^{(\mu)} := \frac{x_{j+\frac12} \mu_{j+\frac12}}{\Delta x^2 x_j} \, , \quad c_{j,0}^{(\mu)} := - \left(c_{j,-}^{(\mu)} +c_{j,+}^{(\mu)}\right) \end{align}

Thus, it follows that:

\begin{align} \left(\operatorname{D}_\mu f\right)_j = c_{j,-}^{(\mu)} f_{j-1} + c_{j,0}^{(\mu)} f_j + c_{j,+}^{(\mu)} f_{j+1} \end{align}
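As a plausibility check of this stencil, one can verify a case known in closed form: for \(\mu = 1\) and \(f(x) = x\), the operator gives \(\operatorname{D}_1 x = \frac{1}{x}\partial_x\left(x\right) = \frac{1}{x}\), which the discrete coefficients reproduce exactly. A minimal sketch (variable names are illustrative):

```python
import numpy as np

# Assemble the finite-volume coefficients c_{j,-}, c_{j,0}, c_{j,+} defined
# above for the interior grid points j = 1, ..., N-1 (boundary rows are
# handled separately), and test them on f(x) = x with mu = 1.

def d_mu_coefficients(x, mu, dx):
    """Return (c_minus, c_zero, c_plus) at the interior points x[1:-1]."""
    x_half_m = 0.5 * (x[1:-1] + x[:-2])     # x_{j-1/2}
    x_half_p = 0.5 * (x[1:-1] + x[2:])      # x_{j+1/2}
    mu_half_m = 0.5 * (mu[1:-1] + mu[:-2])  # mu_{j-1/2}
    mu_half_p = 0.5 * (mu[1:-1] + mu[2:])   # mu_{j+1/2}
    c_minus = x_half_m * mu_half_m / (dx**2 * x[1:-1])
    c_plus = x_half_p * mu_half_p / (dx**2 * x[1:-1])
    return c_minus, -(c_minus + c_plus), c_plus

N, x_max = 8, 2.0
dx = x_max / N
x = np.arange(N + 1) * dx
cm, c0, cp = d_mu_coefficients(x, np.ones_like(x), dx)

# applied to f(x) = x, the discrete operator reproduces 1/x exactly
f = x.copy()
Df = cm * f[:-2] + c0 * f[1:-1] + cp * f[2:]
print(np.allclose(Df, 1.0 / x[1:-1]))   # True
```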

For the internal grid points \(j \in \{1,\ldots,N-1\}\), the discretised equations are then derived directly from the continuous equations. The following applies to the discretised continuity equation:

\begin{align} \left(\operatorname{D}_q \left(u + \psi\right)\right)_j - q_j \bar{k}^2 \left(u_j + \psi_j\right) = q_j \bar{\sigma}^2 u_j \end{align}

For the Poisson equation, the discretised form is given by:

\begin{align} \left(\operatorname{D}_1 \psi\right)_j - \bar{k}^2\psi_j = 8 q_j u_j \end{align}

The external grid points are determined by the boundary conditions. Discretised, these are:

\begin{align} & u_N = 0 = \psi_N \\[6pt] & \left(\partial_x u\right)_0 = 0 = \left(\partial_x \psi\right)_0 \end{align}

The derivative at the grid point with index \(j=0\) is by definition given by:

\begin{align} \phantom{\Rightarrow}\;& \left(\partial_x u\right)_0 = \frac{u_{\frac12} - u_{-\frac12}}{\Delta x} \\[6pt] \Leftrightarrow\;& \left(\partial_x u\right)_0 = \frac{u_1 - u_{-1}}{2 \Delta x} \end{align}

At this point, a term \(u_{-1}\) appears. However, a grid point with index \(j=-1\) does not exist in the grid under consideration. The same problem arises analogously for the dimensionless potential perturbation. Therefore, the derivative at the boundary grid point \(j=0\) must be treated separately or replaced by an approximation.

This is shown below using the dimensionless density perturbation as an example; the same procedure applies to the dimensionless potential perturbation. For regular, axisymmetric solutions, the symmetry condition \(\partial_x u|_{x=0}=0\) holds at the origin. Accordingly, all terms of odd order vanish in the Taylor expansion around \(x=0\). The Taylor series expansion of the density perturbation around \(x=0\) is therefore:

\begin{align} u(x) = u(0) + \frac{1}{2} \partial_x^2 u(x)\big|_{x=0}\,x^2 + \mathcal{O}(x^4) \end{align}

For \(x=x_j\), this results in:

\begin{align} \phantom{\Rightarrow}\;& u(x_j) = u(0) + \frac{1}{2} \partial_x^2 u(x)\big|_{x=0} x_j^2 + \mathcal{O}(x_j^4) \\[6pt] \Rightarrow\;& u_j = u_0 + \frac{1}{2} (\partial_x^2 u)_0 x_j^2 + \mathcal{O}(x_j^4) \end{align}

In particular, for \(j=1\) with \(x_1=\Delta x\), the following applies:

\begin{align} u_1 = u_0 + \frac{1}{2} (\partial_x^2 u)_0 \Delta x^2 + \mathcal{O}(\Delta x^4) \end{align}

Rearranging yields the derivative at the index \(j = \frac12\):

\begin{align} (\partial_x u)_{1/2} = \frac{u_1-u_0}{\Delta x} = \frac{1}{2} (\partial_x^2 u)_0 \Delta x + \mathcal{O}(\Delta x^3) \end{align}

This means that \((\partial_x u)_{1/2}=\mathcal{O}(\Delta x)\), i.e. the derivative vanishes in the limit \(\Delta x\to 0\), consistent with the regularity condition \(\partial_x u|_{x=0}=0\). Furthermore, \((\partial_x u)_{1/2}\) converges to \((\partial_x u)_0\) for \(\Delta x\to 0\).

Thus, the discretised boundary condition at the origin can be approximated as follows:

\begin{align} u_1 - u_0 = 0 = \psi_1 - \psi_0 \end{align}
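The \(\mathcal{O}(\Delta x)\) behaviour of the one-sided difference at the origin can be illustrated with a short numerical experiment; the even test profile \(u(x) = e^{-x^2}\) is an arbitrary choice.

```python
import numpy as np

# Illustration of the estimate above: for a smooth, even profile the
# one-sided difference (u_1 - u_0)/dx at the origin shrinks linearly
# with dx, consistent with (d_x u)_{1/2} = O(dx).

u = lambda x: np.exp(-x**2)      # even test profile with u'(0) = 0

slopes = {dx: (u(dx) - u(0.0)) / dx for dx in (0.1, 0.05, 0.025)}
for dx, s in slopes.items():
    print(dx, s)                 # magnitude roughly halves as dx halves
```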

This allows the problem to be formulated as a coupled system of equations with originally \(2N+2\) unknowns. However, since the boundary conditions at the outer edge already fix \(u_N=0\) and \(\psi_N=0\), these two unknowns no longer need to be solved for. The resulting system is thus reduced to \(2N\) unknowns, with two equations for the boundary conditions at the origin and \(2N-2\) equations for the inner grid points \(j\in\{1,\ldots,N-1\}\).

This system can be written as a generalised eigenvalue problem in matrix form:

\begin{align} \mathbf{A}(\bar{k})\boldsymbol{y} = \bar{\sigma}^2\mathbf{B}\boldsymbol{y} \end{align}

Here, \(\mathbf{A},\mathbf{B}\in\mathbb{R}^{2N\times 2N}\). The matrix \(\mathbf{A}\) contains all terms that are independent of \(\bar{\sigma}^2\), while \(\mathbf{B}\) collects all contributions that are multiplied by \(\bar{\sigma}^2\). The vector \(\boldsymbol{y}\) contains all unknowns and is given by:

\begin{align} \boldsymbol{y} = \left(u_0, u_1, \ldots, u_{N-1}, \psi_0, \psi_1, \ldots, \psi_{N-1}\right)^\mathsf{T} \in \mathbb{R}^{2N} \end{align}

The equations are arranged so that the first two rows describe the boundary conditions at the origin, while all subsequent rows contain the equations at the internal grid points. Accordingly, in the first two rows of the matrix \(\mathbf{A}\), all entries are zero except for those that access \(u_0\) and \(u_1\) or \(\psi_0\) and \(\psi_1\), respectively. These matrix elements encode the boundary conditions \(u_1-u_0=0\) and \(\psi_1-\psi_0=0\) and are given by:

\begin{align} & A_{0,0} = -1 \quad \text{and} \quad A_{0,1} = 1 \\[6pt] & A_{1,N} = -1 \quad \text{and} \quad A_{1,N+1} = 1 \end{align}

The rows \(r=j+1\) with \(j\in\{1,\ldots,N-1\}\) are chosen such that they represent the discretised dimensionless continuity equation. The non-vanishing entries in these rows that access the \(u\)-block are given by:

\begin{align} A_{j+1,j-1} = c_{j,-}^{(q)} \, , \quad A_{j+1,j} = c_{j,0}^{(q)} - q_j \bar{k}^2 \, , \quad A_{j+1,j+1} = c_{j,+}^{(q)} \end{align}

This form formally applies to all \(j\in\{1,\ldots,N-1\}\). However, for the boundary case \(j=N-1\), the index \(j+1=N\) would occur, which would refer to \(u_N\). Since \(u_N\), like \(\psi_N\), has already been eliminated by the boundary condition and is not contained in \(\boldsymbol{y}\), this contribution is not listed as an unknown and must therefore be treated separately in the last inner row.

Analogously, the non-vanishing entries that access the \(\psi\) block in the continuity equation result in:

\begin{align} A_{j+1,N+j-1} = c_{j,-}^{(q)} \, , \quad A_{j+1,N+j} = c_{j,0}^{(q)} - q_j \bar{k}^2 \, , \quad A_{j+1,N+j+1} = c_{j,+}^{(q)} \end{align}

Here, too, the boundary case \(j=N-1\) must be treated separately, since \(N+j+1=2N\) would refer to \(\psi_N\), which has already been eliminated by the boundary condition.

The rows \(r=N+j\) with \(j\in\{1,\ldots,N-1\}\) describe the discretised Poisson equation. The non-vanishing entries that access the \(u\)-block are as follows:

\begin{align} A_{N+j,j} = -8\,q_j \end{align}

The non-vanishing entries that access the \(\psi\)-block are:

\begin{align} A_{N+j,N+j-1} = c_{j,-}^{(1)} \, , \quad A_{N+j,N+j} = c_{j,0}^{(1)} - \bar{k}^2 \, , \quad A_{N+j,N+j+1} = c_{j,+}^{(1)} \end{align}

In the Poisson equation, the boundary case \(j=N-1\) must also be treated analogously.

In the matrix \(\mathbf{B}\), the entries in the first two rows are zero. This is because \(\mathbf{B}\) contains only terms that are multiplied by \(\bar{\sigma}^2\), and the boundary conditions at the origin do not contain any contributions with \(\bar{\sigma}^2\).

For the rows \(r=j+1\) with \(j\in\{1,\ldots,N-1\}\), which represent the dimensionless continuity equation, the only non-vanishing entries are:

\begin{align} B_{j+1,j} = q_j \end{align}

Since the dimensionless Poisson equation does not contain any terms that depend on \(\bar{\sigma}^2\), all entries in the rows \(r=N+j\) that describe the Poisson equation are zero.
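The assembly described above can be sketched as follows. This is a compact illustration under the stated conventions, not the implementation used in the thesis; variable and function names are chosen freely, and the contributions of the eliminated unknowns \(u_N\) and \(\psi_N\) are simply dropped, since they multiply zero.

```python
import numpy as np

# Sketch of the matrix assembly: y = (u_0..u_{N-1}, psi_0..psi_{N-1}).
# Rows 0 and 1 encode the boundary conditions at the origin, rows 2..N the
# continuity equation, rows N+1..2N-1 the Poisson equation.

def assemble(N, x_max, k_bar):
    dx = x_max / N
    x = np.arange(N + 1) * dx
    q = 1.0 / (1.0 + x**2)**2        # assumed equilibrium profile q(x)

    def coeffs(mu):
        xm, xp = 0.5 * (x[1:-1] + x[:-2]), 0.5 * (x[1:-1] + x[2:])
        mm, mp = 0.5 * (mu[1:-1] + mu[:-2]), 0.5 * (mu[1:-1] + mu[2:])
        cm = xm * mm / (dx**2 * x[1:-1])
        cp = xp * mp / (dx**2 * x[1:-1])
        return cm, -(cm + cp), cp

    A = np.zeros((2 * N, 2 * N))
    B = np.zeros((2 * N, 2 * N))

    # boundary conditions at the origin: u_1 - u_0 = 0 and psi_1 - psi_0 = 0
    A[0, 0], A[0, 1] = -1.0, 1.0
    A[1, N], A[1, N + 1] = -1.0, 1.0

    cqm, cq0, cqp = coeffs(q)
    c1m, c10, c1p = coeffs(np.ones_like(x))
    for j in range(1, N):
        i = j - 1                    # index into the interior coefficient arrays
        # continuity equation, row j+1 (contributions of u_N, psi_N are dropped)
        A[j + 1, j - 1] += cqm[i]
        A[j + 1, j] += cq0[i] - q[j] * k_bar**2
        if j + 1 < N:
            A[j + 1, j + 1] += cqp[i]
        A[j + 1, N + j - 1] += cqm[i]
        A[j + 1, N + j] += cq0[i] - q[j] * k_bar**2
        if j + 1 < N:
            A[j + 1, N + j + 1] += cqp[i]
        B[j + 1, j] = q[j]
        # Poisson equation, row N+j
        A[N + j, j] = -8.0 * q[j]
        A[N + j, N + j - 1] += c1m[i]
        A[N + j, N + j] += c10[i] - k_bar**2
        if j + 1 < N:
            A[N + j, N + j + 1] += c1p[i]
    return A, B

A, B = assemble(N=50, x_max=25.0, k_bar=0.77)
print(A.shape, np.count_nonzero(B[:2]))   # (100, 100) 0
```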

This generalised eigenvalue problem can now be solved numerically. Specifically, the Python library SciPy was used for this purpose, which provides routines for solving generalised eigenvalue problems. Of particular interest here are the eigenvalues, as they determine the linear temporal evolution of the perturbations.
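The SciPy call for such a generalised eigenvalue problem, together with the filtering of rounding-level imaginary parts discussed above, can be sketched on a small synthetic example (the matrices here are stand-ins, not the assembled stability matrices):

```python
import numpy as np
from scipy.linalg import eig

# Sketch of solving A y = sigma_bar^2 B y with SciPy and discarding
# rounding-level imaginary parts. Synthetic 6x6 matrices for illustration:
# a symmetric A and a positive-definite B guarantee real eigenvalues.

rng = np.random.default_rng(0)
S = rng.standard_normal((6, 6))
A = S + S.T
B = np.eye(6)
B[0, 0] = 2.0

vals, vecs = eig(A, b=B)

tol = 1e-10                     # tolerance for numerical artefacts
real_vals = np.sort(vals[np.abs(vals.imag) < tol].real)
print(real_vals.size)           # 6: all eigenvalues are numerically real
```

For a symmetric-definite pair as in this toy example, `scipy.linalg.eigh` would return real eigenvalues directly; the general routine with filtering mirrors the situation described above, where small complex parts are discretisation artefacts.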

As described above, a perturbation can be represented as a superposition of Fourier modes along the \(z\)-axis. For a fixed dimensionless wave number, the linear stability analysis is thus reduced to an eigenvalue problem in the dimensionless radial coordinate. This eigenvalue problem generally has several eigenfunctions \(u_n(x)\) or \(\psi_n(x)\) and associated eigenvalues \(\bar{\sigma}_n^2(\bar{k})\). Due to linearity, a general perturbation with a fixed wave number can be written as a linear combination of these eigenfunctions.

Due to the exponential time evolution of the individual modes, the mode with the highest growth rate always dominates for sufficiently long times, provided that unstable modes exist. For a fixed dimensionless wave number, this results in:

\begin{align} \bar{\sigma}_{\max}(\bar{k}) = \max_n \bar{\sigma}_n(\bar{k}) \end{align}

The perturbation along the \(z\)-axis is a superposition of Fourier modes with different wave numbers. Accordingly, the asymptotic behaviour over time, analogous to the previous argument, is determined by the mode with the fastest overall growth, i.e. by the maximum over all dimensionless wave numbers:

\begin{align} \phantom{\Rightarrow}\;& \bar{\sigma}_{\max} = \max_{\bar{k}} \bar{\sigma}_{\max}(\bar{k}) \\[6pt] \Leftrightarrow\;& \bar{\sigma}_{\max} = \max_{\bar{k}} \max_n \bar{\sigma}_n(\bar{k}) \end{align}

The dimensionless wave number \(\bar{k}_{\bar{\sigma}_{\max}}\) of the fastest growing mode is then given by:

\begin{align} \phantom{\Rightarrow}\;& \bar{k}_{\bar{\sigma}_{\max}} = \operatorname*{arg\,max}_{\bar{k}} \bar{\sigma}_{\max}(\bar{k}) \\[6pt] \Leftrightarrow\;& \bar{k}_{\bar{\sigma}_{\max}} = \operatorname*{arg\,max}_{\bar{k}} \max_n \bar{\sigma}_n(\bar{k}) \end{align}
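In practice, this outer maximisation can be carried out by scanning a grid of wave numbers. The sketch below assumes a routine `growth_rates(k_bar)` returning the real eigenvalues \(\bar{\sigma}_n^2\) for one \(\bar{k}\); here it is a hypothetical stand-in with a single interior maximum, so the printed value illustrates the procedure, not the physical result.

```python
import numpy as np

# Sketch of the maximisation over modes and wave numbers. `growth_rates`
# is a hypothetical stand-in for the generalised eigenvalue solver; it
# returns sigma_bar_n^2 for one k_bar and peaks at k_bar = 0.6 by design.

def growth_rates(k_bar):
    return np.array([k_bar**2 * (1.5 - k_bar)**3, -k_bar**2])

k_grid = np.linspace(0.05, 1.5, 300)
sigma2_max = np.array([growth_rates(k).max() for k in k_grid])

k_best = k_grid[np.argmax(sigma2_max)]
print(round(k_best, 2))   # 0.6 for this stand-in dispersion relation
```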

The numerical solution of the given generalised eigenvalue problem yields the dimensionless wave number for the fastest growing mode:

\begin{align} \bar{k}_{\bar{\sigma}_{\max}} = 0.76949 \end{align}

Here, a sufficiently large computational domain \([0,\,x_{\max}]\) with \(x_{\max}=25\) and a sufficient number of grid points \(N=500\) were used. Neither a further enlargement of the computational domain nor an increase in the number of grid points leads to a significant change in the dimensionless wave number of the fastest growing mode, so that the result can be considered converged with respect to these two parameters.

Since the fastest growing Fourier mode dominates the perturbation pattern for sufficiently long times, periodic overdensities form along the \(z\)-axis. The corresponding wavelength is given by:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\bar{\sigma}_{\max}} = \frac{2\pi}{k_{\bar{\sigma}_{\max}}} \\[6pt] \Leftrightarrow\;& \lambda_{\bar{\sigma}_{\max}} = \frac{2\pi r_0}{\bar{k}_{\bar{\sigma}_{\max}}} \\[6pt] \Leftrightarrow\;& \lambda_{\bar{\sigma}_{\max}} = \frac{2\pi}{0.76949} r_0 \end{align}

As soon as the amplitudes of the overdensities become so large that the system enters the nonlinear regime, self-gravity leads to increased contraction in the overdense regions. In this regime, the gas cylinder fragments. The fragments tend to form at the positions of the overdensities already imprinted by the dominant unstable mode in the linear regime. The characteristic fragment spacing \(\lambda_{\text{frag}}\) therefore corresponds closely to the wavelength of the unstable mode with the highest growth rate:

\begin{align} \phantom{\Rightarrow}\;& \lambda_{\mathrm{frag}} = \lambda_{\bar{\sigma}_{\max}} \\[6pt] \Leftrightarrow\;& \lambda_{\mathrm{frag}} = \frac{2\pi}{0.76949} r_0 \end{align}

The fragment spacing thus scales linearly with the characteristic radius \(r_0\), which in this model directly sets the characteristic length scale on which overdensities develop along the \(z\)-axis and from which fragments emerge in the subsequent nonlinear evolution.
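As a quick plausibility check, the fragment spacing can be evaluated numerically. The characteristic radius \(r_0 = 0.033\,\text{pc}\) used here is the value adopted later for the hydrodynamic reference setup:

```python
import math

k_bar = 0.76949   # dimensionless wave number of the fastest growing mode
r0 = 0.033        # characteristic radius in pc (value from the setup below)

# lambda_frag = 2*pi*r0 / k_bar, in pc
lambda_frag = 2.0 * math.pi / k_bar * r0   # approx. 0.27 pc
```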

Work Plan and Approach

The following section presents a rough work plan for this master's thesis, which aims to numerically reproduce the solutions developed by Fiege and Pudritz. The initial goal is to simulate the purely hydrostatic reference case, the infinitely long, isothermal, self-gravitating cylinder without magnetic fields, in PLUTO/belt. Due to the cylindrical symmetry, the first tests can be carried out in a one-dimensional setup in cylindrical coordinates.

Afterwards, the numerical setup will be gradually expanded to two and then three dimensions. Once the three-dimensional setup in cylindrical coordinates is stable and works reliably, the plan is to switch to a three-dimensional setup in Cartesian coordinates. Cartesian coordinates are needed for the considered problem because the instabilities to be investigated later are expected to break the cylindrical symmetry.

Once the three-dimensional Cartesian setup works reliably, it will be extended to magnetohydrodynamic calculations. The aim here is to implement the equilibrium solutions of Fiege and Pudritz (2000a) in PLUTO/belt. Finally, the equilibrium configuration will be perturbed, analogous to Fiege and Pudritz (2000b), in order to investigate the temporal development of the perturbations and compare them with the predictions of their linear stability analysis.

1D Hydrodynamic Reference Test: Setup and Convergence

As explained above, the study starts with purely hydrodynamic, one-dimensional, isothermal simulations including self-gravity. The simulation setup is initialised according to an infinitely long, isothermal, self-gravitating cylinder with a core. Specifically, the density profile is set according to the previously derived density distribution, the velocity field is initialised to zero at the start, and the core mass is selected according to the relation given above. The temperature \(T\) and the characteristic radius \(r_0\) are selected based on typical values for observed filaments (cf. Seifried and Walch, 2015):

\begin{align} T = 10 \, \text{K} \quad \text{and} \quad r_0 = 0.033 \, \text{pc} \end{align}

For the initial simulations, a grid extending from \(R_{\min} = 10^{-4}\,\text{pc}\) to \(R_{\max} = 1\,\text{pc}\) is used. This ensures that the numerical domain completely encompasses the filament and that the outer region is sufficiently large to minimise boundary artefacts.

The grid is designed in such a way that the cells become exponentially larger towards the outside, according to:

\begin{align} \Delta r_i = r_{i-\frac{1}{2}} \left(10^{\Delta \xi} - 1\right) \, , \quad \text{with} \quad \Delta \xi = \frac{1}{N_r} \log_{10}\left(\frac{R_{\max}}{R_{\min}}\right) \end{align}

Here, \(\Delta r_i\) is the length of the \(i\)-th cell, \( r_{i-\frac{1}{2}}\) is the radius of the inner (left) wall of the \(i\)-th cell, and \(N_r\) is the number of cells in the domain. This exponential growth of the cell sizes ensures high resolution in the area of interest, i.e. in and around the filament, especially near the core, while a coarser resolution suffices further out, where there are typically only very low densities and the filament transitions into an almost vacuum state. In this way, a good balance is achieved between accuracy in the relevant region and numerical efficiency.
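With these definitions, the logarithmic grid can be sketched in a few lines: the cell walls follow \(r_{i-\frac12} = R_{\min}\,10^{\,i\,\Delta\xi}\), so each cell is a fixed factor \(10^{\Delta\xi}\) larger than its inner neighbour:

```python
import numpy as np

R_min, R_max, N_r = 1.0e-4, 1.0, 1000   # domain in pc, number of cells

# logarithmic step: Delta xi = log10(R_max/R_min) / N_r
dxi = np.log10(R_max / R_min) / N_r

# cell walls r_{i-1/2} = R_min * 10^(i * Delta xi), i = 0 .. N_r
edges = R_min * 10.0 ** (dxi * np.arange(N_r + 1))

# cell widths Delta r_i = r_{i-1/2} * (10^(Delta xi) - 1)
dr = edges[:-1] * (10.0 ** dxi - 1.0)
```

By construction, `edges[1:] - edges[:-1]` reproduces `dr` exactly, confirming that the width formula and the wall positions describe the same grid.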

For the hydrodynamic variables, reflective boundary conditions are used at the inner edge of the domain. This is because the inner edge represents an impenetrable core and the reflective boundary condition prevents mass flow across the boundary surface. For the outer boundary, outflow–no-inflow boundary conditions are selected in order to model the domain as an open system. This allows matter, perturbations and waves to leave the domain, while preventing inflow from outside.

For the gravitational potential of the gas in the domain, a zero-gradient boundary condition is used at the inner edge. The reason for this is that the gravitational attraction of the core is considered separately in PLUTO/belt. Since gravitational potentials can be superimposed due to the linearity of Poisson's equation, the potential generated by the gas can then be linearly added to the potential generated by the core. At the outer boundary, the potential is specified by the analytical potential solution for an infinitely long, isothermal, self-gravitating cylinder without a core. This boundary condition is permissible in this setup because the core serves only as a numerical aid to emulate the case without a core.

The first simulation with this setup was performed with \(N_r = 1000\) cells in the radial direction. As a result, it can be seen that the maximum density in the system, which corresponds to the density in the innermost cell at the core due to the selected density profile, initially undergoes a damped oscillation. The maximum density then stabilises at a value of approximately \(\rho_{\max} = 3.36 \cdot 10^{-19}\,\frac{\text{g}}{\text{cm}^3}\). This corresponds to an upward deviation of about \(3\,\%\) from the initial and thus analytically expected value. This behaviour is illustrated in the following figure:

Time evolution of the maximum density \(\rho_{\max}\) for \(N_r = 1000\).

As a consistency check, both the radial density distribution \(\rho(r)\) and the gravitational potential \(\phi_{\text{ext}}(r)\) of the converged simulation are now considered and compared with the analytically expected solution. Both quantities are shown in the following figure, with the density distribution on the left and the corresponding gravitational potential on the right:

Simulation for \(N_r = 1000\): Radial density profile \(\rho(r)\) of the converged simulation compared to the analytical reference solution (left) and gravitational potential \(\phi_{\mathrm{ext}}(r)\) compared to the analytically expected potential solution (right).

Due to conservation of mass, the increase in maximum density causes the density distribution to contract slightly. This is evident in the fact that the density peak increases while the characteristic radius \(r_0\) decreases. As a result, the profile becomes narrower overall and more concentrated towards the centre. The gravitational potential agrees very well with the analytically expected solution over the entire radial range. Minor differences can be explained by the fact that the gravitational potential is only uniquely determined up to an additive constant. A possible offset therefore corresponds merely to a choice of zero level and is physically uncritical. Overall, the comparison confirms that the numerically calculated potential resulting from the gas is consistent with the analytical reference solution.

It can be seen that the contraction of the density distribution is significantly weaker at higher resolutions. This can be seen in the following figure, which shows the density in the innermost cell, i.e. the first cell at the core, as a function of the number of cells \(N_r\) used:

Density in the innermost cell as a function of radial resolution \(N_r\). As the number of cells increases, \(\rho\) converges towards the theoretically expected value.

Here, it can be seen that the density in the innermost cell converges towards the theoretically expected value as the number of cells increases. For \(N_r = 8000\), the maximum density deviates upwards by only about \(0.05\,\%\). In addition, the temporal evolution of the maximum density shows a rather monotonic increase with a single overshoot, followed by relaxation to the steady-state value without further significant oscillations. This behaviour is illustrated in the following figure. The left plot shows the density in the innermost cell as a function of time, and the middle plot shows the radial density distribution of the converged simulation as well as the theoretically expected density distribution used as the initial condition. For the sake of completeness, the right plot also shows the gravitational potential of the converged simulation together with the analytically expected potential solution. However, there is no qualitative or quantitative change in the potential compared to the case with \(N_r = 1000\) cells.

Simulation for \(N_r = 8000\): Time evolution of the maximum density \(\rho_{\max}(t)\) in the innermost cell (left), radial density profile \(\rho(r)\) of the converged simulation compared to the analytical reference solution (centre) and gravitational potential \(\phi_{\mathrm{ext}}(r)\) compared to the analytically expected potential solution (right).

Free-fall time validation

Another consistency check is to verify whether the setup can reproduce the free-fall time \(t_{\mathrm{ff}}\) in the two limiting cases \(r_i \rightarrow R_{\min}\) and \(r_i \rightarrow R_{\max}\). The analytical formula derived above yields the following expected values for these two limiting cases:

\begin{align} t_{\text{ff}}\left(r_i \rightarrow R_{\min}\right) = 1.07 \cdot 10^5\,\text{yr} \quad \text{and} \quad t_{\text{ff}}\left(r_i \rightarrow R_{\max}\right) = 3.26 \cdot 10^6\,\text{yr} \end{align}

For this, the pressure must be switched off so that the dynamics of the system are governed solely by its own gravity. This is approximated by the limiting case \(T \to 0\), since the isothermal equation of state then gives \(P \to 0\). In the following simulations, a very low temperature of \(T = 10^{-3}\,\text{K}\) is used for this purpose, so that pressure forces are negligible compared to gravity. The total mass in the domain remains unchanged from the previous study.

In the following simulations, a grid with \(N_r = 8000\) cells is used to keep numerical resolution effects as small as possible. The only change compared to the previous study concerns the boundary condition at the inner boundary for the hydrodynamic variables. Outflow-no-inflow boundary conditions are used there instead of reflective ones. This is necessary because in this test, the inner boundary is supposed to act as a sink into which gas can flow, and not as an impenetrable wall as before. Inflow into the domain is prevented because such an inflow does not exist physically and otherwise mass could be artificially introduced into the domain due to numerical effects. All other boundary conditions for the hydrodynamic variables and for the gravitational potential remain unchanged.

In order to determine the free-fall time in the two limiting cases from the simulation, suitable measurement quantities are required that allow the corresponding points in time to be identified unambiguously. For this purpose, it is useful to consider the temporal evolution of the accretion rate across the inner edge \(\dot{M}_{R_{\min}}(t) = \frac{\mathrm{d} M_{R_{\min}}}{\mathrm{d} t}(t)\). This quantity describes the mass flow across the inner edge at time \(t\), i.e. the mass flowing out of the domain across the inner edge per unit time.

It turns out that the accretion rate is particularly sensitive to the onset of gravitational collapse. As long as the inward-falling matter has not yet reached the inner edge, the accretion rate remains negligible, except for numerical noise. As soon as the first mass elements reach the inner edge, it should increase sharply and show a pronounced first maximum. Towards the end of the infall, the accretion rate should first continue to decline and then drop abruptly by several orders of magnitude, leaving only numerically driven contributions.

Since very small, non-physical contributions can occur early on in the simulation, e.g. due to numerical diffusion or rounding errors, a detection threshold \(\dot{M}_{\text{thr}}\) is introduced. In the following, \(\dot{M}_{\text{thr}} = 10^{-11}\,M_\odot\,\text{yr}^{-1}\) is used to robustly separate the physically relevant part of the curve from numerical noise.

On this basis, the inner limiting case \(r_i \rightarrow R_{\min}\) is identified with the time of the first significant peak in the accretion rate, i.e. the first pronounced maximum after the rate has exceeded the detection threshold. This point in time marks the beginning of efficient accretion and thus corresponds approximately to the earliest possible arrival of matter at the inner edge. The outer limiting case \(r_i \rightarrow R_{\max}\), on the other hand, is determined by the last point in time before the abrupt drop in the accretion rate. For this purpose, the characteristic kink in the accretion rate is identified, at which the rate falls sharply after a phase of slower decay. The detection threshold again serves as an auxiliary criterion to restrict the analysis to the physically relevant part of the curve.
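A minimal sketch of this detection logic, applied to a synthetic accretion-rate curve; the curve shape and all numerical values are illustrative stand-ins for actual simulation output:

```python
import numpy as np

# Synthetic accretion-rate curve: noise floor, sharp rise at t = 1e5 yr,
# slow exponential decay, abrupt drop back to the noise floor at t = 3e6 yr.
t = np.linspace(0.0, 4.0e6, 4001)        # time in yr
mdot = np.full_like(t, 1e-14)            # numerical noise floor [Msun/yr]
infall = (t >= 1.0e5) & (t <= 3.0e6)
mdot[infall] = 1e-6 * np.exp(-(t[infall] - 1.0e5) / 1.0e6)

M_THR = 1e-11                            # detection threshold [Msun/yr]
idx = np.flatnonzero(mdot > M_THR)       # indices above the threshold

# Inner limiting case: first local maximum after crossing the threshold.
i0 = idx[0]
while i0 + 1 < len(mdot) and mdot[i0 + 1] > mdot[i0]:
    i0 += 1
t_ff_inner = t[i0]

# Outer limiting case: last time before the abrupt drop below the threshold.
t_ff_outer = t[idx[-1]]
```

Applied to real simulation output, the same two criteria (first post-threshold maximum, last above-threshold sample) yield the two free-fall times discussed below.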

The following figure shows the accretion rate on a logarithmic scale together with the detection threshold used and the two free-fall times determined from it for the two limiting cases.

Temporal evolution of the accretion rate across the inner edge \(\dot{M}_{R_{\min}}(t)\). The horizontal dashed line marks the detection threshold \(\dot{M}_{\mathrm{thr}} = 10^{-11}\,M_\odot\,\mathrm{yr}^{-1}\). The vertical lines mark the determined free-fall times of the two limiting cases: \(t_{\mathrm{ff}}(r_i \rightarrow R_{\min})\) as the time of the first significant peak and \(t_{\mathrm{ff}}(r_i \rightarrow R_{\max})\) as the last time immediately before the abrupt drop in the accretion rate.

The figure shows that the free-fall times determined from the simulation for the two limiting cases agree very well with the analytically expected times. For the limit \(r_i \rightarrow R_{\min}\), the simulation yields \(t_{\text{ff}}\left(r_i \rightarrow R_{\min}\right) = 1.1 \cdot 10^5\,\text{yr}\), a deviation of approximately \(2.5\,\%\) from the analytical solution. For the limit \(r_i \rightarrow R_{\max}\), the simulation yields \(t_{\text{ff}}\!\left(r_i \rightarrow R_{\max}\right) = 3.27 \cdot 10^6\,\text{yr}\), a deviation of about \(0.4\,\%\) from the analytical value.

In summary, the results of this section, combined with those of the previous study, show that the hydrodynamic setup, including self-gravity, is numerically stable with the boundary conditions used. With sufficiently high resolution, both the stationary profiles and the analytically expected free-fall times in the limiting cases considered are reliably reproduced. This provides a reliable reference that serves as the starting point for the subsequent extension to two dimensions.

2D Hydrodynamic Reference Test: Stability

Next, the one-dimensional setup is expanded to a two-dimensional one. Since the previous study showed that the one-dimensional setup reliably reproduces the analytical solution at sufficiently high resolution, the coarsest resolution at which the one-dimensional simulation still runs stably and produces no obviously unphysical results is deliberately chosen for the following investigation. This increases the numerical efficiency of the subsequent tests. The aim of this study is not to reproduce the analytical solution again with maximum accuracy, but to verify the extension to two dimensions. In particular, the aim is to check whether the setup is fundamentally stable in 2D and whether, and if so which, numerical or implementation-related problems occur during the transition from one to two dimensions.

It appears that the coarsest grid at which the one-dimensional simulation still runs stably and does not produce any obviously unphysical results has \(N_r = 250\) cells in the radial direction. The figure below shows the density in the innermost cell as a function of time on the left. The middle plot shows the radial density distribution of the converged simulation and the theoretically expected density distribution. The right-hand plot additionally shows the gravitational potential of the converged simulation together with the analytically expected potential solution.

2D reference run (\(N_r=250\)): Time evolution of the maximum density \(\rho_{\max}(t)\) (left), radial density profile \(\rho(r)\) with fit and analytical solution (centre) and gravitational potential \(\phi_{\mathrm{ext}}(r)\) with fit and analytical solution (right).

It can be seen that the simulation with \(N_r = 250\) cells in the radial direction converges faster in simulated physical time than the simulation with \(N_r = 1000\) cells. This faster convergence is, however, a numerical effect of the coarser resolution rather than a physical difference. In this case, too, an increase in central density and, due to mass conservation, a decrease in the characteristic radius can be observed. As expected, this contraction is more pronounced than in the case of \(N_r = 1000\). In addition, the coarse resolution shows a slight deviation in the gravitational potential. For small radii, the numerical potential lies slightly below the analytical solution. This is consistent with the contracted density distribution, as more mass is located at small radii and the potential in the centre is correspondingly deeper.

For the two-dimensional simulations, a domain is considered that is identical to the previous study in the radial direction and extends in the azimuthal direction from \(\varphi_{\min}=0\) to \(\varphi_{\max}=2\pi\). The domain is discretised by an \(N_r \times N_{\varphi}\) grid. In the radial direction, as in the previous study, a grid with \(N_r = 250\) cells is used, with the cells increasing exponentially in size towards the outside. The number of cells in the azimuthal direction \(N_{\varphi}\) is chosen so that the cells are approximately square. This achieves a comparable resolution in the radial and azimuthal directions, which reduces anisotropic numerical errors and avoids stronger dissipation in one direction. For locally square cells, the following must approximately hold:

\begin{align} r_i \Delta \varphi_i = \Delta r_i \end{align}

Here, \(r_i\) is the radius of the \(i\)-th cell centre, \(\Delta r_i\) is the radial width of the \(i\)-th cell, and \(\Delta\varphi_i\) is the azimuthal width of the \(i\)-th cell. A uniform cell size is used in the azimuthal direction. The following therefore applies:

\begin{align} \phantom{\Rightarrow}\;& \Delta \varphi_i = \Delta \varphi = \frac{\varphi_{\max} - \varphi_{\min}}{N_{\varphi}} \\[6pt] \Leftrightarrow\;& \Delta \varphi = \frac{2 \pi}{N_{\varphi}} \end{align}

Inserting yields:

\begin{align} \frac{2 \pi r_i}{N_{\varphi}} = r_{i-\frac{1}{2}} \left(10^{\Delta \xi} - 1\right) \end{align}

Since \(r_i\) describes the radius of the cell centre and \(r_{i-\frac12}\) describes the radius of the inner cell edge, the following applies:

\begin{align} r_i = r_{i-\frac12} + \frac{\Delta r_i}{2}\, . \end{align}

If the grid has a sufficiently fine resolution, i.e. \(\Delta r_i \ll r_i\), then the difference between \(r_i\) and \(r_{i-\frac12}\) is only a small fraction of \(r_i\). Therefore, as a first approximation, we can assume that the following applies:

\begin{align} r_{i-\frac12} \approx r_i \, . \end{align}

Thus follows:

\begin{align} \phantom{\Rightarrow}\;& \frac{2 \pi}{N_{\varphi}} = 10^{\Delta \xi} - 1 \\[6pt] \Leftrightarrow\;& N_{\varphi} = \frac{2 \pi}{10^{\Delta \xi} - 1} \end{align}

With \(N_r = 250\), \(R_{\min} = 10^{-4}\,\text{pc}\) and \(R_{\max} = 1\,\text{pc}\), the following applies:

\begin{align} N_{\varphi} = \frac{2 \pi}{10^{\frac{2}{125}} - 1} \approx 167 \end{align}

Therefore, a \(250 \times 167\) grid will be used in the following.
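The azimuthal cell count follows directly from the formula above and can be checked numerically:

```python
import math

N_r, R_min, R_max = 250, 1.0e-4, 1.0   # radial cells, domain in pc

# Delta xi = log10(R_max/R_min) / N_r = 4/250 = 2/125
dxi = math.log10(R_max / R_min) / N_r

# N_phi = 2*pi / (10^(Delta xi) - 1), rounded to the nearest integer
N_phi = round(2.0 * math.pi / (10.0 ** dxi - 1.0))
```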

For the hydrodynamic quantities and the gravitational potential, the same boundary conditions are used in the radial direction as in the previous study. In the azimuthal direction, periodic boundary conditions are chosen for both the hydrodynamic quantities and the gravitational potential. This is justified by the angular geometry, since \(\varphi=0\) and \(\varphi=2\pi\) describe the same physical location.

It can be seen that the simulation runs stably, without any new numerical or implementation-related problems occurring. The following figure shows the time evolution of the maximum density in the domain for the two-dimensional simulation. For comparison, the time evolution of the one-dimensional simulation with \(N_r = 250\) cells is also shown.

Time evolution of the maximum density \(\rho_{\max}(t)\) for the 2D reference run (solid line) compared to the corresponding 1D simulation with \(N_r=250\) (dashed line).

The figure shows that the temporal evolution of the maximum density in the two-dimensional simulation is practically identical to that in the one-dimensional simulation. In particular, it initially shows a damped oscillation and then approaches a steady state value. From about \(t \sim 3 \cdot 10^8\,\text{yr}\) onwards, the oscillation has completely damped out.

The following figure shows both the density and the gravitational potential of the converged simulation as a function of the Cartesian coordinates \(x = r \sin(\varphi)\) and \(y = r \cos(\varphi)\). The density plot is zoomed in more closely on the central region of the domain than the potential plot, since the density is essentially concentrated in the filament and no longer exhibits any relevant structure at larger radii.

Contour plots of the converged 2D simulation (\(N_r \times N_\varphi = 250 \times 167\)): density distribution \(\rho(x,y)\) (left) and gravitational potential \(\phi_{\mathrm{ext}}(x,y)\) (right).

The figure confirms that the two-dimensional simulation maintains the expected cylindrical symmetry. Both the density and the gravitational potential are purely radially dependent and show no systematic \(\varphi\)-dependence. In particular, the density peak remains centred and the isolines run circularly, indicating that the periodic boundary conditions are consistently implemented in the azimuthal direction and no artefacts arise at \(\varphi = 0\) or \(\varphi = 2\pi\).

This observation can also be confirmed by the following figure. On the left-hand side, the radial profiles are shown for five different azimuth angles. The density is shown at the top and the gravitational potential at the bottom. On the right-hand side, the corresponding profiles are shown for an azimuth angle close to \(\varphi \approx 2\pi\), together with a fit and the analytically expected solution, with the density distribution at the top and the gravitational potential at the bottom.

Radial profiles of the converged 2D simulation for different azimuthal angles: density \(\rho(r,\varphi)\) (top left) and gravitational potential \(\phi_{\mathrm{ext}}(r,\varphi)\) (bottom left). On the right, the profiles are shown exemplarily for an angle close to \(\varphi \approx 2\pi\), together with a fit and the analytical reference solution (top: \(\rho\), bottom: \(\phi_{\mathrm{ext}}\)).

The figure confirms that the converged two-dimensional simulation remains cylindrically symmetric. On the left, it can be seen that the radial profiles for the different azimuth angles completely overlap, both for density and for gravitational potential. This means that there are no systematic \(\varphi\)-dependencies, and in particular, even close to \(\varphi \approx 2\pi\), there are no signs of artefacts caused by the periodic boundary conditions.

As in the one-dimensional case with coarse resolution, the density profile continues to show a contraction. The potential agrees very well with the analytical expectation over large radii. The small deviations in the inner region are consistent with the changed mass distribution. It should also be noted that the potential is basically only determined up to an additive constant, so that a possible offset is physically uncritical. Overall, both the temporal evolution of the maximum density and the fit parameters agree with the corresponding one-dimensional reference run at the same radial resolution, so that the extension to two dimensions does not introduce any additional systematic effects.

Overall, this two-dimensional reference test shows that the hydrodynamic setup, including self-gravity and periodic boundary conditions in the \(\varphi\) direction, is stable and consistently reproduces the corresponding one-dimensional case, thus providing the basis for subsequent extension to higher dimensions.