This section provides a theoretical analysis of the delayed fractional-order epidemic model. First, we study the existence and uniqueness of solutions and verify their positivity and boundedness. Next, we analyze the existence of equilibrium points, including the disease-free and endemic equilibria, to reveal the long-term behavior of the system. Furthermore, we derive the expression for the basic reproduction number (\(R_0\)), a key threshold parameter used to determine whether the disease will die out or persist. Finally, we analyze the stability of the equilibrium points, focusing on the stability conditions of the disease-free and endemic equilibria, and explore the conditions under which Hopf bifurcation occurs, providing a theoretical basis for the potential oscillatory dynamics of the system.
Existence, non-negativity and boundedness of solutions
Theorem 1
For any positive initial condition
$$X(0) = (S(0), E(0), I(0), H(0), R(0), M(0)) \in \Gamma ,$$
there always exists a non-negative, bounded, and unique solution
$$X(t) = (S(t), E(t), I(t), H(t), R(t), M(t)) \in \Gamma$$
for model (3), and it remains in \(\mathbb {R}_+^6\), where
$$\Gamma = \left\{ X(t) \in \mathbb {R}_+^6 : \begin{aligned}&0 \le N = S + E + I + H + R \le \max \left\{ N(0), \frac{\Lambda ^{\alpha }}{d^{\alpha }} \right\} , \\&0 \le M \le \max \left\{ M(0), \frac{\sigma _1^\alpha }{\sigma _2^\alpha } \max \left\{ N(0), \frac{\Lambda ^\alpha }{d^\alpha } \right\} \right\} , \end{aligned} \right\} ,$$
that is, the closed set \(\Gamma\) is positively invariant for model (3).
Proof of Theorem 1
To demonstrate the non-negativity of the solutions, we evaluate the fractional derivative of each variable in model (3) on the boundary where that variable vanishes. Specifically, from the equations of model (3), we have:
$$\begin{aligned} ^CD^\alpha S \Big |_{S = 0}= & \Lambda ^\alpha \ge 0, \\ ^CD^\alpha E \Big |_{E = 0}= & \left( \beta _1^{\alpha } – \beta _2^{\alpha } \frac{M(t)}{m + M(t)} \right) S(t)I(t) \ge 0, \\ ^CD^\alpha I \Big |_{I = 0}= & \varepsilon ^\alpha E(t) \ge 0, \\ ^CD^\alpha H \Big |_{H = 0}= & \mu _1^\alpha I(t) \ge 0, \\ ^CD^\alpha R \Big |_{R = 0}= & \delta _1^\alpha I(t) + \delta _2^\alpha H(t) \ge 0, \\ ^CD^\alpha M \Big |_{M = 0}= & \sigma _1^\alpha I(t – \tau ) \ge 0. \end{aligned}$$
By Theorem 2.1 in49, the non-negativity of the solutions is guaranteed, since each derivative term is non-negative when the corresponding variable is zero. Moreover, because all compartments have non-negative initial conditions and their fractional derivatives cannot become negative on these boundaries, the variables never drop below zero throughout the course of the epidemic. Hence, we have \(X(t) = (S(t), E(t), I(t), H(t), R(t), M(t))\ge 0\) for any \(t \ge 0\).
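The boundary argument above can be checked numerically. The following minimal Python sketch (all parameter values are illustrative assumptions, and it relies on the model requirement \(\beta_1^{\alpha} \ge \beta_2^{\alpha}\) so that the media-reduced transmission rate stays non-negative) evaluates each right-hand side on the corresponding boundary face for randomly sampled non-negative states:

```python
import numpy as np

# Illustrative parameter values (assumptions, not fitted to data); the model
# requires beta1 >= beta2 so the media-reduced transmission rate stays >= 0.
alpha = 0.9
Lam, d, beta1, beta2, m = 0.5, 0.1, 0.6, 0.3, 10.0
eps, xi1, de1, mu1, de2, sig1 = 0.2, 0.02, 0.1, 0.05, 0.1, 0.3
a = lambda x: x ** alpha  # shorthand for the fractional-power parameters

rng = np.random.default_rng(0)
for _ in range(1000):
    S, E, I, H, R, M, I_tau = rng.uniform(0.0, 5.0, size=7)
    # Caputo derivative of each compartment evaluated on its boundary face
    faces = [
        a(Lam),                                         # S = 0
        (a(beta1) - a(beta2) * M / (m + M)) * S * I,    # E = 0
        a(eps) * E,                                     # I = 0
        a(mu1) * I,                                     # H = 0
        a(de1) * I + a(de2) * H,                        # R = 0
        a(sig1) * I_tau,                                # M = 0
    ]
    assert all(f >= 0.0 for f in faces)
print("all boundary derivatives non-negative")
```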
To show that all the state variables remain bounded for all \(t \ge 0\), we first consider the total population \(N(t) = S(t) + E(t) + I(t) + H(t) + R(t)\) and derive an upper bound using a fractional Grönwall-type inequality. Afterwards, we demonstrate that M(t) is also bounded by leveraging the boundedness of I(t). Such boundedness guarantees that the model solutions are biologically feasible, preventing unrealistic blow-up of compartment populations or media coverage. First, we sum all equations in model (3) to obtain:
$$\begin{aligned} ^{C}D^{\alpha }N(t)&= ^CD^{\alpha }S(t) + ^{C}D^{\alpha }E(t) + ^CD^{\alpha }I(t) + ^CD^{\alpha }H(t) + ^CD^{\alpha }R(t)\\&= \Lambda ^{\alpha } – d^{\alpha }S(t) – d^{\alpha }E(t) -(d^{\alpha }+\xi _1^{\alpha })I(t) – (d^{\alpha }+\xi _2^{\alpha })H(t) – d^{\alpha }R(t). \end{aligned}$$
The inequality can be obtained:
$$^{C}D^{\alpha }N(t) \le \Lambda ^{\alpha } – \xi _1^{\alpha }I(t) – \xi _2^{\alpha }H(t) – d^{\alpha }N(t),$$
Since \(I\ge 0, H\ge 0\), we obtain:
$$^{C}D^{\alpha }N(t) + d^{\alpha }N(t) \le \Lambda ^{\alpha },$$
and from Lemma 3.3 in50, we get:
$$N(t) \le N(0) E_{\alpha }\left( -d^{\alpha } t^{\alpha }\right) + \frac{\Lambda ^{\alpha }}{d^{\alpha }} \left[ 1 – E_{\alpha }\left( -d^{\alpha } t^{\alpha }\right) \right] .$$
Rearranging the terms yields
$$N(t) \le \left[ – \frac{\Lambda ^{\alpha }}{d^{\alpha }} + N(0) \right] E_{\alpha }\left( -d^{\alpha } t^{\alpha }\right) + \frac{\Lambda ^{\alpha }}{d^{\alpha }}.$$
Since \(0 \le E_{\alpha }\left( -d^{\alpha } t^{\alpha }\right) \le 1\) for all \(t \ge 0\), and \(E_{\alpha }\left( -d^{\alpha } t^{\alpha }\right) \rightarrow 0\) as \(t \rightarrow \infty\) (so that \(N(t) \rightarrow \frac{\Lambda ^{\alpha }}{d^{\alpha }}\)), we have:
$$\begin{aligned} N(t) \le \max \left\{ N(0), \frac{\Lambda ^{\alpha }}{d^{\alpha }}\right\} . \end{aligned}$$
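The bound above can be checked numerically by solving the scalar comparison equation \(^{C}D^{\alpha }N(t) = \Lambda ^{\alpha } - d^{\alpha }N(t)\) with an L1 finite-difference discretization of the Caputo derivative. This is a minimal sketch with illustrative parameter values (assumptions), not part of the proof:

```python
import math

# L1 scheme for the Caputo derivative applied to the comparison equation
# C-D^alpha N(t) = Lam^alpha - d^alpha N(t); we verify that the trajectory
# stays below max{N(0), Lam^alpha / d^alpha}.  Parameters are assumptions.
alpha, Lam, d = 0.9, 0.5, 0.1
La, da = Lam ** alpha, d ** alpha
h, steps = 0.05, 800
c0 = h ** (-alpha) / math.gamma(2.0 - alpha)
w = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(steps)]

def solve(N0):
    """Implicit L1 step: c0 * sum_k w_k (N_{n-k} - N_{n-k-1}) = La - da*N_n."""
    N = [N0]
    for n in range(1, steps + 1):
        # memory (history) part of the L1 approximation
        hist = sum(w[k] * (N[n - k] - N[n - k - 1]) for k in range(1, n))
        N.append((La + c0 * N[n - 1] - c0 * hist) / (c0 + da))
    return N

for N0 in (0.5, 20.0):                  # start below and above Lam^a / d^a
    bound = max(N0, La / da) + 1e-6
    assert all(Nn <= bound for Nn in solve(N0))
print("bound Lam^a/d^a =", La / da)
```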
Next, consider \(M(t)\)
$$^{C}D^\alpha M = \sigma _1^\alpha I(t – \tau ) – \sigma _2^\alpha M(t).$$
Given the boundedness of \(N(t)\), we have \(I(t) \le N(t)\), which implies:
$$I(t – \tau ) \le \max \left\{ N(0), \frac{\Lambda ^\alpha }{d^\alpha } \right\} .$$
Substituting the upper bound of \(I(t – \tau )\) into the equation for \(M(t)\), we obtain:
$$^{C}D^\alpha M \le \sigma _1^\alpha \max \left\{ N(0), \frac{\Lambda ^\alpha }{d^\alpha } \right\} – \sigma _2^\alpha M(t).$$
Applying the same Mittag-Leffler comparison argument as for \(N(t)\), we find:
$$\limsup _{t \rightarrow \infty } M(t) \le \frac{\sigma _1^\alpha }{\sigma _2^\alpha } \max \left\{ N(0), \frac{\Lambda ^\alpha }{d^\alpha } \right\} .$$
Considering the initial condition \(M(0)\), we conclude that \(M(t)\) is bounded as
$$0 \le M(t) \le \max \left\{ M(0), \frac{\sigma _1^\alpha }{\sigma _2^\alpha } \max \left\{ N(0), \frac{\Lambda ^\alpha }{d^\alpha } \right\} \right\} .$$
This shows that the solution is bounded and remains in \(\mathbb {R}_+^6\); that is, the closed set \(\Gamma\) defined in the statement of Theorem 1 is positively invariant for model (3).
To establish that model (3) has a unique solution for any given non-negative initial condition, we will show that the right-hand side of model (3) is Lipschitz continuous in set \(\Gamma\), which then implies local existence and uniqueness via the Banach fixed-point principle51. Specifically, we define the functions based on the system equations and confirm that there exists a constant \(L > 0\) such that
$$\Vert G(Y) – G(Z) \Vert \le L \Vert Y – Z \Vert$$
for any two points \(Z\) and \(Y\) in a bounded domain. Consider the time interval \([t_0, t_0 + h]\), where \(h\) is sufficiently small, and the bounded domain \(\Gamma\) defined above. Since every component of a solution in \(\Gamma\) is bounded, we may set
$$U = \max \left\{ \max \left\{ N(0), \frac{\Lambda ^\alpha }{d^\alpha } \right\} , \max \left\{ M(0), \frac{\sigma _1^\alpha }{\sigma _2^\alpha } \max \left\{ N(0), \frac{\Lambda ^\alpha }{d^\alpha } \right\} \right\} \right\} ,$$
where \(U\) is a finite positive constant. Let
$$Y = (S_1, E_1, I_1, H_1, R_1, M_1), \quad Z = (S_2, E_2, I_2, H_2, R_2, M_2),$$
be two points in \(\Gamma\). Define the mapping \(G : \Gamma \rightarrow \mathbb {R}^6\) such that
$$G(X) = \begin{pmatrix} G_1(X) \\ G_2(X) \\ G_3(X) \\ G_4(X) \\ G_5(X) \\ G_6(X) \end{pmatrix} = \begin{pmatrix} \Lambda ^\alpha – \left( \beta _1^\alpha – \frac{\beta _2^\alpha M}{m + M} \right) S I – d^\alpha S \\ \left( \beta _1^\alpha – \frac{\beta _2^\alpha M}{m + M} \right) S I – \varepsilon ^\alpha E – d^\alpha E \\ \varepsilon ^\alpha E – \left( \xi _1^\alpha + \delta _1^\alpha + \mu _1^\alpha + d^\alpha \right) I \\ \mu _1^\alpha I – \left( \xi _2^\alpha + \delta _2^\alpha + d^\alpha \right) H \\ \delta _1^\alpha I + \delta _2^\alpha H – d^\alpha R \\ \sigma _1^\alpha I(t – \tau ) – \sigma _2^\alpha M \end{pmatrix}.$$
We work with the norm
$$\Vert G(Y) – G(Z) \Vert = |G_1(Y) – G_1(Z)| + |G_2(Y) – G_2(Z)| + \cdots + |G_6(Y) – G_6(Z)|.$$
Then, we calculate \(G(Y) – G(Z)\) for each component:
$$\begin{aligned} |G_1(Y) – G_1(Z)|&= \left| -\left( \beta _1^\alpha – \beta _2^\alpha \frac{M_1}{m + M_1} \right) S_1 I_1 + \left( \beta _1^\alpha – \beta _2^\alpha \frac{M_2}{m + M_2} \right) S_2 I_2- d^{\alpha } (S_1 – S_2) \right| , \\ |G_2(Y) – G_2(Z)|&= \left| \left( \beta _1^\alpha – \beta _2^\alpha \frac{M_1}{m + M_1} \right) S_1 I_1 – \left( \beta _1^\alpha – \beta _2^\alpha \frac{M_2}{m + M_2} \right) S_2 I_2 – \varepsilon ^{\alpha } (E_1 – E_2) – d^{\alpha } (E_1 – E_2) \right| , \\ |G_3(Y) – G_3(Z)|&= \left| \varepsilon ^{\alpha } (E_1 – E_2) – \left( \xi _1^\alpha + \delta _1^\alpha + \mu _1^\alpha + d^{\alpha } \right) (I_1 – I_2) \right| , \\ |G_4(Y) – G_4(Z)|&= \left| \mu _1^\alpha (I_1 – I_2) – \left( \xi _2^\alpha + \delta _2^\alpha + d^{\alpha } \right) (H_1 – H_2) \right| , \\ |G_5(Y) – G_5(Z)|&= \left| \delta _1^\alpha (I_1 – I_2) + \delta _2^\alpha (H_1 – H_2) – d^{\alpha } (R_1 – R_2) \right| , \\ |G_6(Y) – G_6(Z)|&= \left| \sigma _1^\alpha (I_1(t – \tau ) – I_2(t – \tau )) – \sigma _2^\alpha (M_1 – M_2) \right| . \end{aligned}$$
Applying the triangle inequality and using the bound \(U\) on \(\Gamma\), we get:
$$\begin{aligned} \Vert G(Y) – G(Z) \Vert&\le d^{\alpha } \left| S_1 – S_2 \right| + 2 \left| \left( \beta _1^\alpha – \beta _2^\alpha \frac{M_1}{m + M_1} \right) S_1 I_1 \right| \\&\quad + 2 \left| \left( \beta _1^\alpha – \beta _2^\alpha \frac{M_2}{m + M_2} \right) S_2 I_2 \right| + (2 \varepsilon ^{\alpha } + d^{\alpha }) \left| E_1 – E_2 \right| \\&\quad + \sigma _1^\alpha \left| I_1(t – \tau ) – I_2(t – \tau ) \right| + \left( \xi _1^\alpha + 2 \delta _1^\alpha + 2 \mu _1^\alpha + d^{\alpha } \right) \left| I_1 – I_2 \right| \\&\quad + \left( \xi _2^\alpha + 2 \delta _2^\alpha + d^{\alpha } \right) \left| H_1 – H_2 \right| + d^{\alpha } \left| R_1 – R_2 \right| + \sigma _2^\alpha \left| M_1 – M_2 \right| \\&\le (d^{\alpha } + 2U) \left| S_1 – S_2 \right| + (2 \varepsilon ^{\alpha } + d^{\alpha }) \left| E_1 – E_2 \right| + \left( \sigma _1^\alpha U + \xi _1^\alpha + 2 \delta _1^\alpha + 2 \mu _1^\alpha + d^{\alpha } \right) \left| I_1 – I_2 \right| \\&\quad + \left( \xi _2^\alpha + 2 \delta _2^\alpha + d^{\alpha } \right) \left| H_1 – H_2 \right| + d^{\alpha } \left| R_1 – R_2 \right| + \sigma _2^\alpha \left| M_1 – M_2 \right| \\&= L_1 \left| S_1 – S_2 \right| + L_2 \left| E_1 – E_2 \right| + L_3 \left| I_1 – I_2 \right| + L_4 \left| H_1 – H_2 \right| \\&\quad + L_5 \left| R_1 – R_2 \right| + L_6 \left| M_1 – M_2 \right| \\&\le L \Vert Y – Z \Vert . \end{aligned}$$
where
$$\begin{aligned} L_1&= d^{\alpha } + 2U, \\ L_2&= 2 \varepsilon ^{\alpha } + d^{\alpha }, \\ L_3&= \sigma _1^\alpha U + \xi _1^\alpha + 2 \delta _1^\alpha + 2 \mu _1^\alpha + d^{\alpha }, \\ L_4&= \xi _2^\alpha + 2 \delta _2^\alpha + d^{\alpha }, \\ L_5&= d^{\alpha }, \\ L_6&= \sigma _2^\alpha , \\ L&= \max (L_1, L_2, L_3, L_4, L_5, L_6). \end{aligned}$$
Thus, \(G\) satisfies the Lipschitz condition on \(\Gamma\). Combining this with Lemma 1, it follows that model (3) with the given initial condition has a unique solution. Therefore, together with the boundedness and non-negativity results, model (3) admits a unique solution that remains in \(\Gamma\) for all \(t \ge 0\). This completes the proof of Theorem 1. \(\square\)
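With the bound \(U\) on \(\Gamma\) in hand, the constants \(L_1, \ldots , L_6\) and \(L\) can be evaluated directly. A minimal sketch with illustrative parameter values (assumptions, not taken from data):

```python
# Assembling the Lipschitz constant L = max(L1, ..., L6) from the bound U on
# Gamma.  All parameter values below are illustrative assumptions.
alpha = 0.9
Lam, d, m = 0.5, 0.1, 10.0
eps, xi1, de1, mu1, xi2, de2, sig1, sig2 = 0.2, 0.02, 0.1, 0.05, 0.02, 0.1, 0.3, 0.4
a = lambda x: x ** alpha

N0, M0 = 1.0, 0.0
U_N = max(N0, a(Lam) / a(d))                       # bound on N(t)
U = max(U_N, max(M0, a(sig1) / a(sig2) * U_N))     # overall bound on Gamma

L1 = a(d) + 2 * U
L2 = 2 * a(eps) + a(d)
L3 = a(sig1) * U + a(xi1) + 2 * a(de1) + 2 * a(mu1) + a(d)
L4 = a(xi2) + 2 * a(de2) + a(d)
L5 = a(d)
L6 = a(sig2)
L = max(L1, L2, L3, L4, L5, L6)
print("U =", U, " L =", L)
```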
Basic reproduction number
The basic reproduction number \(R_0\) is a key metric for quantifying the transmission potential of an infectious disease. It is defined as the average number of secondary infections caused by a single infected individual during their infectious period in a completely susceptible population. \(R_0\) serves as a critical threshold for determining whether a disease will spread within a population. Typically, if \(R_0 < 1\), the disease will eventually die out; if \(R_0 > 1\), the disease is likely to persist and spread, indicating the need for intervention measures to control its transmission.
In this section, we calculate \(R_0\) using the next-generation matrix method52. This method is also applicable to fractional-order epidemic models53,54. This approach constructs two key matrices, \(\Phi\) and \(\Psi\), to reformulate the dynamics of disease transmission into a mathematically tractable form. The matrix \(\Phi\) represents the transmission rates from infected individuals to new infections, while the matrix \(\Psi\) captures the transition rates of infected individuals between various states. By determining the spectral radius of these matrices, we derive the specific expression for \(R_0\). Based on the analysis of model (3), the specific forms of the matrices \(\Phi\) and \(\Psi\) are presented as follows
$$\Phi = \begin{pmatrix} 0 & \quad \beta _1^{\alpha } S_0 & \quad 0 \\ 0 & \quad 0 & \quad 0 \\ 0 & \quad 0 & \quad 0 \end{pmatrix}, \quad \Psi = \begin{pmatrix} (\varepsilon ^{\alpha } + d^{\alpha }) & \quad 0 & \quad 0 \\ -\varepsilon ^{\alpha } & \quad (\xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha }) & \quad 0 \\ 0 & \quad -\mu _1^{\alpha } & \quad (\xi _2^{\alpha } + \delta _2^{\alpha } + d^{\alpha }) \end{pmatrix}.$$
The inverse of \(\Psi\), denoted by \(\Psi ^{-1}\), is given by
$$\Psi ^{-1} = \begin{pmatrix} \frac{1}{\varepsilon ^{\alpha } + d^{\alpha }} & \quad 0 & \quad 0 \\[10pt] \frac{\varepsilon ^{\alpha }}{(\varepsilon ^{\alpha } + d^{\alpha })(\xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha })} & \quad \frac{1}{\xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha }} & \quad 0 \\[10pt] \frac{\mu _1^{\alpha } \varepsilon ^{\alpha }}{(\varepsilon ^{\alpha } + d^{\alpha })(\xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha })(\xi _2^{\alpha } + \delta _2^{\alpha } + d^{\alpha })} & \quad \frac{\mu _1^{\alpha }}{(\xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha })(\xi _2^{\alpha } + \delta _2^{\alpha } + d^{\alpha })} & \quad \frac{1}{\xi _2^{\alpha } + \delta _2^{\alpha } + d^{\alpha }} \end{pmatrix}.$$
then, we have
$$\Phi \Psi ^{-1} = \begin{pmatrix} \frac{\beta _1^{\alpha } S_0 \varepsilon ^{\alpha }}{(d^{\alpha } + \varepsilon ^{\alpha })(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })} & \quad \frac{\beta _1^{\alpha } S_0}{(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })} & \quad 0 \\ 0 & \quad 0 & \quad 0 \\ 0 & \quad 0 & \quad 0 \end{pmatrix}.$$
The basic reproduction number of model (3) is mathematically expressed as follows
$$\begin{aligned} \begin{aligned} R_0&= \rho (\Phi \Psi ^{-1}) \\&= \frac{\beta _1^{\alpha } S_0 \varepsilon ^{\alpha }}{(\varepsilon ^{\alpha } + d^{\alpha })(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })} \\&= \frac{\beta _1^{\alpha } \Lambda ^{\alpha } \varepsilon ^{\alpha }}{d^{\alpha }(\varepsilon ^{\alpha } + d^{\alpha })(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })}, \end{aligned} \end{aligned}$$
(4)
where \(\rho (\cdot )\) denotes the spectral radius of a matrix.
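The closed-form expression in Eq. (4) can be cross-checked against a direct numerical computation of \(\rho (\Phi \Psi ^{-1})\). A minimal sketch with illustrative parameter values (assumptions):

```python
import numpy as np

# Cross-check of the closed-form R0 in Eq. (4) against the spectral radius of
# Phi * Psi^{-1}.  All parameter values are illustrative assumptions.
alpha = 0.9
Lam, d, beta1, eps = 0.5, 0.1, 0.6, 0.2
xi1, de1, mu1, xi2, de2 = 0.02, 0.1, 0.05, 0.02, 0.1
a = lambda x: x ** alpha

S0 = a(Lam) / a(d)                      # susceptible level at the DFE
Phi = np.array([[0.0, a(beta1) * S0, 0.0],
                [0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0]])
Psi = np.array([[a(eps) + a(d), 0.0, 0.0],
                [-a(eps), a(xi1) + a(de1) + a(mu1) + a(d), 0.0],
                [0.0, -a(mu1), a(xi2) + a(de2) + a(d)]])

rho = max(abs(np.linalg.eigvals(Phi @ np.linalg.inv(Psi))))
R0 = (a(beta1) * a(Lam) * a(eps)
      / (a(d) * (a(eps) + a(d)) * (a(d) + a(de1) + a(xi1) + a(mu1))))
assert abs(rho - R0) < 1e-10
print("R0 =", R0)
```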
Existence of equilibrium points
Equilibrium points represent the stable states that the system may reach over time, including the disease-free equilibrium and the endemic equilibrium. The disease-free equilibrium corresponds to a state where the disease is completely eradicated, while the endemic equilibrium indicates a state where the disease persists in the population at a stable level. By studying these equilibria, we can gain a deeper understanding of the conditions under which the disease spreads and the long-term behavior of the system. In this section, we discuss the existence of the disease-free equilibrium and the endemic equilibrium.
Theorem 2
For model (3), a disease-free equilibrium \(E_0 = \left( \frac{\Lambda ^{\alpha }}{d^{\alpha }}, 0, 0, 0, 0, 0\right)\) always exists. Furthermore, if \(R_0 > 1\), model (3) has a unique endemic equilibrium \(E^* = (S^*, E^*, I^*, H^*, R^*, M^*).\)
Proof of Theorem 2
Setting the fractional derivatives in model (3) equal to zero, we obtain the following algebraic equations
$$\begin{aligned} \left\{ \begin{aligned} 0&= \Lambda ^\alpha – \left( \beta _1^\alpha – \beta _2^\alpha \frac{M(t)}{m + M(t)} \right) S(t) I(t) – d^\alpha S(t), \\ 0&= \left( \beta _1^\alpha – \beta _2^\alpha \frac{M(t)}{m + M(t)} \right) S(t) I(t) – \varepsilon ^\alpha E(t) – d^\alpha E(t), \\ 0&= \varepsilon ^\alpha E(t) – \left( \xi _1^\alpha + \delta _1^\alpha + \mu _1^\alpha + d^\alpha \right) I(t), \\ 0&= \mu _1^\alpha I(t) – \left( \xi _2^\alpha + \delta _2^\alpha + d^\alpha \right) H(t), \\ 0&= \delta _1^\alpha I(t) + \delta _2^\alpha H(t) – d^\alpha R(t), \\ 0&= \sigma _1^\alpha I(t – \tau ) – \sigma _2^\alpha M(t). \end{aligned} \right. \end{aligned}$$
(5)
A simple calculation shows that Eq. (5) always admits the disease-free equilibrium \(E_0 = \left( \frac{\Lambda ^{\alpha }}{d^{\alpha }}, 0, 0, 0, 0, 0\right).\) Assuming that there exists a positive solution \(X^* = (S^*, E^*, I^*, H^*, R^*, M^*)\) to Eq. (5), we can express \(S^*, E^*, H^*, R^*\) and \(M^*\) in terms of \(I^*\), respectively, with the following expressions:
$$\begin{aligned} S^*= & \frac{\left( \sigma _2^\alpha m + \sigma _1^\alpha I^* \right) (\varepsilon ^{\alpha } + d^{\alpha })(\xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha })}{\varepsilon ^{\alpha } \left[ \beta _1^{\alpha }\left( \sigma _2^\alpha m + \sigma _1^\alpha I^* \right) – \beta _2^{\alpha } \sigma _1^\alpha I^* \right] }, \\ E^*= & \frac{(\xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha })}{\varepsilon ^{\alpha }} I^*, \\ H^*= & \frac{\mu _1^{\alpha }}{\xi _2^{\alpha } + \delta _2^{\alpha } + d^{\alpha }} I^*, \\ R^*= & \frac{[(\xi _2^{\alpha } + \delta _2^{\alpha } + d^{\alpha })\delta _1^{\alpha } + \mu _1^{\alpha } \delta _2^{\alpha }]}{d^{\alpha }(\xi _2^{\alpha } + \delta _2^{\alpha } + d^{\alpha })}I^*, \\ M^*= & \frac{\sigma _1^\alpha }{\sigma _2^\alpha } I^*. \end{aligned}$$
By substituting the expressions of \(S^*\), \(E^*\), \(H^*\), \(R^*\), and \(M^*\), which are expressed in terms of \(I^*\), into the first equation of Eq. (5), we obtain the following expression:
$$\begin{aligned} 0= & \Lambda ^{\alpha } \varepsilon ^{\alpha } \left[ \beta _1^{\alpha }\left( \frac{\sigma _1^\alpha }{\sigma _2^\alpha }I^* + m\right) – \frac{\sigma _1^\alpha }{\sigma _2^\alpha }I^* \beta _2^{\alpha }\right] – (\varepsilon ^{\alpha } + d^{\alpha })(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha }) \\ & \times \left[ I^*\left( \beta _1^{\alpha }\left( \frac{\sigma _1^\alpha }{\sigma _2^\alpha }I^* + m\right) – \frac{\sigma _1^\alpha }{\sigma _2^\alpha }I^* \beta _2^{\alpha }\right) + d^{\alpha }\left( \frac{\sigma _1^\alpha }{\sigma _2^\alpha }I^* + m\right) \right] . \end{aligned}$$
To simplify the above equation, we group the terms by their powers of \(I^*\). The quadratic term, linear term, and constant term are represented as coefficients \(a_1\), \(a_2\), and \(a_3\), respectively. This allows us to rewrite the equation in the standard quadratic form:
$$a_1 I^{*2} + a_2 I^* + a_3 = 0,$$
where the coefficients \(a_1\), \(a_2\), and \(a_3\) are defined as follows:
$$\begin{aligned} a_1&= \frac{\sigma _1^\alpha }{\sigma _2^\alpha }(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })(d^{\alpha } + \varepsilon ^{\alpha })(\beta _2^{\alpha } – \beta _1^{\alpha }), \\ \\ a_2&= \frac{\sigma _1^\alpha }{\sigma _2^\alpha }\Lambda ^{\alpha } \varepsilon ^{\alpha }(\beta _1^{\alpha } – \beta _2^{\alpha }) – (d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })(d^{\alpha } + \varepsilon ^{\alpha })\left( d^{\alpha }\frac{\sigma _1^\alpha }{\sigma _2^\alpha } + m \beta _1^{\alpha }\right) \\&= \frac{\sigma _1^\alpha }{\sigma _2^\alpha }\left[ \Lambda ^{\alpha } \varepsilon ^{\alpha } \beta _1^{\alpha } – (d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })(d^{\alpha } + \varepsilon ^{\alpha })d^{\alpha }\right] – \frac{\sigma _1^\alpha }{\sigma _2^\alpha }\beta _2^{\alpha }\Lambda ^{\alpha } \varepsilon ^{\alpha } – (d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })(d^{\alpha } + \varepsilon ^{\alpha })m \beta _1^{\alpha } \\&= \frac{\sigma _1^\alpha }{\sigma _2^\alpha }d^{\alpha }(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })(d^{\alpha } + \varepsilon ^{\alpha })(R_0 – 1) – \frac{\sigma _1^\alpha }{\sigma _2^\alpha }\beta _2^{\alpha }\Lambda ^{\alpha } \varepsilon ^{\alpha } – (d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })(d^{\alpha } + \varepsilon ^{\alpha })m \beta _1^{\alpha }, \\ \\ a_3&= m \left[ \beta _1^{\alpha } \Lambda ^{\alpha } \varepsilon ^{\alpha } – d^{\alpha }(\varepsilon ^{\alpha } + d^{\alpha })(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })\right] \\&= m d^{\alpha }(\varepsilon ^{\alpha } + d^{\alpha })(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })(R_0 – 1). \end{aligned}$$
Clearly, \(a_1\) is always negative under the model assumption \(\beta _2^{\alpha } < \beta _1^{\alpha }\). When \(R_0 < 1\), it follows that \(a_2 < 0\) and \(a_3 < 0\); by Descartes’ rule of signs55, the equation then has no positive real root. When \(R_0 > 1\), we have \(a_3 > 0\), and it is necessary to distinguish the sign of \(a_2\). If \(a_2 < 0\), the negative \(a_2\) combined with the positive \(a_3\) yields exactly one sign change in the sequence of polynomial coefficients, so by Descartes’ rule of signs there is precisely one positive real root. If \(a_2 > 0\), the coefficient sequence again exhibits exactly one sign change, leading to the same conclusion. Therefore, when \(R_0 > 1\), model (3) admits a unique endemic equilibrium \(E^* = (S^*, E^*, I^*, H^*, R^*, M^*)\). \(\square\)
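The sign pattern of \(a_1, a_2, a_3\) and the uniqueness of the positive root can be illustrated numerically. A minimal sketch with illustrative parameter values (assumptions) chosen so that \(R_0 > 1\):

```python
import numpy as np

# Sign pattern of a1, a2, a3 and the unique positive root I* when R0 > 1.
# All parameter values are illustrative assumptions.
alpha = 0.9
Lam, d, beta1, beta2, m, eps = 0.5, 0.1, 0.6, 0.3, 10.0, 0.2
xi1, de1, mu1, sig1, sig2 = 0.02, 0.1, 0.05, 0.3, 0.4
a = lambda x: x ** alpha

u = a(eps) + a(d)                           # epsilon^a + d^a
v = a(d) + a(de1) + a(xi1) + a(mu1)         # total removal rate from I
r = a(sig1) / a(sig2)
R0 = a(beta1) * a(Lam) * a(eps) / (a(d) * u * v)

a1 = r * u * v * (a(beta2) - a(beta1))
a2 = (r * a(Lam) * a(eps) * (a(beta1) - a(beta2))
      - u * v * (a(d) * r + m * a(beta1)))
a3 = m * a(d) * u * v * (R0 - 1.0)

roots = np.roots([a1, a2, a3])
pos = [z.real for z in roots if abs(z.imag) < 1e-12 and z.real > 0]
assert R0 > 1 and a1 < 0 and a3 > 0 and len(pos) == 1
print("R0 =", R0, " I* =", pos[0])
```

Since \(a_1 < 0\) and \(a_3 > 0\), the product of the two roots is negative, so exactly one root is positive regardless of the sign of \(a_2\), in line with the Descartes argument above.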
Stability of the disease-free equilibrium
Theorem 3
The disease-free equilibrium point \(E_0\) of model (3) is locally asymptotically stable if the condition \(R_0 < 1\) is satisfied.
Proof of Theorem 3
At the disease-free equilibrium \(E_0\), the Jacobian matrix \(J_{E_0}\) is given by
$$J_{E_0} = \begin{pmatrix} -d^{\alpha } & \quad 0 & \quad -\frac{\beta _1^{\alpha } \Lambda ^{\alpha }}{d^{\alpha }} & \quad 0 & \quad 0 & \quad 0 \\ 0 & \quad -(\varepsilon ^{\alpha } + d^{\alpha }) & \quad \frac{\beta _1^{\alpha } \Lambda ^{\alpha }}{d^{\alpha }} & \quad 0 & \quad 0 & \quad 0 \\ 0 & \quad \varepsilon ^{\alpha } & \quad -(\xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha }) & \quad 0 & \quad 0 & \quad 0 \\ 0 & \quad 0 & \quad \mu _1^{\alpha } & \quad -(\xi _2^{\alpha } + \delta _2^{\alpha } + d^{\alpha }) & \quad 0 & \quad 0 \\ 0 & \quad 0 & \quad \delta _1^{\alpha } & \quad \delta _2^{\alpha } & \quad -d^{\alpha } & \quad 0 \\ 0 & \quad 0 & \quad \sigma _1^\alpha e^{-\lambda \tau } & \quad 0 & \quad 0 & \quad -\sigma _2^{\alpha } \end{pmatrix}.$$
The corresponding characteristic equation is given by \(\det (\lambda E – J(E_0)) = 0\), where \(E\) is the identity matrix. Therefore, we obtain:
$$(\lambda + d^{\alpha } )(\lambda + d^{\alpha } )(\lambda + \sigma _2^{\alpha } )[\lambda + (d^{\alpha } + \delta _2^{\alpha } + \xi _2^{\alpha })] ( \lambda ^2 + c_1 \lambda + c_2) = 0,$$
Clearly, \(\lambda _1 = -d^{\alpha }\), \(\lambda _2 = -(d^{\alpha } + \delta _2^{\alpha } + \xi _2^{\alpha })\) , \(\lambda _3 = -d^{\alpha }\) and \(\lambda _4 = -\sigma _2^{\alpha }\) are four eigenvalues of the matrix \(J_{E_0}.\)
The remaining eigenvalues \(\lambda _5\) and \(\lambda _6\) are the roots of the following equation:
$$\lambda ^2 + c_1 \lambda + c_2 = 0,$$
where
$$\begin{aligned} c_1&= 2d^{\alpha } + \varepsilon ^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha }, \\ c_2&= (d^{\alpha } + \varepsilon ^{\alpha })(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha }) – \frac{\Lambda ^{\alpha } \beta _1^{\alpha } \varepsilon ^{\alpha }}{d^{\alpha }} \\&= (d^{\alpha } + \varepsilon ^{\alpha })(d^{\alpha } + \delta _1^{\alpha } + \xi _1^{\alpha } + \mu _1^{\alpha })(1 – R_0). \end{aligned}$$
To determine the stability of the equilibrium, consider the condition \(R_0 < 1\). In this case \(c_2 > 0\), and \(c_1 > 0\) always holds because all parameters are positive, so \(\lambda _5\) and \(\lambda _6\) have negative real parts. By applying the Routh–Hurwitz conditions for fractional-order differential equations56, the eigenvalues satisfy \(|\text {arg}(\lambda _i)| > \frac{\alpha \pi }{2}\) for \(i = 1, 2, 3, 4, 5, 6\). Therefore, the stability conditions are satisfied, and the disease-free equilibrium \(E_0\) is locally asymptotically stable for any \(\alpha \in (0, 1]\), provided that \(R_0 < 1\). \(\square\)
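The eigenvalue condition can be checked numerically at \(\tau = 0\) (so that \(e^{-\lambda \tau } = 1\)). A minimal sketch with illustrative parameter values (assumptions), where \(\beta _1\) is chosen small enough that \(R_0 < 1\):

```python
import numpy as np

# Eigenvalue check at the disease-free equilibrium: with R0 < 1, every
# eigenvalue of J_{E0} (tau = 0) should satisfy |arg(lambda)| > alpha*pi/2.
# All parameter values are illustrative assumptions.
alpha = 0.9
Lam, d, beta1, eps = 0.5, 0.1, 0.05, 0.2
xi1, de1, mu1, xi2, de2, sig1, sig2 = 0.02, 0.1, 0.05, 0.02, 0.1, 0.3, 0.4
a = lambda x: x ** alpha

bS0 = a(beta1) * a(Lam) / a(d)          # beta1^a * S0 at the DFE
J = np.array([
    [-a(d), 0, -bS0, 0, 0, 0],
    [0, -(a(eps) + a(d)), bS0, 0, 0, 0],
    [0, a(eps), -(a(xi1) + a(de1) + a(mu1) + a(d)), 0, 0, 0],
    [0, 0, a(mu1), -(a(xi2) + a(de2) + a(d)), 0, 0],
    [0, 0, a(de1), a(de2), -a(d), 0],
    [0, 0, a(sig1), 0, 0, -a(sig2)],
])
R0 = a(beta1) * a(Lam) * a(eps) / (a(d) * (a(eps) + a(d))
                                   * (a(d) + a(de1) + a(xi1) + a(mu1)))
eig = np.linalg.eigvals(J)
assert R0 < 1
assert all(abs(np.angle(z)) > alpha * np.pi / 2 for z in eig)
print("R0 =", R0, " eigenvalue real parts:", np.sort(eig.real))
```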
Stability of the endemic disease equilibrium
Stability of the endemic disease equilibrium for \(\tau = 0\)
Theorem 4
If \(R_0 > 1\) and \(\tau = 0\), the endemic equilibrium \(E^* = (S^*, E^*, I^*, H^*, R^*, M^*)\) of the model (3) is locally asymptotically stable, provided the following conditions are satisfied:
$$a_1> 0, \quad a_4 > 0,$$
$$\Delta _2 = \begin{vmatrix} a_1&\quad a_3 \\ 1&\quad a_2 \end{vmatrix} = a_1 a_2 – a_3 > 0,$$
$$\Delta _3 = \begin{vmatrix} a_1&\quad a_3&\quad 0 \\ 1&\quad a_2&\quad a_4 \\ 0&\quad a_1&\quad a_3 \end{vmatrix} = a_1 a_2 a_3 – a_1^2 a_4 – a_3^2 > 0.$$
where
$$\begin{aligned} a_1&= u + v – \sigma _2^{\alpha } + p I^* + d^{\alpha }, \\ a_2&= [uv – \sigma _2^{\alpha }(u + v)] – p S^* \varepsilon ^{\alpha } + (p I^* + d^{\alpha })(u + v – \sigma _2^{\alpha }), \\ a_3&= – u v \sigma _2^{\alpha } – \varepsilon ^{\alpha } p S^* \sigma _2^{\alpha } + \sigma _1^{\alpha } q + p^2 \varepsilon ^{\alpha } S^* \\&\quad + (p I^* + d^{\alpha })[uv – (u + v)\sigma _2^{\alpha }] – (p I^* + d^{\alpha })p S^* \varepsilon ^{\alpha }, \\ a_4&= p^2 S^* \varepsilon ^{\alpha } \sigma _2^{\alpha } – q p \varepsilon ^{\alpha } \sigma _1^{\alpha } – (p I^* + d^{\alpha })(u v \sigma _2^{\alpha } + p S^* \varepsilon ^{\alpha } \sigma _2^{\alpha } – \sigma _1^{\alpha } q), \\ p&= \beta _1^{\alpha } – \beta _2^{\alpha } \frac{M^*}{m + M^*}, \\ q&= \frac{m \beta _2^{\alpha } S^* I^*}{(m + M^*)^2}, \\ u&= d^{\alpha } + \varepsilon ^{\alpha }, \\ v&= \xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha }. \end{aligned}$$
Proof of Theorem 4
At the endemic equilibrium \(E^*\), the Jacobian matrix \(J_{E^*}\) of model (3) is given by:
$$J_{E^*} = \begin{bmatrix} -\left( \beta _1^\alpha – \frac{\beta _2^\alpha M^*}{m + M^*} \right) I^* – d^\alpha & 0 & -\left( \beta _1^\alpha – \frac{\beta _2^\alpha M^*}{m + M^*} \right) S^* & 0 & 0 & \frac{m \beta _2^\alpha S^* I^*}{(m + M^*)^2} \\ \left( \beta _1^\alpha – \frac{\beta _2^\alpha M^*}{m + M^*} \right) I^* & -\varepsilon ^\alpha – d^\alpha & \left( \beta _1^\alpha – \frac{\beta _2^\alpha M^*}{m + M^*} \right) S^* & 0 & 0 & – \frac{m \beta _2^\alpha S^* I^*}{(m + M^*)^2} \\ 0 & \varepsilon ^\alpha & -\left( \xi _1^\alpha + \delta _1^\alpha + \mu _1^\alpha + d^\alpha \right) & 0 & 0 & 0 \\ 0 & 0 & \mu _1^\alpha & -\left( \xi _2^\alpha + \delta _2^\alpha + d^\alpha \right) & 0 & 0 \\ 0 & 0 & \delta _1^\alpha & \delta _2^\alpha & -d^\alpha & 0 \\ 0 & 0 & \sigma _1^\alpha & 0 & 0 & -\sigma _2^\alpha \end{bmatrix}.$$
From the structure of the matrix \(J_{E^*}\), the following eigenvalues can be directly identified:
$$\lambda _1 = -d^\alpha , \quad \lambda _2 = -(\xi _2^\alpha + \delta _2^\alpha + d^\alpha ).$$
These eigenvalues are both negative. To analyze the remaining eigenvalues, we remove the rows and columns associated with \(\lambda _1\) and \(\lambda _2\). The reduced Jacobian matrix is:
$$J_2 = \begin{bmatrix} -\left( \beta _1^\alpha – \frac{\beta _2^\alpha M^*}{m + M^*} \right) I^* – d^\alpha & 0 & -\left( \beta _1^\alpha – \frac{\beta _2^\alpha M^*}{m + M^*} \right) S^* & \frac{m \beta _2^\alpha S^* I^*}{(m + M^*)^2} \\ \left( \beta _1^\alpha – \frac{\beta _2^\alpha M^*}{m + M^*} \right) I^* & -\varepsilon ^\alpha – d^\alpha & \left( \beta _1^\alpha – \frac{\beta _2^\alpha M^*}{m + M^*} \right) S^* & – \frac{m \beta _2^\alpha S^* I^*}{(m + M^*)^2} \\ 0 & \varepsilon ^\alpha & -\left( \xi _1^\alpha + \delta _1^\alpha + \mu _1^\alpha + d^\alpha \right) & 0 \\ 0 & 0 & \sigma _1^\alpha & -\sigma _2^\alpha \end{bmatrix}.$$
The characteristic equation is
$$\lambda ^4 + a_1 \lambda ^3 + a_2 \lambda ^2 + a_3 \lambda + a_4 = 0,$$
where
$$\begin{aligned} a_1&= u + v – \sigma _2^{\alpha } + p I^* + d^{\alpha }, \\ a_2&= [uv – \sigma _2^{\alpha }(u + v)] – p S^* \varepsilon ^{\alpha } + (p I^* + d^{\alpha })(u + v – \sigma _2^{\alpha }), \\ a_3&= – u v \sigma _2^{\alpha } – \varepsilon ^{\alpha } p S^* \sigma _2^{\alpha } + \sigma _1^{\alpha } q + p^2 \varepsilon ^{\alpha } S^* \\&\quad + (p I^* + d^{\alpha })[uv – (u + v)\sigma _2^{\alpha }] – (p I^* + d^{\alpha })p S^* \varepsilon ^{\alpha }, \\ a_4&= p^2 S^* \varepsilon ^{\alpha } \sigma _2^{\alpha } – q p \varepsilon ^{\alpha } \sigma _1^{\alpha } – (p I^* + d^{\alpha })(u v \sigma _2^{\alpha } + p S^* \varepsilon ^{\alpha } \sigma _2^{\alpha } – \sigma _1^{\alpha } q), \\ p&= \beta _1^{\alpha } – \beta _2^{\alpha } \frac{M^*}{m + M^*}, \\ q&= \frac{m \beta _2^{\alpha } S^* I^*}{(m + M^*)^2}, \\ u&= d^{\alpha } + \varepsilon ^{\alpha }, \\ v&= \xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha }. \end{aligned}$$
By Lemma 2, the endemic equilibrium \(E^* = (S^*, E^*, I^*, H^*, R^*, M^*)\) is locally asymptotically stable provided the following conditions are satisfied:
$$\begin{aligned} \Delta _1= & \begin{vmatrix} a_1 \end{vmatrix} = a_1> 0, \\ \Delta _2= & \begin{vmatrix} a_1&\quad a_3 \\ 1&\quad a_2 \end{vmatrix} = a_1 a_2 – a_3> 0, \\ \Delta _3= & \begin{vmatrix} a_1&\quad a_3&\quad 0 \\ 1&\quad a_2&\quad a_4 \\ 0&\quad a_1&\quad a_3 \end{vmatrix} = a_1 a_2 a_3 – a_1^2 a_4 – a_3^2> 0, \\ \Delta _4= & \begin{vmatrix} a_1&\quad a_3&\quad 0&\quad 0 \\ 1&\quad a_2&\quad a_4&\quad 0 \\ 0&\quad a_1&\quad a_3&\quad 0 \\ 0&\quad 0&\quad a_2&\quad a_4 \end{vmatrix} = a_4 \Delta _3 > 0. \end{aligned}$$
\(\square\)
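Because the closed-form coefficients are lengthy, a numerical cross-check is useful. The sketch below takes the characteristic-polynomial coefficients directly from the reduced Jacobian \(J_2\) (via its eigenvalues) rather than from the expressions above, and verifies that the Hurwitz conditions agree with a direct eigenvalue computation. All parameter values are illustrative assumptions with \(R_0 > 1\):

```python
import numpy as np

# Routh-Hurwitz check for the endemic equilibrium at tau = 0.  Coefficients
# of the quartic are computed numerically from J2; all parameters are
# illustrative assumptions.
alpha = 0.9
Lam, d, beta1, beta2, m, eps = 0.5, 0.1, 0.6, 0.3, 10.0, 0.2
xi1, de1, mu1, sig1, sig2 = 0.02, 0.1, 0.05, 0.3, 0.4
a = lambda x: x ** alpha

u, v = a(eps) + a(d), a(d) + a(de1) + a(xi1) + a(mu1)
r = a(sig1) / a(sig2)
R0 = a(beta1) * a(Lam) * a(eps) / (a(d) * u * v)

# endemic I* from the quadratic a1 I*^2 + a2 I* + a3 = 0
c = [r * u * v * (a(beta2) - a(beta1)),
     r * a(Lam) * a(eps) * (a(beta1) - a(beta2)) - u * v * (a(d) * r + m * a(beta1)),
     m * a(d) * u * v * (R0 - 1.0)]
Istar = max(z.real for z in np.roots(c) if abs(z.imag) < 1e-12)
Mstar = r * Istar
p = a(beta1) - a(beta2) * Mstar / (m + Mstar)
Sstar = u * v / (a(eps) * p)          # from the E and I equilibrium equations
q = m * a(beta2) * Sstar * Istar / (m + Mstar) ** 2

# reduced Jacobian in the (S, E, I, M) coordinates, tau = 0
J2 = np.array([[-(p * Istar + a(d)), 0, -p * Sstar, q],
               [p * Istar, -u, p * Sstar, -q],
               [0, a(eps), -v, 0],
               [0, 0, a(sig1), -a(sig2)]])
a1, a2, a3, a4 = np.poly(J2).real[1:]  # l^4 + a1 l^3 + a2 l^2 + a3 l + a4
D2 = a1 * a2 - a3
D3 = a1 * a2 * a3 - a1 ** 2 * a4 - a3 ** 2
hurwitz_stable = bool(a1 > 0 and a4 > 0 and D2 > 0 and D3 > 0)
eig_stable = all(z.real < 0 for z in np.linalg.eigvals(J2))
assert R0 > 1 and hurwitz_stable == eig_stable
print("R0 =", R0, " stable:", eig_stable)
```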
Stability of the endemic disease equilibrium for \(\tau > 0\) and Hopf bifurcation
Theorem 5
Suppose that the following conditions are satisfied:
$$(-3b_1 \omega _0^2 + b_3)(-b_1 \omega _0^4 + b_3 \omega _0^2) + (-4\omega _0^3 + 2b_2 \omega _0)(-\omega _0^5 + b_2 \omega _0^3 – b_4 \omega _0) > 0,$$
$$b_4^2 < (q p \varepsilon ^{\alpha } \sigma _1^{\alpha })^2,$$
where
$$\begin{aligned} b_1&= u + v – \sigma _2^{\alpha } + p I^* + d^{\alpha }, \\ b_2&= [uv – \sigma _2^{\alpha }(u + v)] – p S^* \varepsilon ^{\alpha } + (p I^* + d^{\alpha })(u + v – \sigma _2^{\alpha }), \\ b_3&= – u v \sigma _2^{\alpha } – \varepsilon ^{\alpha } p S^* \sigma _2^{\alpha } + p^2 \varepsilon ^{\alpha } S^* \\&\quad + (p I^* + d^{\alpha })[uv – (u + v)\sigma _2^{\alpha }] – (p I^* + d^{\alpha })p S^* \varepsilon ^{\alpha }, \\ b_4&= p^2 S^* \varepsilon ^{\alpha } \sigma _2^{\alpha } – (p I^* + d^{\alpha })(u v \sigma _2^{\alpha } + p S^* \varepsilon ^{\alpha } \sigma _2^{\alpha } – \sigma _1^{\alpha } q), \\ b_5&= \sigma _1^{\alpha } q – q p \varepsilon ^{\alpha } \sigma _1^{\alpha }, \\ p&= \beta _1^{\alpha } – \beta _2^{\alpha } \frac{M^*}{m + M^*}, \\ q&= \frac{m \beta _2^{\alpha } S^* I^*}{(m + M^*)^2}, \\ u&= d^{\alpha } + \varepsilon ^{\alpha }, \\ v&= \xi _1^{\alpha } + \delta _1^{\alpha } + \mu _1^{\alpha } + d^{\alpha }. \end{aligned}$$
Under these conditions, when \(R_0 > 1\), there exists a bifurcation parameter \(\tau _0\). Specifically:
(i) If \(\tau < \tau _0\), the endemic equilibrium \(E^*\) is locally asymptotically stable;

(ii) If \(\tau > \tau _0\), the endemic equilibrium \(E^*\) becomes unstable;

(iii) At \(\tau = \tau _0\), a Hopf bifurcation occurs at \(E^*\).
The bifurcation parameter \(\tau _0\) is given by:
$$\tau _0 = \frac{1}{\omega _0} \arccos \left( \frac{q p \varepsilon ^{\alpha } \sigma _1^{\alpha }(\omega _0^4 – b_2 \omega _0^2 + b_4) + \sigma _1^{\alpha } q(b_1 \omega _0^3 – b_3 \omega _0)}{(q p \varepsilon ^{\alpha } \sigma _1^{\alpha })^2 + (\sigma _1^{\alpha } q)^2} \right) .$$
Proof of Theorem 5
When \(\tau > 0\), the Jacobian matrix of model (3) at the endemic equilibrium \(E^*\) is given by:
$$J = \begin{bmatrix} -\left( \beta _1^\alpha - \frac{\beta _2^\alpha M^*}{m + M^*} \right) I^* - d^\alpha & 0 & -\left( \beta _1^\alpha - \frac{\beta _2^\alpha M^*}{m + M^*} \right) S^* & 0 & 0 & \frac{m \beta _2^\alpha S^* I^*}{(m + M^*)^2} \\ \left( \beta _1^\alpha - \frac{\beta _2^\alpha M^*}{m + M^*} \right) I^* & -\varepsilon ^\alpha - d^\alpha & \left( \beta _1^\alpha - \frac{\beta _2^\alpha M^*}{m + M^*} \right) S^* & 0 & 0 & -\frac{m \beta _2^\alpha S^* I^*}{(m + M^*)^2} \\ 0 & \varepsilon ^\alpha & -\left( \xi _1^\alpha + \delta _1^\alpha + \mu _1^\alpha + d^\alpha \right) & 0 & 0 & 0 \\ 0 & 0 & \mu _1^\alpha & -\left( \xi _2^\alpha + \delta _2^\alpha + d^\alpha \right) & 0 & 0 \\ 0 & 0 & \delta _1^\alpha & \delta _2^\alpha & -d^\alpha & 0 \\ 0 & 0 & \sigma _1^\alpha e^{-\lambda \tau } & 0 & 0 & -\sigma _2^\alpha \end{bmatrix}.$$
The characteristic equation of model (3) is given by:
$$(\lambda + d^\alpha )(\lambda + \xi _2^\alpha + \delta _2^\alpha + d^\alpha )(\lambda ^4 + b_1 \lambda ^3 + b_2 \lambda ^2 + b_3 \lambda + b_4 + b_5 e^{-\lambda \tau }) = 0.$$
It is evident that the eigenvalues include \(\lambda _1 = -d^\alpha\) and \(\lambda _2 = -(\xi _2^\alpha + \delta _2^\alpha + d^\alpha )\). Thus, the remaining eigenvalues satisfy the following equation:
$$\begin{aligned} \lambda ^4 + b_1 \lambda ^3 + b_2 \lambda ^2 + b_3 \lambda + b_4 + b_5 e^{-\lambda \tau } = 0, \end{aligned}$$
(6)
where the coefficients \(b_1, \ldots , b_5\) and the auxiliary quantities \(p\), \(q\), \(u\), \(v\) are as defined in the statement of Theorem 5.
To demonstrate that model (3) undergoes a Hopf bifurcation at the endemic equilibrium \(E^*\), it is necessary to show that Eq. (6) has a pair of purely imaginary roots. Substituting \(\lambda = \omega i\) \((\omega > 0)\) into Eq. (6), we have:
$$\begin{aligned} \omega ^4 - b_1 \omega ^3 i - b_2 \omega ^2 + b_3 \omega i + b_4 + b_5 (\cos {\omega \tau } - i \sin {\omega \tau }) = 0. \end{aligned}$$
(7)
Then, separating the real and imaginary parts, we obtain:
$$\begin{aligned} {\left\{ \begin{array}{ll} \omega ^4 - b_2 \omega ^2 + b_4 = q p \varepsilon ^{\alpha } \sigma _1^{\alpha } \cos {\omega \tau } - \sigma _1^{\alpha } q \omega \sin {\omega \tau }, \\ b_1 \omega ^3 - b_3 \omega = \sigma _1^{\alpha } q \omega \cos {\omega \tau } + q p \varepsilon ^{\alpha } \sigma _1^{\alpha } \sin {\omega \tau }. \end{array}\right. } \end{aligned}$$
(8)
It follows that:
$$\begin{aligned} \tau _0 = \frac{1}{\omega _0} \arccos \left( \frac{q p \varepsilon ^{\alpha } \sigma _1^{\alpha }(\omega _0^4 - b_2 \omega _0^2 + b_4) + \sigma _1^{\alpha } q \omega _0 (b_1 \omega _0^3 - b_3 \omega _0)}{(q p \varepsilon ^{\alpha } \sigma _1^{\alpha })^2 + (\sigma _1^{\alpha } q \omega _0)^2} \right) . \end{aligned}$$
(9)
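As a numerical sketch of how this critical delay can be evaluated in practice, one may pick illustrative coefficient values (placeholders, not quantities derived from model (3)), locate \(\omega _0\) from Eq. (10), and then solve system (8), which is linear in \(\cos {\omega _0 \tau }\) and \(\sin {\omega _0 \tau }\). Here \(A\) and \(B\) are shorthand for \(q p \varepsilon ^{\alpha } \sigma _1^{\alpha }\) and \(\sigma _1^{\alpha } q\):

```python
import numpy as np

# Illustrative placeholder coefficients of Eq. (6); A, B abbreviate the
# composite quantities q*p*eps^alpha*sigma1^alpha and sigma1^alpha*q.
b1, b2, b3, b4 = 1.2, 0.9, 0.5, 0.3
A, B = 0.7, 0.4

# Coefficients k1..k4 of Eq. (11); k4 < 0 guarantees a positive root z0.
k1 = b1**2 - 2*b2
k2 = b2**2 - 2*b1*b3 + 2*b4
k3 = b3**2 - 2*b2*b4 - B**2
k4 = b4**2 - A**2

roots = np.roots([1.0, k1, k2, k3, k4])
z0 = max(r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0)
omega0 = np.sqrt(z0)  # critical frequency

# System (8) is linear in (cos, sin): solve for them, then read off tau0.
R1 = omega0**4 - b2*omega0**2 + b4
R2 = b1*omega0**3 - b3*omega0
c, s = np.linalg.solve(np.array([[A, -B*omega0], [B*omega0, A]]),
                       np.array([R1, R2]))

# Since omega0 solves Eq. (10), (c, s) lies on the unit circle; tau0 is the
# smallest positive delay with cos(omega0*tau0) = c and sin(omega0*tau0) = s.
tau0 = (np.arctan2(s, c) % (2*np.pi)) / omega0
print(omega0, tau0)
```

With these placeholder values one obtains \(\omega _0 \approx 1\) and \(\tau _0 \approx 0.53\); the point is only that every step of the derivation above is directly computable.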
By eliminating \(\tau\) from the system of equations (8), we obtain:
$$\begin{aligned} \omega ^8 + (b_1^2 - 2b_2)\omega ^6 + (b_2^2 - 2b_1 b_3 + 2b_4)\omega ^4 + (b_3^2 - 2b_2 b_4 - (\sigma _1^{\alpha } q)^2)\omega ^2 + b_4^2 - (q p \varepsilon ^{\alpha } \sigma _1^{\alpha })^2 = 0. \end{aligned}$$
(10)
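The elimination step can be verified symbolically: squaring the two equations of (8), adding them, and using \(\cos ^2 {\omega \tau } + \sin ^2 {\omega \tau } = 1\) must reproduce Eq. (10) term by term. A minimal sympy sketch, with \(A = q p \varepsilon ^{\alpha } \sigma _1^{\alpha }\) and \(B = \sigma _1^{\alpha } q\):

```python
import sympy as sp

w, b1, b2, b3, b4, A, B = sp.symbols('w b1 b2 b3 b4 A B', positive=True)

# Left-hand sides of system (8); squaring and adding eliminates tau because
# the right-hand sides satisfy RHS1^2 + RHS2^2 = A^2 + (B*w)^2.
R1 = w**4 - b2*w**2 + b4
R2 = b1*w**3 - b3*w
eliminated = sp.expand(R1**2 + R2**2 - A**2 - (B*w)**2)

# Eq. (10), written with the coefficients k1..k4 of Eq. (11).
eq10 = sp.expand(w**8 + (b1**2 - 2*b2)*w**6 + (b2**2 - 2*b1*b3 + 2*b4)*w**4
                 + (b3**2 - 2*b2*b4 - B**2)*w**2 + b4**2 - A**2)

print(sp.simplify(eliminated - eq10))  # 0: Eq. (10) matches the elimination
```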
Let \(z = \omega ^2\); then Eq. (10) reduces to \(g(z) = 0\), with
$$\begin{aligned} g(z) = z^4 + k_1 z^3 + k_2 z^2 + k_3 z + k_4, \end{aligned}$$
(11)
where
$$\begin{aligned} k_1&= b_1^2 - 2b_2, \\ k_2&= b_2^2 - 2b_1 b_3 + 2b_4, \\ k_3&= b_3^2 - 2b_2 b_4 - (\sigma _1^{\alpha } q)^2, \\ k_4&= b_4^2 - (q p \varepsilon ^{\alpha } \sigma _1^{\alpha })^2. \end{aligned}$$
If \(k_4 < 0\), that is, \(b_4^2 < (q p \varepsilon ^{\alpha } \sigma _1^{\alpha })^2\), then \(g(0) < 0\), while \(g(z) \rightarrow +\infty\) as \(z \rightarrow +\infty\). Therefore, \(g(z) = 0\) has at least one positive root \(z_0\). Setting \(\omega _0 = \sqrt{z_0}\), Eq. (6) then has a pair of purely imaginary roots \(\pm \omega _0 i\) when \(\tau = \tau _0\).
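The intermediate value argument here is constructive: since \(g(0) = k_4 < 0\) and \(g(z) \rightarrow +\infty\), any bracketing method locates the positive root. A small sketch with hypothetical coefficients (any values with \(k_4 < 0\) work):

```python
def g(z, k1=1.0, k2=0.5, k3=0.2, k4=-2.0):
    # Quartic of Eq. (11); k4 < 0 gives g(0) < 0.
    return z**4 + k1*z**3 + k2*z**2 + k3*z + k4

# Expand the bracket to the right until g changes sign, then bisect.
lo, hi = 0.0, 1.0
while g(hi) < 0:
    hi *= 2.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)

z0 = 0.5 * (lo + hi)   # positive root of g
omega0 = z0 ** 0.5     # candidate critical frequency omega_0
```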
Returning to the bifurcation analysis, regard \(\tau\) as a parameter and let \(\lambda (\tau ) = \eta (\tau ) + \omega (\tau )i\) denote the root of Eq. (6) satisfying \(\eta (\tau _0) = 0\), \(\omega (\tau _0) = \omega _0 > 0\). To establish the existence of a Hopf bifurcation, it is necessary to verify the transversality condition, i.e., the sign of \(\frac{d\text {Re}(\lambda (\tau ))}{d\tau }\big |_{\tau =\tau _0}\).
Differentiating Eq. (6) with respect to \(\tau\), we have:
$$(4\lambda ^3 + 3 b_1 \lambda ^2 + 2 b_2 \lambda + b_3 - b_5 \tau e^{-\lambda \tau }) \frac{d\lambda }{d\tau } - b_5 \lambda e^{-\lambda \tau } = 0,$$
After rearrangement, we get:
$$\left(\frac{d\lambda }{d\tau }\right)^{-1} = \frac{4\lambda ^3 + 3 b_1 \lambda ^2 + 2 b_2 \lambda + b_3}{- \lambda (\lambda ^4 + b_1 \lambda ^3 + b_2 \lambda ^2 + b_3 \lambda + b_4)} - \frac{\tau }{\lambda }.$$
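The rearrangement above uses the relation \(b_5 e^{-\lambda \tau } = -(\lambda ^4 + b_1 \lambda ^3 + b_2 \lambda ^2 + b_3 \lambda + b_4)\), which holds along the characteristic roots. This substitution can be checked symbolically; a sketch assuming the delay term enters Eq. (6) as \(b_5 e^{-\lambda \tau }\):

```python
import sympy as sp

lam, tau, b1, b2, b3, b4, b5 = sp.symbols('lam tau b1 b2 b3 b4 b5')

P = lam**4 + b1*lam**3 + b2*lam**2 + b3*lam + b4   # delay-free part of Eq. (6)
dP = sp.diff(P, lam)

# (dlam/dtau)^(-1) read off directly from the differentiated equation.
inv_raw = (dP - b5*tau*sp.exp(-lam*tau)) / (b5*lam*sp.exp(-lam*tau))

# Along characteristic roots, b5*exp(-lam*tau) = -P; substitute b5 accordingly.
inv_sub = inv_raw.subs(b5, -P*sp.exp(lam*tau))

# Claimed rearranged form from the proof.
inv_claim = dP / (-lam * P) - tau / lam

print(sp.simplify(inv_sub - inv_claim))  # 0: the rearrangement is exact
```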
Therefore,
$$\text {sign}\left[ \frac{d \text {Re}\lambda }{d \tau }\right] _{\tau = \tau _0} = \text {sign}\left[ \text {Re}\left( \frac{d\lambda }{d\tau }\right) ^ {-1}\right] _{\tau = \tau _0}.$$
Evaluating at \(\lambda = \omega _0 i\) (the term \(\tau /\lambda\) is purely imaginary there and does not contribute to the real part):
$$\begin{aligned} \text {Re}\left( \frac{d\lambda }{d\tau }\right) ^{-1}\bigg |_{\tau = \tau _0}&= \text {Re}\left\{ \frac{4\lambda ^3 + 3 b_1 \lambda ^2 + 2 b_2 \lambda + b_3}{- \lambda (\lambda ^4 + b_1 \lambda ^3 + b_2 \lambda ^2 + b_3 \lambda + b_4)} \right\} _{\lambda = \omega _0 i} \\&= \text {Re}\left\{ \frac{-4\omega _0^3 i - 3b_1 \omega _0^2 + 2b_2 \omega _0 i + b_3}{-\omega _0^5 i - b_1 \omega _0^4 + b_2 \omega _0^3 i + b_3 \omega _0^2 - b_4 \omega _0 i} \right\} \\&= \text {Re}\left\{ \frac{(-3b_1 \omega _0^2 + b_3) + i(-4\omega _0^3 + 2b_2 \omega _0)}{(-b_1 \omega _0^4 + b_3 \omega _0^2) + i(-\omega _0^5 + b_2 \omega _0^3 - b_4 \omega _0)} \right\} . \end{aligned}$$
According to the rules of complex division:
$$\frac{a + bi}{c + di} = \frac{(ac + bd) + i(bc – ad)}{c^2 + d^2}.$$
Consequently,
$$\text {Re}\left( \frac{d\lambda }{d\tau }\right) ^{-1}\bigg |_{\tau = \tau _0} = \frac{(-3b_1 \omega _0^2 + b_3)(-b_1 \omega _0^4 + b_3 \omega _0^2) + (-4\omega _0^3 + 2b_2 \omega _0)(-\omega _0^5 + b_2 \omega _0^3 - b_4 \omega _0)}{(-b_1 \omega _0^4 + b_3 \omega _0^2)^2 + (-\omega _0^5 + b_2 \omega _0^3 - b_4 \omega _0)^2}.$$
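As a numerical illustration, the sign of this real part can be evaluated directly once \(\omega _0\) and \(b_1, \ldots , b_4\) are known; the values below are placeholders, not quantities fitted to model (3):

```python
# Placeholder values; in applications omega_0 comes from Eq. (10).
b1, b2, b3, b4 = 1.2, 0.9, 0.5, 0.3
w0 = 1.0

num = ((-3*b1*w0**2 + b3) * (-b1*w0**4 + b3*w0**2)
       + (-4*w0**3 + 2*b2*w0) * (-w0**5 + b2*w0**3 - b4*w0))
den = (-b1*w0**4 + b3*w0**2)**2 + (-w0**5 + b2*w0**3 - b4*w0)**2

re_inv = num / den
print(re_inv > 0)  # True for these values: the transversality condition holds
```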
From Lemma 4, the theorem follows provided that
$$(-3b_1 \omega _0^2 + b_3)(-b_1 \omega _0^4 + b_3 \omega _0^2) + (-4\omega _0^3 + 2b_2 \omega _0)(-\omega _0^5 + b_2 \omega _0^3 - b_4 \omega _0) > 0,$$
which is precisely the first condition of the theorem; in that case \(\frac{d\text {Re}(\lambda )}{d\tau }\big |_{\tau = \tau _0} > 0\), so the transversality condition holds. \(\square\)