In this paper, we propose a dimensional splitting method for the three-dimensional (3D) rotating Navier-Stokes equations. Assume that the domain is a channel bounded by two surfaces Im and decomposed by a series of surfaces Im_{i} into several sub-domains, called the layers of the flow. Every interface Im_{i} between two sub-domains shares the same geometry. After establishing a semi-geodesic coordinate (S-coordinate) system based on Im_{i}, the Navier-Stokes equations in this coordinate system can be expressed as the sum of two operators: one, called the membrane operator, is defined on the tangent space of Im_{i}; the other, called the bending operator, takes values in the normal space of Im_{i}. The derivatives of the velocity with respect to the normal direction of the surface are then approximated by Euler central differences, yielding an approximate form of the Navier-Stokes equations on the surface Im_{i}, called the two-dimensional three-component (2D-3C) Navier-Stokes equations on a two-dimensional manifold. Solving these equations by alternate iteration gives an approximate solution to the original 3D Navier-Stokes equations. In addition, we prove the existence of solutions to the 2D-3C Navier-Stokes equations and present some approximate methods for solving them.
Length-biased data arise in many important fields, including epidemiological cohort studies, cancer screening trials and labor economics. Analysis of such data has attracted much attention in the literature. In this paper we propose a quantile regression approach for analyzing right-censored and length-biased data. We derive an inverse probability weighted estimating equation for the quantile regression to correct the bias due to length-biased sampling and informative censoring. The method easily handles the informative censoring induced by length-biased sampling. This is an appealing feature of the proposed method, since it is generally difficult to obtain unbiased estimates of risk factors in the presence of length bias and informative censoring. We establish the consistency and asymptotic distribution of the proposed estimator using empirical process techniques. A resampling method is adopted to estimate the variance of the estimator. We conduct simulation studies to evaluate its finite-sample performance and use a real data set to illustrate the application of the proposed method.
We discuss a variant of the multi-task n-vehicle exploration problem. Instead of requiring an optimal permutation of vehicles in every group, the new problem requires all vehicles in a group to arrive at the same destination. Given n tasks, each with an assigned consumption time and profit, the problem may also be viewed as maximizing every processor's average profit. Further, we propose a new kind of partition problem in fractional form and analyze its computational complexity. By regarding the fractional partition problem as a special case, we prove that the average profit maximization problem is NP-hard when the number of processors is fixed and strongly NP-hard in general. Finally, a pseudo-polynomial time algorithm for the average profit maximization problem and the fractional partition problem is presented, using the idea of the pseudo-polynomial time algorithm for the classical partition problem.
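For reference, the classical pseudo-polynomial dynamic program for the PARTITION problem, on which the proposed algorithm is modelled, can be sketched as follows (a minimal Python sketch; the function name and interface are illustrative, not the authors' implementation):

```python
def can_partition(weights):
    """Classical pseudo-polynomial DP for PARTITION: decide whether a
    multiset of positive integers splits into two subsets of equal sum."""
    total = sum(weights)
    if total % 2:
        return False
    target = total // 2
    # reachable[s] == True iff some subset of the items seen so far sums to s
    reachable = [False] * (target + 1)
    reachable[0] = True
    for w in weights:
        # iterate downwards so each item is used at most once
        for s in range(target, w - 1, -1):
            if reachable[s - w]:
                reachable[s] = True
    return reachable[target]
```

The table `reachable` has `target + 1` entries, so the running time is O(n·W) in the total weight W: pseudo-polynomial in the numeric values rather than polynomial in the input length, exactly the kind of bound the abstract refers to.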
In this paper, we consider the fault-tolerant concave facility location problem (FTCFL) with uniform requirements. By investigating the structure of the FTCFL, we obtain a modified dual-fitting bifactor approximation algorithm. Combining the scaling and greedy augmentation techniques, the approximation factor is proved to be 1.52.
This article studies the asymptotic solution of a singularly perturbed boundary value problem with a second-order turning point, encountered in the dissipative equilibrium vector field of the coupled convection disturbance kinetic equations under the constrained field and gravity. Using the matching of asymptotic expansions, the formal asymptotic solution is constructed. By the theory of differential inequalities, the uniform validity of the asymptotic expansion of the solution is proved.
In this paper, we obtain Hájek-Rényi-type inequalities, with concrete coefficients, for a pairwise NQD sequence, an L^{r} (r > 1) mixingale and a linear process. In addition, we obtain the strong law of large numbers, the strong growth rate and the integrability of the supremum for the above sequences, which generalize and improve Corollary 2 of Hansen for L^{r} (r > 1) mixingales.
We consider an optimization problem of an insurance company in the diffusion setting, in which the company controls both the dividend payout and capital injections, and may additionally purchase (cheap or non-cheap) proportional reinsurance. The objective is to maximize the cumulative expected discounted dividends minus the penalized discounted capital injections until the ruin time. We solve the control problems by constructing two categories of suboptimal models, one without capital injections and one in which bankruptcy is prevented by capital injections. We then derive explicit solutions for the value function and fully characterize the optimal strategies. In particular, for cheap reinsurance, they coincide with those in the model without bankruptcy.
The main purpose of this paper is to prove the well-posedness of the two-dimensional Boussinesq equations when the initial vorticity ω0 ∈ L^{1}(R^{2}) (or the space of finite Radon measures). Using the stream function form of the equations and the Schauder fixed-point theorem, we give a new proof of these results and show that when the initial vorticity is smooth, there exists a unique classical solution of the Cauchy problem for the two-dimensional Boussinesq equations.
We establish new Kamenev-type oscillation criteria for the half-linear partial differential equation with damping div(A(x)‖∇u‖^{p-2}∇u) + ⟨b(x), ‖∇u‖^{p-2}∇u⟩ + c(x)|u|^{p-2}u = 0 (E) under quite general conditions. These results extend, in a natural way, the recent results developed by Sun [Y.G. Sun, New Kamenev-type oscillation criteria of second order nonlinear differential equations with damping, J. Math. Anal. Appl. 291 (2004) 341-351] for second-order ordinary differential equations, and improve some existing results in the literature. As applications, we illustrate our main results using two different types of half-linear partial differential equations.
Many cellular and subcellular biological processes can be described in terms of diffusing and chemically reacting species (e.g. enzymes). In this paper, we use reflected and absorbed Brownian motion and stochastic differential equations to construct a closed-form solution to one-dimensional Robin boundary problems, and we give a natural explanation of this closed-form solution from a stochastic point of view. Finally, we extend the problem to the Robin boundary problem with two boundary conditions and give a specific solution by resorting to a stopping time.
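As a rough illustration of the probabilistic objects behind this construction, Brownian motion on [0, ∞) reflected at the origin can be approximated by an Euler scheme that reflects each step (a hypothetical sketch, not the authors' construction; the function name and parameter values are illustrative):

```python
import numpy as np

def reflected_bm_endpoints(x0=0.5, T=1.0, n_steps=1000, n_paths=10000, seed=1):
    """Euler sketch of Brownian motion started at x0 >= 0 and
    reflected at 0; returns the simulated positions at time T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        # take a Gaussian step, then fold negative values back: |x + dW|
        x = np.abs(x + np.sqrt(dt) * rng.standard_normal(n_paths))
    return x
```

Absorption at a boundary would instead stop each path at its first hitting time; mixing the two behaviours in the right proportion is what connects such paths to Robin (mixed) boundary conditions.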
In this paper, we examine the best time to sell a stock at a price as close as possible to its highest price over a finite time horizon [0, T], where the stock price is modelled by a geometric Brownian motion and the 'closeness' is measured by the relative error of the stock price to its highest price over [0, T]. More precisely, we want to optimize the expression sup_{0≤τ≤T} E[V_{τ}/M_{T}], where (V_{t})_{t≥0} is a geometric Brownian motion with constant drift α and constant volatility σ, M_{T} = max_{0≤t≤T} V_{t} is the running maximum of the stock price, and the supremum is taken over all possible stopping times 0 ≤ τ ≤ T adapted to the natural filtration (F_{t})_{t≥0} of the stock price. The above problem has been considered by Shiryaev, Xu and Zhou (2008) and Du Toit and Peskir (2009). In this paper we provide an independent proof that when α = σ^{2}/2, a selling strategy is optimal if and only if it sells the stock either at the terminal time T or at the moment when the stock price hits its maximum price so far. Moreover, when α > σ^{2}/2, selling the stock at the terminal time T is the unique optimal selling strategy. Our approach to the problem is purely probabilistic and is inspired by relating the notion of the dominant stopping time ρ_{τ} of a stopping time τ to the optimal stopping strategy arising in the classical "Secretary Problem".
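The performance of the "sell at the terminal time T" strategy can be estimated by simulation. The following Monte Carlo sketch (with illustrative, hypothetical parameter values) estimates E[V_T / M_T] for a geometric Brownian motion:

```python
import numpy as np

def mean_relative_value_at_T(alpha=0.08, sigma=0.4, T=1.0,
                             n_steps=252, n_paths=20000, seed=0):
    """Monte Carlo estimate of E[V_T / max_{0<=t<=T} V_t] for a GBM
    with drift alpha and volatility sigma, started at V_0 = 1."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # log-price increments: (alpha - sigma^2/2) dt + sigma sqrt(dt) Z
    z = rng.standard_normal((n_paths, n_steps))
    log_incr = (alpha - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_path = np.cumsum(log_incr, axis=1)
    v = np.exp(np.hstack([np.zeros((n_paths, 1)), log_path]))  # V_0 = 1
    running_max = v.max(axis=1)                                # ~ M_T
    return (v[:, -1] / running_max).mean()
```

Since V_T ≤ M_T pathwise, the estimate always lies in (0, 1]; comparing it across drift values α above and below σ²/2 gives a numerical feel for the phase transition described in the abstract.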
In this paper, we combine Leimer’s algorithm with the MCS-M algorithm to decompose graphical models into marginal models on prime blocks. Experiments show that our method is easier to implement and faster than Leimer’s algorithm.
An initial-boundary value problem for a class of differential-difference reaction-diffusion equations with a small time delay is considered. Under suitable conditions, and by using the method of stretched variables, the formal asymptotic solution is constructed. Then, by using the theory of differential inequalities, the uniform validity of the solution is proved.
This paper is concerned with the decay rate of solutions of a quasilinear wave equation with viscosity. We use a so-called energy perturbation method to establish the decay rate of solutions, in terms of the energy norm, for a class of nonlinear functions f. With the help of a comparison lemma for differential inequalities, we establish a relationship between the decay rate of solutions and the nonlinearity f.
In this paper, we consider the isentropic compressible Navier-Stokes-Poisson equations arising from the transport of charged particles or the motion of gaseous stars in astrophysics. We are interested in the case where the viscosity coefficients depend on the density and degenerate at (density) vacuum, and we show the L^{1}-stability of weak solutions for arbitrarily large data on multi-dimensional bounded or periodic domains or in the whole space.
Histogram and kernel estimators are usually regarded as the two main classical data-based nonparametric tools for estimating the underlying density functions of given data sets. In this paper we integrate them: we define a histogram-kernel error based on the integrated squared error between the histogram and the binned kernel density estimator, and then study its asymptotic properties. As shown in this paper, the histogram-kernel error depends only on the choice of bin width and the data, for given prior kernel densities. The asymptotically optimal bin width is derived by minimizing the mean histogram-kernel error. By comparison with Scott's optimal bin width formula for a histogram, a new method is proposed to construct a data-based histogram without knowledge of the underlying density function. A Monte Carlo study verifies the usefulness of our method for different kinds of density functions and sample sizes.
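Scott's normal-reference rule used as the benchmark above sets the bin width to h = 3.49·σ̂·n^{-1/3}, where σ̂ is the sample standard deviation. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def scott_bin_width(data):
    """Scott's normal-reference bin width: h = 3.49 * sigma_hat * n**(-1/3),
    where sigma_hat is the sample standard deviation."""
    data = np.asarray(data, dtype=float)
    n = data.size
    sigma_hat = data.std(ddof=1)  # unbiased-variance sample std
    return 3.49 * sigma_hat * n ** (-1.0 / 3.0)
```

The rule is derived by minimizing the asymptotic mean integrated squared error of the histogram under a Gaussian reference density; the method proposed in the paper replaces this reference-density assumption with the data-driven histogram-kernel error criterion.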