Most Read Articles

    Sampling Survey in the Context of Big Data
    JIN Yongjin, LIU Xiaoyu
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 2-16.   DOI: 10.12341/jssms21449
    Abstract364)      PDF (669KB)(258)       Save
    Big data is characterized by large volume, rich variety, and rapid growth, but it also suffers from low value density and poor representativeness, which brings both opportunities and challenges to sampling surveys. In the context of big data, how should sampling surveys adapt, and what developments and applications follow? This paper discusses the question from three perspectives. First, new sampling methods with strong adaptability to the data-stream environment can obtain representative samples efficiently and accurately while respecting constraints on storage space, processing time, and computing capacity. Second, non-probability sampling methods that require no sampling frame have been developed through internet surveys and social network data collection, and can obtain large numbers of analysis samples quickly and at low cost. Third, the advantages of big data and sampling surveys can be combined by integrating online and offline survey data. For the case where the online sample is a non-probability sample and the offline sample is a probability sample, this article puts forward a basic idea for data integration: On the one hand, the probability sample is used to carry out a "probability test" of the non-probability sample; on the other hand, information from the probability sample is extracted to make inferences based on models or pseudo-randomization.
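    As a rough sketch of the pseudo-randomization idea mentioned above (an illustration only, not the authors' estimator; the function and variable names are hypothetical), one can fit a propensity model for membership in the online non-probability sample against the design-weighted offline probability sample and form inverse-odds pseudo-weights:

    # Illustrative pseudo-weighting for combining a non-probability (online)
    # sample with a probability (offline) reference sample; names are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def pseudo_weighted_mean(X_np, y_np, X_prob, d_prob):
        """X_np, y_np: covariates/outcome of the non-probability sample;
        X_prob, d_prob: covariates and design weights of the probability sample."""
        X = np.vstack([X_np, X_prob])
        z = np.r_[np.ones(len(X_np)), np.zeros(len(X_prob))]
        # Reference units are weighted by their design weights when estimating
        # the propensity of appearing in the non-probability sample.
        w_fit = np.r_[np.ones(len(X_np)), d_prob]
        model = LogisticRegression(max_iter=1000).fit(X, z, sample_weight=w_fit)
        p = model.predict_proba(X_np)[:, 1]
        pseudo_w = (1 - p) / p                # inverse-odds pseudo-weights
        return np.sum(pseudo_w * y_np) / np.sum(pseudo_w)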
    Related Articles | Metrics
    Superpixel Merging-Based Hyperspectral Image Classification
    XIE Fuding, LI Xu, HUANG Dan, JIN Cui
    Journal of Systems Science and Mathematical Sciences    2021, 41 (12): 3268-3279.   DOI: 10.12341/jssms21388
    Abstract334)      PDF (653KB)(201)       Save
    Superpixel-level hyperspectral image classification is a representative spectral-spatial classification approach. Compared with pixel-wise classification, it has clear advantages in both accuracy and efficiency. Its main disadvantage, however, is that the classification results depend heavily on the superpixel segmentation scale. Existing literature shows that the optimal segmentation scale is usually found experimentally and is difficult to specify in advance. To weaken this dependence, a superpixel-level hyperspectral image classification algorithm based on superpixel merging is proposed in this work. A local modularity function is first used to merge the constructed sparse weighted superpixel graph. Through a newly defined mapping, each merged superpixel is represented as a sample, and the popular KNN method is then adopted to classify the merged image at the superpixel level. Superpixel merging enhances the role of spatial information in classification, effectively weakens the dependence of the results on the segmentation scale, and improves classification accuracy. To evaluate its effectiveness, the proposed algorithm is compared with several competitive hyperspectral image classification methods on four publicly available real hyperspectral datasets. The experimental and comparative results show that the proposed method not only reduces the influence of the superpixel segmentation scale on the classification results, but also has clear advantages in both classification accuracy and computational efficiency.
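    A minimal sketch of the superpixel-as-sample step described above (illustrative only; the modularity-based merging is assumed to have already produced the segmentation labels, and all names are hypothetical):

    # Represent every (merged) superpixel by its mean spectrum and classify
    # at the superpixel level with KNN.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def classify_superpixels(cube, seg_labels, train_mask, train_classes, k=5):
        """cube: (H, W, B) hyperspectral image; seg_labels: (H, W) superpixel ids;
        train_mask: (H, W) bool; train_classes: (H, W) integer class ids."""
        ids = np.unique(seg_labels)
        feats = np.array([cube[seg_labels == i].mean(axis=0) for i in ids])
        # A superpixel is a training sample if it contains labelled pixels;
        # its label is the majority class among them.
        labels, is_train = np.zeros(len(ids), int), np.zeros(len(ids), bool)
        for j, i in enumerate(ids):
            cls = train_classes[(seg_labels == i) & train_mask]
            if cls.size:
                labels[j] = np.bincount(cls).argmax()
                is_train[j] = True
        knn = KNeighborsClassifier(n_neighbors=k).fit(feats[is_train], labels[is_train])
        pred = knn.predict(feats)                      # one label per superpixel
        return pred[np.searchsorted(ids, seg_labels)]  # broadcast back to pixels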
    Reference | Related Articles | Metrics
    Review of Autonomous Driving Test and Evaluation
    YU Weizhi, SU Yimin, WANG Lin
    Journal of Systems Science and Mathematical Sciences    2022, 42 (3): 495-508.   DOI: 10.12341/jssms21113
    Abstract322)      PDF (660KB)(233)       Save
    With the rise of research on autonomous driving technology, establishing a scientific and complete test and evaluation system for autonomous driving algorithms has gradually attracted attention. The test and evaluation system for autonomous vehicles mainly covers testing methods as well as the selection of evaluation plans and indicators. Different countries have also formulated standards and regulations for the development and application of autonomous vehicles, which support the establishment of such an evaluation system. This article therefore surveys the testing and evaluation of autonomous vehicles from four aspects: background, testing methods, evaluation plans, and national policies and research. Regarding the background, the significance and applications of testing and evaluating autonomous vehicles are introduced. Regarding testing methods, the article covers Monte Carlo simulation, game-theoretic methods, test-matrix evaluation, and worst-case scenario evaluation, and also introduces accelerated evaluation of automated vehicles. Regarding evaluation plans, the article reviews four projects: the Annual Research Report on Autonomous Vehicle Simulation in China, the German PEGASUS project, the Chinese Intelligent Vehicle Future Challenge, and the European AdaptIVe project; evaluation plans from other academic literature are also covered. Finally, the article briefly introduces the standards and policies formulated for autonomous driving in recent years in China, the United States, Europe, and other countries and regions, along with related companies and their simulation software.
    Reference | Related Articles | Metrics
    A Trajectory Learning and Obstacle Avoidance Method for Manipulators Based on DMP-RRT
    JIN Yuqiang, QIU Xiang, LIU Andong, ZHANG Wen'an
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 193-205.   DOI: 10.12341/jssms20224
    Abstract228)      PDF (7577KB)(128)       Save
    To avoid the shortcomings of commonly used trajectory planning methods, such as cumbersome model coupling and difficult model manipulation, a trajectory generation and obstacle avoidance method for manipulators is proposed based on Learning from Demonstration (LfD). After preprocessing the data recorded on the robot platform, the method combines the Gaussian mixture model (GMM), dynamic movement primitives (DMP), and the rapidly-exploring random tree (RRT). The Gaussian mixture model, aimed at optimizing the set of demonstration data, is employed to generate trajectories containing as many motion features as possible. DMP is used to model and generalize the movements. The trajectory is then adjusted by the RRT algorithm to meet the operation requirements in complex environments with obstacles of different shapes. Finally, pick-and-place experiments on a Franka manipulator validate the effectiveness of the proposed method.
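    For readers unfamiliar with DMPs, a minimal one-dimensional discrete DMP sketch is given below (an illustration of the general technique, not the paper's implementation; the GMM preprocessing and RRT-based obstacle adjustment are omitted, and the gains are conventional defaults):

    # Learn a forcing term from one demonstration, then generalize to a new goal.
    import numpy as np

    def learn_dmp(y_demo, dt, n_basis=20, alpha_z=25.0, alpha_x=4.0):
        beta_z = alpha_z / 4.0
        T = len(y_demo) * dt
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-alpha_x * np.arange(len(y_demo)) * dt / T)   # canonical system
        g, y0 = y_demo[-1], y_demo[0]
        # Forcing term reconstructed from the demonstration.
        f_target = T**2 * ydd - alpha_z * (beta_z * (g - y_demo) - T * yd)
        c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))        # basis centres
        h = n_basis / c
        psi = np.exp(-h * (x[:, None] - c) ** 2)                 # (steps, n_basis)
        s = x * (g - y0)
        # Locally weighted regression for the basis weights.
        w = np.array([np.sum(s * psi[:, i] * f_target) /
                      (np.sum(s**2 * psi[:, i]) + 1e-10) for i in range(n_basis)])
        return dict(w=w, c=c, h=h, alpha_z=alpha_z, beta_z=beta_z,
                    alpha_x=alpha_x, T=T, y0=y0)

    def rollout(dmp, g_new, dt, steps):
        y, z, x = dmp["y0"], 0.0, 1.0
        traj = []
        for _ in range(steps):
            psi = np.exp(-dmp["h"] * (x - dmp["c"]) ** 2)
            f = x * (g_new - dmp["y0"]) * psi @ dmp["w"] / (psi.sum() + 1e-10)
            zd = (dmp["alpha_z"] * (dmp["beta_z"] * (g_new - y) - z) + f) / dmp["T"]
            y += z / dmp["T"] * dt
            z += zd * dt
            x += -dmp["alpha_x"] * x / dmp["T"] * dt
            traj.append(y)
        return np.array(traj)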
    Reference | Related Articles | Metrics
    Analysis of Zero-Hopf Bifurcation in High Dimensional Polynomial Differential Systems with Algorithm Derivation
    HUANG Bo, HAN Deren
    Journal of Systems Science and Mathematical Sciences    2021, 41 (12): 3280-3298.   DOI: 10.12341/jssms21399
    Abstract227)      PDF (459KB)(159)       Save
    This paper deals with the Zero-Hopf bifurcation in high-dimensional polynomial differential systems. First, we reduce the bifurcation analysis to an algebraic problem and give a method for determining the bifurcation set of the Zero-Hopf bifurcation points of differential systems by using a symbolic algorithm for solving semi-algebraic systems. Then, based on the second-order averaging method, an algorithmic framework for the Zero-Hopf bifurcation analysis of differential systems is derived, and the limit cycle bifurcation problem is studied through specific examples using symbolic computation, yielding some new results. Finally, we propose several related research problems.
    Reference | Related Articles | Metrics
    Further Results on the Equivalence of Multivariate Polynomial Matrices
    LI Dongmei, GUI Yingying
    Journal of Systems Science and Mathematical Sciences    2021, 41 (12): 3299-3310.   DOI: 10.12341/jssms21407
    Abstract212)      PDF (307KB)(114)       Save
    Multidimensional systems are often described by polynomial matrices, and problems on the equivalence of multidimensional systems in system theory are often transformed into problems on the equivalence of polynomial matrices. In this paper, we mainly study the equivalence of two kinds of multivariate polynomial matrices, and obtain the discriminant conditions for the equivalence of these matrices and their Smith forms, respectively. The conditions are easily verified, and an example is also used to illustrate these in the paper.
    Reference | Related Articles | Metrics
    Constructions of Two Classes of Optimal Cyclic Locally Repairable Codes
    CHEN Min, KAI Xiaoshan
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 487-494.   DOI: 10.12341/jssms21261
    Abstract199)      PDF (295KB)(93)       Save
    Locally repairable codes are a class of erasure codes that can repair multiple failed nodes, and they are widely used in distributed storage systems. Constructing optimal locally repairable codes is currently a central topic in distributed storage coding. In this paper, the following two classes of optimal $(r,\delta)$ locally repairable codes based on cyclic codes over $\mathbb{F}_{q}$ are constructed: 1) $[3(q+1),3(q+1)-3\delta+1,\delta+2]$, where $q\equiv1(\bmod~6)$, $r+\delta-1=q+1$ and $2\leq\delta\leq q-1$ is even; 2) $[3(q-1),3(q-1)-3\delta+2,\delta+1]$, where $q\equiv7(\bmod~9)$, $r+\delta-1=q-1$, and $2\leq\delta\leq\frac{2(q-1)}{3}$ is even with $\delta\not\equiv0(\bmod~6)$.
    Reference | Related Articles | Metrics
    Optimal Subsampling Algorithm for Big Data  Ridge Regression
    LI Lili, JIN Shilei, ZHOU Kaihe
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 50-63.   DOI: 10.12341/jssms21494
    Abstract187)      PDF (514KB)(154)       Save
    With the advent of the big data era, in order to improve computational efficiency, Wang et al. (2018) proposed an optimal subsampling algorithm for logistic regression, which provides a better trade-off between estimation efficiency and computational efficiency. To address the problem of multicollinearity among variables, this paper proposes an optimal subsampling algorithm in the context of ridge regression and proves the consistency and asymptotic normality of the estimator obtained from it. Numerical experiments are carried out on both simulated and real data to evaluate the proposed methods. The results show that the optimal subsampling algorithm produces results similar to the full-data analysis while significantly reducing the computational cost.
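    The general optimal-subsampling workflow (pilot estimate, data-driven sampling probabilities, inverse-probability-weighted estimate) can be sketched as follows; the residual-times-norm probabilities used here are a simple stand-in, not the paper's optimal formula:

    import numpy as np

    def weighted_ridge(X, y, w, lam):
        # Solve (X' W X + lam I) beta = X' W y.
        XtW = X.T * w
        return np.linalg.solve(XtW @ X + lam * np.eye(X.shape[1]), XtW @ y)

    def subsample_ridge(X, y, r0=500, r=2000, lam=1.0, rng=np.random.default_rng(0)):
        n = len(y)
        # Step 1: uniform pilot subsample and pilot estimate.
        idx0 = rng.choice(n, r0, replace=False)
        beta0 = weighted_ridge(X[idx0], y[idx0], np.ones(r0), lam)
        # Step 2: sampling probabilities proportional to |residual| * ||x||.
        pi = np.abs(y - X @ beta0) * np.linalg.norm(X, axis=1)
        pi = pi / pi.sum()
        # Step 3: subsample with replacement and reweight by 1 / (r * pi).
        idx = rng.choice(n, r, replace=True, p=pi)
        return weighted_ridge(X[idx], y[idx], 1.0 / (r * pi[idx]), lam)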
    Related Articles | Metrics
    Group Sequential Randomization for Split Questionnaire Design
    YANG Haoyu, QIN Yichen, LI Yang
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 17-34.   DOI: 10.12341/jssms21515
    Abstract186)      PDF (1305KB)(74)       Save
    Sampling surveys remain an essential tool in the era of big data. However, traditional sampling surveys face the dual challenges of rising execution costs and declining data quality. Split questionnaire design has attracted increasing attention from researchers as an effective way to reduce cost and improve data quality. In this paper, we discuss the sub-questionnaire assignment process in split questionnaire design. Under the assumption that participants arrive according to a Poisson process, a sequential randomization method that accounts for covariate balance is designed with the goal of improving the similarity between the sub-samples and the population. Both theoretical and numerical results show that the proposed method outperforms existing methods in sub-sample balance and estimation accuracy.
    Related Articles | Metrics
    Optimal Subsampling Algorithm for Nonparametric Local Polynomial Regression Estimation
    NIU Xiaoyang, ZOU Jiahui
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 72-84.   DOI: 10.12341/jssms21475
    Abstract185)      PDF (551KB)(154)       Save
    In this paper, we extend the subsampling method under the linear model to the nonparametric regression setting and propose two subsampling methods for the nonparametric local polynomial regression model. First, we derive the convergence rate of the subsample-based weighted least squares estimator to the full-sample weighted least squares estimator, as well as the asymptotic normality of the subsample estimator. Then, using the criterion of minimizing the asymptotic variance, we propose two subsampling methods, OPT and PL, for the nonparametric local polynomial regression model. Finally, OPT subsampling and PL subsampling are compared numerically with uniform subsampling and basic leveraging subsampling in terms of mean squared error, fitting performance, and computational cost. The results show that the subsampling methods based on the OPT and PL criteria have clear advantages in improving estimation accuracy and reducing the computational burden.
    Related Articles | Metrics
    Epidemic Modeling Based on Hierarchical Bayesian Spatio-Temporal Poisson Model
    LIANG Yongyu, TIAN Maozai
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 462-472.   DOI: 10.12341/jssms20374
    Abstract185)      PDF (537KB)(111)       Save
    The wide spread of an epidemic has a huge impact on economic development and daily life. Collecting epidemic data and analyzing the spatio-temporal patterns of the incidence rate or the intensity of infection is therefore of great importance for formulating corresponding control strategies and economic recovery policies. This paper discusses epidemic modeling methods based on the hierarchical Bayesian spatio-temporal Poisson model, including different settings of the data model, process model, and parameter model, the choice of prior distributions for the parameters, model selection, and so on. Based on this framework, we can analyze the spread and development of epidemics, study spatial differences between regions and the influence of other covariates on epidemic trends, and investigate the spatio-temporal dependence of virus transmission and the heteroscedasticity structure of spatial effects. The modeling methods discussed in this paper can provide a theoretical reference for the study of related problems. For parameter estimation, the Gibbs sampling algorithm under the default Markov chain Monte Carlo (MCMC) implementation in WinBUGS and OpenBUGS can be used.
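    A typical three-stage specification of the kind described above is, in illustrative notation (the particular linear predictor and priors are assumptions, not the paper's exact choices): $$y_{it}\mid\lambda_{it}\sim {\rm Poisson}(E_{it}\lambda_{it}),\qquad \log\lambda_{it}=x_{it}^{\top}\beta+u_{i}+v_{i}+\gamma_{t}+\delta_{it},$$ where $y_{it}$ and $E_{it}$ are the observed and expected counts in region $i$ at time $t$ (data model); $u_{i}$ is a spatially structured CAR effect, $v_{i}$ an unstructured effect, $\gamma_{t}$ a temporal trend, and $\delta_{it}$ a space-time interaction (process model); and each random effect carries its own prior and hyperprior (parameter model).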
    Reference | Related Articles | Metrics
    Fisher Information for Inverse Rayleigh Distribution in Ranked Set Sampling with Application to Parameter Estimation
    CHEN Meng, CHEN Wangxue, DENG Cuihong, YANG Rui
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 141-152.   DOI: 10.12341/jssms21498
    Abstract176)      PDF (335KB)(123)       Save
    In this article, the Fisher information about the scale parameter $\theta$ of the inverse Rayleigh distribution contained in samples obtained by simple random sampling and by ranked set sampling is studied. The numerical results show that a ranked set sample carries more information about $\theta$ than a simple random sample of equivalent size. We then use the simple random sample and the ranked set sample, respectively, to construct several optimal estimators of $\theta$, and the numerical results for these estimators are compared.
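    For reference, under one common parameterization of the inverse Rayleigh distribution (an assumption here, since the abstract does not fix the form), the simple random sampling Fisher information follows directly: $$f(x;\theta)=\frac{2\theta}{x^{3}}e^{-\theta/x^{2}},\; x>0\quad\Longrightarrow\quad \frac{\partial^{2}}{\partial\theta^{2}}\log f(x;\theta)=-\frac{1}{\theta^{2}}\quad\Longrightarrow\quad I_{\rm SRS}(\theta)=\frac{n}{\theta^{2}},$$ while the ranked set sampling information is obtained by summing the corresponding contributions of the order statistics used at each rank, which is what makes the comparison of the two designs possible.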
    Related Articles | Metrics
    Sub-Sampling Model Averaging Theory for Large Scale Data
    ZONG Xianpeng, WANG Tongtong
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 109-132.   DOI: 10.12341/jssms21524
    Abstract158)      PDF (487KB)(140)       Save
    With the development of the information age, how to mine useful information from massive data quickly and effectively is a new challenge. As an effective tool for large-scale data analysis, the subsampling method has attracted extensive attention from researchers. However, traditional subsampling methods usually do not take model uncertainty into account; when the assumed model is incorrect, the conclusions may be wrong. To address this problem, a subsampling model averaging estimator (SSMA estimator) is constructed from the sampled data. Theoretically, we prove that the SSMA estimator is an asymptotically unbiased and consistent estimator of the model averaging estimator based on the full data. In addition, we propose a weight choice criterion for the SSMA estimator based on the Mallows criterion of Hansen (2007), and derive the asymptotic optimality of the weight estimator. It is worth mentioning that the proofs of these theoretical properties account for the double randomness arising from the model and the sampling design. Finally, numerical analysis further demonstrates the effectiveness of the proposed method.
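    A generic Mallows-type weight selection of the kind referenced above can be sketched as follows (written for plain least-squares candidates on a subsample; a simplified stand-in for the SSMA criterion, not its exact form):

    import numpy as np
    from scipy.optimize import minimize

    def mallows_weights(y, designs, sigma2):
        """designs: list of candidate design matrices with the same rows as y."""
        preds, ks = [], []
        for X in designs:
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            preds.append(X @ beta)
            ks.append(X.shape[1])
        P, k = np.column_stack(preds), np.array(ks, float)

        def criterion(w):                 # C(w) = ||y - Pw||^2 + 2*sigma2*w'k
            resid = y - P @ w
            return resid @ resid + 2.0 * sigma2 * w @ k

        m = len(designs)
        cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
        res = minimize(criterion, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                       constraints=cons, method="SLSQP")
        return res.x, P @ res.x           # weights and the averaged fit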
    Related Articles | Metrics
    An Analysis of the Influence of Token Incentive Allocation Monopoly on User Knowledge Contribution in Blockchain-Based Knowledge Communities
    LI Zhihong, XIE Yongjing, XU Xiaoying
    Journal of Systems Science and Mathematical Sciences    2022, 42 (6): 1362-1374.   DOI: 10.12341/jssms22018ZX
    Abstract156)      PDF (834KB)(122)       Save
    The rapid development of blockchain token incentives provides a new perspective for addressing insufficient motivation for user content creation, but it still faces many challenges in practice. This paper takes Steemit, a blockchain-based knowledge community, as the research object. By collecting block data, it analyzes the situation and problems of the community's token incentive mechanism from two aspects, incentive equality and knowledge contribution efficiency, thereby revealing the problem of token incentive allocation monopoly. Moreover, the paper identifies the influence of this monopoly on users' knowledge contribution. The results show that the token incentive distribution in the community is monopolized by a small number of top users, and that the incentive distribution mechanism cannot effectively reflect users' knowledge contribution levels. The inequality of token incentive allocation leads to decreases in users' content production and content discovery.
    Reference | Related Articles | Metrics
    Portfolio Optimization Based on AdaBoost
    QIAN Long, WEI Jiang, ZHAO Huimin, NI Xuanming
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 271-286.   DOI: 10.12341/jssms21059
    Abstract150)      PDF (1184KB)(76)       Save
    This paper adopts the AdaBoost ensemble learning technique to boost the performance of the mean-variance (MV) strategy. First, the paper conducts an ambiguity decomposition of the quadratic cost function of expected utility, which shows that ensemble learning can boost the performance of portfolio strategies. Second, we parameterize the shrinkage intensity of the shrinkage estimators of the mean and covariance of returns so that it is driven by out-of-sample performance, and use iterative active-set and gradient descent algorithms to maximize the value function, constructing a parameterized MV strategy as the weak learner of the proposed AdaBoost.PT. Empirically, we use full-panel stock data covering nearly 25 years of Chinese A shares and nearly 40 years of US shares, examine the performance of the ensemble portfolio strategies in terms of Sharpe ratio, standard deviation, turnover, and maximum drawdown, and conduct hypothesis tests on the significance of the differences in Sharpe ratios. The empirical results show that the ensemble strategies based on the return shrinkage estimator are superior to the baseline strategies under all four indices and the statistical tests, and robustness tests based on industry portfolios show the same results.
    Reference | Related Articles | Metrics
    Cited: CSCD(1)
    Head Defect Recognition of GH159 Bolt After Hot Upsetting Based on Transfer Learning
    LI Lei, MA Yulin, HU Gang, KONG Xuefeng, YANG Jun, XU Yanwei
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 175-192.   DOI: 10.12341/jssms21526
    Abstract148)      PDF (4308KB)(100)       Save
    To accurately identify head defects of GH159 bolts after hot upsetting, this paper proposes a defect recognition method based on transfer learning, where datasets collected under scenes with different brightness serve as the source domain and target domain, respectively. First, considering that the conditional distribution within each domain contains multiple clusters, the K-means algorithm is used to cluster samples with the same defect and determine the cluster centers for that defect, and a novel measure of distribution discrepancy is then constructed on the cluster centers. Second, based on the distances between cluster centers and the distances between each cluster center and the samples belonging to that cluster, a new intra-class discrepancy is established to improve the computational efficiency of transfer learning. Finally, the optimization objective of the proposed method is built by minimizing the weighted sum of the constructed distribution discrepancy and intra-class discrepancy, so that defects can be identified effectively under scenes with different brightness. Because some parameters of the proposed method must be set in advance, a pseudo-accuracy is designed using a reverse verification strategy, and the parameters are set to the combination with the highest pseudo-accuracy. Using the collected dataset of head defects of GH159 bolts after hot upsetting, defect recognition analysis and application are carried out to verify the effectiveness of the proposed method.
    Related Articles | Metrics
    Improved Estimator Based on DCSBM in Respondent-Driven Sampling
    JIANG Yan, MENG Zhufeng, WANG Tianjia, LIU Xiaoyu
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 85-99.   DOI: 10.12341/jssms21627
    Abstract137)      PDF (646KB)(76)       Save
    In the big data era, respondent-driven sampling (RDS) is increasingly applied as a network sampling method for general populations. It offers a possible solution to problems in traditional sampling surveys, including the difficulty of obtaining usable sampling frames, respondents, or responses, and it allows a network survey to be probabilistic and to yield population parameter estimates within a certain error range. However, homophily always biases RDS estimates: when recommending a companion for the survey, a respondent is more likely to introduce someone with similar characteristics. To offer a practical solution, this paper assumes that the population follows a degree-corrected stochastic block model (DCSBM). We post-stratify the sample based on transition probabilities and propose an inverse-probability-weighted PS-IPW estimator. Through simulation, we compare the relative efficiency across network populations with varying degrees of homophily. In an empirical study, we select stratification variables for our sample based on the spectral characteristics of the block matrix, which further verifies the usability of RDS and the efficiency of the PS-IPW estimator.
    Related Articles | Metrics
    A Model-Free Design Method of Uncertainty Control System Based on Newton's Laws of Motion
    KAI Ping'an, SHEN Zhongli
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 206-223.   DOI: 10.12341/jssms20206
    Abstract133)      PDF (774KB)(50)       Save
    In engineering practice, it is of great significance to design controllers that are easy to use and maintain for uncertain industrial processes. Newton's laws are among the physical laws most familiar to engineers and technicians. Based on Newton's laws of motion, this paper proposes a model-free uncertainty control system and its design method. By constructing three state variables of the controlled system, i.e., position, velocity, and acceleration, and applying Kalman filter theory, an observer based on Newton's laws of motion (ONLM) is designed. A closed-loop compensator is then designed from the position, velocity, and acceleration, yielding a model-free control system (MFCNLM) that makes the system output track the desired output trajectory. Further, the paper puts forward a PID controller design method based on Newton's laws of motion, and analyzes both the MFCNLM control method and the PID control method from this perspective in control system design. The proposed design method does not need a mathematical model of the controlled object; the control engineer only needs to specify the expected transition time $T$ of the closed-loop control system. The results show that the proposed method has good control quality and robust performance for uncertain systems.
    Reference | Related Articles | Metrics
    Sample Size Design of Product Quality Sampling Survey Under the Background of Big Data
    ZHANG Xuan, ZHAO Jing, DING Wenxing
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 133-140.   DOI: 10.12341/jssms21654
    Abstract131)      PDF (434KB)(64)       Save
    Product quality sampling surveys are an important means for government quality supervision departments to understand the state of product quality, and over the years these surveys have accumulated a large amount of actual data. In this paper, the prior information provided by big data is effectively combined with sample size design in sampling surveys. The valuable information provided by big data is used as auxiliary information, and the survey objects are stratified by a clustering method. According to the characteristics of each stratum, the relationship between the relative error limits of the strata is determined by the series of preferred numbers, and the stratified random sampling sample size is then determined, which makes the sample size determination both scientific and practical. At the same time, by selecting different parameter levels for different strata of supervision objects, the effectiveness of supervision is improved under a limited cost.
    Related Articles | Metrics
    Research on Combination Portfolio Strategy with Equal Weight Adjustment Based on Certainty Equivalence
    WANG Zongrun, TAN Guoxi
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 287-303.   DOI: 10.12341/jssms21202
    Abstract130)      PDF (1144KB)(73)       Save
    Due to estimation error, the out-of-sample performance of the mean-variance investment strategy is not satisfactory. At the same time, the equal-weight investment strategy has attracted growing attention because it is free of estimation error and performs well out of sample, and it plays an important role in combination strategies. Therefore, this paper introduces the equal-weight strategy to adjust the original mean-variance asset allocation, determines the combination coefficients of the sub-strategies based on certainty equivalence to measure the performance of different investment strategies, constructs the final combined portfolio strategy, and compares it with other investment strategies. The results show that, among single investment strategies, the mean-variance strategy is superior in reducing standard deviation and controlling risk, while the equal-weight strategy helps increase the Sharpe ratio of the portfolio and improve returns. As for the combined portfolio strategy, on the one hand, the combination based on certainty equivalence can reduce the tail risk of the portfolio and help investors avoid extreme losses; on the other hand, it can significantly improve the Sharpe ratio and Sortino ratio of the portfolio and obtain higher risk-adjusted returns.
    Reference | Related Articles | Metrics
    Machine Learning Methods Investigate Liver Cancer Prediction Problem
    HU Xuemei, LI Jiali, JIANG Huifeng
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 417-433.   DOI: 10.12341/jssms21168
    Abstract128)      PDF (751KB)(75)       Save
    Liver cancer has the second-highest fatality rate among all cancers, and machine learning methods can improve the accuracy of disease prediction. In this paper we therefore apply machine learning methods to the pre-diagnosis of liver cancer and improve its prediction accuracy. First, 10 indicators affecting liver cancer are selected as predictors, and 579 liver cancer patients are divided into two groups: a training sample of 492 randomly selected patients and a testing sample of the remaining 87 patients. Then, the training sample is used to build six classifiers: logistic regression, $L_{2}$-penalized logistic regression, support vector machine (SVM), gradient boosting decision tree (GBDT), artificial neural network (ANN), and eXtreme Gradient Boosting (XGBoost). For logistic regression and $L_{2}$-penalized logistic regression, the Newton-Raphson algorithm is used to obtain iteratively reweighted least squares estimators of the model parameters, the probabilities of malignant and benign tumor cells are estimated, and the optimal threshold for predicting tumor traits is determined. Finally, the confusion matrix, sensitivity, and specificity are computed on the testing sample, and the ROC curve is drawn to evaluate prediction accuracy. The results show that, in terms of prediction accuracy, $L_{2}$-penalized logistic regression ranks first, SVM second, XGBoost third, logistic regression fourth, GBDT fifth, and ANN and random forest perform the worst.
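    The Newton-Raphson/IRLS update mentioned above can be sketched as follows (an illustration, not the authors' implementation; the penalty value, the convergence rule, and the penalization of the intercept are assumptions):

    import numpy as np

    def irls_logistic(X, y, lam=1.0, n_iter=25, tol=1e-8):
        """X: (n, p) design matrix (intercept column included); y in {0, 1}."""
        n, p = X.shape
        beta = np.zeros(p)
        for _ in range(n_iter):
            eta = X @ beta
            mu = 1.0 / (1.0 + np.exp(-eta))       # predicted probabilities
            W = mu * (1.0 - mu)                   # IRLS weights
            # Penalized Newton step: (X'WX + lam I) delta = X'(y - mu) - lam beta
            H = (X.T * W) @ X + lam * np.eye(p)
            g = X.T @ (y - mu) - lam * beta
            delta = np.linalg.solve(H, g)
            beta += delta
            if np.linalg.norm(delta) < tol:
                break
        return beta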
    Reference | Related Articles | Metrics
    Unified Precision Measure and Its Relationship with Shannon's Theorem
    LIU Yawen, MA Wenbo, DU Zifang
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 64-71.   DOI: 10.12341/jssms21482
    Abstract126)      PDF (451KB)(74)       Save
    Statistical inference usually measures estimation accuracy by two indicators, the confidence coefficient and the bias, but when both the confidence coefficient and the bias differ, comparing the accuracy of estimators becomes difficult. In this paper, a generally applicable dimensionless accuracy statistic is proposed. As a function of the confidence coefficient and the bias, it allows accuracy to be compared when the estimation bias and the degree of confidence differ at the same time. In addition, starting from the factors affecting estimation accuracy and their mechanism of action, a logical consistency between the sample size determination formula and Shannon's theorem in information theory is found, and a new explanation of the physical meaning of the sample size determination formula is given.
    Related Articles | Metrics
    A Design Approach for Asymmetric Sine Curve Motion Profile with Smoothed Jerk
    ZHU Qixin, JIN Yusheng, LIU Hongli, ZHU Yonghong
    Journal of Systems Science and Mathematical Sciences    2022, 42 (3): 555-567.   DOI: 10.12341/jssms21307
    Abstract125)      PDF (1539KB)(44)       Save
    In this paper, we propose a systematic design approach for an asymmetric sine motion profile with smoothed jerk, in order to reduce vibration and impact during the deceleration stage in motion control. Considering that the unsmooth part of the jerk in the closed-form sine motion profile would be retained under a simple asymmetric design, we remove the uniform deceleration stage to obtain a smoother deceleration stage, which increases movement stability and decreases residual vibration at the same time. The effectiveness of the proposed asymmetric motion profile is illustrated by simulations of the motion profiles and the dynamic model.
    Reference | Related Articles | Metrics
    Relationship Among Electricity Consumption, Industrial Structure and Economic Growth: Analysis Based on Panel Vector Autoregression (PVAR) Model of Gansu Urban Data
    LIU Chun, MA Chao, FENG Yongchun, WANG Zhuxiu
    Journal of Systems Science and Mathematical Sciences    2022, 42 (3): 599-613.   DOI: 10.12341/jssms21249
    Abstract122)      PDF (822KB)(182)       Save
    As electricity is a basic industry for economic development, electricity consumption is closely related to industrial structure and economic growth. Based on panel data from 2004 to 2018, this paper empirically tests the causal relationships among industrial structure, electricity consumption, and economic growth in Gansu Province using a panel vector autoregression (PVAR) model. The results show that industrial structure and economic growth in Gansu Province influence each other, that economic growth has a positive effect on electricity consumption, and that total fixed-asset investment has a positive effect on economic growth and industrial structure optimization. The findings suggest adhering to the economic development goals of steady growth and structural adjustment, vigorously developing clean energy such as wind and solar energy through policy guidance, improving the efficiency of energy investment and utilization, and guiding and supporting the development of the tertiary industry, so as to achieve the multiple goals of industrial structure optimization, energy conservation, emission reduction, and economic development in Gansu.
    Reference | Related Articles | Metrics
    A Method of Traffic Light Information Sharing Among Vehicles Based on YOLO Deep Learning Architecture
    LI Jiangtian, LUO Dingsheng
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 370-385.   DOI: 10.12341/jssms21395
    Abstract120)      PDF (3063KB)(58)       Save
    As a fundamental part of urban traffic, buses are much taller than other kinds of vehicles and may therefore block the view of drivers behind them who want to observe the traffic lights. When their view is blocked, rear drivers tend to blindly follow the bus, which may lead to illegal driving, rear-end collisions, and so on. To solve this problem, this paper proposes a traffic light information sharing system based on the YOLO deep learning framework. Specifically, the system collects road image data through a camera and improves image quality with preprocessing. The state-of-the-art YOLOv3 model is used to identify the position and color of traffic lights within the road image in a timely manner. Finally, the identification results are displayed on an electronic screen to remind the surrounding drivers. To further reduce the computing resources required by YOLOv3, a lightweight model based on YOLO Lite is developed, which runs smoothly on a CPU (i5-750). Experimental evaluation on real-world road images demonstrates the effectiveness of the proposed system.
    Reference | Related Articles | Metrics
    Estimation of Total Population: A Combined Method Based on Triple System Estimator and Ratio Estimator
    MENG Jie, YANG Guijun, FENG Guolei, HUA Mengke
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 35-49.   DOI: 10.12341/jssms21520
    Abstract119)      PDF (748KB)(78)       Save
    Due to the influence of various factors, census results inevitably deviate from the true total population. How to construct an estimator of the total population with good statistical properties and wide applicability, and thus accurately grasp population trends, is an important issue for government statistical work. In this paper, we examine the empirical approach of the UK Statistics Office to annual census-based population estimation and propose a population estimation method that combines triple system estimation with ratio estimation. The simulation results show that the new method can better overcome the correlation bias that arises when two systems are not independent, and that reasonable stratification of the population improves the accuracy of the total population estimate. The paper also puts forward the idea of constructing "a third set of demographic data resources", which is not only the data foundation for constructing and applying triple system estimation but also helps further promote the modernization of statistical work.
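    For orientation, the dual-system (capture-recapture) estimator that triple system estimation extends, together with the ratio estimator used alongside it, can be written in illustrative notation as $$\hat{N}_{\rm DSE}=\frac{n_{1}n_{2}}{m},\qquad \hat{Y}_{R}=\frac{\bar{y}}{\bar{x}}X,$$ where $n_{1}$ and $n_{2}$ are the counts recorded by the two systems, $m$ is the count recorded by both, $X$ is a known auxiliary total, and $\bar{y}/\bar{x}$ is the sample ratio; adding a third system allows the dependence between any two lists to be modelled rather than assumed away, which is the source of the bias correction described above.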
    Related Articles | Metrics
    Cited: CSCD(1)
    Tensor Response Regression Model Based on Tensor Decomposition and Its Parameter Estimation
    XU Changqing, SONG Shan, FENG Yan
    Journal of Systems Science and Mathematical Sciences    2022, 42 (3): 742-752.   DOI: 10.12341/jssms20513
    Abstract117)      PDF (429KB)(67)       Save
    In this paper, the tensor response regression model and the least square estimation of its coefficient tensor are studied. In order to improve the estimation accuracy of the model's coefficient tensor, CP decomposition and Tucker decomposition of the coefficient tensor of the model are carried out to construct two tensor response regression models. These two models can not only capture the spatial structure information of tensor data, but also greatly reduce the number of parameters to be estimated. Then, the parameter estimation algorithm corresponding to the model is given. Finally, Monte Carlo numerical experiments show that the estimation accuracy of coefficient tensor of the two improved regression models is significantly improved, and the estimation accuracy of coefficient tensor of tensor response regression model based on Tucker decomposition is the best.
    Reference | Related Articles | Metrics
    On MPPS Balanced Sampling for Multipurpose Survey
    YANG Guijun, SHEN Wenjing, LIANG Xinyu
    Journal of Systems Science and Mathematical Sciences    2022, 42 (3): 715-729.   DOI: 10.12341/jssms21298
    Abstract116)      PDF (547KB)(39)       Save
    Balanced sampling is a widely applied sampling method that uses auxiliary information to improve the sample structure. The inclusion probabilities of MPPS (multivariate probability proportional to size) sampling used in multipurpose surveys can be satisfied exactly in balanced sampling. Based on the properties of MPPS sampling and balanced sampling, this paper proposes MPPS balanced sampling for multipurpose surveys. The main idea is to use the auxiliary information of multiple survey variables both in determining the inclusion probabilities and in the random selection of the sample, so as to improve the accuracy of the estimator. Simulation analysis shows that MPPS balanced sampling outperforms equal-probability balanced sampling, MPPS systematic sampling, and MPPS Poisson sampling. The theoretical properties of the HT estimator under MPPS balanced sampling are also discussed, and for a randomly ordered population an applicable approximate variance and variance estimator are given. Finally, China's main livestock and poultry survey is taken as an example, and a multipurpose sampling survey scheme using MPPS balanced sampling is proposed, providing a new option for Chinese government statistical surveys.
    Reference | Related Articles | Metrics
    The Condition for the Recovery of Block Sparse Signal Based on Redundant Tight Frame via the Unconstrained $\ell_{2,1}$-Analysis Method
    LIU Yangshuo, LIU Hongyu, GE Huanmin
    Journal of Systems Science and Mathematical Sciences    2022, 42 (3): 509-527.   DOI: 10.12341/jssms21065
    Abstract116)      PDF (597KB)(67)       Save
    In this paper, we apply the convex decomposition of block sparse signals to analyze the unconstrained $\ell_{2,1}$-analysis model and develop conditions, based on the restricted isometry property under a tight frame, for the recovery of block sparse signals based on redundant tight frames via the unconstrained $\ell_{2,1}$-analysis method. We first develop two key lemmas based on the convex decomposition theory. Second, we establish a weak condition, in terms of the restricted isometry property under a tight frame, for the recovery of block sparse signals based on redundant tight frames via the unconstrained $\ell_{2,1}$-analysis method. Finally, numerical experiments are conducted to verify the recovery performance of the unconstrained $\ell_{2,1}$-analysis method.
    Reference | Related Articles | Metrics
    Extreme Conditional Quantile Estimation via Extrapolated Intermediate Ordinal Quantiles
    YANG Xiaorong, LI Lu, WU Shidi, XU Shizhan
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 434-461.   DOI: 10.12341/jssms21129
    Abstract116)      PDF (3914KB)(33)       Save
    In recent years, conditional quantile regression has been widely used in finance, biology, and medicine. Quantile regression is an appropriate and effective method to estimate the influence of covariates on the response at different quantile levels. However, due to the sparsity of the tail data, there is usually a large bias when quantile regression is used to estimate extreme conditional quantiles. In this paper, extreme value theory and quantile regression are combined to estimate extreme tail quantiles of the linear quantile regression model by extrapolating intermediate conditional quantiles. Based on a kernel function, bias-corrected tail estimators are constructed and their asymptotic normality is proved. Both numerical simulation and real data analysis show that the proposed method is stable, in the sense that the number of intermediate quantiles has little impact on the estimation of the extreme value index and the extreme conditional quantiles.
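    The extrapolation device underlying such approaches is the Weissman-type formula, shown here in a generic unconditional form (the paper's conditional, bias-corrected estimators refine it): $$\hat{q}(\tau_{n}')=\hat{q}(\tau_{n})\Big(\frac{1-\tau_{n}}{1-\tau_{n}'}\Big)^{\hat{\gamma}},\qquad \hat{\gamma}_{\rm Hill}=\frac{1}{k}\sum_{i=1}^{k}\log\frac{X_{(n-i+1)}}{X_{(n-k)}},$$ where $\tau_{n}$ is an intermediate quantile level, $\tau_{n}'\rightarrow 1$ is the extreme level of interest, and $\hat{\gamma}$ estimates the extreme value index.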
    Reference | Related Articles | Metrics
    Period Vehicle Routing Problem for Fault-Shared Bicycle Recycling with Uncertain Demand
    XU Yang, ZHOU Yanan, LAI Kin Keung, SU Bing, ZHANG Xin
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 337-354.   DOI: 10.12341/jssms21208
    Abstract113)      PDF (1339KB)(49)       Save
    In order to recover faulty shared bicycles in the urban road network in a timely and effective manner, the faulty bicycles scattered along the edges of the road network are clustered to form collection points. Considering the uncertainty of the recovery demand at the clustered collection points, a periodic vehicle routing model for recovery is established with the objective of minimizing the total distance. A cardinality-constrained robust optimization method is adopted, the uncertain recovery demand is described by a bounded interval, and a disturbance coefficient and a control coefficient are introduced to adjust the robustness and adaptability of the model. An approximation algorithm is designed to solve the model; its time complexity is proved, the upper and lower bounds of its approximation ratio are analyzed, and an example is used to verify the approximation ratio. The results show that the algorithm performs well. Finally, the effectiveness of the algorithm and the model is further verified by analyzing the influence of the disturbance coefficient and the control coefficient on the objective function and on the approximation ratio of the algorithm.
    Reference | Related Articles | Metrics
    Social Electricity Consumption Forecasting Based on Jackknife Model Averaging
    ZHANG Xiaoyuan, DENG Changrui, HUANG Yanmei, BAO Yukun
    Journal of Systems Science and Mathematical Sciences    2022, 42 (3): 588-598.   DOI: 10.12341/jssms21089
    Abstract113)      PDF (500KB)(54)       Save
    Accurate prediction of electricity consumption indicates the range of fluctuation of consumption over a future time window, which not only provides important information for power supply enterprises but also forms an important basis for power authorities to formulate relevant policies. In view of the complexity of electricity consumption fluctuations, jackknife model averaging (JMA) is employed for electricity consumption forecasting. The technique maximizes the use of available information by weighting the predictions of different models, thereby improving the accuracy of electricity consumption forecasts. The forecasting performance of the JMA method is evaluated and compared with seven benchmark models, using accuracy measures and the Diebold-Mariano test, on monthly electricity consumption datasets for China and the United States over different periods. The experimental results show that jackknife model averaging can effectively reduce the prediction error of a single electricity consumption forecasting model and is an effective forecasting approach.
    Reference | Related Articles | Metrics
    Pricing and Coordination Strategies in a Supply Chain with a Risk-Averse Retailer Under Rebate Model
    ZHANG Huimin, HOU Fujun, LOU Zhenkai
    Journal of Systems Science and Mathematical Sciences    2022, 42 (3): 626-640.   DOI: 10.12341/jssms21326
    Abstract113)      PDF (615KB)(71)       Save
    Rebate is a typical pull promotion strategy that has been widely adopted by enterprises. This paper constructs a supply chain consisting of a manufacturer who offers rebates and a risk-averse retailer. Considering consumers' differing sensitivity to rebates, we study the pricing strategies of the manufacturer and the risk-averse retailer by establishing a centralized decision model and two kinds of decentralized decision models, and further analyze the effects of the degree of risk aversion, the rebate sensitivity coefficient, and the redemption rate on the optimal decisions and profits. The results show that the rebate value and the expected profit of the whole supply chain are highest in the centralized scenario, followed by the retailer-led scenario, and lowest in the manufacturer-led scenario. No matter who dominates the channel, as the degree of risk aversion increases, the rebate value and the manufacturer's expected profit increase while the retailer's expected profit decreases. However, the effect of the degree of risk aversion on the optimal retail price depends on who dominates the channel and on the redemption rate. Furthermore, this paper designs a two-part tariff contract and finds that it can achieve supply chain coordination. Finally, numerical analysis verifies the above conclusions and yields some managerial insights.
    Reference | Related Articles | Metrics
    Research on Decision-Making and Coordination of Green Supply Chain Considering Fair Preference of Retailers
    WANG Jianhua, WANG Lin
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 386-397.   DOI: 10.12341/jssms20527
    Abstract112)      PDF (572KB)(91)       Save
    When the manufacturer holds the leading position in a green supply chain, the retailer is in a following position and pays attention to the distribution of profit in the supply chain; this behavior is defined as a fairness preference. To explore the impact of the retailer's fairness preference on the green supply chain, three supply chain game models under different situations are constructed to compare product greenness, market demand, and the profit and utility of each member. The study shows that the retailer's fairness preference can increase its share of channel profit, but it indirectly reduces the greenness of the product and decreases market demand, which has a larger negative impact on the manufacturer and the whole system. Therefore, a comprehensive supply chain coordination contract based on cost sharing and revenue sharing is proposed to mitigate the negative impact of the retailer's behavior on the supply chain. The analysis shows that an appropriate sharing ratio can not only encourage the manufacturer to actively carry out green research and development but also improve the utility and profit of both the manufacturer and the retailer. Finally, the validity of the contract is verified through numerical simulations.
    Reference | Related Articles | Metrics
    Local Exponential Synchronization of Neural Network with Saturated Impulsive Inputs
    HE Zhilong, CHEN Xiaokun
    Journal of Systems Science and Mathematical Sciences    2022, 42 (3): 542-554.   DOI: 10.12341/jssms21186
    Abstract110)      PDF (1281KB)(164)       Save
    In this paper, we design a kind of impulsive controller with actuator saturation to achieve chaotic synchronization of a drive-response neural network. First, we use the sector nonlinearity model method and the polyhedral representation method to handle the saturation nonlinearity of the system at the impulse instants. Second, by selecting an appropriate quadratic Lyapunov function and using mathematical induction, we obtain a local exponential synchronization criterion expressed in terms of linear matrix inequalities (LMIs). Finally, a numerical example is used to verify the validity of the conclusion.
    Reference | Related Articles | Metrics
    Research on the Tourist Volume Forecast of Scenic Spots Considering the Effect of Holidays-A Hybrid Prediction Method Based on Prophet-NNAR
    LI Yong, LI Yunpeng
    Journal of Systems Science and Mathematical Sciences    2022, 42 (6): 1537-1550.   DOI: 10.12341/jssms21561
    Abstract108)      PDF (983KB)(47)       Save
    Recently, incidents of tourists being stranded because attractions receive more visitors than they can handle have become common. Accurately and effectively forecasting the tourist volume of attractions and allocating resources rationally has therefore become a challenge for scenic spot managers. Because of external factors such as holidays, the time series of tourist volume usually exhibits nonlinear characteristics, which increases the practical difficulty of accurate forecasting. This study proposes a forecasting method for tourist volume that considers holiday effects, namely the Prophet-neural network autoregressive (NNAR) hybrid forecasting method. First, the Prophet model, which accounts for holiday effects, is used to predict the original tourist volume series. Then, the NNAR model is used to predict the residuals of the Prophet forecasts. Finally, the two results are combined as the final prediction of the Prophet-NNAR hybrid model. Taking the historical tourist volume data of the Jiuzhaigou scenic spot (January 1, 2013 to July 31, 2017) as the data source, the effectiveness of the Prophet-NNAR hybrid method is verified using the R software. The results show that the hybrid method is effective: its prediction performance is better not only than single-model methods (the Prophet model, the Prophet model without holiday effects, and the NNAR model) but also than the seasonal autoregressive integrated moving average and exponential smoothing models. Moreover, the Diebold-Mariano test confirms that the superiority of the Prophet-NNAR hybrid method over the other methods is statistically significant.
    Reference | Related Articles | Metrics
    An Evaluation Model of Credit Rating Migration Risk with Stochastic Asset Volatility
    LIANG Jin, ZHOU Huihui
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 304-317.   DOI: 10.12341/jssms21346
    Abstract107)      PDF (686KB)(37)       Save
    In this paper, corporate bonds are used to assess credit rating migration risk under stochastic volatility. Different from previous models for evaluating credit rating migration, this paper considers for the first time migration risk with stochastic volatility. According to the size of the company's assets, the company is assigned a high or low credit rating. It is assumed that the company's asset value follows the Heston stochastic volatility model, and that the asset volatility reverts to different long-run mean levels under different credit ratings. By valuing corporate bonds under such asset dynamics, the risk of credit rating migration with stochastic volatility is evaluated. To build the complete evaluation model, we introduce a special zero-coupon bond to hedge the risk caused by the randomness of the volatility. The partial differential equation for the corporate bond value is then derived, with continuous first-order partial derivatives with respect to assets across the credit rating migration boundary. Under the new model, the numerical solution for the corporate bond value is obtained using the ADI difference method, and the influence of the parameters and their financial significance are analyzed.
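    The Heston dynamics referred to above are standard; in illustrative notation, with the long-run variance level switching with the credit regime, $$dS_{t}=rS_{t}dt+\sqrt{v_{t}}S_{t}dW_{t}^{1},\qquad dv_{t}=\kappa(\theta_{i}-v_{t})dt+\sigma\sqrt{v_{t}}dW_{t}^{2},\qquad d\langle W^{1},W^{2}\rangle_{t}=\rho dt,$$ where $\theta_{i}$ denotes the long-run variance level in credit regime $i$ (high or low rating) and the rating switches when the asset value $S_{t}$ crosses the migration boundary; the boundary conditions and the hedging argument are the paper's contribution and are not reproduced here.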
    Reference | Related Articles | Metrics
    How Do Different Crude Oil Shock Factors Affect the Chinese Stock Market—Evidence from the Firm-Level Data
    ZHAO Wanli, FAN Ying, JI Qiang, ZHANG Dayong
    Journal of Systems Science and Mathematical Sciences    2022, 42 (2): 255-270.   DOI: 10.12341/jssms21390
    Abstract107)      PDF (1406KB)(54)       Save
    China depends increasingly on international crude oil markets, and its economy is therefore highly sensitive to oil shocks. Using data on listed firms, this paper explores stock market responses to three types of oil shocks: aggregate supply shocks, aggregate demand shocks, and oil-specific demand shocks. We find that oil-specific demand shocks have the largest short-term impact, followed by aggregate demand shocks; both impacts are negative, while supply-side shocks have a positive but insignificant impact. We also find clear industry heterogeneity in the responses to oil shocks. In addition, oil shocks have stronger long-term impacts on listed firms in China. While the industry heterogeneity remains, both aggregate demand and aggregate supply shocks have positive long-term impacts on average stock prices, whereas the impact of oil-specific demand shocks goes in the opposite direction.
    Reference | Related Articles | Metrics
    Cited: CSCD(1)
    A New Method for Processing Interference Data Based on Propensity Score Matching
    LIU Shengnan, LIU Wenjun, ZOU Guohua
    Journal of Systems Science and Mathematical Sciences    2022, 42 (1): 153-174.   DOI: 10.12341/jssms21660
    Abstract101)      PDF (610KB)(63)       Save
    It is very common that data are mixed with some interference data. For the easily recognized interference data, there are many methods of analysis in literature. However, if the interference data is difficult to identify, then the traditional methods will not be valid. Propensity scores can be used to characterize multidimensional data, and data with the same propensity values are similar and likely come from the same population. Therefore, by applying the propensity score matching, this paper proposes a new method to detect interference data. Two algorithms are developed for this purpose. One is used to obtain the proportion of real data in the original data, and estimate the population mean of real data. The second aims to extract the pseudo-real data and conduct modeling analysis. An extensive simulation study shows good performance of the proposed algorithms.
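    As a generic illustration of the propensity score matching building block (not the paper's interference-detection algorithms; it assumes a binary indicator separating a reference set from suspect records is available, and all names are hypothetical):

    # Logistic propensity model plus 1-to-1 nearest-neighbour matching on the score.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def propensity_match(X, group):
        """X: (n, p) covariates; group: 1 for suspect records, 0 for reference.
        Returns pairs of (suspect_index, matched_reference_index)."""
        ps = LogisticRegression(max_iter=1000).fit(X, group).predict_proba(X)[:, 1]
        suspect = np.flatnonzero(group == 1)
        reference = np.flatnonzero(group == 0)
        nn = NearestNeighbors(n_neighbors=1).fit(ps[reference, None])
        _, j = nn.kneighbors(ps[suspect, None])
        return list(zip(suspect, reference[j.ravel()]))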
    Related Articles | Metrics
    Research on Subsidy Performance of Green Supply Chain Considering Consumer Preference
    YU Xiaohui, ZHANG Zhiqiang, YU Yanan
    Journal of Systems Science and Mathematical Sciences    2022, 42 (4): 818-831.   DOI: 10.12341/jssms21459T
    Abstract101)      PDF (719KB)(71)       Save
    To improve green supply chain performance, the government gives retailers green product subsidies and manufacturers give retailers green promotion subsidies. Based on a Stackelberg game analysis of a two-echelon supply chain composed of a manufacturer and a retailer, we explore the impact of green product subsidies and green promotion subsidies on supply chain performance, taking consumers' green preference into account. It is found that subsidies have a positive effect on product greenness and profit. When consumer preference is not high or the product greenness coefficient is low, green product subsidies are more suitable; when consumers have a high preference or green promotion is efficient, green promotion subsidies are more suitable. As an external subsidy to the supply chain, the regulating ability of the green product subsidy is limited, whereas the green promotion subsidy, as an internal subsidy within the supply chain, can produce higher performance.
    Reference | Related Articles | Metrics