In this work, a feasible and low-cost approach is proposed for level measurement in multiphase systems inside tanks used for petroleum-derived oil production. The developed level-sensor system consisted of light-emitting diodes (LEDs), a light-dependent resistor (LDR), and a low-cost microprocessor. Two types of oil were tested: AW460 and AW68. Linear regression (LR) was applied to 11 scenarios and showed a direct correlation between the oil level and the sensor's output. The measurements with AW460 oil exhibited perfectly linear behavior, whereas AW68 yielded a higher standard deviation, reflecting nonlinearity in several scenarios. To overcome this nonlinear effect, two machine learning (ML) techniques were tested: K-nearest neighbors regression (KNNR) and multilayer perceptron (MLP) neural network regression. The highest correlation coefficient (R²) and the lowest root mean squared error (RMSE) were obtained for AW68 with the MLP. Therefore, the MLP was used simultaneously for regression (level prediction for water, oil, and emulsion) and classification (identifying the type of oil in the reservoir). The proposed network exhibited high accuracy for oil identification (99.801%) and improved linear performance in regression (R² = 0.9989 and RMSE = 0.065).
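For illustration only, the joint regression/classification MLP described above can be sketched as a small two-headed network. The number of sensor inputs, layer widths, and synthetic training data below are assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code) of an MLP that jointly predicts
# liquid levels (regression) and the oil type (classification).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 8  # assumed number of LED/LDR sensor readings per sample

inputs = keras.Input(shape=(n_features,), name="ldr_readings")
x = layers.Dense(32, activation="relu")(inputs)
x = layers.Dense(32, activation="relu")(x)

# Regression head: levels of water, oil, and emulsion.
level_out = layers.Dense(3, name="levels")(x)
# Classification head: oil type (AW460 vs. AW68).
oil_out = layers.Dense(2, activation="softmax", name="oil_type")(x)

model = keras.Model(inputs, [level_out, oil_out])
model.compile(
    optimizer="adam",
    loss={"levels": "mse", "oil_type": "sparse_categorical_crossentropy"},
    metrics={"levels": keras.metrics.RootMeanSquaredError(),
             "oil_type": "accuracy"},
)

# Hypothetical training data standing in for the real sensor measurements.
X = np.random.rand(100, n_features)
y_levels = np.random.rand(100, 3)
y_oil = np.random.randint(0, 2, size=(100,))
model.fit(X, {"levels": y_levels, "oil_type": y_oil}, epochs=10, verbose=0)
```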
Reducing the use of pesticides through early visual detection of diseases is important in precision agriculture. Because of the color similarity between potato-plant diseases, narrow-band hyper-spectral imaging is required. Payload constraints on unmanned aerial vehicles require a reduction of the number of spectral bands. Therefore, we present a methodology for per-patch classification combined with hyper-spectral band selection. In controlled experiments performed on a set of individual leaves, we measure the performance of five classifiers and three dimensionality-reduction methods with three patch sizes. With the best-performing classifier, an error rate of 1.5% is achieved for distinguishing two important potato-plant diseases.
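As an illustration of the per-patch pipeline, the sketch below pairs one dimensionality-reduction method with one classifier of the kinds evaluated above; the patch count, band count, and labels are assumed placeholders rather than values from the paper.

```python
# Hedged sketch: reduce the spectral dimension of per-patch features,
# then classify each patch as one of two diseases.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

n_patches, n_bands = 500, 100            # assumed hyperspectral cube statistics
X = np.random.rand(n_patches, n_bands)   # mean spectrum per leaf patch (synthetic)
y = np.random.randint(0, 2, n_patches)   # disease A vs. disease B (assumed labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One reduction/classifier pair; the paper compares several such combinations.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("patch error rate:", 1.0 - clf.score(X_test, y_test))
```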
Routine immunization (RI) of children is the most effective and timely public health intervention for decreasing child mortality rates around the globe. Pakistan, a low- and middle-income country (LMIC), has one of the highest child mortality rates in the world, largely due to vaccine-preventable diseases (VPDs). To improve RI coverage, a critical need is to identify potential RI defaulters at an early stage, so that appropriate interventions can be targeted at the population at risk of missing their scheduled vaccine uptakes. In this paper, a machine learning (ML) based predictive model is proposed to predict defaulting and non-defaulting children at upcoming immunization visits and to examine the effect of its underlying contributing factors. The predictive model uses data obtained from the Paigham-e-Sehat study, which contains the immunization records of 3,113 children. The model is designed to balance accuracy, specificity, and sensitivity, so that its outcomes remain practically relevant to the problem addressed. It is further optimized through selection of significant features and removal of data bias. Nine machine learning algorithms were applied to predict defaulting children at the next immunization visit. The results show that the random forest model achieves the best accuracy of 81.9%, with 83.6% sensitivity and 80.3% specificity. The main determinants of vaccination coverage were found to be vaccine coverage at birth, parental education, and the socio-economic conditions of the defaulting group. This information can assist relevant policy makers in taking proactive and effective measures to develop evidence-based, targeted, and timely interventions for defaulting children.
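A minimal sketch of the random-forest evaluation reported above is given below; the features and synthetic records are illustrative assumptions and do not reproduce the Paigham-e-Sehat data.

```python
# Hedged sketch (not the study's implementation): a random-forest
# defaulter/non-defaulter classifier evaluated on accuracy, sensitivity,
# and specificity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical feature encodings: vaccine coverage at birth, parental
# education, socio-economic score.
X = rng.random((3113, 3))
y = rng.integers(0, 2, 3113)  # 1 = defaulter, 0 = non-defaulter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = rf.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy   :", accuracy_score(y_te, pred))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```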