Recently Published

NHANES Vanilla RNN
A Recurrent Neural Network (RNN) is a model that processes functional data one point at a time. At each time step t it computes a hidden state h(t) from the current input x(t) and the previous hidden state h(t-1), each multiplied by a learned weight matrix (W(x) and W(h)), plus a learned bias term b, with the result passed through the tanh activation (written σ below):

h(t) = σ(W(h) · h(t-1) + W(x) · x(t) + b)

Scalar, non-functional covariates are fed in as a flat vector and passed through a ReLU activation, since their relationship to the outcome is simpler.

We trained the model with a validation split of 0.2: the RNN cycles through 80% of the training data, updating the model's weights as it goes, then uses the remaining 20% of the training data to measure performance via mean squared error (MSE) and mean absolute error (MAE). The weights are adjusted by backpropagation, and one full pass through the training data is called an epoch. Training uses the Adam (Adaptive Moment Estimation) optimizer, which performs particularly well on noisy or high-dimensional data. Our model typically stopped improving after about 8 epochs, so we set the RNN to run for 10, a process loosely analogous to 10-fold cross-validation.

Variable importance is computed with a permutation-based loop. For each variable, its values are randomly shuffled across all subjects, breaking any true association with the outcome. The modified data is passed through the trained model, and the increase in root mean squared error (RMSE) is recorded. A larger increase indicates greater importance of that variable.
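The hidden-state update above can be sketched directly in NumPy. The layer sizes, random weights, and sequence below are hypothetical stand-ins, not the fitted NHANES model:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden = 3, 4                          # hypothetical sizes
W_h = 0.1 * rng.normal(size=(n_hidden, n_hidden))    # recurrent weights W(h)
W_x = 0.1 * rng.normal(size=(n_hidden, n_features))  # input weights W(x)
b = np.zeros(n_hidden)                               # learned bias term

def rnn_step(h_prev, x_t):
    """One recurrent update: h(t) = tanh(W(h) h(t-1) + W(x) x(t) + b)."""
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

# Feed a short functional sequence through the cell one point at a time;
# the final h summarises the whole sequence.
sequence = rng.normal(size=(5, n_features))
h = np.zeros(n_hidden)
for x_t in sequence:
    h = rnn_step(h, x_t)
```

Because tanh is bounded, every entry of the hidden state stays strictly between -1 and 1, which keeps the recurrence numerically stable across long sequences.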
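The 80/20 validation split and the two error metrics can be illustrated with a small NumPy sketch. The data and the least-squares fit are hypothetical placeholders for the trained RNN, used only to produce predictions to score:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Validation split of 0.2: train on the first 80% of the training data,
# score on the held-out final 20%.
n_train = int(len(X) * 0.8)
X_tr, X_val = X[:n_train], X[n_train:]
y_tr, y_val = y[:n_train], y[n_train:]

# Stand-in predictions from a least-squares fit on the training split
# (a placeholder for the trained RNN's predictions).
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
pred = X_val @ w

mse = np.mean((y_val - pred) ** 2)   # mean squared error
mae = np.mean(np.abs(y_val - pred))  # mean absolute error
```

Both metrics are computed only on the held-out 20%, so they estimate out-of-sample performance; tracking them across epochs is what showed improvement flattening around epoch 8.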
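The permutation-importance loop can be sketched as follows. The synthetic outcome and the linear `predict` function are hypothetical stand-ins for the trained RNN; only the shuffle-and-rescore logic reflects the procedure described above:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Stand-in for the trained model's predictions; weights are hypothetical.
coef = np.array([3.0, 0.5, 0.0, 0.0])
def predict(data):
    return data @ coef

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

baseline = rmse(y, predict(X))

importance = {}
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffle one variable across all subjects, breaking its true
    # association with the outcome, then rescore the model.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance[j] = rmse(y, predict(X_perm)) - baseline  # RMSE increase
```

Variables the model relies on heavily (here, feature 0) show a large RMSE increase when shuffled, while variables the model ignores show an increase near zero.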
Acayucan weeks 24-27
Alvarado weeks 24-27
Veracruz weeks 24-27
Orizaba weeks 24-27
Cordoba weeks 24-27
Analysis of the properties of point estimators
Statistical Analysis final report
Coatepec 24-27