Although most of the points have already been discussed, I would like to highlight one more cause of NaN that is missing.
tf.estimator.DNNClassifier(
hidden_units, feature_columns, model_dir=None, n_classes=2, weight_column=None,
label_vocabulary=None, optimizer='Adagrad', activation_fn=tf.nn.relu,
dropout=None, config=None, warm_start_from=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, batch_norm=False
)
By default the activation function is ReLU. If an intermediate layer produces negative values, ReLU maps them to 0; once a unit's output is stuck at 0 its gradient is 0 as well, so those weights stop updating and training gradually stalls (the "dying ReLU" problem).
I observed that LeakyReLU was able to solve such problems.
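For illustration, here is a minimal sketch of swapping in LeakyReLU via the activation_fn argument (the feature column and hidden-unit sizes are placeholders, not taken from any particular setup):

import tensorflow as tf

# Placeholder feature column; replace with your actual input features.
feature_columns = [tf.feature_column.numeric_column("x", shape=[10])]

classifier = tf.estimator.DNNClassifier(
    hidden_units=[64, 32],
    feature_columns=feature_columns,
    n_classes=2,
    # LeakyReLU keeps a small gradient for negative inputs instead of
    # zeroing them out, so units are less likely to "die".
    activation_fn=tf.nn.leaky_relu,
)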