In Section 3.4 of their article, the authors explain how they handle missing values when searching for the best candidate split during tree growing. Specifically, they learn a default direction for each node whose splitting feature has missing values in the current instance set. At prediction time, if the prediction path goes through such a node and the feature value is missing, the default direction is followed.
However, the prediction phase would break down when the feature value is missing and the node does not have a default direction (and this can occur in many scenarios). In other words, how do they associate a default direction with every node, including those whose splitting feature has no missing values in the active instance set at training time?
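
For concreteness, here is a minimal sketch of the prediction-time traversal I have in mind (not the authors' actual implementation; the per-node `default_left` attribute is hypothetical):

```python
import math

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None,
                 default_left=True, value=None):
        self.feature = feature            # index of the splitting feature (None for leaves)
        self.threshold = threshold        # split threshold
        self.left, self.right = left, right
        self.default_left = default_left  # learned default direction for missing values
        self.value = value                # leaf prediction

def predict(node, x):
    """Route an instance x (a list of floats, NaN = missing) to a leaf."""
    while node.feature is not None:
        v = x[node.feature]
        if math.isnan(v):
            # Missing value: follow this node's default direction.
            node = node.left if node.default_left else node.right
        else:
            node = node.left if v < node.threshold else node.right
    return node.value
```

My question is how `default_left` gets set for nodes that never saw a missing value for their splitting feature during training.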

My understanding of the algorithm is that, when no missing data is available at training time, a default direction is assigned based on the distribution of the training data, i.e. it simply goes in the direction taken by the majority of training samples at that node. In practice I'd say it's a bad idea to have missing data in your data set at all. Generally, the model will perform better if the data scientist cleans the data set up in a smart way before training the GBM algorithm; for example, replace all NAs with the mean/median value of the feature, or impute the value by finding the training point's K nearest neighbors and averaging their values for that feature.
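
As a rough sketch of the kind of preprocessing I mean, using scikit-learn's imputers (the toy data here is made up):

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

# Two features (e.g. age, income), with some values missing.
X = np.array([[25.0, 50000.0],
              [40.0, np.nan],
              [np.nan, 62000.0],
              [31.0, 58000.0]])

# Option 1: replace each NaN with the column median.
X_median = SimpleImputer(strategy="median").fit_transform(X)

# Option 2: replace each NaN with the average of the K nearest neighbors,
# where distances are computed on the non-missing features.
X_knn = KNNImputer(n_neighbors=2).fit_transform(X)
```

Fit the imputer on the training set only and reuse it at test time, so the same fill-in rule applies to both.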
I'm also wondering why data would be missing at test time but not at training time. That seems to imply the distribution of your data is evolving over time. An algorithm that can be trained incrementally as new data arrives, like a neural net, may do better in your use case. Or you could always build a specialist model. For example, say the missing feature in your model is credit score, because some people may not authorize access to their credit report. Why not train one model that uses credit score and one that does not? The model trained without credit score may be able to recover much of the lift credit was providing through other correlated features.
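
A minimal sketch of that two-model idea (the `credit_col` index is hypothetical, training data is assumed complete, and I'm using scikit-learn's gradient boosting as a stand-in for whatever GBM you prefer):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_specialists(X, y, credit_col):
    """Train one model on all features and one that excludes credit score."""
    full = GradientBoostingClassifier().fit(X, y)
    X_no_credit = np.delete(X, credit_col, axis=1)
    no_credit = GradientBoostingClassifier().fit(X_no_credit, y)
    return full, no_credit

def predict_row(full, no_credit, x, credit_col):
    """Route to the specialist when the credit score is missing at test time."""
    if np.isnan(x[credit_col]):
        return no_credit.predict(np.delete(x, credit_col).reshape(1, -1))[0]
    return full.predict(x.reshape(1, -1))[0]
```

You pay the cost of maintaining two models, but neither one ever has to handle a NaN itself.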