Summary: The key idea is to randomly drop units, along with their connections, from the neural network during training.
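The mechanism described above can be sketched in a few lines. This is a minimal, illustrative implementation of "inverted" dropout (the function name and scaling convention are my own choices, not the paper's reference code): during training each unit is kept with probability 1 - p and rescaled so the expected activation matches test time, when the layer is the identity.

```python
import random

def dropout(activations, p=0.5, training=True, rng=None):
    """Inverted dropout on a list of unit activations.

    During training, each unit is zeroed with probability p; survivors
    are scaled by 1 / (1 - p) so the expected value is unchanged.
    At test time the input passes through untouched.
    """
    if not training or p == 0.0:
        return list(activations)
    rng = rng or random.Random()
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

With p = 0.5, roughly half the units in a layer are silenced on each forward pass, which discourages units from co-adapting.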
Most, but not all, of these 20 papers, including the top 8, are on the topic of deep learning. Summary: Training deep neural networks is complicated by the fact that the distribution of each layer's inputs changes during training as the parameters of the previous layers change.
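Batch normalization addresses this by normalizing each feature over the mini-batch before applying a learned scale and shift. The sketch below shows the training-time transform for a single feature (the function and parameter names are mine; running statistics for inference and the backward pass are omitted):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize one feature across a mini-batch to zero mean and unit
    variance, then apply a learned scale (gamma) and shift (beta).
    eps guards against division by zero for near-constant features.
    """
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]
```

Because each layer then sees inputs with a stable distribution regardless of how earlier layers' parameters move, much higher learning rates become usable.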
HIC (highly influential citations), which presents how publications build upon and relate to each other, is the result of identifying meaningful citations. Summary: We show how an ensemble of regression trees can be used to estimate the face's landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high-quality predictions.
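To make the idea concrete, here is a heavily simplified, hypothetical sketch of the cascaded-regression inference step for a single landmark. Each stage holds stump regressors that threshold the difference of two pixel intensities sampled relative to the current estimate and, when they fire, vote a correction; the real method uses full regression trees over many landmarks, but the shape of the computation is the same.

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def cascade_predict(image, start, stages):
    """Refine one landmark (x, y) with a cascade of stump regressors.

    image  : 2D list of pixel intensities, indexed image[row][col].
    start  : initial (x, y) estimate (e.g. the mean landmark position).
    stages : list of stages; each stage is a list of stumps
             (offset1, offset2, threshold, (dx, dy)).
    Each stump compares two pixels sampled at offsets from the current
    estimate and adds its correction vector if the difference exceeds
    the threshold; corrections within a stage are summed.
    """
    x, y = start
    h, w = len(image), len(image[0])
    for stage in stages:
        dx = dy = 0.0
        for (o1, o2, thresh, (cx, cy)) in stage:
            p1 = image[clamp(int(y + o1[1]), 0, h - 1)][clamp(int(x + o1[0]), 0, w - 1)]
            p2 = image[clamp(int(y + o2[1]), 0, h - 1)][clamp(int(x + o2[0]), 0, w - 1)]
            if p1 - p2 > thresh:
                dx += cx
                dy += cy
        x, y = x + dx, y + dy
    return x, y
```

Because each stump reads only two pixels, inference is a handful of comparisons per stage, which is what makes super-realtime performance achievable.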
A survey on feature selection methods, by Chandrashekar, G. The criteria used to select the 20 top papers are citation counts from three academic sources: scholar.
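One of the simplest families the survey covers is filter methods, which score features independently of any learner. Below is a minimal sketch of that idea (the function names and the choice of absolute Pearson correlation as the score are my own, for illustration): rank each feature by its correlation with the target and keep the top k.

```python
import math

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences (0.0 if either
    sequence is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_k_best(X, y, k):
    """Filter-style selection: score each column of X by its absolute
    correlation with y and return the indices of the top k columns."""
    scores = [abs(corr([row[j] for row in X], y)) for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)[:k]
```

Wrapper and embedded methods, also treated in the survey, instead evaluate feature subsets with (or inside) the learning algorithm itself.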