[13] Goldfarb D. A family of variable-metric methods derived by variational means[J]. Mathematics of Computation, 1970, 24(109): 23-26.

[14] Shanno D F. Conditioning of quasi-Newton methods for function minimization[J]. Mathematics of Computation, 1970, 24(111): 647-656.

[15] Liu D C, Nocedal J. On the limited memory BFGS method for large scale optimization[J]. Mathematical Programming, 1989, 45(1-3): 503-528.

[16] Abramson N, Braverman D, Sebestyen G. Pattern recognition and machine learning[M]. Academic Press, 1963.

[17] He H, Garcia E A. Learning from imbalanced data[J]. IEEE Transactions on Knowledge and Data Engineering, 2009, 21(9): 1263-1284.

[18] Xu B, Wang N, Chen T, et al. Empirical evaluation of rectified activations in convolutional network[J]. arXiv preprint arXiv:1505.00853, 2015.

[19] Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.

[20] Kim Y. Convolutional neural networks for sentence classification[C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014: 1746-1751.

[21] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016: 770-778.

[22] Liu P, Qiu X, Huang X. Recurrent neural network for text classification with multi-task learning[C]// Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI). 2016: 2873-2879.

[23] Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.

[24] Chung J, Gulcehre C, Cho K, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[J]. arXiv preprint arXiv:1412.3555, 2014.

[25] Le Q V, Jaitly N, Hinton G E. A simple way to initialize recurrent networks of rectified linear units[J]. arXiv preprint arXiv:1504.00941, 2015.

[26] Gers F A, Schmidhuber J, Cummins F. Learning to forget: continual prediction with LSTM[C]// 9th International Conference on Artificial Neural Networks (ICANN). 1999: 850-855.

[27] Gers F A, Schmidhuber J. Recurrent nets that time and count[C]// IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN). IEEE, 2000, 3: 189-194.

[28] Weston J, Chopra S, Bordes A. Memory networks[J]. arXiv preprint arXiv:1410.3916, 2014.

[29] Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014.

[30] Xu K, Ba J, Kiros R, et al. Show, attend and tell: neural image caption generation with visual attention[C]// Proceedings of the 32nd International Conference on Machine Learning (ICML). 2015: 2048-2057.

[31] Mnih V, Kavukcuoglu K, Silver D, et al. Playing Atari with deep reinforcement learning[J]. arXiv preprint arXiv:1312.5602, 2013.

[32] Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks[C]// Advances in Neural Information Processing Systems. 2014: 2672-2680.

[33] Goodfellow I. NIPS 2016 tutorial: generative adversarial networks[J]. arXiv preprint arXiv:1701.00160, 2016.

[34] Arjovsky M, Chintala S, Bottou L. Wasserstein GAN[J]. arXiv preprint arXiv:1701.07875, 2017.

[35] Arjovsky M, Bottou L. Towards principled methods for training generative adversarial networks[J]. arXiv preprint arXiv:1701.04862, 2017.

[36] Denton E L, Chintala S, Fergus R. Deep generative image models using a Laplacian pyramid of adversarial networks[C]// Advances in Neural Information Processing Systems. MIT Press, 2015: 1486-1494.

[37] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks[J]. arXiv preprint arXiv:1511.06434, 2015.