MATEC Web Conf. Volume 176, 2018
2018 6th International Forum on Industrial Design (IFID 2018)
Article Number: 01024
Number of pages: 6
Section: Intelligent Design and Computer Technology
DOI: https://doi.org/10.1051/matecconf/201817601024
Published online: 02 July 2018
Open Access
  1. Bahdanau, D., Cho, K. and Bengio, Y., 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473.
  2. Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H. and Bengio, Y., 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In EMNLP 2014, Conference on Empirical Methods in Natural Language Processing (pp. 1724-1734). ACL.
  3. Feng, M., Xiang, B., Glass, M. R., Wang, L. and Zhou, B., 2015. Applying deep learning to answer selection: A study and an open task. In ASRU 2015, IEEE Workshop on Automatic Speech Recognition and Understanding (pp. 813-820). IEEE.
  4. Hochreiter, S. and Schmidhuber, J., 1997. Long Short-Term Memory. Neural Computation, 9(8), pp. 1735-1780.
  5. Heilman, M. and Smith, N. A., 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In HLT-NAACL 2010, Annual Conference of the North American Chapter of the Association for Computational Linguistics (pp. 1011-1019). ACL.
  6. Luong, M. T., Pham, H. and Manning, C. D., 2015. Effective Approaches to Attention-based Neural Machine Translation. In EMNLP 2015, Conference on Empirical Methods in Natural Language Processing. ACL.
  7. Mnih, V., Heess, N., Graves, A. and Kavukcuoglu, K., 2014. Recurrent models of visual attention. In NIPS 2014, 27th International Conference on Neural Information Processing Systems (pp. 2204-2212). MIT Press.
  8. Mueller, J. and Thyagarajan, A., 2016. Siamese recurrent architectures for learning sentence similarity. In AAAI-16, 30th AAAI Conference on Artificial Intelligence (pp. 2786-2792). AAAI Press.
  9. Neculoiu, P., Versteegh, M. and Rotaru, M., 2016. Learning Text Similarity with Siamese Recurrent Networks. In ACL 2016, 1st Workshop on Representation Learning for NLP. ACL.
  10. Nie, L., Wei, X., Zhang, D., Wang, X., Gao, Z. and Yang, Y., 2017. Data-Driven Answer Selection in Community QA Systems. IEEE Transactions on Knowledge and Data Engineering, 29(6), pp. 1186-1198.
  11. Santos, C. D., Tan, M., Xiang, B. and Zhou, B., 2016. Attentive Pooling Networks. arXiv preprint.
  12. Severyn, A. and Moschitti, A., 2013. Automatic feature engineering for answer selection and extraction. In EMNLP 2013, Conference on Empirical Methods in Natural Language Processing (pp. 458-467). ACL.
  13. Socher, R., Perelygin, A., Wu, J. Y., Chuang, J., Manning, C. D., Ng, A. Y. and Potts, C., 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP 2013, Conference on Empirical Methods in Natural Language Processing (pp. 1631-1642). ACL.
  14. Tan, M., Santos, C. D., Xiang, B. and Zhou, B., 2016. Improved Representation Learning for Question Answer Matching. In ACL 2016, 54th Annual Meeting of the Association for Computational Linguistics (pp. 464-473). ACL.
  15. Wang, B., Liu, K. and Zhao, J., 2016. Inner Attention based Recurrent Neural Networks for Answer Selection. In ACL 2016, 54th Annual Meeting of the Association for Computational Linguistics (pp. 1288-1297). ACL.
  16. Wang, D. and Nyberg, E., 2015. A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering. In ACL-IJCNLP 2015, 53rd Annual Meeting of the Association for Computational Linguistics and 7th International Joint Conference on Natural Language Processing (pp. 707-712). ACL.
  17. Wang, M. and Manning, C. D., 2010. Probabilistic tree-edit models with structured latent variables for textual entailment and question answering. In COLING 2010, 23rd International Conference on Computational Linguistics (pp. 1164-1172). Coling 2010 Organizing Committee.
  18. Wang, S. and Jiang, J., 2016. Learning Natural Language Inference with LSTM. In NAACL-HLT 2016, Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. ACL.
  19. Yin, W., Schütze, H., Xiang, B. and Zhou, B., 2015. ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs. Transactions of the Association for Computational Linguistics.
  20. Yu, L., Hermann, K. M., Blunsom, P. and Pulman, S., 2014. Deep Learning for Answer Sentence Selection. arXiv preprint.
  21. Zheng, Y., Zemel, R. S., Zhang, Y. J. and Larochelle, H., 2015. A Neural Autoregressive Approach to Attention-based Recognition. International Journal of Computer Vision, 113(1), pp. 67-79.
