MATEC Web Conf.
Volume 173, 2018
2018 International Conference on Smart Materials, Intelligent Manufacturing and Automation (SMIMA 2018)
Article Number 03034
Number of page(s) 6
Section Digital Signal and Image Processing
DOI https://doi.org/10.1051/matecconf/201817303034
Published online 19 June 2018
  1. Jegou H, Douze M, Schmid C. Packing bag-of-features[C]. // In IEEE 12th International Conference on Computer Vision (ICCV), Piscataway: IEEE, 2009: 2357-2364.
  2. Peyre G. A Review of Adaptive Image Representations[J]. IEEE Journal of Selected Topics in Signal Processing, 2011, 5(5): 896-911.
  3. Philbin J, Chum O, Isard M, et al. Object retrieval with large vocabularies and fast spatial matching[C]. // In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway: IEEE, 2007: 1-8.
  4. Chum O, Philbin J, Sivic J, et al. Total recall: Automatic query expansion with a generative feature model for object retrieval[C]. // In IEEE 11th International Conference on Computer Vision (ICCV), Piscataway: IEEE, 2007: 1-8.
  5. Li F F, Perona P. A Bayesian hierarchical model for learning natural scene categories[C]. // In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway: IEEE, 2005, 2: 524-531.
  6. Deng C Z, Cao H Q. Construction of Multiscale Ridgelet Dictionary and Its Application for Image Coding[J]. Journal of Image and Graphics, 2009, 14(7): 1273-1278.
  7. Sun Y B, Wei Z H, Xiao L, et al. Multimorphology Sparsity Regularized Image Super-Resolution[J]. Acta Electronica Sinica, 2010, 38(12): 2898-2903.
  8. Aharon M, Elad M, Bruckstein A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation[J]. IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322.
  9. Csurka G, Dance C R, Fan L, et al. Visual categorization with bags of keypoints[C]. // In ECCV International Workshop on Statistical Learning in Computer Vision, Berlin: Springer, 2004: 1-22.
  10. Csurka G, Dance C R, Perronnin F, et al. Generic visual categorization using weak geometry[J]. Toward Category-Level Object Recognition, Lecture Notes in Computer Science, 2006: 207-224.
  11. Lazebnik S, Schmid C, Ponce J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[C]. // In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway: IEEE, 2006: 2169-2176.
  12. Zhang J, Marszalek M, Lazebnik S, et al. Local features and kernels for classification of texture and object categories: a comprehensive study[J]. International Journal of Computer Vision, 2007, 73(2): 213-238.
  13. Sivic J, Zisserman A. Video Google: A text retrieval approach to object matching in videos[C]. // In IEEE International Conference on Computer Vision (ICCV), Piscataway: IEEE, 2003: 1470-1477.
  14. Boiman O, Shechtman E, Irani M. In defense of nearest-neighbor based image classification[C]. // In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway: IEEE, 2008: 1-8.
  15. Li F F, Fergus R, Perona P. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories[J]. Computer Vision and Image Understanding, 2007, 106(1): 59-70.
  16. Bosch A, Zisserman A, Munoz X. Scene classification using a hybrid generative/discriminative approach[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2008, 30(4): 712-727.
  17. Liu Y, Jin R, Sukthankar R, et al. Unifying discriminative visual codebook generation with classifier training for object category recognition[C]. // In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway: IEEE, 2008: 1-8.
  18. Jurie F, Triggs B. Creating efficient codebooks for visual recognition[C]. // In IEEE International Conference on Computer Vision (ICCV), Piscataway: IEEE, 2005, 1: 604-610.
  19. Burghouts G J, Schutte K. Spatio-temporal layout of human actions for improved bag-of-words action detection[J]. Pattern Recognition Letters, 2013, 34(15): 1861-1869.
  20. Banerji S, Sinha A, et al. A New Bag of Words LBP (BoWL) Descriptor for Scene Image Classification[J]. Computer Analysis of Images and Patterns, 2013, 8047: 490-497.
  21. Yang J C, Yu K, Gong Y H, et al. Linear spatial pyramid matching using sparse coding for image classification[C]. // In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway: IEEE, 2009: 1794-1801.
  22. Wang J J, Yang J C, Yu K, et al. Locality-constrained linear coding for image classification[C]. // In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway: IEEE, 2010: 3360-3367.
  23. Rubinstein R, Peleg T, et al. Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model[J]. IEEE Transactions on Signal Processing, 2013, 61(3): 661-677.
  24. Jiang Z, Lin Z, et al. Label Consistent K-SVD: Learning a Discriminative Dictionary for Recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, (99): 1-13.
  25. Zhang Z, Ganesh A, et al. TILT: Transform Invariant Low-Rank Textures[J]. International Journal of Computer Vision, 2012, 99(1): 1-24.
  26. Liu G C, Lin Z C, et al. Robust Recovery of Subspace Structures by Low-Rank Representation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 171-184.
  27. Zhang N, Yang J. Low-rank representation based discriminative projection for robust feature extraction[J]. Neurocomputing, 2013, 111: 13-20.
  28. Shalit U, Weinshall D, et al. Online learning in the embedded manifold of low-rank matrices[J]. Journal of Machine Learning Research, 2012, 13(1): 429-458.
  29. Liu Y Y, Jiao L C, et al. An efficient matrix bi-factorization alternative optimization method for low-rank matrix recovery and completion[J]. Neural Networks, 2013, 48: 8-18.
  30. Zhang X, Sun F, et al. Fast Low-Rank Subspace Segmentation[J]. IEEE Transactions on Knowledge and Data Engineering, 2013, (99): 1-6.
  31. Yang J F, Yin W T, Zhang Y, et al. A fast algorithm for edge preserving variational multichannel image restoration[J]. SIAM Journal on Imaging Sciences, 2009, 2(2): 569-592.
  32. Boiman O, Shechtman E, Irani M. In defense of nearest-neighbor based image classification[C]. // In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway: IEEE, 2008: 1-8.
  33. Shotton J, Winn J, Rother C, et al. TextonBoost for image understanding: Multi-class object recognition and segmentation by jointly modeling appearance, shape and context[J]. International Journal of Computer Vision, 2009, 81(1): 2-23.
  34. Zhang H, Berg A C, Maire M, Malik J. SVM-KNN: Discriminative nearest neighbor classification for visual category recognition[C]. // In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway: IEEE, 2006: 2126-2136.
  35. Griffin G, Holub A, Perona P. Caltech-256 Object Category Dataset[R]. Technical Report 7694, California Institute of Technology, 2007.
