MATEC Web Conf., Volume 401 (2024)
21st International Conference on Manufacturing Research (ICMR2024)
Article Number: 10009
Number of pages: 8
Section: Manufacturing / Engineering Management
DOI: https://doi.org/10.1051/matecconf/202440110009
Published online: 27 August 2024
Open Access
