{{Model infobox
| hugging-face-uri = openai/clip-vit-large-patch14
| type =
| task = 
| library =
| dataset =
| language =
| paper =
| related-to =
| license =
}}


= LayoutLMv3 =
{{Model infobox
| hugging-face-uri = microsoft/layoutlmv3-base
| type =
| task =
| library = transformers
| dataset =
| language = en
| paper = https://arxiv.org/abs/2204.08387
| related-to =
| license = cc-by-nc-sa-4.0
}}


[https://www.microsoft.com/en-us/research/project/document-ai/ Microsoft Document AI] | [https://aka.ms/layoutlmv3 GitHub]
 
== Model description ==
 
LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis.
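As a rough illustration of the text-centric use case, the sketch below loads the base checkpoint through the [https://github.com/huggingface/transformers transformers] library and runs token classification (form-understanding style) on a single document page. The checkpoint name microsoft/layoutlmv3-base, the 7-label head, and the example words and bounding boxes are assumptions for illustration, not values taken from this page.

<pre>
# Minimal sketch (not from this page): LayoutLMv3 token classification with
# pre-extracted OCR words and boxes, via the Hugging Face transformers library.
import torch
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

# Assumed checkpoint and label count, for illustration only.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForTokenClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)

image = Image.open("page.png").convert("RGB")  # one scanned document page (placeholder path)
words = ["Invoice", "Date:", "2022-04-18"]     # OCR words (placeholder values)
boxes = [[82, 40, 210, 68], [220, 40, 290, 68], [300, 40, 430, 68]]  # word boxes scaled to 0-1000

# The processor packs image patches, word tokens, and layout boxes into one encoding.
encoding = processor(image, words, boxes=boxes, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)
predicted_label_ids = outputs.logits.argmax(-1)  # one predicted label id per token
</pre>

For image-centric tasks such as document image classification, the same processor output can instead be fed to a sequence classification head (for example, AutoModelForSequenceClassification).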
 
[https://arxiv.org/abs/2204.08387 LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking]
Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, ACM Multimedia 2022.
 
== Citation ==
 
If you find LayoutLMv3 useful in your research, please cite the following paper:
 
<pre>
@inproceedings{huang2022layoutlmv3,
  author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei},
  title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  year={2022}
}
</pre>
== License ==
 
The content of this project itself is licensed under the [https://creativecommons.org/licenses/by-nc-sa/4.0/ Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)] license.
Portions of the source code are based on the [https://github.com/huggingface/transformers transformers] project.
[https://opensource.microsoft.com/codeofconduct Microsoft Open Source Code of Conduct]