Number of Attention Heads vs. Number of Transformer-encoders in Computer Vision
Journal
Proceedings of the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management - KDIR
ISSN
2184-3228
Type
conference paper
Date Issued
2022-10
Author(s)
Research Team
Data Science and Natural Language Processing
Abstract
Determining an appropriate number of attention heads on the one hand and of transformer-encoders on the other is an important choice for Computer Vision (CV) tasks using the Transformer architecture. Computational experiments confirmed the expectation that the total number of parameters has to satisfy the condition of overdetermination (i.e., the number of constraints significantly exceeding the number of parameters); only then can good generalization performance be expected. This sets the boundaries within which the number of heads and the number of transformer-encoders can be chosen. If the role of context in the images to be classified can be assumed to be small, it is favorable to use multiple transformer-encoders with a low number of heads (such as one or two). When classifying objects whose class may depend heavily on the context within the image (i.e., the meaning of a patch depends on other patches), the number of heads is as important as the number of transformer-encoders.
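
The overdetermination condition from the abstract can be illustrated with a rough back-of-the-envelope check. The sketch below is not from the paper; all dimensions, the constraint count (training samples times output components), and the helper names are hypothetical illustration choices. It also reflects a standard property of multi-head attention: with a fixed model width, the head count does not change the parameter count, whereas the number of encoder blocks does.

    # Minimal sketch (assumptions, not the paper's method): compare the
    # parameter count of a ViT-style encoder stack against the number of
    # training constraints; overdetermination means constraints >> params.

    def encoder_params(d_model: int, d_ff: int) -> int:
        """Parameters of one standard Transformer encoder block.

        d_model is split across heads in standard multi-head attention,
        so the head count does not affect this count.
        """
        attn = 4 * (d_model * d_model + d_model)   # W_Q, W_K, W_V, W_O (+ biases)
        ffn = (d_model * d_ff + d_ff) + (d_ff * d_model + d_model)  # two dense layers
        norms = 2 * 2 * d_model                    # two LayerNorms (scale + shift)
        return attn + ffn + norms

    def check_overdetermination(n_blocks: int, d_model: int, d_ff: int,
                                n_train: int, n_outputs: int) -> None:
        params = n_blocks * encoder_params(d_model, d_ff)
        constraints = n_train * n_outputs          # assumed: one constraint per label component
        print(f"{n_blocks} blocks: {params:,} params vs. "
              f"{constraints:,} constraints -> ratio {constraints / params:.2f}")

    # Hypothetical CIFAR-10-sized setting: 50,000 images, 10 classes.
    for blocks in (1, 2, 4, 8):
        check_overdetermination(blocks, d_model=256, d_ff=512,
                                n_train=50_000, n_outputs=10)

Under these assumed numbers, the ratio of constraints to parameters shrinks as blocks are added, showing how the overdetermination requirement bounds how many transformer-encoders (and, via the model width, how many heads) one can afford for a given dataset.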
Language
English
HSG Classification
contribution to scientific community
HSG Profile Area
None
Refereed
Yes
Publisher
SciTePress
Start page
315
End page
321
Subject(s)
Division(s)
Contact Email Address
bernhard.bermeitinger@unisg.ch
Eprints ID
267726