Afaik no, but the further out you tile/place UV islands from the 0-1 UV space, the more precision inaccuracies you'll get in engine. Could be wrong, but it was something I read up on and found out first-hand years ago.
The actual cost on modern hardware should be negligible, but since you're asking a yes-or-no question: yes, the impact does exist: https://stackoverflow.com/a/50253669
EDIT: Should add though that the cost is in GPU processing, not GPU memory. The texture data always occupies the same amount of GPU memory regardless of whether that texture is tiled or not. It's the tiled rendering that isn't as efficient as the non-tiled rendering.
When you have a single quad with large UV coordinates on each vertex, so that the texture repeats many times across it, the texture sampler has to fetch texels that are far apart for each rasterized pixel of that quad, which means it keeps re-caching different parts of the texture. There's an older (2006) post explaining this, with different words: https://community.khronos.org/t/textures-gl-repeat-performance/34694/5
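To make that concrete, here's a rough OpenGL-flavoured sketch (my own illustrative code, not from the links above): a quad whose UVs run 0..32 with GL_REPEAT, i.e. exactly the "large UV coordinates" case. The helper function name and the 32x repeat factor are made up for the example, and it assumes you already have a GL context and a texture bound to GL_TEXTURE_2D.

```c
#include <GL/gl.h>

/* Interleaved position (x, y) + UV (u, v) for a quad that repeats the
 * bound texture 32 times in each direction.  Without mipmaps, adjacent
 * screen pixels end up sampling texels that are far apart in the source
 * image, which is what keeps evicting and refilling the texture cache. */
static const float tiled_quad[] = {
    /*   x      y       u      v  */
    -1.0f, -1.0f,   0.0f,  0.0f,
     1.0f, -1.0f,  32.0f,  0.0f,
     1.0f,  1.0f,  32.0f, 32.0f,
    -1.0f,  1.0f,   0.0f, 32.0f,
};

void setup_tiled_sampling(void)  /* hypothetical helper, name is mine */
{
    /* Wrap UVs outside 0..1 by repeating the texture. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
}
```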
According to the first link, the way to improve tiling performance is to enable mip-mapping on the textures that will be tiled, so that whenever the assets are small on screen the sampler reads from smaller copies of those textures.
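For completeness, here's roughly what that looks like in the same sketchy OpenGL terms (again my own illustrative code; it assumes a GL 3.0+ context with a loader like GLEW or GLAD so glGenerateMipmap is available, and a texture whose image has already been uploaded and is currently bound):

```c
#include <GL/glew.h>   /* or any loader that exposes the GL 3.0 entry points */

void enable_mipmapped_tiling(void)  /* hypothetical helper, name is mine */
{
    /* Build the mip chain for the currently bound texture... */
    glGenerateMipmap(GL_TEXTURE_2D);

    /* ...and pick a mip-aware minification filter, so heavily tiled or
     * distant surfaces sample the smaller mip levels instead of the
     * full-resolution image. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```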
Indeed... I think it's worth reiterating that it's not a practical concern in any reasonable scenario these days; I imagine you'd hit visual precision problems before you hit performance problems anyway.