Hey all!
So I'm working on a model of the solar system and obviously have a lot of spheres that I need to unwrap. I didn't think this would be an issue when I started, but now I'm finding that unwrapping a sphere without major seams or distortion is a lot harder than one would think, and there really aren't any useful tutorials on the subject that I've been able to find.
Does anyone have any tips on how to do this properly in 3D Studio Max? I would think pelt mapping a sphere would be an easy way to do it, but I can't seem to lose the distortion in the center area of the sphere.
Any help on this would be greatly appreciated.
Replies
Start off with a box, unwrap it.
TurboSmooth it once or twice until it's a smooth sphere; you might need to relax it.
Assign all the faces to one smoothing group and you're good to go.
Now you have a nice sphere made of all quads unwrapped in a predictable way.
There's a quad sphere primitive shape plug-in for max, probably something for Maya too, otherwise you could just as quickly make it.
Ha ha ha Darth, you crack me up. That's a good method too; you could probably easily apply it to a quadsphere as well, instead of going with a box map?
Basically, to make a quad sphere that's easier to unwrap than a regular sphere primitive, you should start off with a cube, add a couple of segments to each axis, and apply a Spherify modifier. You'll be able to follow the examples here pretty easily after that.
Anyway, here's a screenie of how I do it. I'm pretty good at making seams disappear with Photoshop (I'd say I'm more skilled in PS than I am at unwrapping) so you have to play off your strengths. Try Vig's method first, it makes a lot more sense HA.
Vig, I tried your idea of using a box and then just Msmoothing it a few times to turn it into a sphere but the problem there is it ends up at about 768 polys by the time it looks nicely spherical.
I was talking to Kman and he pointed out that using a Geosphere with Octa for the Geodesic base type is a much better way to go about it and also makes the poly count much lower. He unwrapped a sphere for me and showed me how he laid it out, which was quite good. However, after playing around on my own for a bit, I realized that if I simply create a geosphere and load an unwrap onto it, Max unwraps the mesh perfectly without my help at all.
It does have a bit of a seam at the top, but with a little work in Photoshop, I could make it completely invisible. The only problem now is if I want to add some snowy areas to the poles, I'd need to re-unwrap the top and bottom and it might make a more obvious seam.
Still, not bad for virtually no work. If anyone else has to do something similar, I definitely suggest using Octa Geospheres.
BTW, what size of a monitor do you work with? I've got a 40" tv hooked up to my computer with a 1920x1080 resolution and your website requires me to scroll to the right just to see the entire thing. You might want to scale it down just a smidgen :P
Your models look pretty good otherwise
/saved
I thought you were the grand champion of Max? Quick, brush it off like it was a Freudian slip and you already knew how it worked!
(end of thread derailment )
I'd go the geosphere route too, that or the sphere, but with a spherify kind of modifier on top to make it round without using too many polies
Always learn something new.
Now that you mention it, I remember it coming up in a previous "how do you unwrap a sphere" thread about a year ago. I think Mojokey brought up the geosphere? Because about that time I convinced our modelers to start using them for the eyes in our models (creepy wire views look like teeth). It saved enough polys we could do proper forehead wrinkles.
I personally don't use them that much in stuff I make, as I hardly ever need just a sphere and I need something with clean straight edges that I can use to build other things with. The quadsphere is pretty good at that. It also ports over to a sculpting app a lot better than a geosphere or a regular sphere with poles.
I'm actually starting up a new project now that I met a useful programmer! It's a very small turn-based space game, actually, hence the need for figuring out how to unwrap spheres :P
Brutal as I'm sure you've figured out has been put on indefinite hold until it gets the proper funding it deserves!
Vig, I don't believe you get schooled all the time, but I'm sure it happens occasionally. And see, what'd I tell you? It was a Freudian slip of sorts... you just forgot that you knew the answer already
Yeah they do look hella freaky in wire form, but hey, they work so what do I care? Lower poly too so that's always a bonus.
The last picture. How do you get the sphere to unwrap like that? I normally use quad spheres but for doing a prop like a globe, I think unwrapping it this way would be more beneficial.
I too have been studying this, for exactly the same reasons.
First off, I am a Blender guy, so this help may need to be translated into Maxese.
I just posted how to do this, here:
https://blenderartists.org/t/spherical-unwraps/685424/2
If you don't want to go there, another user posted a method that's almost perfect but leaves gaps at the poles: turn a plane into a sphere without merging the square faces near the poles into triangles, which lets you apply a simple cylinder map to it.
While that method is decent, it presents a problem: you can never, ever merge the vertices at the poles! Otherwise you lose the extra edges which give you square UV faces to fill in the texture space. If you merged them you'd suddenly be missing some texture space coverage and would see very obvious seams.
However, not merging them will create surface normal inconsistency at the poles (dark spots). (Keep in mind Blender renders unmerged non-parallel edges as separate surfaces, so reflections and shadows will not flow over them properly.)
But there is another way to get a perfect spherical map on a normal UV sphere. It CAN use 100% of the texture space as well, but it cannot handle generated textures NATIVELY, and will take some extra mapping steps.
You probably never thought of mapping it corner to corner, that is to say you put the poles at any 2 diagonally opposite corners of the texture space and fill in the rest. It'll take extra work to make the UV layout, but it's worth it in the end. Just make sure you unwrap a blank sphere first, then make copies with new duplicate materials for each object; that way you don't have to redo the UV layout each time…
Technically you could use ANY good UV layout you want and get a seamlessly mapped sphere. It just takes extra steps in the mapping process, so if you don't like the poles-at-the-corners method, you can follow the following steps with your own layout. (Hint: it will work for just about any shape, even an icosphere!)
Step 1: make your UV layout how you think it will be best for your sphere, or how it NEEDS TO BE for your project's aims.
Step 2: For Blender's built-in generated textures (if using some other square or rectangular texture, ignore this step):
On your sphere, make any layers you need and any generated textures you want to apply, and get them adjusted so they look good.
On a UV unwrapped plane, load the sphere's material (for a copy of the textures), then bake the generated textures to an image one layer at a time, and save them each with easily findable names.
Step 3: Make a new plane, subdivide it so it has the same number of faces as your sphere but for now maintain a rectangle or square outline.
Step 4: Unwrap the plane to fill the texture bounds and load your intended sphere textures.
Step 5: Modify the plane in Edit mode (you might want to open 2 copies of Blender, so you can see your sphere's UV outline while doing this) and adjust the faces of the plane to EXACTLY match the proportions etc. of the UV layout of the sphere!
Step 6: Put a camera above it angled down. Set the render settings to not render the background but instead render alpha. Change the output image format to a format that supports alpha (TGA, PNG, GIF) and switch the texture color settings to RGBA. Change the image proportions to match the proportions of your intended sphere image's size on X and Y. Adjust the camera height until the plane occupies the entire render area exactly (or until it exactly matches the sphere's UV layout; you see what I'm doing here?).
Step 7: Back on the plane, set it to shadeless, render it 1 texture layer at a time, and save each image with an easily understandable name. (I use NOR at the end of a normal map, COL at the end of a color map, etc.; e.g. Water128x128NOR.tga, Water128x128COL.tga)
Step 8: On your sphere, import the images you just rendered as textures; the UV layout should match exactly and you should not have any visible creases.
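In case it's useful, the render setup in step 6 would look roughly like this through Blender's Python API (an untested sketch assuming a current 2.8+ bpy API rather than the old 2.49 one, with "Camera" as a placeholder object name and made-up resolution values):

    import bpy

    scene = bpy.context.scene
    cam = bpy.data.objects["Camera"]          # placeholder name: the camera hovering above the plane
    cam.data.type = 'ORTHO'                   # orthographic so the plane isn't distorted
    cam.data.ortho_scale = 2.0                # tweak until the plane exactly fills the frame

    scene.render.film_transparent = True      # render alpha instead of the background
    scene.render.image_settings.file_format = 'PNG'
    scene.render.image_settings.color_mode = 'RGBA'
    scene.render.resolution_x = 1024          # match your intended sphere texture size
    scene.render.resolution_y = 1024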
Benefit: No bad normals at the poles, no gaps, Blender sees the whole sphere as 1 surface so reflections and shadows flow over the whole sphere properly, and everything is merged so you don't have to worry about accidental "select all"+"remove doubles" episodes. Complete coverage. Full control of the UV layout.
This method is untested. It makes sense logically. I cannot guarantee generated textures will repeat correctly after being saved to an image, that may take some manual adjustment in The GIMP. (copy a small amount of the edges of the image to another layer or image, then delete 1 opposite side of each on X and Y from the original image, paste the opposite side’s pixels there, blend/smudge into place to remove any hard pixel edges.)
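If the manual copy/paste in GIMP gets tedious, another common trick (not part of the method above, just a small Pillow sketch using one of the example file names from step 7) is to wrap the image by half its size so the former edges meet in the middle of the frame, where they're easy to inspect and smudge:

    from PIL import Image, ImageChops

    img = Image.open("Water128x128COL.tga")                            # any baked/rendered layer
    wrapped = ImageChops.offset(img, img.width // 2, img.height // 2)  # wrap by half; former edges now meet mid-image
    wrapped.save("Water128x128COL_seamcheck.tga")                      # heal any visible line here, then offset back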
I will be testing this at some point.
Exactly, but the point of my comment was not the shape; it was the method to generate and apply the texture to any UV map in a way that has no creases regardless of the shape.
I would like to bump this post to post the outcome.
The system I proposed would Epic Fail if using normal maps, and it did.
There is another system that I have come up with and tested with normal maps. I almost have it perfect. Controlling the border margin of the map is the final step.
What did I do?
So you have to keep all edges of the UV islands actually CONNECTED in the image space, so nothing should attempt to flow over the edge and leave the software to try to put both sides of the texture space on repeat together. For this purpose you will need to do a "rupee" map with flat sides on the edges of the image space, and a pointy top and bottom in the center of the image.
Next you need to make 2 materials, one for each side of the object. This is to ensure that you can control the TEXTURE normals later!
Next you need to make one set of textures for each side (so you can map each side to its respective set of images).
After that you will need to bake your textures to this UV face layout for both sides of the sphere. Island pixel margins are the only thing I have yet to perfect; in my software it looks like a border margin of 1 pixel should be roughly perfect. It's possible the UV map edges will have to leave a 1 pixel edge as well. It's also possible that there should be a 1 pixel edge AND TURN TEXTURE REPEAT OFF (use Clipped, Extend etc. instead). This is my intuition of how to fix it.
Finally, after all your baked images are applied, MAKE SURE ALL THE NORMALS ARE BEING APPLIED AT THE SAME INTENSITY!!! Select the normal maps on each side individually, one at a time, and use trial and error to get the right combination of positive and negative TEXTURE normal calculation. If done right, the UV space texture normal crease should completely disappear, leaving you to only have to deal with mipmap/border pixel margins etc. to perfect it.
over and done.
Edit: Additional info: SUBSURFS WILL MURDER THIS DELICATE SYSTEM IRREPARABLY. This means you CANNOT let the software automatically smooth the model into a billion polygons for you. You will need to work manually with a model of the desired complexity.
"Hard to follow, some screenshots would really help here." -Eric
OK, I don't know if I'll do step-by-step images, but I'll post a few that show the UV outlines and the final outcome. There is a minor seam, worst at the poles; it's just a texture-based seam, so I might have to expand the margin at the poles and manually edit the textures to include a larger margin near the poles and a smaller margin elsewhere.
Each side should have a "rupee" UV face outline; here we see the positive X axis side of the texture. There needs to be a separate texture for the negative side. Also, BOTH sides of the sphere's UV outline should be exactly overlaid to match each other so that all the UV edges between the 2 sides meet in texture space.
final outcome (Old Blender GE) (So far)
Hard to follow, some screenshots would really help here.
Bump
So, as good as that was, I found yet another way to do this, a way that's better.
I call it the "Top-Centered Flared" method. It is completely seamless over 90% of the object even with normals applied. It takes a lot of work to do it though. And some of the information may not translate so well to your software package.
The idea is to use a typical UV sphere and create equidistant rings, each one bigger than the last, on the UV map, starting from the top in the center and working out towards the bottom at the edge of the image space, then scale it line by line till it hits the edge of the image space ("squaring" it).
On the 3D end you will need to subdivide the pole-pointing edges of the "south pole" and remove the central vertex. Then, AFTER baking (based on World and Sphere MapInput), scale this ring to 0 in 3D space and match it to the negative offset of the top pole position (if the top is 1.000, then the bottom will be -1.000).
Images will have to be Baked to this layout. (Tested!!! Works wonders!)
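For anyone trying to reproduce the ring layout: before the "squaring" pass it's basically just equidistant rings around the pole. A rough pure-Python sketch of that part (my own paraphrase of the layout, not the exact values I used):

    import math

    def top_centered_uv(latitude, longitude):
        """North pole at the image center, equidistant rings out to the south pole
        at the image edge; latitude/longitude in radians. The "squaring" of the
        outer rings to fill the corners is a separate manual step."""
        colatitude = math.pi / 2 - latitude      # 0 at the north pole, pi at the south pole
        r = 0.5 * colatitude / math.pi           # equidistant rings, radius 0.5 at the south pole
        u = 0.5 + r * math.cos(longitude)
        v = 0.5 + r * math.sin(longitude)
        return u, v

    print(top_centered_uv(math.pi / 2, 0.0))     # north pole -> (0.5, 0.5), the image center
    print(top_centered_uv(0.0, 0.0))             # a point on the equator -> (0.75, 0.5)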
The only downside is, much like the flat-plane ToSphere method or the geosphere method, there's some puckering of the texture at the "south" pole. However, unlike the plane-to-sphere method or the geosphere method, it is ONLY at the south pole (plane to sphere/geosphere has puckering at north AND south), meaning in the top-centered method, the north pole and the rest have no such "puckered" seam.
This does mean you'll need 2 models, a north and a south, and you'll need code to tell the software to load the corresponding model when you (the camera) are above or below its equator. Then it appears to be seamless in its entirety.
Making this UV layout takes a lot of careful calculation of ring scaling, and time, and good use of the "limit faces to image bounds" feature. It is not the kind of thing you want to do for each new sphere or progress would be VERY slow (it can be done in an hour and a half casually). The alternative is you will want to have a north and south set of "baking" models, and a north and south set of "finished" models. This way you can just change the Global+Sphere image on the baking models to any image and bake, then merely make a copy of the "finished" models and apply the baked images.
This is very complicated, but it looks something like this (North) (this is a 32 vertex, 32 ring UV sphere so it has 1024 faces; the same texture can be applied to any other sized UV sphere so long as you unwrap it the same):
if I come up with yet a better way I'll let you know, but so far so good.
sounds like something you'd want a script to do for you :o
Sounds kind of like you're trying to re-invent map projections?
The general agreement is, every method has its drawbacks, and there's always a trade-off between distortion and number of seams (as with all/most UV mapping)— but it's an interesting topic at any rate.
https://en.wikipedia.org/wiki/List_of_map_projections
https://en.wikipedia.org/wiki/Peirce_quincuncial_projection
This was the original idea I had; however, it would have created major southern seams.
So I ended up doing something between that and https://en.wikipedia.org/wiki/Azimuthal_equidistant_projection
It would be nice to make a script to do it, yes; in Python I could do it. But, meh, preparing two mapping models for baking is just easier than losing myself in geometrical logic.
The downside to this method is you'll need a large image area for clarity. With a 1024x1024 image this is 16 pixels between rings for a 32 ring sphere, which is not very clear. However, mipmapping really helps smooth it out.
After thinking about it overnight, the best thing to do will be to manually create distance models. So for a close-up view of a planet maybe 8192x8192, whereas if it's a smaller object (like a ball) 512x512 might do for the close-up, but if it's farther away, like the moon, even a planet could be 512x512. Remember this is based on camera distance, so a telescope camera could trigger a higher-res model and texture to spawn. And if something is even further away, maybe consider very small textures, 32x32 or 64x64.
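Just to make the camera-distance idea concrete, a toy sketch (the thresholds are completely made up, nothing from the actual project):

    def texture_size_for_distance(distance, close_up_size=8192):
        """Pick a texture resolution from camera distance: halve the close-up
        size for every doubling of distance, down to a 32x32 floor.
        Purely illustrative thresholds."""
        size = close_up_size
        threshold = 1.0
        while distance > threshold and size > 32:
            size //= 2
            threshold *= 2.0
        return size

    print(texture_size_for_distance(0.5))    # 8192 for a close-up planet
    print(texture_size_for_distance(100.0))  # 64 for a distant object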
Why not simply sds a cube, and map it into 6 pieces? That way you have the most equal texel density everywhere. The pole of a sphere is always the biggest trouble maker ...
@JoshexDirad
didn't read your previous post properly
why not use a Mercator projection? that would get you closest to undistorted pixels
and to cover things i spotted in your last post...
adding new small textures for distant objects is the opposite of an optimisation - the computer does that for you, in hardware, for free, automatically.
what about a cube quad sphere ?
https://us.v-cdn.net/5021068/uploads/editor/b2/7r9156vj1t4z.zip
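(For reference, the Mercator mapping mentioned above is just the standard formula below; a hedged pure-math sketch, nothing to do with the zip. Note it can't actually reach the poles, which is part of why it gets awkward on a full sphere.)

    import math

    def mercator_uv(longitude, latitude, max_latitude=math.radians(85.0)):
        """Standard Mercator: longitude/latitude in radians, latitude clamped
        because the projection blows up at the poles."""
        latitude = max(-max_latitude, min(max_latitude, latitude))
        u = (longitude + math.pi) / (2.0 * math.pi)
        v = 0.5 + math.log(math.tan(math.pi / 4.0 + latitude / 2.0)) / (2.0 * math.pi)
        return u, v

    print(mercator_uv(0.0, 0.0))   # equator at the prime meridian -> (0.5, 0.5)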
Cube quad sphere wouldn't be seamless. It would have a ton of seams and be really difficult to make the texture seamless.
I don't know what sds means, but if it's in 6 pieces there'd be a ton of seams. Also, the 3rd post already suggested a cube. In my experience, a Geosphere set to Octa still gives the best result where you don't have to mess with the texture too much and it's fairly easy to remove the seam. I love that this topic is still being posted to like 10 years later :P
Why are uv seams difficult for you @JoshexDirad ? Every single 3d model has them, and it's a solved problem. Highpoly baking, triplanar projection, 3d procedurals, 3d paint, etc.
>I don't know what sds means
SDS stands for Subdivision Surface. In Max they call it TurboSmooth, from what I remember. You make a cube, and with three or four SDS levels this cube turns into a sphere, with the 6 faces from the cube underneath. The result is that you can achieve a pretty equal texel density across the whole sphere when you map it. This is not the case with a sphere projection. The highest texel density is at the equator there, and the lowest at the poles.
It is true that you have more seams this way. But a sphere is also not seamless. It still requires at least one seam. And I would care more about the texel density of the single faces than about having a few more seams in this case.
Not to be that guy, but you don't get a sphere when you subdivide a cube. Smoothing algorithms don't produce circular curves at an edge. 3ds Max has a Spherify modifier you can add, then you get your quad sphere.
I don't know how people think they found a new perfect way to unwrap a sphere, when the way we unwrap spheres hasn't changed in 25 years. It takes a certain amount of arrogance to think yourself smarter than 25 years' worth of 3D artists... and frankly, most of the recent suggestions are junk. Full of distortions and impossible to work with. If you are unable to draw your textures (unless you live in 4 dimensions and can see past the distortions), and you are baking your textures into these perfect seamless coordinates anyway, why bother with seams? Just "Flatten Mapping", unwrap every triangle into its own little island. No distortion whatsoever.
And if your rendering software is not able to render smooth normals over uv seams, you don't need seamless unwraps, you need a better rendering software.
Yes, it is not completely round at first. The SDS algorithm is not perfect in this regard. But it's usually already close enough that you will not notice the difference without taking a closer look. In Blender you can also apply a Cast to Sphere to make the result even rounder. I guess it's the same as the Spherify modifier in Max. So the improvement is one click away :)
But I have to agree on the other part. It's wasted time and energy to reinvent the wheel from scratch. There are enough working methods around. Just pick one of the proven ones. That's why I mentioned the SDS cube method. It has also been around for eons.
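If anyone wants to try the SDS cube in Blender without clicking through the UI, it's only a few lines of bpy. A rough, untested sketch assuming a 2.8+ API (the Cast modifier is the "cast to sphere" mentioned above):

    import bpy

    bpy.ops.mesh.primitive_cube_add(size=2.0)
    obj = bpy.context.active_object

    subsurf = obj.modifiers.new(name="Subsurf", type='SUBSURF')
    subsurf.levels = 3                        # three or four levels reads as a sphere
    subsurf.render_levels = 3

    cast = obj.modifiers.new(name="CastToSphere", type='CAST')
    cast.cast_type = 'SPHERE'                 # pushes the smoothed cube out to a true sphere
    cast.factor = 1.0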
I think it’s about time we set this thread out to pasture to die. Lol
No offence meant to anyone, but it’s getting long in the tooth and people are reposting suggestions that were already made. It’s good for reference though. Can a mod lock this thread?
To be fair, the thread now may as well be a separate thread from your original; all of these responses are interacting with the post from July 22nd by JoshexDirad, not your original post from 2010.
yep.
I found this thread by searching for info on how to do this, and my replies were meant to propose other methods for if you really wanted a UV sphere. But I wasn't satisfied with the results of any method I tried, especially as I planned to use it in a game engine (and game engines typically have less-than-perfect rendering).
Really, make your sphere the way you need it for what you are going to use it for. There is no perfect way to do it, and all those movie-quality spheres with a billion polygons honestly poke a tender spot in my gut and mock me saying "see, they could do it, why can't you? People are going to laugh at your game. No one will play it, all because of the seams." It stings.
The SDS method or cube-to-sphere method works, yes, but I haven't been able to make it work with normal maps without seams, hence I keep trying new methods. In the end I decided "if there has to be a seam, I should hide it somewhere it won't be seen from 90% of the angles the object could be viewed at".
Yes, islanding each face works seamlessly, as long as you don't apply normal maps. Once you do, edges become apparent. I had the same opinion originally, that anything would work and after that it was just down to baking to make it seamless. This works for color textures, but normal per-pixel offsetting doesn't like it. The problem isn't actually a texture seam; the problem is lighting will highlight the square edges of each normal-mapped face, so instead of a smooth curved shadow of the backside you instead get a jittered shadow following edges. The same is true for specular regions. It'll drive you nuts thinking the normals are facing the wrong way on one side of the edge.
The most even method I've seen is the plane-to-sphere method, where you take a flat plane of the desired number of rings X and segments Y, map any flat square image to it, and then make it into a sphere without merging the poles. This can accept the Mercator projection. In render this looks perfect; however, in the game engine... It looks like the sphere has two "back passages", north and south, and "Oh, I don't think we wanna go that way. It's the back passage." to quote the imps from Conker's Bad Fur Day. Or a chocolate starfish on a muddy world.
I am a mad scientist. To me reinventing the wheel is fun work, especially when I need it to save face and continue to use what I have to learn with. Learning how to make a different kind of fire from the standard sinewaving fuel-source-arcing anti-magnetic plasma?... I mean sure, if there's a good reason, but it kinda needs to be justified. Would be interesting to see square-waving fire but I can't imagine what it'd be useful for.
>The sds method or cube to sphere method works, yes, but I haven't been able to make it work with normal maps without seams.
Hm.
It can be that neighbor pixels bleed in. But this is not only of interest for normal maps, it's just more visible here. Make sure that the UV mapping ends as close as possible to the texel border of your chosen texture resolution.
OK, first I must define seams and creases.
Seam(s): a place on a surface where 2 materials or textures meet in which the pattern of the pixels is broken, causing a visible line.
Crease(s): a place where CONFLICTING texture normals AND either image bounds OR UV outline edges between 2 UV islands meet, causing a 3D per-pixel ridge.
Seams are easy to deal with; you just make sure the texture repeats in all 4 directions or on all edges of the UV outline.
Creases are kinda harder to deal with, and it's difficult to put into words. It has to do primarily with conflicting normals on connected edges of UV islands. So, if for example you have a TOP circle map and a BOTTOM circle map which you baked in your software, you might find that on the TOP map, the bottom-right 45 degree side of the circle map is BLUE-based, but due to the way the baking engine renders the normals around a sphere, the corresponding BOTTOM-RIGHT 45 degree side of the BOTTOM circle map might be PURPLE-based! This will definitely create a crease! Basically purple represents a different height/angle than blue, so it's like one is going up and the other is flat or going down.
To beat this, you really cannot directly bake off of a sphere, as the software will TRY to represent the curve of the sphere as well (from the perspective of each island, without considering what happened with baking on other islands), not just your texture or surface normals (even if you have it set to baking from Tangent, which is supposed to work for this very situation but fails when textures loop).
But one fix is that the base of your texture needs to be baked "flat", as in the default "flat" light-blue base.
So as a rule of thumb, if the edge is blue on top, the same edge needs to be blue on the bottom; if the edge is purple on top, the same edge needs to be purple on the bottom; red on top? Red on bottom. Green on top? Green on bottom. Orange on top? Orange on bottom. Yellow on top? Yellow on bottom. And the blend of fall-off from one color to the next MUST match.
I'm still thinking of a solution that doesn't mean manually drawing the normals of top and bottom images, because with my understanding of it I CAN draw sphere maps that will have no crease. But baking them is what I really want to do, because then I can use generated textures which flow over it naturally.
Even so, I'll make a simple image set which will work manually, to prove my point that the colors need to match to get rid of creases.
OK, here's my manually done pic where I made sure the normals matched side to side, even though I used 2 materials (top and bottom) and 2 different textures, "A" and "B".
0 seams, 0 creases. Because apparently I know better than the people who designed baking engines....
But it's a pain to do it manually just to get no creases. For generated textures I'd have to find a way to render a texture which loops or something; what a pain.
Pay no attention to the poor quality UV unwrapping. I could have made each ring of the sphere equidistant from each other and the texture would look a lot nicer, but... MEH. This was enough of a pain to put together manually.
AND HOLD THE PHONE!!! SUCCESS!!!!!! the following may be different in any given software package. I had success making a Procedural/generated texture bake to a sphere without any creases!!!!!!! I understand WHY and HOW it worked, but it's difficult to put in words so bear with me.
So, procedural/generated textures are calculated by mathematical theorems of pattern progression. When you set an object to "Sphere" MapInput, it essentially tells the software that in the 3D view the Z axis determines pattern progression, so if you have something like a sphere that is round and has a height on the Z axis, it will attempt to calculate how that procedural texture would start at one end of the sphere and meet at the other (depending on the height of the object in world space). HOWEVER, that distance in my software maxes out at +/- 1.000 Blender units, and when I had a sphere that was exactly 1 BU high, Blender was freaking out and baking it with a crease OR messing up the normals of even or odd faces of either the top or the bottom. But when I scaled it down to 0.9 Blender units tall, it worked flawlessly.
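A toy illustration of that wraparound (not Blender's actual code, just the gist of why sitting exactly on the +/-1.0 boundary is risky when the texture repeats):

    def sphere_map_v(z, repeat=True):
        """Map z in [-1, 1] to a v coordinate in [0, 1]; with repeat on,
        landing exactly on the boundary wraps back to the other side."""
        v = (z + 1.0) / 2.0
        return v % 1.0 if repeat else min(max(v, 0.0), 1.0)

    print(sphere_map_v(1.0))   # 0.0  -> the very top samples the bottom row of pixels
    print(sphere_map_v(0.9))   # 0.95 -> safely inside the texture, no wraparound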
Here enjoy:
SphereLoopingMagic
I guess I should explain how and why this was done, from an unwrapping perspective.
1: I unwrapped a UV sphere from top view; by doing this all the equatorial edges meet in texture space (this is to avoid the image bounds, because in the past I've found that going over the image bounds generates a crease, but I have yet to verify if that is still true when matching seam/crease normals).
2: Without splitting the sphere into 2 objects (do not split it, leave it whole), select the top half of the sphere and give it an image to bake to, then select the bottom half of the sphere and create a new image to bake to for the bottom. Your software should automatically understand that the selected faces belong to that image.
3: In object mode, scale the sphere down on Z (up/down) to 0.9 (the distance from [0,0,0] to the top should be 0.9 and the distance to the bottom should be -0.9; technically anything less than 1.0 will work, so 0.999 could work, but that has not been verified).
4: Now in object mode, you'll need a baking margin; this tells the software how many pixels to extend past the edge of the island when rendering. I did 5 pixels (but tbh it's only 5 pixels on the curves, not on the actual axis sides, because there wasn't enough texture space to draw 5 pixels; this could be fixed by scaling the unwrapped sphere's UV islands down to make 5 pixels of space around the edge). Some amount of margin is necessary if using mipmaps (everyone uses mipmaps). Mipmapping essentially blends pixels that are near an edge, so if there's NOTHING past the edge then it'll blend Alpha 0.000 or BLACK hex #000000 with your baked normals, causing an ugly alpha crease. Thus you need a margin to trick mipmaps into believing that the texture continues past the UV island bounds.
5: Bake the entire object to normals. If done correctly the software should bake the 2 halves of the sphere TOGETHER, where because it's in ONE operation it's FORCED to contemplate the way the normal colors flow over the equatorial edges of TOP and BOTTOM and make sure they loop (it has to make sure the equatorial edge normals are all the same).
That's how to bake a sphere.
Instead of top to bottom, you can also do front to back, or side to side. Just make sure the edges between your 2 halves are correctly overlaid in UV outline space.
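For anyone on a newer Blender, steps 3-5 boil down to something like this in bpy (a rough, untested sketch assuming the 2.8+ Cycles bake operator rather than the old 2.49 workflow; "Sphere" is a placeholder object name):

    import bpy

    obj = bpy.data.objects["Sphere"]              # the UV sphere, already unwrapped as above
    obj.scale.z = 0.9                             # step 3: stay inside the +/-1.0 mapping range
    obj.select_set(True)
    bpy.context.view_layer.objects.active = obj

    bpy.context.scene.render.engine = 'CYCLES'    # the bake operator lives in Cycles
    bpy.ops.object.bake(type='NORMAL', margin=5)  # steps 4-5: one bake for the whole object, 5 px island margin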
I think the answer is less than none but I'm open to being persuaded
Granted, I only quickly read your last reply and skimmed over the rest of the thread, so I might be misunderstanding something, but it seems like you are missing that if you place a geometry crease (as in a discontinuity in the smoothing of the surface, a hard edge or a different smoothing group) where you see your "normal crease", it doesn't matter if the normal colors are different. They actually have to be.
Provided the baking and display renderer are properly aligned, each side of the seam will be modulated from its respective actual surface normals to the target normals by the normal map providing the difference. So if you wanted to bake a procedural bump map, the easiest way to go about it would be to have one HP sphere with continuous smoothing to which the procedural bump or normal map is applied, and a second LP sphere with hard edges where the UV seams are. But even doing just 1 continuous smoothing group should work fine, depending on your viewing renderer, especially with the shallow angles you have on a sphere.
Edit: Yeah, I misremembered that. You need to split the UVs when there's a hard edge in the LP mesh, not vice versa. When baking and display renderer are properly aligned (using the same math to calculate and read the normals), gamma is right etc. you should not see a seam at those "creases" you are describing, despite different colors meeting in the normal map. The renderer should take care of that.
And you don't even need a second sphere if you just want to bake to the same mesh.
But the colors do matter; even the baking engine I was using knows this. It's just finicky when using sphere mapping (because the up/down axis (Z) determines texture progression: repeating, clipping etc.). Going too close to the 3D world limits of "one repeat of the texture" can result in a partial repeat of X number of lines of pixels, causing anything from a seam and crease to a pinwheel of 2 different maps on one end (so scaling it down slightly on Z fixes that).
ALSO, in my testing here I've determined that you need to bake the entire sphere ALL AT ONCE to get the baking engine to recognise that these faces at the seam are connected and need to be baked as if they were one surface (albeit on 2 different textures). If you bake each half of the sphere separately you'll get different results, where it bakes each half as a self-contained surface without consideration for how the colors will flow over the edge to the next surface (because the baking engine doesn't know the next side exists or thinks the other side has been separated into a different rendering instance). "Instancing" really is a good word for what the baking engine is doing when you bake 2 halves of something separately.
The best example I can give is with specularity and shadows over 2 objects with flush/matching edges: some rendering and baking engines instance them as separate objects during the rendering step, causing specularity and shadows to be uniquely cast onto each object in question (so if the specular focus hits the edge of the 2 objects it will be inconsistent; one side will have full specular illumination at the edge while the other might have partial or no specular illumination at the edge at all, but instead might have its specular focus on the opposite edge of itself), causing an obvious edge in an otherwise flush seamless wall. This too is because of normals.
It would be different depending on the edge normals of the sphere as well, / \ vs __ . If it's / \ then essentially what's happening is you can draw a line from the end of the "/" edge at the angle the face's edge is pointing, and THAT'S where the baking engine thinks this face will continue. It has no clue about the following "\" face, so it will draw the edge normals accordingly (without considering the next face).
Having conflicting normal colors on an edge does not compensate for this "instancing"; instead it accentuates the fact that it was done separately, so they'll look separate, with a crease and/or seam. I'm just reporting what I've observed, and I beat this horse (the horse called sphere baking) to death. It took me many, many headaches to figure out why it was not working: "instancing" by the baking engine.
Let me be clearer; it's like this task:
"I want you to look for anomalies on this line. If there is no anomaly, write '0'; if there is an anomaly such as a surface curve the line is following, I want you to represent that curve with a number from -256 to 256 describing its percent difference in total from the minimum anomaly to 0 and from 0 to the maximum anomaly."
(it's a perfect half circular curve):
-256, -255, ........, 0, ........, 255, 256
Second task: here's another curve. I want you to do the same thing, but I want you to rotate your view clockwise 180 degrees around the axis before starting, so as to align with this next curve:
(it's a perfect half circular curve as well, but now you're looking at it from a different perspective from the last curve; so left is right and right is left but by instinct you start with the LEAST "-256" on the left like you were reading a book and end with the MOST "256" on the right)
256, 255, ........, 0, ........, -255, -256
edge left: -256 meets 256
edge right: 256 meets -256
= crease.
NOW, that's because it's split into 2 tasks, but if it is done in one task, the one doing the task will know that 256 needs to match 256 and that it needs to loop to cover the entire curve, not just half of it (because they know both parts exist). Then it comes out with the correct answer, because it does not realign its view halfway through.
If it were as easy as trusting software to do it any way you threw it at it, then there wouldn't be people wondering why they have creases. And after I investigated this I found out why creases form: it is conflicting normals the software does not know how to compensate for. It sees the object (with smoothing set) as a single surface the light needs to be smoothed over; however, THEN it sees each side of the object as a singular UV island instance with a cut edge that separates it from the other half (even though there's no cut edge).
Purple meeting blue is up meeting down; if there's no gradual blend from purple to blue then you just get a sudden shift in normals, which results in a crease. Remember the software does not create a truly 3D surface with normals; the normals just control how light reflects off it. So if the object surface itself is smoothed, but the texture isn't.... then you get a crease.
Here's a map of Mars with equidistant rings applied via this method:
Please, do find any creases, seams or distortions.
By the way, the "seam" in that pic is running right through the middle pillar of the 3 pillars of Mars, so you can try to look for it in depth.
My response to that is "define properly", because as I've said in my post above, "properly" can be any way that meets a few key rules. 1: the normals are baked to images in ONE baking operation, not 2 separate operations. 2: the edge normals match, so light flowing over a smoothed surface knows that it should continue to flow smoothly over this surface between the 2 maps. 3: the pattern of the edge pixels needs to match or be an exact continuation thereof. 4: make sure there's a margin edge around the mapped islands so mipmaps don't blend Alpha 0.000 with the edge colors of your map.
4 rules. If you meet them, your sphere will not have seams or creases. Then it doesn't matter if you use a UV sphere or a geosphere, or a human character, etc.; the outcome will be seamless.
Might I add, UV spheres have benefits that geo or icospheres can't offer as easily. UV spheres are split into concentric rings; it's easier to divide them in ways that would be a chore or would even break your map with an icosphere or cubesphere.
I am doing more research on this. From intuition about a test I'm going to run today, it may actually be best to do a front and back set of UV islands rather than top to bottom. I won't say why till I'm certain that it works for what I think it does.
It works in animations with procedurally generated stuff, but not in the game engine, so I guess I could say I developed this technique for use in game engines. (My original idea way, way back now was to make a sky sphere instead of a skybox, with realistic day and night transitions, clouds etc. in game. Then I had to study creases and seams because, honestly, my top-centered flared model had its issues: you would need 2 spheres and swap between them based on the view angle to the object, and it wouldn't work with transparent spheres. But with this new method, there are no limits.)
No, seriously, I'm missing all the information here at the moment to really be able to judge. The image is too dark and too small, the texture resolution is unknown, the mapping is unknown. From what I can see here I could give you the same result with an SDS cube, way less hassle, and nearly distortion-free. Having the cube method alongside for comparison would be a great thing. Even better, a Blender scene with the two versions side by side.
But please not in Blender 2.49 format
Honestly, I've tried the cube method in the past, and the icosphere method, and neither worked in this software. However, with the 4 rules I've come to know about sphere mapping, it could be possible to do the cubesphere or icosphere methods, but that'd take a day or so of work to set up with my current understanding of them (involving trial and error).
Regardless, ask for images and you shall receive. Ask for a file and I wish I could help... I could do a .obj with packed images if you like? I think that's the only format that can exchange between 2.49b and Eevee 2.8+.
ok pics first:
ok enough illumination?:
this is the way it's mapped:
Keep in mind again, this is a UV sphere. I have not found anyone online who has told me how to get seamless, creaseless normal maps to bake to a UV sphere, so as far as I understand this is new territory.
Here are the actual texture images applied to the UV sphere (should be fairly easy to apply to another UV sphere):
Mars top:
Mars Bottom:
now I'll look into exporting as obj.
object "set smooth" may be required on import.
by the way, I said to look for a seam, because there is a seam but it's so faint you'd miss it if you weren't looking for it, and have to be way zoomed in to see it at all, and even then if you take your eyes off it again, you'll have to look for it again. it's that unnoticeable.
EDIT!!!!! UPDATE!!!!:
I also managed to unwrap Front to Back and the outcome is, I can actually take a sphere-mapped texture and turn it back into a flat rectangular world map!!!!
I have defeated spheres. Well... almost. I also have a way to record a sphere-looping texture to a flat rectangular map, but it suffers from a lot of corner-to-corner mirroring. If only I could find a better way to bake generated textures which flawlessly repeat on all 4 sides of a rectangle without any mirroring. Still, now I can place any texture on a sphere and have no creases when using normals, and now I can generate textures from a sphere to a flat rectangular map.
Here's how to do the rectangle map from a sphere:
step 1: from front view in orthographic, unwrap a sphere and project from view to bounds.
step 2: add your texture to the object, set to sphere and global in the mapto settings. make it shadeless for a color map.
step 3: using a square image as a basis, scale each latitudinal (up to down) column of faces; scale them in sets of 2 (right and left), both on front and back at the same time ideally. scale them on X in the UV editor to the image bounds, then scale the internal curve to the image bounds on X as well. then scale them down by their percent of the total: for 2 faces as part of 16 different columns (32 columns all the way around) you have 8 groups which need to be scaled; the first group (center) needs its outside scaled to 0.125, the second group should have its inner curve scaled to 0.125 and its outer curve scaled to 0.25, etc., in iterations of +0.125 for each curve of each group of columns.
step 4: scale each row of vertices (longitudinal lines) on X to the image bounds. this step will make all the faces of the sphere share equal texture space on X.
step 5: scale all the groups of rows on Y until they are all equally proportioned on Y, including the top and bottom points of the triangular end caps of the UV sphere as one row of faces.
step 6: subdivide the spoke edges of the end caps ONCE. place your 3D cursor at the top of the top point of the cap, and scale the newly created ring of vertices to less than 1 pixel wide compared to your texture space, using the 3D cursor as the center of the scaling (in my case, with a UV sphere of -1 to 1 wide (2 total) and a texture space of 560 pixels (the Mars image happened to be like that), 1 pixel works out to about 2/560 ≈ 0.0036 units, so this came to a spoke edge length of 0.0035).
step 7: in the UV editor, scale this newly created line of vertices to the image width on X. then, selecting the top line of these newly added UV edges, move it to 0.5 pixels from the top of the image. move the bottom line of vertices to 0.5 pixels from the bottom of the image space.
step 8: widen the image to the full width of the total rectangular map.
step 7: scale the UV outlines to 0.5 on X
step 8: in the UV editor place the back side next to the front side and scale the backside -1.0 on X.
step 9: select all the triangle tip points of the tops and bottoms of the sphere in the UV editor, and scale on X to 0 so they are centered on X and all 4 points meet.
step 10: select the corner points of the rectangular UV map and Scale them on Y to the image height.
step 11: BAKE.
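As a sanity check on where those steps end up: every face gets an equal slice of the rectangle, which is just the standard latitude/longitude (equirectangular) grid. A small pure-Python sketch of that target layout (not the manual procedure itself, just the grid it converges to):

    def latlong_uv(segments=32, rings=32):
        """Return (u, v) in [0, 1] for each grid corner of a UV sphere with
        `segments` columns and `rings` rows; the extra column closes the seam."""
        uvs = {}
        for i in range(rings + 1):          # 0 = north pole row, rings = south pole row
            for j in range(segments + 1):
                uvs[(i, j)] = (j / segments, 1.0 - i / rings)
        return uvs

    print(latlong_uv()[(16, 16)])   # the equator, halfway around -> (0.5, 0.5)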
Doing a front-to-back or side-to-side UV unwrap that fills the entire texture space of a rectangular map is a lot of work! HOWEVER... it's actually superior to my top-and-bottom method, as you can literally apply any rectangular or square image to it from UV, and bake any spherical textures from it to a rectangular map. It's the best of both worlds. HOWEVER, I'm not certain the normals would loop right in this situation! There may be a crease if you tried to bake normals like this. I'll have to test that.
Edit again: testing complete. NO, do not apply a flat rectangular map in the front-to-back method through the 11 steps above! It results in polar starfishing. So, the top-to-bottom method is superior for applying sphere textures to a UV sphere, and the front-to-back method is superior for generating rectangular maps FROM sphere-mapped textures.
images incoming:
the UV outlines:
the original texture (credit nasa):
the output texture of the 11 step process:
I might not have made it exactly the same width as the original, but meh, it's just an illustration of the theory.
Interesting, it's a less clear image. I blame mipmaps! Perhaps I can have even better textures overall if I completely turn off mipmaps? That may delete the TINY TINY seam I had in the top-to-bottom method!........
Yes thanks. But i would have loved to see a closeup of the seam.
The reason why I asked for a closer shot of the seam was that it's also about the pixels of the texture aligning with the borders of the mesh. When it does not align, you end up with neighbour pixels bleeding through, since they are half covered. And this is then very visible in normal maps, for example; it changes the values of the normals. And with both shown methods you have very visibly different texel resolutions. The equirectangular method shows this very dramatically at the poles. Mapping the sphere from above shows it at the seams versus the pole, even with your preparations. You cannot trick physics. And the inner rings simply display fewer texels. And again, the direction of the mesh runs against the direction of the pixels in the texture, which results in bleeding through.
The cube of course also has some disadvantages, no question. You now have eight poles instead of two. It's a tradeoff, no matter what method you choose. And in the end, whatever works is of course allowed. But I always ended up with a cubic mapping ...
You put in an enormous effort to end up with something that can be achieved much better with a simple subdivided cube* and way less effort.
*note that in Blender you then need to Cast to Sphere; SDS in Blender doesn't really make the cube completely round.