Looks cool but the final example doesn't make sense to me. I don't see any birds in the image prior but there's one in the top left smart filled area. Maybe I'm misunderstanding what he had before that...
That's the magic wand cursor.
My mind is being blown by all this... junk people are making! First that smart resizing thingy, with the deciding what should be kept in the image, and now this. How the heck does it even work?!
WTF, this shouldn't be possible. It irks me that they don't show the image without the selection thingy after the C-A fill, so you could see whether the scene really fits together.
Called bull on that one too for other reasons. There's no repetition or evidence of sampling on the sky or mountains. How did the program know those were mountains and to continue making it jagged? In other words why did it extend the horizon line on the left side, but made it rigid on the right?
Photoshop begins to learn at a geometric rate until on August 29, 2010, Photoshop became self-aware
lol but seriously, if it works as advertised I can see it becoming such a huge time saver in the long run it's crazy. I hope it's not an early april fools :P
There have been videos of this tech in various forms for years, some of which are way more mind-blowing than this (not that this isn't totally crazy already). It's awesome to see it's finally coming out!
hmmmm, at the beginning I was like, sure, I can see the artifacts in the healing brush, I believe... then that end shot..........mmmmmmm..............
edit- just saw that video, pseudo- ok, maybe....... looks kind of understandable now. I'm guessing that for any shape you completely encompass with the tool, it checks for colour ranges that cross the shape, then fills that with a selection it can find on the rest of the image...... impressive
still unsure about the panorama from the first one though
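fwiw that guess is pretty close to how the published exemplar-based inpainting papers do it: compare the pixels ringing the hole against every other window in the image and paste in the best match. Here's a toy single-patch sketch in NumPy (my own naming and simplifications, definitely not Adobe's code):

```python
import numpy as np

def fill_hole(img, y0, x0, size):
    """Toy exemplar-based fill: search the image for the window whose
    1-pixel border best matches the pixels ringing the hole, then paste
    that window's contents into the hole."""
    h, w = img.shape
    # border mask for a (size+2) x (size+2) window around the hole --
    # these border pixels are the "colour ranges that cross the shape"
    border = np.zeros((size + 2, size + 2), dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True

    target = img[y0 - 1:y0 + size + 1, x0 - 1:x0 + size + 1]
    best_pos, best_cost = None, np.inf
    for y in range(1, h - size - 1):
        for x in range(1, w - size - 1):
            if abs(y - y0) < size + 1 and abs(x - x0) < size + 1:
                continue  # candidate would overlap the hole itself
            cand = img[y - 1:y + size + 1, x - 1:x + size + 1]
            cost = np.sum((cand[border] - target[border]) ** 2)
            if cost < best_cost:
                best_pos, best_cost = (y, x), cost

    y, x = best_pos
    out = img.copy()
    out[y0:y0 + size, x0:x0 + size] = img[y:y + size, x:x + size]
    return out
```

Real implementations fill with lots of small overlapping patches and blend them, which is why you don't see one big obvious copy in the result.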
too good to be true?!? you mean that when the awesome love machine tool from the gods finishes its progress bar voodooism that it won't look like a patch job of wonderous proportions and blurry smush mush dog slop?!?!?!?!
egads!@!
maybe not too awful, but not perfecto
yeah i'm sorry... just look at the second video at the result @ 3:34...
for quick starting points, yeah, but final finished result... hell no... a time saver, not a life saver.... oh, that sounds damn tasty..... lifesavers... yummm.
Great... An option box that pops up every time I hit delete. So I also have to hit enter or click it... What a joke, this should be a separate tool or action, not clog up a simple function like delete.
I'm usually wary of automatic solutions because they never work as nicely as they do in these videos. I guess we'll see if they ever implement it. Seems like a pretty interesting tool, though. Should save some time clone stamping. The last example was by far the best.
"Build out the bushes, the foreground, the sky.."
HAHAHAHAHAHAH! That's a genius boy!
This seems so... illogical to me? It's one thing to identify the surrounding pixels and make a well-calculated guess - which, I suppose, couldn't be achieved so accurately with computing - and another to guess the ELEMENTS surrounding other elements.
If you look, you can see a section of the outer bit of the photo got copied, mirrored, and scaled outward to fit into the empty space. There are a bunch of repeating elements all around the border, and I'm sure up close the distortion is noticeable. Just a nice scale-and-sample tool is all.
From what I understand from the tech demos of this (they've been working on it for YEARS), it follows obvious lines (the horizon line, buildings, parts of stuff, etc.) and tries to figure out what to do from there. It samples the tree line for anything above the hill, and samples the grass for anything on the hill, but all the program is seeing is sections. The deleting on the lens flare seemed really nice and easy.
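That "sample each side of a line from its own region" idea can be sketched in a few lines. A toy NumPy version, with everything the real tool has to detect (the boundary row, the hole mask) just handed in up front:

```python
import numpy as np

def structure_guided_fill(img, mask, boundary_row, seed=0):
    """Toy structure-guided fill: missing pixels above the boundary line
    get filled from known pixels above it (the 'sky'), missing pixels
    below it from known pixels below (the 'grass')."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    rows = np.arange(img.shape[0])[:, None]       # column vector of row indices
    for side in (rows < boundary_row, rows >= boundary_row):
        src_y, src_x = np.nonzero(side & ~mask)   # known pixels on this side
        hole_y, hole_x = np.nonzero(side & mask)  # missing pixels on this side
        if len(hole_y) == 0:
            continue
        # fill each missing pixel from a random known pixel on its own side
        picks = rng.integers(len(src_y), size=len(hole_y))
        out[hole_y, hole_x] = img[src_y[picks], src_x[picks]]
    return out
```

The hard part the real tool solves is finding those lines automatically and copying coherent patches instead of single pixels; here the boundary is simply given.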
This looks like a combination of the tech of their existing healing brush, patch tool, and pattern maker plugin.. and maybe a couple more.
You can analyze an image at more than an individual-pixel level using a variety of techniques. Also, seeing as how CS3+ have started using GPU resources, they could likely code this as a shader or a CUDA extension to the main program and do things that would have been computationally prohibitive a couple of years ago.
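The inner loop that benefits most from the GPU is the patch-distance search: scoring one patch against every candidate window is embarrassingly parallel. A stand-in sketch in NumPy (my names, not anything from Photoshop):

```python
import numpy as np

def ssd_cost_map(img, patch):
    """Sum-of-squared-differences between one patch and every same-size
    window in the image. Each output cell is independent of the others,
    which is why this kind of search maps so naturally onto a shader or
    CUDA kernel -- here it's just vectorized NumPy as a stand-in."""
    ph, pw = patch.shape
    # view of shape (H-ph+1, W-pw+1, ph, pw): every candidate window
    windows = np.lib.stride_tricks.sliding_window_view(img, (ph, pw))
    return ((windows - patch) ** 2).sum(axis=(2, 3))
```

On a GPU you'd launch one thread per candidate offset and take the argmin of the resulting cost map to find the best patch to copy from.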
Replies
EDIT: oops... it is the cursor, should probably read the whole thread before giving smart answers
wait, how does it do that? Wizardry? Alien technology? Some sort of voodoo?
edit: wait, no, it's a bit more complex than just mirroring and the left side is more of a smooth form. The people at Adobe are sorcerers, confirmed.
> insert facepalm smiley here <
But on a brighter note, HOLY CRAP! That's awesome!
They teased this last year; it took longer and gave crappier results. I'm glad they're sticking with it.
Well, what about the trashy blend brush thingy in cs5? Since it'll be released in a couple of days