So it comes with a version of Unreal Editor 3. Albeit an older one, but it's something we peons can play with. Of course, you have to fork out $200 to get ahold of it.
Kind of makes the idea of a dedicated video or physics card seem a bit silly. It's okay to assume multiple CPUs as a standard as long as you pretend they're for specific things? Twisted way to go about it, but I guess it's working.
@acc:
We can also run physics on our CPUs, but that doesn't make the idea of a physics card useless. Running physics on your video card will slow it down.
And it's not just pretending they're for specific things. As far as I know, those RISC processors have a specialised instruction set, and even at slower clock speeds they're faster than the CPU in their field.
[ QUOTE ] @acc:
We can also run physics on our CPUs, but that doesn't make the idea of a physics card useless. Running physics on your video card will slow it down.
And it's not just pretending they're for specific things. As far as I know, those RISC processors have a specialised instruction set, and even at slower clock speeds they're faster than the CPU in their field.
[/ QUOTE ]
ATI claims that the X1k family is many times faster than the Ageia card; while it's not as efficient, it has so much more raw processing power.
And they have the bonus of people already having the cards, so next time they upgrade they can set the old card on physics duty. Even if I have to buy a new mobo, it's still almost half the price of the Ageia card.
[ QUOTE ]
So it comes with a version of Unreal Editor 3. Albeit an older one, but it's something we peons can play with. Of course, you have to fork out $200 to get ahold of it.
[/ QUOTE ]
Yep, Airtight's demo was done in U3, and since the binary includes the editor, you get a copy of it as well. They had a demo of the card recently at my school (DigiPen), and there were multiple instances in the Cell Factor demo where there was a noticeable hang when fluids were spawned. Airtight's demo also dropped down to ~15 fps at some points, and this was on a high-end Dell XPS machine.
As far as on-card physics goes, I think Nvidia/Havok's solution is a little cleaner than Ageia's approach. Basically you have two cards in SLI; if you have heavy physics, one card is dedicated to simulating it, otherwise you can just crank up the graphics.
[ QUOTE ]
As far as on-card physics goes, I think Nvidia/Havok's solution is a little cleaner than Ageia's approach. Basically you have two cards in SLI; if you have heavy physics, one card is dedicated to simulating it, otherwise you can just crank up the graphics.
[/ QUOTE ]
That, and with the advent of Nvidia's quad SLI and new 512 MB video cards, we have all-new levels of insane combinations for rendering graphics and computing physics.
I'm wondering how long a computer with 2 gigs of video card power would remain "high end" before next-gen games wipe it out.
Dravalen, I saw the developer presentation right after yours.
Anyhow, one point that was brought up with having the physics on the same line as the video is bandwidth: that particular PCIe slot ends up carrying both physics and graphics traffic.
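To put rough numbers on that contention (a back-of-envelope sketch; the 250 MB/s per-lane figure is the standard first-generation PCIe rate, but the traffic figures are invented for illustration):
[ CODE ]
// Back-of-envelope PCIe budget check. The per-lane rate is first-gen PCIe;
// the graphics and physics traffic numbers are assumptions, not measurements.
#include <cstdio>

int main() {
    const double mbPerLane  = 250.0;           // MB/s per lane, per direction
    const double slotBudget = 16 * mbPerLane;  // x16 slot: 4000 MB/s each way

    const double gfxTraffic  = 2500.0; // MB/s of geometry/texture uploads (assumed)
    const double physTraffic =  800.0; // MB/s of physics state readback (assumed)

    printf("x16 budget: %.0f MB/s, combined load: %.0f MB/s (%.0f%% of the slot)\n",
           slotBudget, gfxTraffic + physTraffic,
           100.0 * (gfxTraffic + physTraffic) / slotBudget);
    return 0;
}
[/ CODE ]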
Oh, and a dedicated physics card is capable of surpassing a dual-core processor by a factor of 10.
One workaround mentioned is having a smaller physics chip on the video card dealing specifically with items that are tied closely to the graphics, like cloth, while having a separate dedicated physics card for more "global" effects.
[ QUOTE ] @acc:
We can also run physics on our CPUs, but that doesn't make the idea of a physics card useless. Running physics on your video card will slow it down.
And it's not just pretending they're for specific things. As far as I know, those RISC processors have a specialised instruction set, and even at slower clock speeds they're faster than the CPU in their field.
[/ QUOTE ]
I know, that's not my point. My point is that if we're going to be using these specialized cards for doing a wide range of things, then is it really specializing? If a DX10 video card can handle collision, animation, fluid simulation, AI, etc., then it isn't just a "video" card anymore.
There's nothing wrong with cards going this way. It's just dumb to pretend that all we've got is a dedicated video card when it's capable of doing pretty much any game-related task we want it to.
Perhaps we should keep it specialized... otherwise it becomes just another processor if it starts to do everything. Parts of our brains are organized to handle special tasks, and only those tasks, so why not computers?
I.e., the sound card should handle sound processing, the physics card should handle the physics processing for the game engine, the graphics card the graphics, and the CPU should handle routing the information to each different part of the machine, plus the AI. That pretty much covers all the bases.
http://www.cellfactorgame.com/
Not a gimmick. Highly usable for MP deathmatch, as the videos show. Thrown objects, shields, vehicles, etc. I have an older PhysX development card donated by Ageia. Since I can't use the Reality Engine anymore, I've been using it with the DarkBasic Pro physics extensions. It's amazing, for lack of a better word. Now we just need a video card that can keep up with the number of objects we are throwing at it.
US$300 according to their website is a lot. I mean, their software probably is smart enough to handle multiple processors, as is done on the PS3 and Xbox 360, so the card is probably a way to "emulate" the multiprocessor stuff from consoles more easily on PC. But when the card costs almost as much as the whole console, why get it?
Games will be made to get the most out of the PS3 and Xbox 360, and we will get more multi-core CPUs for the PC anyway, so I somewhat doubt this will really be the big thing... if it were a third or a quarter of the price, it would have a much better chance, IMO.
It's PC gaming. The excitement of new technology, and the ability to brag about or view the best-of-the-best settings in games, etc. $300 isn't bad for new tech. I can barely touch the PPU's limits versus the rendering power of a 7800 GTX on my box. It's pretty easy to say this card will scale for a year or so.
Not to mention Nvidia's or ATI's next step may be to acquire this technology and build it into their next-gen video cards.
[ QUOTE ]
Not a gimmick. Highly usable for MP deathmatch, as the videos show. Thrown objects, shields, vehicles, etc. I have an older PhysX development card donated by Ageia. Since I can't use the Reality Engine anymore, I've been using it with the DarkBasic Pro physics extensions. It's amazing, for lack of a better word. Now we just need a video card that can keep up with the number of objects we are throwing at it.
[/ QUOTE ]
LAN only, though; you won't be seeing multiplayer like this across the internet for a long time, if ever.
[ QUOTE ]
Not to mention Nvidia's or ATI's next step may be to acquire this technology and build it into their next-gen video cards.
[/ QUOTE ]
There is not much technology behind it; NV or ATI could easily design their own, not to mention GPUs are really good at physics too.
You may ask why Ageia is the only one coming out with a physics card, then?
Because it's way too optimistic to assume a significant number of people will buy one, and thus it will most likely flop.
[ QUOTE ]
LAN only, though; you won't be seeing multiplayer like this across the internet for a long time, if ever.
[/ QUOTE ]
Sounds fishy; theoretically it could run over the net if all PPUs always came up with the same results.
But from the looks of it being LAN-only, I would assume it's very random, so you have to send all the coordinates for the stuff affected by physics. I assume the PPU can't handle proper physics and has to cheat by approximating.
There's not a large chance of it happening; the only way to get it to work is if your physics is very deterministic (non-random), so that you can fire off an event at a certain time and have the client side do all the prediction for you. A game that uses a lock-step network model, like an RTS, might be able to pull it off. However, most FPS and twitch games tend to use more of a predictive model.
Since a good majority of objects are predicted, one needs to send positions and velocities for each. When you have thousands of objects in a scene, there's simply no way to send updates for each item over an internet connection.
Now there are probably a few tricks you could do to get it working (do physics in lock-step or something similar), but as you can see, it's far from a trivial problem.
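To see why sending state doesn't scale: at single precision, a position plus a velocity is 24 bytes per object, so 1,000 active objects at 20 updates a second is already ~480 KB/s, far beyond a typical 2006 connection. The lock-step trick sends only inputs instead; here's a minimal sketch (hypothetical names; it only works if stepSimulation() is bit-for-bit deterministic on every machine, which is exactly the hard part):
[ CODE ]
// Lock-step sketch: peers exchange only player inputs, never object state.
// Requires stepSimulation() to produce identical results on all machines.
#include <cstdint>
#include <map>
#include <vector>

struct PlayerInput { float moveX, moveY; bool fire; };

struct LockstepSession {
    uint32_t tick = 0;
    int numPlayers;
    std::map<uint32_t, std::vector<PlayerInput>> pending; // tick -> inputs

    explicit LockstepSession(int players) : numPlayers(players) {}

    void receiveInput(uint32_t forTick, const PlayerInput& in) {
        pending[forTick].push_back(in);
    }

    // Advance only once every player's input for this tick has arrived.
    bool tryStep() {
        auto it = pending.find(tick);
        if (it == pending.end() || (int)it->second.size() < numPlayers)
            return false;            // stall until the inputs are in
        stepSimulation(it->second);  // identical outcome on every machine
        pending.erase(it);
        ++tick;
        return true;
    }

    void stepSimulation(const std::vector<PlayerInput>&) { /* physics here */ }
};
[/ CODE ]
An RTS can live with the stall-until-inputs-arrive behaviour; a twitch FPS generally can't, which is the point above.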
You do it like this:
Player A throws a box; Player B's PPU gets the initial info, like direction and force, and now it should be able to calculate all the events caused by that box.
Now, to keep small lag from killing the game, you sync the computers at the beginning of the round and timestamp each packet, so if one computer lags behind a little, it will speed up the physics and catch up. Of course, this assumes that you don't intend to use the PPU at 100% all the time, but keep a reserve.
And of course, you trust the client 100%.
For 1-on-1 duels, I think it would be doable.
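A rough sketch of that catch-up loop (names are hypothetical; it assumes the round clocks were synced at the start and the PPU has spare capacity to step faster than real time):
[ CODE ]
// Catch-up sketch: if our simulation falls behind the shared round clock
// (say, after a lag spike), run extra fixed steps until we're level again.
void stepPhysicsOnce(double dt) { /* advance the simulation by dt (stub) */ }

const double kStep            = 1.0 / 60.0; // fixed physics step, seconds
const int    kMaxStepsPerTick = 8;          // the "reserve": never peg the PPU

double simTime = 0.0;                       // how far our physics has advanced

void updatePhysics(double sharedRoundTime) { // from synced clocks + timestamps
    int steps = 0;
    while (simTime + kStep <= sharedRoundTime && steps < kMaxStepsPerTick) {
        stepPhysicsOnce(kStep);             // a lagging machine takes >1 step
        simTime += kStep;
        ++steps;
    }
}
[/ CODE ]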
Also, you'd have to store the game state for the last n milliseconds (depending on the lag); during a lag spike that can easily be half a second. You have to use a client-server model; peer-to-peer always desyncs. In this case the server would do all the physics calculations and send regular complete syncs to the clients (many times a second, perhaps limited to the client's immediate surroundings), plus any change events that occur between the complete syncs. The client PPUs would only do local prediction and use their predicted situation to send the change events to the server.
The result would still be very bandwidth-hungry, but any less and you risk desyncs.
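In message terms, that model might look roughly like this (a sketch; all types are hypothetical, the point is the two kinds of traffic, periodic full syncs plus in-between change events):
[ CODE ]
// Sync sketch: the server sends periodic complete snapshots of the client's
// surroundings, plus change events between snapshots; clients only predict.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct ObjectState {              // 28 bytes per object, uncompressed
    uint32_t id;
    Vec3     position;
    Vec3     velocity;
};

struct Snapshot {                 // complete sync, many times a second,
    uint32_t serverTick;          // limited to the client's surroundings
    std::vector<ObjectState> objects;
};

struct ChangeEvent {              // sent between complete syncs
    uint32_t serverTick;
    uint32_t objectId;
    Vec3     impulse;             // e.g. "this box was shoved"
};
[/ CODE ]
Even uncompressed at 28 bytes per object, 500 nearby objects synced 20 times a second is ~280 KB/s before any change events, which is why it stays so bandwidth-hungry.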
Or the PPU could handle the non-gameplay-essential elements of physics processing, like leaves, grass, trees blowing in the wind, particle effects, liquids/water effects, etc., while the server still handles the good old-fashioned physics stuff, like player-on-player collision, player-on-environment, or player-on-projectiles/interactive objects. That way it wouldn't really affect people during a lag spike, or if something was calculated oddly. Who would care, or even know, if a blade of grass, leaf, or pond ripple interacted differently on someone else's machine compared to their own? Have the PPU control and calculate the mundane icing-on-the-cake stuff for now.
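That split is easy to express (a sketch with a hypothetical enum): tag each effect, and only gameplay-class objects ever touch the network.
[ CODE ]
// Split sketch: gameplay physics stays server-authoritative and networked;
// cosmetic physics runs purely on the local PPU and is never synced, so a
// leaf landing differently on two machines costs nothing.
enum class PhysicsClass {
    Gameplay,  // player collision, environment, projectiles, interactive objects
    Cosmetic   // leaves, grass, cloth flutter, particles, pond ripples
};

struct PhysicsObject {
    PhysicsClass cls;
    // ... body state ...
};

bool needsNetworkSync(const PhysicsObject& o) {
    return o.cls == PhysicsClass::Gameplay;
}
[/ CODE ]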
I don't think of physics as just a gimmick; I just see it as an advancement. Maybe a dedicated physics card is gimmicky. Most 3D games now have some form of required physics calculations, and as multi-core CPUs become more popular and available, it might make more sense to use the 2nd (or an available) core as a physics processor. With the PS3 and Xbox 360 hosting multiple cores, this could be (or is being) done there too.
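A minimal sketch of that idea (modern std::thread used for brevity; a 2006 engine would use platform threads directly, and all names here are hypothetical):
[ CODE ]
// Second-core physics sketch: rendering stays on the main thread while a
// dedicated thread steps the simulation at a fixed rate on another core.
#include <atomic>
#include <chrono>
#include <thread>

void stepPhysicsOnce(double dt) { /* integrate bodies by dt (stub) */ }

std::atomic<bool> running{true};

void physicsThreadMain() {
    using namespace std::chrono;
    auto next = steady_clock::now();
    while (running) {
        stepPhysicsOnce(1.0 / 60.0);
        next += milliseconds(16);           // ~60 Hz fixed step
        std::this_thread::sleep_until(next);
    }
}

int main() {
    std::thread physics(physicsThreadMain); // lands on the spare core
    // ... render loop would run here on the main core ...
    running = false;
    physics.join();
    return 0;
}
[/ CODE ]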
[ QUOTE ]
Oh, and a dedicated physics card is capable of surpassing a dual-core processor by a factor of 10.
[/ QUOTE ]
And 78% of all statistics are pulled out of someone's ass.
[ QUOTE ]
Of course then we're back to square one: Physics being nothing but a gimmick.
[/ QUOTE ]
It isn't any more of a gimmick than normal mapping, higher polycounts, reflective textures, per-pixel lighting, self-shadowing, shaders, in-game music, etc. They can all be used to add mood and a touch of realness to a game.
It still doesn't play a key role in gameplay, just as you could "downscale" many games technically and they would still work.
The thing that would worry me about physics as a gameplay feature is that AI would have an extremely tough time using it. We humans are just so trained in the basic laws of physics; while we are not all MacGyver, most of us could create some logical chain of physical interactions to solve puzzles. How would AI do that (other than scripted), and how would it react to it? Exactly this is still a big research topic in the robotics industry: making robots act autonomously in our world.
Hence I think physics could play a key role in the typical "human solves puzzle" situation, or just in the detailing, like particles, debris, smoke... which of course is great for better mood, but doesn't really affect the gameplay a lot.
And as we have had those types of puzzle games (The Incredible Machine) for a long time, there is no real revolution here, IMO. It mostly feeds the overall drive towards more effect-laden environments, better graphics, effects...
Is that UnrealEd 3? If so, that's the one that comes with Unreal 2004! If it's the newest, still-under-wraps, I-wanna-play-with-it-now version, then I'm gonna buy a physics card... I'm not sure I want one... Anyone want to buy a physics card without the bundled software?