
Geek Culture / What Kind of Signals Are Sent to the GPU For 2D Rendering??

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 29th Mar 2011 23:46 Edited at: 30th Mar 2011 01:29
Not quite sure if 'signals' is the right word for the title, but what kind of data is sent to the GPU? I know in the old days the video memory just held the entire screen, and the D/A converter turned the data in video memory into analog signals that were sent to the screen. I know that nowadays there is still video memory, but what kind of data is put in there for the GPU to render? Wikipedia says that vertices, lighting, textures, etc. are put there for 3D scenes, but what kind of data is put there for 2D scenes?

As always, answers are much appreciated!

kaedroho
Joined: 21st Aug 2007
Location: Oxford,UK
Posted: 30th Mar 2011 00:03 Edited at: 30th Mar 2011 00:05
The data is very similar to what's sent to the GPU when doing 3D, as 2D in a lot of games is done with Direct3D anyway.

Direct3D/OpenGL instructions are not sent directly to the GPU (as many people think). They are sent to the driver, translated into a format specific to that GPU, and then sent on. There is very little public information on those formats, as most companies don't like to release details about them for some reason.

The data sent to the GPU is mainly textures and vertices. I think shaders are also stored in video memory (but I'm not sure). As I said above, the data is probably not the same as it was passed in the command; it may have been processed or compressed by the driver.
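For illustration, here's a minimal sketch of that upload step, using OpenGL 1.x since it's compact (Direct3D is analogous). It assumes a valid GL context already exists, e.g. one created by GLUT or SDL:

```cpp
#include <GL/gl.h>

// Hand the driver a block of pixels; the driver copies (and possibly
// reformats) it into video memory in a vendor-specific layout.
GLuint uploadTexture(const unsigned char* pixels, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);               // ask the driver for a handle
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    return tex;   // from here on, drawing refers to the texture by handle
}
```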

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 30th Mar 2011 00:47 Edited at: 30th Mar 2011 01:31
Ah, I knew about the driver stuff, but it seems like everything is rendered in kind of a 3D way? So pretty much the image is loaded into video memory, and then the GPU is told what to do with it from then on? If so, how is video processed? I would guess that in that area of graphics processing the entire video is sent to video memory and then played by the GPU... Thanks!

Kevin Picone
Joined: 27th Aug 2002
Location: Australia
Posted: 30th Mar 2011 01:47 Edited at: 30th Mar 2011 01:53
Quote: "I know in the old days the video memory just held the entire screen and the DA converter converted the data on the video memory into analog signals that were sent to the screen. "


Oddly enough, the principle hasn't really changed. We've always had 'video memory' and chips that decode the frame buffer(s) and spit it out to the display device (TV/monitor).

Many older systems (from 8-bit to 32-bit) used a shared memory approach, though: graphics, audio and CPU code all had access to a shared pool of memory, rather than the separate dedicated memory that's become the norm today.

The interfaces for programming old-school graphics chips, for example, are simply a matter of setting the chip's modes by writing directly to the device's hardware interface. The interface is mapped into visible memory (normally as an overlay), so you can write to a certain address and set the chip's display properties. So if you wanted a sprite, or wanted asynchronous memory transfer, you wrote the info to the device and enabled it.

Many people here probably have a C64 background, so if you're really interested then I'd recommend downloading a C64 emulator and having a crack at programming its VIC chip, which you can do from its built-in BASIC.
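For a taste of what that looks like, here's a hedged sketch of the same idea in C++. The addresses are the VIC-II's real registers (the same locations as POKE 53280 / POKE 53281 in C64 BASIC), but actually running it assumes a cross-compiler that targets the C64, such as llvm-mos:

```cpp
#include <stdint.h>

// The VIC-II's registers are mapped straight into the address space,
// so "programming" the chip is just writing bytes to fixed addresses.
volatile uint8_t* const BORDER   = (volatile uint8_t*)0xD020; // border colour
volatile uint8_t* const BACKGRND = (volatile uint8_t*)0xD021; // background colour
volatile uint8_t* const SPR_EN   = (volatile uint8_t*)0xD015; // sprite enable bits

int main()
{
    *BORDER   = 0;     // black border; the chip reacts immediately
    *BACKGRND = 6;     // blue background
    *SPR_EN  |= 0x01;  // switch on hardware sprite 0
    for (;;) { }       // spin; the VIC keeps generating the display on its own
}
```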

Programming hardware interfaces directly was/is fun, but it doesn't really work if you want interchangeable components. So these days the hardware interfaces are abstracted from the programmer behind drivers. The driver gives us (via the OS) a common interface; how it's implemented behind closed doors is mostly irrelevant to us.



Quote: "I would guess that in that area of graphics processing the entire video is sent to video memory then played by the GPU"


Given the size of any reasonable video clip, caching the entire thing in video memory isn't a viable option.

In the simplest model, the data packets are streamed from the source (wherever it may be), then decompressed into frame data and pushed into video memory for display; the same goes for audio.

Given the diversity of PC hardware, the decompression passes need to run on the CPU by default. But there's nothing stopping the decompression code from using local hardware if available. For example, perhaps the system has a hardware decoder (MPEG, DSP) or GPU shaders; then that part of the data path can be offloaded from the CPU.
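A hedged sketch of that simplest model, just to make the flow concrete; the demuxer/decoder functions here are hypothetical stand-ins, not a real library API:

```cpp
#include <GL/gl.h>

struct Frame { const unsigned char* rgba; int w, h; };

// Hypothetical helpers: in practice a codec library (or a hardware
// decoder / GPU shader path, where available) fills these roles.
extern bool readNextPacket(void* source, void* packet);
extern bool decodePacket(const void* packet, Frame* frame);

void playLoop(void* source, GLuint tex)
{
    void* packet = nullptr;
    Frame frame;
    while (readNextPacket(source, &packet))  // stream packets, never preload the file
    {
        if (!decodePacket(packet, &frame))   // decompress on the CPU by default
            continue;
        glBindTexture(GL_TEXTURE_2D, tex);
        // Only the current frame ever occupies video memory:
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame.w, frame.h,
                        GL_RGBA, GL_UNSIGNED_BYTE, frame.rgba);
        // ... draw the frame, present, wait until the next frame is due ...
    }
}
```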

kaedroho
Joined: 21st Aug 2007
Location: Oxford,UK
Posted: 30th Mar 2011 01:53 Edited at: 30th Mar 2011 01:55
Videos are never loaded fully, because of the amount of RAM they would use; some films take a few GBs of space. Instead they use something called streaming: the player loads the bit which is currently being processed/displayed, and when it's done with it, it deletes it from RAM and loads the next frame.

I think (but am not entirely sure) the video is loaded and decompressed on the CPU. The data for the current frame gets sent to the GPU as a texture, which the GPU then displays. I also think (but again, not sure) that some cards can have the compressed video sent to them and the GPU decompresses it. This would make sense, as sending raw HD video down a PCI bus would really slow a system down.

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 30th Mar 2011 02:34
Yeah, that would be about 315 MB per second to send raw video to the card... But this all makes more sense now, thank you! I will get the C64 emulator, why not?!

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 31st Mar 2011 01:55
I do apologize for double posting to bump my thread, but research didn't give me a clear answer and I don't want to start a new thread.
Quote: "The data sent to the GPU is mainly textures and vertices"
Does this apply to 2D graphics? I mean, I can see how this seemingly 3D approach would work for 2D stuff, but it seems more efficient to just draw 2D stuff without vertices and textures, and instead with sprites.

Kevin Picone
Joined: 27th Aug 2002
Location: Australia
Posted: 31st Mar 2011 02:40 Edited at: 31st Mar 2011 02:41
You're talking about two different concepts.

A hardware sprite is a 'block' that's fetched along with the frame buffer during output to the display device; at no point is it ever part of the frame buffer (video data). Hardware sprites are non-destructive, so overlaying a sprite in the middle of the screen doesn't change the pixels behind it.

Hardware-accelerated sprites (as we have today) are drawn onto the frame buffer, so the sprite pixels are merged with the frame buffer pixels, making them destructive.
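A small sketch of the destructive case (bounds checks omitted, and the colour-key transparency is just one common convention):

```cpp
#include <cstdint>
#include <vector>

// Merge a sprite's pixels straight into the frame buffer. The pixels
// underneath are overwritten, which is what makes this "destructive",
// unlike a classic hardware sprite mixed in on the way to the display.
void blitSprite(std::vector<uint32_t>& frame, int fbWidth,
                const uint32_t* sprite, int sw, int sh,
                int x, int y, uint32_t colourKey)
{
    for (int sy = 0; sy < sh; ++sy)
        for (int sx = 0; sx < sw; ++sx)
        {
            uint32_t p = sprite[sy * sw + sx];
            if (p != colourKey)                           // skip transparent pixels
                frame[(y + sy) * fbWidth + (x + sx)] = p; // old pixel is gone for good
        }
}
```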

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 31st Mar 2011 04:56
OK! So 2D stuff isn't rendered with vertices and textures, if I understand correctly?

PrimalBeans
Joined: 14th Oct 2010
Location: The sewer.... hunting alligatiors.
Posted: 31st Mar 2011 12:18 Edited at: 31st Mar 2011 12:21
Here's what I understand. A buffer is (generally speaking) a two-dimensional array of data that represents the screen. The GPU is a CPU of sorts that processes information... nothing else. Basically, raw data is fed to the GPU, written to the video buffer, then transferred to the screen at sync time. 2D/3D doesn't matter; video cards have the ability to process both, and the raw data is what determines what the GPU does... if that makes sense.

(I'm not good at explaining anything... lol)
I'm not sure how to explain it... most modern 2D games use D3D (in DirectX instances anyway) instead of DDraw, because of the focus of more modern graphics cards. So the difference between 2D and 3D to a GPU is minimal.

I'm really trying to sort out the best way to explain this, and yet understand what your question really is.

If you're concerned about what operates the card, rather than what's sent to it, then I can say you're looking at sending bitmasks to addresses... and working with device interrupts...

:/ I'm not sure I understand what's being asked...

lol, never mind, it was answered above

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 31st Mar 2011 17:12 Edited at: 31st Mar 2011 17:12
I think that clarified things well enough. I already knew about the 2D array buffer and whatnot; what confused me is when they told me that many use D3D for 2D, but you told me why they do that.

SO thank you!

PrimalBeans
Joined: 14th Oct 2010
Location: The sewer.... hunting alligatiors.
Posted: 1st Apr 2011 09:15
Yep, the tools for 2D drawing are all in D3D. Bottom line is it's all 2D anyway... you're drawing to a 2D surface, and the screen buffer is only two dimensions as well. The GPU is what does the translation of drawing from a 3D description to a 2D image...
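That translation can be shown with a few lines of arithmetic; this worked example is just the standard perspective divide plus viewport mapping that's applied per vertex:

```cpp
#include <cstdio>

int main()
{
    float x = 1.0f, y = 2.0f, z = 5.0f;  // a point in camera space
    float d = 1.0f;                      // distance to the projection plane

    // Perspective divide: the farther the point, the closer to centre it lands.
    float px = x * d / z;                // 0.2
    float py = y * d / z;                // 0.4

    // Viewport transform onto a 640x480 screen (coordinates in [-1,1]).
    float sx = (px + 1.0f) * 0.5f * 640.0f;
    float sy = (1.0f - (py + 1.0f) * 0.5f) * 480.0f;  // flipped: screen y grows downward
    std::printf("screen pixel: %.0f, %.0f\n", sx, sy); // prints 384, 144
}
```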

Dark Java Dude 64
Community Leader
14
Years of Service
User Offline
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 1st Apr 2011 18:38
You guys keep confusing me!
Quote: "from a 3d description to a 2d image.."
So does that mean that 2D graphics are rendered as textures on a flat plane?

Interplanetary Funk
Joined: 19th Apr 2010
Location: Ipswich, United Kingdom
Posted: 1st Apr 2011 18:43
Not in DirectX. In DX you can draw straight onto the display buffer, or you can do 3D through the 3D transformation pipeline, which basically converts a 3D model into 2D pixels and pastes them onto the display buffer.

With a lot of modern GPUs you load the graphics data into the GRAM (stuff like vertices and bitmaps) and then tell the GPU how to arrange those pieces of data on the screen buffer, which is effectively a 2D array.

That's my understanding of it from the (admittedly little) DX I know.

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 1st Apr 2011 19:46
Ok! They were confusing me with DX...

Diggsey
Joined: 24th Apr 2006
Location: On this web page.
Posted: 1st Apr 2011 21:23 Edited at: 1st Apr 2011 21:24
Most commonly, images are loaded into graphics memory as textures and then drawn to the screen as textured quads, in exactly the same way that 3D models are rendered.

Things that use this method include:
- All parts of DirectX related to drawing. The old DirectDraw API used to do it differently, but that has been deprecated; its replacement, Direct2D, uses quads as I described.
- Windows OpenGL implementations (as it is just a layer over the Direct3D implementation)
- All programs using DirectX or Windows OpenGL are therefore using quads to draw images, including DBPro.

The main reason for this is obvious: if you have to access pixel data from your program, and your program is running on the CPU, then the operation you are doing cannot be hardware accelerated. If all you have to do is send some vertices and refer to image data already loaded into GPU memory, the GPU can do all the hard work for you.
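(A quad is just a four-vertex rectangle, usually submitted as two triangles.) Here's a minimal sketch of drawing an image that way, in OpenGL 1.x immediate mode for brevity, assuming a 2D orthographic projection is already set up; Direct3D does the same thing with vertex buffers:

```cpp
#include <GL/gl.h>

// Draw a loaded texture as a screen-aligned textured quad, using exactly
// the same path the GPU uses for the faces of a 3D model.
void drawImage(GLuint tex, float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(x,     y);
        glTexCoord2f(1, 0); glVertex2f(x + w, y);
        glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
        glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
}
```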

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 1st Apr 2011 22:36
I see! I couldn't find a definition of 'quad', but I can figure out what it is quite well... Thank you!

Interplanetary Funk
Joined: 19th Apr 2010
Location: Ipswich, United Kingdom
Posted: 1st Apr 2011 22:44
@Diggsey
Is that so? I must've misinterpreted it, then. Thanks for the correction.

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 1st Apr 2011 22:51
So I am essentially looking at a flat plane with a website texture on it right now...

Diggsey
Joined: 24th Apr 2006
Location: On this web page.
Posted: 2nd Apr 2011 00:48 Edited at: 2nd Apr 2011 00:48
Not necessarily. Browsers don't tend to explicitly use hardware acceleration, so it just depends on whether the operating system does, and that depends on the graphics drivers, etc. If you're using Vista or Windows 7 you're probably right, as I'm fairly sure they use DirectX to combine all the windows together in the window manager.

Dark Java Dude 64
Community Leader
Joined: 21st Sep 2010
Location: Neither here nor there nor anywhere
Posted: 2nd Apr 2011 03:59
I see! Yep, I'm using 7, so that is (likely) the case!

