just a quick question that I haven't been able to find any details about: I am using the Python win32api to capture a screenshot of my computer. I want to roll my own image compression algorithm (for fun, I don't expect professional-level results), but I am struggling to understand the pixel data I am getting from the bitmap itself. Here is the relevant code:
import win32api, win32con, win32gui, win32ui

# dimensions and origin of the full virtual screen (all monitors)
width = win32api.GetSystemMetrics(win32con.SM_CXVIRTUALSCREEN)
height = win32api.GetSystemMetrics(win32con.SM_CYVIRTUALSCREEN)
left = win32api.GetSystemMetrics(win32con.SM_XVIRTUALSCREEN)
top = win32api.GetSystemMetrics(win32con.SM_YVIRTUALSCREEN)

hwin = win32gui.GetDesktopWindow()
hwindc = win32gui.GetWindowDC(hwin)
srcdc = win32ui.CreateDCFromHandle(hwindc)
memdc = srcdc.CreateCompatibleDC()
bmp = win32ui.CreateBitmap()
bmp.CreateCompatibleBitmap(srcdc, width, height)
memdc.SelectObject(bmp)
memdc.BitBlt((0, 0), (width, height), srcdc, (left, top), win32con.SRCCOPY)
bmpinfo = bmp.GetInfo()
bmpInt = bmp.GetBitmapBits(False)
GetBitmapBits(False) returns an integer array / tuple. But I can't find any information about how bmpInt relates to pixel data. The output looks like this:
123,1,-1,-13,-55,2,23,123 ...
How do these correspond to the RGB values of each pixel? Does every group of 3 ints make up one pixel? Or is there an alpha channel? Also, why are there negative numbers? For reference, here is the documentation: https://mhammond.github.io/pywin32/PyCBitmap__GetBitmapBits_meth.html. There's no explanation there...
Ok, answering my own question, just in case anyone else ever has the same problem. GetBitmapBits(False) returns one integer per byte of pixel data. For a compatible bitmap on a 32-bit display, each pixel is four bytes in B, G, R, X order (the fourth byte is the unused alpha/padding channel), not R, G, B. So:

14, 16, 17, -1

represents B:14, G:16, R:17, and -1 for the alpha/padding byte. The negative numbers appear because each byte comes back as a signed 8-bit value (two's complement): a negative value is equivalent to that value plus 256, so -1 is 255 and -112 would be 256 - 112, or 144. Note that GetBitmapBits(True) instead returns the raw data as a byte string, where the values are already unsigned.
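To make that concrete, here is a minimal sketch of decoding the tuple into (R, G, B) pixels. It assumes a 32-bit compatible bitmap (4 bytes per pixel, BGRX order); the sample data at the bottom is made up for illustration, and the helper names are my own:

```python
def signed_to_unsigned(v):
    # Convert a signed 8-bit value (two's complement) to 0..255.
    # Masking with 0xFF is the same as adding 256 to negative values.
    return v & 0xFF

def to_rgb_pixels(bits):
    # bits: flat sequence of signed ints from GetBitmapBits(False),
    # 4 values per pixel, in B, G, R, X order (X = unused alpha/padding).
    pixels = []
    for i in range(0, len(bits), 4):
        b, g, r, _x = (signed_to_unsigned(v) for v in bits[i:i + 4])
        pixels.append((r, g, b))
    return pixels

# Two pixels of fake sample data: note -1 -> 255 and -112 -> 144.
sample = [14, 16, 17, -1, -112, 0, 127, -1]
print(to_rgb_pixels(sample))  # [(17, 16, 14), (127, 0, 144)]
```

In real code you'd check bmpinfo['bmBitsPixel'] (and bmpinfo['bmWidthBytes'] for row padding) rather than assuming 4 bytes per pixel.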