
What is the difference between Point and Pixel and why is Pixel unaffected by DPI?


I created a small script that draws some text on an image. The purpose of this code is to illustrate that the size of the drawn text is unaffected by self.image.setDotsPerMeterX(75), self.image.setDotsPerMeterY(75) and self.image.setDevicePixelRatio(1) when using setPixelSize(25), while text drawn with setPointSizeF(25) is affected by the DPI. My question is: what is the difference between points and pixels, and why is a pixel-sized font unaffected by DPI?

import sys

from PyQt5.QtCore import Qt
from PyQt5.QtGui import QFont, QImage, QPainter
from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget

DEFAULT_WIDTH = DEFAULT_HEIGHT = 250


class QPainterWidget(QWidget):
    def __init__(self):
        super().__init__()
        self.image = QImage(DEFAULT_WIDTH, DEFAULT_HEIGHT, QImage.Format_RGB32)
        self.image.fill(Qt.green)
        self.image.setDevicePixelRatio(1)
        self.image.setDotsPerMeterX(75)
        self.image.setDotsPerMeterY(75)
        painter = QPainter(self.image)

        point_size_font = QFont('arial')
        point_size_font.setPointSizeF(25)
        painter.setFont(point_size_font)
        painter.drawText(0, 0, DEFAULT_WIDTH//2, DEFAULT_HEIGHT//2, Qt.AlignCenter, "point font text")

        pixel_size_font = QFont('arial')
        pixel_size_font.setPixelSize(25)
        painter.setFont(pixel_size_font)
        painter.drawText(0, 0, (DEFAULT_WIDTH//2) + 100, (DEFAULT_HEIGHT//2) + 100, Qt.AlignCenter, "pixel font text")

        painter.end()

    def paintEvent(self, event):
        painter = QPainter(self)
        painter.drawImage(0, 0, self.image)
        painter.end()


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.resize(DEFAULT_WIDTH, DEFAULT_HEIGHT)
        self.setCentralWidget(QPainterWidget())


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()

Solution

  • A raster image is normally composed of a grid of pixels[1]; a pixel ("pix"-"el", picture element) is the smallest element of a picture that shows some color, and together the pixels compose the final image in its entirety.

    The "size" of a raster image is always measured as a count of those pixels in each dimension. But that's not a physical size (like meters or inches); it's an absolute unit of quantity.

    For example, a standard (real[2]) HD screen has a physical resolution of 1920x1080 pixels. This means that the screen has a physical grid of LEDs (or whatever) arranged in exactly 1920 columns and 1080 rows.
    But you can have screens with that same resolution that are 2 meters wide, or even 20 centimeters (or less).

    If you create an image with that resolution and draw text using a font set to a pixel size of 1080, you'll see the text filling the whole image vertically; but that text would be 1 meter tall on the first screen, or 10 cm on the second.
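    To put numbers on that, here is a minimal sketch (the helper name is illustrative, not Qt API): the physical size of a pixel-measured element depends only on how large one pixel physically is on the target screen.

```python
# Illustrative sketch (not Qt API): the physical size of a pixel-measured
# element depends on the physical size of one pixel on the target screen.
def physical_height_m(pixel_size, screen_height_px, screen_height_m):
    """Return the physical height (in meters) of `pixel_size` pixels."""
    meters_per_pixel = screen_height_m / screen_height_px
    return pixel_size * meters_per_pixel

# Text 1080 px tall, on two screens that are both 1080 px high but have
# different physical heights (numbers from the example above):
print(physical_height_m(1080, 1080, 1.0))   # ~1.0 (one meter tall)
print(physical_height_m(1080, 1080, 0.10))  # ~0.1 (ten centimeters)
```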

    When you specify the "setDotsPerMeter", you're setting the density of pixels in a physical reference, as the documentation explains:

    Sets the number of pixels that fit [...] in a physical meter.

    If you draw the text setting the font size in points, the result will change depending on the setting of those "dots per meter".
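    The relationship can be sketched with plain arithmetic (assuming the standard typographic convention of 72 points per inch; the function names are illustrative, not Qt API). Note how the 75 dots-per-meter value from the question corresponds to a tiny DPI, which is why the point-sized text changes so drastically:

```python
# Sketch of the usual point-to-pixel conversion (assumption: the standard
# typographic convention of one point = 1/72 inch).
def dots_per_meter_to_dpi(dots_per_meter):
    """Convert a dots-per-meter density to dots per inch (1 in = 0.0254 m)."""
    return dots_per_meter * 0.0254

def point_size_to_pixels(point_size, dpi):
    """A point is 1/72 inch, so pixel height = points * dpi / 72."""
    return point_size * dpi / 72

# The image in the question: 75 dots per meter is only ~1.9 DPI,
# so a 25pt font renders well under one pixel tall.
dpi = dots_per_meter_to_dpi(75)
print(round(dpi, 3))                             # 1.905
print(round(point_size_to_pixels(25, dpi), 3))   # 0.661

# At a common default density of 96 DPI, the same 25pt font is much larger:
print(round(point_size_to_pixels(25, 96), 2))    # 33.33
```

    A pixel size of 25, by contrast, is used verbatim: no DPI enters the computation at all, which is exactly why the dots-per-meter calls have no effect on it.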

    Now, here is where things become confusing, so let's put your code on hold and imagine a "virtual" situation.

    I am an artist and I want to create a project that shows a black square using grids. I have two groups (a and b) of 4 square canvasses:

    1. a 1 meter wide canvas, with grids 10cm wide (100 "pixels");
    2. a 10 centimeter wide canvas, with grids 1cm wide (again, 100 "pixels");
    3. a 1 meter wide canvas, with grids 1cm wide (10000 "pixels");
    4. a 10 centimeter wide canvas, with grids 1mm wide (10000 "pixels" as above);

    Then, I start with the first group, and I will paint in black just 1 "cell" of each grid, as if the cells were "pixels"; this is the same as using font.setPixelSize(1): a 1-pixel square.
    I will get squares that, for each canvas, will be wide:

    1. 10cm;
    2. 1cm;
    3. 1cm;
    4. 1mm;

    Now, the second group. The aim is to always get a 10x10cm square, just like using font.setPointSize(1) (and "point" is the reference). This means that I'll need to paint in black:

    1. just one cell in the grid;
    2. all cells in the grid;
    3. 100 cells (10x10);
    4. all 10000 cells in the grid;

    And this is because each canvas has its own DotsPerMeter, based on its grid size; the "dots per meter" values that produce the expected 10x10cm square are:

    1. 10;
    2. 100;
    3. 100;
    4. 1000;
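    The arithmetic behind the analogy can be sketched like this (variable names are illustrative, not Qt API):

```python
# Sketch of the analogy's arithmetic (illustrative names, not Qt API).
# Each canvas is described by its physical width and its grid ("pixel") size.
canvases = [
    ("1 m canvas, 10 cm grid", 1.00, 0.10),
    ("10 cm canvas, 1 cm grid", 0.10, 0.01),
    ("1 m canvas, 1 cm grid", 1.00, 0.01),
    ("10 cm canvas, 1 mm grid", 0.10, 0.001),
]

TARGET_M = 0.10  # the desired physical square: 10 cm on a side

results = []
for name, width_m, cell_m in canvases:
    dots_per_meter = 1 / cell_m                # density of this canvas
    cells_per_side = TARGET_M * dots_per_meter # cells needed per side
    results.append((name, dots_per_meter, cells_per_side))
    print(f"{name}: {dots_per_meter:.0f} dots/m, "
          f"{cells_per_side:.0f}x{cells_per_side:.0f} cells for a 10 cm square")
```

    The "font size" (the 10 cm target) never changes; only the number of cells needed to realize it does, exactly as a point-sized font keeps its physical size across densities by using more or fewer pixels.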

    Back to our case, you also have to consider three aspects: the density of the image (and the default OS settings), the device pixel ratio of the paint device (the image), which could differ from the DotsPerMeter setting, and the density of the screen on which it's visualized. The concept is similar to our analogy: the resolution of the image doesn't change depending on the physical size of your screen, and neither changes whether you look at that screen from 1 meter away or put it just 10cm in front of your eyes.

    Also, consider that the OS chooses the DPI used to display text (when its size is set in points, not pixels) based on various aspects:

    • default settings (the standard convention is 96 DPI);
    • screen settings;
    • system-based screen settings;
    • text scaling (for modern systems);
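    For instance, here is a hypothetical sketch of how a text-scaling setting affects the logical DPI, and therefore every point-sized font, assuming the common 96 DPI baseline (the function is illustrative, not an OS or Qt API):

```python
# Hypothetical sketch (not an OS or Qt API): a system text-scaling factor
# multiplies the logical DPI, which in turn scales every point-sized font.
BASE_DPI = 96  # the standard convention mentioned above

def logical_dpi(scaling_percent):
    """Logical DPI after applying a system text-scaling percentage."""
    return BASE_DPI * scaling_percent / 100

for pct in (100, 125, 150):
    dpi = logical_dpi(pct)
    px = 25 * dpi / 72  # pixel height of a 25pt font at this scaling
    print(f"{pct}% scaling -> {dpi:.0f} DPI, 25pt -> {px:.1f}px")
```

    A pixel-sized font ignores all of this, which is convenient for absolute drawing but means the text will not follow the user's scaling preferences.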

    Then, setting the density (DotsPerMeter) of an image, as explained in the documentation:

    does not change the scale or aspect ratio of the image when it is rendered on other paint devices.


    So, in conclusion, as the QFont documentation explains, you should always use the point size, unless you explicitly need a pixel reference.

    For images, a pixel size could be fine, since its value is absolute with respect to the actual resolution of the image.
    But for visualization purposes it doesn't work well: you either get a pixelated result, or the text will be too small to read.

    Notes:

      [1] See this answer on GraphicDesign and Pixel on WikiPedia.

      [2] This is valid only for computer screens: TV screens often advertise HD resolutions but actually have smaller physical resolutions and use downscaling, since it's cheaper to have a processor that scales the image than to produce screens with twice the pixels in the same size. Many standard HD televisions actually use 1366x768 panels (1,049,088 pixels against 2,073,600).