Copying over every texel (4x`f32x4`) for every texture unit is
relatively expensive. By checking if we even need to remember these
texel values, we reduce the time spent in `rasterize_triangle` by
around 2% as measured in Quake III.
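A minimal sketch of the idea (the predicate and variable names are
hypothetical):

    // Illustrative only: skip the 4x f32x4 copy when nothing downstream
    // (e.g. a combine mode referencing these texels) will read it back.
    if (texture_unit.texel_values_required()) // hypothetical predicate
        remembered_texels[unit_index] = sampled_texel_colors;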
Previously, we would precalculate "alpha blend factors" on every
configuration update and then calculate the source and destination
blending factors in one go using all these factors. The idea here was
probably that we would get better performance by avoiding branching.
However, measuring blending performance in Quake III shows that this
simpler version, which only calculates the required factors, reduces
the CPU time spent in `rasterize_triangle` by 3%.
As a bonus, `GL_SRC_ALPHA_SATURATE` is now also implemented.
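A sketch of the on-demand calculation (enum and function names are
illustrative, not the actual LibSoftGPU code); per the OpenGL spec,
`GL_SRC_ALPHA_SATURATE` yields min(As, 1 - Ad) for the color channels
and 1 for alpha:

    #include <algorithm>

    enum class BlendFactor { One, SrcAlpha, OneMinusDstAlpha, SrcAlphaSaturate };

    // Calculate just the factor the blend equation needs, instead of
    // precalculating every possible factor up front.
    float blend_factor(BlendFactor factor, float source_alpha, float destination_alpha)
    {
        switch (factor) {
        case BlendFactor::One:
            return 1.f;
        case BlendFactor::SrcAlpha:
            return source_alpha;
        case BlendFactor::OneMinusDstAlpha:
            return 1.f - destination_alpha;
        case BlendFactor::SrcAlphaSaturate:
            // min(As, 1 - Ad) for RGB; the alpha factor is simply 1.
            return std::min(source_alpha, 1.f - destination_alpha);
        }
        return 1.f;
    }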
Each texture unit now has its own texture transformation matrix stack.
Introduce a new texture unit configuration that is synced when changed.
Because we're no longer passing a silly `Vector` when drawing each
primitive, this results in a slight frames-per-second improvement :^)
Looking at how Khronos defines layers:
https://www.khronos.org/opengl/wiki/Array_Texture
We have both 3D textures and layers of 2D textures, both of which can
be encoded in our existing `Typed3DBuffer` as depth. Since the GPU API
already supports depth, remove the concept of layers everywhere.
Also pass in `Texture2D::LOG2_MAX_TEXTURE_SIZE` as the maximum number
of mipmap levels, so we do not allocate 999 levels on each Image
instantiation.
This makes it consistent with our other `blit_from_color_buffer` and
paves the way for a third method that will be introduced in one of the
next commits.
In OpenGL this is called the (base) internal format: an expectation
expressed by the client about the minimum texel storage format the GPU
should support for a texture.
Since we store everything as RGBA in a `FloatVector4`, the only thing
we do in this patch is remember the expected internal format, and when
we write new texels, we fix the alpha channel's value to 1 for the two
formats that require it.
`PixelConverter` has learned how to transform pixels during transfer to
support this.
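A sketch of that texel write path (the helper is hypothetical, and
GL_RGB / GL_LUMINANCE merely serve as examples of alpha-less formats):

    // Illustrative only: when the expected internal format carries no
    // alpha component, force the stored alpha value to 1.
    if (internal_format_has_no_alpha(m_internal_format)) // hypothetical
        texel.set_w(1.f); // w holds alpha in our RGBA FloatVector4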
A GPU (driver) is now responsible for reading and writing pixels from
and to user data. The client (LibGL) is responsible for specifying how
that user data must be interpreted when read and laid out when written.
This allows us to centralize all pixel format conversion in one class,
`LibSoftGPU::PixelConverter`. For both the input and output image, it
takes a specification containing the image dimensions, the pixel type
and the selection (basically a clipping rect), and converts the pixels
from the input image to the output image.
Effectively this means we now support almost all OpenGL 1.5 formats,
and all custom logic has disappeared from:
- `glDrawPixels`
- `glReadPixels`
- `glTexImage2D`
- `glTexSubImage2D`
The new logic is still unoptimized, but on my machine I experienced no
noticeable slowdown. :^)
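For illustration, a rough sketch of how such a specification-driven
conversion could fit together (type layouts and member names are
assumptions, not the exact API):

    // Illustrative shapes only; the real types live in LibSoftGPU.
    struct Rect { int x, y, width, height; };
    enum class PixelType { RGBA8888, BGRA8888 /* ... */ };

    struct ImageSpecification {
        int width { 0 };
        int height { 0 };
        PixelType pixel_type { PixelType::RGBA8888 };
        Rect selection {}; // basically a clipping rect into the image
    };

    // The converter decodes pixels from the input selection and encodes
    // them into the output selection.
    PixelConverter converter { input_specification, output_specification };
    converter.convert(input_data, output_data);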
Also skip the test for the `::Always` alpha test function in the hot
loop. This test function is very unlikely to be set, so leave that up
to `::test_alpha()`.
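A sketch of the hot loop after this change (option and function names
illustrative):

    // Illustrative only: the hot loop no longer compares against
    // `::Always`; test_alpha() handles that unlikely case by passing
    // every fragment.
    if (options.enable_alpha_test)
        test_alpha(quad);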
This commit implements `glClipPlane` and its supporting calls, backed
by new support for user-defined clip planes in the software GPU
clipper.
This fixes some visual bugs seen in the Quake III port, in which mirrors
would only reflect correctly from close distances.
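The clipper's test itself is simple: a point is kept when the 4D dot
product of its eye-space coordinates with the plane coefficients is
non-negative. A minimal sketch (function name illustrative):

    #include <LibGfx/Vector4.h>

    // Points on the negative side of a user-defined clip plane are
    // clipped away.
    static bool point_within_clip_plane(FloatVector4 const& eye_coordinates, FloatVector4 const& clip_plane)
    {
        return eye_coordinates.dot(clip_plane) >= 0.f;
    }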
Implement (anti)aliased point drawing and anti-aliased line drawing.
Supported through LibGL's `GL_POINTS`, `GL_LINES`, `GL_LINE_LOOP` and
`GL_LINE_STRIP`.
In order to support this, `LibSoftGPU`'s rasterization logic was
reworked. Now, any primitive can be drawn by invoking `rasterize()`
which takes care of the quad loop and fragment testing logic. Three
callbacks need to be passed:
* `set_coverage_mask`: the primitive needs to provide initial coverage
mask information so fragments can be discarded early.
* `set_quad_depth`: fragments survived stencil testing, so depth values
need to be set so depth testing can take place.
* `set_quad_attributes`: fragments survived depth testing, so fragment
shading is going to take place. All attributes like color, tex coords
and fog depth need to be set so alpha testing and, eventually,
fragment rasterization can take place.
As of this commit, there are four instantiations of this function:
* Triangle rasterization
* Points - aliased
* Points - anti-aliased
* Lines - anti-aliased
In order to standardize vertex processing for all primitive types,
things like vertex transformation, lighting and tex coord generation
now take place before clipping.
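A sketch of a call site (callback bodies elided; `render_bounds` is an
assumed parameter):

    // Illustrative only: each primitive type supplies the three
    // callbacks and rasterize() runs the shared quad loop and fragment
    // testing logic.
    rasterize(
        render_bounds,
        [&](auto& quad) {
            // set_coverage_mask: which fragments does the primitive cover?
        },
        [&](auto& quad) {
            // set_quad_depth: fill in depth values for depth testing
        },
        [&](auto& quad) {
            // set_quad_attributes: color, tex coords and fog depth
        });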
Our move to floating point precision has eradicated the pixel artifacts
in Quake 1, but introduced new and not so subtle rendering glitches in
games like Tux Racer. This commit changes three things to get the best
of both worlds:
1. Subpixel logic based on `i32` types was reintroduced, with the
number of subpixel bits set to 6. This reintroduces the artifacts in
Quake 1 but fixes the rendering of Tux Racer.
2. Before triangle culling, subpixel coordinates are calculated and
stored in `Triangle`. These coordinates are rounded, which fixes the
Quake 1 artifacts. Tux Racer is unaffected.
3. The triangle area (actually parallelogram area) is also stored in
`Triangle` so we don't need to recalculate it later on. In our
previous subpixel code, there was a subtle disconnect between the
two calculations (one with and one without subpixel precision) which
resulted in triangles incorrectly being culled. This fixes some
remaining Quake 1 artifacts.
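A sketch of the fixed-point conversion (names illustrative); with 6
subpixel bits, coordinates snap to 1/64th of a pixel, and rounding
happens once, before culling:

    #include <math.h>

    static constexpr int subpixel_bits = 6;

    // Illustrative only: round a floating-point window coordinate to a
    // fixed-point coordinate with 6 bits of subpixel precision.
    static int to_subpixel_coordinate(float window_coordinate)
    {
        return static_cast<int>(roundf(window_coordinate * (1 << subpixel_bits)));
    }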
This adds a virtual base class for GPU devices located in LibGPU.
The OpenGL context now only talks to this device agnostic interface.
Currently the device interface is simply a copy of the existing SoftGPU
interface to get things going :^)
This introduces a new device independent base class for Images in LibGPU
that also keeps track of the device from which it was created in order
to prevent assigning images across devices.
This introduces a new abstraction layer, LibGPU, that serves as the
usermode interface to GPU devices. To get started we just move the
DeviceConfig there and make sure everything still works :^)
We now support generating top-left submatrices from a `Gfx::Matrix`
and we move the normal transformation calculation into
`SoftGPU::Device`. No functional changes.
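For reference, normals transform by the transposed inverse of the
model-view matrix's top-left 3x3 submatrix; a sketch with assumed
method names:

    // Illustrative only: derive the normal transformation once and
    // apply it to each vertex normal.
    auto normal_transformation = model_view_transformation.submatrix_from_topleft<3>().transpose().inverse();
    vertex.normal = normal_transformation * vertex.normal;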
Currently, LibSoftGPU is still OpenGL-minded in that it uses a
coordinate system with the origin of `(0, 0)` at the lower-left of
textures, buffers and window coordinates. Because we are blitting to a
`Gfx::Bitmap` that has the origin at the top-left, we need to flip the
Y-coordinates somewhere in the rasterization logic.
We used to do this during conversion of NDC-coordinates to window
coordinates. This resulted in some incorrect behavior when
rasterization did not pass through the vertex transformation logic,
e.g. when calling `glDrawPixels`.
This changes the coordinate system to OpenGL's throughout, flipping
only at the very end by blitting the final color buffer upside down to
the target bitmap. This fixes upside-down images when drawing directly
to the depth buffer.
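A sketch of that final flip (the row accessor for the color buffer is
hypothetical):

    #include <string.h>

    // Illustrative only: reverse the rows while blitting the
    // bottom-left-origin color buffer into the top-left-origin
    // Gfx::Bitmap.
    for (int y = 0; y < height; ++y)
        memcpy(bitmap.scanline(height - 1 - y), color_buffer_row(y), width * sizeof(u32));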
Between the OpenGL client and server, a lot of data type and color
conversion needs to happen. We are performing these conversions both in
`LibSoftGPU` and `LibGL`, which is not ideal. Additionally, some
concepts like the color, depth and stencil buffers should share their
logic but have separate implementations.
This is the first step towards generalizing our `LibSoftGPU` frame
buffer: a generalized `Typed3DBuffer` is introduced for arbitrary 3D
value storage and retrieval, and `Typed2DBuffer` wraps around it to
provide an easy-to-use 2D pixel buffer. The color, depth and stencil
buffers are replaced by `Typed2DBuffer` and are now managed by the new
`FrameBuffer` class.
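A simplified sketch of the layering (class shapes and method names are
assumptions):

    // Illustrative only: a 2D pixel buffer as a thin wrapper around a
    // 3D buffer with a depth of 1; FrameBuffer groups a color, depth
    // and stencil buffer built this way.
    template<typename T>
    class Typed2DBuffer {
    public:
        T scalar(int x, int y) const { return m_buffer.scalar(x, y, 0); }
        void set_scalar(int x, int y, T value) { m_buffer.set_scalar(x, y, 0, value); }

    private:
        Typed3DBuffer<T> m_buffer;
    };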
The `Image` class now uses multiple `Typed3DBuffer`s for layers and
mipmap levels. Additionally, the textures are now always stored as
BGRA8888, only converting between formats when reading or writing
pixels.
Ideally this refactor should have no functional changes, but some
graphical glitches in Grim Fandango seem to be fixed and most OpenGL
ports get an FPS boost on my machine. :^)
This function was added as a FIXME but was then arbitrarily invoked in
the rest of `Device`. We are better off removing this FIXME for now and
reevaluating the introduction of multithreading later on, so the code is not
littered with useless empty function calls.
This implements an 8-bit front stencil buffer. Stencil operations are
SIMD optimized. LibGL changes include:
* New `glStencilMask` and `glStencilMaskSeparate` functions
* New context parameter `GL_STENCIL_CLEAR_VALUE`
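For reference, the write mask determines which stencil bits an
operation may change; a scalar sketch (the actual implementation
processes four fragments at a time with SIMD):

    #include <AK/Types.h>

    // Keep the masked-off bits of the current value, replace the rest.
    static u8 apply_stencil_write_mask(u8 current_value, u8 new_value, u8 write_mask)
    {
        return (current_value & ~write_mask) | (new_value & write_mask);
    }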
Implements support for `glRasterPos` and updating the raster position's
window coordinates through `glBitmap`. The input for `glRasterPos` is
an object position that needs to go through the same vertex
transformations as our regular triangles.
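A sketch of those steps (helper and member names illustrative):

    // Illustrative only: the object position travels through the same
    // model-view, projection, perspective divide and viewport mapping
    // steps as a regular vertex.
    auto clip_coordinates = projection_transformation * model_view_transformation * object_position;
    auto ndc_coordinates = clip_coordinates / clip_coordinates.w();
    raster_position.window_coordinates = ndc_to_window_coordinates(ndc_coordinates); // hypothetical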
When `GL_COLOR_MATERIAL` is enabled, specific material parameters can
be overwritten by the current per-vertex color during the lighting
calculations. Which parameters are affected is controlled by
`glColorMaterial`.
Also move the lighting calculations _before_ clipping, because the spec
says so. As a result, we interpolate the resulting vertex color instead
of the input color.
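A sketch of the parameter substitution (enum and member names
illustrative):

    // Illustrative only: substitute the tracked material parameters
    // with the per-vertex color before evaluating the lighting equation.
    switch (color_material_mode) {
    case ColorMaterialMode::Ambient:
        material.ambient = vertex.color;
        break;
    case ColorMaterialMode::Diffuse:
        material.diffuse = vertex.color;
        break;
    case ColorMaterialMode::AmbientAndDiffuse:
        material.ambient = vertex.color;
        material.diffuse = vertex.color;
        break;
    // GL_SPECULAR and GL_EMISSION cases elided
    }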
Until now, this was only set in the OpenGL context, as the previous
architecture did all of the transformation in LibGL before passing the
transformed triangles on to the rasterizer. As this has now changed, and
we require the vertex data to be in eye-space before we can apply
lighting, we need to pass this flag along as well via the GPU options.
Most of the T&L stuff is, like on an actual GPU, now done inside of
LibSoftGPU. As such, it no longer makes sense to have specific values
like the scene ambient color inside of LibGL as part of the GL context.
These have now been moved into LibSoftGPU and use the same pattern as
the render options to set/get.
These two functions are no longer stubs: they now set the
corresponding material data member based on the value passed into the
`pname` argument.
Co-authored-by: Stephan Unverwerth <s.unverwerth@serenityos.org>