Previously, calling `.right()` on a `Gfx::Rect` would return the last
column's coordinate still inside the rectangle, i.e. `left + width - 1`.
This is called 'endpoint inclusive' and does not make a lot of sense for
`Gfx::Rect<float>` where a rectangle of width 5 at position (0, 0) would
return 4 as its right side. This same problem exists for `.bottom()`.
This changes `Gfx::Rect` to be endpoint exclusive, which gives us the
nice property that `width = right - left` and `height = bottom - top`.
It enables us to treat `Gfx::Rect<int>` and `Gfx::Rect<float>` exactly
the same.
All users of `Gfx::Rect` have been updated accordingly.
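A minimal sketch of the invariant this buys us (illustrative code, not
the actual class):

// Endpoint-exclusive accessors, modeled on Gfx::Rect:
template<typename T>
struct Rect {
    T x, y, width, height;

    T left() const { return x; }
    T right() const { return x + width; }   // first column outside the rect
    T top() const { return y; }
    T bottom() const { return y + height; } // first row outside the rect
};
// Now width == right() - left() and height == bottom() - top() hold
// for Rect<int> and Rect<float> alike.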
Copying over every texel (4x`f32x4`) for every texture unit is
relatively expensive. By checking if we even need to remember these
texel values, we reduce the time spent in `rasterize_triangle` by
around 2% as measured in Quake III.
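In sketch form (the flag and field names are hypothetical), the fix is
a simple guard around the copy:

// Only pay for the 4x f32x4 copy when a later stage will read it.
if (texture_unit_needs_texel_storage[unit]) // hypothetical flag
    quad.texels[unit] = sampled_texel;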
Previously, we would precalculate "alpha blend factors" on every
configuration update and then calculate the source and destination
blending factors in one go using all these factors. The idea here was
probably that we would get better performance by avoiding branching.
However, by measuring blending performance in Quake III, it seems that
this simpler version that only calculates the required factors reduces
the CPU time spent in `rasterize_triangle` by 3%.
As a bonus, `GL_SRC_ALPHA_SATURATE` is now also implemented.
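A sketch of the simpler approach (hypothetical function; GL constants
and factor formulas as in the OpenGL spec), computing only the factor
that is actually configured:

// Compute the source blending factor on demand instead of
// precalculating every possible factor up front.
FloatVector4 source_blend_factor(GLenum mode, FloatVector4 const& src, FloatVector4 const& dst)
{
    switch (mode) {
    case GL_ONE:
        return { 1.f, 1.f, 1.f, 1.f };
    case GL_SRC_ALPHA:
        return { src.w(), src.w(), src.w(), src.w() };
    case GL_ONE_MINUS_SRC_ALPHA:
        return { 1.f - src.w(), 1.f - src.w(), 1.f - src.w(), 1.f - src.w() };
    case GL_SRC_ALPHA_SATURATE: {
        // f = min(source alpha, 1 - destination alpha), applied to RGB only.
        auto f = min(src.w(), 1.f - dst.w());
        return { f, f, f, 1.f };
    }
    // ... remaining factors elided ...
    default:
        VERIFY_NOT_REACHED();
    }
}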
We changed elapsed() to return i64 instead of int, since that's what
AK::Time::to_milliseconds() returns; the old int return type caused a
bunch of implicit lossy conversions in callers. Clean those up with a
mix of type changes and casts.
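For example (sketch; the caller names are hypothetical):

// Before: the i64 from AK::Time::to_milliseconds() was implicitly
// truncated to int.
// int elapsed() const;

// After: the return type matches the underlying computation.
i64 elapsed() const;

// Callers either adopt i64 or narrow explicitly where that is fine:
i64 elapsed_ms = timer.elapsed();
int frame_delta = static_cast<int>(timer.elapsed());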
Size and format information are the same for every implementation and do
not need to be virtual. This removes the need to reimplement them for
each driver.
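Sketched out (class and member names are hypothetical), the accessors
become plain non-virtual functions on the base class:

class Image {
public:
    IntSize size() const { return m_size; }         // shared, non-virtual
    PixelFormat format() const { return m_format; } // shared, non-virtual

    virtual void write_texels(/* ... */) = 0;       // still per-driver

protected:
    IntSize m_size;
    PixelFormat m_format;
};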
Same as with inputs, we define outputs as a generic array of floats.
This can later be expanded to accommodate multiple render targets or
vertex attributes in the case of a vertex shader.
Previously we would store vertex color and texture coordinates in
separate fields in PixelQuad. To make them accessible from shaders we
need to store them as a completely generic array of floats.
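A sketch of the generic layout (array size and offsets illustrative):

struct PixelQuad {
    // One flat array of inputs instead of dedicated color / texture
    // coordinate fields; each f32x4 holds one lane per pixel in the quad.
    static constexpr size_t NUM_INPUTS = 32; // illustrative size
    f32x4 inputs[NUM_INPUTS];
};
// Fixed-function code and shaders agree on offsets, e.g.:
// inputs[0..3] = vertex color (RGBA),
// inputs[4..7] = texture coordinates for unit 0 (STRQ), ...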
This will make it easier to support both string types at the same time
while we convert code, as well as to track down remaining uses.
One big exception is Value::to_string() in LibJS, where the name is
dictated by the ToString AO.
We have a new, improved string type coming up in AK (OOM aware, no null
state), and while it's going to use UTF-8, the name UTF8String is a
mouthful - so let's free up the String name by renaming the existing
class.
Making the old one have an annoying name will hopefully also help with
quick adoption :^)
Otherwise, we end up propagating those dependencies into targets that
link against that library, which creates unnecessary link-time
dependencies.
Also included are changes to re-add now-missing dependencies to tools
that actually need them.
We were invoking `frac_int_range` twice to get the `alpha` and `beta`
values to interpolate between 4 texels, but these call into
`floor_int_range` again. Let's not repeat the work.
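Roughly (sketch based on the helper names above):

// Before: two frac_int_range() calls, each flooring internally.
// auto alpha = frac_int_range(u);
// auto beta = frac_int_range(v);

// After: floor once and reuse the result for the fractional parts.
auto u_floor = floor_int_range(u);
auto v_floor = floor_int_range(v);
auto alpha = u - u_floor;
auto beta = v - v_floor;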
This remained undetected for a long time as HeaderCheck is disabled by
default. This commit makes the following file compile again:
// file: compile_me.cpp
#include <LibDNS/Question.h>
// That's it, this was enough to cause a compilation error.
Likewise for most other files touched by this commit.
On every texel access, some floating point instructions involved in
copying 4 floats popped up. Let `Image::texel() const` return a
`FloatVector4 const&` to prevent these operations.
This results in a ~7% FPS increase in GLQuake on my machine.
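In sketch form (parameter list illustrative; the internal accessor is
hypothetical):

// Before: returning by value copies 4 floats on every access.
// FloatVector4 texel(unsigned layer, unsigned level, int x, int y, int z) const;

// After: hand out a reference into the image's own storage instead.
FloatVector4 const& texel(unsigned layer, unsigned level, int x, int y, int z) const
{
    return *texel_pointer(layer, level, x, y, z); // hypothetical accessor
}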
Looking at `Tubes` before and after this change, comparing the original
loop to the one using `memcpy`, including the time for `memcpy` itself,
resulted in ~15% fewer samples in profiles on my machine.
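The pattern, sketched:

// Before: per-element copy loop.
// for (size_t i = 0; i < pixel_count; ++i)
//     dst[i] = src[i];

// After: one bulk copy; fine because the elements are trivially copyable.
memcpy(dst, src, pixel_count * sizeof(*src));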
With 6 bits of precision, the maximum triangle coordinate we can
handle is sqrt(2^31 / (1 << 6)^2) = ~724. Rendering to a target of
800x600 or higher quickly becomes a mess because of integer overflow.
By reducing the subpixel precision to 4 bits, we support coordinates up
to ~2896, which means that we can (try to) render to target sizes like
2560x1440.
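Where the squared factor comes from, sketched (illustrative, not code
from this commit):

// Window coordinates are converted to fixed-point with subpixel_bits
// of fractional precision...
constexpr int subpixel_bits = 4;
int x_fixed = static_cast<int>(x * (1 << subpixel_bits));
// ...and edge functions multiply two such values, so intermediate
// results scale with (1 << subpixel_bits)^2. Keeping them inside a
// signed 32-bit int limits coordinates to sqrt(2^31) / (1 << subpixel_bits):
// ~724 at 6 bits versus ~2896 at 4 bits.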
This fixes the main menu backdrop for the Half-Life port. It also
introduces more white pixel artifacts in Quake's water / lava
rendering, but this is a level geometry visualization bug (see
`r_novis`).
Up until now, we have only dealt with games that pass Q = 1 for their
texture coordinates. PrBoom+, however, relies on proper homogeneous
texture coordinates for its relatively complex sky rendering, which
means that we should perform this per-fragment division.
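Per fragment, that division looks roughly like this (field names are
hypothetical):

// Homogeneous texture coordinates (S, T, R, Q): project by dividing
// through by Q instead of assuming Q == 1.
auto const& coords = quad.texture_coordinates[unit]; // hypothetical field
auto q = coords.w();
FloatVector4 projected { coords.x() / q, coords.y() / q, coords.z() / q, 1.f };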
Each texture unit now has its own texture transformation matrix stack.
Introduce a new texture unit configuration that is synced when changed.
Because we're no longer passing a silly `Vector` when drawing each
primitive, this results in slightly more frames per second :^)
Looking at how Khronos defines layers:
https://www.khronos.org/opengl/wiki/Array_Texture
Both 3D textures and layers of 2D textures can be encoded in our
existing `Typed3DBuffer` as depth. Since the GPU API already supports
depth, remove the concept of layers everywhere.
Also pass in `Texture2D::LOG2_MAX_TEXTURE_SIZE` as the maximum number
of mipmap levels, so we do not allocate 999 levels on each Image
instantiation.
This makes it consistent with our other `blit_from_color_buffer` and
paves the way for a third method that will be introduced in one of the
next commits.
`GL_COMBINE` is basically a fixed-function calculator that performs
simple arithmetic on configurable fragment sources. This patch
implements a
number of texture env parameters with support for the RGBA internal
format.
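As an illustration of the kind of arithmetic involved (sketch; the
source/operand plumbing is elided, formulas as in the OpenGL spec):

// A few GL_COMBINE RGB functions; arg0/arg1/arg2 have already been
// fetched from the configured sources and operands.
FloatVector4 combine_rgb(GLenum function, FloatVector4 const& arg0,
    FloatVector4 const& arg1, FloatVector4 const& arg2)
{
    switch (function) {
    case GL_REPLACE:
        return arg0;
    case GL_MODULATE:
        return arg0 * arg1;
    case GL_ADD:
        return arg0 + arg1;
    case GL_INTERPOLATE:
        return arg0 * arg2 + arg1 * (FloatVector4 { 1.f, 1.f, 1.f, 1.f } - arg2);
    // ... remaining functions elided ...
    default:
        VERIFY_NOT_REACHED();
    }
}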
OpenGL allows GPUs to approximate a triangle's maximum depth slope,
which avoids a number of computationally expensive instructions. On my
machine, this gives me +6% FPS in Quake III.
We are able to reuse `render_bounds` here since it is the containing
rect of the (X, Y) window coordinates of the triangle, thus its width
and height are the maximum delta X and delta Y, respectively.
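One plausible reading of that approximation, sketched (names are
hypothetical; the exact formula may differ):

// Approximate max(|dz/dx|, |dz/dy|) using the triangle's bounding
// rect: its width and height are the maximum delta X and delta Y.
auto delta_z = max(max(v0.z(), v1.z()), v2.z()) - min(min(v0.z(), v1.z()), v2.z());
auto depth_max_slope = max(delta_z / render_bounds.width(), delta_z / render_bounds.height());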
The Quake 3 port makes use of this extension to determine a more
efficient multitexturing strategy. Since LibSoftGPU supports it, let's
report the extension in LibGL. :^)
In OpenGL, this is called the (base) internal format: an expectation
expressed by the client about the minimum texel storage format the GPU
should support for textures.
Since we store everything as RGBA in a `FloatVector4`, the only thing
we do in this patch is remember the expected internal format, and when
we write new texels, we fix the alpha channel's value at 1 for the two
formats that require it.
`PixelConverter` has learned how to transform pixels during transfer to
support this.
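Sketched (the surrounding code is hypothetical; the two formats shown
are the alpha-less ones from the OpenGL spec):

// When storing a texel, alpha-less internal formats pin alpha to 1.
void write_texel(FloatVector4 color)
{
    if (internal_format == GL_RGB || internal_format == GL_LUMINANCE) // illustrative
        color.set_w(1.f);
    store(color); // hypothetical storage helper
}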
A GPU (driver) is now responsible for reading and writing pixels from
and to user data. The client (LibGL) is responsible for specifying how
that user data must be interpreted or written.
This allows us to centralize all pixel format conversion in one class,
`LibSoftGPU::PixelConverter`. For both the input and output image, it
takes a specification containing the image dimensions, the pixel type
and the selection (basically a clipping rect), and converts the pixels
from the input image to the output image.
Effectively this means we now support almost all OpenGL 1.5 formats,
and all custom logic has disappeared from:
- `glDrawPixels`
- `glReadPixels`
- `glTexImage2D`
- `glTexSubImage2D`
The new logic is still unoptimized, but on my machine I experienced no
noticeable slowdown. :^)
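A sketch of how a caller might drive such a conversion (struct and
member names modeled on the description above, not the exact API):

// Describe both sides, then convert the selected rect.
PixelConverter::ImageSpecification input_spec {
    .dimensions = { 640, 480 },
    .pixel_type = { GL_RGBA, GL_UNSIGNED_BYTE }, // format and component type
    .selection = { 0, 0, 640, 480 },             // clipping rect
};
PixelConverter::ImageSpecification output_spec {
    .dimensions = { 640, 480 },
    .pixel_type = { GL_BGRA, GL_UNSIGNED_BYTE },
    .selection = { 0, 0, 640, 480 },
};
PixelConverter converter { input_spec, output_spec };
TRY(converter.convert(input_data, output_data)); // conversion may fail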