According to the spec, these calls should be identical to an invocation
of `glVertex2*`, which sets the W-coordinate to 1 by default.
This fixes the credits sequence rendering of Tux Racer.
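A rough sketch of the equivalence described above (argument names are
made up):

    // Per the spec, glVertex2f(x, y) behaves like:
    glVertex4f(x, y, 0.0f, 1.0f); // Z defaults to 0, W defaults to 1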
This is the vectorized version of `gl_tex_parameter`, which sets the
parameters of a texture's sampler. We currently support only a single
pname, `GL_TEXTURE_BORDER_COLOR`, which sets the border color used when
a texture is sampled outside of a mipmap's range.
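For example, a caller could set the border color through the vectorized
call roughly like this (the values are arbitrary):

    GLfloat border_color[4] = { 1.f, 0.f, 0.f, 1.f }; // opaque red
    glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border_color);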
This adds a virtual base class for GPU devices located in LibGPU.
The OpenGL context now talks only to this device-agnostic interface.
Currently the device interface is simply a copy of the existing SoftGPU
interface to get things going :^)
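A stripped-down sketch of the shape of such an interface (the names and
types below are illustrative, not the actual API surface):

    namespace GPU {

    class Device {
    public:
        virtual ~Device() = default;

        // Illustrative examples of operations a context would forward here.
        virtual void clear_color(FloatVector4 const&) = 0;
        virtual void draw_primitives(/* vertices, transforms, ... */) = 0;
    };

    }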
This introduces a new device-independent base class for Images in LibGPU
that also keeps track of the device from which it was created in order
to prevent assigning images across devices.
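Roughly, the idea looks like this (a sketch; the actual members and
checks may differ):

    namespace GPU {

    class Image {
    public:
        explicit Image(void const* ownership_token)
            : m_ownership_token(ownership_token)
        {
        }
        virtual ~Image() = default;

        // Lets callers verify an image is only used with the device that created it.
        void const* ownership_token() const { return m_ownership_token; }

    private:
        void const* m_ownership_token { nullptr };
    };

    }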
This introduces a new abstraction layer, LibGPU, that serves as the
usermode interface to GPU devices. To get started we just move the
DeviceConfig there and make sure everything still works :^)
We now support generating top-left submatrices from a `Gfx::Matrix`
and we move the normal transformation calculation into
`SoftGPU::Device`. No functional changes.
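For context, the math this enables (a rough sketch rather than the
library's actual API): the normal transformation uses the top-left 3x3
of the model view matrix, conventionally its inverse transpose so that
non-uniform scaling is handled correctly.

    // Sketch: extract the top-left 3x3 of a row-major 4x4 matrix.
    void top_left_3x3(float const (&m)[4][4], float (&out)[3][3])
    {
        for (int row = 0; row < 3; ++row)
            for (int column = 0; column < 3; ++column)
                out[row][column] = m[row][column];
    }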
We were normalizing data read from vertex attribute pointers based on
their usage, but there is nothing written about this behavior in the
spec or in man pages.
When we implement `glVertexAttribPointer`, however, the user will be
able to enable normalization per vertex attribute pointer. This
refactors `VertexAttribPointer` to have a `normalize` field so we can
support that future implementation.
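A sketch of the refactored structure (every field other than
`normalize` is illustrative):

    struct VertexAttribPointer {
        GLint size { 4 };
        GLenum type { GL_FLOAT };
        bool normalize { false }; // honored once glVertexAttribPointer is implemented
        GLsizei stride { 0 };
        void const* pointer { nullptr };
    };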
This merges GLContext and SoftwareGLContext into a single GLContext
class. Since the hardware abstraction is handled via the GPU device
interface, we no longer need the virtual base of GLContext. All
context handling functionality from the old GLContext has been moved
into the new version. All methods in GLContext are now non-virtual and
the class is marked as final.
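The resulting shape, roughly (a sketch, not the full class):

    class GLContext final {
    public:
        void gl_clear(GLbitfield mask); // non-virtual, like every other entry point
        // ...

    private:
        NonnullOwnPtr<GPU::Device> m_device; // member name is illustrative
    };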
We were lacking support for default textures (i.e. calling
`glBindTexture` with a `texture` argument of `0`), which caused our
Quake2 port to render red screens whenever a video was playing. Every
texture unit is now initialized with a default 2D texture.
Additionally, we had this concept of a "currently bound target" on our
texture units, which is not how OpenGL wants us to handle targets.
Calling `glBindTexture` should set the texture for the provided target
only, making it sort of an alias for future operations on the same
target.
Finally, `glDeleteTextures` should not remove the bound texture from
the target in the texture unit, but it should reset it to the default
texture.
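From the client's perspective, the expected behavior is roughly:

    GLuint name = 0;
    glGenTextures(1, &name);
    glBindTexture(GL_TEXTURE_2D, name); // only affects the 2D target
    glDeleteTextures(1, &name);         // the 2D target now refers to the default texture again
    glBindTexture(GL_TEXTURE_2D, 0);    // explicitly selects the default 2D texture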
This matches the rename of RGBA32 to ARGB32. It also makes more sense
when you see it used with 32-bit hexadecimal literals:
Before:

    Color::from_rgba(0xaarrggbb)

After:

    Color::from_argb(0xaarrggbb)
The ARGB32 typedef is used for 32-bit #AARRGGBB quadruplets. As such,
the name RGBA32 was misleading, so let's call it ARGB32 instead.
Since endianness is a thing, let's not encode any assumptions about byte
order in the name of this type. ARGB32 is basically a "machine word"
of color.
GL_LINEAR_MIPMAP_NEAREST means: choose the nearest mipmap level and
interpolate texels linearly within it.
GL_NEAREST_MIPMAP_LINEAR means: choose the two closest mipmap levels,
sample the texels unfiltered in each, and linearly interpolate between
them based on the fractional part of the mipmap level.
Previously we had this backwards.
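A sketch of the distinction, with made-up helpers (`sample_bilinear`,
`sample_nearest`, `mix`) and a `mipmaps` array of texture levels:

    Color sample(float level, float u, float v, bool linear_mipmap_nearest)
    {
        if (linear_mipmap_nearest) {
            // GL_LINEAR_MIPMAP_NEAREST: one (nearest) level, filtered linearly.
            return sample_bilinear(mipmaps[lroundf(level)], u, v);
        }

        // GL_NEAREST_MIPMAP_LINEAR: two closest levels, nearest texel in each,
        // blended by the fractional part of the level.
        int base_level = static_cast<int>(level);
        float fraction = level - static_cast<float>(base_level);
        return mix(sample_nearest(mipmaps[base_level], u, v),
                   sample_nearest(mipmaps[base_level + 1], u, v),
                   fraction);
    }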
Our API still specifies it as a double, but internally we communicate a
float to the rasterizer. Additionally, the value is now clamped to
0..1 as described in the spec.
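A minimal sketch of the change, assuming the value in question is the
depth clear value (the names below follow that assumption):

    void GLContext::gl_clear_depth(GLdouble depth)
    {
        // Keep the double-based public API, but clamp and narrow the value
        // before handing it to the rasterizer.
        m_clear_depth = clamp(static_cast<float>(depth), 0.f, 1.f);
    }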
Our implementation keeps the top-most item on the matrix stacks in a
member variable, so we can always use that instead of considering the
actual stack.
Additionally, the current matrix mode should not influence retrieving
the projection or model view matrix.
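In practice that means something like this when servicing a matrix
query (`write_matrix` and the member names are illustrative):

    // The current matrix mode is deliberately not consulted here.
    switch (pname) {
    case GL_MODELVIEW_MATRIX:
        write_matrix(params, m_model_view_matrix);
        break;
    case GL_PROJECTION_MATRIX:
        write_matrix(params, m_projection_matrix);
        break;
    }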