The current implementation is not entirely correct yet. Two classes have
been added:
- TypedArrayConstructor, which the various typed array constructors now
inherit from. Calling or constructing this class directly (from JS,
that is) is not possible; we might want to move this abstract class
functionality to NativeFunction at a later point.
- TypedArrayPrototype, which the various typed array prototypes now have
as their own prototype. This is where most of the shared functionality
will live.
Relevant parts from the spec:
22.2.1 The %TypedArray% Intrinsic Object
The %TypedArray% intrinsic object:
- is a constructor function object that all of the TypedArray
constructor objects inherit from.
- along with its corresponding prototype object, provides common
properties that are inherited by all TypedArray constructors and their
instances.
22.2.2 Properties of the %TypedArray% Intrinsic Object
The %TypedArray% intrinsic object:
- has a [[Prototype]] internal slot whose value is %Function.prototype%.
22.2.2.3 %TypedArray%.prototype
The initial value of %TypedArray%.prototype is the %TypedArray%
prototype object.
22.2.6 Properties of the TypedArray Constructors
Each TypedArray constructor:
- has a [[Prototype]] internal slot whose value is %TypedArray%.
22.2.6.2 TypedArray.prototype
The initial value of TypedArray.prototype is the corresponding
TypedArray prototype intrinsic object (22.2.7).
22.2.7 Properties of the TypedArray Prototype Objects
Each TypedArray prototype object:
- has a [[Prototype]] internal slot whose value is %TypedArray.prototype%.
22.2.7.2 TypedArray.prototype.constructor
The initial value of a TypedArray.prototype.constructor is the
corresponding %TypedArray% intrinsic object.
This patch adds six of the standard typed arrays and tries to share as
much code as possible; a small example of the resulting prototype
relationships follows the list:
- Uint8Array
- Uint16Array
- Uint32Array
- Int8Array
- Int16Array
- Int32Array
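For example, with this hierarchy in place the following should hold
(per the spec sections quoted above):
Object.getPrototypeOf(Uint8Array) === Object.getPrototypeOf(Int32Array);
// true -- both constructors inherit from %TypedArray%
Object.getPrototypeOf(Uint8Array.prototype) === Object.getPrototypeOf(Int32Array.prototype);
// true -- both prototypes inherit from %TypedArray%.prototype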
Proxy is an "exotic object" and doesn't have its own prototype. Use the
regular object prototype instead, but most stuff is happening on the
target object anyway. :^)
Instead of hiding JS exceptions raised on the web, we now print them to
the debug log. This will make it a bit easier to work out why some web
pages aren't working right. :^)
with statements evaluate an expression and put the result of it at the
"front" of the scope chain. This is implemented by creating a WithScope
object and placing it in front of the VM's current call frame's scope.
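For example:
const o = { foo: 1 };
with (o) {
    foo; // 1 -- resolved through the WithScope placed in front of the scope chain
}
foo;     // ReferenceError once the WithScope has been popped again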
Both GlobalObject and LexicalEnvironment now inherit from ScopeObject,
and the VM's call frames point to a ScopeObject chain rather than just
a LexicalEnvironment chain.
This gives us much more flexibility to implement things like "with",
and also unifies some of the code paths that previously required
special handling of the global object.
There's a bunch more cleanup that can be done in the wake of this
change, and there might be some oversights in the handling of the
"super" keyword, but this generally seems like a good architectural
improvement. :^)
Taking non-const cell pointers is asking for trouble, since passing e.g.
a "const Object*" to Value(Object*) will actually call Value(bool),
which is most likely not what you want.
It would be nice to be able to cache some shapes globally in the VM,
but then they can't be tied to a specific global object. So let's just
get rid of the requirement that shapes are tied to a global object.
This should be using the individual flag boolean properties rather than
the [[OriginalFlags]] internal slot.
Use an enumerator macro here for brevity; this will be useful for other
things as well. :^)
- Default values should depend on arguments being undefined, not being
missing
- "(?:)" for empty pattern happens in RegExp.prototype.source, not the
constructor
This makes RegExpObject compile and store a Regex<ECMA262>, adds
all flag-related properties, and implements `RegExpPrototype.test()`
(complete with 'lastIndex' support) :^)
It should be noted that this only implements `test()' using the builtin
`exec()'.
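For example:
const re = /foo/g;
re.test("foo foo"); // true
re.lastIndex;       // 3
re.test("foo foo"); // true
re.lastIndex;       // 7
re.test("foo foo"); // false -- and lastIndex is reset to 0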
The Lexer constructor calls consume() once, which initializes m_position
to be > 0 and sets m_character. consume() calls is_line_terminator(),
which wasn't accounting for this state.
If a receiver is given, e.g. via Reflect.get/set(), forward it to the
target object's get()/put() or use it as last argument of the trap
function. The default value is the Proxy object itself.
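For example:
const target = { get foo() { return this.bar; } };
Reflect.get(target, "foo", { bar: 42 });            // 42 -- the receiver becomes 'this' in the getter
new Proxy(target, {
    get(t, property, receiver) { return receiver; } // receiver is the trap's last argument
}).foo;                                             // the Proxy object itself (the default receiver)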
Passing in a plain Value and expecting it to be a native property is
error prone; let's use a narrower type and pass a NativeProperty
reference directly.
We can't just to_string() the PropertyName; it might be a symbol.
Instead to_value() it and then use to_string_without_side_effects() as
usual.
Fixes #4062.
Previously we would iterate over all the live HeapBlocks in order to
learn if an arbitrary pointer-sized value was a pointer into a live
HeapBlock. This was quite time-consuming.
Instead of that, just put all the live HeapBlock*'s in a HashTable
and identify pointers by doing a bit-masked lookup into the table.
This prevents stack overflows when calling infinite/deep recursive
functions, e.g.:
const f = () => f(); f();
JSON.stringify({}, () => ({ foo: "bar" }));
new Proxy({}, { get: (_, __, p) => p.foo }).foo;
The VM caches a StackInfo object so that function calls are not slowed
down considerably. VM::push_call_frame() will throw an exception if
necessary (plain Error with "RuntimeError" as its .name).
Keeping the VM call frames in a Vector could cause them to move around
underneath us due to Vector resizing. Avoid this issue by allocating
CallFrame objects on the stack and having the VM simply keep a list
of pointers to each CallFrame, instead of the CallFrames themselves.
Fixes #3830.
Fixes #3951.
As the global object is constructed and initialized in a different way
than most other objects, we were not setting its prototype! This made
things like "globalThis.toString()" fail unexpectedly.
If value.to_string() throws an exception and returns a null string we
must create an invalid StringOrSymbol, not one from the null string
(which ASSERT()s).
Some things, like (the non-generic version of) Array.prototype.pop(),
check is_empty() to determine whether an action, like removing elements,
can be performed. We need to know the array-like size for that, not the
size of the underlying storage, which can be different - and is not
something IndexedProperties should expose, so I removed its size().
Fixes #3948.
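For example, the two sizes can differ:
const a = [];
a.length = 3;  // array-like size is 3, but the underlying storage holds no values
a.pop();       // undefined -- and length becomes 2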
- We have to check if the property name is a string before calling
as_string() on it
- We can't as_number() the same property name but have to use the parsed
index number
Fixes #3950.
We can't assume that property names can be converted to strings anymore,
as we have symbols. Use name.to_value() instead.
This makes something like this possible:
new Proxy(Object, { get(t, p) { return t[p] } })[Symbol.hasInstance]
This was probably a result of search & replace; it's quite ridiculous in
some places. Let's consistently use the existing pattern of getting a
reference to the VM once at the start of each function.
This fixes Array.prototype.{join,toString}() crashing with arrays
containing themselves, i.e. circular references.
The spec is suspiciously silent about this, and indeed engine262, a
"100% spec compliant" ECMA-262 implementation, can't handle these cases.
I had a look at some major engines instead and they all seem to keep
track of or check for circular references and return an empty string for
already seen objects.
- SpiderMonkey: "AutoCycleDetector detector(cx, obj)"
- V8: "CycleProtectedArrayJoin<JSArray>(...)"
- JavaScriptCore: "StringRecursionChecker checker(globalObject, thisObject)"
- ChakraCore: "scriptContext->CheckObject(thisArg)"
To keep things simple & consistent this uses the same pattern as
JSONObject, MarkupGenerator and js: simply putting each seen object in a
HashTable<Object*>.
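A minimal reproduction:
const a = [1, 2];
a.push(a);    // the array now contains itself
a.join();     // "1,2," -- the already-seen nested array contributes an empty string
a.toString(); // same result via Array.prototype.toString()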
Fixes #3929.
This renames Object::to_primitive() to Object::ordinary_to_primitive()
for two reasons:
- No confusion with Value::to_primitive()
- To match the spec's name
Also change existing uses of Object::to_primitive() to
Value::to_primitive() when the spec uses the latter (which will still
call Object::ordinary_to_primitive()). Object::to_string() has been
removed as it's not needed anymore (and not something the spec uses).
This makes it possible to overwrite an object's toString and valueOf and
have them provide results for anything that uses to_primitive() - e.g.:
const o = { toString: undefined, valueOf: () => 42 };
Number(o) // 42, previously NaN
["foo", o].toString(); // "foo,42", previously "foo,[object Object]"
++o // 43, previously NaN
etc.
This should not just inherit Object.prototype.toString() (and override
Object::to_string()) but be its own function, i.e.
'RegExp.prototype.toString !== Object.prototype.toString'.
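For example:
/foo/gi.toString();                             // "/foo/gi"
/foo/gi.toString === Object.prototype.toString; // false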
When value.to_string() throws an exception it returns a null string in
which case we must not construct a valid PropertyName.
Also ASSERT in PropertyName(String) and PropertyName(FlyString) to
prevent this from happening in the future.
Fixes #3941.
We must *never* call some method that expects a non-empty value on the
result of a function call without checking for exceptions first. It
won't work reliably.
Fixes #3939.
Two issues:
- throw_exception() with ErrorType::InstanceOfOperatorBadPrototype would
receive rhs_prototype.to_string_without_side_effects(), which would
ASSERT_NOT_REACHED() as to_string_without_side_effects() must not be
called on an empty value. It should (and now does) receive the RHS
value instead as the message is "'prototype' property of {} is not an
object".
- Value::instance_of() was missing an exception check after calling
has_instance_method; calling to_boolean() on an empty value result
would crash as well.
Fixes #3930.
This adds a new MetaProperty AST node which will be used for
'new.target' and 'import.meta' meta properties. The parser now
distinguishes between "in function context" and "in arrow function
context" (which is required for this).
When encountering TokenType::New we will attempt to parse it as a meta
property and resort to regular new expression parsing if that fails,
much like the parsing of labelled statements.
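For example, once 'new.target' evaluation is hooked up to this AST node:
function Foo() {
    return new.target;
}
Foo();             // undefined -- called without 'new'
new Foo() === Foo; // true -- new.target is the invoked constructor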
This is a bit nicer for two reasons:
- The absence of line number/column information isn't based on 'values
are zero' anymore but on the Optional being empty
- When reporting syntax errors with position information other than the
current token's position we had to store line and column ourselves,
like this:
auto foo_start_line = m_parser_state.m_current_token.line_number();
auto foo_start_column = m_parser_state.m_current_token.line_column();
...
syntax_error("...", foo_start_line, foo_start_column);
Which now becomes:
auto foo_start = position();
...
syntax_error("...", foo_start);
This makes it easier to report correct positions for syntax errors
that only emerge a few tokens later :^)
By having the "is this a use strict directive?" logic in
parse_string_literal() we would apply it to *any* string literal, which
is incorrect and would lead to false positives - e.g.:
"use strict" + 1
`"use strict"`
"\123"; ({"use strict": ...})
Relevant part from the spec which is now implemented properly:
[...] and where each ExpressionStatement in the sequence consists
entirely of a StringLiteral token [...]
I also got rid of UseStrictDirectiveState which is not needed anymore.
Fixes #3903.
The previous approach (keeping track of the current source position
manually) was only working for single line sources (which is fair
considering this was developed for Browser's JS console).
The new approach is much simpler: append token trivia (all whitespace
and comments since the last token), then append the styled token value.
https://tc39.es/ecma262/#sec-functiondeclarations-in-ifstatement-statement-clauses
B.3.4 FunctionDeclarations in IfStatement Statement Clauses
The following augments the IfStatement production in 13.6:
IfStatement[Yield, Await, Return] :
if ( Expression[+In, ?Yield, ?Await] ) FunctionDeclaration[?Yield, ?Await, ~Default] else Statement[?Yield, ?Await, ?Return]
if ( Expression[+In, ?Yield, ?Await] ) Statement[?Yield, ?Await, ?Return] else FunctionDeclaration[?Yield, ?Await, ~Default]
if ( Expression[+In, ?Yield, ?Await] ) FunctionDeclaration[?Yield, ?Await, ~Default] else FunctionDeclaration[?Yield, ?Await, ~Default]
if ( Expression[+In, ?Yield, ?Await] ) FunctionDeclaration[?Yield, ?Await, ~Default]
This production only applies when parsing non-strict code. Code matching
this production is processed as if each matching occurrence of
FunctionDeclaration[?Yield, ?Await, ~Default] was the sole
StatementListItem of a BlockStatement occupying that position in the
source code. The semantics of such a synthetic BlockStatement includes
the web legacy compatibility semantics specified in B.3.3.
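For example, in non-strict code the following now parses, with each
FunctionDeclaration treated as the sole statement of a synthetic block:
if (true) function whenTrue() {} else function whenFalse() {}
// processed as:
if (true) { function whenTrue() {} } else { function whenFalse() {} }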
B.1.3 HTML-like Comments
The syntax and semantics of 11.4 is extended as follows except that this
extension is not allowed when parsing source code using the goal symbol
Module:
Syntax (only relevant part included)
SingleLineHTMLCloseComment ::
LineTerminatorSequence HTMLCloseComment
HTMLCloseComment ::
WhiteSpaceSequence[opt] SingleLineDelimitedCommentSequence[opt] --> SingleLineCommentChars[opt]
Fixes #3810.
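For example (Script goal only):
var x = 1; <!-- everything from "<!--" to the end of the line is a comment
--> a line starting with "-->" (after optional whitespace) is a comment, too
x; // 1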
ES 5(.1) described parsing of the function body string as:
https://www.ecma-international.org/ecma-262/5.1/#sec-15.3.2.1
7. If P is not parsable as a FormalParameterList[opt] then throw a SyntaxError exception.
8. If body is not parsable as FunctionBody then throw a SyntaxError exception.
We implemented it as building the source string of a complete function
and feeding that to the parser, with the same outcome. ES 2015+ does
exactly that, but with newlines at certain positions:
https://tc39.es/ecma262/#sec-createdynamicfunction
16. Let bodyString be the string-concatenation of 0x000A (LINE FEED), ? ToString(bodyArg), and 0x000A (LINE FEED).
17. Let prefix be the prefix associated with kind in Table 49.
18. Let sourceString be the string-concatenation of prefix, " anonymous(", P, 0x000A (LINE FEED), ") {", bodyString, and "}".
This patch updates the generated source string to match these
requirements. This will make certain edge cases work, e.g.
'new Function("-->")', where the user supplied input must be placed on
its own line to be valid syntax.
This is, and I can't stress this enough, a lot better than all the
manual bounds checking and indexing that was going on before.
Also fixes a small bug where "\u{}" wouldn't get rejected as an invalid
unicode escape sequence.
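For example:
"\u{41}" // "A"
"\u{}"   // SyntaxError (malformed unicode escape sequence)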
We only use expect(...).toEval() / not.toEval() for checking syntax
errors, where we obviously can't put the code in a regular function. For
runtime errors we do exactly that, so toEval() should not fail - this
allows us to use undefined identifiers in syntax tests.
This allows us to communicate details about invalid tokens to the parser
without having to invent a bunch of specific invalid tokens like
TokenType::InvalidNumericLiteral.
Newlines after line continuation were inserted into the string
literals. This patch makes the parser ignore the newlines after \ and
also makes it so that "use strict" containing a line continuation is
not a valid "use strict".
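For example:
"foo \
bar";    // "foo bar" -- the backslash-newline pair produces no characters
"use \
strict"; // just a regular string, not a Use Strict Directive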
- A regular function can have duplicate parameters except in strict mode
or if its parameter list is not "simple" (has a default or rest
parameter)
- An arrow function can never have duplicate parameters
Compared to other engines I opted for more useful syntax error messages
than a generic "duplicate parameter name not allowed in this context":
"use strict"; function test(foo, foo) {}
^
Uncaught exception: [SyntaxError]: Duplicate parameter 'foo' not allowed in strict mode (line: 1, column: 34)
function test(foo, foo = 1) {}
^
Uncaught exception: [SyntaxError]: Duplicate parameter 'foo' not allowed in function with default parameter (line: 1, column: 20)
function test(foo, ...foo) {}
^
Uncaught exception: [SyntaxError]: Duplicate parameter 'foo' not allowed in function with rest parameter (line: 1, column: 23)
(foo, foo) => {}
^
Uncaught exception: [SyntaxError]: Duplicate parameter 'foo' not allowed in arrow function (line: 1, column: 7)
https://tc39.es/ecma262/#sec-directive-prologues-and-the-use-strict-directive
A Use Strict Directive is an ExpressionStatement in a Directive Prologue
whose StringLiteral is either of the exact code point sequences
"use strict" or 'use strict'. A Use Strict Directive may not contain an
EscapeSequence or LineContinuation.
https://tc39.es/ecma262/#sec-additional-syntax-string-literals
The syntax and semantics of 11.8.4 is extended as follows except that
this extension is not allowed for strict mode code:
Syntax
EscapeSequence::
CharacterEscapeSequence
LegacyOctalEscapeSequence
NonOctalDecimalEscapeSequence
HexEscapeSequence
UnicodeEscapeSequence
LegacyOctalEscapeSequence::
OctalDigit [lookahead ∉ OctalDigit]
ZeroToThree OctalDigit [lookahead ∉ OctalDigit]
FourToSeven OctalDigit
ZeroToThree OctalDigit OctalDigit
ZeroToThree :: one of
0 1 2 3
FourToSeven :: one of
4 5 6 7
NonOctalDecimalEscapeSequence :: one of
8 9
This definition of EscapeSequence is not used in strict mode or when
parsing TemplateCharacter.
Note
It is possible for string literals to precede a Use Strict Directive
that places the enclosing code in strict mode, and implementations must
take care to not use this extended definition of EscapeSequence with
such literals. For example, attempting to parse the following source
text must fail:
function invalid() { "\7"; "use strict"; }
We're passing a token to this function, so m_current_token is actually
the next token - which leads to incorrect line/column numbers for string
literal syntax errors:
"\u"
^
Uncaught exception: [SyntaxError]: Malformed unicode escape sequence (line: 1, column: 5)
Rather than:
"\u"
^
Uncaught exception: [SyntaxError]: Malformed unicode escape sequence (line: 1, column: 1)
This was a regression introduced by 9ffe45b - a TryStatement without
'catch' clause *is* allowed, if it has a 'finally' clause. It is now
checked properly that at least one of the two is present.
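For example:
try {} catch (e) {} // OK
try {} finally {}   // OK again -- a 'finally' clause alone is enough
try {}              // SyntaxError -- at least one of 'catch'/'finally' is required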
This separates matching/parsing of statements and declarations and
fixes a few edge cases where the parser would incorrectly accept a
declaration where only a statement is allowed - for example:
if (foo) const a = 1;
for (var bar;;) function b() {}
while (baz) class c {}
From the spec: https://tc39.es/ecma262/#sec-literals-numeric-literals
The SourceCharacter immediately following a NumericLiteral must not be
an IdentifierStart or DecimalDigit.
For example: 3in is an error and not the two input elements 3 and in.
Otherwise we crash the interpreter when an exception is thrown during
evaluation of the while or do/while test expression - which is easily
caused by a ReferenceError - e.g.:
while (someUndefinedVariable) {
// ...
}
A large number of JS strings are a single ASCII character. This patch
adds a 128-entry cache for those strings to the VM. The cost of the
cache is 1536 bytes of GC heap (all in the same block) + 2304 bytes malloc.
This avoids a lot of GC heap allocations, and packing all of these
in the same heap block is nice for fragmentation as well.
This allows us to provide better error messages as we can point the
syntax error location to the exact first invalid parameter instead of
always the end of the function within an object literal or class
definition.
Before this change:
const Foo = { set bar() {} }
^
Uncaught exception: [SyntaxError]: Object setter property must have one argument (line: 1, column: 28)
class Foo { set bar() {} }
^
Uncaught exception: [SyntaxError]: Class setter method must have one argument (line: 1, column: 26)
After this change:
const Foo = { set bar() {} }
^
Uncaught exception: [SyntaxError]: Setter function must have one argument (line: 1, column: 23)
class Foo { set bar() {} }
^
Uncaught exception: [SyntaxError]: Setter function must have one argument (line: 1, column: 21)
The only possible downside of this change is that class getters/setters
and functions in objects are not distinguished in the message anymore -
I don't think that's important though, and classes are (mostly) just
syntactic sugar anyway.
I'm about to add even more options and a bunch of unnamed true/false
arguments is really not helpful. Let's make this a single parse options
parameter using bit flags.
This provides a huge speed-up for objects with large numbers as property
keys in some situations. Previously we would simply iterate from 0-<max>
and check if there's a non-empty value at each index - now we're being
smarter and compute a list of non-empty indices upfront, by checking
each value in the packed elements vector and appending the sparse
elements hashmap keys (for GenericIndexedPropertyStorage).
Consider this example, an object with a single own property, which is a
number increasing by a factor of 10 each iteration:
for (let i = 0; i < 10; ++i) {
const o = {[10 ** i]: "foo"};
const start = Date.now();
Object.getOwnPropertyNames(o); // <-- IndexedPropertyIterator
const end = Date.now();
console.log(`${10 ** i} -> ${(end - start) / 1000}s`);
}
Before this change:
1 -> 0.0000s
10 -> 0.0000s
100 -> 0.0000s
1000 -> 0.0000s
10000 -> 0.0005s
100000 -> 0.0039s
1000000 -> 0.0295s
10000000 -> 0.2489s
100000000 -> 2.4758s
1000000000 -> 25.5669s
After this change:
1 -> 0.0000s
10 -> 0.0000s
100 -> 0.0000s
1000 -> 0.0000s
10000 -> 0.0000s
100000 -> 0.0000s
1000000 -> 0.0000s
10000000 -> 0.0000s
100000000 -> 0.0000s
1000000000 -> 0.0000s
Fixes #3805.
If there's a newline between the closing paren and the arrow it's not a
valid arrow function; ASI should kick in instead (it'll then fail with
"Unexpected token Arrow").
This simplifies try_parse_arrow_function_expression() and fixes a few
cases that should not produce an arrow function AST but did:
(a,,) => {}
(a b) => {}
(a ...b) => {}
(...b a) => {}
The new parsing logic checks whether parens are expected and uses
parse_function_parameters() if so, rolling back if a new syntax error
occurs during that. Otherwise it's just an identifier in which case we
parse the single parameter ourselves.
It would be cool to solve this in a general way so that looking up
a string literal or StringView in a HashMap with String keys avoids
creating a temp string.
For now, this patch simply addresses the issue in JS::Lexer.
This is a 2-3% speed-up on test-js.
Instead of performing a prototype transition for every new object we
create via {}, prebake the object returned by Object::create_empty()
with a shape with ObjectPrototype as the prototype.
We also prebake the shape for the object assigned to the "prototype"
property of new ScriptFunction objects, since those are extremely
common and that code broke from this change anyway.
This avoids a large number of transitions and is a small speed-up on
test-js.
This is not actually necessary, since no GC allocations are made during
this process. If we ever make property tables into heap cells, we'd
have to rethink this.
This assumption only works for the m_packed_elements Vector where a
missing value at a certain index still returns an empty value, but not
for the m_sparse_elements HashMap, which is being used for indices
>= 200 - in that case the Optional<ValueAndAttributes> result will not
have a value.
This fixes a crash in the js REPL where printing an array with a hole at
any index >= 200 would crash.
When we're initializing objects, we're just adding a bunch of new
properties, without transition, and without overlap (we never add
the same property twice.)
Take advantage of this by skipping lookups entirely (no need to see
if we're overwriting an existing property) during initialization.
Another nice test-js speedup :^)
Roughly 7% of test-js runtime was spent creating FlyStrings from string
literals. This patch frontloads that work and caches all the commonly
used names in LibJS on a CommonPropertyNames struct that hangs off VM.
When changing the attributes of an existing property of an object with
unique shape we must not change the PropertyMetadata offset.
Doing so without resizing the underlying storage vector caused an OOB
write crash.
Fixes #3735.
Previously, when a loop detected an unwind of type ScopeType::Function
(which means a return statement was executed inside of the loop), it
would just return undefined. This set the VM's last_value to undefined,
when it should have been the returned value. This patch makes all loop
statements return the appropriate value in the above case.
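For example:
function firstGreaterThanOne(values) {
    let i = 0;
    while (i < values.length) {
        if (values[i] > 1)
            return values[i]; // return executed inside the loop body
        i++;
    }
}
firstGreaterThanOne([1, 2, 3]); // 2 -- the loop no longer swallows the returned value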
'continue' is no longer allowed outside of a loop, and an unlabeled
'break' is no longer allowed outside of a loop or switch statement.
Labeled 'break' statements are still allowed everywhere, even if the
label does not exist.
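For example:
continue;           // SyntaxError -- 'continue' is only valid inside a loop
break;              // SyntaxError -- unlabeled 'break' needs a loop or switch
break missingLabel; // still accepted by the parser, even though the label doesn't exist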
Instead of keeping all the HeapBlocks in one big list, we now split it
into two levels:
- Heap has a set of Allocators, each with a specific cell size.
- Allocators have two lists of blocks, "full" and "usable".
Allocating a new cell no longer has to scan the entire set of blocks,
but instead just needs to find the right allocator and then pop a cell
from its freelist. If all the blocks in the allocator are full, a new
block will be created.
Blocks are moved from the "full" to "usable" list after sweeping has
determined that they are not completely empty and not completely full.
There are certainly many ways we can improve on this. This patch is
mostly about getting the new allocator architecture in place. :^)
While initializing common runtime objects like functions, prototypes,
etc., we don't really care about tracking transitions for each and every
property added to them.
This patch puts objects into a "disable transitions" mode while we call
initialize() on them. After that, adding more properties will cause new
transitions to be generated and added to the chain.
This gives a ~10% speed-up on test-js. :^)
When reifying a shape transition chain, look for the nearest previous
shape in the transition chain that has a property table already, and
use that as the starting point.
This achieves two things:
1. We do less work when reifying property tables that already have
partial property tables earlier in the chain.
2. This enables adding properties to a shape without performing a
transition. This will be useful for initializing runtime objects
with way fewer allocations. See next patch. :^)
The check for invalid lhs and assignment to eval/arguments in strict
mode should happen for all kinds of assignment expressions, not just
AssignmentOp::Assignment.
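For example, in strict mode these are now all rejected, not just the
plain '=' case:
"use strict";
eval = 1;       // was already rejected
eval += 1;      // now rejected as well
arguments *= 2; // now rejected as well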
So far we have three different syntax highlighters for LibJS:
- js's Line::Editor stylization
- JS::MarkupGenerator
- GUI::JSSyntaxHighlighter
This not only caused repetition of most token types in each highlighter
but also a lot of inconsistency regarding the styling of certain tokens:
- JSSyntaxHighlighter was considering TokenType::Period to be an
operator whereas MarkupGenerator categorized it as punctuation.
- MarkupGenerator was considering TokenType::{Break,Case,Continue,
Default,Switch,With} to be control keywords whereas JSSyntaxHighlighter
just disregarded them
- MarkupGenerator considered some future reserved keywords invalid and
others not. JSSyntaxHighlighter and js disregarded most of them
Adding a new token type meant adding it to ENUMERATE_JS_TOKENS as well
as each individual highlighter's switch/case construct.
I added a TokenCategory enum, and each TokenType is now associated with
a certain category, which the syntax highlighters can then use for styling
rather than operating on the token type directly. This also makes
changing a token's category everywhere easier, should we need to do that
(e.g. I decided to make TokenType::{Period,QuestionMarkPeriod}
TokenCategory::Operator for now, but we might want to change them to
Punctuation).
There's no point in trying to achieve shape sharing for global objects,
so we can simply make the shape unique from the start and avoid making
a transition chain.
Previously whenever you would ask a Shape how many properties it had,
it would reify the property table into a HashMap and use HashMap::size()
to answer the question.
This can be a huge waste of time if we don't need the property table for
anything else, so this patch implements property count tracking in a
separate integer member of Shape. :^)