ladybird/Libraries/LibJS/Lexer.h


/*
 * Copyright (c) 2020, Stephan Unverwerth <s.unverwerth@serenityos.org>
 *
 * SPDX-License-Identifier: BSD-2-Clause
 */

#pragma once

#include "Token.h"
#include <AK/ByteString.h>
#include <AK/HashMap.h>
#include <AK/String.h>
#include <AK/StringView.h>

namespace JS {

class Lexer {
public:
    explicit Lexer(StringView source, StringView filename = "(unknown)"sv, size_t line_number = 1, size_t line_column = 0);

    Token next();

    ByteString const& source() const { return m_source; }
    String const& filename() const { return m_filename; }

    void disallow_html_comments() { m_allow_html_comments = false; }

    Token force_slash_as_regex();

private:
    void consume();
    bool consume_exponent();
    bool consume_octal_number();
    bool consume_hexadecimal_number();
    bool consume_binary_number();
    bool consume_decimal_number();

    bool is_unicode_character() const;
    u32 current_code_point() const;

    bool is_eof() const;
    bool is_line_terminator() const;
    bool is_whitespace() const;
    Optional<u32> is_identifier_unicode_escape(size_t& identifier_length) const;
    Optional<u32> is_identifier_start(size_t& identifier_length) const;
    Optional<u32> is_identifier_middle(size_t& identifier_length) const;
    bool is_line_comment_start(bool line_has_token_yet) const;
    bool is_block_comment_start() const;
    bool is_block_comment_end() const;
    bool is_numeric_literal_start() const;
    bool match(char, char) const;
    bool match(char, char, char) const;
    bool match(char, char, char, char) const;

    template<typename Callback>
    bool match_numeric_literal_separator_followed_by(Callback) const;

    bool slash_means_division() const;

    TokenType consume_regex_literal();

    ByteString m_source;
    size_t m_position { 0 };
    Token m_current_token;
    char m_current_char { 0 };
    bool m_eof { false };

    String m_filename;
    size_t m_line_number { 1 };
    size_t m_line_column { 0 };

    bool m_regex_is_in_character_class { false };

    // Template literals require lexer-side state because they can contain nested
    // expressions, so most of the work happens in the Lexer rather than the Parser.
    // When a new template literal is entered, a TemplateLiteralStart token is
    // emitted. Inside the literal, text is consumed up to a '${' or '`' (EOF here
    // is a syntax error). On '${' a TemplateLiteralExprStart token is emitted and
    // the lexer proceeds as normal, counting unmatched open curly braces to find
    // the end of the expression; when the matching '}' is found, a
    // TemplateLiteralExprEnd token is emitted. A '`' seen while inside a template
    // literal (but not inside an expression) closes the literal and emits a
    // TemplateLiteralEnd token. The state needed for this is a vector (to handle
    // nesting) of TemplateState entries: whether we are currently in a template
    // expression (as opposed to the template string itself), and the number of
    // unmatched open curly braces seen so far (only meaningful inside an
    // expression).
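    //
    // For illustration, a literal like `a${b}c` would produce roughly the
    // following token sequence (only the Start/End and ExprStart/ExprEnd tokens
    // are described above; the exact name of the string-piece token is an
    // assumption):
    //
    //   TemplateLiteralStart
    //     <string piece "a">
    //   TemplateLiteralExprStart
    //     Identifier "b"
    //   TemplateLiteralExprEnd
    //     <string piece "c">
    //   TemplateLiteralEnd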
    struct TemplateState {
        bool in_expr;
        u8 open_bracket_count;
    };
    Vector<TemplateState> m_template_states;

    bool m_allow_html_comments { true };

    Optional<size_t> m_hit_invalid_unicode;

    static HashMap<DeprecatedFlyString, TokenType> s_keywords;

    struct ParsedIdentifiers : public RefCounted<ParsedIdentifiers> {
        // Resolved identifiers must be kept alive for the duration of the parsing stage; otherwise,
        // the only references to these strings are deleted by the Token destructor.
        HashTable<DeprecatedFlyString> identifiers;
    };

    RefPtr<ParsedIdentifiers> m_parsed_identifiers;
};
}
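
// A minimal usage sketch (an illustration, not part of this header's API): next()
// is called until an end-of-file token is produced. Token::type(), Token::name(),
// Token::value(), TokenType::Eof and outln() are assumed to come from Token.h and
// AK rather than from this file.
//
//     JS::Lexer lexer("let x = 1;"sv, "example.js"sv);
//     for (auto token = lexer.next(); token.type() != JS::TokenType::Eof; token = lexer.next())
//         outln("{}: '{}'", token.name(), token.value());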