67 Commits

SHA1 Message Date
bca0a14371 Allow storing a result value for a token from a lexer code block 2022-10-16 21:40:25 -04:00
ca8a360c0e Provide matched text to lexer user code block 2022-10-15 13:37:34 -04:00
623c644e74 Work on real D strings instead of ubyte pointer and length 2022-10-15 13:32:33 -04:00
           Also fix a couple UTF-8 decoder bugs!
de93d23585 Add test for non-LALR grammar failing to generate parser 2022-10-13 05:17:06 -04:00
ad09ff039a Add spec to test parsing lists 2022-10-13 05:02:05 -04:00
727c8cd1ea Execute rule user code blocks when reducing the rule 2022-10-13 04:56:50 -04:00
31970522de Store parse result; add result_type grammar keyword 2022-10-12 20:56:14 -04:00
02be6de48e Add lexer modes and $mode() code expansion 2022-10-09 22:49:01 -04:00
b2d11321fe Add grammar syntax to specify lexer mode for tokens and patterns 2022-10-04 22:23:39 -04:00
66d654b6b9 Add $token() user code block expansion 2022-10-02 10:43:47 -04:00
43fb74fe4b Capture and verify stdout results from tests 2022-10-02 10:31:07 -04:00
01ef4fc27c Rename some test files 2022-10-02 10:10:04 -04:00
f46b5b3f4d Remove TOKEN_EOF; define EOF token and start rule in Generator 2022-10-02 10:07:44 -04:00
150be33826 Assign pattern code IDs in Generator instead of Grammar 2022-10-01 10:36:35 -04:00
e7e30c4f28 Add pattern statement 2022-09-30 21:05:18 -04:00
04367db0ac Add forward slashes around patterns and parse more robustly 2022-09-28 23:05:01 -04:00
4b148f3797 Add Grammar spec 2022-09-27 14:45:29 -04:00
a4c5546876 Disable parser debug output 2022-09-27 12:33:23 -04:00
4d716f6c10 Remove Token#c_name; use given token case in token constants 2022-09-25 14:48:49 -04:00
672098ad32 Execute user code blocks assigned to tokens 2022-09-24 17:31:40 -04:00
38ae5ac7a1 Split Token class into Token/Pattern 2022-09-15 22:46:44 -04:00
bf075a69f6 Test matching a semicolon 2022-07-23 22:25:17 -04:00
b682c72b17 Add semicolon to end of all grammar statements 2022-07-23 22:09:19 -04:00
382e17804c Test SLR grammar 2022-06-27 21:06:03 -04:00
30f4cfcc99 Write parser log file 2022-06-26 11:06:55 -04:00
           Fix bug of skipping rule set IDs.
           Remove unneeded out_sets from ItemSet class.
2fbe13e071 Do not consume lookahead token when reducing 2022-06-25 21:35:54 -04:00
f2cc5b112e Handle shifting states after reducing 2022-06-25 16:16:20 -04:00
84c4a16ce6 Start on Parser.parse() 2022-06-21 23:03:00 -04:00
df8088c3c6 Clean up rule format in grammar files 2022-06-05 16:28:35 -04:00
ba74d0a20a Reduce maximum code point value to not interfere with magic code point values used by parser 2022-06-05 15:24:40 -04:00
fe607291f4 Use .propane extension for test grammars 2022-06-05 15:18:55 -04:00
f37801ec9e Store tokens and drop tokens separately 2022-06-05 14:36:19 -04:00
7598c589fe Detect other invalid UTF-8 encodings 2022-05-31 22:26:09 -04:00
ddadc2008b Rename to propane 2022-05-28 20:20:03 -04:00
746ec89be8 Add test for a rule that can be arrived at from multiple states 2021-09-21 21:40:11 -04:00
850e639e3a Update identical rule spec to use lookahead symbol 2021-09-06 20:18:17 -04:00
bdb10e7afc Test duplicate rules 2021-09-05 09:50:04 -04:00
08e3516ad9 Add wikipedia LR(0) parser example test 2021-09-04 22:33:34 -04:00
6026bf1514 Start building following item sets 2021-08-29 09:41:00 -04:00
2e16b0bd6e Start on Item and ItemSet 2021-08-28 09:02:19 -04:00
00016f16b3 Combine Grammar and Generator into top-level Imbecile class 2021-08-22 21:04:46 -04:00
9459883e74 Add Lexer class; Move LexerDFA to Lexer::DFA 2021-08-18 17:09:45 -04:00
2685c05360 Change rule syntax 2021-07-19 21:55:08 -04:00
c0c3353fd7 Test lexing empty null string returns EOF 2021-07-06 12:06:07 -04:00
3158e51059 Add length field to LexedToken 2021-07-06 11:59:35 -04:00
d9e4f64d2e Fix returning TOKEN_EOF when lexing at EOF 2021-07-06 11:55:44 -04:00
e8df4296cc Begin testing lexer 2021-07-06 11:09:39 -04:00
1271e19b50 Test multi-byte code point decoding 2021-07-06 11:02:43 -04:00
12e11399af Add decoder tests 2021-07-06 10:57:06 -04:00
ca7d4862f9 Run test executable; build with unit tests 2021-07-06 10:03:42 -04:00