Compare commits


No commits in common. "e2db9c95c572292b7d97a5a27a296ff1962cb883" and "a7348be95dd7c0ae503d46d5af701fda10f3c06f" have entirely different histories.

116 changed files with 1660 additions and 5074 deletions

View File

@@ -1,38 +0,0 @@
name: Run Propane Tests
on:
push:
branches:
- master
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
ruby-version: ['3.4']
steps:
- name: Install dependencies (Linux)
if: runner.os == 'Linux'
run: sudo apt-get update && sudo apt-get install -y gcc gdc ldc
- name: Install dependencies (macOS)
if: runner.os == 'macOS'
run: brew install gcc ldc
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: ${{ matrix.ruby-version }}
- name: Install dependencies
run: bundle install
- name: Run tests
run: rake all

.rspec
View File

@@ -1,2 +1,3 @@
--format documentation
--color
--require spec_helper

View File

@@ -1,123 +1,8 @@
## v4.1.0
### New Features
- Add `p_context_delete()` and `p_tree_delete()` for D targets.
## v4.0.0
### New Features
- Add `context_user_fields` statement to allow custom context user fields.
- Add `token_user_fields` statement to allow custom token user fields.
- Add `on_token_node` statement to allow custom code when constructing token nodes.
- Add `free_token_node` statement to allow custom code when freeing token nodes.
- Add `p_context_delete()`.
- Allow `drop` patterns to execute lexer user code blocks.
### Breaking Changes
- Replace `p_context_init()` with `p_context_new()` and `p_context_delete()`.
- Renamed `p_free_tree()` to `p_tree_delete()`.
- The `free_token_node` statement now takes a user code block instead of a
function name parameter.
## v3.0.0
### New Features
- Add support for multiple starting rules (#38)
- Add `p_free_tree()` functions to reclaim generated tree memory
- Add `free_token_node` grammar statement to reclaim user-allocated memory stored in a Token tree node `pvalue` field
- Add valgrind memory leak tests to unit tests
- Fix build issues for C++ to officially support C++ target output
### Improvements
- Document `p_lex()` and `p_token_info_t` in user guide (#37)
### Breaking Changes
- Rename AST generation mode to tree generation mode (see [UPGRADING.md](UPGRADING.md))
## v2.3.0
### New Features
- Add \D, \S, \w, \W special character classes
### Improvements
- Include line numbers for pattern errors
- Improve performance in a few places
- Parallelize parser table generation on Linux hosts
- Add github workflow to run unit tests
### Fixes
- Fix a couple clang warnings for C backend
- Fix C backend not fully initializing pvalues when multiple ptypes are used with different sizes.
- Fix some user guide examples
## v2.2.1
### Fixes
- Fix GC issue for D backend when AST is enabled (#36)
## v2.2.0
### Improvements
- Allow multiple lexer modes to be specified for a lexer pattern (#35)
- Document p_decode_code_point() API function (#34)
## v2.1.1
### Fixes
- Field aliases for AST node fields could alias the incorrect field when multiple rule alternatives are present for one rule set (#33)
## v2.1.0
### Improvements
- Report rule name and line number for conflicting AST node field positions errors (#32)
## v2.0.0
### Improvements
- Log conflicting rules on reduce/reduce conflict (#31)
- Use 1-based row and column values for position values (#30)
### Fixes
- Fix named optional rules (#29)
### Upgrading
- Adjust all uses of p_position_t row and col values to expect 1-based instead
of 0-based values.
## v1.5.1
### Improvements
- Improve performance (#28)
## v1.5.0
### New Features
- Track start and end text positions for tokens and rules in AST node structures (#27)
- Add warnings for shift/reduce conflicts to log file (#25)
- Add -w command line switch to treat warnings as errors and output to stderr (#26)
- Add rule field aliases (#24)
### Improvements
- Show line numbers of rules on conflict (#23)
- Track token position in AST Token node
## v1.4.0
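The v2.0.0 upgrading note above means position consumers must treat `row`/`col` as starting at 1. A minimal sketch of computing a 1-based position from a byte offset (illustrative only; `position_at` is not part of the generated API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t row; uint32_t col; } position_t;

/* Compute the 1-based row/col of byte `offset` in `input`. */
position_t position_at(char const * input, size_t offset)
{
    position_t pos = {1u, 1u};  /* v2.0.0+: positions start at 1, not 0 */
    for (size_t i = 0u; i < offset; i++)
    {
        if (input[i] == '\n') { pos.row++; pos.col = 1u; }
        else { pos.col++; }
    }
    return pos;
}
```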

View File

@@ -1,6 +1,5 @@
source "https://rubygems.org"
gem "base64"
gem "rake"
gem "rspec"
gem "rdoc"

View File

@@ -1,48 +1,40 @@
GEM
remote: https://rubygems.org/
specs:
base64 (0.3.0)
date (3.5.1)
diff-lcs (1.6.2)
docile (1.4.1)
erb (6.0.1)
psych (5.3.1)
date
diff-lcs (1.5.0)
docile (1.4.0)
psych (5.1.0)
stringio
rake (13.3.1)
rdoc (7.2.0)
erb
rake (13.0.6)
rdoc (6.5.0)
psych (>= 4.0.0)
tsort
redcarpet (3.6.1)
rspec (3.13.2)
rspec-core (~> 3.13.0)
rspec-expectations (~> 3.13.0)
rspec-mocks (~> 3.13.0)
rspec-core (3.13.6)
rspec-support (~> 3.13.0)
rspec-expectations (3.13.5)
redcarpet (3.6.0)
rspec (3.12.0)
rspec-core (~> 3.12.0)
rspec-expectations (~> 3.12.0)
rspec-mocks (~> 3.12.0)
rspec-core (3.12.2)
rspec-support (~> 3.12.0)
rspec-expectations (3.12.3)
diff-lcs (>= 1.2.0, < 2.0)
rspec-support (~> 3.13.0)
rspec-mocks (3.13.7)
rspec-support (~> 3.12.0)
rspec-mocks (3.12.6)
diff-lcs (>= 1.2.0, < 2.0)
rspec-support (~> 3.13.0)
rspec-support (3.13.7)
rspec-support (~> 3.12.0)
rspec-support (3.12.1)
simplecov (0.22.0)
docile (~> 1.1)
simplecov-html (~> 0.11)
simplecov_json_formatter (~> 0.1)
simplecov-html (0.13.2)
simplecov-html (0.12.3)
simplecov_json_formatter (0.1.4)
stringio (3.2.0)
stringio (3.0.7)
syntax (1.2.2)
tsort (0.2.0)
PLATFORMS
ruby
DEPENDENCIES
base64
rake
rdoc
redcarpet

View File

@@ -6,10 +6,8 @@ Propane is a LALR Parser Generator (LPG) which:
* generates a built-in lexer to tokenize input
* supports UTF-8 lexer inputs
* generates a table-driven shift/reduce parser to parse input in linear time
* targets C, C++, or D language outputs
* optionally supports automatic full parse tree generation
* supports starting parsing from multiple start rules
* tracks input text start and end positions for all matched tokens/rules
* targets C or D language outputs
* optionally supports automatic full AST generation
* is MIT-licensed
* is distributable as a standalone Ruby script
@@ -33,14 +31,9 @@ Propane is typically invoked from the command-line as `./propane`.
Usage: ./propane [options] <input-file> <output-file>
Options:
-h, --help Show this usage and exit.
--log LOG Write log file. This will show all parser states and their
associated shifts and reduces. It can be helpful when
debugging a grammar.
--version Show program version and exit.
-w Treat warnings as errors. This option will treat shift/reduce
conflicts as fatal errors and will print them to stderr in
addition to the log file.
--log LOG Write log file
--version Show program version and exit
-h, --help Show this usage and exit
The user must specify the path to a Propane input grammar file and a path to an
output file.

View File

@@ -1,7 +1,5 @@
require "rake/clean"
require "rspec/core/rake_task"
require "simplecov"
require "stringio"
CLEAN.include %w[spec/run gen .yardoc yard coverage dist]
@@ -14,18 +12,6 @@ RSpec::Core::RakeTask.new(:spec, :example_pattern) do |task, args|
task.rspec_opts = %W[-e "#{args.example_pattern}" -f documentation]
end
end
task :spec do |task, args|
unless ENV["dist_specs"]
original_stdout = $stdout
sio = StringIO.new
$stdout = sio
SimpleCov.collate Dir["coverage/.resultset.json"]
$stdout = original_stdout
sio.string.lines.each do |line|
$stdout.write(line) unless line =~ /Coverage report generated for/
end
end
end
# dspec task is useful to test the distributable release script, but is not
# useful for coverage information.

View File

@@ -1,21 +0,0 @@
## v4.0.0
### API Changes
- Replace any calls to `p_context_init()` with `p_context_new()`.
- Replace any references to the address of a statically allocated context
structure with the pointer returned from `p_context_new()` (e.g. `&context`
-> `context`).
- Add a call to `p_context_delete()` (for C or C++) after lexing/parsing to
reclaim context memory.
- Rename `p_free_tree()` calls to `p_tree_delete()`.
- Change `free_token_node` statement calls from taking a function name argument
to taking a user code block.
## v3.0.0
### Grammar Changes
- Rename `ast;` statement to `tree;`.
- Rename `ast_prefix;` statement to `tree_prefix;`.
- Rename `ast_suffix;` statement to `tree_suffix;`.
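The v4.0.0 API changes listed above amount to moving context ownership into the generated parser. A before/after sketch for C, with the context type and functions stubbed locally so the example is self-contained (the real definitions come from the Propane-generated source, and the `p_` prefix is the default):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Local stub of the generated context type (the real one is generated). */
typedef struct
{
    uint8_t const * input;
    size_t input_length;
} p_context_t;

/* v4 style: the parser heap-allocates and returns the context. */
p_context_t * p_context_new(uint8_t const * input, size_t input_length)
{
    p_context_t * context = (p_context_t *)calloc(1u, sizeof(p_context_t));
    context->input = input;
    context->input_length = input_length;
    return context;
}

/* v4 style: the matching cleanup call reclaims the context memory. */
void p_context_delete(p_context_t * context)
{
    free(context);
}
```

Migrating v3 code, `p_context_t context; p_context_init(&context, input, len);` becomes `p_context_t * context = p_context_new(input, len);`, every `&context` becomes `context`, a `p_context_delete(context);` call is added after parsing, and `p_free_tree(tree)` calls are renamed to `p_tree_delete(tree)`.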

View File

@@ -43,52 +43,27 @@ const char * <%= @grammar.prefix %>token_names[] = {
*************************************************************************/
/**
* Allocate and initialize lexer/parser context structure.
*
* Deinitialize and deallocate with <%= @grammar.prefix %>context_delete().
* Initialize lexer/parser context structure.
*
* @param[out] context
* Lexer/parser context structure.
* @param input
* Text input.
* @param input_length
* Text input length.
*
* @return Context structure for lexer/parser.
*/
<%= @grammar.prefix %>context_t * <%= @grammar.prefix %>context_new(uint8_t const * input, size_t input_length)
void <%= @grammar.prefix %>context_init(<%= @grammar.prefix %>context_t * context, uint8_t const * input, size_t input_length)
{
<% if @cpp %>
<%= @grammar.prefix %>context_t * context = new <%= @grammar.prefix %>context_t();
<% else %>
<%= @grammar.prefix %>context_t * context = (<%= @grammar.prefix %>context_t *)calloc(1, sizeof(<%= @grammar.prefix %>context_t));
<% end %>
/* New default-initialized context structure. */
<%= @grammar.prefix %>context_t newcontext = {0};
/* Lexer initialization. */
context->input = input;
context->input_length = input_length;
context->text_position.row = 1u;
context->text_position.col = 1u;
context->mode = <%= @lexer.mode_id("default") %>;
newcontext.input = input;
newcontext.input_length = input_length;
newcontext.mode = <%= @lexer.mode_id("default") %>;
return context;
}
/**
* Deinitialize and deallocate lexer/parser context structure.
*
* For C++, destructors will be called for any context user fields. However, if
* pointers are used to store allocated resources, the user should free them
* before calling this function.
*
* @param context
* Lexer/parser context structure allocated with <%= @grammar.prefix %>context_new().
*/
void <%= @grammar.prefix %>context_delete(<%= @grammar.prefix %>context_t * context)
{
<% if @cpp %>
delete context;
<% else %>
free(context);
<% end %>
/* Copy to the user's context structure. */
*context = newcontext;
}
/**************************************************************************
@@ -367,10 +342,8 @@ static lexer_state_id_t check_lexer_transition(uint32_t current_state, uint32_t
static size_t find_longest_match(<%= @grammar.prefix %>context_t * context,
lexer_match_info_t * out_match_info, size_t * out_unexpected_input_length)
{
lexer_match_info_t longest_match;
memset(&longest_match, 0, sizeof(longest_match));
lexer_match_info_t attempt_match;
memset(&attempt_match, 0, sizeof(attempt_match));
lexer_match_info_t longest_match = {0};
lexer_match_info_t attempt_match = {0};
*out_match_info = longest_match;
uint32_t current_state = lexer_mode_table[context->mode].state_table_offset;
for (;;)
@@ -384,7 +357,6 @@ static size_t find_longest_match(<%= @grammar.prefix %>context_t * context,
switch (result)
{
case P_SUCCESS:
{
lexer_state_id_t transition_state = check_lexer_transition(current_state, code_point);
if (transition_state != INVALID_LEXER_STATE_ID)
{
@@ -393,7 +365,7 @@ static size_t find_longest_match(<%= @grammar.prefix %>context_t * context,
if (code_point == '\n')
{
attempt_match.delta_position.row++;
attempt_match.delta_position.col = 1u;
attempt_match.delta_position.col = 0u;
}
else
{
@@ -416,7 +388,6 @@ static size_t find_longest_match(<%= @grammar.prefix %>context_t * context,
*out_unexpected_input_length = attempt_match.length + code_point_length;
return P_UNEXPECTED_INPUT;
}
}
break;
case P_EOF:
@@ -474,8 +445,7 @@ static size_t find_longest_match(<%= @grammar.prefix %>context_t * context,
*/
static size_t attempt_lex_token(<%= @grammar.prefix %>context_t * context, <%= @grammar.prefix %>token_info_t * out_token_info)
{
<%= @grammar.prefix %>token_info_t token_info;
memset(&token_info, 0, sizeof(token_info));
<%= @grammar.prefix %>token_info_t token_info = {0};
token_info.position = context->text_position;
token_info.token = INVALID_TOKEN_ID;
lexer_match_info_t match_info;
@@ -484,7 +454,6 @@ static size_t attempt_lex_token(<%= @grammar.prefix %>context_t * context, <%= @
switch (result)
{
case P_SUCCESS:
{
<%= @grammar.prefix %>token_t token_to_accept = match_info.accepting_state->token;
if (match_info.accepting_state->code_id != INVALID_USER_CODE_ID)
{
@@ -536,7 +505,6 @@ static size_t attempt_lex_token(<%= @grammar.prefix %>context_t * context, <%= @
token_info.end_position.col = token_info.position.col + match_info.end_delta_position.col;
}
*out_token_info = token_info;
}
return P_SUCCESS;
case P_EOF:
@@ -599,7 +567,7 @@ size_t <%= @grammar.prefix %>lex(<%= @grammar.prefix %>context_t * context, <%=
*************************************************************************/
/** Invalid position value. */
#define INVALID_POSITION (<%= @grammar.prefix %>position_t){0u, 0u}
#define INVALID_POSITION (<%= @grammar.prefix %>position_t){0xFFFFFFFFu, 0xFFFFFFFFu}
/** Reduce ID type. */
typedef <%= get_type_for(@parser.reduce_table.size) %> reduce_id_t;
@@ -660,7 +628,7 @@ typedef struct
* reduce action.
*/
parser_state_id_t n_states;
<% if @grammar.tree %>
<% if @grammar.ast %>
/**
* Map of rule components to rule set child fields.
@@ -668,7 +636,7 @@ typedef struct
uint16_t const * rule_set_node_field_index_map;
/**
* Number of rule set tree node fields.
* Number of rule set AST node fields.
*/
uint16_t rule_set_node_field_array_size;
@@ -710,23 +678,19 @@ typedef struct
/** Parser value from this state. */
<%= @grammar.prefix %>value_t pvalue;
<% if @grammar.tree %>
/** tree node. */
void * tree_node;
<% if @grammar.ast %>
/** AST node. */
void * ast_node;
<% end %>
} state_value_t;
<% if @grammar.tree %>
/** Common tree node structure. */
typedef struct TreeNode_s
/** Common AST node structure. */
typedef struct
{
<%= @grammar.prefix %>position_t position;
<%= @grammar.prefix %>position_t end_position;
uint16_t n_fields;
uint8_t is_token;
struct TreeNode_s * fields[];
} TreeNode;
<% end %>
void * fields[];
} ASTNode;
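Both node layouts above end in a C flexible array member, so each node and its child-pointer slots come from a single allocation sized at reduce time. A standalone sketch of that allocation pattern (simplified struct and names; not the generated code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified tree node with a flexible array member for child pointers. */
typedef struct TreeNode_s
{
    uint16_t n_fields;
    struct TreeNode_s * fields[];  /* storage follows the struct */
} TreeNode;

TreeNode * tree_node_new(uint16_t n_fields)
{
    /* One allocation covers the header and all n_fields child slots. */
    size_t bytes = sizeof(TreeNode) + n_fields * sizeof(TreeNode *);
    TreeNode * node = (TreeNode *)malloc(bytes);
    memset(node, 0, bytes);
    node->n_fields = n_fields;
    return node;
}
```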
/** Parser shift table. */
static const shift_t parser_shift_table[] = {
@@ -735,7 +699,7 @@ static const shift_t parser_shift_table[] = {
<% end %>
};
<% if @grammar.tree %>
<% if @grammar.ast %>
<% @grammar.rules.each do |rule| %>
<% unless rule.flat_rule_set_node_field_index_map? %>
const uint16_t r_<%= rule.name.gsub("$", "_") %><%= rule.id %>_node_field_index_map[<%= rule.rule_set_node_field_index_map.size %>] = {<%= rule.rule_set_node_field_index_map.map {|v| v.to_s}.join(", ") %>};
@@ -746,22 +710,17 @@ const uint16_t r_<%= rule.name.gsub("$", "_") %><%= rule.id %>_node_field_index_
/** Parser reduce table. */
static const reduce_t parser_reduce_table[] = {
<% @parser.reduce_table.each do |reduce| %>
{
<%= reduce[:token_id] %>u, /* Token: <%= reduce[:token] ? reduce[:token].name : "(any)" %> */
<%= reduce[:rule_id] %>u, /* Rule ID */
<%= reduce[:rule_set_id] %>u, /* Rule set ID (<%= reduce[:rule].rule_set.name %>) */
<% if @grammar.tree %>
<%= reduce[:n_states] %>u, /* Number of states */
{<%= reduce[:token_id] %>u, <%= reduce[:rule_id] %>u, <%= reduce[:rule_set_id] %>u, <%= reduce[:n_states] %>u
<% if @grammar.ast %>
<% if reduce[:rule].flat_rule_set_node_field_index_map? %>
NULL, /* No rule set node field index map (flat map) */
, NULL
<% else %>
&r_<%= reduce[:rule].name.gsub("$", "_") %><%= reduce[:rule].id %>_node_field_index_map[0], /* Rule set node field index map */
, &r_<%= reduce[:rule].name.gsub("$", "_") %><%= reduce[:rule].id %>_node_field_index_map[0]
<% end %>
<%= reduce[:rule].rule_set.tree_fields.size %>, /* Number of tree fields */
<%= reduce[:propagate_optional_target] %>}, /* Propagate optional target? */
<% else %>
<%= reduce[:n_states] %>u},
, <%= reduce[:rule].rule_set.ast_fields.size %>
, <%= reduce[:propagate_optional_target] %>
<% end %>
},
<% end %>
};
@@ -831,7 +790,7 @@ static void state_values_stack_push(state_values_stack_t * stack)
if (current_length >= current_capacity)
{
size_t const new_capacity = current_capacity * 2u;
state_value_t * new_entries = (state_value_t *)malloc(new_capacity * sizeof(state_value_t));
state_value_t * new_entries = malloc(new_capacity * sizeof(state_value_t));
memcpy(new_entries, stack->entries, current_length * sizeof(state_value_t));
free(stack->entries);
stack->capacity = new_capacity;
@@ -865,7 +824,7 @@ static void state_values_stack_free(state_values_stack_t * stack)
free(stack->entries);
}
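`state_values_stack_push()` above grows its backing array by doubling capacity and copying, giving amortized O(1) pushes. The same scheme in a self-contained form (a generic `int` stack with illustrative names, not the template's types):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct
{
    int * entries;
    size_t length;
    size_t capacity;
} int_stack_t;

void int_stack_init(int_stack_t * stack)
{
    stack->capacity = 4u;
    stack->length = 0u;
    stack->entries = (int *)malloc(stack->capacity * sizeof(int));
}

void int_stack_push(int_stack_t * stack, int value)
{
    if (stack->length >= stack->capacity)
    {
        /* Double the capacity and copy, as in state_values_stack_push(). */
        size_t new_capacity = stack->capacity * 2u;
        int * new_entries = (int *)malloc(new_capacity * sizeof(int));
        memcpy(new_entries, stack->entries, stack->length * sizeof(int));
        free(stack->entries);
        stack->entries = new_entries;
        stack->capacity = new_capacity;
    }
    stack->entries[stack->length++] = value;
}
```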
<% unless @grammar.tree %>
<% unless @grammar.ast %>
/**
* Execute user code associated with a parser rule.
*
@@ -948,8 +907,6 @@ static size_t check_reduce(size_t state_id, <%= @grammar.prefix %>token_t token)
*
* @param context
* Lexer/parser context structure.
* @param start_state_id
* ID of the state in which to start.
*
* @retval P_SUCCESS
* The parser successfully matched the input text. The parse result value
@@ -962,26 +919,25 @@ static size_t check_reduce(size_t state_id, <%= @grammar.prefix %>token_t token)
* @retval P_UNEXPECTED_INPUT
* Input text does not match any lexer pattern.
*/
static size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t start_state_id)
size_t <%= @grammar.prefix %>parse(<%= @grammar.prefix %>context_t * context)
{
<%= @grammar.prefix %>token_info_t token_info;
<%= @grammar.prefix %>token_t token = INVALID_TOKEN_ID;
state_values_stack_t statevalues;
size_t reduced_rule_set = INVALID_ID;
<% if @grammar.tree %>
<% if @grammar.ast %>
void * reduced_parser_node;
<% else %>
<%= @grammar.prefix %>value_t reduced_parser_value;
<% end %>
state_values_stack_init(&statevalues);
state_values_stack_push(&statevalues);
state_values_stack_index(&statevalues, -1)->state_id = start_state_id;
size_t result;
for (;;)
{
if (token == INVALID_TOKEN_ID)
{
size_t lexer_result = <%= lex_fn %>(context, &token_info);
size_t lexer_result = <%= @grammar.prefix %>lex(context, &token_info);
if (lexer_result != P_SUCCESS)
{
result = lexer_result;
@@ -1000,8 +956,8 @@ static size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t start
if ((shift_state != INVALID_ID) && (token == TOKEN___EOF))
{
/* Successful parse. */
<% if @grammar.tree %>
context->parse_result = state_values_stack_index(&statevalues, -1)->tree_node;
<% if @grammar.ast %>
context->parse_result = (<%= @grammar.ast_prefix %><%= @grammar.start_rule %><%= @grammar.ast_suffix %> *)state_values_stack_index(&statevalues, -1)->ast_node;
<% else %>
context->parse_result = state_values_stack_index(&statevalues, -1)->pvalue;
<% end %>
@@ -1017,20 +973,13 @@ static size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t start
if (reduced_rule_set == INVALID_ID)
{
/* We shifted a token, mark it consumed. */
<% if @grammar.tree %>
<% if @cpp %>
<%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %> * token_tree_node = new <%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %>();
<% else %>
<%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %> * token_tree_node = (<%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %> *)malloc(sizeof(<%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %>));
<% end %>
token_tree_node->position = token_info.position;
token_tree_node->end_position = token_info.end_position;
token_tree_node->n_fields = 0u;
token_tree_node->is_token = 1u;
token_tree_node->token = token;
token_tree_node->pvalue = token_info.pvalue;
<%= expand_code(@grammar.on_token_node, false, nil, nil) %>
state_values_stack_index(&statevalues, -1)->tree_node = token_tree_node;
<% if @grammar.ast %>
<%= @grammar.ast_prefix %>Token<%= @grammar.ast_suffix %> * token_ast_node = malloc(sizeof(<%= @grammar.ast_prefix %>Token<%= @grammar.ast_suffix %>));
token_ast_node->position = token_info.position;
token_ast_node->end_position = token_info.end_position;
token_ast_node->token = token;
token_ast_node->pvalue = token_info.pvalue;
state_values_stack_index(&statevalues, -1)->ast_node = token_ast_node;
<% else %>
state_values_stack_index(&statevalues, -1)->pvalue = token_info.pvalue;
<% end %>
@@ -1039,12 +988,11 @@ static size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t start
else
{
/* We shifted a RuleSet. */
<% if @grammar.tree %>
state_values_stack_index(&statevalues, -1)->tree_node = reduced_parser_node;
<% if @grammar.ast %>
state_values_stack_index(&statevalues, -1)->ast_node = reduced_parser_node;
<% else %>
state_values_stack_index(&statevalues, -1)->pvalue = reduced_parser_value;
<%= @grammar.prefix %>value_t new_parse_result;
memset(&new_parse_result, 0, sizeof(new_parse_result));
<%= @grammar.prefix %>value_t new_parse_result = {0};
reduced_parser_value = new_parse_result;
<% end %>
reduced_rule_set = INVALID_ID;
@@ -1056,38 +1004,39 @@ static size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t start
if (reduce_index != INVALID_ID)
{
/* We have something to reduce. */
<% if @grammar.tree %>
<% if @grammar.ast %>
if (parser_reduce_table[reduce_index].propagate_optional_target)
{
reduced_parser_node = state_values_stack_index(&statevalues, -1)->tree_node;
reduced_parser_node = state_values_stack_index(&statevalues, -1)->ast_node;
}
else if (parser_reduce_table[reduce_index].n_states > 0)
{
size_t n_fields = parser_reduce_table[reduce_index].rule_set_node_field_array_size;
size_t bytes = sizeof(TreeNode) + n_fields * sizeof(void *);
TreeNode * node = (TreeNode *)malloc(bytes);
memset(node, 0, bytes);
ASTNode * node = (ASTNode *)malloc(sizeof(ASTNode) + n_fields * sizeof(void *));
node->position = INVALID_POSITION;
node->end_position = INVALID_POSITION;
node->n_fields = n_fields;
for (size_t i = 0; i < n_fields; i++)
{
node->fields[i] = NULL;
}
if (parser_reduce_table[reduce_index].rule_set_node_field_index_map == NULL)
{
for (size_t i = 0; i < parser_reduce_table[reduce_index].n_states; i++)
{
node->fields[i] = (TreeNode *)state_values_stack_index(&statevalues, -(int)parser_reduce_table[reduce_index].n_states + (int)i)->tree_node;
node->fields[i] = state_values_stack_index(&statevalues, -(int)parser_reduce_table[reduce_index].n_states + (int)i)->ast_node;
}
}
else
{
for (size_t i = 0; i < parser_reduce_table[reduce_index].n_states; i++)
{
node->fields[parser_reduce_table[reduce_index].rule_set_node_field_index_map[i]] = (TreeNode *)state_values_stack_index(&statevalues, -(int)parser_reduce_table[reduce_index].n_states + (int)i)->tree_node;
node->fields[parser_reduce_table[reduce_index].rule_set_node_field_index_map[i]] = state_values_stack_index(&statevalues, -(int)parser_reduce_table[reduce_index].n_states + (int)i)->ast_node;
}
}
bool position_found = false;
for (size_t i = 0; i < n_fields; i++)
{
TreeNode * child = node->fields[i];
ASTNode * child = (ASTNode *)node->fields[i];
if ((child != NULL) && <%= @grammar.prefix %>position_valid(child->position))
{
if (!position_found)
@@ -1105,11 +1054,9 @@ static size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t start
reduced_parser_node = NULL;
}
<% else %>
<%= @grammar.prefix %>value_t reduced_parser_value2;
memset(&reduced_parser_value2, 0, sizeof(reduced_parser_value2));
<%= @grammar.prefix %>value_t reduced_parser_value2 = {0};
if (parser_user_code(&reduced_parser_value2, parser_reduce_table[reduce_index].rule, &statevalues, parser_reduce_table[reduce_index].n_states, context) == P_USER_TERMINATED)
{
state_values_stack_free(&statevalues);
return P_USER_TERMINATED;
}
reduced_parser_value = reduced_parser_value2;
@@ -1133,19 +1080,6 @@ static size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t start
return result;
}
size_t <%= @grammar.prefix %>parse(<%= @grammar.prefix %>context_t * context)
{
return parse_from(context, 0u);
}
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
size_t <%= @grammar.prefix %>parse_<%= start_rule %>(<%= @grammar.prefix %>context_t * context)
{
return parse_from(context, <%= i %>u);
}
<% end %>
/**
* Get the parse result value.
*
@@ -1154,29 +1088,18 @@ size_t <%= @grammar.prefix %>parse_<%= start_rule %>(<%= @grammar.prefix %>conte
*
* @return Parse result value.
*/
<% if @grammar.tree %>
<%= @grammar.tree_prefix %><%= @grammar.start_rules[0] %><%= @grammar.tree_suffix %> * <%= @grammar.prefix %>result(<%= @grammar.prefix %>context_t * context)
{
return (<%= @grammar.tree_prefix %><%= @grammar.start_rules[0] %><%= @grammar.tree_suffix %> *) context->parse_result;
}
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
<%= @grammar.tree_prefix %><%= start_rule %><%= @grammar.tree_suffix %> * <%= @grammar.prefix %>result_<%= start_rule %>(<%= @grammar.prefix %>context_t * context)
{
return (<%= @grammar.tree_prefix %><%= start_rule %><%= @grammar.tree_suffix %> *) context->parse_result;
}
<% end %>
<% if @grammar.ast %>
<%= @grammar.ast_prefix %><%= @grammar.start_rule %><%= @grammar.ast_suffix %> * <%= @grammar.prefix %>result(<%= @grammar.prefix %>context_t * context)
<% else %>
<%= start_rule_type[1] %> <%= @grammar.prefix %>result(<%= @grammar.prefix %>context_t * context)
<% end %>
{
<% if @grammar.ast %>
return context->parse_result;
<% else %>
return context->parse_result.v_<%= start_rule_type[0] %>;
}
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
<%= start_rule_type(i)[1] %> <%= @grammar.prefix %>result_<%= start_rule %>(<%= @grammar.prefix %>context_t * context)
{
return context->parse_result.v_<%= start_rule_type(i)[0] %>;
}
<% end %>
<% end %>
}
/**
* Get the current text input position.
@@ -1213,48 +1136,3 @@ size_t <%= @grammar.prefix %>user_terminate_code(<%= @grammar.prefix %>context_t
{
return context->token;
}
<% if @grammar.tree %>
static void tree_delete(TreeNode * node)
{
if (node->is_token)
{
<%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %> * token_tree_node = (<%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %> *)node;
<%= expand_code(@grammar.free_token_node, false, nil, nil) %>
<% if @cpp %>
delete token_tree_node;
<% else %>
free(token_tree_node);
<% end %>
}
else if (node->n_fields > 0u)
{
for (size_t i = 0u; i < node->n_fields; i++)
{
if (node->fields[i] != NULL)
{
tree_delete(node->fields[i]);
}
}
free(node);
}
}
/**
* Free all tree node memory.
*/
void <%= @grammar.prefix %>tree_delete(<%= @grammar.tree_prefix %><%= @grammar.start_rules[0] %><%= @grammar.tree_suffix %> * tree)
{
tree_delete((TreeNode *)tree);
}
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
/**
* Free all tree node memory.
*/
void <%= @grammar.prefix %>tree_delete_<%= start_rule %>(<%= @grammar.tree_prefix %><%= start_rule %><%= @grammar.tree_suffix %> * tree)
{
tree_delete((TreeNode *)tree);
}
<% end %>
<% end %>
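`tree_delete()` above is a post-order traversal: children are freed before their parent, with token nodes as the leaves. A simplified self-contained version of that recursion (the names and the `freed_count` instrumentation are illustrative, not part of the generated API, and user-code cleanup hooks are omitted):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified node mirroring TreeNode: token nodes are leaves. */
typedef struct Node_s
{
    uint16_t n_fields;
    uint8_t is_token;
    struct Node_s * fields[];
} Node;

static size_t freed_count = 0u;  /* instrumentation for this sketch only */

Node * node_new(uint16_t n_fields, uint8_t is_token)
{
    Node * node = (Node *)calloc(1u, sizeof(Node) + n_fields * sizeof(Node *));
    node->n_fields = n_fields;
    node->is_token = is_token;
    return node;
}

/* Post-order free: recurse into children first, then free the node. */
void node_delete(Node * node)
{
    if (!node->is_token)
    {
        for (size_t i = 0u; i < node->n_fields; i++)
        {
            if (node->fields[i] != NULL)
            {
                node_delete(node->fields[i]);
            }
        }
    }
    free(node);
    freed_count++;
}
```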

View File

@@ -8,8 +8,7 @@
module <%= @grammar.modulename %>;
<% end %>
import core.memory;
import core.stdc.stdlib : malloc, free;
import core.stdc.stdlib : malloc;
/**************************************************************************
* User code blocks
@@ -66,16 +65,16 @@ public struct <%= @grammar.prefix %>position_t
uint col;
/** Invalid position value. */
enum INVALID = <%= @grammar.prefix %>position_t(0u, 0u);
enum INVALID = <%= @grammar.prefix %>position_t(0xFFFF_FFFF, 0xFFFF_FFFF);
/** Return whether the position is valid. */
public @property bool valid()
{
return row != 0u;
return row != 0xFFFF_FFFFu;
}
}
<% if @grammar.tree %>
<% if @grammar.ast %>
/** Parser values type. */
public alias <%= @grammar.prefix %>value_t = <%= @grammar.ptype %>;
<% else %>
@@ -88,40 +87,33 @@ public union <%= @grammar.prefix %>value_t
}
<% end %>
<% if @grammar.tree %>
/** Common tree node structure. */
private struct TreeNode
<% if @grammar.ast %>
/** Common AST node structure. */
private struct ASTNode
{
<%= @grammar.prefix %>position_t position;
<%= @grammar.prefix %>position_t end_position;
ushort n_fields;
bool is_token;
void *[0] fields;
}
/** Tree node types. @{ */
public struct <%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %>
/** AST node types. @{ */
public struct <%= @grammar.ast_prefix %>Token<%= @grammar.ast_suffix %>
{
/* TreeNode fields must be present in the same order here. */
/* ASTNode fields must be present in the same order here. */
<%= @grammar.prefix %>position_t position;
<%= @grammar.prefix %>position_t end_position;
ushort n_fields;
bool is_token;
<%= @grammar.prefix %>token_t token;
<%= @grammar.prefix %>value_t pvalue;
<%= @grammar.token_user_fields %>
}
<% @parser.rule_sets.each do |name, rule_set| %>
<% next if name.start_with?("$") %>
<% next if rule_set.optional? %>
public struct <%= @grammar.tree_prefix %><%= name %><%= @grammar.tree_suffix %>
public struct <%= @grammar.ast_prefix %><%= name %><%= @grammar.ast_suffix %>
{
<%= @grammar.prefix %>position_t position;
<%= @grammar.prefix %>position_t end_position;
ushort n_fields;
bool is_token;
<% rule_set.tree_fields.each do |fields| %>
<% rule_set.ast_fields.each do |fields| %>
union
{
<% fields.each do |field_name, type| %>
@@ -179,8 +171,8 @@ public struct <%= @grammar.prefix %>context_t
/* Parser context data. */
/** Parse result value. */
<% if @grammar.tree %>
void * parse_result;
<% if @grammar.ast %>
<%= @grammar.ast_prefix %><%= @grammar.start_rule %><%= @grammar.ast_suffix %> * parse_result;
<% else %>
<%= @grammar.prefix %>value_t parse_result;
<% end %>
@@ -190,8 +182,6 @@ public struct <%= @grammar.prefix %>context_t
/** User terminate code. */
size_t user_terminate_code;
<%= @grammar.context_user_fields %>
}
/**************************************************************************
@@ -231,39 +221,24 @@ private enum size_t INVALID_ID = cast(size_t)-1;
*************************************************************************/
/**
* Allocate and initialize lexer/parser context structure.
*
* Deinitialize and deallocate with <%= @grammar.prefix %>context_delete().
* Initialize lexer/parser context structure.
*
* @param[out] context
* Lexer/parser context structure.
* @param input
* Text input.
* @param input_length
* Text input length.
*
* @return Context structure for lexer/parser.
*/
<%= @grammar.prefix %>context_t * <%= @grammar.prefix %>context_new(string input)
public void <%= @grammar.prefix %>context_init(<%= @grammar.prefix %>context_t * context, string input)
{
/* New default-initialized context structure. */
<%= @grammar.prefix %>context_t * context = new <%= @grammar.prefix %>context_t;
<%= @grammar.prefix %>context_t newcontext;
/* Lexer initialization. */
context.input = input;
context.text_position.row = 1u;
context.text_position.col = 1u;
context.mode = <%= @lexer.mode_id("default") %>;
newcontext.input = input;
newcontext.mode = <%= @lexer.mode_id("default") %>;
return context;
}
/**
* Deinitialize and deallocate lexer/parser context structure.
*
* @param context
* Lexer/parser context structure allocated with <%= @grammar.prefix %>context_new().
*/
void <%= @grammar.prefix %>context_delete(<%= @grammar.prefix %>context_t * context)
{
/* Copy to the user's context structure. */
*context = newcontext;
}
/**************************************************************************
@@ -559,7 +534,7 @@ private size_t find_longest_match(<%= @grammar.prefix %>context_t * context,
if (code_point == '\n')
{
attempt_match.delta_position.row++;
attempt_match.delta_position.col = 1u;
attempt_match.delta_position.col = 0u;
}
else
{
@@ -819,7 +794,7 @@ private struct reduce_t
* reduce action.
*/
parser_state_id_t n_states;
<% if @grammar.tree %>
<% if @grammar.ast %>
/**
* Map of rule components to rule set child fields.
@@ -827,7 +802,7 @@ private struct reduce_t
immutable(ushort) * rule_set_node_field_index_map;
/**
* Number of rule set tree node fields.
* Number of rule set AST node fields.
*/
ushort rule_set_node_field_array_size;
@@ -869,9 +844,9 @@ private struct state_value_t
/** Parser value from this state. */
<%= @grammar.prefix %>value_t pvalue;
<% if @grammar.tree %>
/** Tree node. */
void * tree_node;
<% if @grammar.ast %>
/** AST node. */
void * ast_node;
<% end %>
this(size_t state_id)
@@ -887,7 +862,7 @@ private immutable shift_t[] parser_shift_table = [
<% end %>
];
<% if @grammar.tree %>
<% if @grammar.ast %>
<% @grammar.rules.each do |rule| %>
<% unless rule.flat_rule_set_node_field_index_map? %>
immutable ushort[<%= rule.rule_set_node_field_index_map.size %>] r_<%= rule.name.gsub("$", "_") %><%= rule.id %>_node_field_index_map = [<%= rule.rule_set_node_field_index_map.map {|v| v.to_s}.join(", ") %>];
@ -898,22 +873,17 @@ immutable ushort[<%= rule.rule_set_node_field_index_map.size %>] r_<%= rule.name
/** Parser reduce table. */
private immutable reduce_t[] parser_reduce_table = [
<% @parser.reduce_table.each do |reduce| %>
reduce_t(
<%= reduce[:token_id] %>u, /* Token: <%= reduce[:token] ? reduce[:token].name : "(any)" %> */
<%= reduce[:rule_id] %>u, /* Rule ID */
<%= reduce[:rule_set_id] %>u, /* Rule set ID (<%= reduce[:rule].rule_set.name %>) */
<% if @grammar.tree %>
<%= reduce[:n_states] %>u, /* Number of states */
reduce_t(<%= reduce[:token_id] %>u, <%= reduce[:rule_id] %>u, <%= reduce[:rule_set_id] %>u, <%= reduce[:n_states] %>u
<% if @grammar.ast %>
<% if reduce[:rule].flat_rule_set_node_field_index_map? %>
null, /* No rule set node field index map (flat map) */
, null
<% else %>
&r_<%= reduce[:rule].name.gsub("$", "_") %><%= reduce[:rule].id %>_node_field_index_map[0], /* Rule set node field index map */
, &r_<%= reduce[:rule].name.gsub("$", "_") %><%= reduce[:rule].id %>_node_field_index_map[0]
<% end %>
<%= reduce[:rule].rule_set.tree_fields.size %>, /* Number of tree fields */
<%= reduce[:propagate_optional_target] %>), /* Propagate optional target? */
<% else %>
<%= reduce[:n_states] %>u), /* Number of states */
, <%= reduce[:rule].rule_set.ast_fields.size %>
, <%= reduce[:propagate_optional_target] %>
<% end %>
),
<% end %>
];
@ -924,7 +894,7 @@ private immutable parser_state_t[] parser_state_table = [
<% end %>
];
<% unless @grammar.tree %>
<% unless @grammar.ast %>
/**
* Execute user code associated with a parser rule.
*
@ -1007,8 +977,6 @@ private size_t check_reduce(size_t state_id, <%= @grammar.prefix %>token_t token
*
* @param context
* Lexer/parser context structure.
 * @param start_state_id
* ID of the state in which to start.
*
* @retval P_SUCCESS
* The parser successfully matched the input text. The parse result value
@ -1021,14 +989,13 @@ private size_t check_reduce(size_t state_id, <%= @grammar.prefix %>token_t token
 * @retval P_UNEXPECTED_INPUT
* Input text does not match any lexer pattern.
*/
private size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t start_state_id)
public size_t <%= @grammar.prefix %>parse(<%= @grammar.prefix %>context_t * context)
{
<%= @grammar.prefix %>token_info_t token_info;
<%= @grammar.prefix %>token_t token = INVALID_TOKEN_ID;
state_value_t[] statevalues = new state_value_t[](1);
statevalues[0].state_id = start_state_id;
size_t reduced_rule_set = INVALID_ID;
<% if @grammar.tree %>
<% if @grammar.ast %>
void * reduced_parser_node;
<% else %>
<%= @grammar.prefix %>value_t reduced_parser_value;
@ -1037,7 +1004,7 @@ private size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t star
{
if (token == INVALID_TOKEN_ID)
{
size_t lexer_result = <%= lex_fn %>(context, &token_info);
size_t lexer_result = <%= @grammar.prefix %>lex(context, &token_info);
if (lexer_result != P_SUCCESS)
{
return lexer_result;
@ -1055,8 +1022,8 @@ private size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t star
if ((shift_state != INVALID_ID) && (token == TOKEN___EOF))
{
/* Successful parse. */
<% if @grammar.tree %>
context.parse_result = statevalues[$-1].tree_node;
<% if @grammar.ast %>
context.parse_result = cast(<%= @grammar.ast_prefix %><%= @grammar.start_rule %><%= @grammar.ast_suffix %> *)statevalues[$-1].ast_node;
<% else %>
context.parse_result = statevalues[$-1].pvalue;
<% end %>
@ -1070,10 +1037,9 @@ private size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t star
if (reduced_rule_set == INVALID_ID)
{
/* We shifted a token, mark it consumed. */
<% if @grammar.tree %>
<%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %> * token_tree_node = new <%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %>(token_info.position, token_info.end_position, 0u, true, token, token_info.pvalue);
<%= expand_code(@grammar.on_token_node, false, nil, nil) %>
statevalues[$-1].tree_node = token_tree_node;
<% if @grammar.ast %>
<%= @grammar.ast_prefix %>Token<%= @grammar.ast_suffix %> * token_ast_node = new <%= @grammar.ast_prefix %>Token<%= @grammar.ast_suffix %>(token_info.position, token_info.end_position, token, token_info.pvalue);
statevalues[$-1].ast_node = token_ast_node;
<% else %>
statevalues[$-1].pvalue = token_info.pvalue;
<% end %>
@ -1082,8 +1048,8 @@ private size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t star
else
{
/* We shifted a RuleSet. */
<% if @grammar.tree %>
statevalues[$-1].tree_node = reduced_parser_node;
<% if @grammar.ast %>
statevalues[$-1].ast_node = reduced_parser_node;
<% else %>
statevalues[$-1].pvalue = reduced_parser_value;
<%= @grammar.prefix %>value_t new_parse_result;
@ -1098,21 +1064,17 @@ private size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t star
if (reduce_index != INVALID_ID)
{
/* We have something to reduce. */
<% if @grammar.tree %>
<% if @grammar.ast %>
if (parser_reduce_table[reduce_index].propagate_optional_target)
{
reduced_parser_node = statevalues[$ - 1].tree_node;
reduced_parser_node = statevalues[$ - 1].ast_node;
}
else if (parser_reduce_table[reduce_index].n_states > 0)
{
size_t n_fields = parser_reduce_table[reduce_index].rule_set_node_field_array_size;
size_t node_size = TreeNode.sizeof + n_fields * (void *).sizeof;
TreeNode * node = cast(TreeNode *)malloc(node_size);
GC.addRange(node, node_size);
ASTNode * node = cast(ASTNode *)malloc(ASTNode.sizeof + n_fields * (void *).sizeof);
node.position = <%= @grammar.prefix %>position_t.INVALID;
node.end_position = <%= @grammar.prefix %>position_t.INVALID;
node.n_fields = cast(ushort)n_fields;
node.is_token = false;
foreach (i; 0..n_fields)
{
node.fields[i] = null;
@ -1121,20 +1083,20 @@ private size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t star
{
foreach (i; 0..parser_reduce_table[reduce_index].n_states)
{
node.fields[i] = statevalues[$ - parser_reduce_table[reduce_index].n_states + i].tree_node;
node.fields[i] = statevalues[$ - parser_reduce_table[reduce_index].n_states + i].ast_node;
}
}
else
{
foreach (i; 0..parser_reduce_table[reduce_index].n_states)
{
node.fields[parser_reduce_table[reduce_index].rule_set_node_field_index_map[i]] = statevalues[$ - parser_reduce_table[reduce_index].n_states + i].tree_node;
node.fields[parser_reduce_table[reduce_index].rule_set_node_field_index_map[i]] = statevalues[$ - parser_reduce_table[reduce_index].n_states + i].ast_node;
}
}
bool position_found = false;
foreach (i; 0..n_fields)
{
TreeNode * child = cast(TreeNode *)node.fields[i];
ASTNode * child = cast(ASTNode *)node.fields[i];
if (child && child.position.valid)
{
if (!position_found)
@ -1175,19 +1137,6 @@ private size_t parse_from(<%= @grammar.prefix %>context_t * context, size_t star
}
}
public size_t <%= @grammar.prefix %>parse(<%= @grammar.prefix %>context_t * context)
{
return parse_from(context, 0u);
}
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
public size_t <%= @grammar.prefix %>parse_<%= start_rule %>(<%= @grammar.prefix %>context_t * context)
{
return parse_from(context, <%= i %>u);
}
<% end %>
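Each generated entry point above simply seeds the LR state stack with a different initial state and then runs the shared driver; the dispatch pattern can be modeled in a few lines of Ruby (the start-rule/state map and method names here are hypothetical, for illustration only):

```ruby
# Hypothetical start-rule -> initial parser state map, as emitted per grammar.
START_STATE_IDS = { "Start" => 0, "Expr" => 1 }

# Shared parse driver: only the seed state differs between entry points.
def parse_from(start_state_id)
  state_stack = [start_state_id]
  # ... the common LR shift/reduce loop would run here ...
  state_stack
end

# Per-start-rule wrapper, analogous to the generated p_parse_<rule> functions.
def parse_Expr
  parse_from(START_STATE_IDS["Expr"])
end
```

Calling `parse_Expr` seeds the stack with state 1, while the default entry point seeds it with state 0; everything after the seed is identical.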
/**
* Get the parse result value.
*
@ -1196,58 +1145,18 @@ public size_t <%= @grammar.prefix %>parse_<%= start_rule %>(<%= @grammar.prefix
*
* @return Parse result value.
*/
<% if @grammar.tree %>
public <%= @grammar.tree_prefix %><%= @grammar.start_rules[0] %><%= @grammar.tree_suffix %> * <%= @grammar.prefix %>result(<%= @grammar.prefix %>context_t * context)
{
return cast(<%= @grammar.tree_prefix %><%= @grammar.start_rules[0] %><%= @grammar.tree_suffix %> *)context.parse_result;
}
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
public <%= @grammar.tree_prefix %><%= start_rule %><%= @grammar.tree_suffix %> * <%= @grammar.prefix %>result_<%= start_rule %>(<%= @grammar.prefix %>context_t * context)
{
return cast(<%= @grammar.tree_prefix %><%= start_rule %><%= @grammar.tree_suffix %> *)context.parse_result;
}
<% end %>
<% if @grammar.ast %>
public <%= @grammar.ast_prefix %><%= @grammar.start_rule %><%= @grammar.ast_suffix %> * <%= @grammar.prefix %>result(<%= @grammar.prefix %>context_t * context)
<% else %>
public <%= start_rule_type[1] %> <%= @grammar.prefix %>result(<%= @grammar.prefix %>context_t * context)
<% end %>
{
<% if @grammar.ast %>
return context.parse_result;
<% else %>
return context.parse_result.v_<%= start_rule_type[0] %>;
}
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
public <%= start_rule_type(i)[1] %> <%= @grammar.prefix %>result_<%= start_rule %>(<%= @grammar.prefix %>context_t * context)
{
return context.parse_result.v_<%= start_rule_type(i)[0] %>;
}
<% end %>
<% end %>
<% if @grammar.tree %>
private void tree_delete(TreeNode * node)
{
if (!node.is_token)
{
for (size_t i = 0u; i < node.n_fields; i++)
{
if (node.fields[i])
{
tree_delete(cast(TreeNode *)node.fields[i]);
}
}
GC.removeRange(node);
free(node);
}
}
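`tree_delete` above is a post-order traversal that releases only non-token nodes (token nodes carry no separately allocated storage to free here); the same shape in a small Ruby model (the `Node` struct is hypothetical, standing in for the generated `TreeNode`):

```ruby
# Hypothetical model of a tree node; `freed` stands in for the free() call.
Node = Struct.new(:is_token, :fields, :freed)

# Post-order delete, mirroring tree_delete: recurse into children first,
# then release the node itself; token leaves and null fields are skipped.
def tree_delete(node)
  return if node.nil? || node.is_token
  node.fields.each { |child| tree_delete(child) }
  node.freed = true
end
```

Deleting a root marks the root and every non-token descendant, leaving token leaves untouched.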
void <%= @grammar.prefix %>tree_delete(<%= @grammar.tree_prefix %><%= @grammar.start_rules[0] %><%= @grammar.tree_suffix %> * tree)
{
tree_delete(cast(TreeNode *)tree);
}
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
void <%= @grammar.prefix %>tree_delete_<%= start_rule %>(<%= @grammar.tree_prefix %><%= start_rule %><%= @grammar.tree_suffix %> * tree)
{
tree_delete(cast(TreeNode *)tree);
}
<% end %>
<% end %>
/**
* Get the current text input position.

View File

@ -53,12 +53,12 @@ typedef struct
} <%= @grammar.prefix %>position_t;
/** Return whether the position is valid. */
#define <%= @grammar.prefix %>position_valid(p) ((p).row != 0u)
#define <%= @grammar.prefix %>position_valid(p) ((p).row != 0xFFFFFFFFu)
/** User header code blocks. */
<%= @grammar.code_blocks.fetch("header", "") %>
<% if @grammar.tree %>
<% if @grammar.ast %>
/** Parser values type. */
typedef <%= @grammar.ptype %> <%= @grammar.prefix %>value_t;
<% else %>
@ -71,19 +71,16 @@ typedef union
} <%= @grammar.prefix %>value_t;
<% end %>
<% if @grammar.tree %>
/** Tree node types. @{ */
typedef struct <%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %>
<% if @grammar.ast %>
/** AST node types. @{ */
typedef struct <%= @grammar.ast_prefix %>Token<%= @grammar.ast_suffix %>
{
<% # TreeNode fields must be present in the same order here. # %>
/* ASTNode fields must be present in the same order here. */
<%= @grammar.prefix %>position_t position;
<%= @grammar.prefix %>position_t end_position;
uint16_t n_fields;
uint8_t is_token;
<%= @grammar.token_user_fields %>
<%= @grammar.prefix %>token_t token;
<%= @grammar.prefix %>value_t pvalue;
} <%= @grammar.tree_prefix %>Token<%= @grammar.tree_suffix %>;
} <%= @grammar.ast_prefix %>Token<%= @grammar.ast_suffix %>;
<% @parser.rule_sets.each do |name, rule_set| %>
<% next if name.start_with?("$") %>
@ -94,14 +91,11 @@ struct <%= name %>;
<% @parser.rule_sets.each do |name, rule_set| %>
<% next if name.start_with?("$") %>
<% next if rule_set.optional? %>
typedef struct <%= @grammar.tree_prefix %><%= name %><%= @grammar.tree_suffix %>
typedef struct <%= @grammar.ast_prefix %><%= name %><%= @grammar.ast_suffix %>
{
<% # TreeNode fields must be present in the same order here. # %>
<%= @grammar.prefix %>position_t position;
<%= @grammar.prefix %>position_t end_position;
uint16_t n_fields;
uint8_t is_token;
<% rule_set.tree_fields.each do |fields| %>
<% rule_set.ast_fields.each do |fields| %>
union
{
<% fields.each do |field_name, type| %>
@ -109,7 +103,7 @@ typedef struct <%= @grammar.tree_prefix %><%= name %><%= @grammar.tree_suffix %>
<% end %>
};
<% end %>
} <%= @grammar.tree_prefix %><%= name %><%= @grammar.tree_suffix %>;
} <%= @grammar.ast_prefix %><%= name %><%= @grammar.ast_suffix %>;
<% end %>
/** @} */
@ -162,8 +156,8 @@ typedef struct
/* Parser context data. */
/** Parse result value. */
<% if @grammar.tree %>
void * parse_result;
<% if @grammar.ast %>
<%= @grammar.ast_prefix %><%= @grammar.start_rule %><%= @grammar.ast_suffix %> * parse_result;
<% else %>
<%= @grammar.prefix %>value_t parse_result;
<% end %>
@ -173,8 +167,6 @@ typedef struct
/** User terminate code. */
size_t user_terminate_code;
<%= @grammar.context_user_fields %>
} <%= @grammar.prefix %>context_t;
/**************************************************************************
@ -184,9 +176,7 @@ typedef struct
/** Token names. */
extern const char * <%= @grammar.prefix %>token_names[];
<%= @grammar.prefix %>context_t * <%= @grammar.prefix %>context_new(uint8_t const * input, size_t input_length);
void <%= @grammar.prefix %>context_delete(<%= @grammar.prefix %>context_t * context);
void <%= @grammar.prefix %>context_init(<%= @grammar.prefix %>context_t * context, uint8_t const * input, size_t input_length);
size_t <%= @grammar.prefix %>decode_code_point(uint8_t const * input, size_t input_length,
<%= @grammar.prefix %>code_point_t * out_code_point, uint8_t * out_code_point_length);
@ -194,27 +184,11 @@ size_t <%= @grammar.prefix %>decode_code_point(uint8_t const * input, size_t inp
size_t <%= @grammar.prefix %>lex(<%= @grammar.prefix %>context_t * context, <%= @grammar.prefix %>token_info_t * out_token_info);
size_t <%= @grammar.prefix %>parse(<%= @grammar.prefix %>context_t * context);
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
size_t <%= @grammar.prefix %>parse_<%= start_rule %>(<%= @grammar.prefix %>context_t * context);
<% end %>
<% if @grammar.tree %>
<%= @grammar.tree_prefix %><%= @grammar.start_rules[0] %><%= @grammar.tree_suffix %> * <%= @grammar.prefix %>result(<%= @grammar.prefix %>context_t * context);
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
<%= @grammar.tree_prefix %><%= start_rule %><%= @grammar.tree_suffix %> * <%= @grammar.prefix %>result_<%= start_rule %>(<%= @grammar.prefix %>context_t * context);
<% end %>
<% if @grammar.ast %>
<%= @grammar.ast_prefix %><%= @grammar.start_rule %><%= @grammar.ast_suffix %> * <%= @grammar.prefix %>result(<%= @grammar.prefix %>context_t * context);
<% else %>
<%= start_rule_type[1] %> <%= @grammar.prefix %>result(<%= @grammar.prefix %>context_t * context);
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
<%= start_rule_type(i)[1] %> <%= @grammar.prefix %>result_<%= start_rule %>(<%= @grammar.prefix %>context_t * context);
<% end %>
<% end %>
<% if @grammar.tree %>
void <%= @grammar.prefix %>tree_delete(<%= @grammar.tree_prefix %><%= @grammar.start_rules[0] %><%= @grammar.tree_suffix %> * tree);
<% @grammar.start_rules.each_with_index do |start_rule, i| %>
void <%= @grammar.prefix %>tree_delete_<%= start_rule %>(<%= @grammar.tree_prefix %><%= start_rule %><%= @grammar.tree_suffix %> * tree);
<% end %>
<% end %>
<%= @grammar.prefix %>position_t <%= @grammar.prefix %>position(<%= @grammar.prefix %>context_t * context);

File diff suppressed because it is too large

View File

@ -17,17 +17,12 @@ syn region propaneTarget matchgroup=propaneDelimiter start="<<" end=">>$" contai
syn match propaneComment "#.*"
syn match propaneOperator "->"
syn match propaneFieldAlias ":[a-zA-Z0-9_]\+" contains=propaneFieldOperator
syn match propaneFieldOperator ":" contained
syn match propaneOperator "?"
syn keyword propaneKeyword drop free_token_node free_token_user_fields module prefix ptype start token token_user_fields tokenid tree tree_prefix tree_suffix
syn keyword propaneKeyword ast ast_prefix ast_suffix drop module prefix ptype start token tokenid
syn region propaneRegex start="/" end="/" skip="\v\\\\|\\/"
syn region propaneRegex start="/" end="/" skip="\\/"
hi def link propaneComment Comment
hi def link propaneKeyword Keyword
hi def link propaneRegex String
hi def link propaneOperator Operator
hi def link propaneFieldOperator Operator
hi def link propaneDelimiter Delimiter
hi def link propaneFieldAlias Identifier

View File

@ -11,33 +11,24 @@ class Propane
@log = StringIO.new
end
@language =
if output_file.end_with?(".d")
"d"
elsif output_file.end_with?(".c")
"c"
elsif output_file =~ %r{\.(cc|cpp|cxx)$}
@cpp = true
"c"
if output_file =~ /\.([a-z]+)$/
$1
else
raise Error.new("Could not determine target language from output file name (#{output_file})")
"d"
end
@options = options
process_grammar!
end
def generate
extensions = [nil]
extensions = [@language]
if @language == "c"
extensions += %w[h]
end
extensions.each do |extension|
template = Assets.get("parser.#{extension || @language}.erb")
if extension
output_file = @output_file.sub(%r{\.[a-z]+$}, ".#{extension}")
else
output_file = @output_file
end
template = Assets.get("parser.#{extension}.erb")
erb = ERB.new(template, trim_mode: "<>")
output_file = @output_file.sub(%r{\.[a-z]+$}, ".#{extension}")
result = erb.result(binding.clone)
File.open(output_file, "wb") do |fh|
fh.write(result)
@ -52,8 +43,8 @@ class Propane
# Assign default pattern mode to patterns without a mode assigned.
found_default = false
@grammar.patterns.each do |pattern|
if pattern.modes.empty?
pattern.modes << "default"
if pattern.mode.nil?
pattern.mode = "default"
found_default = true
end
pattern.ptypename ||= "default"
@ -76,15 +67,12 @@ class Propane
end
tokens_by_name[token.name] = token
end
# Create real start rule(s).
real_start_rules = @grammar.start_rules.map do |start_rule|
unless @grammar.rules.find {|rule| rule.name == start_rule}
raise Error.new("Start rule `#{start_rule}` not found")
# Check for user start rule.
unless @grammar.rules.find {|rule| rule.name == @grammar.start_rule}
raise Error.new("Start rule `#{@grammar.start_rule}` not found")
end
Rule.new("$#{start_rule}", [start_rule, "$EOF"], nil, nil, nil)
end
# Add real start rules before user-given rules.
@grammar.rules = real_start_rules + @grammar.rules
# Add "real" start rule.
@grammar.rules.unshift(Rule.new("$Start", [@grammar.start_rule, "$EOF"], nil, nil, nil))
# Generate and add rules for optional components.
generate_optional_component_rules!(tokens_by_name)
# Build rule sets.
@ -270,24 +258,6 @@ class Propane
"context.user_terminate_code = (#{user_terminate_code}); return #{retval};"
end
end
code = code.gsub(/\$\{context\.(\w+)\}/) do |match|
fieldname = $1
case @language
when "c"
"context->#{fieldname}"
when "d"
"context.#{fieldname}"
end
end
code = code.gsub(/\$\{token\.(\w+)\}/) do |match|
fieldname = $1
case @language
when "c"
"token_tree_node->#{fieldname}"
when "d"
"token_tree_node.#{fieldname}"
end
end
if parser
code = code.gsub(/\$\$/) do |match|
case @language
@ -321,7 +291,7 @@ class Propane
end
else
code = code.gsub(/\$\$/) do |match|
if @grammar.tree
if @grammar.ast
case @language
when "c"
"out_token_info->pvalue"
@ -354,21 +324,13 @@ class Propane
code
end
# Get the lex function to use.
#
# @return [String]
# Lex function to use.
def lex_fn
@grammar.custom_lex_fn || "#{@grammar.prefix}lex"
end
# Get the parser value type for the start rule.
#
# @return [Array<String>]
# Start rule parser value type name and type string.
def start_rule_type(start_rule_index = 0)
def start_rule_type
start_rule = @grammar.rules.find do |rule|
rule.name == @grammar.start_rules[start_rule_index]
rule.name == @grammar.start_rule
end
[start_rule.ptypename, @grammar.ptypes[start_rule.ptypename]]
end

View File

@ -5,44 +5,34 @@ class Propane
# Reserve identifiers beginning with a double-underscore for internal use.
IDENTIFIER_REGEX = /(?:[a-zA-Z]|_[a-zA-Z0-9])[a-zA-Z_0-9]*/
attr_reader :context_user_fields
attr_reader :custom_lex_fn
attr_reader :tree
attr_reader :tree_prefix
attr_reader :tree_suffix
attr_reader :free_token_node
attr_reader :ast
attr_reader :ast_prefix
attr_reader :ast_suffix
attr_reader :modulename
attr_reader :patterns
attr_accessor :rules
attr_reader :start_rules
attr_reader :rules
attr_reader :start_rule
attr_reader :tokens
attr_reader :code_blocks
attr_reader :ptypes
attr_reader :prefix
attr_reader :on_token_node
attr_reader :token_user_fields
def initialize(input)
@patterns = []
@start_rules = []
@start_rule = "Start"
@tokens = []
@rules = []
@code_blocks = {}
@line_number = 1
@next_line_number = @line_number
@modeline = nil
@mode = nil
@input = input.gsub("\r\n", "\n")
@ptypes = {"default" => "void *"}
@prefix = "p_"
@tree = false
@tree_prefix = ""
@tree_suffix = ""
@free_token_node = ""
@context_user_fields = nil
@on_token_node = ""
@token_user_fields = nil
@ast = false
@ast_prefix = ""
@ast_suffix = ""
parse_grammar!
@start_rules << "Start" if @start_rules.empty?
end
def ptype
@ -68,16 +58,11 @@ class Propane
def parse_statement!
if parse_white_space!
elsif parse_comment_line!
elsif @modeline.nil? && parse_mode_label!
elsif parse_context_user_fields_statement!
elsif parse_custom_lex_fn!
elsif parse_tree_statement!
elsif parse_tree_prefix_statement!
elsif parse_tree_suffix_statement!
elsif parse_free_token_node_statement!
elsif @mode.nil? && parse_mode_label!
elsif parse_ast_statement!
elsif parse_ast_prefix_statement!
elsif parse_ast_suffix_statement!
elsif parse_module_statement!
elsif parse_on_token_node_statement!
elsif parse_token_user_fields_statement!
elsif parse_ptype_statement!
elsif parse_pattern_statement!
elsif parse_start_statement!
@ -96,8 +81,8 @@ class Propane
end
def parse_mode_label!
if md = consume!(/(#{IDENTIFIER_REGEX}(?:\s*,\s*#{IDENTIFIER_REGEX})*)\s*:/)
@modeline = md[1]
if md = consume!(/(#{IDENTIFIER_REGEX})\s*:/)
@mode = md[1]
end
end
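The comma-separated modeline form shown above can be exercised in isolation; a standalone Ruby sketch (the identifier regex is copied from the grammar class, the helper name is hypothetical):

```ruby
require "set"

# Identifier pattern from the grammar class.
IDENTIFIER_REGEX = /(?:[a-zA-Z]|_[a-zA-Z0-9])[a-zA-Z_0-9]*/

# Hypothetical helper: extract the set of mode names from a modeline
# such as "string, comment:"; returns nil when no mode label is present.
def modes_from_modeline(line)
  if md = /\A(#{IDENTIFIER_REGEX}(?:\s*,\s*#{IDENTIFIER_REGEX})*)\s*:/.match(line)
    Set[*md[1].split(",").map(&:strip)]
  end
end
```

A line such as `"string, comment: ..."` yields a two-element set, while a statement without a leading mode label yields `nil`.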
@ -109,37 +94,21 @@ class Propane
consume!(/#.*\n/)
end
def parse_context_user_fields_statement!
if md = consume!(/context_user_fields\b\s*/)
unless code = parse_code_block!
raise Error.new("Line #{@line_number}: expected code block")
end
@context_user_fields ||= ""
@context_user_fields += code
def parse_ast_statement!
if consume!(/ast\s*;/)
@ast = true
end
end
def parse_custom_lex_fn!
if md = consume!(/custom_lex_fn\b\s*(\w+)\s*;/)
@custom_lex_fn = $1
def parse_ast_prefix_statement!
if md = consume!(/ast_prefix\s+(\w+)\s*;/)
@ast_prefix = md[1]
end
end
def parse_tree_statement!
if consume!(/tree\s*;/)
@tree = true
end
end
def parse_tree_prefix_statement!
if md = consume!(/tree_prefix\s+(\w+)\s*;/)
@tree_prefix = md[1]
end
end
def parse_tree_suffix_statement!
if md = consume!(/tree_suffix\s+(\w+)\s*;/)
@tree_suffix = md[1]
def parse_ast_suffix_statement!
if md = consume!(/ast_suffix\s+(\w+)\s*;/)
@ast_suffix = md[1]
end
end
@ -148,45 +117,17 @@ class Propane
md = consume!(/([\w.]+)\s*/, "expected module name")
@modulename = md[1]
consume!(/;/, "expected `;'")
@modeline = nil
@mode = nil
true
end
end
def parse_on_token_node_statement!
if md = consume!(/on_token_node\b\s*/)
unless code = parse_code_block!
raise Error.new("Line #{@line_number}: expected code block")
end
@on_token_node += code
end
end
def parse_token_user_fields_statement!
if md = consume!(/token_user_fields\b\s*/)
unless code = parse_code_block!
raise Error.new("Line #{@line_number}: expected code block")
end
@token_user_fields ||= ""
@token_user_fields += code
end
end
def parse_free_token_node_statement!
if md = consume!(/free_token_node\b\s*/)
unless code = parse_code_block!
raise Error.new("Line #{@line_number}: expected code block")
end
@free_token_node += code
end
end
def parse_ptype_statement!
if consume!(/ptype\s+/)
name = "default"
if md = consume!(/(#{IDENTIFIER_REGEX})\s*=\s*/)
if @tree
raise Error.new("Multiple ptypes are unsupported in tree mode")
if @ast
raise Error.new("Multiple ptypes are unsupported in AST mode")
end
name = md[1]
end
@ -200,8 +141,8 @@ class Propane
md = consume!(/(#{IDENTIFIER_REGEX})\s*/, "expected token name")
name = md[1]
if md = consume!(/\((#{IDENTIFIER_REGEX})\)\s*/)
if @tree
raise Error.new("Multiple ptypes are unsupported in tree mode")
if @ast
raise Error.new("Multiple ptypes are unsupported in AST mode")
end
ptypename = md[1]
end
@ -212,9 +153,9 @@ class Propane
end
token = Token.new(name, ptypename, @line_number)
@tokens << token
pattern = Pattern.new(pattern: pattern, token: token, line_number: @line_number, code: code, modes: get_modes_from_modeline, ptypename: ptypename)
pattern = Pattern.new(pattern: pattern, token: token, line_number: @line_number, code: code, mode: @mode, ptypename: ptypename)
@patterns << pattern
@modeline = nil
@mode = nil
true
end
end
@ -224,15 +165,15 @@ class Propane
md = consume!(/(#{IDENTIFIER_REGEX})\s*/, "expected token name")
name = md[1]
if md = consume!(/\((#{IDENTIFIER_REGEX})\)\s*/)
if @tree
raise Error.new("Multiple ptypes are unsupported in tree mode")
if @ast
raise Error.new("Multiple ptypes are unsupported in AST mode")
end
ptypename = md[1]
end
consume!(/;/, "expected `;'")
token = Token.new(name, ptypename, @line_number)
@tokens << token
@modeline = nil
@mode = nil
true
end
end
@ -244,11 +185,9 @@ class Propane
raise Error.new("Line #{@line_number}: expected pattern to follow `drop'")
end
consume!(/\s+/)
unless code = parse_code_block!
consume!(/;/, "expected `;' or code block")
end
@patterns << Pattern.new(pattern: pattern, line_number: @line_number, code: code, modes: get_modes_from_modeline)
@modeline = nil
consume!(/;/, "expected `;'")
@patterns << Pattern.new(pattern: pattern, line_number: @line_number, mode: @mode)
@mode = nil
true
end
end
@ -256,12 +195,12 @@ class Propane
def parse_rule_statement!
if md = consume!(/(#{IDENTIFIER_REGEX})\s*(?:\((#{IDENTIFIER_REGEX})\))?\s*->\s*/)
rule_name, ptypename = *md[1, 2]
if @tree && ptypename
raise Error.new("Multiple ptypes are unsupported in tree mode")
if @ast && ptypename
raise Error.new("Multiple ptypes are unsupported in AST mode")
end
md = consume!(/((?:#{IDENTIFIER_REGEX}\??(?::#{IDENTIFIER_REGEX})?\s*)*)\s*/, "expected rule component list")
md = consume!(/((?:#{IDENTIFIER_REGEX}(?::#{IDENTIFIER_REGEX})?\??\s*)*)\s*/, "expected rule component list")
components = md[1].strip.split(/\s+/)
if @tree
if @ast
consume!(/;/, "expected `;'")
else
unless code = parse_code_block!
@ -269,7 +208,7 @@ class Propane
end
end
@rules << Rule.new(rule_name, components, code, ptypename, @line_number)
@modeline = nil
@mode = nil
true
end
end
@ -278,26 +217,23 @@ class Propane
if pattern = parse_pattern!
consume!(/\s+/)
if md = consume!(/\((#{IDENTIFIER_REGEX})\)\s*/)
if @tree
raise Error.new("Multiple ptypes are unsupported in tree mode")
if @ast
raise Error.new("Multiple ptypes are unsupported in AST mode")
end
ptypename = md[1]
end
unless code = parse_code_block!
raise Error.new("Line #{@line_number}: expected code block to follow pattern")
end
@patterns << Pattern.new(pattern: pattern, line_number: @line_number, code: code, modes: get_modes_from_modeline, ptypename: ptypename)
@modeline = nil
@patterns << Pattern.new(pattern: pattern, line_number: @line_number, code: code, mode: @mode, ptypename: ptypename)
@mode = nil
true
end
end
def parse_start_statement!
if md = consume!(/start\s+([\w\s]*);/)
start_rules = md[1].split(/\s+/).map(&:strip)
start_rules.each do |start_rule|
@start_rules << start_rule unless @start_rules.include?(start_rule)
end
if md = consume!(/start\s+(\w+)\s*;/)
@start_rule = md[1]
end
end
@ -311,7 +247,7 @@ class Propane
else
@code_blocks[name] = code
end
@modeline = nil
@mode = nil
true
end
end
@ -336,8 +272,6 @@ class Propane
end
elsif md = consume!(%r{(.)})
pattern += md[1]
elsif @input == "" || @input.start_with?("\n")
raise Error.new("Line #{@line_number}: Unterminated pattern; expected `/`")
end
end
pattern
@ -381,14 +315,6 @@ class Propane
end
end
def get_modes_from_modeline
if @modeline
Set[*@modeline.split(",").map(&:strip)]
else
Set.new
end
end
end
end

View File

@ -26,14 +26,8 @@ class Propane
private
def build_tables!
modenames = @grammar.patterns.reduce(Set.new) do |result, pattern|
result + pattern.modes
end
@modes = modenames.reduce({}) do |result, modename|
result[modename] = @grammar.patterns.select do |pattern|
pattern.modes.include?(modename)
end
result
@modes = @grammar.patterns.group_by do |pattern|
pattern.mode
end.transform_values do |patterns|
{dfa: DFA.new(patterns)}
end

View File

@ -14,22 +14,12 @@ class Propane
@item_sets = []
@item_sets_set = {}
@warnings = Set.new
@errors = Set.new
@options = options
start_items = grammar.rules[0...grammar.start_rules.length].map do |start_rule|
Item.new(start_rule, 0)
end
start_item_sets = start_items.map {|item| ItemSet.new([item])}
eval_item_sets = Set[*start_item_sets]
start_item = Item.new(grammar.rules.first, 0)
eval_item_sets = Set[ItemSet.new([start_item])]
while eval_item_sets.size > 0
item_set =
if start_item_sets.size > 0
# Ensure we evaluate start_item_sets first, in order
start_item_sets.slice!(0)
else
eval_item_sets.first
end
item_set = eval_item_sets.first
eval_item_sets.delete(item_set)
unless @item_sets_set.include?(item_set)
item_set.id = @item_sets.size
@ -49,20 +39,11 @@ class Propane
end
build_reduce_actions!
build_follow_sets!
build_tables!
write_log!
errormessage = ""
if @errors.size > 0
errormessage += @errors.join("\n")
end
if @warnings.size > 0 && @options[:warnings_as_errors]
if errormessage != ""
errormessage += "\n"
end
errormessage += "Fatal errors (-w):\n" + @warnings.join("\n")
end
if errormessage != ""
raise Error.new(errormessage)
raise Error.new("Fatal errors (-w):\n" + @warnings.join("\n"))
end
end
@ -73,13 +54,24 @@ class Propane
@shift_table = []
@reduce_table = []
@item_sets.each do |item_set|
unless item_set.reduce_rules.empty?
item_set.shift_entries.each do |shift_entry|
token = shift_entry[:symbol]
if item_set.reduce_actions
if rule = item_set.reduce_actions[token]
@warnings << "Shift/Reduce conflict (state #{item_set.id}) between token #{token.name} and rule #{rule.name} (defined on line #{rule.line_number})"
shift_entries = item_set.next_symbols.map do |next_symbol|
state_id =
if next_symbol.name == "$EOF"
0
else
item_set.next_item_set[next_symbol].id
end
{
symbol: next_symbol,
state_id: state_id,
}
end
if item_set.reduce_actions
shift_entries.each do |shift_entry|
token = shift_entry[:symbol]
if item_set.reduce_actions.include?(token)
rule = item_set.reduce_actions[token]
@warnings << "Shift/Reduce conflict (state #{item_set.id}) between token #{token.name} and rule #{rule.name} (defined on line #{rule.line_number})"
end
end
end
@ -90,7 +82,7 @@ class Propane
propagate_optional_target: rule.optional? && rule.components.size == 1}]
elsif reduce_actions = item_set.reduce_actions
reduce_actions.map do |token, rule|
{token: token, token_id: token.id, rule_id: rule.id, rule: rule,
{token_id: token.id, rule_id: rule.id, rule: rule,
rule_set_id: rule.rule_set.id, n_states: rule.components.size,
propagate_optional_target: rule.optional? && rule.components.size == 1}
end
@ -99,11 +91,11 @@ class Propane
end
@state_table << {
shift_index: @shift_table.size,
n_shifts: item_set.shift_entries.size,
n_shifts: shift_entries.size,
reduce_index: @reduce_table.size,
n_reduces: reduce_entries.size,
}
@shift_table += item_set.shift_entries
@shift_table += shift_entries
@reduce_table += reduce_entries
end
end
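The state table built above stores only an index and a count into the flat shift and reduce tables; a minimal sketch of that flattened layout, using made-up per-state data:

```ruby
# Flatten per-state shift entries into one global table, recording each
# state's slice as (shift_index, n_shifts) -- the layout build_tables! emits.
shift_table = []
state_table = []
per_state_shifts = [[:tok_a, :tok_b], [:tok_c]]  # hypothetical data
per_state_shifts.each do |shifts|
  state_table << { shift_index: shift_table.size, n_shifts: shifts.size }
  shift_table.concat(shifts)
end

# Look up state 1's shift entries via its slice.
entry = state_table[1]
slice = shift_table[entry[:shift_index], entry[:n_shifts]]
```

This keeps the per-state records fixed-size while the variable-length entry lists live in one contiguous table, which is how the generated parser indexes its tables at runtime.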
@ -123,109 +115,7 @@ class Propane
# @return [void]
def build_reduce_actions!
@item_sets.each do |item_set|
build_shift_entries(item_set)
build_reduce_actions_for_item_set(item_set)
end
item_sets_to_process = @item_sets.select do |item_set|
# We need lookahead reduce actions if:
# 1) There is more than one possible rule to reduce. In this case the
# lookahead token can help choose which rule to reduce.
# 2) There is at least one shift action and one reduce action for
# this item set. In this case the lookahead reduce actions are
# needed to test for a Shift/Reduce conflict.
item_set.reduce_rules.size > 1 ||
(item_set.reduce_rules.size > 0 && item_set.shift_entries.size > 0)
end
if RbConfig::CONFIG["host_os"] =~ /linux/
item_sets_by_id = {}
item_sets_to_process.each do |item_set|
item_sets_by_id[item_set.object_id] = item_set
end
tokens_by_id = {}
@grammar.tokens.each do |token|
tokens_by_id[token.object_id] = token
end
rules_by_id = {}
@grammar.rules.each do |rule|
rules_by_id[rule.object_id] = rule
end
n_threads = Util.determine_n_threads
semaphore = Mutex.new
queue = Queue.new
threads = {}
n_threads.times do
piper, pipew = IO.pipe
thread = Thread.new do
loop do
item_set = nil
semaphore.synchronize do
item_set = item_sets_to_process.slice!(0)
end
break if item_set.nil?
fork do
piper.close
build_lookahead_reduce_actions_for_item_set(item_set, pipew)
end
end
queue.push(Thread.current)
end
threads[thread] = [piper, pipew]
end
until threads.empty?
thread = queue.pop
piper, pipew = threads[thread]
pipew.close
thread_txt = piper.read
thread_txt.each_line do |line|
if line.start_with?("RA,")
parts = line.split(",")
item_set_id, token_id, rule_id = parts[1..3].map(&:to_i)
item_set = item_sets_by_id[item_set_id]
unless item_set
raise "Internal error: could not find item set from thread"
end
token = tokens_by_id[token_id]
unless token
raise "Internal error: could not find token from thread"
end
rule = rules_by_id[rule_id]
unless rule
raise "Internal error: could not find rule from thread"
end
item_set.reduce_actions ||= {}
item_set.reduce_actions[token] = rule
elsif line.start_with?("Error: ")
@errors << line.chomp
else
raise "Internal error: unhandled thread line #{line}"
end
end
thread.join
threads.delete(thread)
end
else
# Fall back to single threaded algorithm.
item_sets_to_process.each do |item_set|
item_set.reduce_actions = build_lookahead_reduce_actions_for_item_set(item_set)
end
end
end
# Build the shift entries for a single item set.
#
# @return [void]
def build_shift_entries(item_set)
item_set.shift_entries = item_set.next_symbols.map do |next_symbol|
state_id =
if next_symbol.name == "$EOF"
0
else
item_set.next_item_set[next_symbol].id
end
{
symbol: next_symbol,
state_id: state_id,
}
item_set.reduce_actions = build_reduce_actions_for_item_set(item_set)
end
end
@ -234,16 +124,24 @@ class Propane
# @param item_set [ItemSet]
# ItemSet (parser state)
#
# @return [void]
# @return [nil, Hash]
# If no reduce actions are possible for the given item set, nil.
# Otherwise, a mapping of lookahead Tokens to the Rules to reduce.
def build_reduce_actions_for_item_set(item_set)
# To build the reduce actions, we start by looking at any
# "complete" items, i.e., items where the parse position is at the
# end of a rule. These are the only rules that are candidates for
# reduction in the current ItemSet.
item_set.reduce_rules = Set.new(item_set.items.select(&:complete?).map(&:rule))
reduce_rules = Set.new(item_set.items.select(&:complete?).map(&:rule))
if item_set.reduce_rules.size == 1
item_set.reduce_rule = item_set.reduce_rules.first
if reduce_rules.size == 1
item_set.reduce_rule = reduce_rules.first
end
if reduce_rules.size == 0
nil
else
build_lookahead_reduce_actions_for_item_set(item_set)
end
end
@ -251,28 +149,25 @@ class Propane
#
# @param item_set [ItemSet]
# ItemSet (parser state)
# @param fh [File]
# Output file handle for multiprocessing mode.
#
# @return [Hash]
# Mapping of lookahead Tokens to the Rules to reduce.
def build_lookahead_reduce_actions_for_item_set(item_set, fh = nil)
def build_lookahead_reduce_actions_for_item_set(item_set)
reduce_rules = Set.new(item_set.items.select(&:complete?).map(&:rule))
# We will be looking for all possible tokens that can follow instances of
# these rules. Rather than looking through the entire grammar for the
# possible following tokens, we will only look in the item sets leading
# up to this one. This restriction gives us a more precise lookahead set,
# and allows us to parse LALR grammars.
item_sets = Set[item_set] + item_set.leading_item_sets
item_set.reduce_rules.reduce({}) do |reduce_actions, reduce_rule|
reduce_rules.reduce({}) do |reduce_actions, reduce_rule|
lookahead_tokens_for_rule = build_lookahead_tokens_to_reduce(reduce_rule, item_sets)
lookahead_tokens_for_rule.each do |lookahead_token|
if existing_reduce_rule = reduce_actions[lookahead_token]
error = "Error: reduce/reduce conflict (state #{item_set.id}) between rule #{existing_reduce_rule.name}##{existing_reduce_rule.id} (defined on line #{existing_reduce_rule.line_number}) and rule #{reduce_rule.name}##{reduce_rule.id} (defined on line #{reduce_rule.line_number}) for lookahead token #{lookahead_token}"
@errors << error
fh.puts(error) if fh
raise Error.new("Error: reduce/reduce conflict (state #{item_set.id}) between rule #{existing_reduce_rule.name}##{existing_reduce_rule.id} (defined on line #{existing_reduce_rule.line_number}) and rule #{reduce_rule.name}##{reduce_rule.id} (defined on line #{reduce_rule.line_number})")
end
reduce_actions[lookahead_token] = reduce_rule
fh.puts "RA,#{item_set.object_id},#{lookahead_token.object_id},#{reduce_rule.object_id}" if fh
end
reduce_actions
end
@ -319,7 +214,6 @@ class Propane
rule_set = item.rule.rule_set
unless checked_rule_sets.include?(rule_set)
rule_sets_to_check_after << rule_set
checked_rule_sets << rule_set
end
break
when Token
@ -339,6 +233,51 @@ class Propane
lookahead_tokens
end
# Build the follow sets for each ItemSet.
#
# @return [void]
def build_follow_sets!
@item_sets.each do |item_set|
item_set.follow_set = build_follow_set_for_item_set(item_set)
end
end
# Build the follow set for the given ItemSet.
#
# @param item_set [ItemSet]
# The ItemSet to build the follow set for.
#
# @return [Set]
# Follow set for the given ItemSet.
def build_follow_set_for_item_set(item_set)
follow_set = Set.new
rule_sets_to_check_after = Set.new
item_set.items.each do |item|
(1..).each do |offset|
case symbol = item.next_symbol(offset)
when nil
rule_sets_to_check_after << item.rule.rule_set
break
when Token
follow_set << symbol
break
when RuleSet
follow_set += symbol.start_token_set
unless symbol.could_be_empty?
break
end
end
end
end
reduce_lookaheads = build_lookahead_reduce_actions_for_item_set(item_set)
reduce_lookaheads.each do |token, rule_set|
if rule_sets_to_check_after.include?(rule_set)
follow_set << token
end
end
follow_set
end
def write_log!
@log.puts Util.banner("Parser Rules")
@grammar.rules.each do |rule|
View File
@ -22,7 +22,6 @@ class Propane
def initialize(rule, position)
@rule = rule
@position = position
@_hash = [@rule, @position].hash
end
# Hash function.
@ -30,7 +29,7 @@ class Propane
# @return [Integer]
# Hash code.
def hash
@_hash
[@rule, @position].hash
end
# Compare Item objects.
View File
@ -2,7 +2,7 @@ class Propane
class Parser
# Represent a parser "item set", which is a set of possible items that the
# parser could currently be parsing. This is equivalent to a parser state.
# parser could currently be parsing.
class ItemSet
# @return [Set<Item>]
@ -25,18 +25,14 @@ class Propane
# Rule to reduce if there is only one possibility.
attr_accessor :reduce_rule
# @return [Set<Rule>]
# Set of rules that could be reduced in this parser state.
attr_accessor :reduce_rules
# @return [nil, Hash]
# Reduce actions, mapping lookahead tokens to rules, if there is
# more than one rule that could be reduced.
attr_accessor :reduce_actions
# @return [Array<Hash>]
# Shift table entries.
attr_accessor :shift_entries
# @return [Set<Token>]
# Follow set for the ItemSet.
attr_accessor :follow_set
# Build an ItemSet.
#
@ -54,7 +50,7 @@ class Propane
# @return [Set<Token, RuleSet>]
# Set of next symbols for all Items in this ItemSet.
def next_symbols
@_next_symbols ||= Set.new(@items.map(&:next_symbol).compact)
Set.new(@items.map(&:next_symbol).compact)
end
# Build a next ItemSet for the given next symbol.
@ -103,8 +99,6 @@ class Propane
# @return [Set<ItemSet>]
# Set of all ItemSets that lead up to this ItemSet.
def leading_item_sets
@_leading_item_sets ||=
begin
result = Set.new
eval_sets = Set[self]
evaled = Set.new
@ -121,7 +115,6 @@ class Propane
end
result
end
end
# Represent the ItemSet as a String.
#
View File
@ -26,9 +26,9 @@ class Propane
# Regex NFA for matching the pattern.
attr_reader :nfa
# @return [Set]
# Lexer modes for this pattern.
attr_accessor :modes
# @return [String, nil]
# Lexer mode for this pattern.
attr_accessor :mode
# @return [String, nil]
# Parser value type name.
@ -46,16 +46,16 @@ class Propane
# Token to be returned by this pattern.
# @option options [Integer, nil] :line_number
# Line number where the token was defined in the input grammar.
# @option options [String, nil] :modes
# Lexer modes for this pattern.
# @option options [String, nil] :mode
# Lexer mode for this pattern.
def initialize(options)
@code = options[:code]
@pattern = options[:pattern]
@token = options[:token]
@line_number = options[:line_number]
@modes = options[:modes]
@mode = options[:mode]
@ptypename = options[:ptypename]
regex = Regex.new(@pattern, @line_number)
regex = Regex.new(@pattern)
regex.nfa.end_state.accepts = self
@nfa = regex.nfa
end
View File
@ -4,13 +4,12 @@ class Propane
attr_reader :unit
attr_reader :nfa
def initialize(pattern, line_number)
def initialize(pattern)
@pattern = pattern.dup
@line_number = line_number
@unit = parse_alternates
@nfa = @unit.to_nfa
if @pattern != ""
raise Error.new(%[Line #{@line_number}: unexpected "#{@pattern}" in pattern])
raise Error.new(%[Unexpected "#{@pattern}" in pattern])
end
end
@ -42,7 +41,7 @@ class Propane
mu = MultiplicityUnit.new(last_unit, min_count, max_count)
au.replace_last!(mu)
else
raise Error.new("Line #{@line_number}: #{c} follows nothing")
raise Error.new("#{c} follows nothing")
end
when "|"
au.new_alternate!
@ -60,7 +59,7 @@ class Propane
def parse_group
au = parse_alternates
if @pattern[0] != ")"
raise Error.new("Line #{@line_number}: unterminated group in pattern")
raise Error.new("Unterminated group in pattern")
end
@pattern.slice!(0)
au
@ -71,7 +70,7 @@ class Propane
index = 0
loop do
if @pattern == ""
raise Error.new("Line #{@line_number}: unterminated character class")
raise Error.new("Unterminated character class")
end
c = @pattern.slice!(0)
if c == "]"
@ -85,13 +84,13 @@ class Propane
elsif c == "-" && @pattern[0] != "]"
begin_cu = ccu.last_unit
unless begin_cu.is_a?(CharacterRangeUnit) && begin_cu.code_point_range.size == 1
raise Error.new("Line #{@line_number}: character range must be between single characters")
raise Error.new("Character range must be between single characters")
end
if @pattern[0] == "\\"
@pattern.slice!(0)
end_cu = parse_backslash
unless end_cu.is_a?(CharacterRangeUnit) && end_cu.code_point_range.size == 1
raise Error.new("Line #{@line_number}: character range must be between single characters")
raise Error.new("Character range must be between single characters")
end
max_code_point = end_cu.code_point
else
@ -117,7 +116,7 @@ class Propane
elsif max_count.to_s != ""
max_count = max_count.to_i
if max_count < min_count
raise Error.new("Line #{@line_number}: maximum repetition count cannot be less than minimum repetition count")
raise Error.new("Maximum repetition count cannot be less than minimum repetition count")
end
else
max_count = nil
@ -125,33 +124,28 @@ class Propane
@pattern = pattern
[min_count, max_count]
else
raise Error.new("Line #{@line_number}: unexpected match count following {")
raise Error.new("Unexpected match count at #{@pattern}")
end
end
def parse_backslash
if @pattern == ""
raise Error.new("Line #{@line_number}: error: unfollowed \\")
raise Error.new("Error: unfollowed \\")
else
c = @pattern.slice!(0)
case c
when "a"
CharacterRangeUnit.new("\a")
CharacterRangeUnit.new("\a", "\a")
when "b"
CharacterRangeUnit.new("\b")
CharacterRangeUnit.new("\b", "\b")
when "d"
CharacterRangeUnit.new("0", "9")
when "D"
ccu = CharacterClassUnit.new
ccu << CharacterRangeUnit.new("0", "9")
ccu.negate = true
ccu
when "f"
CharacterRangeUnit.new("\f")
CharacterRangeUnit.new("\f", "\f")
when "n"
CharacterRangeUnit.new("\n")
CharacterRangeUnit.new("\n", "\n")
when "r"
CharacterRangeUnit.new("\r")
CharacterRangeUnit.new("\r", "\r")
when "s"
ccu = CharacterClassUnit.new
ccu << CharacterRangeUnit.new(" ")
@ -161,35 +155,10 @@ class Propane
ccu << CharacterRangeUnit.new("\f")
ccu << CharacterRangeUnit.new("\v")
ccu
when "S"
ccu = CharacterClassUnit.new
ccu << CharacterRangeUnit.new(" ")
ccu << CharacterRangeUnit.new("\t")
ccu << CharacterRangeUnit.new("\r")
ccu << CharacterRangeUnit.new("\n")
ccu << CharacterRangeUnit.new("\f")
ccu << CharacterRangeUnit.new("\v")
ccu.negate = true
ccu
when "t"
CharacterRangeUnit.new("\t")
CharacterRangeUnit.new("\t", "\t")
when "v"
CharacterRangeUnit.new("\v")
when "w"
ccu = CharacterClassUnit.new
ccu << CharacterRangeUnit.new("_")
ccu << CharacterRangeUnit.new("0", "9")
ccu << CharacterRangeUnit.new("a", "z")
ccu << CharacterRangeUnit.new("A", "Z")
ccu
when "W"
ccu = CharacterClassUnit.new
ccu << CharacterRangeUnit.new("_")
ccu << CharacterRangeUnit.new("0", "9")
ccu << CharacterRangeUnit.new("a", "z")
ccu << CharacterRangeUnit.new("A", "Z")
ccu.negate = true
ccu
CharacterRangeUnit.new("\v", "\v")
else
CharacterRangeUnit.new(c)
end
View File
@ -92,20 +92,17 @@ class Propane
@units = []
@negate = false
end
def method_missing(*args, &block)
@units.__send__(*args, &block)
def initialize
@units = []
end
def method_missing(*args)
@units.__send__(*args)
end
def <<(thing)
if thing.is_a?(CharacterClassUnit)
if thing.negate
CodePointRange.invert_ranges(thing.map(&:code_point_range)).each do |cpr|
@units << CharacterRangeUnit.new(cpr.first, cpr.last)
end
else
thing.each do |ccu_unit|
@units << ccu_unit
end
end
else
@units << thing
end
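(Aside, not part of the diff: the negated-class handling above folds a nested negated character class in by complementing its code point ranges. A minimal standalone Ruby sketch of that inversion — illustrative only, not the project's `CodePointRange.invert_ranges` implementation:)

```ruby
# Complement a sorted, non-overlapping list of inclusive
# [first, last] code point ranges over the Unicode range 0..0x10FFFF.
def invert_ranges(ranges)
  inverted = []
  next_cp = 0
  ranges.sort_by(&:first).each do |first, last|
    # Any gap before this range belongs to the complement.
    inverted << [next_cp, first - 1] if first > next_cp
    next_cp = last + 1
  end
  inverted << [next_cp, 0x10FFFF] if next_cp <= 0x10FFFF
  inverted
end

p invert_ranges([["a".ord, "z".ord]])
# => [[0, 96], [123, 1114111]]
```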
View File
@ -36,7 +36,7 @@ class Propane
# @return [Array<Integer>]
# Map this rule's components to their positions in the parent RuleSet's
# node field pointer array. This is used for tree construction.
# node field pointer array. This is used for AST construction.
attr_accessor :rule_set_node_field_index_map
# Construct a Rule.
View File
@ -4,8 +4,8 @@ class Propane
class RuleSet
# @return [Array<Hash>]
# tree fields.
attr_reader :tree_fields
# AST fields.
attr_reader :ast_fields
# @return [Integer]
# ID of the RuleSet.
@ -100,28 +100,26 @@ class Propane
# Finalize a RuleSet after adding all Rules to it.
def finalize(grammar)
if grammar.tree
build_tree_fields(grammar)
if grammar.ast
build_ast_fields(grammar)
end
end
private
# Build the set of tree fields for this RuleSet.
# Build the set of AST fields for this RuleSet.
#
# This is an Array of Hashes. Each entry in the Array corresponds to a
# field location in the tree node. The entry is a Hash. It could have one or
# field location in the AST node. The entry is a Hash. It could have one or
# two keys. It will always have the field name with a positional suffix as
# a key. It may also have the field name without the positional suffix if
# that field only exists in one position across all Rules in the RuleSet.
#
# @return [void]
def build_tree_fields(grammar)
field_tree_node_indexes = {}
def build_ast_fields(grammar)
field_ast_node_indexes = {}
field_indexes_across_all_rules = {}
# Stores the index into @tree_fields by field alias name.
field_aliases = {}
@tree_fields = []
@ast_fields = []
@rules.each do |rule|
rule.components.each_with_index do |component, i|
if component.is_a?(RuleSet) && component.optional?
@ -132,25 +130,15 @@ class Propane
else
node_name = component.name
end
struct_name = "#{grammar.tree_prefix}#{node_name}#{grammar.tree_suffix}"
struct_name = "#{grammar.ast_prefix}#{node_name}#{grammar.ast_suffix}"
field_name = "p#{node_name}#{i + 1}"
unless field_tree_node_indexes[field_name]
field_tree_node_indexes[field_name] = @tree_fields.size
@tree_fields << {field_name => struct_name}
end
rule.aliases.each do |alias_name, index|
if index == i
alias_tree_fields_index = field_tree_node_indexes[field_name]
if field_aliases[alias_name] && field_aliases[alias_name] != alias_tree_fields_index
raise Error.new("Error: conflicting tree node field positions for alias `#{alias_name}` in rule #{rule.name} defined on line #{rule.line_number}")
end
field_aliases[alias_name] = alias_tree_fields_index
@tree_fields[alias_tree_fields_index][alias_name] = @tree_fields[alias_tree_fields_index].first[1]
end
unless field_ast_node_indexes[field_name]
field_ast_node_indexes[field_name] = @ast_fields.size
@ast_fields << {field_name => struct_name}
end
field_indexes_across_all_rules[node_name] ||= Set.new
field_indexes_across_all_rules[node_name] << field_tree_node_indexes[field_name]
rule.rule_set_node_field_index_map[i] = field_tree_node_indexes[field_name]
field_indexes_across_all_rules[node_name] << field_ast_node_indexes[field_name]
rule.rule_set_node_field_index_map[i] = field_ast_node_indexes[field_name]
end
end
field_indexes_across_all_rules.each do |node_name, indexes_across_all_rules|
@ -158,8 +146,20 @@ class Propane
# If this field was only seen in one position across all rules,
# then add an alias to the positional field name that does not
# include the position.
@tree_fields[indexes_across_all_rules.first]["p#{node_name}"] =
"#{grammar.tree_prefix}#{node_name}#{grammar.tree_suffix}"
@ast_fields[indexes_across_all_rules.first]["p#{node_name}"] =
"#{grammar.ast_prefix}#{node_name}#{grammar.ast_suffix}"
end
end
# Now merge in the field aliases as given by the user in the
# grammar.
field_aliases = {}
@rules.each do |rule|
rule.aliases.each do |alias_name, index|
if field_aliases[alias_name] && field_aliases[alias_name] != index
raise Error.new("Error: conflicting AST node field positions for alias `#{alias_name}`")
end
field_aliases[alias_name] = index
@ast_fields[index][alias_name] = @ast_fields[index].first[1]
end
end
end
View File
@ -10,32 +10,6 @@ class Propane
"#{s}\n* #{message} *\n#{s}\n"
end
# Determine the number of threads to use.
#
# @return [Integer]
# The number of threads to use.
def determine_n_threads
# Try to figure out how many threads are available on the host hardware.
begin
case RbConfig::CONFIG["host_os"]
when /linux/
return File.read("/proc/cpuinfo").scan(/^processor\s*:/).size
when /mswin|mingw|msys/
if `wmic cpu get NumberOfLogicalProcessors -value` =~ /NumberOfLogicalProcessors=(\d+)/
return $1.to_i
end
when /darwin/
if `sysctl -n hw.ncpu` =~ /(\d+)/
return $1.to_i
end
end
rescue
end
# If we can't figure it out, default to 4.
4
end
end
end
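(Aside, not part of the diff: the removed `determine_n_threads` helper probed `/proc/cpuinfo`, `wmic`, or `sysctl` per OS. Ruby's standard library exposes the same count portably; a sketch of an equivalent helper with the same fallback of 4:)

```ruby
require "etc"

# Portable logical-processor count via the stdlib Etc module;
# fall back to 4 when the platform cannot report it.
def n_threads
  Etc.nprocessors
rescue NotImplementedError
  4
end

puts n_threads # a positive integer, e.g. 8
```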
View File
@ -1,3 +1,3 @@
class Propane
VERSION = "4.1.0"
VERSION = "1.4.0"
end
View File
@ -120,11 +120,11 @@ string: /\\t/ <<
>>
string: /\\u[0-9a-fA-F]{4}/ <<
/* Not actually going to encode the code point for this example... */
char s[] = {'{', (char)match[2], (char)match[3], (char)match[4], (char)match[5], '}', 0};
char s[] = {'{', match[2], match[3], match[4], match[5], '}', 0};
str_append(&string_value, s);
>>
string: /[^\\]/ <<
char s[] = {(char)match[0], 0};
char s[] = {match[0], 0};
str_append(&string_value, s);
>>
Start -> Value <<
View File
@ -5,7 +5,7 @@
JSONValue * JSONValue_new(size_t id)
{
JSONValue * jv = (JSONValue *)calloc(1, sizeof(JSONValue));
JSONValue * jv = calloc(1, sizeof(JSONValue));
jv->id = id;
return jv;
}
@ -29,7 +29,7 @@ void JSONObject_append(JSONValue * object, char const * name, JSONValue * value)
}
}
size_t const new_size = size + 1;
JSONObjectEntry * new_entries = (JSONObjectEntry *)malloc(sizeof(object->object.entries[0]) * new_size);
void * new_entries = malloc(sizeof(object->object.entries[0]) * new_size);
if (size > 0)
{
memcpy(new_entries, object->object.entries, size * sizeof(object->object.entries[0]));
@ -52,7 +52,7 @@ void JSONArray_append(JSONValue * array, JSONValue * value)
{
size_t const size = array->array.size;
size_t const new_size = size + 1;
JSONValue ** new_entries = (JSONValue **)malloc(sizeof(JSONValue *) * new_size);
JSONValue ** new_entries = malloc(sizeof(JSONValue *) * new_size);
if (array->array.size > 0)
{
memcpy(new_entries, array->array.entries, sizeof(JSONValue *) * size);
View File
@ -11,12 +11,6 @@
#define JSON_FALSE 5u
#define JSON_NULL 6u
typedef struct JSONObjectEntry_s
{
char const * name;
struct JSONValue_s * value;
} JSONObjectEntry;
typedef struct JSONValue_s
{
size_t id;
@ -25,7 +19,11 @@ typedef struct JSONValue_s
struct
{
size_t size;
JSONObjectEntry * entries;
struct
{
char const * name;
struct JSONValue_s * value;
} * entries;
} object;
struct
{
View File
@ -151,30 +151,30 @@ EOF
o = grammar.patterns.find {|pattern| pattern.token == o}
expect(o).to_not be_nil
expect(o.modes).to be_empty
expect(o.mode).to be_nil
o = grammar.tokens.find {|token| token.name == "b"}
expect(o).to_not be_nil
o = grammar.patterns.find {|pattern| pattern.token == o}
expect(o).to_not be_nil
expect(o.modes).to eq Set["m1"]
expect(o.mode).to eq "m1"
o = grammar.patterns.find {|pattern| pattern.pattern == "foo"}
expect(o).to_not be_nil
expect(o.modes).to be_empty
expect(o.mode).to be_nil
o = grammar.patterns.find {|pattern| pattern.pattern == "bar"}
expect(o).to_not be_nil
expect(o.modes).to eq Set["m2"]
expect(o.mode).to eq "m2"
o = grammar.patterns.find {|pattern| pattern.pattern == "q"}
expect(o).to_not be_nil
expect(o.modes).to be_empty
expect(o.mode).to be_nil
o = grammar.patterns.find {|pattern| pattern.pattern == "r"}
expect(o).to_not be_nil
expect(o.modes).to eq Set["m3"]
expect(o.mode).to eq "m3"
end
it "allows assigning ptypes to tokens and rules" do
View File
@ -126,74 +126,6 @@ EOF
]
expect(run(<<EOF, ";")).to eq expected
token semicolon /;/;
EOF
end
it "matches a negated character class" do
expected = [
["pattern", "/abc/"],
]
expect(run(<<EOF, "/abc/")).to eq expected
token pattern /\\/[^\\s]*\\//;
EOF
end
it "matches special character classes " do
expected = [
["a", "abc123_FOO"],
]
expect(run(<<EOF, "abc123_FOO")).to eq expected
token a /\\w+/;
EOF
expected = [
["b", "FROG*%$#"],
]
expect(run(<<EOF, "FROG*%$#")).to eq expected
token b /FROG\\D{1,4}/;
EOF
expected = [
["c", "$883366"],
]
expect(run(<<EOF, "$883366")).to eq expected
token c /$\\d+/;
EOF
expected = [
["d", "^&$@"],
]
expect(run(<<EOF, "^&$@")).to eq expected
token d /^\\W+/;
EOF
expected = [
["a", "abc123_FOO"],
[nil, " "],
["b", "FROG*%$#"],
[nil, " "],
["c", "$883366"],
[nil, " "],
["d", "^&$@"],
]
expect(run(<<EOF, "abc123_FOO FROG*%$# $883366 ^&$@")).to eq expected
token a /\\w+/;
token b /FROG\\D{1,4}/;
token c /$\\d+/;
token d /^\\W+/;
drop /\\s+/;
EOF
end
it "matches a negated character class with a nested inner negated character class" do
expected = [
["t", "$&*"],
]
expect(run(<<EOF, "$&*")).to eq expected
token t /[^%\\W]+/;
EOF
end
it "\\s matches a newline" do
expected = [["s", "\n"]]
expect(run(<<EOF, "\n")).to eq expected
token s /\\s/;
EOF
end
end
View File
@ -2,14 +2,14 @@ class Propane
RSpec.describe Regex do
it "parses an empty expression" do
regex = Regex.new("", 1)
regex = Regex.new("")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0].size).to eq 0
end
it "parses a single character unit expression" do
regex = Regex.new("a", 1)
regex = Regex.new("a")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -19,7 +19,7 @@ class Propane
end
it "parses a group with a single character unit expression" do
regex = Regex.new("(a)", 1)
regex = Regex.new("(a)")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -33,7 +33,7 @@ class Propane
end
it "parses a *" do
regex = Regex.new("a*", 1)
regex = Regex.new("a*")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -47,7 +47,7 @@ class Propane
end
it "parses a +" do
regex = Regex.new("a+", 1)
regex = Regex.new("a+")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -61,7 +61,7 @@ class Propane
end
it "parses a ?" do
regex = Regex.new("a?", 1)
regex = Regex.new("a?")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -75,7 +75,7 @@ class Propane
end
it "parses a multiplicity count" do
regex = Regex.new("a{5}", 1)
regex = Regex.new("a{5}")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -89,7 +89,7 @@ class Propane
end
it "parses a minimum-only multiplicity count" do
regex = Regex.new("a{5,}", 1)
regex = Regex.new("a{5,}")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -103,7 +103,7 @@ class Propane
end
it "parses a minimum and maximum multiplicity count" do
regex = Regex.new("a{5,8}", 1)
regex = Regex.new("a{5,8}")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -118,7 +118,7 @@ class Propane
end
it "parses an escaped *" do
regex = Regex.new("a\\*", 1)
regex = Regex.new("a\\*")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -131,7 +131,7 @@ class Propane
end
it "parses an escaped +" do
regex = Regex.new("a\\+", 1)
regex = Regex.new("a\\+")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -144,7 +144,7 @@ class Propane
end
it "parses an escaped \\" do
regex = Regex.new("\\\\d", 1)
regex = Regex.new("\\\\d")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -157,7 +157,7 @@ class Propane
end
it "parses a character class" do
regex = Regex.new("[a-z_]", 1)
regex = Regex.new("[a-z_]")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -175,7 +175,7 @@ class Propane
end
it "parses a negated character class" do
regex = Regex.new("[^xyz]", 1)
regex = Regex.new("[^xyz]")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -189,25 +189,8 @@ class Propane
expect(ccu[0].first).to eq "x".ord
end
it "parses a negated character class with inner character classes" do
regex = Regex.new("[^x\\sz]", 1)
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
seq_unit = regex.unit.alternates[0]
expect(seq_unit.size).to eq 1
expect(seq_unit[0]).to be_a Regex::CharacterClassUnit
ccu = seq_unit[0]
expect(ccu.negate).to be_truthy
expect(ccu.size).to eq 8
expect(ccu[0]).to be_a Regex::CharacterRangeUnit
expect(ccu[0].first).to eq "x".ord
expect(ccu[1].first).to eq " ".ord
expect(ccu[7].first).to eq "z".ord
end
it "parses - as a plain character at beginning of a character class" do
regex = Regex.new("[-9]", 1)
regex = Regex.new("[-9]")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -221,7 +204,7 @@ class Propane
end
it "parses - as a plain character at end of a character class" do
regex = Regex.new("[0-]", 1)
regex = Regex.new("[0-]")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -237,7 +220,7 @@ class Propane
end
it "parses - as a plain character at beginning of a negated character class" do
regex = Regex.new("[^-9]", 1)
regex = Regex.new("[^-9]")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -252,7 +235,7 @@ class Propane
end
it "parses . as a plain character in a character class" do
regex = Regex.new("[.]", 1)
regex = Regex.new("[.]")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -267,7 +250,7 @@ class Propane
end
it "parses - as a plain character when escaped in middle of character class" do
regex = Regex.new("[0\\-9]", 1)
regex = Regex.new("[0\\-9]")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -286,7 +269,7 @@ class Propane
end
it "parses alternates" do
regex = Regex.new("ab|c", 1)
regex = Regex.new("ab|c")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 2
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -296,7 +279,7 @@ class Propane
end
it "parses a ." do
regex = Regex.new("a.b", 1)
regex = Regex.new("a.b")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 1
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
@ -307,7 +290,7 @@ class Propane
end
it "parses something complex" do
regex = Regex.new("(a|)*|[^^]|\\|v|[x-y]+", 1)
regex = Regex.new("(a|)*|[^^]|\\|v|[x-y]+")
expect(regex.unit).to be_a Regex::AlternatesUnit
expect(regex.unit.alternates.size).to eq 4
expect(regex.unit.alternates[0]).to be_a Regex::SequenceUnit
File diff suppressed because it is too large
View File
@ -2,10 +2,6 @@ unless ENV["dist_specs"]
require "bundler/setup"
require "simplecov"
class MyFormatter
def format(*args)
end
end
SimpleCov.start do
add_filter "/spec/"
add_filter "/.bundle/"
@ -16,7 +12,6 @@ unless ENV["dist_specs"]
end
project_name "Propane"
merge_timeout 3600
formatter(MyFormatter)
end
RSpec.configure do |config|
View File
@ -6,10 +6,10 @@
int main()
{
char const * input = "a, ((b)), b";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
Start * start = p_result(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
Start * start = p_result(&context);
assert(start->pItems1 != NULL);
assert(start->pItems != NULL);
Items * items = start->pItems;
@ -33,22 +33,16 @@ int main()
assert_eq(22, itemsmore->pItem->pToken1->pvalue);
assert(itemsmore->pItemsMore == NULL);
p_tree_delete(start);
p_context_delete(context);
input = "";
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
start = p_result(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
start = p_result(&context);
assert(start->pItems == NULL);
p_tree_delete(start);
p_context_delete(context);
input = "2 1";
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
start = p_result(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
start = p_result(&context);
assert(start->pItems != NULL);
assert(start->pItems->pItem != NULL);
assert(start->pItems->pItem->pDual != NULL);
@ -57,8 +51,5 @@ int main()
assert(start->pItems->pItem->pDual->pTwo2 == NULL);
assert(start->pItems->pItem->pDual->pOne1 == NULL);
p_tree_delete(start);
p_context_delete(context);
return 0;
}
View File
@ -10,9 +10,10 @@ int main()
unittest
{
string input = "a, ((b)), b";
p_context_t * context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
Start * start = p_result(context);
p_context_t context;
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
Start * start = p_result(&context);
assert(start.pItems1 !is null);
assert(start.pItems !is null);
Items * items = start.pItems;
@ -36,20 +37,16 @@ unittest
assert_eq(22, itemsmore.pItem.pToken1.pvalue);
assert(itemsmore.pItemsMore is null);
p_tree_delete(start);
input = "";
context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
start = p_result(context);
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
start = p_result(&context);
assert(start.pItems is null);
p_tree_delete(start);
input = "2 1";
context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
start = p_result(context);
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
start = p_result(&context);
assert(start.pItems !is null);
assert(start.pItems.pItem !is null);
assert(start.pItems.pItem.pDual !is null);
@@ -57,6 +54,4 @@ unittest
assert(start.pItems.pItem.pDual.pOne2 !is null);
assert(start.pItems.pItem.pDual.pTwo2 is null);
assert(start.pItems.pItem.pDual.pOne1 is null);
p_tree_delete(start);
}


@@ -6,17 +6,14 @@
int main()
{
char const * input = "\na\nb\nc";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
Start * start = p_result(&context);
assert_eq(TOKEN_a, start->first->pToken->token);
assert_eq(TOKEN_b, start->second->pToken->token);
assert_eq(TOKEN_c, start->third->pToken->token);
p_tree_delete(start);
p_context_delete(context);
return 0;
}


@@ -10,13 +10,12 @@ int main()
unittest
{
string input = "\na\nb\nc";
p_context_t * context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
Start * start = p_result(&context);
assert_eq(TOKEN_a, start.first.pToken.token);
assert_eq(TOKEN_b, start.second.pToken.token);
assert_eq(TOKEN_c, start.third.pToken.token);
p_tree_delete(start);
}


@@ -0,0 +1,102 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include "testutils.h"
int main()
{
char const * input = "\na\n bb ccc";
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
Start * start = p_result(&context);
assert_eq(1, start->pT1->pToken->position.row);
assert_eq(0, start->pT1->pToken->position.col);
assert_eq(1, start->pT1->pToken->end_position.row);
assert_eq(0, start->pT1->pToken->end_position.col);
assert(p_position_valid(start->pT1->pA->position));
assert_eq(2, start->pT1->pA->position.row);
assert_eq(2, start->pT1->pA->position.col);
assert_eq(2, start->pT1->pA->end_position.row);
assert_eq(7, start->pT1->pA->end_position.col);
assert_eq(1, start->pT1->position.row);
assert_eq(0, start->pT1->position.col);
assert_eq(2, start->pT1->end_position.row);
assert_eq(7, start->pT1->end_position.col);
assert_eq(1, start->position.row);
assert_eq(0, start->position.col);
assert_eq(2, start->end_position.row);
assert_eq(7, start->end_position.col);
input = "a\nbb";
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert_eq(0, start->pT1->pToken->position.row);
assert_eq(0, start->pT1->pToken->position.col);
assert_eq(0, start->pT1->pToken->end_position.row);
assert_eq(0, start->pT1->pToken->end_position.col);
assert(p_position_valid(start->pT1->pA->position));
assert_eq(1, start->pT1->pA->position.row);
assert_eq(0, start->pT1->pA->position.col);
assert_eq(1, start->pT1->pA->end_position.row);
assert_eq(1, start->pT1->pA->end_position.col);
assert_eq(0, start->pT1->position.row);
assert_eq(0, start->pT1->position.col);
assert_eq(1, start->pT1->end_position.row);
assert_eq(1, start->pT1->end_position.col);
assert_eq(0, start->position.row);
assert_eq(0, start->position.col);
assert_eq(1, start->end_position.row);
assert_eq(1, start->end_position.col);
input = "a\nc\nc";
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert_eq(0, start->pT1->pToken->position.row);
assert_eq(0, start->pT1->pToken->position.col);
assert_eq(0, start->pT1->pToken->end_position.row);
assert_eq(0, start->pT1->pToken->end_position.col);
assert(p_position_valid(start->pT1->pA->position));
assert_eq(1, start->pT1->pA->position.row);
assert_eq(0, start->pT1->pA->position.col);
assert_eq(2, start->pT1->pA->end_position.row);
assert_eq(0, start->pT1->pA->end_position.col);
assert_eq(0, start->pT1->position.row);
assert_eq(0, start->pT1->position.col);
assert_eq(2, start->pT1->end_position.row);
assert_eq(0, start->pT1->end_position.col);
assert_eq(0, start->position.row);
assert_eq(0, start->position.col);
assert_eq(2, start->end_position.row);
assert_eq(0, start->end_position.col);
input = "a";
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert_eq(0, start->pT1->pToken->position.row);
assert_eq(0, start->pT1->pToken->position.col);
assert_eq(0, start->pT1->pToken->end_position.row);
assert_eq(0, start->pT1->pToken->end_position.col);
assert(!p_position_valid(start->pT1->pA->position));
assert_eq(0, start->pT1->position.row);
assert_eq(0, start->pT1->position.col);
assert_eq(0, start->pT1->end_position.row);
assert_eq(0, start->pT1->end_position.col);
assert_eq(0, start->position.row);
assert_eq(0, start->position.col);
assert_eq(0, start->end_position.row);
assert_eq(0, start->end_position.col);
return 0;
}


@@ -0,0 +1,104 @@
import testparser;
import std.stdio;
import testutils;
int main()
{
return 0;
}
unittest
{
string input = "\na\n bb ccc";
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
Start * start = p_result(&context);
assert_eq(1, start.pT1.pToken.position.row);
assert_eq(0, start.pT1.pToken.position.col);
assert_eq(1, start.pT1.pToken.end_position.row);
assert_eq(0, start.pT1.pToken.end_position.col);
assert(start.pT1.pA.position.valid);
assert_eq(2, start.pT1.pA.position.row);
assert_eq(2, start.pT1.pA.position.col);
assert_eq(2, start.pT1.pA.end_position.row);
assert_eq(7, start.pT1.pA.end_position.col);
assert_eq(1, start.pT1.position.row);
assert_eq(0, start.pT1.position.col);
assert_eq(2, start.pT1.end_position.row);
assert_eq(7, start.pT1.end_position.col);
assert_eq(1, start.position.row);
assert_eq(0, start.position.col);
assert_eq(2, start.end_position.row);
assert_eq(7, start.end_position.col);
input = "a\nbb";
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert_eq(0, start.pT1.pToken.position.row);
assert_eq(0, start.pT1.pToken.position.col);
assert_eq(0, start.pT1.pToken.end_position.row);
assert_eq(0, start.pT1.pToken.end_position.col);
assert(start.pT1.pA.position.valid);
assert_eq(1, start.pT1.pA.position.row);
assert_eq(0, start.pT1.pA.position.col);
assert_eq(1, start.pT1.pA.end_position.row);
assert_eq(1, start.pT1.pA.end_position.col);
assert_eq(0, start.pT1.position.row);
assert_eq(0, start.pT1.position.col);
assert_eq(1, start.pT1.end_position.row);
assert_eq(1, start.pT1.end_position.col);
assert_eq(0, start.position.row);
assert_eq(0, start.position.col);
assert_eq(1, start.end_position.row);
assert_eq(1, start.end_position.col);
input = "a\nc\nc";
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert_eq(0, start.pT1.pToken.position.row);
assert_eq(0, start.pT1.pToken.position.col);
assert_eq(0, start.pT1.pToken.end_position.row);
assert_eq(0, start.pT1.pToken.end_position.col);
assert(start.pT1.pA.position.valid);
assert_eq(1, start.pT1.pA.position.row);
assert_eq(0, start.pT1.pA.position.col);
assert_eq(2, start.pT1.pA.end_position.row);
assert_eq(0, start.pT1.pA.end_position.col);
assert_eq(0, start.pT1.position.row);
assert_eq(0, start.pT1.position.col);
assert_eq(2, start.pT1.end_position.row);
assert_eq(0, start.pT1.end_position.col);
assert_eq(0, start.position.row);
assert_eq(0, start.position.col);
assert_eq(2, start.end_position.row);
assert_eq(0, start.end_position.col);
input = "a";
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert_eq(0, start.pT1.pToken.position.row);
assert_eq(0, start.pT1.pToken.position.col);
assert_eq(0, start.pT1.pToken.end_position.row);
assert_eq(0, start.pT1.pToken.end_position.col);
assert(!start.pT1.pA.position.valid);
assert_eq(0, start.pT1.position.row);
assert_eq(0, start.pT1.position.col);
assert_eq(0, start.pT1.end_position.row);
assert_eq(0, start.pT1.end_position.col);
assert_eq(0, start.position.row);
assert_eq(0, start.position.col);
assert_eq(0, start.end_position.row);
assert_eq(0, start.end_position.col);
}


@@ -6,10 +6,10 @@
int main()
{
char const * input = "a, ((b)), b";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
PStartS * start = p_result(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
PStartS * start = p_result(&context);
assert(start->pItems1 != NULL);
assert(start->pItems != NULL);
PItemsS * items = start->pItems;
@@ -33,22 +33,16 @@ int main()
assert_eq(22, itemsmore->pItem->pToken1->pvalue);
assert(itemsmore->pItemsMore == NULL);
p_tree_delete(start);
p_context_delete(context);
input = "";
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
start = p_result(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
start = p_result(&context);
assert(start->pItems == NULL);
p_tree_delete(start);
p_context_delete(context);
input = "2 1";
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
start = p_result(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
start = p_result(&context);
assert(start->pItems != NULL);
assert(start->pItems->pItem != NULL);
assert(start->pItems->pItem->pDual != NULL);
@@ -57,8 +51,5 @@ int main()
assert(start->pItems->pItem->pDual->pTwo2 == NULL);
assert(start->pItems->pItem->pDual->pOne1 == NULL);
p_tree_delete(start);
p_context_delete(context);
return 0;
}


@@ -10,9 +10,10 @@ int main()
unittest
{
string input = "a, ((b)), b";
p_context_t * context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
PStartS * start = p_result(context);
p_context_t context;
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
PStartS * start = p_result(&context);
assert(start.pItems1 !is null);
assert(start.pItems !is null);
PItemsS * items = start.pItems;
@@ -36,20 +37,16 @@ unittest
assert_eq(22, itemsmore.pItem.pToken1.pvalue);
assert(itemsmore.pItemsMore is null);
p_tree_delete(start);
input = "";
context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
start = p_result(context);
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
start = p_result(&context);
assert(start.pItems is null);
p_tree_delete(start);
input = "2 1";
context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
start = p_result(context);
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
start = p_result(&context);
assert(start.pItems !is null);
assert(start.pItems.pItem !is null);
assert(start.pItems.pItem.pDual !is null);
@@ -57,6 +54,4 @@ unittest
assert(start.pItems.pItem.pDual.pOne2 !is null);
assert(start.pItems.pItem.pDual.pTwo2 is null);
assert(start.pItems.pItem.pDual.pOne1 is null);
p_tree_delete(start);
}


@@ -0,0 +1,84 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include "testutils.h"
int main()
{
char const * input = "abbccc";
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
Start * start = p_result(&context);
assert_eq(0, start->pT1->pToken->position.row);
assert_eq(0, start->pT1->pToken->position.col);
assert_eq(0, start->pT1->pToken->end_position.row);
assert_eq(0, start->pT1->pToken->end_position.col);
assert_eq(0, start->pT1->position.row);
assert_eq(0, start->pT1->position.col);
assert_eq(0, start->pT1->end_position.row);
assert_eq(0, start->pT1->end_position.col);
assert_eq(0, start->pT2->pToken->position.row);
assert_eq(1, start->pT2->pToken->position.col);
assert_eq(0, start->pT2->pToken->end_position.row);
assert_eq(2, start->pT2->pToken->end_position.col);
assert_eq(0, start->pT2->position.row);
assert_eq(1, start->pT2->position.col);
assert_eq(0, start->pT2->end_position.row);
assert_eq(2, start->pT2->end_position.col);
assert_eq(0, start->pT3->pToken->position.row);
assert_eq(3, start->pT3->pToken->position.col);
assert_eq(0, start->pT3->pToken->end_position.row);
assert_eq(5, start->pT3->pToken->end_position.col);
assert_eq(0, start->pT3->position.row);
assert_eq(3, start->pT3->position.col);
assert_eq(0, start->pT3->end_position.row);
assert_eq(5, start->pT3->end_position.col);
assert_eq(0, start->position.row);
assert_eq(0, start->position.col);
assert_eq(0, start->end_position.row);
assert_eq(5, start->end_position.col);
input = "\n\n bb\nc\ncc\n\n a";
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert_eq(2, start->pT1->pToken->position.row);
assert_eq(2, start->pT1->pToken->position.col);
assert_eq(2, start->pT1->pToken->end_position.row);
assert_eq(3, start->pT1->pToken->end_position.col);
assert_eq(2, start->pT1->position.row);
assert_eq(2, start->pT1->position.col);
assert_eq(2, start->pT1->end_position.row);
assert_eq(3, start->pT1->end_position.col);
assert_eq(3, start->pT2->pToken->position.row);
assert_eq(0, start->pT2->pToken->position.col);
assert_eq(4, start->pT2->pToken->end_position.row);
assert_eq(1, start->pT2->pToken->end_position.col);
assert_eq(3, start->pT2->position.row);
assert_eq(0, start->pT2->position.col);
assert_eq(4, start->pT2->end_position.row);
assert_eq(1, start->pT2->end_position.col);
assert_eq(6, start->pT3->pToken->position.row);
assert_eq(5, start->pT3->pToken->position.col);
assert_eq(6, start->pT3->pToken->end_position.row);
assert_eq(5, start->pT3->pToken->end_position.col);
assert_eq(6, start->pT3->position.row);
assert_eq(5, start->pT3->position.col);
assert_eq(6, start->pT3->end_position.row);
assert_eq(5, start->pT3->end_position.col);
assert_eq(2, start->position.row);
assert_eq(2, start->position.col);
assert_eq(6, start->end_position.row);
assert_eq(5, start->end_position.col);
return 0;
}


@@ -0,0 +1,86 @@
import testparser;
import std.stdio;
import testutils;
int main()
{
return 0;
}
unittest
{
string input = "abbccc";
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
Start * start = p_result(&context);
assert_eq(0, start.pT1.pToken.position.row);
assert_eq(0, start.pT1.pToken.position.col);
assert_eq(0, start.pT1.pToken.end_position.row);
assert_eq(0, start.pT1.pToken.end_position.col);
assert_eq(0, start.pT1.position.row);
assert_eq(0, start.pT1.position.col);
assert_eq(0, start.pT1.end_position.row);
assert_eq(0, start.pT1.end_position.col);
assert_eq(0, start.pT2.pToken.position.row);
assert_eq(1, start.pT2.pToken.position.col);
assert_eq(0, start.pT2.pToken.end_position.row);
assert_eq(2, start.pT2.pToken.end_position.col);
assert_eq(0, start.pT2.position.row);
assert_eq(1, start.pT2.position.col);
assert_eq(0, start.pT2.end_position.row);
assert_eq(2, start.pT2.end_position.col);
assert_eq(0, start.pT3.pToken.position.row);
assert_eq(3, start.pT3.pToken.position.col);
assert_eq(0, start.pT3.pToken.end_position.row);
assert_eq(5, start.pT3.pToken.end_position.col);
assert_eq(0, start.pT3.position.row);
assert_eq(3, start.pT3.position.col);
assert_eq(0, start.pT3.end_position.row);
assert_eq(5, start.pT3.end_position.col);
assert_eq(0, start.position.row);
assert_eq(0, start.position.col);
assert_eq(0, start.end_position.row);
assert_eq(5, start.end_position.col);
input = "\n\n bb\nc\ncc\n\n a";
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert_eq(2, start.pT1.pToken.position.row);
assert_eq(2, start.pT1.pToken.position.col);
assert_eq(2, start.pT1.pToken.end_position.row);
assert_eq(3, start.pT1.pToken.end_position.col);
assert_eq(2, start.pT1.position.row);
assert_eq(2, start.pT1.position.col);
assert_eq(2, start.pT1.end_position.row);
assert_eq(3, start.pT1.end_position.col);
assert_eq(3, start.pT2.pToken.position.row);
assert_eq(0, start.pT2.pToken.position.col);
assert_eq(4, start.pT2.pToken.end_position.row);
assert_eq(1, start.pT2.pToken.end_position.col);
assert_eq(3, start.pT2.position.row);
assert_eq(0, start.pT2.position.col);
assert_eq(4, start.pT2.end_position.row);
assert_eq(1, start.pT2.end_position.col);
assert_eq(6, start.pT3.pToken.position.row);
assert_eq(5, start.pT3.pToken.position.col);
assert_eq(6, start.pT3.pToken.end_position.row);
assert_eq(5, start.pT3.pToken.end_position.col);
assert_eq(6, start.pT3.position.row);
assert_eq(5, start.pT3.position.col);
assert_eq(6, start.pT3.end_position.row);
assert_eq(5, start.pT3.end_position.col);
assert_eq(2, start.position.row);
assert_eq(2, start.position.col);
assert_eq(6, start.end_position.row);
assert_eq(5, start.end_position.col);
}


@@ -5,29 +5,25 @@
int main()
{
char const * input = "1 + 2 * 3 + 4";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
assert_eq(11, p_result(context));
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
assert_eq(11, p_result(&context));
input = "1 * 2 ** 4 * 3";
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
assert_eq(48, p_result(context));
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
assert_eq(48, p_result(&context));
input = "(1 + 2) * 3 + 4";
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
assert_eq(13, p_result(context));
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
assert_eq(13, p_result(&context));
input = "(2 * 2) ** 3 + 4 + 5";
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
assert_eq(73, p_result(context));
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
assert_eq(73, p_result(&context));
return 0;
}


@@ -10,23 +10,23 @@ int main()
unittest
{
string input = "1 + 2 * 3 + 4";
p_context_t * context;
context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
assert_eq(11, p_result(context));
p_context_t context;
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
assert_eq(11, p_result(&context));
input = "1 * 2 ** 4 * 3";
context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
assert_eq(48, p_result(context));
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
assert_eq(48, p_result(&context));
input = "(1 + 2) * 3 + 4";
context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
assert_eq(13, p_result(context));
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
assert_eq(13, p_result(&context));
input = "(2 * 2) ** 3 + 4 + 5";
context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
assert_eq(73, p_result(context));
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
assert_eq(73, p_result(&context));
}


@@ -1,15 +0,0 @@
#include "testparser.h"
#include <assert.h>
#include <stdio.h>
#include <string.h>
int main()
{
char const * input = " # comment 1\n# comment 2\na\n";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
return 0;
}


@@ -1,16 +0,0 @@
import testparser;
import std.stdio;
import testutils;
int main()
{
return 0;
}
unittest
{
string input = " # comment 1\n# comment 2\na\n";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
}


@@ -5,43 +5,38 @@
int main()
{
char const * input = "a 42";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
input = "a\n123\na a";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_UNEXPECTED_TOKEN);
assert(p_position(context).row == 3);
assert(p_position(context).col == 4);
assert(p_token(context) == TOKEN_a);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_UNEXPECTED_TOKEN);
assert(p_position(&context).row == 2);
assert(p_position(&context).col == 3);
assert(p_token(&context) == TOKEN_a);
input = "12";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_UNEXPECTED_TOKEN);
assert(p_position(context).row == 1);
assert(p_position(context).col == 1);
assert(p_token(context) == TOKEN_num);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_UNEXPECTED_TOKEN);
assert(p_position(&context).row == 0);
assert(p_position(&context).col == 0);
assert(p_token(&context) == TOKEN_num);
input = "a 12\n\nab";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_UNEXPECTED_INPUT);
assert(p_position(context).row == 3);
assert(p_position(context).col == 2);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_UNEXPECTED_INPUT);
assert(p_position(&context).row == 2);
assert(p_position(&context).col == 1);
input = "a 12\n\na\n\n77\na \xAA";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_DECODE_ERROR);
assert(p_position(context).row == 6);
assert(p_position(context).col == 5);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_DECODE_ERROR);
assert(p_position(&context).row == 5);
assert(p_position(&context).col == 4);
assert(strcmp(p_token_names[TOKEN_a], "a") == 0);
assert(strcmp(p_token_names[TOKEN_num], "num") == 0);
p_context_delete(context);
return 0;
}


@@ -9,31 +9,31 @@ int main()
unittest
{
string input = "a 42";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
input = "a\n123\na a";
context = p_context_new(input);
assert(p_parse(context) == P_UNEXPECTED_TOKEN);
assert(p_position(context) == p_position_t(3, 4));
assert(p_token(context) == TOKEN_a);
p_context_init(&context, input);
assert(p_parse(&context) == P_UNEXPECTED_TOKEN);
assert(p_position(&context) == p_position_t(2, 3));
assert(p_token(&context) == TOKEN_a);
input = "12";
context = p_context_new(input);
assert(p_parse(context) == P_UNEXPECTED_TOKEN);
assert(p_position(context) == p_position_t(1, 1));
assert(p_token(context) == TOKEN_num);
p_context_init(&context, input);
assert(p_parse(&context) == P_UNEXPECTED_TOKEN);
assert(p_position(&context) == p_position_t(0, 0));
assert(p_token(&context) == TOKEN_num);
input = "a 12\n\nab";
context = p_context_new(input);
assert(p_parse(context) == P_UNEXPECTED_INPUT);
assert(p_position(context) == p_position_t(3, 2));
p_context_init(&context, input);
assert(p_parse(&context) == P_UNEXPECTED_INPUT);
assert(p_position(&context) == p_position_t(2, 1));
input = "a 12\n\na\n\n77\na \xAA";
context = p_context_new(input);
assert(p_parse(context) == P_DECODE_ERROR);
assert(p_position(context) == p_position_t(6, 5));
p_context_init(&context, input);
assert(p_parse(&context) == P_DECODE_ERROR);
assert(p_position(&context) == p_position_t(5, 4));
assert(p_token_names[TOKEN_a] == "a");
assert(p_token_names[TOKEN_num] == "num");


@@ -6,9 +6,8 @@
int main()
{
char const * input = "foo1\nbar2";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
return 0;
}

View File

@@ -9,7 +9,7 @@ int main()
unittest
{
string input = "foo1\nbar2";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
}


@@ -38,75 +38,73 @@ int main()
p_token_info_t token_info;
char const * input = "5 + 4 * \n677 + 567";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 1u);
assert(token_info.position.col == 1u);
assert(token_info.end_position.row == 1u);
assert(token_info.end_position.col == 1u);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 0u);
assert(token_info.position.col == 0u);
assert(token_info.end_position.row == 0u);
assert(token_info.end_position.col == 0u);
assert(token_info.length == 1u);
assert(token_info.token == TOKEN_int);
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 1u);
assert(token_info.position.col == 3u);
assert(token_info.end_position.row == 1u);
assert(token_info.end_position.col == 3u);
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 0u);
assert(token_info.position.col == 2u);
assert(token_info.end_position.row == 0u);
assert(token_info.end_position.col == 2u);
assert(token_info.length == 1u);
assert(token_info.token == TOKEN_plus);
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 1u);
assert(token_info.position.col == 5u);
assert(token_info.end_position.row == 1u);
assert(token_info.end_position.col == 5u);
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 0u);
assert(token_info.position.col == 4u);
assert(token_info.end_position.row == 0u);
assert(token_info.end_position.col == 4u);
assert(token_info.length == 1u);
assert(token_info.token == TOKEN_int);
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 1u);
assert(token_info.position.col == 7u);
assert(token_info.end_position.row == 1u);
assert(token_info.end_position.col == 7u);
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 0u);
assert(token_info.position.col == 6u);
assert(token_info.end_position.row == 0u);
assert(token_info.end_position.col == 6u);
assert(token_info.length == 1u);
assert(token_info.token == TOKEN_times);
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 2u);
assert(token_info.position.col == 1u);
assert(token_info.end_position.row == 2u);
assert(token_info.end_position.col == 3u);
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 1u);
assert(token_info.position.col == 0u);
assert(token_info.end_position.row == 1u);
assert(token_info.end_position.col == 2u);
assert(token_info.length == 3u);
assert(token_info.token == TOKEN_int);
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 2u);
assert(token_info.position.col == 5u);
assert(token_info.end_position.row == 2u);
assert(token_info.end_position.col == 5u);
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 1u);
assert(token_info.position.col == 4u);
assert(token_info.end_position.row == 1u);
assert(token_info.end_position.col == 4u);
assert(token_info.length == 1u);
assert(token_info.token == TOKEN_plus);
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 2u);
assert(token_info.position.col == 7u);
assert(token_info.end_position.row == 2u);
assert(token_info.end_position.col == 9u);
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 1u);
assert(token_info.position.col == 6u);
assert(token_info.end_position.row == 1u);
assert(token_info.end_position.col == 8u);
assert(token_info.length == 3u);
assert(token_info.token == TOKEN_int);
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 2u);
assert(token_info.position.col == 10u);
assert(token_info.end_position.row == 2u);
assert(token_info.end_position.col == 10u);
assert(token_info.length == 0u);
assert(token_info.token == TOKEN___EOF);
p_context_delete(context);
context = p_context_new((uint8_t const *)"", 0u);
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 1u);
assert(token_info.position.col == 1u);
assert(token_info.position.col == 9u);
assert(token_info.end_position.row == 1u);
assert(token_info.end_position.col == 1u);
assert(token_info.end_position.col == 9u);
assert(token_info.length == 0u);
assert(token_info.token == TOKEN___EOF);
p_context_init(&context, (uint8_t const *)"", 0u);
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info.position.row == 0u);
assert(token_info.position.col == 0u);
assert(token_info.end_position.row == 0u);
assert(token_info.end_position.col == 0u);
assert(token_info.length == 0u);
assert(token_info.token == TOKEN___EOF);
p_context_delete(context);
return 0;
}


@@ -44,26 +44,26 @@ unittest
{
p_token_info_t token_info;
string input = "5 + 4 * \n677 + 567";
p_context_t * context;
context = p_context_new(input);
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(1, 1), p_position_t(1, 1), 1, TOKEN_int));
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(1, 3), p_position_t(1, 3), 1, TOKEN_plus));
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(1, 5), p_position_t(1, 5), 1, TOKEN_int));
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(1, 7), p_position_t(1, 7), 1, TOKEN_times));
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(2, 1), p_position_t(2, 3), 3, TOKEN_int));
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(2, 5), p_position_t(2, 5), 1, TOKEN_plus));
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(2, 7), p_position_t(2, 9), 3, TOKEN_int));
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(2, 10), p_position_t(2, 10), 0, TOKEN___EOF));
p_context_t context;
p_context_init(&context, input);
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(0, 0), p_position_t(0, 0), 1, TOKEN_int));
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(0, 2), p_position_t(0, 2), 1, TOKEN_plus));
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(0, 4), p_position_t(0, 4), 1, TOKEN_int));
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(0, 6), p_position_t(0, 6), 1, TOKEN_times));
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(1, 0), p_position_t(1, 2), 3, TOKEN_int));
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(1, 4), p_position_t(1, 4), 1, TOKEN_plus));
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(1, 6), p_position_t(1, 8), 3, TOKEN_int));
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(1, 9), p_position_t(1, 9), 0, TOKEN___EOF));
context = p_context_new("");
assert(p_lex(context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(1, 1), p_position_t(1, 1), 0, TOKEN___EOF));
p_context_init(&context, "");
assert(p_lex(&context, &token_info) == P_SUCCESS);
assert(token_info == p_token_info_t(p_position_t(0, 0), p_position_t(0, 0), 0, TOKEN___EOF));
}

View File

@@ -6,11 +6,10 @@
int main()
{
char const * input = "identifier_123";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
printf("pass1\n");
p_context_delete(context);
return 0;
}

View File

@@ -9,8 +9,8 @@ int main()
unittest
{
string input = `identifier_123`;
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
writeln("pass1");
}

View File

@@ -6,17 +6,15 @@
int main()
{
char const * input = "abc \"a string\" def";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
printf("pass1\n");
p_context_delete(context);
input = "abc \"abc def\" def";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
printf("pass2\n");
p_context_delete(context);
return 0;
}

View File

@@ -9,13 +9,13 @@ int main()
unittest
{
string input = `abc "a string" def`;
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
writeln("pass1");
input = `abc "abc def" def`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
writeln("pass2");
}

View File

@@ -1,22 +0,0 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include <stdio.h>
int main()
{
char const * input = "abc.def";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
printf("pass1\n");
p_context_delete(context);
input = "abc . abc";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
printf("pass2\n");
p_context_delete(context);
return 0;
}

View File

@@ -1,21 +0,0 @@
import testparser;
import std.stdio;
int main()
{
return 0;
}
unittest
{
string input = `abc.def`;
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
writeln("pass1");
input = `abc . abc`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
writeln("pass2");
}

View File

@@ -5,17 +5,15 @@
int main()
{
char const * input = "x";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 1u);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 1u);
input = "fabulous";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 8u);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 8u);
return 0;
}

View File

@@ -9,13 +9,13 @@ int main()
unittest
{
string input = `x`;
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 1u);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 1u);
input = `fabulous`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 8u);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 8u);
}

View File

@@ -5,16 +5,14 @@
int main()
{
char const * input = "x";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_UNEXPECTED_INPUT);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_UNEXPECTED_INPUT);
input = "123";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 123u);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 123u);
return 0;
}

View File

@@ -9,12 +9,12 @@ int main()
unittest
{
string input = `x`;
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_UNEXPECTED_INPUT);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_UNEXPECTED_INPUT);
input = `123`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 123u);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 123u);
}

View File

@@ -5,10 +5,9 @@
int main()
{
char const * input = "\a\b\t\n\v\f\rt";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
return 0;
}

View File

@@ -9,7 +9,7 @@ int main()
unittest
{
string input = "\a\b\t\n\v\f\rt";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
}

View File

@@ -6,16 +6,14 @@
int main()
{
char const * input1 = "a\n1";
myp1_context_t * context1;
context1 = myp1_context_new((uint8_t const *)input1, strlen(input1));
assert(myp1_parse(context1) == MYP1_SUCCESS);
myp1_context_delete(context1);
myp1_context_t context1;
myp1_context_init(&context1, (uint8_t const *)input1, strlen(input1));
assert(myp1_parse(&context1) == MYP1_SUCCESS);
char const * input2 = "bcb";
myp2_context_t * context2;
context2 = myp2_context_new((uint8_t const *)input2, strlen(input2));
assert(myp2_parse(context2) == MYP2_SUCCESS);
myp2_context_delete(context2);
myp2_context_t context2;
myp2_context_init(&context2, (uint8_t const *)input2, strlen(input2));
assert(myp2_parse(&context2) == MYP2_SUCCESS);
return 0;
}

View File

@@ -10,12 +10,12 @@ int main()
unittest
{
string input1 = "a\n1";
myp1_context_t * context1;
context1 = myp1_context_new(input1);
assert(myp1_parse(context1) == MYP1_SUCCESS);
myp1_context_t context1;
myp1_context_init(&context1, input1);
assert(myp1_parse(&context1) == MYP1_SUCCESS);
string input2 = "bcb";
myp2_context_t * context2;
context2 = myp2_context_new(input2);
assert(myp2_parse(context2) == MYP2_SUCCESS);
myp2_context_t context2;
myp2_context_init(&context2, input2);
assert(myp2_parse(&context2) == MYP2_SUCCESS);
}

View File

@@ -1,54 +0,0 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include "testutils.h"
int main()
{
char const * input = "b";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
assert(start->a == NULL);
assert(start->pToken2 != NULL);
assert_eq(TOKEN_b, start->pToken2->token);
assert(start->pR3 == NULL);
assert(start->pR == NULL);
assert(start->r == NULL);
p_tree_delete(start);
p_context_delete(context);
input = "abcd";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert(start->a != NULL);
assert_eq(TOKEN_a, start->pToken1->token);
assert(start->pToken2 != NULL);
assert(start->pR3 != NULL);
assert(start->pR != NULL);
assert(start->r != NULL);
assert(start->pR == start->pR3);
assert(start->pR == start->r);
assert_eq(TOKEN_c, start->pR->pToken1->token);
p_tree_delete(start);
p_context_delete(context);
input = "bdc";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert(start->a == NULL);
assert(start->pToken2 != NULL);
assert(start->r != NULL);
assert_eq(TOKEN_d, start->pR->pToken1->token);
p_tree_delete(start);
p_context_delete(context);
return 0;
}

View File

@@ -1,51 +0,0 @@
import testparser;
import std.stdio;
import testutils;
int main()
{
return 0;
}
unittest
{
string input = "b";
p_context_t * context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
assert(start.pToken1 is null);
assert(start.pToken2 !is null);
assert_eq(TOKEN_b, start.pToken2.token);
assert(start.pR3 is null);
assert(start.pR is null);
assert(start.r is null);
p_tree_delete(start);
input = "abcd";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert(start.pToken1 != null);
assert_eq(TOKEN_a, start.pToken1.token);
assert(start.pToken2 != null);
assert(start.pR3 != null);
assert(start.pR != null);
assert(start.r != null);
assert(start.pR == start.pR3);
assert(start.pR == start.r);
assert_eq(TOKEN_c, start.pR.pToken1.token);
p_tree_delete(start);
input = "bdc";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert(start.pToken1 is null);
assert(start.pToken2 !is null);
assert(start.pR !is null);
assert_eq(TOKEN_d, start.pR.pToken1.token);
p_tree_delete(start);
}

View File

@@ -5,20 +5,17 @@
int main()
{
char const * input = "b";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
input = "abcd";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
input = "abdc";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
return 0;
}

View File

@@ -9,15 +9,15 @@ int main()
unittest
{
string input = "b";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
input = "abcd";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
input = "abdc";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
}

View File

@@ -6,23 +6,20 @@
int main()
{
char const * input = "b";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
Start * start = p_result(&context);
assert(start->pToken1 == NULL);
assert(start->pToken2 != NULL);
assert_eq(TOKEN_b, start->pToken2->token);
assert(start->pR3 == NULL);
assert(start->pR == NULL);
p_tree_delete(start);
p_context_delete(context);
input = "abcd";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert(start->pToken1 != NULL);
assert_eq(TOKEN_a, start->pToken1->token);
assert(start->pToken2 != NULL);
@@ -31,21 +28,15 @@ int main()
assert(start->pR == start->pR3);
assert_eq(TOKEN_c, start->pR->pToken1->token);
p_tree_delete(start);
p_context_delete(context);
input = "bdc";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert(start->pToken1 == NULL);
assert(start->pToken2 != NULL);
assert(start->pR != NULL);
assert_eq(TOKEN_d, start->pR->pToken1->token);
p_tree_delete(start);
p_context_delete(context);
return 0;
}

View File

@@ -10,21 +10,20 @@ int main()
unittest
{
string input = "b";
p_context_t * context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
Start * start = p_result(&context);
assert(start.pToken1 is null);
assert(start.pToken2 !is null);
assert_eq(TOKEN_b, start.pToken2.token);
assert(start.pR3 is null);
assert(start.pR is null);
p_tree_delete(start);
input = "abcd";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert(start.pToken1 != null);
assert_eq(TOKEN_a, start.pToken1.token);
assert(start.pToken2 != null);
@@ -33,16 +32,12 @@ unittest
assert(start.pR == start.pR3);
assert_eq(TOKEN_c, start.pR.pToken1.token);
p_tree_delete(start);
input = "bdc";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
start = p_result(&context);
assert(start.pToken1 is null);
assert(start.pToken2 !is null);
assert(start.pR !is null);
assert_eq(TOKEN_d, start.pR.pToken1.token);
p_tree_delete(start);
}

View File

@@ -5,15 +5,13 @@
int main()
{
char const * input = "aba";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
input = "abb";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
return 0;
}

View File

@@ -9,11 +9,11 @@ int main()
unittest
{
string input = "aba";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
input = "abb";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
}

View File

@@ -5,23 +5,20 @@
int main()
{
char const * input = "a";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_UNEXPECTED_TOKEN);
assert(p_position(context).row == 1);
assert(p_position(context).col == 2);
assert(context->token == TOKEN___EOF);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_UNEXPECTED_TOKEN);
assert(p_position(&context).row == 0);
assert(p_position(&context).col == 1);
assert(context.token == TOKEN___EOF);
input = "a b";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
input = "bb";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
return 0;
}

View File

@@ -9,17 +9,17 @@ int main()
unittest
{
string input = "a";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_UNEXPECTED_TOKEN);
assert(p_position(context) == p_position_t(1, 2));
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_UNEXPECTED_TOKEN);
assert(p_position(&context) == p_position_t(0, 1));
assert(context.token == TOKEN___EOF);
input = "a b";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
input = "bb";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
}

View File

@@ -5,10 +5,9 @@
int main()
{
char const * input = "ab";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
return 0;
}

View File

@@ -9,7 +9,7 @@ int main()
unittest
{
string input = "ab";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
}

View File

@@ -6,58 +6,51 @@
int main()
{
char const * input = "";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
input = "{}";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context)->id == JSON_OBJECT);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context)->id == JSON_OBJECT);
input = "[]";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context)->id == JSON_ARRAY);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context)->id == JSON_ARRAY);
input = "-45.6";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context)->id == JSON_NUMBER);
assert(p_result(context)->number == -45.6);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context)->id == JSON_NUMBER);
assert(p_result(&context)->number == -45.6);
input = "2E-2";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context)->id == JSON_NUMBER);
assert(p_result(context)->number == 0.02);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context)->id == JSON_NUMBER);
assert(p_result(&context)->number == 0.02);
input = "{\"hi\":true}";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
JSONValue * o = p_result(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
JSONValue * o = p_result(&context);
assert(o->id == JSON_OBJECT);
assert_eq(1, o->object.size);
assert(strcmp(o->object.entries[0].name, "hi") == 0);
assert(o->object.entries[0].value->id == JSON_TRUE);
p_context_delete(context);
input = "{\"ff\": false, \"nn\": null}";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
o = p_result(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
o = p_result(&context);
assert(o->id == JSON_OBJECT);
assert_eq(2, o->object.size);
assert(strcmp(o->object.entries[0].name, "ff") == 0);
assert(o->object.entries[0].value->id == JSON_FALSE);
assert(strcmp(o->object.entries[1].name, "nn") == 0);
assert(o->object.entries[1].value->id == JSON_NULL);
p_context_delete(context);
return 0;
}

View File

@@ -10,45 +10,45 @@ int main()
unittest
{
string input = ``;
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
input = `{}`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(cast(JSONObject)p_result(context));
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(cast(JSONObject)p_result(&context));
input = `[]`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(cast(JSONArray)p_result(context));
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(cast(JSONArray)p_result(&context));
input = `-45.6`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(cast(JSONNumber)p_result(context));
assert((cast(JSONNumber)p_result(context)).value == -45.6);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(cast(JSONNumber)p_result(&context));
assert((cast(JSONNumber)p_result(&context)).value == -45.6);
input = `2E-2`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(cast(JSONNumber)p_result(context));
assert((cast(JSONNumber)p_result(context)).value == 0.02);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(cast(JSONNumber)p_result(&context));
assert((cast(JSONNumber)p_result(&context)).value == 0.02);
input = `{"hi":true}`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(cast(JSONObject)p_result(context));
JSONObject o = cast(JSONObject)p_result(context);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(cast(JSONObject)p_result(&context));
JSONObject o = cast(JSONObject)p_result(&context);
assert(o.value["hi"]);
assert(cast(JSONTrue)o.value["hi"]);
input = `{"ff": false, "nn": null}`;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(cast(JSONObject)p_result(context));
o = cast(JSONObject)p_result(context);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(cast(JSONObject)p_result(&context));
o = cast(JSONObject)p_result(&context);
assert(o.value["ff"]);
assert(cast(JSONFalse)o.value["ff"]);
assert(o.value["nn"]);

View File

@@ -5,23 +5,20 @@
int main()
{
char const * input = "a";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 1u);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 1u);
input = "";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 0u);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 0u);
input = "aaaaaaaaaaaaaaaa";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 16u);
p_context_delete(context);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 16u);
return 0;
}

View File

@@ -9,18 +9,18 @@ int main()
unittest
{
string input = "a";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 1u);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 1u);
input = "";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 0u);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 0u);
input = "aaaaaaaaaaaaaaaa";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
assert(p_result(context) == 16u);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
assert(p_result(&context) == 16u);
}

View File

@@ -6,17 +6,15 @@
int main()
{
char const * input = "abcdef";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
printf("pass1\n");
p_context_delete(context);
input = "defabcdef";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
printf("pass2\n");
p_context_delete(context);
return 0;
}

View File

@@ -9,13 +9,13 @@ int main()
unittest
{
string input = "abcdef";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
writeln("pass1");
input = "defabcdef";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
writeln("pass2");
}

View File

@@ -5,10 +5,9 @@
int main()
{
char const * input = "defghidef";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
p_context_delete(context);
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert(p_parse(&context) == P_SUCCESS);
return 0;
}

View File

@@ -9,7 +9,7 @@ int main()
unittest
{
string input = "defghidef";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
p_context_t context;
p_context_init(&context, input);
assert(p_parse(&context) == P_SUCCESS);
}

View File

@@ -0,0 +1,17 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include "testutils.h"
int main()
{
char const * input = "hi";
p_context_t context;
p_context_init(&context, (uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(&context));
Top * top = p_result(&context);
assert(top->pToken != NULL);
assert_eq(TOKEN_hi, top->pToken->token);
return 0;
}

View File

@@ -10,10 +10,10 @@ int main()
unittest
{
string input = "hi";
p_context_t * context;
context = p_context_new(input);
assert_eq(P_SUCCESS, p_parse(context));
Top * top = p_result(context);
p_context_t context;
p_context_init(&context, input);
assert_eq(P_SUCCESS, p_parse(&context));
Top * top = p_result(&context);
assert(top.pToken !is null);
assert_eq(TOKEN_hi, top.pToken.token);
}

View File

@@ -1,20 +0,0 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include "testutils.h"
int main()
{
char const * input = "hi";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
Top * top = p_result(context);
assert(top->pToken != NULL);
assert_eq(TOKEN_hi, top->pToken->token);
p_tree_delete(top);
p_context_delete(context);
return 0;
}

View File

@@ -1,30 +0,0 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include "testutils.h"
int main()
{
char const * input = "bbbb";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
int result = p_result(context);
assert_eq(8, result);
p_context_delete(context);
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse_Bs(context) == P_SUCCESS);
result = p_result_Bs(context);
assert_eq(8, result);
p_context_delete(context);
input = "c";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse_R(context) == P_SUCCESS);
result = p_result_R(context);
assert_eq(3, result);
p_context_delete(context);
return 0;
}

View File

@@ -1,29 +0,0 @@
import testparser;
import std.stdio;
import testutils;
int main()
{
return 0;
}
unittest
{
string input = "bbbb";
p_context_t * context;
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
int result = p_result(context);
assert(result == 8);
context = p_context_new(input);
assert(p_parse_Bs(context) == P_SUCCESS);
result = p_result_Bs(context);
assert(result == 8);
input = "c";
context = p_context_new(input);
assert(p_parse_R(context) == P_SUCCESS);
result = p_result_R(context);
assert(result == 3);
}

View File

@@ -1,40 +0,0 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include "testutils.h"
int main()
{
char const * input = "bbbb";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
assert_not_null(start->bs);
assert_not_null(start->bs->b);
assert_not_null(start->bs->bs->b);
assert_not_null(start->bs->bs->bs->b);
assert_not_null(start->bs->bs->bs->bs->b);
p_tree_delete(start);
p_context_delete(context);
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse_Bs(context) == P_SUCCESS);
Bs * bs = p_result_Bs(context);
assert_not_null(bs->b);
assert_not_null(bs->bs->b);
assert_not_null(bs->bs->bs->b);
assert_not_null(bs->bs->bs->bs->b);
p_tree_delete_Bs(bs);
p_context_delete(context);
input = "c";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse_R(context) == P_SUCCESS);
R * r = p_result_R(context);
assert_not_null(r->c);
p_tree_delete_R(r);
p_context_delete(context);
return 0;
}

View File

@@ -1,41 +0,0 @@
import testparser;
import std.stdio;
import testutils;
int main()
{
return 0;
}
unittest
{
string input = "bbbb";
p_context_t * context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
assert(start.bs);
assert(start.bs.b);
assert(start.bs.bs.b);
assert(start.bs.bs.bs.b);
assert(start.bs.bs.bs.bs.b);
p_tree_delete(start);
context = p_context_new(input);
assert(p_parse_Bs(context) == P_SUCCESS);
Bs * bs = p_result_Bs(context);
assert(bs.b);
assert(bs.bs.b);
assert(bs.bs.bs.b);
assert(bs.bs.bs.bs.b);
p_tree_delete_Bs(bs);
input = "c";
context = p_context_new(input);
assert(p_parse_R(context) == P_SUCCESS);
R * r = p_result_R(context);
assert(r.c);
p_tree_delete_R(r);
}

View File

@@ -1,46 +0,0 @@
#include "testparser.h"
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
int main()
{
char const * input =
"# c1\n"
"# c2\n"
"\n"
"first\n"
"\n \n \n"
" # s1\n"
" # s2\n"
"second\n";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
assert(start->pIDs);
assert(start->pIDs->id);
#ifdef __cplusplus
assert(start->pIDs->id->comments == "# c1\n# c2\n");
#else
assert(start->pIDs->id->comments);
assert(strcmp(start->pIDs->id->comments, "# c1\n# c2\n") == 0);
#endif
assert(start->pIDs->pIDs);
assert(start->pIDs->pIDs->id);
#ifdef __cplusplus
assert(start->pIDs->pIDs->id->comments == "# s1\n# s2\n");
#else
assert(start->pIDs->pIDs->id->comments);
assert(strcmp(start->pIDs->pIDs->id->comments, "# s1\n# s2\n") == 0);
#endif
#ifndef __cplusplus
free(context->comments);
#endif
p_context_delete(context);
p_tree_delete(start);
return 0;
}


@ -1,31 +0,0 @@
import testparser;
import std.stdio;
int main()
{
return 0;
}
unittest
{
string input =
"# c1\n" ~
"# c2\n" ~
"\n" ~
"first\n" ~
"\n \n \n" ~
" # s1\n" ~
" # s2\n" ~
"second\n";
p_context_t * context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
assert(start.pIDs);
assert(start.pIDs.id);
assert(start.pIDs.id.comments == "# c1\n# c2\n");
assert(start.pIDs.pIDs);
assert(start.pIDs.pIDs.id);
assert(start.pIDs.pIDs.id.comments == "# s1\n# s2\n");
p_tree_delete(start);
}


@ -1,20 +0,0 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include "testutils.h"
int main()
{
char const * input = "ab";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert_eq(P_SUCCESS, p_parse(context));
Start * start = p_result(context);
assert(start->a != NULL);
assert(*start->a->pvalue == 1);
assert(start->b != NULL);
assert(*start->b->pvalue == 2);
p_tree_delete(start);
p_context_delete(context);
}


@ -1,114 +0,0 @@
#include "testparser.h"
#include <assert.h>
#include <string.h>
#include "testutils.h"
int main()
{
char const * input = "\na\n bb ccc";
p_context_t * context;
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
assert_eq(2, start->pT1->pToken->position.row);
assert_eq(1, start->pT1->pToken->position.col);
assert_eq(2, start->pT1->pToken->end_position.row);
assert_eq(1, start->pT1->pToken->end_position.col);
assert(p_position_valid(start->pT1->pA->position));
assert_eq(3, start->pT1->pA->position.row);
assert_eq(3, start->pT1->pA->position.col);
assert_eq(3, start->pT1->pA->end_position.row);
assert_eq(8, start->pT1->pA->end_position.col);
assert_eq(2, start->pT1->position.row);
assert_eq(1, start->pT1->position.col);
assert_eq(3, start->pT1->end_position.row);
assert_eq(8, start->pT1->end_position.col);
assert_eq(2, start->position.row);
assert_eq(1, start->position.col);
assert_eq(3, start->end_position.row);
assert_eq(8, start->end_position.col);
p_tree_delete(start);
p_context_delete(context);
input = "a\nbb";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert_eq(1, start->pT1->pToken->position.row);
assert_eq(1, start->pT1->pToken->position.col);
assert_eq(1, start->pT1->pToken->end_position.row);
assert_eq(1, start->pT1->pToken->end_position.col);
assert(p_position_valid(start->pT1->pA->position));
assert_eq(2, start->pT1->pA->position.row);
assert_eq(1, start->pT1->pA->position.col);
assert_eq(2, start->pT1->pA->end_position.row);
assert_eq(2, start->pT1->pA->end_position.col);
assert_eq(1, start->pT1->position.row);
assert_eq(1, start->pT1->position.col);
assert_eq(2, start->pT1->end_position.row);
assert_eq(2, start->pT1->end_position.col);
assert_eq(1, start->position.row);
assert_eq(1, start->position.col);
assert_eq(2, start->end_position.row);
assert_eq(2, start->end_position.col);
p_tree_delete(start);
p_context_delete(context);
input = "a\nc\nc";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert_eq(1, start->pT1->pToken->position.row);
assert_eq(1, start->pT1->pToken->position.col);
assert_eq(1, start->pT1->pToken->end_position.row);
assert_eq(1, start->pT1->pToken->end_position.col);
assert(p_position_valid(start->pT1->pA->position));
assert_eq(2, start->pT1->pA->position.row);
assert_eq(1, start->pT1->pA->position.col);
assert_eq(3, start->pT1->pA->end_position.row);
assert_eq(1, start->pT1->pA->end_position.col);
assert_eq(1, start->pT1->position.row);
assert_eq(1, start->pT1->position.col);
assert_eq(3, start->pT1->end_position.row);
assert_eq(1, start->pT1->end_position.col);
assert_eq(1, start->position.row);
assert_eq(1, start->position.col);
assert_eq(3, start->end_position.row);
assert_eq(1, start->end_position.col);
p_tree_delete(start);
p_context_delete(context);
input = "a";
context = p_context_new((uint8_t const *)input, strlen(input));
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert_eq(1, start->pT1->pToken->position.row);
assert_eq(1, start->pT1->pToken->position.col);
assert_eq(1, start->pT1->pToken->end_position.row);
assert_eq(1, start->pT1->pToken->end_position.col);
assert(!p_position_valid(start->pT1->pA->position));
assert_eq(1, start->pT1->position.row);
assert_eq(1, start->pT1->position.col);
assert_eq(1, start->pT1->end_position.row);
assert_eq(1, start->pT1->end_position.col);
assert_eq(1, start->position.row);
assert_eq(1, start->position.col);
assert_eq(1, start->end_position.row);
assert_eq(1, start->end_position.col);
p_tree_delete(start);
p_context_delete(context);
return 0;
}


@ -1,111 +0,0 @@
import testparser;
import std.stdio;
import testutils;
int main()
{
return 0;
}
unittest
{
string input = "\na\n bb ccc";
p_context_t * context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
Start * start = p_result(context);
assert_eq(2, start.pT1.pToken.position.row);
assert_eq(1, start.pT1.pToken.position.col);
assert_eq(2, start.pT1.pToken.end_position.row);
assert_eq(1, start.pT1.pToken.end_position.col);
assert(start.pT1.pA.position.valid);
assert_eq(3, start.pT1.pA.position.row);
assert_eq(3, start.pT1.pA.position.col);
assert_eq(3, start.pT1.pA.end_position.row);
assert_eq(8, start.pT1.pA.end_position.col);
assert_eq(2, start.pT1.position.row);
assert_eq(1, start.pT1.position.col);
assert_eq(3, start.pT1.end_position.row);
assert_eq(8, start.pT1.end_position.col);
assert_eq(2, start.position.row);
assert_eq(1, start.position.col);
assert_eq(3, start.end_position.row);
assert_eq(8, start.end_position.col);
p_tree_delete(start);
input = "a\nbb";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert_eq(1, start.pT1.pToken.position.row);
assert_eq(1, start.pT1.pToken.position.col);
assert_eq(1, start.pT1.pToken.end_position.row);
assert_eq(1, start.pT1.pToken.end_position.col);
assert(start.pT1.pA.position.valid);
assert_eq(2, start.pT1.pA.position.row);
assert_eq(1, start.pT1.pA.position.col);
assert_eq(2, start.pT1.pA.end_position.row);
assert_eq(2, start.pT1.pA.end_position.col);
assert_eq(1, start.pT1.position.row);
assert_eq(1, start.pT1.position.col);
assert_eq(2, start.pT1.end_position.row);
assert_eq(2, start.pT1.end_position.col);
assert_eq(1, start.position.row);
assert_eq(1, start.position.col);
assert_eq(2, start.end_position.row);
assert_eq(2, start.end_position.col);
p_tree_delete(start);
input = "a\nc\nc";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert_eq(1, start.pT1.pToken.position.row);
assert_eq(1, start.pT1.pToken.position.col);
assert_eq(1, start.pT1.pToken.end_position.row);
assert_eq(1, start.pT1.pToken.end_position.col);
assert(start.pT1.pA.position.valid);
assert_eq(2, start.pT1.pA.position.row);
assert_eq(1, start.pT1.pA.position.col);
assert_eq(3, start.pT1.pA.end_position.row);
assert_eq(1, start.pT1.pA.end_position.col);
assert_eq(1, start.pT1.position.row);
assert_eq(1, start.pT1.position.col);
assert_eq(3, start.pT1.end_position.row);
assert_eq(1, start.pT1.end_position.col);
assert_eq(1, start.position.row);
assert_eq(1, start.position.col);
assert_eq(3, start.end_position.row);
assert_eq(1, start.end_position.col);
p_tree_delete(start);
input = "a";
context = p_context_new(input);
assert(p_parse(context) == P_SUCCESS);
start = p_result(context);
assert_eq(1, start.pT1.pToken.position.row);
assert_eq(1, start.pT1.pToken.position.col);
assert_eq(1, start.pT1.pToken.end_position.row);
assert_eq(1, start.pT1.pToken.end_position.col);
assert(!start.pT1.pA.position.valid);
assert_eq(1, start.pT1.position.row);
assert_eq(1, start.pT1.position.col);
assert_eq(1, start.pT1.end_position.row);
assert_eq(1, start.pT1.end_position.col);
assert_eq(1, start.position.row);
assert_eq(1, start.position.col);
assert_eq(1, start.end_position.row);
assert_eq(1, start.end_position.col);
p_tree_delete(start);
}

Some files were not shown because too many files have changed in this diff.