Remove duplicate documentation
Some parts of the usage section in the procedural knowledge learning chapter of the Soar manual were not included in the CLI reference, so they were moved there.

Discussion: see SoarGroup#41
moschmdt committed Sep 16, 2024
1 parent 6a3a7af commit c002300
Showing 4 changed files with 76 additions and 603 deletions.
12 changes: 12 additions & 0 deletions docs/reference/cli/cmd_chunk.md
@@ -118,6 +118,18 @@ This feature is not yet implemented.

## Preventing Possible Correctness Issues

It is theoretically possible to detect nearly all sources of correctness issues
and to prevent rules from forming in those situations. In Soar 9.6.0, though,
only one such filter is available, `allow-local-negations`. Future versions of
Soar will include more correctness filters.
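
For example, enabling the filter might look like this. This is a sketch, assuming
the `chunk <setting> <value>` pattern used by the 9.6 CLI; check `chunk ?` in
your build for the exact form:

```
# Do not build chunks from problem solving that tested local negations.
chunk allow-local-negations off
```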

Note that it is still possible to detect that your agent may have encountered a
known source of a correctness issue by looking at the output of the `chunk
stats` command. It has specific statistics for some of the sources, while others
can be gleaned indirectly. For example, if the stats show that some rules
required repair, you know that your agent was testing or augmenting a previous
result in a substate.
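
For example, after a run you can inspect the statistics directly:

```
# Print chunking statistics; look for non-zero counts of rules that
# required repair or that hit known correctness-issue sources.
chunk stats
```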

### chunk allow-local-negations

The option `allow-local-negations` controls whether or not chunks can be created
43 changes: 43 additions & 0 deletions docs/reference/cli/cmd_explain.md
@@ -329,6 +329,49 @@ explainer will write out a file with the statistics when either Soar exits or a
`soar init` is executed. This option is still considered experimental and in
beta.

## Explaining Learned Procedural Knowledge

While explanation-based chunking makes it easier for people to incorporate
learning into their agents, the complexity of the analysis it performs makes it
far more difficult to understand how the learned rules were formed. The
explainer is a new module developed to help ameliorate this problem. It allows
you to interactively explore how rules were learned.

When requested, the explainer will make a very detailed record of everything
that happened during a learning episode. Once a user specifies a recorded chunk
to "discuss", they can browse all of the rule firings that contributed to the
learned rule, one at a time. The explainer presents each of these rules with
detailed information about the identity of the variables, whether it tested
knowledge relevant to the superstate, and how it is connected to other rule
firings in the substate. Rule firings are assigned IDs so that the user can
quickly choose a new rule to examine.
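
A minimal session might look like the following sketch. The chunk name is
hypothetical, and the `explain chunk` / `explain instantiation` subcommand forms
are assumptions based on this pattern rather than text from this page:

```
# Start discussing a recorded chunk by name (or by its numeric ID).
explain chunk chunk*apply-move        # hypothetical chunk name
# Examine one rule firing that contributed to it, by its assigned ID.
explain instantiation 3
```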

The explainer can also present several different screens that show more verbose
analyses of how the chunk was created. Specifically, the user can ask for a
description of (1) the chunk’s initial formation, (2) the identities of
variables and how they map to identity sets, (3) the constraints that the
problem-solving placed on values that a particular identity can have, and (4)
specific statistics about that chunk, such as whether correctness issues were
detected or whether it required repair to make it fully operational.
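
A sketch of how those four screens might be requested; these subcommand names
are assumptions that mirror the four descriptions above, not text from this page:

```
explain formation      # (1) the chunk's initial formation
explain identity       # (2) identities and their mapping to identity sets
explain constraints    # (3) constraints on the values an identity can have
explain stats          # (4) per-chunk statistics, e.g. repair, correctness
```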

Finally, the explainer will also create the data necessary to visualize all of
the processing described, as an image, using the new `visualize` command. These
visualizations are the easiest way to quickly understand how a rule was formed.
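
For example, the following sketch renders the currently discussed chunk;
`ebc_analysis` as the visualization type is an assumption, not text from this
page:

```
# Render the explainer's record of the discussed chunk via Graphviz.
visualize ebc_analysis
```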

Note that, despite recording so much information, a lot of effort has been put
into minimizing the cost of the explainer. When debugging, we often let it
record all chunks and justifications formed because it is efficient enough to do
so.
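
Recording everything might look like this sketch, assuming an
`explain all <on|off>` setting:

```
# Record every chunk and justification as it is learned.
explain all on
```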

Use the `explain` command without any arguments to display a summary of which
rule firings the explainer is watching. It also shows which chunk or
justification the user has specified as the current focus of its output, i.e.
the chunk being discussed.

Tip: This is a good way to get a chunk ID so that you don’t have to type or
paste in a chunk name.
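
For example:

```
# Summarize watched rule firings, recorded chunks, and the current focus.
explain
```
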
## Visualizing an Explanation
Soar's `visualize` command allows you to create images that represent processing
10 changes: 8 additions & 2 deletions docs/reference/cli/cmd_visualize.md
@@ -104,7 +104,7 @@ applies to visualizing memory systems.
## File Handling Settings

`file-name` specifies the base file name that Soar will use when creating both
graphviz data files and images. You can specify a path as well, for example
Graphviz data files and images. You can specify a path as well, for example
"visualization/soar_viz", but make sure the directory exists first!

`use-same-file` tells the visualizer to always overwrite the same files for each
@@ -114,7 +114,7 @@ command does not yet handle file creation as robustly as it could. If the file
already exists, it will simply overwrite it rather than looking for a new file
name.

`generate-image` specifies whether the visualizer should render the graphviz
`generate-image` specifies whether the visualizer should render the Graphviz
file into an image. This setting is overridden if the viewer-launch setting is
enabled.
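
For example, a sketch in the same assumed `visualize <setting> <value>` form;
the `on`/`off` values are assumptions:

```
# Render each Graphviz data file into an image automatically.
visualize generate-image on
```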

@@ -144,6 +144,12 @@ Note that your operating system chooses which program to launch based on the
file type. This feature has not been tested extensively on other platforms.
Certain systems may not allow Soar to launch an external program.

???+ note

    For the visualizer to work, you must have Graphviz and DOT installed; these
    are free third-party tools, and both must be available on your path. To
    date, the visualizer has only been tested on Mac and Linux, and certain
    systems may not allow Soar to launch an external program.
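
    A quick way to verify the prerequisite from a terminal (`dot` is
    Graphviz's layout program, and `dot -V` prints its version):

    ```
    dot -V
    ```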

## See Also

- [explain](./cmd_explain.md)