Test cases and regression
Test cases turn important events into automatic checks. Any time the content changes, RuleForge runs the cases and tells you if something stopped working. This practice is called regression testing.
Why bother
A rule that detects correctly today can silently stop matching after a tweak elsewhere. Without test cases, you only find out in production; with cases, RuleForge warns you first.
What a case records
- Name and description.
- The sample event.
- The log format.
- What's expected as a result: which decoder should apply, which rule should fire, which fields should be extracted.
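Put together, a case amounts to a small structured record. A minimal sketch in Python (the field names and values here are illustrative, not RuleForge's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Expectation:
    """What should happen when the sample event is processed."""
    decoder: str                                 # decoder that should apply
    rule_id: str                                 # rule that should fire
    fields: dict = field(default_factory=dict)   # fields that should be extracted

@dataclass
class TestCase:
    name: str
    description: str
    sample_event: str    # the raw event captured during investigation
    log_format: str      # e.g. "syslog", "json"
    expected: Expectation

# A hypothetical case built from a captured SSH event:
case = TestCase(
    name="ssh-failed-login",
    description="A failed SSH login should be decoded and fire the auth rule",
    sample_event="Failed password for root from 203.0.113.7 port 22 ssh2",
    log_format="syslog",
    expected=Expectation(
        decoder="sshd",
        rule_id="auth-failed-login",
        fields={"src_ip": "203.0.113.7", "user": "root"},
    ),
)
```

The point of the structure is that every part of the expectation is machine-comparable, so a run can report exactly which piece diverged.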
Create a case
- In the project, open Cases.
- Click New case.
- Enter the name, log format, and event.
- Save.
- Fill in the expectations (what should happen when the event is processed).
Tip: capture the real events you investigated during the work. They make the best foundation for your cases.
Run cases
- Run a single case — useful after a specific change that affects only that detection.
- Run batch regression — runs every case in the project. Do this before opening a review or creating a version.
The result shows which cases passed, which failed, and, for each failure, what diverged from the expectation.
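A batch run boils down to replaying every case and diffing the outcome against its expectations. A sketch of that loop, assuming a hypothetical `process(event, log_format)` function that stands in for the real pipeline and returns the decoder, rule, and extracted fields:

```python
def run_regression(cases, process):
    """Replay every case; return a per-case report of what diverged."""
    failures = {}
    for case in cases:
        actual = process(case["sample_event"], case["log_format"])
        diverged = {
            key: (want, actual.get(key))          # (expected, actual) per key
            for key, want in case["expected"].items()
            if actual.get(key) != want
        }
        if diverged:
            failures[case["name"]] = diverged
    return failures  # empty dict means every case passed

# Usage with a stub pipeline standing in for the real one:
def fake_process(event, log_format):
    return {"decoder": "sshd", "rule": "auth-failed-login"}

cases = [
    {"name": "ok", "sample_event": "...", "log_format": "syslog",
     "expected": {"decoder": "sshd", "rule": "auth-failed-login"}},
    {"name": "broken", "sample_event": "...", "log_format": "syslog",
     "expected": {"decoder": "sshd", "rule": "auth-password-spray"}},
]
result = run_regression(cases, fake_process)
# "ok" passes; "broken" is reported with the rule that diverged.
```

Reporting expected-versus-actual per key, rather than a bare pass/fail, is what makes the failure output actionable.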
Best practices
- Record cases for the most sensitive scenarios early.
- Fill expectations carefully — a poorly defined case produces confusing "failures".
- Always run regression before opening a review.
- Keep cases alive: when a detection's purpose changes, update the matching case so its expectations stay true.
Common issues
"Regression failed but I don't understand why"
Check that the case's expectations reflect the correct behavior. An out-of-date expectation is a frequent cause of false failures.
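An out-of-date expectation looks exactly like a real break in the report. A toy illustration of how a stale case flags a correct, intentionally retuned detection (all names are made up):

```python
# The detection was intentionally retuned to fire a different rule...
actual = {"decoder": "sshd", "rule": "auth-credential-stuffing"}
# ...but the case still encodes the old behavior.
stale_expectation = {"decoder": "sshd", "rule": "auth-brute-force"}

diverged = {
    key: (want, actual[key])
    for key, want in stale_expectation.items()
    if actual[key] != want
}
# The report flags the rule mismatch as a "failure" — one that is fixed
# by updating the case's expectation, not by changing the content.
```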
"I can't edit a case"
Your role may allow running but not editing. See Roles and permissions.
"The result changed when I switched workspaces"
That's expected. The content of the active workspace influences the case's result.