Compare commits

..

17 Commits

Author SHA1 Message Date
Jozef Izso
12c7abe9ab Add conclusion output column to integration test summary table
- Added job outputs to expose the conclusion from each test scenario
- Added a new "Conclusion" column to the summary table with colored badges
- Shows the actual conclusion output (🟢 success / 🔴 failure / ⚫ N/A)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-14 15:12:30 +01:00
Jozef Izso
3b5ad0231b Update scenario 4 to be a regression test for issue #217
The bug has been fixed - the conclusion output now correctly reflects
test failures independent of the fail-on-error setting. Updated the comments
and summary to indicate that this is now a regression test.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-14 15:12:15 +01:00
Jozef Izso
c89704a410 Add integration tests for fail-on-error and fail-on-empty scenarios (#217)
Add workflow and fixtures to test the behavior of fail-on-error and
fail-on-empty parameters across different scenarios:

- Passing tests with fail-on-error true/false
- Failing tests with fail-on-error true/false
- Empty results with fail-on-empty true/false

Scenario 4 (failing tests + fail-on-error=false) is expected to fail
until issue #217 is fixed, documenting the bug where the check conclusion
shows 'success' even when tests fail.

The workflow outputs a GitHub Actions summary with a markdown table
showing all test results.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-14 15:11:36 +01:00
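As context for the scenarios above, here is a minimal, hypothetical sketch (not part of this changeset) of how a consuming workflow might combine `fail-on-error: 'false'` with the `conclusion` output; the workflow name, job, step id, and report path are illustrative only:

```yaml
# Hypothetical consumer workflow; not part of this compare.
name: Example - read the conclusion output
on: push
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      # Publish the check but keep this step green even if tests failed
      - uses: dorny/test-reporter@v2
        id: tests
        with:
          name: 'Unit Tests'            # check run name (example)
          path: 'reports/junit/*.xml'   # example path to JUnit XML results
          reporter: java-junit
          fail-on-error: 'false'
      # With the #217 fix, conclusion reflects the real test outcome
      - name: React to a red check
        if: steps.tests.outputs.conclusion == 'failure'
        run: echo "Tests failed (conclusion=${{ steps.tests.outputs.conclusion }})"
```

With the issue #217 fix in place, the final step above would run whenever the published check is red, even though the reporter step itself stayed green.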
Jozef Izso
ee446707ff Merge pull request #692 from dorny/release/v2.3.0 2025-11-30 01:52:48 +01:00
Jozef Izso
fe45e95373 test-reporter release v2.3.0 2025-11-30 01:49:30 +01:00
Jozef Izso
e40a1da745 Merge pull request #682 from dorny/dependabot/npm_and_yarn/reports/mocha/multi-f14266366f 2025-11-30 01:01:42 +01:00
dependabot[bot]
3445860437 Bump js-yaml and mocha in /reports/mocha
Bumps [js-yaml](https://github.com/nodeca/js-yaml) to 4.1.1 and updates ancestor dependency [mocha](https://github.com/mochajs/mocha). These dependencies need to be updated together.


Updates `js-yaml` from 4.0.0 to 4.1.1
- [Changelog](https://github.com/nodeca/js-yaml/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nodeca/js-yaml/compare/4.0.0...4.1.1)

Updates `mocha` from 8.3.0 to 11.7.5
- [Release notes](https://github.com/mochajs/mocha/releases)
- [Changelog](https://github.com/mochajs/mocha/blob/v11.7.5/CHANGELOG.md)
- [Commits](https://github.com/mochajs/mocha/compare/v8.3.0...v11.7.5)

---
updated-dependencies:
- dependency-name: js-yaml
  dependency-version: 4.1.1
  dependency-type: indirect
- dependency-name: mocha
  dependency-version: 11.7.5
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-29 23:45:48 +00:00
Jozef Izso
9ef5c136b2 Merge pull request #691 from dorny/fix/complete-documentation 2025-11-30 00:40:18 +01:00
Jozef Izso
83e20c1534 Merge pull request #685 from dorny/dependabot/npm_and_yarn/reports/jest/js-yaml-3.14.2 2025-11-30 00:37:29 +01:00
Jozef Izso
4331a3b620 Clarify the dotnet-nunit docs to require NUnit3TestAdapter for nunit logger 2025-11-23 15:26:03 +01:00
Jozef Izso
04232af26f Complete documentation for all supported reporters
This commit addresses several documentation gaps to ensure all implemented
reporters are properly documented across action.yml and README.md.

Changes:
1. Updated action.yml description to include all supported languages:
   - Added: Go, Python (pytest, unittest), Ruby (RSpec), Swift

2. Added Ruby/RSpec to supported languages list in README.md

3. Added detailed documentation sections in README.md:
   - dotnet-nunit: Added section with NUnit3 XML format instructions
   - rspec-json: Added section with RSpec JSON formatter configuration

All reporters now have:
- Entry in action.yml description
- Entry in README supported languages list
- Entry in README usage documentation (reporter input)
- Detailed documentation section in README "Supported formats"
- Implementation in src/main.ts
- Tests in __tests__/

This ensures users can discover and use all available reporters without
confusion about what is supported.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-22 18:05:33 +01:00
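To illustrate the newly documented `dotnet-nunit` reporter, a hypothetical workflow fragment follows (assumptions: the project already references the NUnit3TestAdapter package and the test run writes `test-results.xml`, matching the README snippet added in this diff):

```yaml
# Hypothetical workflow fragment; paths and names are examples only.
name: Example - dotnet-nunit reporter
on: push
jobs:
  nunit-report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      # Requires the NUnit3TestAdapter package, which registers the `nunit` logger
      - run: dotnet test --logger "nunit;LogFileName=test-results.xml"
      - uses: dorny/test-reporter@v2
        if: success() || failure()   # publish the report even when tests fail
        with:
          name: 'NUnit Tests'
          path: '**/test-results.xml'
          reporter: dotnet-nunit
```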
Jozef Izso
cf146f4036 Merge pull request #690 from dorny/fix/add-golang-json-to-action-yml 2025-11-22 17:50:03 +01:00
Jozef Izso
33fc27cf09 Merge pull request #687 from dorny/dependabot/github_actions/actions/checkout-6 2025-11-22 17:49:02 +01:00
dependabot[bot]
fc80cb4400 Bump actions/checkout from 5 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-21 23:07:16 +00:00
dependabot[bot]
79ea6a9d0e Bump js-yaml from 3.14.0 to 3.14.2 in /reports/jest
Bumps [js-yaml](https://github.com/nodeca/js-yaml) from 3.14.0 to 3.14.2.
- [Changelog](https://github.com/nodeca/js-yaml/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nodeca/js-yaml/compare/3.14.0...3.14.2)

---
updated-dependencies:
- dependency-name: js-yaml
  dependency-version: 3.14.2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-18 19:47:48 +00:00
Jozef Izso
aef3d726a6 Merge pull request #683 from micmarc/feature/python-pytest 2025-11-15 18:19:24 +01:00
Michael Marcus
c1a56edcfe Enhance pytest support
Add robust test schema for pytest report
Update README with sample pytest command
2025-11-15 11:55:41 -05:00
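A hypothetical usage sketch for the enhanced pytest support, based on the README example in this diff (`python-xunit` reporter; the report path is illustrative):

```yaml
# Hypothetical workflow fragment; the report path is an example.
name: Example - pytest via python-xunit
on: push
jobs:
  pytest-report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      # pytest writes a JUnit-style XML report (see the README change below)
      - run: pytest --junit-xml=test-report.xml
      - uses: dorny/test-reporter@v2
        if: success() || failure()
        with:
          name: 'pytest'
          path: 'test-report.xml'
          reporter: python-xunit
```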
21 changed files with 4583 additions and 2311 deletions

View File

@@ -21,7 +21,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v6
      - name: Set Node.js
        uses: actions/setup-node@v6

View File

@@ -13,7 +13,7 @@ jobs:
    name: Build & Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v6
      - uses: actions/setup-node@v6
        with:
          node-version-file: '.nvmrc'

View File

@@ -0,0 +1,320 @@
name: Integration Tests (#217) - fail-on-error/fail-on-empty
on:
  workflow_dispatch:
  push:
  pull_request:
    paths:
      - 'src/**'
      - 'dist/**'
      - 'action.yml'
      - '.github/workflows/integration-tests.yml'
      - '__tests__/fixtures/integration/**'
jobs:
  # ============================================
  # Scenario 1: Passing tests, fail-on-error=true
  # Expected: Step passes, conclusion=success
  # ============================================
  test-passing-fail-on-error-true:
    name: "Passing tests | fail-on-error=true"
    runs-on: ubuntu-slim
    outputs:
      conclusion: ${{ steps.report.outputs.conclusion }}
    steps:
      - uses: actions/checkout@v6
      - name: Run test reporter
        id: report
        uses: ./
        with:
          name: 'Integration Test - Passing (fail-on-error=true)'
          path: '__tests__/fixtures/integration/passing-tests.xml'
          reporter: java-junit
          fail-on-error: 'true'
          fail-on-empty: 'true'
      - name: Validate results
        run: |
          echo "=== Test Results ==="
          echo "Step outcome: success (would have failed otherwise)"
          echo "Conclusion: ${{ steps.report.outputs.conclusion }}"
          echo "Passed: ${{ steps.report.outputs.passed }}"
          echo "Failed: ${{ steps.report.outputs.failed }}"
          if [ "${{ steps.report.outputs.conclusion }}" != "success" ]; then
            echo "FAIL: Expected conclusion 'success' but got '${{ steps.report.outputs.conclusion }}'"
            exit 1
          fi
          echo "PASS: All validations passed"
  # ============================================
  # Scenario 2: Passing tests, fail-on-error=false
  # Expected: Step passes, conclusion=success
  # ============================================
  test-passing-fail-on-error-false:
    name: "Passing tests | fail-on-error=false"
    runs-on: ubuntu-slim
    outputs:
      conclusion: ${{ steps.report.outputs.conclusion }}
    steps:
      - uses: actions/checkout@v6
      - name: Run test reporter
        id: report
        uses: ./
        with:
          name: 'Integration Test - Passing (fail-on-error=false)'
          path: '__tests__/fixtures/integration/passing-tests.xml'
          reporter: java-junit
          fail-on-error: 'false'
          fail-on-empty: 'true'
      - name: Validate results
        run: |
          echo "=== Test Results ==="
          echo "Conclusion: ${{ steps.report.outputs.conclusion }}"
          if [ "${{ steps.report.outputs.conclusion }}" != "success" ]; then
            echo "FAIL: Expected conclusion 'success' but got '${{ steps.report.outputs.conclusion }}'"
            exit 1
          fi
          echo "PASS: All validations passed"
  # ============================================
  # Scenario 3: Failing tests, fail-on-error=true
  # Expected: Step FAILS, conclusion=failure
  # ============================================
  test-failing-fail-on-error-true:
    name: "Failing tests | fail-on-error=true"
    runs-on: ubuntu-slim
    outputs:
      conclusion: ${{ steps.report.outputs.conclusion }}
    steps:
      - uses: actions/checkout@v6
      - name: Run test reporter
        id: report
        continue-on-error: true
        uses: ./
        with:
          name: 'Integration Test - Failing (fail-on-error=true)'
          path: '__tests__/fixtures/integration/failing-tests.xml'
          reporter: java-junit
          fail-on-error: 'true'
          fail-on-empty: 'true'
      - name: Validate results
        run: |
          echo "=== Test Results ==="
          echo "Step outcome: ${{ steps.report.outcome }}"
          echo "Conclusion: ${{ steps.report.outputs.conclusion }}"
          echo "Failed count: ${{ steps.report.outputs.failed }}"
          # Step should fail
          if [ "${{ steps.report.outcome }}" != "failure" ]; then
            echo "FAIL: Expected step to fail but got '${{ steps.report.outcome }}'"
            exit 1
          fi
          # Conclusion should be failure
          if [ "${{ steps.report.outputs.conclusion }}" != "failure" ]; then
            echo "FAIL: Expected conclusion 'failure' but got '${{ steps.report.outputs.conclusion }}'"
            exit 1
          fi
          echo "PASS: All validations passed"
  # ============================================
  # Scenario 4: Failing tests, fail-on-error=false
  # Expected: Step passes, conclusion=failure
  # Regression test for issue #217
  # ============================================
  test-failing-fail-on-error-false:
    name: "Failing tests | fail-on-error=false [#217]"
    runs-on: ubuntu-slim
    outputs:
      conclusion: ${{ steps.report.outputs.conclusion }}
    steps:
      - uses: actions/checkout@v6
      - name: Run test reporter
        id: report
        continue-on-error: true
        uses: ./
        with:
          name: 'Integration Test - Failing (fail-on-error=false)'
          path: '__tests__/fixtures/integration/failing-tests.xml'
          reporter: java-junit
          fail-on-error: 'false'
          fail-on-empty: 'true'
      - name: Validate results
        run: |
          echo "=== Test Results ==="
          echo "Step outcome: ${{ steps.report.outcome }}"
          echo "Conclusion: ${{ steps.report.outputs.conclusion }}"
          echo "Failed count: ${{ steps.report.outputs.failed }}"
          # Step should pass (fail-on-error is false)
          if [ "${{ steps.report.outcome }}" != "success" ]; then
            echo "FAIL: Expected step to pass but got '${{ steps.report.outcome }}'"
            exit 1
          fi
          # Conclusion SHOULD be 'failure' because tests failed
          # Regression test for issue #217
          if [ "${{ steps.report.outputs.conclusion }}" != "failure" ]; then
            echo "========================================"
            echo "REGRESSION DETECTED (Issue #217)"
            echo "========================================"
            echo "Expected conclusion 'failure' but got '${{ steps.report.outputs.conclusion }}'"
            echo "The check conclusion should reflect test results,"
            echo "independent of the fail-on-error setting."
            echo "========================================"
            exit 1
          fi
          echo "PASS: All validations passed"
  # ============================================
  # Scenario 5: Empty results, fail-on-empty=true
  # Expected: Step FAILS
  # ============================================
  test-empty-fail-on-empty-true:
    name: "Empty results | fail-on-empty=true"
    runs-on: ubuntu-slim
    outputs:
      conclusion: ${{ steps.report.outputs.conclusion || 'N/A' }}
    steps:
      - uses: actions/checkout@v6
      - name: Run test reporter
        id: report
        continue-on-error: true
        uses: ./
        with:
          name: 'Integration Test - Empty (fail-on-empty=true)'
          path: '__tests__/fixtures/integration/nonexistent-*.xml'
          reporter: java-junit
          fail-on-error: 'true'
          fail-on-empty: 'true'
      - name: Validate results
        run: |
          echo "=== Test Results ==="
          echo "Step outcome: ${{ steps.report.outcome }}"
          # Step should fail (no files found)
          if [ "${{ steps.report.outcome }}" != "failure" ]; then
            echo "FAIL: Expected step to fail but got '${{ steps.report.outcome }}'"
            exit 1
          fi
          echo "PASS: Step correctly failed on empty results"
  # ============================================
  # Scenario 6: Empty results, fail-on-empty=false
  # Expected: Step passes, conclusion=success
  # ============================================
  test-empty-fail-on-empty-false:
    name: "Empty results | fail-on-empty=false"
    runs-on: ubuntu-slim
    outputs:
      conclusion: ${{ steps.report.outputs.conclusion || 'N/A' }}
    steps:
      - uses: actions/checkout@v6
      - name: Run test reporter
        id: report
        continue-on-error: true
        uses: ./
        with:
          name: 'Integration Test - Empty (fail-on-empty=false)'
          path: '__tests__/fixtures/integration/nonexistent-*.xml'
          reporter: java-junit
          fail-on-error: 'true'
          fail-on-empty: 'false'
      - name: Validate results
        run: |
          echo "=== Test Results ==="
          echo "Step outcome: ${{ steps.report.outcome }}"
          # Step should pass (fail-on-empty is false)
          if [ "${{ steps.report.outcome }}" != "success" ]; then
            echo "FAIL: Expected step to pass but got '${{ steps.report.outcome }}'"
            exit 1
          fi
          echo "PASS: Step correctly passed with empty results"
  # ============================================
  # Summary job to report overall status
  # ============================================
  summary:
    name: "Test Summary"
    needs:
      - test-passing-fail-on-error-true
      - test-passing-fail-on-error-false
      - test-failing-fail-on-error-true
      - test-failing-fail-on-error-false
      - test-empty-fail-on-empty-true
      - test-empty-fail-on-empty-false
    runs-on: ubuntu-slim
    if: always()
    steps:
      - name: Generate summary
        run: |
          # Helper function to convert result to emoji
          result_to_emoji() {
            case "$1" in
              success) echo "✅ Pass" ;;
              failure) echo "❌ Fail" ;;
              cancelled) echo "⚪ Cancelled" ;;
              skipped) echo "⏭️ Skipped" ;;
              *) echo "❓ Unknown" ;;
            esac
          }
          # Helper function to format conclusion
          conclusion_to_badge() {
            case "$1" in
              success) echo "🟢 success" ;;
              failure) echo "🔴 failure" ;;
              N/A) echo "⚫ N/A" ;;
              *) echo "⚪ $1" ;;
            esac
          }
          # Generate markdown summary
          cat >> $GITHUB_STEP_SUMMARY << 'EOF'
          # Integration Test Results
          ## fail-on-error / fail-on-empty Scenarios
          | Scenario | Test Results | fail-on-error | fail-on-empty | Expected | Conclusion | Result |
          |----------|--------------|---------------|---------------|----------|------------|--------|
          EOF
          echo "| 1 | All pass | \`true\` | \`true\` | Step: pass, Check: success | $(conclusion_to_badge "${{ needs.test-passing-fail-on-error-true.outputs.conclusion }}") | $(result_to_emoji "${{ needs.test-passing-fail-on-error-true.result }}") |" >> $GITHUB_STEP_SUMMARY
          echo "| 2 | All pass | \`false\` | \`true\` | Step: pass, Check: success | $(conclusion_to_badge "${{ needs.test-passing-fail-on-error-false.outputs.conclusion }}") | $(result_to_emoji "${{ needs.test-passing-fail-on-error-false.result }}") |" >> $GITHUB_STEP_SUMMARY
          echo "| 3 | Some fail | \`true\` | \`true\` | Step: fail, Check: failure | $(conclusion_to_badge "${{ needs.test-failing-fail-on-error-true.outputs.conclusion }}") | $(result_to_emoji "${{ needs.test-failing-fail-on-error-true.result }}") |" >> $GITHUB_STEP_SUMMARY
          echo "| 4 | Some fail | \`false\` | \`true\` | Step: pass, Check: failure | $(conclusion_to_badge "${{ needs.test-failing-fail-on-error-false.outputs.conclusion }}") | $(result_to_emoji "${{ needs.test-failing-fail-on-error-false.result }}") |" >> $GITHUB_STEP_SUMMARY
          echo "| 5 | Empty | \`true\` | \`true\` | Step: fail | $(conclusion_to_badge "${{ needs.test-empty-fail-on-empty-true.outputs.conclusion }}") | $(result_to_emoji "${{ needs.test-empty-fail-on-empty-true.result }}") |" >> $GITHUB_STEP_SUMMARY
          echo "| 6 | Empty | \`true\` | \`false\` | Step: pass | $(conclusion_to_badge "${{ needs.test-empty-fail-on-empty-false.outputs.conclusion }}") | $(result_to_emoji "${{ needs.test-empty-fail-on-empty-false.result }}") |" >> $GITHUB_STEP_SUMMARY
          cat >> $GITHUB_STEP_SUMMARY << 'EOF'
          ---
          > **Scenario 4** is a regression test for [issue #217](https://github.com/dorny/test-reporter/issues/217).
          > It verifies that `conclusion` output correctly reflects test failures, independent of `fail-on-error` setting.
          > When `fail-on-error=false`, the step should pass but `conclusion` should still be `failure` if tests failed.
          EOF
          # Also print to console
          echo "=== Integration Test Summary ==="
          echo "Scenario 1 (pass, fail-on-error=true): ${{ needs.test-passing-fail-on-error-true.result }}"
          echo "Scenario 2 (pass, fail-on-error=false): ${{ needs.test-passing-fail-on-error-false.result }}"
          echo "Scenario 3 (fail, fail-on-error=true): ${{ needs.test-failing-fail-on-error-true.result }}"
          echo "Scenario 4 (fail, fail-on-error=false): ${{ needs.test-failing-fail-on-error-false.result }} (regression test for #217)"
          echo "Scenario 5 (empty, fail-on-empty=true): ${{ needs.test-empty-fail-on-empty-true.result }}"
          echo "Scenario 6 (empty, fail-on-empty=false): ${{ needs.test-empty-fail-on-empty-false.result }}"

View File

@@ -8,7 +8,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v6
      - run: npm ci
      - run: npm run build
      - run: npm test

View File

@@ -11,7 +11,7 @@ jobs:
    name: Workflow test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v6
      - uses: ./
        with:
          artifact: test-results

View File

@@ -1,5 +1,12 @@
# Changelog
## 2.3.0
* Feature: Add Python support with `python-xunit` reporter (pytest) https://github.com/dorny/test-reporter/pull/643
* Feature: Add pytest traceback parsing and `directory-mapping` option https://github.com/dorny/test-reporter/pull/238
* Performance: Update sax.js to fix large XML file parsing https://github.com/dorny/test-reporter/pull/681
* Documentation: Complete documentation for all supported reporters https://github.com/dorny/test-reporter/pull/691
* Security: Bump js-yaml and mocha in /reports/mocha (fixes prototype pollution) https://github.com/dorny/test-reporter/pull/682
## 2.2.0
* Feature: Add collapsed option to control report summary visibility https://github.com/dorny/test-reporter/pull/664
* Fix badge encoding for values including underscore and hyphens https://github.com/dorny/test-reporter/pull/672

View File

@@ -20,6 +20,7 @@ This [Github Action](https://github.com/features/actions) displays test results
- Java / [JUnit](https://junit.org/)
- JavaScript / [JEST](https://jestjs.io/) / [Mocha](https://mochajs.org/)
- Python / [pytest](https://docs.pytest.org/en/stable/) / [unittest](https://docs.python.org/3/library/unittest.html)
- Ruby / [RSpec](https://rspec.info/)
- Swift / xUnit
For more information see [Supported formats](#supported-formats) section.
@@ -256,6 +257,20 @@ Supported testing frameworks:
For more information see [dotnet test](https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test#examples)
</details>
<details>
<summary>dotnet-nunit</summary>
Test execution must be configured to generate [NUnit3](https://docs.nunit.org/articles/nunit/technical-notes/usage/Test-Result-XML-Format.html) XML test results.
Install the [NUnit3TestAdapter](https://www.nuget.org/packages/NUnit3TestAdapter) package (required; it registers the `nunit` logger for `dotnet test`), then run tests with:
`dotnet test --logger "nunit;LogFileName=test-results.xml"`
Supported testing frameworks:
- [NUnit](https://nunit.org/)
For more information see [dotnet test](https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test#examples)
</details>
<details>
<summary>flutter-json</summary>
@@ -357,9 +372,34 @@ Please update Mocha to version [v9.1.0](https://github.com/mochajs/mocha/release
Support for Python test results in xUnit format is experimental - should work but it was not extensively tested.
For pytest support, configure [JUnit XML output](https://docs.pytest.org/en/stable/how-to/output.html#creating-junitxml-format-files) and run with the `--junit-xml` option, which also lets you specify the output path for test results.
For **pytest** support, configure [JUnit XML output](https://docs.pytest.org/en/stable/how-to/output.html#creating-junitxml-format-files) and run with the `--junit-xml` option, which also lets you specify the output path for test results.
For unittest support, use a test runner that outputs the JUnit report format, such as [unittest-xml-reporting](https://pypi.org/project/unittest-xml-reporting/).
```shell
pytest --junit-xml=test-report.xml
```
For **unittest** support, use a test runner that outputs the JUnit report format, such as [unittest-xml-reporting](https://pypi.org/project/unittest-xml-reporting/).
</details>
<details>
<summary>rspec-json</summary>
[RSpec](https://rspec.info/) testing framework support requires the use of the JSON formatter.
You can configure RSpec to output JSON format by using the `--format json` option and redirecting to a file:
```shell
rspec --format json --out rspec-results.json
```
Or configure it in `.rspec` file:
```
--format json
--out rspec-results.json
```
For more information see:
- [RSpec documentation](https://rspec.info/)
- [RSpec Formatters](https://relishapp.com/rspec/rspec-core/docs/formatters)
</details>
<details>

View File

@@ -0,0 +1,26 @@
![Tests failed](https://img.shields.io/badge/tests-6%20passed%2C%202%20failed%2C%202%20skipped-critical)
|Report|Passed|Failed|Skipped|Time|
|:---|---:|---:|---:|---:|
|[fixtures/python-xunit-pytest.xml](#user-content-r0)|6 ✅|2 ❌|2 ⚪|19ms|
## ❌ <a id="user-content-r0" href="#user-content-r0">fixtures/python-xunit-pytest.xml</a>
**10** tests were completed in **19ms** with **6** passed, **2** failed and **2** skipped.
|Test suite|Passed|Failed|Skipped|Time|
|:---|---:|---:|---:|---:|
|[pytest](#user-content-r0s0)|6 ✅|2 ❌|2 ⚪|19ms|
### ❌ <a id="user-content-r0s0" href="#user-content-r0s0">pytest</a>
```
tests.test_lib
✅ test_always_pass
✅ test_with_subtests
✅ test_parameterized[param1]
✅ test_parameterized[param2]
⚪ test_always_skip
❌ test_always_fail
assert False
⚪ test_expected_failure
❌ test_error
Exception: error
✅ test_with_record_property
custom_classname
✅ test_with_record_xml_attribute
```

View File

@@ -1,5 +1,110 @@
// Jest Snapshot v1, https://jestjs.io/docs/snapshot-testing
exports[`python-xunit pytest report report from python test results matches snapshot 1`] = `
TestRunResult {
"path": "fixtures/python-xunit-pytest.xml",
"suites": [
TestSuiteResult {
"groups": [
TestGroupResult {
"name": "tests.test_lib",
"tests": [
TestCaseResult {
"error": undefined,
"name": "test_always_pass",
"result": "success",
"time": 2,
},
TestCaseResult {
"error": undefined,
"name": "test_with_subtests",
"result": "success",
"time": 5,
},
TestCaseResult {
"error": undefined,
"name": "test_parameterized[param1]",
"result": "success",
"time": 0,
},
TestCaseResult {
"error": undefined,
"name": "test_parameterized[param2]",
"result": "success",
"time": 0,
},
TestCaseResult {
"error": undefined,
"name": "test_always_skip",
"result": "skipped",
"time": 0,
},
TestCaseResult {
"error": {
"details": "def test_always_fail():
> assert False
E assert False
tests/test_lib.py:25: AssertionError
",
"line": undefined,
"message": "assert False",
"path": undefined,
},
"name": "test_always_fail",
"result": "failed",
"time": 0,
},
TestCaseResult {
"error": undefined,
"name": "test_expected_failure",
"result": "skipped",
"time": 0,
},
TestCaseResult {
"error": {
"details": "def test_error():
> raise Exception("error")
E Exception: error
tests/test_lib.py:32: Exception
",
"line": undefined,
"message": "Exception: error",
"path": undefined,
},
"name": "test_error",
"result": "failed",
"time": 0,
},
TestCaseResult {
"error": undefined,
"name": "test_with_record_property",
"result": "success",
"time": 0,
},
],
},
TestGroupResult {
"name": "custom_classname",
"tests": [
TestCaseResult {
"error": undefined,
"name": "test_with_record_xml_attribute",
"result": "success",
"time": 0,
},
],
},
],
"name": "pytest",
"totalTime": 19,
},
],
"totalTime": undefined,
}
`;
exports[`python-xunit unittest report report from python test results matches snapshot 1`] = `
TestRunResult {
"path": "fixtures/python-xunit-unittest.xml",

View File

@@ -0,0 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="EmptySuite" tests="0" failures="0" errors="0" time="0">
  <testsuite name="EmptySuite" tests="0" failures="0" errors="0" time="0">
  </testsuite>
</testsuites>

View File

@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="FailingSuite" tests="3" failures="1" errors="0" time="0.5">
  <testsuite name="FailingSuite" tests="3" failures="1" errors="0" time="0.5">
    <testcase name="should pass test 1" classname="FailingSuite" time="0.1"/>
    <testcase name="should fail test 2" classname="FailingSuite" time="0.2">
      <failure message="Assertion failed" type="AssertionError">
        Expected: true
        Received: false
        at Object.test (/test/example.test.js:10:5)
      </failure>
    </testcase>
    <testcase name="should pass test 3" classname="FailingSuite" time="0.2"/>
  </testsuite>
</testsuites>

View File

@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="PassingSuite" tests="3" failures="0" errors="0" time="0.5">
  <testsuite name="PassingSuite" tests="3" failures="0" errors="0" time="0.5">
    <testcase name="should pass test 1" classname="PassingSuite" time="0.1"/>
    <testcase name="should pass test 2" classname="PassingSuite" time="0.2"/>
    <testcase name="should pass test 3" classname="PassingSuite" time="0.2"/>
  </testsuite>
</testsuites>

View File

@@ -0,0 +1,42 @@
<?xml version="1.0" encoding="utf-8"?>
<testsuites name="pytest tests">
  <testsuite name="pytest" errors="0" failures="2" skipped="2" tests="15" time="0.019"
             timestamp="2025-11-15T11:51:49.548396-05:00" hostname="Mac.hsd1.va.comcast.net">
    <properties>
      <property name="custom_prop" value="custom_val"/>
    </properties>
    <testcase classname="tests.test_lib" name="test_always_pass" time="0.002"/>
    <testcase classname="tests.test_lib" name="test_with_subtests" time="0.005"/>
    <testcase classname="tests.test_lib" name="test_parameterized[param1]" time="0.000"/>
    <testcase classname="tests.test_lib" name="test_parameterized[param2]" time="0.000"/>
    <testcase classname="tests.test_lib" name="test_always_skip" time="0.000">
      <skipped type="pytest.skip" message="skipped">/Users/mike/Projects/python-test/tests/test_lib.py:20: skipped
</skipped>
    </testcase>
    <testcase classname="tests.test_lib" name="test_always_fail" time="0.000">
      <failure message="assert False">def test_always_fail():
&gt; assert False
E assert False
tests/test_lib.py:25: AssertionError
</failure>
    </testcase>
    <testcase classname="tests.test_lib" name="test_expected_failure" time="0.000">
      <skipped type="pytest.xfail" message=""/>
    </testcase>
    <testcase classname="tests.test_lib" name="test_error" time="0.000">
      <failure message="Exception: error">def test_error():
&gt; raise Exception("error")
E Exception: error
tests/test_lib.py:32: Exception
</failure>
    </testcase>
    <testcase classname="tests.test_lib" name="test_with_record_property" time="0.000">
      <properties>
        <property name="example_key" value="1"/>
      </properties>
    </testcase>
    <testcase classname="custom_classname" name="test_with_record_xml_attribute" time="0.000"/>
  </testsuite>
</testsuites>

View File

@@ -15,9 +15,9 @@ describe('python-xunit unittest report', () => {
  const fixturePath = path.join(__dirname, 'fixtures', 'python-xunit-unittest.xml')
  const filePath = normalizeFilePath(path.relative(__dirname, fixturePath))
  const fileContent = fs.readFileSync(fixturePath, {encoding: 'utf8'})
  const outputPath = path.join(__dirname, '__outputs__', 'python-xunit-unittest.md')
  it('report from python test results matches snapshot', async () => {
    const outputPath = path.join(__dirname, '__outputs__', 'python-xunit.md')
    const trackedFiles = ['tests/test_lib.py']
    const opts: ParseOptions = {
      ...defaultOpts,
@@ -68,3 +68,26 @@ describe('python-xunit unittest report', () => {
    expect(report).toMatch(/^# My Custom Title\n/)
  })
})
describe('python-xunit pytest report', () => {
  const fixturePath = path.join(__dirname, 'fixtures', 'python-xunit-pytest.xml')
  const filePath = normalizeFilePath(path.relative(__dirname, fixturePath))
  const fileContent = fs.readFileSync(fixturePath, {encoding: 'utf8'})
  const outputPath = path.join(__dirname, '__outputs__', 'python-xunit-pytest.md')
  it('report from python test results matches snapshot', async () => {
    const trackedFiles = ['tests/test_lib.py']
    const opts: ParseOptions = {
      ...defaultOpts,
      trackedFiles
    }
    const parser = new PythonXunitParser(opts)
    const result = await parser.parse(filePath, fileContent)
    expect(result).toMatchSnapshot()
    const report = getReport([result])
    fs.mkdirSync(path.dirname(outputPath), {recursive: true})
    fs.writeFileSync(outputPath, report)
  })
})

View File

@@ -1,6 +1,5 @@
name: Test Reporter
description: |
  Shows test results in GitHub UI: .NET (xUnit, NUnit, MSTest), Dart, Flutter, Java (JUnit), JavaScript (JEST, Mocha)
description: Displays test results from popular testing frameworks directly in GitHub
author: Michal Dorner <dorner.michal@gmail.com>
inputs:
  artifact:

package-lock.json (generated)
View File

@@ -1,12 +1,12 @@
{
  "name": "test-reporter",
  "version": "2.2.0",
  "version": "2.3.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "test-reporter",
      "version": "2.2.0",
      "version": "2.3.0",
      "license": "MIT",
      "dependencies": {
        "@actions/core": "^1.11.1",

View File

@@ -1,6 +1,6 @@
{
  "name": "test-reporter",
  "version": "2.2.0",
  "version": "2.3.0",
  "private": true,
  "description": "Presents test results from popular testing frameworks as Github check run",
  "main": "lib/main.js",

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -9,6 +9,6 @@
"author": "Michal Dorner <dorner.michal@gmail.com>",
"license": "MIT",
"devDependencies": {
"mocha": "^8.3.0"
"mocha": "^11.7.5"
}
}