npx skills add https://github.com/github/awesome-copilot --skill csharp-nunit

How Csharp Nunit fits into a Paperclip company
Csharp Nunit drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.
Pre-configured AI company — 18 agents, 18 skills, one-time purchase.
SKILL.md (71 lines)
---
name: csharp-nunit
description: 'Get best practices for NUnit unit testing, including data-driven tests'
---

# NUnit Best Practices

Your goal is to help me write effective unit tests with NUnit, covering both standard and data-driven testing approaches.

## Project Setup

- Use a separate test project with naming convention `[ProjectName].Tests`
- Reference Microsoft.NET.Test.Sdk, NUnit, and NUnit3TestAdapter packages
- Create test classes that match the classes being tested (e.g., `CalculatorTests` for `Calculator`)
- Use .NET SDK test commands: `dotnet test` for running tests

## Test Structure

- Apply `[TestFixture]` attribute to test classes
- Use `[Test]` attribute for test methods
- Follow the Arrange-Act-Assert (AAA) pattern
- Name tests using the pattern `MethodName_Scenario_ExpectedBehavior`
- Use `[SetUp]` and `[TearDown]` for per-test setup and teardown
- Use `[OneTimeSetUp]` and `[OneTimeTearDown]` for per-class setup and teardown
- Use `[SetUpFixture]` for assembly-level setup and teardown

## Standard Tests

- Keep tests focused on a single behavior
- Avoid testing multiple behaviors in one test method
- Use clear assertions that express intent
- Include only the assertions needed to verify the test case
- Make tests independent and idempotent (can run in any order)
- Avoid test interdependencies

## Data-Driven Tests

- Use `[TestCase]` for inline test data
- Use `[TestCaseSource]` for programmatically generated test data
- Use `[Values]` for simple parameter combinations
- Use `[ValueSource]` for property or method-based data sources
- Use `[Random]` for random numeric test values
- Use `[Range]` for sequential numeric test values
- Use `[Combinatorial]` or `[Pairwise]` for combining multiple parameters

## Assertions

- Use `Assert.That` with the constraint model (preferred NUnit style)
- Use constraints like `Is.EqualTo`, `Is.SameAs`, `Contains.Item`
- Use `Assert.AreEqual` for simple value equality (classic style)
- Use `CollectionAssert` for collection comparisons
- Use `StringAssert` for string-specific assertions
- Use `Assert.Throws<T>` or `Assert.ThrowsAsync<T>` to test exceptions
- Use descriptive messages in assertions for clarity on failure

## Mocking and Isolation

- Consider using Moq or NSubstitute alongside NUnit
- Mock dependencies to isolate units under test
- Use interfaces to facilitate mocking
- Consider using a DI container for complex test setups

## Test Organization

- Group tests by feature or component
- Use categories with `[Category("CategoryName")]`
- Use `[Order]` to control test execution order when necessary
- Use `[Author("DeveloperName")]` to indicate ownership
- Use `[Description]` to provide additional test information
- Consider `[Explicit]` for tests that shouldn't run automatically
- Use `[Ignore("Reason")]` to temporarily skip tests

Add Educational Comments
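The conventions above can be sketched in a single test fixture. This is a minimal example, not part of the skill itself; the `Calculator` class and its methods are hypothetical stand-ins used to illustrate the AAA pattern, per-test `[SetUp]`, the `Assert.That` constraint model, inline `[TestCase]` data, and exception testing with `Assert.Throws<T>`:

```csharp
using System;
using NUnit.Framework;

// Hypothetical class under test, for illustration only.
public class Calculator
{
    public int Add(int a, int b) => a + b;
    public int Divide(int a, int b) => a / b;
}

[TestFixture]
public class CalculatorTests
{
    private Calculator _calculator;

    [SetUp]
    public void SetUp()
    {
        // A fresh instance per test keeps tests independent and idempotent.
        _calculator = new Calculator();
    }

    [Test]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        // Act
        int result = _calculator.Add(2, 3);

        // Assert: constraint model, with a message for clarity on failure.
        Assert.That(result, Is.EqualTo(5), "2 + 3 should equal 5");
    }

    // Data-driven: one test method, three inline cases.
    [TestCase(1, 1, 2)]
    [TestCase(0, 5, 5)]
    [TestCase(-2, 2, 0)]
    public void Add_VariousInputs_ReturnsExpectedSum(int a, int b, int expected)
    {
        Assert.That(_calculator.Add(a, b), Is.EqualTo(expected));
    }

    [Test]
    public void Divide_ByZero_ThrowsDivideByZeroException()
    {
        Assert.Throws<DivideByZeroException>(() => _calculator.Divide(1, 0));
    }
}
```

Note the test names follow `MethodName_Scenario_ExpectedBehavior`, so a failure report reads as a sentence describing what broke.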
Takes any code file and transforms it into a teaching resource by adding educational comments that explain syntax, design choices, and language concepts. Automa…
Agent Governance
When your AI agents start calling APIs, touching databases, or executing shell commands, you need guardrails before something goes sideways. This gives you comp…
Agentic Eval
Implements self-critique loops where Claude generates output, evaluates it against your criteria, then refines based on its own feedback. Includes evaluator-opt…