testing

#define testing: \
  I-------------------------------------\
  I  _____         _   _                \
  I |_   _|       | | (_)               \
  I   | | ___  ___| |_ _ _ __   __ _    \
  I   | |/ _ \/ __| __| | '_ \ / _` |   \
  I   | |  __/\__ \ |_| | | | | (_| |   \
  I   \_/\___||___/\__|_|_| |_|\__, |   \
  I                             __/ |   \
  I                            |___/    \
  I-------------------------------------I
• the act of searching for bugs
• the absence of evidence is not the evidence of absence
• there are known knowns, there are known unknowns,
  but there are also unknown unknowns:
  things that we don't know that we don't know
• untested non-trivial software WILL have bugs
• well testable systems treat tests as an internal component;
  they are not simply an afterthought or a secondary consideration,
  but something integrated into the core project architecture
• for maximum testability, consider having a dedicated test interface
• a test regarding individual components {branches; functions; objects; module-s} is called a component test
• a component test regarding a method is called a unit test (see BELOW)
• a test performed on the whole software is called a system test
• a test regarding the relation between 2 or more components or systems is called an integration test
• an alpha test is a test performed by internal people {managers; stakeholders}, but not the developers
• a beta test is a test performed by a small subset of the end users
• a deployment test is a test performed by the end users outside of production
• a fault is a static defect in the code
• an error is an incorrect internal state caused by a fault
• a failure is an incorrect, observed behaviour with regards to the expected behaviour
• FIRST - "Fast, Independent, Repeatable, Self-validating, Timely" (catch-phrase to be applied to tests)
• testing is the act of failure and error discovery
Black_box:
  • specification based
  • emulates real world usage
  • usually performed by a second party
White_box:"structured testing"
  • source code based
Gray_box:
  • transition between white and black box testing
  • the source code is
    partially known
Tests_as_code:
  • each test shares a common interface;
    this is usually defined by the tool, library or even framework the project uses
  ○ common interface paradigms:
    • each test is a method inside a class inheriting from a special ancestor
    • each test is a function within a special file
    • each test has a special annotation
    • each test must throw on error
    • each test must return 0 to signal success (C convention)
    • each test must return 1 to signal success (Ada bool function convention)
oop_and_tests:
  • a common problem is that OOP basically prohibits component testing
  • since class-es encapsulate, nothing should be able to access their private internals, not even tests
  • this is usually resolved with various hacks and workarounds {reflection}
  • friendship can be a good solution on the language design level
Unit_test:
  • the most loved kid of the test-type family
  • quite often people only write unit tests,
    because that's the only thing they can remember from their worthless education
  • there are various unit testing frameworks;
    each is language specific,
    because we live in a dark age where no one has heard about code generation
  • people often use the term unit test to refer to component tests;
    this is rooted in radical OOP's lack of free functions,
    but is also a rather reasonable convention.
### Unit test example in Javascript ###
  • javascript, because that's the most unsafe language i can think of
  • unit tests are technically defined as tests around methods,
    because they originate from the radically OOP lands of Java;
    if anyone asks, mention the ambient class and call it a day
{
  // Function to be tested
  function add(a, b) {
    return a + b;
  }

  // Test basic assumptions regarding the result
  function add_test1() {
    let i = add(1, 1);
    return typeof i === 'number';
  }
  function add_test2() {
    let a = 1;
    let b = 1;
    let i = add(a, b);
    /* pretend the tested function is more complicated,
     * so that such a mathematical condition would make
     * practical sense
     */
    return (i > a && i > b);
  }

  // Throw shit at the fan using known outputs
  /* NOTE: on a conceptual level, this is also how
   * (most forms of) AIs are validated
   */
  function add_test3() {
    return (add(     1,  1) ==      2
         && add(     3,  2) ==      5
         && add(100000,  1) == 100001
         && add(    -1, -1) ==     -2
         && add(    -1,  1) ==      0
    );
  }

  // Call to all tests so you may insert this into a browser console
  console.log(add_test1());
  console.log(add_test2());
  console.log(add_test3());
}
#
{
  // Horrid unit tests:
  //  the following example is "Martin R.
  //   'Agile Software Development, Principles, Patterns and Practices'
  //    Listing 4-2"
  public void testPayroll() {
    MockEmployeeDatabase db = new MockEmployeeDatabase();
    MockCheckWriter w = new MockCheckWriter();
    Payroll p = new Payroll(db, w);
    p.payEmployees();
    assert(w.checksWereWrittenCorrectly());
    assert(db.paymentsWerePostedCorrectly());
  }
  // Not sure if you caught it: but that tests jackshit.
  // Any compiler will catch if a -mind you, trivial-
  //  object cannot be initialized or its functions have
  //  invalid returns.
  // I would like to stress that there's no immediate context missing;
  //  this is painfully obvious when you consider that
  //  we are using a mock implementation.
  // The larger context however is TDD.
  // This sentiment that we would have not been able to come up
  //  with this genius interface if not for creating this test.
}
TDD:"Test Driven Development"
  • tests are written before the code which shall pass them
  • makes sense on a basic level, but then manages to become radical crazy-speak
  • unless you are hopelessly incompetent, it cripples development speed
  ○ laws
    1. You may not write production code until you have written a failing unit test.
       Also referred to as "red-green-red", due to the iterative process of:
        I.   Writing a failing test
        II.  Making the test pass
        III. goto I
    2. You may not write a unit test with more code than what is sufficient for it to fail,
       and not compiling is considered failing.
    3. You may not write more production code than what is sufficient to pass the currently failing test.
BDD: Cucumber
  • "Behaviour Driven Development"
  • utter insanity
  • "TDD II: now with more bloat"
  • the core idea: presenting user-stories as pseudo natural language,
    then wrapping it in a DSL and implementing a test accordingly in the initial language of the project;
    which adds more code to maintain,
    a sloppy compiler to satisfy,
    an extra layer of indirection to test with no guarantees,
    and the mental overhead of reading natural text and processing it as code
    (no indentation, barely any keywords to recognize; no syntax highlighting, no conventions);
    FOR NO BLOODY BENEFIT AT ALL
  • the theory is that you will make your analyst write code,
    while pretending it's not code,
    closing the window for miscommunication
  • for the last bloody time, normies are unwilling and unable to code,
    especially with any quality; see: SQL, excel, GPT
FDD:"Fear Driven Development"
  https://agvxov.github.io/fdd/fdd_manifesto.pdf
Prototype:
  Throw_away:
    • single purpose
    • will not be reused
    • code can be extremely low quality {slow; unreadable; hard to expand; insecure},
      because it will not influence the end result
    • cheap to make
    • used for demonstration and proof of concept purposes
    • a throw away
      prototype of how the end product will look is called a screen design
    • there are so called mockup and wireframe tools allowing for very quick creation of semi-functional GUIs
      (clickable and navigable, but there's no backend providing meaningful functionalities)
  Evolutionary:
    • will be reused
    • code must comply with the end quality
    • trashing is expensive
    • not uncommon that overcommitment to it holds development back
Risks_mitigation:
  • the seriousness of a risk is the product of its aspects
  ○ aspects of risk
    • probability
    • potential damage
  ○ steps
    • identification
    • eval
    • reduction
    • communication
  — TOE:
    • "Target of Evaluation"
    • the software
  — PP:
    • "Protection Profile"
    • special type of documentation
    • paper specifying privilege groups
  — ST:
    • "Security Target"
    • list of security requirements
    • PPs included
  — SFR:
    • "Security Functional Requirements"
    • special type of documentation
  — SAR:
    • "Security Assurance Requirements"
    • special type of documentation
  ○ guides
    — COBIT:
      • "Control Objectives for Information and related Technologies"
      • created by ISACA
      • nobody knows what it actually does,
        but it sure as hell is important to mention in classes (for whatever reason)
    — ITB:
      pass
    — IBK:
      • "Informatikai Biztonsági Koncepció" (IT Security Concept)^HU
    — CCITSE:
      • "Common Criteria for Information Technology Security Evaluation"
    — EAL:
      • "Evaluation Assurance Level"
      • numeric value to grade the ST
      • higher grades include the ones smaller than themselves
        1. Functionally tested
        2. Structurally tested
        3. Methodically tested and checked
        4. Methodically designed, tested and reviewed
        5. Semi-formally designed and tested
        6. Semi-formally verified design and tested
        7. Formally verified design and tested
Test_design:
  1. Do math or analysis to obtain test requirements
  2. Find input values that satisfy the test requirements
  3. Automate the tests
  4. Run the tests
  5.
     Evaluate the tests
----------------
cmdtest:
----------------
  • "cli unit testing utility"
  • written in, and uses, ruby
  • pretty satisfactory
  • since you are given a whole-ass language, you could pass in hacky stuff through ENV
  https://holmberg556.bitbucket.io/cmdtest/doc/cmdtest.html
cmdtest [options] [testfile]
  --fast : cmdtest waits by default; it's dumb, this option disables it
Files:
  Default_search_paths:
    • similar to Make's Makefile
    • in order
      1. t/CMDTEST_*.rb
      2. test/CMDTEST_*.rb
      3. CMDTEST_*.rb
Tests:
  • a test file is a ruby script
  • all test files inherit from Cmdtest::Testcase
  • testing is defined as methods
  • the environment is not modified
  • each test executes in its own, sterile directory
Methods:
  setup    : called before each test; can set up the environment for tests
  teardown : called after each test; can free resources; temp files are deleted by default
  test_*   : test to run
Functions:
  • these functions are provided by Cmdtest to ease testing
  cmd "<command>" <function>
  skip_test
  environment:
    import_file(src, dest)
    import_directory(src, dest)
  assertive:
    exit_zero
    exit_nonzero
    exit_status
    created_files
    changed_files
    removed_files
    written_files
    affected_files
    file_equal
    file_encoding
    stdout_equal
    stderr_equal
{
  # test/CMDTEST_myproject.rb
  class CMDTEST_example < Cmdtest::Testcase
    def test_1
      cmd "program.out" do
        exit_status 17
      end
    end
  end

  # to accept any output:
  #  using regex
  stderr_equal /.+/

  # to pull in input files to the sandbox:
  def setup
    import_file "muh_file.txt", "./"
  end
}
---------------
postman:
---------------
  • industry standard REST API testing tool
  • curl, but with buttons
  • perfect example of over-engineering
  • i wish it was good; for the time being httpie seems one's best bet
---------------
Cypress:
---------------
  • browser testing framework in node.js
  • comparable to Selenium
Programs:
  cypress <verb>
    open : run gui
Files:
  cypress/       : root directory of a cypress project;
  │                 usually sits integrated inside another project
  └── e2e/       :
"end-to-end"; legacy name; user test container └── *.cy.js : user test Test_files: object cy: visit(<url>) ElementList get(<selector>) request(<string-method>, <string-path>, <object-body>) contains(<string>) class Element type(<string>) class ElementList eq(<int>) : returns the <int>th element ---------------- Cucumber ---------------- • BAHAHAAHAHAAHAHAAAAAAAHAAAAAAAAAAHHAAAAAAHAAHAHAHAHAHA — uses *.feature files to defile tests • DSL • "natural language" (arbitrary atomic strings) are used to describe events • events are connected by operators • ordered into "scenario"s • the arbitrary strings are annotated to vanilla functions • you write boiler plate, so you can express your tests as this not-natural-language, not-code bullshit