Testing - Validation - Stability

Discussion in 'General Webmaster Support' started by rover2341, Apr 27, 2015.

  1. rover2341

    rover2341 Is riding a roller coaster...Wee!

    Ratings:
    +114 / 0 / -0
    Problem: You create a website/application that starts to get complex, or at the very least large in functionality. Functions that don't use any resources (SQL, files, etc.) can be tested with functional and unit testing. But functions that use SQL become trickier. My goal is to address those issues here, and to get feedback on how others deal with this.

    Functional:

    Think: a user does something and expects an output. That action could hit 1 function or 100.

    Unit:

    Think: a sub-section of code that is tested as a unit. Start high and work down.

    Code Coverage:

    Full code coverage would mean every function in the program is covered by a test.


    Testing:
    1 test case for every expected outcome.
    Determine every expected outcome by reviewing the code.

    Data Layer: Used to protect the integrity of the data (SQL, files, etc.)

    In conclusion: this amount of testing should be sufficient if the framework your code is sitting on (.NET, Java, etc.) is reliable, and all cases needed to cover the code are covered.

    It should not be required to test with 10,000 test cases.

    This can be done manually or automatically.

    Code:
    public string Select(int x)
    {
         if (x == 1)
             return "One";
         else if (x == 2)
             return "Two";
         else
             throw new Exception("Invalid Selection");
    }
    Input: 1 Output: One
    Input: 2 Output: Two
    Input: 3 Output: Exception: Invalid Selection
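    Those three cases map directly onto an automated test. A minimal sketch in Java (the snippet above is C#, so the function is re-declared here to keep the test self-contained):

```java
public class SelectTest {
    // Java re-declaration of the Select function above
    static String select(int x) {
        if (x == 1) return "One";
        else if (x == 2) return "Two";
        else throw new RuntimeException("Invalid Selection");
    }

    public static void main(String[] args) {
        // One test case per expected outcome, as described above
        if (!select(1).equals("One")) throw new AssertionError("case 1 failed");
        if (!select(2).equals("Two")) throw new AssertionError("case 2 failed");
        boolean threw = false;
        try { select(3); } catch (RuntimeException e) { threw = true; }
        if (!threw) throw new AssertionError("case 3 should have thrown");
        System.out.println("all 3 Select cases pass");
    }
}
```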

    Code:
    public int Add(int x, int y)
    {
        return x + y;
    }
    Input: 5, 5
    Output: 10

    Input: null, 5
    Output: Error (note: with value-type int parameters this won't even compile, since an int can't be null)

    Code:
    public int AddPoint(string name)
    {
        //Gets Person from SQL
        Person x = GetPersonByName(name);
        x.Points = x.Points + 1;

        //Saves Person
        SavePerson(x);

        return x.Points;
    }
    Input: Tom
    Output: 1

    Input: Tom
    Output: 2

    Input: Frank
    Output: Error Get Person Failed

    This seems testable. But it's only testable if the table is in a valid state.
    For example: what if someone changed Points to null in another function, or to an invalid number like -100?
    Then this function, which relies on that data, won't work.

    So you have 2 options. Option 1: validate the things you work with every time you work with them, i.e. add checks in this function for null and so on.
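    A minimal sketch of option 1 in Java (the original is C#; the in-memory table stands in for SQL, and the null/range checks are the addition):

```java
import java.util.HashMap;
import java.util.Map;

public class ValidationSketch {
    static class Person {
        String name;
        Integer points; // nullable on purpose, to show the check
        Person(String name, Integer points) { this.name = name; this.points = points; }
    }

    // In-memory stand-in for the SQL table
    static Map<String, Person> table = new HashMap<>();

    static int addPoint(String name) {
        Person x = table.get(name);
        if (x == null)
            throw new RuntimeException("Get Person Failed: " + name);
        // Defensive checks: reject invalid state left behind by other code
        if (x.points == null || x.points < 0)
            throw new RuntimeException("Invalid Points value for " + name);
        x.points = x.points + 1;
        table.put(x.name, x); // "SavePerson"
        return x.points;
    }

    public static void main(String[] args) {
        table.put("Tom", new Person("Tom", 0));
        table.put("Broken", new Person("Broken", null));
        System.out.println(addPoint("Tom")); // 1
        System.out.println(addPoint("Tom")); // 2
        try { addPoint("Broken"); } catch (RuntimeException e) {
            System.out.println(e.getMessage()); // Invalid Points value for Broken
        }
    }
}
```

    The downside is that every function touching the data repeats these checks, which is what motivates option 2 below.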

    OR

    Create a Data Layer:

    This layer would contain all queries, and would validate that everything being updated/created/deleted is valid for the program to work, instead of having queries scattered throughout the program.
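    A minimal sketch of that data layer in Java (the class and method names are made up for illustration, and an in-memory map stands in for the real SQL backend): every write funnels through one place, and that place validates before storing.

```java
import java.util.HashMap;
import java.util.Map;

public class DataLayer {
    static class Person {
        String name;
        int points;
        Person(String name, int points) { this.name = name; this.points = points; }
    }

    // In-memory stand-in for the real SQL backend
    private final Map<String, Person> store = new HashMap<>();

    public Person getPersonByName(String name) {
        Person p = store.get(name);
        if (p == null) throw new RuntimeException("Get Person Failed: " + name);
        return p;
    }

    public void savePerson(Person p) {
        // Central validation: nothing invalid ever reaches the store
        if (p.name == null || p.name.isEmpty())
            throw new IllegalArgumentException("Person must have a name");
        if (p.points < 0)
            throw new IllegalArgumentException("Points must be >= 0");
        store.put(p.name, p);
    }
}
```

    Because every write goes through savePerson, a function like AddPoint can trust that Points is never null or negative when it reads.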



    If I am wrong on this, let me know! I personally don't do this as of today, but I plan to start tomorrow. I hate when people ask me "does this work?" and I can't tell them it does with a very high degree of certainty. Testing things at random, or only as I code, is not sufficient. Documenting these things is important, so you know what's been tested and what hasn't.

    Things can quickly become invalid if anything related to any of the tests changes, so I don't believe test-once is the right way. Test efficiently and effectively. Automated testing is neat if you can get to that level.


    Do you test your code? If so how?
     
  2. Slapshot136

    Slapshot136 Divide et impera

    Ratings:
    +483 / 2 / -0
    I usually stick to unit tests (and attempt to have enough to test each function), but I have also had the same issue where testing SQL or similar isn't really feasible. The best I have gotten is doing a "unit test" on stored procedures and then just relying on those. For higher-level tests I will create mock clients or situations that test the entire code (i.e. a test suite in SoapUI or similar), but really those are more of an addition to the documentation than for testing purposes.

    Not really sure I understand how your "data layer" is supposed to work - it seems like it would be at least as complicated as a correctly configured back-end database (with the proper data/structure in it).
     
  3. seph ir oth

    seph ir oth Mod'n Dat News Jon Staff Member

    Ratings:
    +261 / 2 / -0
    I've spent a good deal of time writing automated test cases for the service layer calls I've written, and while it's a different database, the concept is all the same: keep your service calls simple, clean, and only grab the bare bones. If you have to get a lot of data, break it up into smaller calls if possible. If, for efficiency's sake, you would rather do a bulk call for something, then expect the function to take longer to finish.

    Not sure if you're working with .NET or not but check this out:

    https://msdn.microsoft.com/en-us/magazine/dn818493.aspx

    It's best to run asynchronous calls on db unit tests.

    EDIT: Noticed I'm late to the party by nearly a month. Oh well ;)

    EDIT2: As for valid conditions for testing against db's, you should always have a "test db" on-hand. One that can be reset and re-tested. That way the data is always the same at the start and is slated for being reset afterwards. SQL should support reverting back to a specific state using a .bak file, no? Create a unit_test.bak file, and write a little VB script that executes a db revert then runs the db unit tests.
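    That reset-then-test loop can be sketched as a small script (this assumes SQL Server's sqlcmd tool; the server name, database name, backup path, and test-runner command are all placeholders for your environment):

```shell
#!/bin/sh
# Sketch only: restore the test database to a known state, then run the tests.
# SERVER, TestDb, the .bak path, and run_db_unit_tests are placeholders.
SERVER="localhost"

# Revert the database to the snapshot captured in unit_test.bak
sqlcmd -S "$SERVER" \
  -Q "RESTORE DATABASE TestDb FROM DISK = 'C:\backups\unit_test.bak' WITH REPLACE"

# With the data back in its known starting state, run the db unit tests
run_db_unit_tests
```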
     
  4. Accname

    Accname 2D-Graphics enthusiast

    Ratings:
    +1,551 / 4 / -4
    I never made a website, but I would use abstraction. Build a facade around the database and only interact with the facade from the point of view of your application. Then have different implementations of the facade: for testing, use a memory-based backend; for the actual implementation, use a regular database. Then make a mathematical proof that your memory-based implementation and your database implementation work exactly the same way (same inputs => same outputs) and you are golden.
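    That facade idea, sketched in Java (interface and class names are made up for illustration): the application depends only on the facade, tests plug in the memory-based backend, and production would plug in a SQL-backed implementation of the same interface.

```java
import java.util.HashMap;
import java.util.Map;

// The facade: the only thing application code is allowed to talk to
interface PersonStore {
    int getPoints(String name);
    void setPoints(String name, int points);
}

// Memory-based backend for tests; a SQL-backed class would implement
// the same interface for production
class InMemoryPersonStore implements PersonStore {
    private final Map<String, Integer> points = new HashMap<>();
    public int getPoints(String name) { return points.getOrDefault(name, 0); }
    public void setPoints(String name, int p) { points.put(name, p); }
}

public class Facade {
    // Application logic sees only the PersonStore interface
    static int addPoint(PersonStore store, String name) {
        int p = store.getPoints(name) + 1;
        store.setPoints(name, p);
        return p;
    }

    public static void main(String[] args) {
        PersonStore store = new InMemoryPersonStore();
        System.out.println(addPoint(store, "Tom")); // 1
        System.out.println(addPoint(store, "Tom")); // 2
    }
}
```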
    If your problem is synchronization - fearing that one function might cause a bug in another because of a dirty write - I would suggest always checking the data when writing. This has, of course, a negative effect on performance, but it can always be deactivated when deployed. If it doesn't break during testing/debugging, then it's not too likely to break during use.
     

Share This Page