Question

  • #2141581

    code coverage data of technically incorrect unit tests

    by harshil-b1e7a

    When I joined my company as a newcomer, I started exploring the unit test suite of the product code, which uses the gtest framework. But when I checked the tests, they were all testing whole-product functionality by calling the real functions and asserting on the expected output. Below is one such test case as an example:

    ```cpp
    // NOTE: the forum formatting stripped the template arguments from the
    // shared_ptr/make_shared declarations; the type names below (Engine,
    // LSAttrib, Database, Message) are placeholders for the real classes.
    // datafile is presumably defined elsewhere in the test file.
    TEST(nle_26, UriExt1)
    {
        int threadid = 1;
        std::shared_ptr<Engine> e = std::make_shared<Engine>(threadid, "./daemon.conf");
        std::shared_ptr<LSAttrib> attr = e->initDefaultLSAttrib();
        e->setLSAttrib( attr );
        std::shared_ptr<Database> ndb = e->initDatabase(datafile, e->getLogger());
        e->loadASData(ndb);
        e->setVerbose();

        std::shared_ptr<Message> m = std::make_shared<Message>(e->getLogger());
        ASSERT_TRUE(m != nullptr);
        ASSERT_TRUE(e != nullptr);
        m->readFromFile("../../msgs/nle1-26-s1");
        e->scanMsg(m, &scan_callBack_26, NULL);
        std::map<std::string, std::vector<std::string>> Parts = e->verboseInfo.eventParts;
        std::vector<std::string> uris = Parts["prt.uri"];
        ASSERT_EQ(uris.size(), 2);
        ASSERT_EQ(uris[0], "mailto:www.us_megalotoliveclaim@hotmail.com");
        ASSERT_EQ(uris[1], "hotmail.com");
    }
    ```
    I found that all the tests in the unit test directory follow the same pattern:

    – Creating and initialising an actual object
    – Calling the actual function
    – Starting the actual daemon
    – Loading an actual database of around 45 MB
    – Sending an actual mail to the daemon for parsing by calling the actual scanMsg function, etc.

    So all the tests look more like functional or integration tests than unit tests, which should ideally test a particular unit or function in isolation (see the sketch below).
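
    For contrast, here is roughly what an isolated unit test of the same behaviour could look like with gtest/gmock. This is only a sketch: IDatabase, MockDatabase and extractUris are hypothetical names, since I don't know the real class layout.

    ```cpp
    #include <gmock/gmock.h>
    #include <gtest/gtest.h>
    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical seam: the database dependency hidden behind an interface,
    // so the test never has to load the real 45 MB data file.
    class IDatabase {
    public:
        virtual ~IDatabase() = default;
        virtual std::vector<std::string> lookupUris(const std::string& msg) const = 0;
    };

    class MockDatabase : public IDatabase {
    public:
        MOCK_METHOD(std::vector<std::string>, lookupUris, (const std::string&), (const, override));
    };

    // Hypothetical unit under test: extracts URIs via the injected database.
    std::vector<std::string> extractUris(const IDatabase& db, const std::string& msg) {
        return db.lookupUris(msg);
    }

    TEST(UriExtraction, ReturnsUrisFromDatabase) {
        MockDatabase db;
        EXPECT_CALL(db, lookupUris("raw message"))
            .WillOnce(testing::Return(std::vector<std::string>{
                "mailto:www.us_megalotoliveclaim@hotmail.com", "hotmail.com"}));

        std::vector<std::string> uris = extractUris(db, "raw message");
        ASSERT_EQ(uris.size(), 2u);
        EXPECT_EQ(uris[0], "mailto:www.us_megalotoliveclaim@hotmail.com");
        EXPECT_EQ(uris[1], "hotmail.com");
    }
    ```
    No daemon, no database file, no sockets: just the one function and its contract.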

    But the critical part is that on their official intranet site, they report the code coverage of this product as 73%, computed using gcov.

    Now, code profiling tools like gcov compute coverage based on the following:

    1. How often each line of code executes
    2. What lines of code are actually executed
    3. How much computing time each section of code uses.
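
    For reference, a typical gcov workflow looks something like this (file names are illustrative, not from our build, and a real build would also link gtest):

    ```
    g++ --coverage -O0 -o nle_tests nle_tests.cpp   # instrument with counters
    ./nle_tests                                     # run the suite; writes .gcda count files
    gcov nle_tests.cpp                              # emits nle_tests.cpp.gcov with per-line counts
    ```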

    And these tests run the actual daemon, load the real database, and call the actual functions to scan the message, including real network (socket) calls.

    So my nagging question is:
    Apparently the test suite consists entirely of technically incorrect unit tests by the standard definition. Can the coverage gcov generates from such a test suite be trusted or considered reliable? [I honestly don't think so.]
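
    To make my concern concrete, here is a toy sketch (my own example, not from the real codebase) of how a line can count as covered without any assertion ever checking its behaviour:

    ```cpp
    #include <gtest/gtest.h>
    #include <string>

    // Toy function with a bug: the mailto branch should strip the scheme
    // but returns the raw URI unchanged.
    std::string extractAddress(const std::string& uri) {
        if (uri.rfind("mailto:", 0) == 0)
            return uri;  // bug; yet gcov will count this line as covered
        return uri;
    }

    TEST(CoverageTrust, ExecutionIsNotVerification) {
        extractAddress("mailto:user@example.com");  // executed, so the buggy line is "covered"...
        ASSERT_EQ(extractAddress("example.com"), "example.com");  // ...but only this path is asserted
    }
    ```
    gcov would report 100% line coverage here, and the bug would still ship.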


All Answers

    • #2415146

      Sounds like GIGO to me.

      by rproffitt

      In reply to code coverage data of technically incorrect unit tests

      Anyhow, we always take time to learn the company’s stance on software testing. Sometimes, such as in game software, they deliver even when testing fails.

      Does the app work well enough and meet the current goals and stories? (Think Agile.)

      I find that some people new to software development (and I must add a caveat here: mission-critical and life-support software is different) take the view that everything must be perfect before release.

      Waiting to be perfect before release (with the caveat noted) may mean a missed opportunity, the end of the company, or insufficient resources and time to fix everything.

      -> So there’s that.

      Let me pick on your item 3 about time. Is there a specification for this? Sometimes a new team member calls out that we’re not ready because a pass took too long, but doesn’t say what requirement or spec we failed to meet. Again, I call out the caveat above.

    • #2415145

      Thought about it more.

      by rproffitt

      In reply to code coverage data of technically incorrect unit tests

      As gcov and gprof are, well, what they are (nod to https://alex.dzyoba.com/blog/gprof-gcov/ for a primer), these tools are better than nothing but fall far short of newer tools like Coverity: https://en.wikipedia.org/wiki/Coverity

      So your test cases are only as good as your test cases, and they fall short of full-blown static analysis.
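
      A tiny illustration (my own sketch, not from your code): every line here is trivially “covered” by any test that passes a valid pointer, yet a static analyzer would still flag the missing null check.

      ```cpp
      #include <cstring>

      // Any test that passes a valid pointer gives this function 100% line
      // coverage, but a static analyzer (Coverity, clang-tidy, etc.) would
      // still warn that callers may pass nullptr; no coverage percentage
      // will ever surface that defect.
      int messageLength(const char* msg) {
          return static_cast<int>(std::strlen(msg));  // undefined behaviour if msg == nullptr
      }
      ```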

      But I don’t know your company. Maybe they can’t afford better tools and have decided this is good enough and to rely on humans to avoid shipping out a disastrous release.

      PS. Here’s a meme. Remember that policy and guidelines often mean nothing when it’s crunch time (or no money!)
      [image]https://i.imgur.com/gwUXeSp.jpg[/image]
