
/tech/ - Technology



 No.943786>>944004 >>944691 >>948987

What do anons think about test driven development?

 No.943791

more like tard driven development


 No.943796>>943801 >>947122

>Why don't we write code that just works? Or absent a "just works" set of patches, why don't we revert to code that has years of testing? This kind of "I broke things, so now I will jiggle things randomly until they unbreak" is not acceptable. [...] Don't just make random changes. There really are only two acceptable models of development: "think and analyze" or "years and years of testing on thousands of machines". Those two really do work.

t. Torvalds


 No.943801

It has some use, for example when the base software has deprecated dependencies and you need to change a bunch of things.

Create a bunch of tests, make changes until everything that needed replacing is changed, and validate with the tests.

Otherwise >>943796


 No.943826

I like the idea in theory, but in practice it's a pile of crap.

Maybe there are development teams that made it work for them, but I've only seen it waste time with subpar tests that people hacked together just to be done with it.

And when they tried to make them good, management was bitching about how slow everything went.

In the end they're not worth it. Maybe on software with a very long lifespan, I don't know.

I'd suggest only writing tests for the complex parts, while skipping the mundane parts. It's not how it's supposed to be done, but it'll be less of a timesink and more useful.


 No.943859>>943861 >>943863

I'm not experienced enough to have a well-formed opinion, but I think speedrunning the writing of your program (probably leaving dozens of TODOs in the process) and then trying to unbreak it is a terrible plan of development and results in bad code.


 No.943861

>>943859

Eh, I fucked that post up a bit.


 No.943863>>944009

>>943859

If you know exactly how something is going to work before programming it, then you shouldn't have to test it. And if you don't know how it's going to work beforehand, then writing tests is just a complete waste of time.


 No.943876

It's wonderful at getting rid of regressions and at trading runtime bug hunting for compile-time test writing. The main problem many people have with it is that it doesn't integrate well into their workflow. Opening a separate file and writing tests for my program is a bit of a pain, especially when the tests get complex and I'm thinking more about the behavior of my tests than the behavior of my functions. Contract-driven design with automatic procedural test generation, as in Eiffel, would be wonderful to have in more languages. Not only does it preserve your workflow by keeping you focused on writing functions for the task at hand, it also gives you greater test coverage in many situations while doing so.


 No.943880>>943883 >>943887

Automated unit tests are pointless for most well designed software. 99% of the time you'll uncover the problem when you are manually testing the changes yourself.

In cases when you can't test the changes yourself having some form of testing is important.


 No.943883>>943889

>>943880

I mean, that's more or less agreeable, but you seem to downplay how often you can't manually test things. For things as common as referencing lots of external state, compiling for multiple platforms, juggling lots of internal state, or really anything even moderately complicated, manual testing is a real pain in the ass. The debug information is also very useful: not only does it tell you that something went wrong, which you would know anyway if your program flat out doesn't work, it tells you where your program went wrong. It's difficult to imagine something more powerful than that when dealing with runtime errors.


 No.943887>>943889

>>943880

>Automated unit tests are pointless for most well designed software.

You're insane. Making non-trivial changes to a complex piece of software with no way to verify you didn't fuck some edge-case causes code rot.


 No.943889>>943895 >>943899 >>944009 >>944015

>>943883

>it tells you where your program went wrong

Well it goes wrong in the section I just modified.

>>943887

If it's that complex it will be easier to manually go through the edge cases as it would be a LOT more work to write a test.


 No.943895

>>943889

>it will be easier to manually go through the edge cases

Modify the JVM. Manually test all edge cases. Try to finish before dying of old age.


 No.943899>>943992

>>943889

>manually testing every edge case after every modification so you know where the error is going to be on a complex program.

That's a damn good meme there.


 No.943992

(image: IMG_20180718_074054.jpg)


 No.943995

It's nice when you're writing code to conform with specifications, but otherwise it is best for testing points of interface such as API/ABI endpoints.


 No.944004>>944009 >>948963

>>943786 (OP)

Tests are fine. Writing them before actual code is bullshit. Testing completely trivial things is also bullshit.


 No.944009>>944018

Aside from what the other anons have said, adopting TDD and its derivatives like BDD provides a major benefit for open source projects in CY+3: the sentence "the automated build system is misogynistic because it rejected my merge request on the grounds that all my tests failed, we need a CoC to remedy this" sounds completely insane to 99.9999% of developers, so it acts as another bulwark protecting people who actually care about meritocracy and their projects.

>>943863

>If you know exactly how something is going to work before programming it

If you know exactly what the end result is going to be then you are working on an already solved problem, in which case you are wasting your time.

>>943889

>If it's that complex it will be easier to manually go through the edge cases as it would be a LOT more work to write a test.

You clearly have no fucking idea what you are talking about.

The seL4 test suite takes over 8 hours on a high-end system to run all tests, and that's only ~10,000 LOC and >200,000 lines of tests. To date it's the only kernel proven to be functionally correct.

The most complex codebase I have worked on in terms of testing had so many possible permutations that it would take months to test them all, and since it was for an application which required high reliability we needed to make damn sure it worked. Through a combination of architecting the software to make testing easier and only testing a subset of permutations in-house (the software would run a test on the configuration chosen by the customer during installation), we managed to get it down to 3-4 hours including a small amount of benchmarking. Even then this would still cause development bottlenecks, since the build system would run the tests on every single merge request to ensure no bugs were being introduced.

>>944004

Prototyping things before you have written any tests is fine, but the minute you start writing code which you intend to ship to customers, it's better to already have at least some tests in place. A good development framework will run tests and benchmarks on sections of code with every merge request so that regressions can be detected and remedied as they happen. The worst thing that can happen is a small bug or slow code being introduced early on and only realised later in development, since you risk having to rewrite large sections of code to fix it.


 No.944015>>944018

>>943889

>writing tests is a LOT of work

How are you writing tests and why don't you know how to do it an easier way?

Unit testing exists for a reason; is it that hard to verify that, given a certain input, you get the expected output? Is it that hard to write a test suite?


 No.944018>>947128

>>944009

>The seL4 test suite takes over 8 hours on a high end system to run all tests

That seems very unproductive. Every time you make a trivial change you would have to wait 8 hours. I'm sorry, but manually testing your change is going to be much faster.

>which had so many possible permutations that it would take months to test them all

You make it sound like any change can break any part of the codebase. In reality a change is only going to affect a localized section of your code. It's not like fixing a font rendering issue is going to break your networking code.

>>944015

A. The project is complex.

B. It's not written in a way that can be tested, meaning you will have to spend a long time creating proper mock objects for potentially hundreds of different objects.


 No.944024>>944632

https://www.sqlite.org/testing.html

>As of version 3.23.0 (2018-04-02), the SQLite library consists of approximately 128.9 KSLOC of C code. (KSLOC means thousands of "Source Lines Of Code" or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 711 times as much test code and test scripts - 91772.0 KSLOC.

>Three independently developed test harnesses

>100% branch test coverage in an as-deployed configuration

>Millions and millions of test cases

>Out-of-memory tests

>I/O error tests

>Crash and power loss tests

>Fuzz tests

>Boundary value tests

>Disabled optimization tests

>Regression tests

>Malformed database tests

>Extensive use of assert() and run-time checks

>Valgrind analysis

>Undefined behavior checks

>Checklists


 No.944632

>>944024

>>Extensive use of assert() and run-time checks

A guy I work with thinks that asserts exist to verify conditions on user input. He's an okay idea guy (his ideas aren't entirely bad), but sometimes he just dumps whatever code he writes on everyone.

OP

It can be helpful. I have found stupid mistakes in my code thanks to the brain-dead tests written for it. The problem is that a good bulk of unit tests are written for code that is trivially correct. By the 80/20 or 90/10 rule, 90% of bugs are in 10% of the code, which means that 90% of unit tests cover code that is statistically unlikely to be incorrect. Most bugs rear their heads when you start integrating the pieces together and find that somebody assumed something that wasn't true.

Example from the same coworker:

A) Read in information

B) Validate information

C) Do some transformations

D) Report validation errors and prompt to continue with invalid data

He was working on step C and decided that, since validation had already been done, he would never get a NULL pointer and didn't have to check for one. The requirements are that the program can continue with invalid input, a requirement he disagreed with. All the little pieces work as tested, yet the full stack mysteriously crashes. His defense: "Is that so? I didn't know that was a requirement! I didn't notice that every other function around the one I modified checks for NULL pointers."


 No.944691

>>943786 (OP)

>TDD

Outdated methodology; the good parts such as iterative development and acceptance testing-as-living-documentation are better implemented in BDD for large organizations, because all stakeholders can read the tests (not just the devs) and automated testing is put in reach of non-programmer analysts, making better use of everyone's time. For smaller projects, it's overkill. There's just no use case for it.


 No.947122

My company has a TDD mandate (I don't know what else to call it) but nobody actually does it; tests are written afterwards, and that just feels natural. I agree 100% with Torvalds on tests in code.

>>943796

But Linux has the benefit of not having deadlines and the need to sell a product, unlike the corporations. They hire such code monkeys that you literally can't go without tests. Bullshit like TDD was invented because of that (just like Java).


 No.947124>>947166

I also hate that all of our code has every method virtual just so it can be mocked in Google Test, and it's only to check whether a function was called.

void func1()
{
    // do something in this function
    func2(); // virtual just so you can check if it was called in your UT
}


 No.947128

>>944018

>It's not like fixing a font rendering issue is going to break your networking code.

Tell that to the NT kernel developers.


 No.947166>>947282

>>947124

You're almost certainly doing something wrong. Either write a preprocessor macro that automatically makes a mockable function, or get hardcore with your test suite and have it look at debugging symbols to find out if a method got called.


 No.947282

>>947166

I suggested the debug symbols approach but got turned down. This is not my own code, Jesus Christ anon...


 No.947400>>947502

it sucks massive cocks


 No.947502

>>947400

t. anon who has never developed a single piece of software

and no, ricing Arch and writing shitty scripts does not count as developing software


 No.948963>>948978 >>950046

>>944004

Agreed, writing tests before the code implies you know what the API/spec of your code should be, which is rarely true.

E.g. if I'm refactoring some private functions, then I will write the new version, iterate until I'm happy with the API of the new version (i.e. it's simpler and results in simpler code where it's called) and the existing tests pass, then write tests for the new function.

The same is true with user facing APIs except I will iterate on sample user code.


 No.948978

>>948963

This is pretty much what I've been thinking lately.

Write code -> Write tests while I refactor -> Optimize and/or Simplify

It seems to me that TDD makes the most sense when you're refactoring, because by that point you know more or less what your functions are going to do but you haven't started iterating on them yet. TDD is really made for making iteration easier.


 No.948987>>950046

>>943786 (OP)

Coming from the QA side, TDD is fan-fucking-tastic when you have developers on an SOA application, particularly in the case of microservices. In those cases I only end up with the difficult, fun problems, not cleaning up after developers who write shit code and don't even test it. Don't be that developer.


 No.949952

It needs to be zero-cost during runtime.

https://github.com/onqtam/doctest


 No.950046

>>948963

>writing tests before the code implies you know what the API/spec of your code should be, which is rarely true.

Architecting based on known requirements will give you a good enough idea of what the API/spec should be before starting.

> if I'm refactoring some private functions, then I will write the new version

You shouldn't be writing tests for private members; friend isn't something that should be used often.

>>948987

>Coming from the QA side, TDD is fan-fucking-tastic when you have developers on a SOA application

TDD makes the QA process far more efficient: every thorough test written means the relevant piece of code doesn't need to be manually checked for regressions on every merge (unless the spec changes).

A good build system will run code linters and all sorts of tests and benchmarks on a variety of hardware configurations and reject anything which doesn't meet standards or regresses the codebase.



