C is used, despite its huge flaws and misdesigns, precisely because it is unproductive and badly designed. C is unproductive, so C programs need more programmers, so there are more C programmers and a bigger demand for them. C is incompatible with any language that has real arrays and strings, so if you want to be compatible, you either have to convert all your data types in both directions or stick with C. C sucks so much that the techniques used on systems like Multics and VMS to let programs written in different languages work together won't work with C, because of array decay and null-terminated strings.
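To make the array-decay point concrete, here's a minimal C sketch (the names sum and xs are just illustrative): inside a function, an array parameter has decayed to a bare pointer, so the length has to be passed alongside it by hand, and a C string's length can only be recovered by scanning for the NUL terminator. A language with real length-carrying arrays and strings has nothing to map these conventions onto without converting.

    #include <stdio.h>
    #include <string.h>

    /* The array parameter decays to a plain pointer: sizeof(a) here
       is sizeof(double *), not the size of the caller's array, so
       the length must arrive as a separate argument. */
    static double sum(const double *a, size_t n)
    {
        double total = 0.0;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    int main(void)
    {
        double xs[4] = {1.0, 2.0, 3.0, 4.0};

        /* The caller must compute and forward the length by hand;
           nothing checks that it's actually right. */
        printf("%f\n", sum(xs, sizeof xs / sizeof xs[0]));

        /* A C "string" carries no length either: strlen() walks
           the bytes until it hits the terminating NUL. */
        const char *s = "hello";
        printf("%zu\n", strlen(s));
        return 0;
    }

Nothing in the pointer-plus-count or NUL-scan conventions is visible in the types, which is exactly why cross-language calling schemes of the Multics/VMS sort have nothing to hook into.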
The bad design also means C and the other UNIX languages that inherited this brain damage are more complicated, which is why there are so many "graduates" who only know one language when it used to be common to learn several. The C standard is also much longer than the standards of languages with more features. The language is harder to compile efficiently, which is why C compilers are so bloated. The standard itself is so poorly written and ill-defined that the standards committee could not add a single feature between 2011 and 2018 (besides incrementing the version number) because they spent all that time fixing bugs in the standard itself.
Another reason is the huge revisionist campaign started by AT&T shills in the 80s, which is why there are people out there who believe C was the first high-level language used to write an operating system, and other such bullshit. There are even weenies who claim OOP came from C and C++.
Yes, and they've succeeded. Hordes of grumpy C hackers are complaining about C++ because it's too close to the right thing. Sometimes the world can be a frightening place.
I've been wondering about this. I fantasize sometimes about building better programming environments. It seems pretty clear that to be commercially viable at this point you'd have to start with C or C++. A painful idea, but. What really worries me is the impression that C hackers might actively avoid anything that would raise their productivity.
I don't quite understand this. My best guess is that it's sort of another manifestation of the "simple implementation over all other considerations" philosophy. Namely, u-weenies have a fixed idea about how much they should have to know in order to program: the amount they know about C and unix. Any additional power would come at the cost of having to learn something new, and they aren't willing to make that investment in order to get greater productivity later.
This certainly seems to be a lot of the resistance to lisp machines. "But it's got *all* *those* *manuals*!" Yeah, but once you know that stuff you can program ten times as fast. (Literally, I should think. I wish people would do studies to quantify these things.) If you think of a programming system as a long-term investment, it's worth spending 30% of your time for a couple of years learning new stuff if it's going to give you an n-fold speedup later.