▶ No.959133>>959145 >>959146 >>959167 >>959179 >>959180 >>959189 >>959449 >>959607 >>959758 >>959889 >>960078 >>960446 >>960861 >>964503 >>972362 >>972705 >>972879
Unironically why is the C language used other than legacy devices?
> inb4 you've never used it
I've written C code before. I know what pointers are, you're not cool
> inb4 so low-level
We don't live in the 80s anymore. Bell Labs is shut down. Rust trades milliseconds of compile and run time for infinitely more secure and elegant code.
> inb4 rust shill
It's just an example language, but it is a decent alternative to C.
> inb4 elegant code
Don't you just love being totally unable to increase the size of a string without literally editing raw memory. 'Cuz I do.
> inb4 Code of Conduct
What do you mean I can't write nazi propaganda in the official documentation. (((them))) at work
Besides the gatekeeping, is this a meme language? Talking about actual professional code writing, not the autistic-tier hobbyist shit.
▶ No.959135>>959136
>Rust trades the milliseconds of compile and running time for infinitely more secure and elegant code.
Rust is a NIH Ada.
▶ No.959136
>>959135
Like I said, just an example language. I agree that the small amount of time Ada takes to compile is fine because it has security features, though.
▶ No.959143>>959332
Hi leftypol.
>Unironically it's current year!!!
I agree we don't live in Bell labs in the 80s.
>inb4 shill literally nazi gatekeeping
rust is very elegant some would say infinitely elegant. Personally I am a fan of the infinite security. Though I must warn you that I am also not a shill.
>Talking about actual professional code of conduct
Exactly. Code without conduct is like words without action. Conduct first, then we can code.
>is this a meme language
I think at best we can say there is room for it to be and not be, but also both or neither; after all, who is to say what it means.
▶ No.959145
>>959133 (OP)
C is dominant in embedded for a couple of reasons. A) Momentum: once a product is in production, "if it ain't broke don't fix it" rules the day. B) The same reasons that got it onto these machines back in the day when they were first coming into existence: it's a very simple language and the compiled code could fit in the very tiny storage on these devices. As for why engineers continued using it even after the devices grew in size, see A above. As for other domains, C simply is not dominant, and for good reasons. Apart from the dominant OSs (all legacy, old code, see above + accidental ABIs) C isn't on very many large codebases. Java, C++, and (shudder) .Net have much larger representation there.
>recommended literally the most pozzed, cock-mongling language, filled with -ism of all stripes, as a good alternative to C.
Faggots not welcome, GTFO.
▶ No.959146>>959157 >>959516 >>962867
>>959133 (OP)
C works on everything, it's very close to the hardware and so much so it can even be used as an intermediate language by other languages, it can be reduced to no startup time and overhead at all for use in smol areas like old MBR-style boot sectors, and it can usually be coaxed into producing the exact assembly a programmer wants without having to sacrifice portability. It's an excellent language to sit right above the assembly level and provide a common ground for everything else. It's not used because it's entrenched, there's just no competing language that can fully replace it.
>We don't live in the 80's anymore
>le current year man XD
>compile time differences are mere milliseconds, goy!
Rust currently can't replace C as it takes nearly 10 minutes to compile even small (for a systems language) projects like Servo in debug mode. Productivity and scaling are measured by the length of the build-run-test cycle, and that's 100x longer than what I deal with daily on a project 10x larger in C. It's also unusual to do development and testing of C programs with different compiler options than will be used for the release, as you might not notice bugs that only happen in the released version, so that comparison should really be against optimized Rust, which takes another 3x the time.
The language's other issues aren't worth talking about as it's already excluded by just not being usable at all.
>Don't you just love being totally unable to increase the size of a string without literally editing raw memory
Ever consider that you're just not at the level of systems work and see everything as more complicated than it is?
>Code of Conduct
They excluded all the angry "fuck you" greybeards from the process yet those were the systems guys who were supposed to be their target audience. And the result was they made a tool that didn't fit into their development model at all and bombed on release.
▶ No.959157>>959166
>>959146
> C works on everything
C works on mass-market hardware that targets C as a language (It's in the fucking design goals).
> it's very close to the hardware
Try compiling C to an 8008. Or even an 8080.
> it can even be used as an intermediate language by other languages
All languages can be used as intermediaries by all other languages. This is what it means for a language to be Turing-complete.
> it can be reduced to no startup time and overhead at all for use in smol areas like old MBR-style boot sectors
Any language can do this.
> it can usually be coaxed into producing the exact assembly a programmer wants without having to sacrifice portability
And any language can do this just as well as C does (and you're full of shit).
> It's an excellent language to sit right above the assembly level
You have a poor definition of excellence.
< there's just no competing language that can fully replace it
> there's just no compelling business case to replace it
> ftfy : users accept that code is shit, so no one has to make things better!
▶ No.959162>>959576 >>960252 >>972304
>posting a literal cuckold who dreamed up an ideology to make himself feel less bad about his wife fucking other men
>>>/leftypol/
>That she did what she wished, and that Stirner let her do what she wished-that of course may have let her appear in the eyes of the marriage-slaves as detestable as it later did to her, but it can only make the two of them more likeable to us. Every act of making up the mind for the other, for that matter, would not have fit at all into the nature of those involved, for whom "marriage" meant only a loose band that was thrown around them purely externally. And not on the "unfaithfulness" of the wife-how ridiculous!-did "this marriage perish," but simply and only under the pressure of the circumstances in which he and she unfortunately all too soon found themselves. Marie Dahnhardt's good taste always kept her from shouting her affairs from the housetops, which were her business and only hers and which naturally will not be pursued here. Toward the public she was always and for everyone the unapproachable wife, whom no one would have dared to approach. Only once did it come to a scene: she had at first not understood the ambiguous meaning of a remark; when she was made aware of it, her indignation is said to have been quite apparent
From "Max Stirner - His Life and His Works" By Mackay
▶ No.959166>>959494 >>959501
>>959157
>Try compiling C to an 8008. Or even an 8080.
What possible relevance is dead hardware to this thread? Why would it matter?
>All languages can be used as intermediaries by all other languages
Do you completely not understand that the point of doing so in this context is to generate machine code?
>Any language can do this.
You can boot Java?
Stick to communism, it has a better chance of working out than you do.
▶ No.959167>>959170
>>959133 (OP)
>Unironically why is the C language used other than legacy devices?
Sunk costs. Millions of lines of C code power billions of computing devices. The Windows kernel is written in C. The Linux kernel is written in C (which means that every Android device fundamentally runs on C). The Mac OS and iOS kernels are written in C. Tons of useful libraries have been written in C. Many useful tools have grown up around C programming. Many more people can program in C than in your boutique lang-of-the-week.
Billions (or tens or hundreds of billions) of dollars worth of engineering effort have gone into C programs in the last 40 years. Rust is a pickaxe chipping away at a glacier.
▶ No.959170>>959178
>>959167
>Rust is a pickaxe chipping away at a glacier.
If it wasn't unusable we'd be linking in bits of Rust code to C/C++ projects today and projects would generally become more Rusty over time. That's not happening at all because it's trash and few expect it to ever be usable. Mozilla is dogfooding it because they know it's not working out. They're trying to fix it by forcing themselves to feel the pain that others would have when mixing it in.
I do somewhat low level networking for a living and I'd love to replace some very dangerous sections of my code with something safer - it's not like the desire's not there.
▶ No.959178>>960250
>>959170
you can trivially replace parts of a C codebase with ATS, which is much safer than Rust and doesn't have its problems. It's really good at doing exactly what you want to do.
... the only problem is that you have to sacrifice a schoolbus full of children to evil gods, to be granted the ability to read its error messages
▶ No.959179
>>959133 (OP)
Because it has been used, everything is optimized assuming that you will use it, so you use it because it's optimized, because it's assumed that you will use it. If there is a better way of doing things, make your own computer or forget about it. Stagnation won't go away until the industry is dead.
▶ No.959180>>959252
>>959133 (OP)
> inb4 rust shill
KILLYOURSELF RUSTFAG
yet another shill thread brought to you by our friendly mozilla paid kike shill.
▶ No.959189
>>959133 (OP)
>nazi propaganda in the official documentation
a more realistic problem is not remembering that some pre-suicidal moron demands that you address it in a special way, then casually referencing that person without satisfying that demand that you didn't remember, and then having to choose "be banished to the void" over "grovel and apologize" because your sperm count is too high for that shit.
meanwhile, Rust has Communist propaganda in the official documentation
>why is the C language used
legacy projects and legacy knowledge.
it's widely known, and not that big -- it's much easier and faster to learn than any modern replacement candidates (maybe Pascal variants had a chance; not anymore). That it's very hard to extend is a bonus: you never run into projects exhibiting their own shitty variant of the language.
▶ No.959206
To clarify a bit:
>Rust Shill!
I am not, as much as one would believe, a rust shill. It's a popular low-level language that was in the tech news sites for various reasons that most people would know. You can use Ada or Pony or whatever language you want to use. I don't care. I use C more than I use Rust.
>BTFO LeftyPol XD!!!!!
It was a mistake bringing politics to this board, obviously.
Actual Stuff:
>Legacy Code
Makes sense, what I thought it was
>Cross-Platform
The weird thing about C is that the low-level stuff is commendably uniform cross-platform, but anything higher-level than memory management, math, and some well-implemented libraries becomes platform-specific quickly.
>Coc's allow (((degenerates))) to yell at me
Using C is not going to fix the snowflake problem
▶ No.959211>>959217 >>960363
>Unironically why is the C language used other than legacy devices?
C is used because it sucks. C requires more programmers, more bullshit "research", and more wasted time. Good languages need fewer programmers, can reuse decades of existing research, and save time. Since a C program needs more programmers just to produce a lower quality result, there is more demand for C programmers, so more universities teach C. Since C is unable to use 60 years of research that works, there is more demand for bullshit "research" to make it safer, which never works because of "unrelated" C flaws, so the problems can be "researched" again. "If it worked, it wouldn't be research!"
A few examples of this bullshit are integer overflows, buffer overflows, and incompatibility, which were all solved in the 60s but are still problems with C. These were known to be old problems with C in 1990 and it's not like nobody tried to fix them.
There are "guidelines" about how to check for overflow with addition and multiplication in C. In assembly, it's just checking a flag or comparing the high register to 0 after performing the operation, if the computer doesn't have a way to trap automatically. C needs about 8 branches (7 ifs, 1 &&) and 4 divisions (>>954468) just to check whether the result will be in range after multiplication. That's what professional "coding standards" recommend.
Most vulnerabilities, segfaults, and other errors in C are due to overflowing the bounds of strings and arrays. All this bullshit like ASLR, heap canaries, and stack canaries were invented to put a band-aid on this one problem. This was solved around 60 years ago by having the compiler compare the index to the array bounds. Tagged architectures like Lisp machines have an array data type and bounds checking in hardware to make it faster and easier for programmers and compilers. There are also around 60 years of techniques to determine when it's possible to optimize away unnecessary bounds checks.
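To make that concrete, here is a toy sketch of the same bounds check bolted onto C by hand; a compiler or tagged hardware does this comparison for you, and the decades-old analyses mentioned above exist precisely to prove checks like the one inside this loop redundant and delete them (names are illustrative):
#include <stdio.h>
#include <stdlib.h>

static int checked_get(const int *a, size_t len, size_t i)
{
    if (i >= len) {                       /* the bounds check */
        fprintf(stderr, "index %zu out of bounds (len %zu)\n", i, len);
        abort();
    }
    return a[i];
}

int main(void)
{
    int data[4] = {1, 2, 3, 4};
    long sum = 0;
    for (size_t i = 0; i < 4; i++)        /* i is provably < 4: check can be elided */
        sum += checked_get(data, 4, i);
    printf("%ld\n", sum);
    return 0;
}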
A big feature of mainframes and workstations (not including UNIX and RISC) is compatibility between different languages. Multics and VMS have data descriptors so you can pass data between programs written in any language. Fortran and Pascal on Lisp machines share bignums, arrays, and GC with Lisp. Microkernels treat drivers as ordinary user programs, so drivers can be written in any language including interpreted languages if they're fast enough. What really sucks is that AT&T shills did not want C to be compatible because it kept people from moving away from C and UNIX.
C was "designed" in a way that these solutions don't work for it. The solutions work for other languages, like Lisp, Ada, Cobol, and BASIC, but they don't work for C, because C sucks. Instead of using decades of simple and proven solutions, the C "solution" is to throw more programmers at it and, if that doesn't work, blame the user or hardware.
Doesn't it give you that warm, confident feeling to
know that things aren't Quite Right, but that you'll
have to wait 'til you need something to know Just What?
Life on the Edge.
Get with the program -- remember -- 90% is good enough.
If it's good enough for Ritchie, it's good enough for
me!
"It's State of the Art!" "But it doesn't work!" "That IS
the State of the Art!"
Alternatively: "If it worked, it wouldn't be research!"
The only problem is, outside of the demented heads of the
Unix weenies, Unix is neither State of the Art nor research!
▶ No.959217>>959235
>>959211
> A few examples of this bullshit are integer overflows, buffer overflows, and incompatibility, which were all solved in the 60s but are still problems with C.
the unix hater, ladies and gentlemen. Buffer overflows?! We solved those! By removing your ability to directly work with memory! We solved it, damnit!
gosh I guess since you solved it so well, you're probably not at all interested in how dependently-typed languages can protect you from buffer overflows while still giving you direct access to memory and with no runtime bounds checks or other runtime costs, at all.
▶ No.959235>>959256
▶ No.959236
C is popular because C is good. If it wasn't good it wouldn't be popular. Sounds like a case of sour grapes and pure butthurt.
▶ No.959252
>>959180
>Implying mozilla has enough money to pay for shills
They don't even have enough money to pay their developers.
▶ No.959256>>959336
>>959235
rather, you didn't solve the problem. You had an expensive alternative which had its own problems, and in many cases people preferred to manage the first problem.
▶ No.959278>>959285
>Rust
>Elegant
Lmao get a life. Or actually, try to write something serious in Rust.
▶ No.959285
>>959278
it's only a zero-cost abstraction if your brain is free :)
▶ No.959332>>959349
>>959143
>Anti-IP meme
>/leftypol/
Dude calm your asshole, just cus someone sticks Stirner on a fucking meme doesn't make them /leftypol/ brainlet
▶ No.959336>>959355 >>959366
>>959256
>people preferred to manage the first problem.
>manage the first problem
>manage
Shiggy diggy. 10K CVEs with the word buffer in them say otherwise.
Not that I'm suggesting we move to lisp. Any language that doesn't make buffer overflows trivial would do.
▶ No.959338
It's a super optimized, close-to-machine-code, barebones language written for computers that were 1000 times slower than today's.
I say it's quite useful.
▶ No.959349>>959357 >>959362
>>959332
Stirner memes are dogshit and haven't caught on outside /leftypol/ for good reason.
▶ No.959355>>959363
>>959336
C programmer here. There are no usable alternatives. You can complain all you want, but solutions would be more useful. The closest anyone came to replacing C was with Ada but it was a warzone, required 10x the manpower, and had its own problems. Supposedly it's not nearly as shit as it used to be but the community around it is completely dead now so I dunno.
▶ No.959357>>962144
>>959349
Except they literally have, just not on 8chan. The only reason I learnt about Stirner was Telegram meme channels.
▶ No.959362>>959545
>>959349
I think the first time I saw them was on /lit/, pre-8chan
▶ No.959363>>959371 >>959374
>>959355
Well, maybe it's time to dust off Ada again, if that is even required. It is still the only non-meme safety oriented language with an actual track record that has performance comparable to C/C++.
The only problem in the past, as far as I can tell, was the lack of compilers, but that is solved by the gcc and Adacore compilers.
Looking at Ada2012, I can't see anything that would require 10x the manpower. And any extra time spent on typing will pay for itself every time somebody looks at the code.
▶ No.959366>>962885
>>959336
>cve's with the word buffer in it
rust?
▶ No.959370>>959373
1) C works because it has a solid core (symbol ratio in syntax, reach of the POSIX stdlib, simplicity of the language) with a shit coating and historical baggage. It's still better than the opposite of having a shit core with gold plating.
2) C has no replacements. C++ has tons, but not C. If you can't understand the difference between C and C++, kill yourself.
▶ No.959371>>959398 >>959494
>>959363
>Well, maybe it's time to dust off Ada again
Ehh I'd want an all-clear from a programming community on that one. It was full of thread safety issues and memory leaks last I tried. Most of the safety features you guys think were being used were being avoided because of the verbosity required.
▶ No.959373
>>959370
C needs replacement
▶ No.959374>>959401 >>959494
>>959363
>performance comparable to C/C++
I should also note that getting good performance in Ada is all about disabling safety checks, so that's how it's comparable to C/C++. Otherwise it's very slow by default.
As an example, this is a function that adds two integers:
0000000000000000 <add>:
0: 48 63 c6 movslq %esi,%rax
3: 48 63 d7 movslq %edi,%rdx
6: 48 01 c2 add %rax,%rdx
9: b8 00 00 00 80 mov $0x80000000,%eax
e: 48 01 c2 add %rax,%rdx
11: b8 ff ff ff ff mov $0xffffffff,%eax
16: 48 39 c2 cmp %rax,%rdx
19: 77 04 ja 1f <add+0x1f>
1b: 8d 04 37 lea (%rdi,%rsi,1),%eax
1e: c3 retq
1f: 48 8d 3d 00 00 00 00 lea 0x0(%rip),%rdi # 26 <add+0x26>
26: 48 83 ec 08 sub $0x8,%rsp
2a: be 04 00 00 00 mov $0x4,%esi
2f: 31 c0 xor %eax,%eax
31: e8 00 00 00 00 callq 36 <add+0x36>
Kinda huge for what should be 3 instructions isn't it? It's because every add and subtract is now autistically checked for overflow by default and it also now has to deal with raising errors.
▶ No.959398
>>959371
>Ada was full of thread safety issues and memory leaks
Interesting, but it sounds unlikely. I'm gonna need a source on that. Unless your source is people making basic mistakes like not cleaning the heap.
▶ No.959401>>959406
>>959374
Someone on /tech/ actually understands that the languages themselves don't provide any inherent safety or security features, that it is the compilers/interpreters that do all of those checks behind the scenes, and that a language is only secure by trusting the programmers to make secure compilers/interpreters. Is this /bizarrotech/?
▶ No.959403
One of the reasons I write in C is because I like C. Another reason is the simple binary interface C libraries have, which allow them to be used by any programming language out there. Another reason is access to hardware and system calls, which isn't automatic in most other languages and even when it's available it's strange due to pointers wrapped in objects rather than first class objects that are part of the language.
Also, it depends on the complexity of the program. I usually write very specific programs in C, and orchestrate them from Python, Lua or something.
▶ No.959406>>959427
>>959401
> languages themselves don't provide any inherrent safety or security features
That's bullshit and you know it. You can't shift every responsibility to the compiler. Language features determine what the compiler (and programmer) have to work with.
▶ No.959420>>959426
>make low effort post
>reap dem (you)s anyway
Trolling used to be a art.
▶ No.959426
>>959420
Have a (you)
* an art
▶ No.959427>>959433
>>959406
>The readable code is ultimately translated to machine code to be executed
>It's not the compiler's fault if something isn't safe
<The compiler just has to add additional code that checks things for the programmer
▶ No.959433>>959437
>>959427
#include "share/atspre_staload.hats"
implement main0(argc, argv) =
println!(argv[0], "'s first arg: ", argv[1]);
fails with a compile time error: typechecking has failed: there are some unsolved constraints. After Satan rewards your sacrifices with his power, you'll be able to read the rest of the error message, which plainly points to the '1' of 'argv[1]' and says
for you to do that requires argc to be greater than one, and you haven't proven that.
so what do you do? easiest way is just to branch off the value of argc:
#include "share/atspre_staload.hats"
implement main0(argc, argv) =
if argc > 1 then
println!(argv[0], "'s first arg: ", argv[1])
else
println!("usage: ", argv[0], " <some arg>")
the compiler doesn't add any checks. and although you're adding an explicit check here, that's not really what the compiler asked for. It just asked you to prove you aren't going out of bounds. Which means this is OK:
#include "share/atspre_staload.hats"
implement main0(argc, argv) =
if argc > 3 then (
println!("first: ", argv[1]);
println!("second: " ,argv[2]);
println!("third: ", argv[3]);
) else
println!("usage: ", argv[0], " <a> <b> <c>")
and how many checks does it have? three of them, around each of those argv[] lookups? nope. just the one check. the compiler can answer simple linear questions like "if argc must be >3 at this point, is it also > 2 like it needs to be for this lookup?"
▶ No.959437>>959446
>>959433
My problem with this compiler reasoning and these proofs is that we have no idea they're happening. We have no idea what the compiler's thinking. You might add a check for overflow and assume the compiler will produce code that checks the number, but it might reason that overflow is undefined and thus can't happen and thus the check is unnecessary and can be removed. You have no idea this happened and will end up shipping unsafe code that isn't doing a check you explicitly wrote into your code.
So don't go overboard with these compiler proofs -- they have their limits. The compiler is implementing the standard, and nobody really cares about the standard. People care about reasonable C code producing reasonable assembly code.
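A concrete, hedged example of the check-removal hazard described above: the first check is written in terms of signed overflow, which is undefined behaviour, so an optimizer is allowed to assume it can never be true and delete the branch; the second phrases the same intent as a precondition the compiler has to keep:
#include <limits.h>
#include <stdio.h>

int next_id(int x)
{
    /* Intended wrap-around check, but x + 1 overflowing is UB, so a
       compiler may fold this condition to false and remove the branch. */
    if (x + 1 < x) {
        fprintf(stderr, "id counter overflow\n");
        return 0;
    }
    return x + 1;
}

int next_id_kept(int x)
{
    /* Same intent written as a precondition; no UB involved, so it stays. */
    if (x == INT_MAX) {
        fprintf(stderr, "id counter overflow\n");
        return 0;
    }
    return x + 1;
}

int main(void)
{
    printf("%d %d\n", next_id(41), next_id_kept(41));
    return 0;
}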
▶ No.959445>>959494
>> inb4 you've never used it
>I've written C code before. I know what pointers are, you're not cool
You've written a hello world. Good for you. C's elegance can only be understood through experience.
>> inb4 so low-level
>We don't live in the 80's anymore. Bell labs is shut down. Rust trades the milliseconds of compile and running time for infinitely more secure and elegant code.
Try writing fast code in python
>rust is elegant
>> inb4 rust shill
Need i say more
>> inb4 elegant code
>rust is elegant
You keep using that word...
>> inb4 Code of Conduct
>Muh official documentation
Documentation should be decentralized, just like the implementation.
▶ No.959446>>959483
>>959437
>you might add a check for overflow
>compiler removes it
that never happens in ATS. The theorem-proving is purely a compile-time, static analysis matter. It's not like Ada; there's no runtime code that might be added to support some guarantee, and ATS also doesn't remove 'useless' runtime code based on static knowledge. (If you compile to C, gcc might do some bogus dead code elimination, but it won't do that based on your ATS proofs, at least since gcc doesn't know about them.)
it's very easy to lie to ATS though: anything you assert in an FFI declaration is going to be assumed to be completely true, and you can also introduce assertions directly -- you can brainwash ATS into believing that x < 0 after a 'val x = 5'. Example:
#include "share/atspre_staload.hats"
extern fun returns_zero(): int(0) = "ext#"
%{
int returns_zero(void) { return 5; }
%}
fun no_positives_please{zneg:int | zneg < 1}(n: int(zneg)): void =
println!("This will never be greater than zero: ", n)
implement main0() = no_positives_please(returns_zero())
output:
This will never be greater than zero: 5
▶ No.959449
>>959133 (OP)
I could offer a well thought out response, but why waste time on a Rastfag? I'll settle for calling you a nigger, you stupid nigger.
▶ No.959465>>959496
Rustfags are such a sad story. They disconnect themselves from the adults and end up producing a toy language. Adults in charge at mozilla are horrified and try to force them to see it's a toy language by making it an obstacle to build furryfux. Rust children continue playing in the ball pit instead and write lots of useless toy libraries that no one outside of their ball pit has any interest in using. They don't see the infinite compile times because as children time seems infinite.
▶ No.959483
>>959446
I don't know this ATS language so I will refrain from opining.
▶ No.959494>>959498 >>959500
>>959166
> Why would it matter?
Because it demonstrates that the "C is close to the hardware" meme is just that. It is difficult to compile C to this dead hardware because C isn't close to the hardware; rather, (modern) hardware supports C.
> Do you completely not understand the point of doing so in context is to generate machine code?
If the language has a compiler that generates machine code, then any language is suitable for this. You're trying to defend C with plain ignorance about how the tools work.
> You can boot Java?
Yes you can. There is no reason that Java must be compiled to JVM bytecode. You can generate native code with it; GCC did so for a few years. Then again, there are (wait for it) processors that execute Java bytecode natively (I heard a gasp). There are actually computers for which Java isn't a virtual machine.
>>959374
The thing is, the Ada language specification requires those checks and specifies how they can be removed. C doesn't require them and most compilers have no way to add them. That Ada was used for defending human lives and C for exchanging porn speaks to the quality of each language.
>>959371
>It was full of thread safety issues and memory leaks last I tried.
Ada was full of thread safety issues because people tried to recreate pthreads with rendezvous.
>>959445
>C's elegance can only be understood through experience.
Elegance:
/*
Removes the first item from the list and returns it. Assumes each node
stores its "next" pointer in its first word, so *c is the following node.
*/
void * removeFirst (void ** head)
{
    void ** c;
    c = *head;          /* first node, or NULL if the list is empty */
    if (*head != NULL)
    {
        *head = *c;     /* advance the head to the second node */
        *c = NULL;      /* detach the removed node */
    }
    return c;
}
▶ No.959496
>>959465
>be Mozilla
>have an opportunity to start from a blank slate and create the new hip and cool language for low-level programming
<give your language some horrible anarchic cargo cult pseudo-C++ syntax
<make the compilation process so arduous, normal rust programs take as much time to compile as heavily templated C++ code
<the borrow checker scares programming newbies away and is nothing but a useless straitjacket for experienced programmers
<have a community made entirely of literal communist hipsters and shove an AIDS-ridden CoC in there for good measure
What a tragic joke.
▶ No.959498
>>959494
I should learn Ada someday. Seems comfy. I remember reading an article about static memory allocation with Ada and how it's reliable and predictable; I can't seem to find that article.
▶ No.959500>>959504 >>959505
>>959494
>That Ada was used for defending human lives and C for exchanging porn speaks to the quality of each language.
Maybe you're unaware but those industries switched back to C a long time ago.
▶ No.959501>>959525
>>959166
>Stick to communism, it has a better chance of working out than you do.
How exactly is that germane? Why do people here fall back to calling you a pinko/apparatchik if you blow them the fuck out? You're probably someone that prattles on about how "/tech/ is not for politics" too. Idiot.
▶ No.959504>>959506
>>959500
Yes, and it's causing them all kinds of problems. The reason they relinquished the mandate is because they could not find people who still wrote Ada. *Highly* critical systems still use Ada (the fly-by-wire controller on the Boeing, hospital/hospice embedded systems, etc).
▶ No.959505>>959523
>>959500
The avionics industry does seem to be moving to C, but I hear it's not going well. The F-35 as a whole is some kind of joke NuFighter thing...
▶ No.959506>>959508 >>959518 >>959698
>>959504
>The reason they relinquished the mandate is because they could not find people who still wrote Ada
Oh come on, can't they make people learn the language? I simply don't understand what's so hard about this. Programmers should be better than whatever language they got through school with. Are we going to have aircraft programmed in Javascript if somehow all the C programmers die out and babby generation refuses to learn it?
▶ No.959508>>959513
>>959506
Ada, like Erlang (and the L word) has good documentation--that is, if you know where to look. The problem is, contemporary programmers are used to consulting Stack Overflow/blogs/trawling the Web for "documentation" and then piecing it all together. With Ada, you just have PDFs.
▶ No.959513>>959521
>>959508
I dunno how to feel about "contemporary programmers". I was born in the 90s, so technically a millennial, right? Yet I have no problem with this. I don't have any problem reading the C standards, or Javascript's ECMA standard. Most languages out there aren't standardized, and I read their reference implementation's source code instead.
I think people are just lazy and want cookbooks on how to solve whatever problems they're having without having to understand anything too deeply. This is the reason I stopped posting on Stackoverflow.
▶ No.959516>>959519 >>959532 >>959654
>>959146
> It's also unusual to do development and testing of C programs with different compiler options than will be used for the release
What? One of the first things I do in my makefiles is to set up bin/debug/ and bin/release/ targets. So do you release your software with -g enabled? If not, how do you debug? Do you always wait for -O3? Have you ever needed pgo? I can barely comprehend not having at least 1 debug build, are you sure this is what normal people do?
▶ No.959518>>959520 >>959831
>>959506
One of the F-35's problems is that it runs Javascript in Internet Explorer. I've read that the bases that fly the 35 have to get exemptions from DoD network security mandates due to how the flight software communicates status information to the ground controller.
▶ No.959519>>959954
>>959516
>One of the first things I do in my makefiles is to set up bin/debug/ and bin/release/ targets
How do you do this? How do you manage these targets?
▶ No.959520>>959656
>>959518
>One of the F-35's problems is that it runs Javascript in Internet Explorer.
.... What.
▶ No.959521>>959527 >>959604 >>959711
>>959513
You'd be surprised. The other day, I met a tenderfoot programmer, must have been 16 or so. He only knew Javascript--I, being a codger in my 20s, told him to try Erlang, and I told him where he could find documentation. He then responded that he'd only really learned Javascript through YouTube and blogs. I know that he's still a kid, and he may regret that in the future, but I was still surprised. And yes, I know that "Kids these days..." is a sentiment held in almost every society throughout history.
▶ No.959523>>959527 >>959528
>>959505
It's not a language choice issue, it's that they can't hire any new blood because they can't compete with Silicone Valley. Would you want to work a government job on a locked-down PC using software that is 30 years out of date for certification and conformance reasons and be routinely drug tested and get no shares or chance to make it big and not be allowed to wag your dick on social media? Only the rejects would take that offer, so you get a plane designed by an army of rejects.
▶ No.959525>>959526
>>959501
>apparatchik
Spotted the commie.
▶ No.959526
>>959525
And I spotted the libertard. Your playground ideology is a total laugh; at least reds can talk about how living under the Soviet Union was still better than helotry under the Tsar. State industrialization turned the Soviet Union from a slum into a global superpower with far higher literacy rates, life expectancies, and standards of living. (Because I brought up the Soviet Union, you're probably going to call me a pinko again. So, to be clear, I'm not ignoring their egregious shortcomings.)
▶ No.959527>>959528
>>959523
>Would you want to work a government job
Well, in my country government jobs are highly sought after because they're highly paid and it's essentially impossible to be fired. The best programmers and IT professionals end up in the government, the rest go work in some startup where they make good yet inherently unstable money. Lots of people tend to favor the stability of government jobs.
I dunno how America works. Don't they subcontract the development of military aircraft to private companies? I mean, this seems to be the case everywhere in the world.
>>959521
What I hate about these people is how they seem to be allergic to learning. Learning new languages means learning new ways to solve problems, new ways to think about stuff. "Oh I just learned JS on youtube" and how does that justify not learning about Erlang or anything else for that matter?
▶ No.959528>>959544
>>959523
> 30 years out of date for certification and conformance reasons
Actually, you'd be surprised. The government doesn't like to run software with known vulnerabilities, and 30 year old software has a lot of known vulnerabilities. There's an interesting tug-of-war between "we know the software has issues" and "we don't know what we don't know".
> be routinely drug tested
I have never been drug tested beyond my initial hiring.
> you get a plane designed by an army of rejects
To be fair, government work is reject work: you go into government because you couldn't make it in the corporate world. The only non-rejects are either ex-military or lazy bastards who didn't want to move. One of my government bosses ran a submarine (La Jolla) into a fishing boat.
> can't compete with Silicone Valley
Many of my coworkers see their work as a stepping stone into Silicon Valley. So, they want to use the languages and "technologies" that are hot there.
>>959527
>Don't they subcontract the development of military aircraft to private companies?
Government contracting doesn't pay as well, however it is difficult to be fired. And there's more stability in contracting, despite the US government being fickle. As far as I know, no contractor has ever been told "if your badge fails to work tomorrow, you've been let go", which is how a local company did a mass layoff once.
▶ No.959532>>959954
>>959516
>One of the first things I do in my makefiles is to set up bin/debug/ and bin/release/ targets
MSVC hurt you. Don't do that.
>So do you release your software with -g enabled?
Kinda, yes. We used detached debugging symbols. The binaries as shipped have no debug info but it matches the detached stuff so I can pull up a core dump made by a release binary and have full debugging info without exposing sensitive info. At runtime it's exactly the same.
>Do you always wait for -O3?
I only use O3 on small pieces of code that need it to vectorize (e.g. a bundled copy of SFMT) as it has a tendency to make things worse if used everywhere. Everything else is always O2.
>Have you ever needed pgo?
It's too difficult to use and use well for most things.
>are you sure this is what normal people do?
Yep, at least where reliability is needed and there's a lot of process in place to achieve it.
▶ No.959536
OP you're such a faggot there is nothing wrong with C or C++ get the fuck out
▶ No.959544>>959548
>>959528
>Actually, you'd be surprised.
Yeah, I was the last time I contracted on a government job and had to learn how to use MUMPS. That's a 50 year old system they're still using in healthcare and even inter-office communication felt like I was launching a nuclear missile. They currently plan to have replaced it by 2025.
>I have never been drug tested beyond my initial hiring.
They've all been trying to reduce how much they're required to do it because of the braindrain. The FBI on the issue 4 years ago:
https://archive.is/Z0PSm
>La Jolla
That's where I'm at. Fun tales of briefly contracting for Northrop Grumman..
▶ No.959545
>>959362
Well, /leftypol/ is an offshoot of /lit/.
▶ No.959548>>959560
>>959544
>MUMPS
That's interesting. Can you post more details about this system? Google gives me articles about parotitis.
▶ No.959560>>959574
>>959548
It's a pretty tangled mess. Where to start would be looking into VistA:
https://en.wikipedia.org/wiki/VistA
It's mostly all in MUMPS. MailMan is their 1950s style nuclear launch "email" program.
MUMPS is open sores now out of desperation, see here:
http://debian-med.debian.net/tasks/his.fr.html
MUMPS itself is a tangled mess, too. An overview with absolutely no hint of shilling:
https://www.datasciencecentral.com/profiles/blogs/mumps-the-most-important-database-you-probably-never-heard-of
▶ No.959574
>>959560
Is this some shit used by the Veteran's Administration? If so I'm not surprised it's worthless garbage.
▶ No.959576
>>959162
>Nazi butt mad there’s an ideology that demands his own be critically analyzed.
>Obviously haven’t read Stirner, and hate the idea of not yielding to any authority
Lol Stirner busting spooks hundreds of years after his death.
▶ No.959604>>959605
>>959521
I was into Erlang for a while, a few years ago. Erlang was interesting in that it had really good documentation -- exhaustive, very clear (not Haskell "here's a bunch of types. That's all you want, right?"), well written, with interlinked reference and tutorial sections. I'd never seen such a well-documented language since Perl.
But it was unpopular so you couldn't Google shit.
And I about lost my shit with people who stopped there. HOW ABOUT, INSTEAD OF FUCKING SEARCHING THE ENTIRE WEB FOR KEYWORDS, YOU GO TO THE-FUCKING-SUBJECT-AT-HAND DOT COM AND THEN CLICK ON "WHAT I WANT TO KNOW" YOU STUPID MOTHERFUCKER
▶ No.959605
>>959604
>a few years ago
more like 15 years ago come to think of it...
▶ No.959607
>>959133 (OP)
Because real niggaz hack c, you sorry ass bitch.
▶ No.959654>>959705 >>959954
>>959516
It means that you're using all the same optimization and control flags, except -g and -s. That's because these flags control the assembly output (-O-class flags especially), and if you have them set up differently for the debug and release versions, they can (and will) produce different assembly. The repercussion is that you may have obscure, weird bugs that work themselves out with one specific flag set but manifest under others. So if you only test against the debug flag set, you're going to miss the bugs that will manifest with the release flag set.
Generally, you only use debug build for step by step execution using a debugger. You do final [unit] testing using release build.
▶ No.959656>>959739 >>959814 >>959831
>>959520
Oh, you missed out on the fun. The contractor has developed a major mission & loadout control component as a web app. Sure enough it works like shit. The airplane firmware is presumably C++ but it's fucking dogshit just as well; the British F-35 couldn't even use missiles or bombs yet. Also, its tail fins crack if it flies supersonic, so in a real life scenario it can't even break the sound barrier due to safety limits.
▶ No.959698
>>959506
>Oh come on, can't they make people learn the language?
Anon, they have trouble making their programmers poo in the loo.
▶ No.959705>>972824
>>959654
They're generally very rare, though. In my experience, the only bugs that have popped out due to different debug and release flags have been either threading issues: actual logic errors in the code that get exposed by some section of code running faster or slower; or static initialization order issues: actual logic errors in the code that get exposed by some variable being initialized before or after another, because C++ doesn't define that order (and you can force the order, but most pajeets don't know how).
▶ No.959711
>>959521
I've seen similar. I've had to tell younger people, hey, if you're getting an error on that function, instead of clicking random tutorial videos for 40 minutes, you can just look that function up in the docs. The docs will tell you how to use that thing you're trying to use. And the response is like, oh yeah good idea.
▶ No.959739
>>959656
>holy shit
>I should go program for the military, show them--
<new guy, no seniority, distrusted by insiders, no power to make decisions
>if I stick around I'll become an old guy
<finally able to make decisions
<consultant comes in, says Python is the new COBOL, we gotta use that
<a lifetime of making shit with no escape
... well
at least we can't sell Israel anything good
▶ No.959758
>>959133 (OP)
>What do you mean I can't write nazi propaganda in the official documentation. (((them))) at work
Your post was only retarded and naive until it went full retard right here. Ironically, a CoC is closer to something like a nazi idea than whatever it "protects" against. Go suck a CoC.
▶ No.959776
>the progressive left has been dragged so far left that they see the hippie left as Nazis
▶ No.959814>>959830 >>959831
>>959656
What the fuck? How many years of development has it been?
▶ No.959830
>>959814
However many until you stop funding it.
▶ No.959831
>>959518
>One of the F-35's problems is that it runs Javascript in Internet Explorer.
>>959656
> The contractor has developed a major mission & loadout control component as a web app.
The system for the self scheduling of maintenance also presumably runs in javascript. Part of the overall F35 package is a system where if the plane detects a fault during flight it sends a message to the hangar to order the required parts and schedules a repair crew.
>>959814
Too many, the F35 is an extreme example of scope creep which has resulted in a plane which doesn't do any one thing well but does everything poorly. Throughout its development the various branches of the US military decided they needed a new plane, every time this happened the requirements changed and expanded slightly.
▶ No.959889>>959894 >>959932 >>959971 >>961254
>>959133 (OP)
>Why C?
Because I don't like change, I already settled with C and I treat all your arguments against it as personal attacks, because if they were to be valid it would mean I wasted and I am still wasting time to program in this shit and unsafe miserable language instead of D or Rust or god forbid FreePascal, it would mean I had to justify to myself this wasted time, but I wouldn't be able to therefore I wouldn't be able to look in the mirror next morning, I would totally collapse mentally in days and I would kill myself within a week.
▶ No.959894>>960185
>>959889
This is what lispfags really believe.
▶ No.959932
>>959889
>if you don't use Rust y-your an idiot
>reee I'll kill myself
that's funny anon
▶ No.959954>>960017 >>960042 >>960096
>>959519
> How do you do this? How do you manage these targets?
Something like
# Special flags for debugging modes
ifeq ($(DEBUG),true)
BUILD_MODE=debug
CFLAGS+=-g -O0 -Wall -Werror
else
BUILD_MODE=release
CFLAGS+=-O3 --static
endif
Then later
all: bin/$(BUILD_MODE)/program
And I just use `make DEBUG=true` when building. I usually have build/debug and build/release too for differentiating the .o files and letting make figure out what needs to be updated.
>>959532
> MSVC hurt you. Don't do that.
Guilty as charged
> [-O3] has a tendency to make things worse if used everywhere. Everything else is always O2
Really? Why? I always thought O3 was like O2, but with more optimizations applied. Is there somewhere I can read more about what specific optimizations get applied at each level?
>>959654
> It means that you're using all the same optimization and control flags, except -g and -s.
I try to make my makefiles cross platform, so usually windows needs some special stuff, like .dll s in the /bin/ folder, and various -mwindows or -lmswsock flags
▶ No.959971
>>959889
The soy is strong on this one.
▶ No.960017
>>959954
> Is there somewhere I can read more about what specific optimizations get applied at each level?
https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
▶ No.960042>>960102
>>959954
O3 is like O2 but with sketchy things that don't always help enabled. Sadly, they stuck autovectorization in there and keep changing what's necessary to enable it on O2, so older code designed around that still needs it. Today, everyone has given up on the idea of compilers ever being able to do it well, so they use non-standard vector extensions that let you be explicit.
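For anyone wondering what "non-standard vector extensions" means here, a minimal sketch using the GCC/Clang vector_size attribute rather than hoping the autovectorizer kicks in:
#include <stdio.h>

typedef float v4sf __attribute__((vector_size(16)));  /* 4 packed floats */

int main(void)
{
    v4sf a = {1.0f, 2.0f, 3.0f, 4.0f};
    v4sf b = {10.0f, 20.0f, 30.0f, 40.0f};
    v4sf c = a + b;                /* element-wise add, typically one SIMD instruction */

    for (int i = 0; i < 4; i++)
        printf("%g ", c[i]);
    printf("\n");
    return 0;
}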
▶ No.960078>>972826
>>959133 (OP)
I'm so fucking tired of rustfags trying to attemp to be neutral by making threads about other stuff and then actually making a rust thread. Fucking stop shilling this shit tier language already. We have seen it, we have not liked it. Go away please, or make explicit rust threads. Stop spamming.
▶ No.960096
>>959954
Haha. My makefile has grown much too complicated for a simple solution like this... I don't wanna fuck with it too much. Thanks anyway, anon.
▶ No.960102>>960130
>>960042
O3 worse than O2 is a meme from before gcc 4.9. Especially since -fvect-cost-model and the inlining detector got better.
▶ No.960130>>960140 >>960149 >>960204
>>960102
You're free to use it, but modern code uses O2, uses the vector extensions rather than coaxing the compiler into what was wanted, and, just to piss you off, C++ templates use non-standard extensions to force inlining where desired. The compiler can't ever get a lot of that stuff right as it doesn't have any program-level awareness; it can only see one file.
▶ No.960140>>960153
>>960130
Do you even use -ffast-math? Friendly reminder that ISO math is absolutely worthless. All it does is significantly reduce floating point math speed for nothing more than improving """stability""" marginally, as in, by one or two least significant bits. If you're using floats it's because you've given up on accuracy and precision to begin with; this tradeoff is an integral part of the design of floating point numbers. So why the fuck would you care if a floating point operation comes to an exactly expected value? If you're using a system that's inherently inaccurate, you should design your code to live with this inaccuracy.
▶ No.960149>>960152
>>960130
<The compiler can't ever get a lot of that stuff right as it doesn't have any program-level awareness, it can only see one file.
>what is full link time optimization, FLTO
▶ No.960152
>>960149
>what is full link time optimization, FLTO
Doesn't work. The compiler will guess a depth to stop expanding an "inline" recursive template at even though you told it to inline and will emit a function call variant. The depth this occurs at can differ per compilation unit as the decision to do it is based on cost analysis within that file. LTO can't elide them because they aren't duplicates, they're each slightly different.
The workaround for now is using compiler extensions that forcefully inline. Where possible, unwrapping template args via parameter packs rather than recursion also helps.
▶ No.960153>>960155
>>960140
>Do you even use -ffast-math?
No.
▶ No.960155>>960157
>>960153
You should. Omitting -ffast-math automatically enables -fgimp-floating-point-math.
▶ No.960157>>960162
>>960155
No. It makes the result differ by architecture and breaks NaN. If I were doing gamedev I would but it's not a safe option to use in general.
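To illustrate why (a minimal, hedged example): -ffast-math turns on -fassociative-math among other things, and floating-point addition isn't associative, so letting the compiler regroup operations can change a result by far more than a couple of low bits:
#include <stdio.h>

int main(void)
{
    double x = 1e20, y = -1e20, z = 1.0;

    double left  = (x + y) + z;   /* 0.0 + 1.0 == 1.0 */
    double right = x + (y + z);   /* y + z rounds back to -1e20, so == 0.0 */

    /* With strict ISO semantics the source grouping is honoured; with
       reassociation allowed, either grouping may be emitted. */
    printf("%g %g\n", left, right);
    return 0;
}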
▶ No.960162>>960168
>>960157
You shouldn't rely on NaNs and you shouldn't care if results are marginally different. You forego ISO if you make games, you forego ISO if you make scientific calculations... what are the applications of ISO math anyway? I mean beyond placebo, perfectionism wank, and "fixing" floating point math related bugs by making edge case handlers for ISO math specifically instead of for all possible types of it, instead of fixing the code that causes them to appear to begin with.
▶ No.960168>>960172
>>960162
>you shouldn't care if results are marginally different.
While I'd love to have results again differ depending on what machine they were obtained on, I'm going to pass.
>you forego ISO if you make scientific calculations
That's a pretty broad statement. "scientific calculations" might at one point be done in binary16 (although this is now IEEE) and in others require full doubles. The increased demand for the latter is what killed Cray. It's also one of the features Nvidia used to paywall to prevent cheap gamer gear from filling that niche.
▶ No.960172>>960193 >>960360
>>960168
You see, when you use floats instead of fixed-point/arbitrary-precision numbers or integers, you already forfeit all pretense of accuracy and repeatability. So you might as well stop pretending like everything's still cool and precise and embrace the whole inaccuracy deal.
▶ No.960185>>960196
>>959894
Don't shit on Lisp. Learning it made me do better, more structured C code. Soyfag's just baiting.
▶ No.960193>>960223
>>960172
>You see
I could hear the lisp when I read that.
>when you use floats instead of fixeds/arbitrarys or integers, already you forfeit all pretense for accuracy and repeatability
Except you don't. Most operations will be byte for byte identical for any conforming machine when everything is configured to follow the standard, and the sketchy operations that are allowed to differ are documented as such.
>embrace the whole inaccuracy deal
I assume you've just recently encountered floating point and are still terrified and confused, and that's a bad way to be when trying to discuss advanced usage. Think of it like SQL's isolation levels where there are properties you can relax in return for speed as long as you're sure you don't need them and are willing to risk it. Distributed systems and parallel algorithms are the same way, where coherence and repeatability are the properties to decide on.
If you want reliable code, you probably don't want data races and mixed architectures creating different results. While you can just shrug and say that it'll probably be fine, you can shrug and set all your SQL transactions to READ UNCOMMITTED, too. And for some applications, yeah, it will be fine. Which is why I mentioned I'd do -ffast-math were I in gamedev. But you can't blindly take something that might require stronger properties for correctness and break them and expect it not to be broken. People do put a lot of effort into designing such software correctly.
Some further reading:
https://software.intel.com/en-us/mkl-developer-reference-c-repeatability-and-coherence
▶ No.960196>>960343 >>960407
>>960185
C code written anything like lisp code will be terrible C code. You'll end up learning a fear of mutability from lisp and be crippled in C.
▶ No.960204>>960234
>>960130
>The compiler can't ever get a lot of that stuff right as it doesn't have any program-level awareness, it can only see one file.
What is LTO?
▶ No.960223>>960256
>>960193
Give me an example of such software. It must be an example where using floating point math was an appropriate choice, i.e. where using fixed point or arbitrary precision instead wouldn't be proper.
▶ No.960234>>960354
>>960204
You don't seem to understand what LTO does. It isn't going to un-un-inline a bunch of different variants of one template and rework all the code that was compiled differently because of it. The information necessary to do that is way beyond what the bytecode can carry. It's not magic.
▶ No.960250
>>959178
ATS was written by some random Chinese associate (t. nontenured) professor at a second rate American university most known for being an 'ivy league backup'.
I'll stick to Rust, thanks.
▶ No.960252
>>959162
Marriage is a spook and Mackay's writing style is a dumpster fire.
Also, where can I find images of that cat girl's asshole being messed up?
▶ No.960256>>960257
>>960223
>Give me an example of such software.
Anything where an unpredictable result is a bad thing, usually anything that shares data between different architectures. The project I'm currently on has to deal with repeatability in the context of message replay always leading to the exact same state.
>It must be an example where using floating point math was an appropriate choice, i.e. where using fixed point or arbitrary precision instead wouldn't be proper.
That's your fear of float speaking. You're going to have to learn to work with them as they're what's supported by the hardware. They can be made predictable, other than for a few functions.
▶ No.960257
>>960256
>other than for a few functions
I should add, there's crlibm for that, but I've never needed it.
▶ No.960343
>>960196
>lisp
>fear of mutability
Lisp is one of the most mutable languages.
▶ No.960354
>>960234
I didn't mean it could do that, just that the sentence I arrowmemed was kind of wrong.
▶ No.960360>>960492
>>960172
> already you forfeit all pretense for accuracy and repeatability
Except you don't. Only if you lived in the bad old days when floating point arithmetic was the wild west. Today, you can get consistent results, across architectures, and you can do error analysis on it. Really, that's probably what scares you the most: that you can do math to determine the error in modern floating point computations. But, that would be too hard for you, wouldn't it?
▶ No.960363>>960366 >>960414 >>960427
>>959211
>more universities teach C
Gramps everyone is learning Java or Python
t. uni student
▶ No.960366>>960368 >>961087
>>960363
Maybe on unis that churn out webdevs. My old uni still used C a few years back (I'm old though and it was still Pascal for me). I like Python but it's one of the worst languages to learn first, it teaches students to be lazy and most will never be able to work without handholding. It's much better to get started on something like C then they can move to whatever gay shit they want later on.
▶ No.960368
>>960366
At my Uni (when I went there 5 years ago):
EEs learned C
CS started with Java, unless you transferred in from the local Junior College, then you could have started with C++
My "first" language was VB6 in HS (though, really, it was Casio BASIC for their graphing calculators).
▶ No.960376
>hurr durr we don't live in the past we don't have to work with memory
You don't need to work with memory, shitty pajeets and california "programmers" don't need to work with memory. In a scenario where performance matters, such as with hardware intensive mathematical research, it's usually better to hire better programmers as opposed to wasting a lot of money on hardware or maybe even require hardware that does not even exist yet.
>muh milliseconds
A single millisecond less in a function can be critical in most cases.
>it's not safe
It's open. It allows you to do whatever you need to do, which will of course lead to unsafe code with a programmer that's not used to the language. That's like saying cars are unsafe because there are people that can't drive.
>Don't you just love being totally unable to increase the size of a string without literally editing raw memory.
Wow I had to do a realloc(), I literally had to hax the RAM.
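For the record, "increasing the size of a string" in C is a realloc and a copy, not raw memory editing -- a minimal sketch (error handling simplified, names illustrative):
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

static char *append(char *s, const char *extra)
{
    size_t old_len = s ? strlen(s) : 0;
    char *grown = realloc(s, old_len + strlen(extra) + 1);
    if (grown == NULL) {          /* the original buffer is still valid here */
        free(s);
        return NULL;
    }
    strcpy(grown + old_len, extra);
    return grown;
}

int main(void)
{
    char *s = append(NULL, "hello");
    s = append(s, ", world");
    if (s) {
        puts(s);
        free(s);
    }
    return 0;
}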
▶ No.960407>>960428
>>960196
Lisp is not haskell, retard.
▶ No.960414>>960418 >>960510 >>961087
>>960363
I feel sorry for American universities. They completely sold out their integrity in order to cater to brainlet students who bring in the shekels in the form of student loans. Using C and Scheme has always been one of the best things you could do to purge brainlets from a course they don't have the brains to handle. When their little brains were exposed to pointers and recursion, they exploded; those people would then drop out or change major to political science. Schools would do this right at the start with tough weedout courses so retards wouldn't waste years of their lives. If you think C is hard, wait till you're proving shit mathematically. But alas, making students drop out will cut off a perfectly good stream of money to the administration, so what happens is they dumb down their courses to the point any retard can take them, and they use feminist inclusivity as an excuse nobody can refute without political damage. Retaining students and expanding the student body; that's the school's motto. Nowhere does learning or producing top engineers come into the picture, and yes this is true even of top schools like Harvard. Even Harvard offers bullshit courses on "human sexuality" for the shekels. America's gonna become a 3rd world country before my life is through.
▶ No.960418>>960421
>>960414
>they use feminist inclusivity as an excuse nobody can refute without political damage.
I'm against that too, but you're clearly misunderstanding--you pay for college to get a piece of paper that says you're qualified. That's it, it's not a debate hall, and you will only waste everyone's time by trying to proselytize the batshit professors. Never, ever, *ever* show your power level.
▶ No.960421>>960424
>>960418
You're the one who doesn't get it. Anyone can get a piece of paper saying they're qualified now. It's not even worth the paper it's printed on. Retards get themselves 200k debt so they can wait tables.
By all means, carry on. It's not like american society can be redeemed anyway. Professors aren't fools, they're just playing along so they don't get pushed out by the administration. Tenure will be made extinct one day, mark my words.
▶ No.960424>>960475
>>960421
You're misunderstanding my point; but it's true that degrees are worthless and inflated these days. Anyways, this pasta should help:
University is not a debate society. It is not where you learn how to think. It is not a philosophy salon. These bad and weird impressions of university are understandable among shitlibs, less so among supposedly conservative students (although can anyone who is 20 years old really be called conservative?). Universities take your money. More importantly they take your time. "But I just heard a prof say Christians are racist!" Refer back to the previous paragraph. If you think arguing with some absurd statement a tenured academic makes is worthwhile, then you made mistake #1 and confused the class for a debate society. It's not a debate society, evidence for which is plentiful: the prof doesn't give a shit what your opinions are, the other students don't give a shit what your opinions are, you will be readily and easily punished for making a nuisance of yourself by arguing, and you will end the confrontation looking like a powerless loser. Plus no one is paying to hear you argue.
Even if you had a cogent analysis, brother (which is not very likely), you will probably damage your own cause. You will be an easy target for the prof's more polished arguments (he's been doing it longer than you) and your views will look silly and poorly thought out as you fumble with your words and fight off Rhetoric 101 attacks. More likely he won't even have to really argue with you, he can easily run you into the rhetorical weeds. His captive audience is not going to back you up.
But even if by some miracle you reduced the prof to babbling in some shitthatdidnthappeninaclassroom.txt whirlwind of words, all you accomplished is that the prof now has a reason to grade you lower, and some twerp in the Young Republicans thinks you are cool. In other words, you wasted the money you paid for the class. Here's a clue: writing a letter to the editor would achieve the same goal (making speeches to people who don't care) with less effort and expense.
Generally, when you are 20 years old you should be doing less talking and more listening. You should focus on preparing for adulthood and not on voicing your opinion, which is worthless and uninteresting. Your masochistic need to get on the bad side of someone whose services you have already paid for won't get you anywhere and won't enlighten anyone. Stop acting like an Occupy Campus Building protester and finish the task of growing up. Even Buckley waited until he graduated to publish his attack on Yale (and he did so in a manner and venue that was more likely to make an impact than arguing about feminism in front of 30 bored university students). I mean, think about what you are advocating, how much sense it really makes. Are you doing it just to feel good? That's not a very good reason to do something.
▶ No.960427>>961087
>>960363
The better CS universities teach C and C++ as they're needed for effective use of CUDA which is the frontier today. 100+ tflops (at fp16) is a monstrous thing and a Python programmer isn't the one getting hired for $300k starting to work with it. If you're paying full price for a university education in becoming a webdev you're getting fucked.
▶ No.960428
>>960407
The existence of something worse about it doesn't change what I said. You're trained to avoid the mutability in lisp that in C you revel in. It stunts your growth.
▶ No.960446>>960450 >>960454
>>959133 (OP)
why are the best programmers always women, gay, or trannies?
>grace hopper
>charles babbage
>steve klabinik
>alan turing
▶ No.960450
>>960446
Not all are programmers, but:
>Leslie Lamport
>50s-70s MIT AI guys
>Bob Widlar
>Heinz von Foerster
>Terry Davis
▶ No.960454>>960460
>>960446
The modern computer was invented by Konrad Zuse. A fucking Nazi mathematician
▶ No.960460>>960463
>>960454
protip: The Turing-Church thesis means that any engineer who understood it, together with boolean algebra, would also have been capable of creating a computer. He was simply the first to demonstrate it in practice.
▶ No.960463>>960476 >>960537
>>960460
The Turing-Church thesis means that a highly limited subset of functionality is all that is needed to create a computer that is equivalent, in computational power, with all other computers. It's believed that the Analytical Engine would have also had this property, had Babbage not been Chris Roberts. You don't need to know the thesis. You don't need to know boolean algebra. You just need a representation for numbers, operations on them, and a conditional branch.
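In case anyone doubts how little is needed, here's a toy sketch of exactly that: numbers, two operations on them, and one conditional branch. With unbounded registers this is a Minsky-style counter machine, which is Turing complete. The opcodes and the example program are made up for illustration.

#include <stdio.h>

enum op { INC, DEC, JZ, HALT };                /* DEC does nothing at zero */
struct ins { enum op op; int reg; int target; };

static void run(const struct ins *prog, long reg[])
{
    int pc = 0;
    while (prog[pc].op != HALT) {
        const struct ins *i = &prog[pc];
        switch (i->op) {
        case INC: reg[i->reg]++; pc++; break;
        case DEC: if (reg[i->reg] > 0) reg[i->reg]--; pc++; break;
        case JZ:  pc = (reg[i->reg] == 0) ? i->target : pc + 1; break;
        default:  return;
        }
    }
}

int main(void)
{
    /* add r0 into r1, leaving r0 at zero; r2 stays 0, so JZ on it is an
       unconditional jump back to the top of the loop */
    struct ins prog[] = {
        { JZ, 0, 4 }, { DEC, 0, 0 }, { INC, 1, 0 }, { JZ, 2, 0 }, { HALT, 0, 0 },
    };
    long reg[3] = { 5, 10, 0 };
    run(prog, reg);
    printf("r1 = %ld\n", reg[1]);   /* prints r1 = 15 */
    return 0;
}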
▶ No.960475
>>960424
So that's what you mean. I agree with that. I don't view university as some kind of sacred place. School is just a game and an investment to me. An increasingly shitty investment.
My problem with their lack of integrity is that they're dumbing down courses so dumb shits can take them. There's less and less math in these courses, because math is not just hard, it's also objective, and it makes students drop out. These changes make degrees worthless, because they don't prove anything. This is gonna straight up kill higher education eventually.
▶ No.960476>>960537
>>960463
Yes, you are correct. My point is that, given the environment at the time, it was inevitable for humanity to arrive at a working, programmable, Turing-complete computer. The theoretical foundations of the Turing machine and boolean algebra were there for all to see.
▶ No.960492>>960494 >>960503
>>960360
If you're at the point that you're ready to sacrifice a large chunk of performance for accuracy, you should use arbitrary precision math instead. That shit is not what floats were designed to do.
▶ No.960494>>960507 >>960839
>>960492
No, I'm going to have my cake and eat it too, for the low, low cost of some up-front math. Floats are designed to represent numbers, and nearly all of the hardware already does the numerically responsible thing these days. Maybe that wasn't what the float in a Cray was designed to do, but that's what IEEE 754 is designed to do. That's what you get when you have a bunch of error analysts on the team.
▶ No.960503>>960839
>>960492
>arbitrary precision math
Almost exclusively used by webdevs afraid of floats. Managing uncertainty is nothing new to science, and if you've not noticed, we don't have companies building supercomputers for arbitrary precision.
▶ No.960507>>960511 >>960780
>>960494
I'm not familiar with this theory. Can you explain it for us? How exactly does one determine the error of floating point computations?
▶ No.960510>>960676
>>960414
>Using C and Scheme has always been one of the best things you could do to purge brainlets from a course they don't have the brains to handle.
That's revisionist bullshit. Computer scientists used to hate C, if they heard of it at all. Computer scientists had more positive things to say about Cobol than they did about C because Cobol introduced new concepts into programming, whereas C did nothing but make existing tasks more difficult and error-prone. Universities "switched" to teaching C because there was more demand for C programmers. That's because C requires many times more programmers and a lot more time than doing the same thing in a good language. If you hate the trend of lower quality education and graduates who can't program, you should also agree that C sucks because that was the start of all this bullshit. Most of these graduates who can't program and are incapable of learning another language are C weenies.
>When their little brains were exposed to pointers and recursion, they exploded; those people would then drop out or change major to political science.
I blame the classes and the language, not the students. Everyone would understand recursion if they taught it with an expression-based or functional language. C pointers don't have any relation to hardware memory addresses or to high level locations or references, so they are harder to understand than assembly or FORTH pointers or high level pointers like Ada access types. There are all these bullshit rules to C pointers that ban valid implementations like garbage collection and tagged memory, while not being useful for systems programming. People with little brains don't care about any of this. They would rather prevent smart people from making things better than help them do it.
>If you think C is hard, wait till you're proving shit mathematically.
What's hard is doing it right because C makes doing it right as hard as possible. C was "easy" because everything was such low quality that bad student code blended right in with "professional" Sun and AT&T code. The people who believed in proving shit mathematically back in the 70s would be rolling in their graves right now. They have done so many great things that have been ignored, if not mocked, by UNIX culture because they go against the UNIX "philosophy" and the "design" of C.
>But alas, making students drop out will cut off a perfectly good stream of money to the administration so what happens is they dumb down their courses to the point any retard can take them and they use feminist inclusivity as an excuse nobody can refute without political damage.
Among people who know what happened, the dumbing down is associated with C and UNIX. Compare the kinds of things computer scientists did in the 70s and 80s with what they do now. They designed and built their own computers with operating systems, compilers, programming languages, GUIs, and all the other software from scratch.
I have some friends who work at Anon MPCSL. MPCSL is
probably the most prestigious computer science laboratory in
the world. It is the place where LANs and printers and
email and such were made practical (if not invented). They
run Sun unix now, and it is totally broken, and no one can
fix it. Their expensive PhD researchers regularly waste
*days* trying to get files to come out of a printer or
sitting around waiting for NFS to get fixed so their machine
will unwedge. It's officially acknowledged both that this
is a disaster and that it won't get fixed.
Along with the degraded level of systems support has
come an extraordinary decrease in access that might let one
fix things for one's self. When I was 17, in 1978, I was a
``tourist'' at the MIT AI lab -- in other words, I used
their PDP-10 although I had no affiliation whatsoever with
the lab. When the system crashed, I would wander into the
machine room, figure out what had gone wrong, and toggle the
relevant boot sequence into the front panel of this $1e6
mainframe.
Here in the future, at most CS labs, you can't get
access to the machine room no matter who you are. If the
file server goes down at 5:01 Friday afternoon, it stays
down until 9:00 Monday when the authorized person comes in.
So a lot of famous expensive computer science PhDs, many of
them capable of designing the file server in their sleep,
grunt in disgust and go home and get nothing done for the
weekend.
I can't understand how this has been allowed to happen.
Besides being fantastically annoying for users, it's
fantastically wasteful for companies. I can't understand
why the CEO of Anon doesn't say ``God DAMN it! We are
going to have reliable mail service around here, or HEADS
ARE GOING TO ROLL!'' I can't understand why people don't
just fix things that are broken, the way they used to ten
years ago. I just can't figure out how it can have gotten
this bad.
▶ No.960511>>960780
▶ No.960537
>>960463
>>960476
The fundamental requirement for a Turing machine is infinite memory. Any computer that has ever been created or ever will be created will not and can never be a Turing machine.
▶ No.960676>>960679 >>960688 >>960764
>>960510
I'm using C as a symbol for "hard thing inept programmers have absolutely no chance of ever comprehending". If someone has trouble with pointers, then they have trouble with indirection, which is fundamental to programming. If they have trouble with recursion, they simply won't be able to do many interesting things. If they have trouble understanding resource management, they simply won't be able to understand the impact of the code they write, or even mentally process what the fuck an "object" is. C is so barebones it forces people to learn many concepts which are absolutely vital, but they're obviously not exclusive to C. The point is I don't trust a javascript bootcamper to understand these things, and I don't trust recent college graduates either. Those who learn it do it by exploring their own curiosity while the rest program by cookbook, blogs and stackoverflow.
>Everyone would understand recursion if they taught it with an expression-based or functional language.
Hoho. I listed Lisp/Scheme as a weedout language for a reason.
>C pointers don't have any relation to hardware memory addresses
That's due to virtual memory, another extremely important concept I fully expect competent programmers to understand.
Look, I have nothing but the utmost respect for the things our ancestors have accomplished. Every time I read a paper from the 70s or 80s, I get BTFO by the clarity and the amazing systems they created. You need to look at what I mean instead of interpreting it literally.
▶ No.960679>>961033
>>960676
>"hard thing inept programmers have absolutely no chance of ever comprehending"
That's funny because most of the C programmers I know are retarded embedded developers who know almost nothing about how things actually work anymore.
▶ No.960688>>961033
>>960676
>If they have trouble with recursion,
Nothing wrong with that. The C language doesn't include tail call elimination.
▶ No.960764>>960776 >>960781 >>961033 >>961037
>>960676
>Those who learn it do it by exploring their own curiosity while the rest programs by cookbook, blogs and stackoverflow.
The problem with this is that you get people like thread related: people who think that how C programs work is how a computer works. People who think that two's complement is how computers represent negative numbers, and IEEE-754 is how computers represent floating point numbers. "Competent" C programmers write code that relies on two's complement signed overflow and consider the compiler broken for knowing that there are other ways to represent signed numbers and thus their expectations are false. There was a time when I made two of these mistakes.
These people would be dumbfounded that Pascal allows access to local variables of higher stack frames. They would be puzzled by a machine that has an integer negative zero. They have demonstrated in this thread that they think floating-point arithmetic is black magic. This is what happens when you learn solely by probing the machine you have access to.
▶ No.960776>>960798 >>961044
>>960764
A computer is an infinite tape. That's why NUL terminated strings make sense. People are puzzled by this. That's what happens when you learn on physical machines.
▶ No.960780
>>960507
I don't know of any really good sources. >>960511 is a start. There is also:
https://www.mpfr.org/algorithms.pdf
This has a section on analyzing errors for proving the algorithms' correctness. Read the first ten pages.
http://people.eecs.berkeley.edu/~wkahan/Mindless.pdf
William Kahan rambles on like a madman, but there are gems to be found here and there.
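To give a taste of what the analysis looks like: the textbook a priori bound for naive left-to-right summation of n doubles is (n-1)*u*sum(|x_i|) to first order, with u the unit roundoff (2^-53 for binary64). A small sketch, purely illustrative and not taken from either paper:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x[] = { 4.0/3.0, 1.0/3.0, 1.0/3.0 };
    int n = 3;
    double sum = 0.0, abssum = 0.0;
    for (int i = 0; i < n; i++) { sum += x[i]; abssum += fabs(x[i]); }
    double u = DBL_EPSILON / 2.0;          /* unit roundoff for binary64 */
    double bound = (n - 1) * u * abssum;   /* first-order forward error bound */
    printf("sum = %.17g +/- %.3g\n", sum, bound);
    return 0;
}

The bound is loose, but it's a bound you can compute before running anything, which is the whole point.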
▶ No.960781>>960787
>>960764
>how C programs work is how a computer works
We use it for drivers because it is how a computer works.
>People who think that two's complement is how computers represent negative numbers
That is how it works on all the modern architectures I know of. You seem to think it's just a language construct. It's not. It's understood at the assembly level.
>and IEEE-754 is how computers represent floating point numbers
This is also pretty normal on modern hardware other than for embedded.
>These people would be dumbfounded that Pascal allows access to local variables of higher stack frames.
Most of the C programmers over 40 today have written Pascal as it was better supported in DOS. No one wants to go back to it.
>They would be puzzled by a machine that has an integer negative zero.
Yeah, we would, as that shit hasn't been a thing in almost 60 years as far as I know.
>They have demonstrated in this thread that they think floating-point arithmetic is black magic
The C programmers here are the ones who understand it and are teaching the webdevs.
TL;DR you're a faggot.
▶ No.960787>>960804
>>960781
> We use it for drivers because it is how a computer works.
You use it for drivers because it is more portable and easier to understand than assembly. If you wrote it in assembly, some jackass would use instructions only available on one VIA chipset.
> It's understood at the assembly level.
Yes, but it is important to understand that it is a design decision, not a magical limitation of computer hardware.
> This is also pretty normal on modern hardware
< All hardware I have access to is all computers ever.
> No one wants to go back to it.
Then why do people continue to write programs in Delphi? (It's disgusting, really.) I will argue that most C programmers don't want to go back to Pascal because they associate it with DOS, not because of the language.
> Yeah, we would, as that shit hasn't been a thing in almost 60 years as far as I know.
< All hardware I have access to is all computers ever.
▶ No.960798
>>960776
<muh Turing machine
Turing machines a shit.
▶ No.960804>>960820
>>960787
>it is a design decision, not a magical limitation of computer hardware
That's nice and all, but it is a magical limitation since nothing as far as I know has supported anything else in hardware since I've been alive. The only place I still have to think about it is doing IP checksums as they require ones' complement math. You can roll your own signed magnitude library but it will be slow as fuck and pointless.
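For reference, that checksum is about the only ones' complement arithmetic most of us ever touch; a rough RFC 1071-style sketch (even-length buffer assumed, odd trailing byte and byte-order details glossed over, helper name made up):

#include <stddef.h>
#include <stdint.h>

static uint16_t inet_checksum(const uint16_t *words, size_t len_bytes)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len_bytes / 2; i++)
        sum += words[i];                       /* 16-bit ones' complement sum */
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);    /* fold carries back in */
    return (uint16_t)~sum;                     /* ones' complement of the sum */
}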
>All hardware I have access to is all computers ever.
It's unlikely you'll ever write code on something general purpose that supports floats but does it differently. Even shitty Cortex-M4Fs support the important parts of the standard. It's more likely you'll write code for things with no FPU at all.
>Then why do people continue to write programs in Delphi?
Is this 1998? Shit's dead. I know embarcadero tries to claim that millions of people are using its attempt to monetize the corpse but you go look at their forum post rate and tell me that it's a 'millions strong' community: https://community.embarcadero.com/forum There's probably more code being written in Cobol today.
>I will argue that most C programmers don't want to go back to Pascal because they associate it with DOS
From using Turbo Pascal back in the day, it only worked because we could easily mix Pascal and assembly. That's no longer necessary/fashionable, and without assembly, the type system is suffocating for systems work. It's dead at that level.
▶ No.960819
Pascal was also popular on MacOS, and 8-bit systems. The compilers actually worked, unlike C. Frankly I'm surprised C became a thing outside of Unix. Did Microsoft start pushing C or something?
▶ No.960820>>960821 >>960826 >>972163
>>960804
I went for a walk, and I realized the thing that I've been missing: the final nail in your bullshit coffin. Microcode. Every modern x86 processor is another processor entirely that runs one program: an emulation of x86. C can't be close to the hardware because the hardware actually prevents anyone from using it. C isn't close to the hardware, the hardware spends a lot of time and effort to execute C efficiently. And RISC ISAs were all developed to execute C efficiently. C isn't close to that hardware either, that hardware supports C.
And all of this hardware does things that C does not support. Even RISC has operations that need inline assembly to actually use, because C is not close to the hardware.
▶ No.960821
>>960820
Which hardware has direct support for interpreting the C language? As far as I know, all programs written in the C language need to be compiled into machine code before the machine will execute that program.
▶ No.960824
Lol at all these butthurt polyps and I'm a C-fag. I'm the man who should be angery at this. Nice one op.
▶ No.960826>>960834
>>960820
>get BTFOed repeatedly
>"the final nail"
hee
>C can't be close to the hardware because the hardware actually prevents anyone from using it.
That's about as autistic as one can get. It doesn't matter how anything below the hardware interface is implemented because you can't target it - switching from C to Pascal isn't going to let me write my own microcode. You've completely lost sight of the topic.
▶ No.960834
>>960826
> >get BTFOed repeatedly
< demonstrates that he knows dick more about how a computer works than a pajeet webdev or a skiddie
< thinks he's won the argument
> That's about as autistic as one can get
Yeah. The truth is out there. Which means that you don't have it.
▶ No.960839>>960865
>>960503
If managing uncertainty isn't an issue, then how is it an issue if you use a slightly uncertain but much faster floating point math engine? That's a direct contradiction. Surely you would take 5 days of computation over 2 weeks of computation if the results are practically the same to you.
>webdevs
And bank software devs. And basically anything where precision is actually important. Sure enough, scientific calculations will only give you ballpark numbers anyway, since your formulas are incomplete and the source data has a wide margin of error to start with. So you wouldn't care a whole lot if the last few digits were wrong.
>>960494
Well except you're not having your cake. By choosing to use ISO math you chop your floating point speed by a factor of anywhere between 3 and 10.
▶ No.960861
>>959133 (OP)
>faggot doesn't want to use the white man's language
I am shocked
▶ No.960865>>960867
>>960839
>then how is it an issue if you use slightly uncertain but much faster floating point math engine?
Uncertainty that is the same every time a program is run and no matter what runs it is uncertainty that can be reasoned about and planned for. If not, it's not repeatable, and random results are generally not something that reliable software wants to be producing. You don't always need repeatability, but you don't always need mutexes, either. Blindly removing either from code is not wise. But in both cases, you'll probably not notice a difference immediately.
By enabling options like -ffast-math (and -fassociative-math), the compiler can generate code that will behave differently by platform. It's not an issue of quantifiable floating point error, it's something that as a programmer you have no way to handle. That's fine for things like video games, not fine for things like databases.
>And bank software devs
They don't. They use fixed point. Cobol's type system is built for their needs and it's easy to declare a type like PIC S99V999 that exactly matches the database's type. Arbitrary precision involves allocations and error handling which make it difficult to verify, so it gets avoided in reliable systems. It really is /mostly/ used by webdevs.
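For the curious, the fixed point idea is nothing exotic; a minimal C sketch of the same shape as PIC S99V999, i.e. three fractional digits carried in an integer (names and scale are made up, not how any particular bank does it):

#include <stdint.h>
#include <stdio.h>

typedef int64_t money_t;            /* value in thousandths of a unit */
#define MONEY_SCALE 1000

static money_t money(int64_t whole, int64_t thousandths)
{
    return whole * MONEY_SCALE + thousandths;
}

int main(void)
{
    money_t a = money(12, 340);     /* 12.340 */
    money_t b = money(0, 10);       /*  0.010 */
    money_t total = a + b;          /* exact: no rounding, ever */
    printf("%lld.%03lld\n", (long long)(total / MONEY_SCALE),
                            (long long)(total % MONEY_SCALE));
    return 0;
}

Addition and comparison stay exact; only division and interest calculations need an explicit rounding rule, which is exactly where the business logic wants to dictate it anyway.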
▶ No.960867>>960868 >>960883
>>960865
How exactly is it a problem if identical inputs produce outputs that diverge by a couple of least significant bits, when inputs differing by one least significant bit produce outputs that diverge by a couple of least significant bits anyway? In the context of not caring about least significant bits it makes no sense. Surely you realize that the whole premise, i.e. identical inputs, is not realistic to begin with, with 64/80 bit floating point numbers. That's the core reason for something as basic as not using the == operator on floats.
▶ No.960868
>>960867
>identical inputs and outputs
▶ No.960883>>960885
>>960867
>How exactly is it a problem if identical inputs produce outputs that diverge by a couple of least significant bits
Because "a couple of least significant bits" can be enough to change the branches taken and the code can do totally different things. Example in python:
>>> 1.0/3.0 + 1.0/3.0 + 4.0/3.0 >= 2
True
>>> 4.0/3.0 + 1.0/3.0 + 1.0/3.0 >= 2
False
▶ No.960885>>960886
>>960883
Remember the not using == operator rule? You just violated it. Naturally you get some shitty behavior. You don't use discontinuous functions when operating on floats, you compute coefficients with some 0..1 window instead.
▶ No.960886>>960887
>>960885
Don't rush to autism. If you want to rewrite it in terms of flipping a sign and checking for >0 you can. It's just an example that order changes things.
▶ No.960887>>960890
>>960886
No I mean you're using floats wrong. You never check them against a value to do different things (unless it's purely for optimization). You compute a coefficient and apply it to your values. Fuzzy logic, ever heard of it? You ONLY do fuzzy logic wherever floats are involved. Leave discrete logic for integers and booleans.
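Something like this is presumably what anon means by computing a coefficient, as a sketch (the window [a, b] and the function name are made up):

#include <math.h>

/* Instead of a hard if/else on x, blend the two outcomes with a continuous
   weight; a 1-ulp wobble in x then only wobbles the result by about 1 ulp. */
static double blend(double x, double a, double b, double lo, double hi)
{
    double w = (x - a) / (b - a);      /* map x onto [0,1] across the window */
    w = fmin(fmax(w, 0.0), 1.0);       /* clamp, still continuous, no jumps */
    return (1.0 - w) * lo + w * hi;
}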
▶ No.960890>>960898
>>960887
>No I mean you're using floats wrong.
You're too autistic to even understand why I'm calling you out for autism. Jesus.
Here, because otherwise we'll be going in circles for hours:
>>> int(1.0/3.0 + 1.0/3.0 + 4.0/3.0) > 1
True
>>> int(4.0/3.0 + 1.0/3.0 + 1.0/3.0) > 1
False
Again, this is just an example that branches in code can change. Example.
▶ No.960898>>960901
>>960890
It's not that I'm autistic, it's that you're stupid. It's the third time I'm telling you that using discontinuous functions (e.g. a branch that does entirely different things) is invalid for floats. That's because floats represent continuous real numbers, but their binary representation makes their computations chaotic. When you apply a discontinuous function to a chaotic function, you get random-looking garbage at the output. This is why you use CONTINUOUS functions for this, i.e. fuzzy logic.
float sigma = (1.0/3.0 + 1.0/3.0 + 4.0/3.0) * 0.3; // ~0.6
return function(input * sigma);
Or you use discrete numbers (integers) where discontinuity of a function bears no meaning as the whole number set was not continuous to start with. If you do the branching shit, make sure both sides of the branch produce output that's continuous with each other, such as in the zeta function.
▶ No.960901>>960903
>>960898
>using discontinuous functions (e.g. branch that does entirely different things) is invalid for floats
What the fuck, anon.
▶ No.960903
>>960901
Well what the fuck do you think "don't use == on floats" means, brainiac? Don't use >= or even just > either because they're the same fucking thing, dressed differently.
▶ No.960905>>960907 >>960910
Interesting topic. I tried it out on my machine with Ada and was somewhat surprised at the result.
Now what black magic could be going on here?
with Ada.Text_IO; use Ada.Text_IO;
procedure Main is
begin
   Put_Line(Boolean'Image(1.0/3.0 + 1.0/3.0 + 4.0/3.0 = 2.0)); -- prints TRUE
   Put_Line(Boolean'Image(4.0/3.0 + 1.0/3.0 + 1.0/3.0 = 2.0)); -- prints TRUE
end Main;
▶ No.960907>>960913
>>960905
Could be compile-time math; there would be no problem computing a perfectly correct result and storing the final value in the binary. Try forcing the calculations to happen at run time.
▶ No.960910>>960913
>>960905
Python math is extreme bullshit and maximizes the error, I just used it as it's convenient for people to play with. You might need to tweak the numbers used.
▶ No.960913>>960914 >>960915
>>960907
>compile time
If I prompt the user for input at run time it still works.
>>960910
I tried the canonical example of 0.1+0.2=0.3 as well; that also gives the expected, or rather unexpected, result of true.
I'll dig into this a bit more at a later time I think. Interesting stuff.
▶ No.960914
>>960913
Check the documentation to see whether the float equality operator has a sigma a few bits wide.
▶ No.960915
>>960913
I'm not sure what requirements Ada has on float literals, but you might want to hide them from the compiler by passing them as arguments to the program so you can be sure it's not cheating. If that's gnat it's able to do some pretty crazy transforms thanks to gcc as well.
▶ No.961033>>961044
>>960679
That's unfortunate... I don't really know what to say.
>>960688
Just because the standard doesn't mandate it doesn't mean compilers don't do it. Standards don't run or compile code. Always look at what your compiler does.
>>960764
True. This is a huge problem. People's mental model of the machine is specific to modern computers, while C was created in a much more diverse context. Personally, I use compiler flags that force the compiler to define behavior so that the compiler and I are on the same page. For example, I explicitly make GCC assume 2's complement wrapping with -fwrapv.
I wish C types were less "portable", to be honest. Instead of "float", I wanted "ieee754_float". Instead of "unsigned int", I wanted "unsigned complement int32". Portability must be earned through typedefs, or not exist at all.
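You can get partway there today by earning the portability explicitly, as a sketch (the typedef names are placeholders in the spirit of the above, not standard C; CHAR_BIT, __STDC_IEC_559__ and C11's static_assert are real, the rest is illustration):

#include <assert.h>
#include <limits.h>

typedef float ieee754_float;          /* placeholder name, not standard C */
typedef unsigned int uint32_twos;     /* ditto */

static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
static_assert(sizeof(ieee754_float) * CHAR_BIT == 32, "need 32-bit floats");
static_assert(sizeof(uint32_twos) * CHAR_BIT == 32, "need 32-bit unsigned int");
#ifndef __STDC_IEC_559__
#error "this code assumes IEC 60559 (IEEE 754) floating point"
#endif

It only refuses the odd targets instead of emulating them, but at least the assumption is written down where the compiler can check it.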
▶ No.961037>>961044
>>960764
>these "competent" losers are angry with the compiler for knowing about other worlds
Compilation does not happen in a vacuum. It happens with a target architecture -- one that is known to the compiler throughout the entirety of compilation. Any but the most useless of programs is going to depend on some characteristics of its environment. If I make a tool for cPanel servers, it's going to need some work (some porting, an application of a verb) to work elsewhere, and it might still make architectural decisions contrary to the interests of the new platform: f.e., a program running as a utility on a cPanel server is not going to eagerly consume tons of memory and use as many CPUs as possible to get that extra % of speed, the way a program intended for a desktop might.
I have a program that dies with an error if the environment is even as unexpected as /etc/trueuserdomains not existing, and you want me to give a shit about one's complement and other bullshit that no server I will buy in the next 20 years will care about? You want GCC to say, "well I know I'm building shit for x86_64, but what if computers with infinite-range primitive numbers become popular?" You want to let college sophomores piss sensible-for-no-earthly-machine optimizations into otherwise practical language compilers so that they can get a degree and I can get fucked?
No. I'm perfectly aware of integer negative zero. The brain-addled one here is you.
▶ No.961044>>961063 >>961094 >>961106
>>960776
Or you could do the white man's thing, specify data chunk length in the header.
>>961033
All of that shit is hardware-defined. If you don't like it, your only other option is emulation. If you think emulating special snowflake types on hardware that doesn't support it using C of all languages is a good idea, then you should stick to Javascript. Being platform-agnostic while being fast is the core reason why C is good.
>>961037
>Any but the most useless of programs is going to depend on some characteristics of its environment.
Not if you write them properly. Tying something to your specific platform is piss easy, just like it is piss easy to make a program that takes 10 million years to run. Making it compile the same code on different machines is a whole other story. You would have a point if you were talking about writing microcontroller firmware, pinouts and internal components are chip-specific so you can't really get around that. But you're writing a fucking x86 app. You're nowhere remotely close to the hardware. The only thing there worth noting is that there are millions of frameworks that do exactly the same job to choose from, in addition to the choice of implementing that functionality yourself, but that's like saying that riding a bike is harder than you think if you keep putting sticks in your own spokes.
▶ No.961063>>961068 >>961086 >>961104
>>961044
>Being platform-agnostic while being fast is the core reason why C is good.
Except fast code can't be platform agnostic retard.
▶ No.961068>>961073
>>961063
Right, right. The only way to achieve any kind of good performance is to use Assembly.
▶ No.961073
>>961068
Wrong. It just won't be cross platform code.
▶ No.961086>>961091
>>961063
This.
Currently C fails to be an abstract representation of the machine; it's too high level. For example, C doesn't have an operator or function for bit rotation, vector operations, or population count. So you must rely on compiler-specific intrinsic functions to do that, or write conformant but slow code for something that could be translated into a single machine instruction where available.
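The usual workaround, as a sketch: write the rotate in a UB-free form that recent GCC and Clang tend to recognize and turn into a single rotate instruction (that's an optimizer habit, not a language guarantee), and put the popcount intrinsic behind a fallback:

#include <stdint.h>

static inline uint32_t rotl32(uint32_t x, unsigned n)
{
    return (x << (n & 31)) | (x >> (-n & 31));   /* no shift count ever hits 32 */
}

static inline int popcount32(uint32_t x)
{
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_popcount(x);                /* compiler intrinsic */
#else
    int c = 0;
    while (x) { x &= x - 1; c++; }               /* clear lowest set bit */
    return c;
#endif
}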
▶ No.961087
>>960366
>>960414
>>960427
Luckily only the first course in the programming sequence at my uni teaches Java; they have it that way because many non-CS/CE/SE majors require programming. Programming 2 is C++ and the advanced courses deal with assembly and C (for comp engineering, which is my major).
Programming is practically a Gen Ed course for Engineering students here, though ironically CS is a liberal arts major.
▶ No.961091>>961095
>>961086
For vectorization if you only rely on compiler-specific pragmas your code is still portable although its performance isn't.
▶ No.961094>>961100 >>961110
>>961044
>specify data chunk length in the header.
This is really dumb for strings. It adds bloat, it adds an unwanted maximum length, it prevents writing until the full length is known or requires a seekable stream, and it adds byte ordering issues. Unsurprisingly, string tables don't use pascal strings.
▶ No.961095>>961096
>>961091
Sorry but compiler intrinsics are not C
▶ No.961096
>>961095
Yes, but, I repeat, if you only require pragmas it'll work even if your compiler doesn't implement that pragma.
▶ No.961100>>961118
>>961094
Being unable to use the null character sure is convenient. Being unable to tell a string's length without counting the entire thing character by character is convenient too.
>chunk size metadata is bloat
Just when I thought you said the stupidest thing ever, you just keep talking.
▶ No.961104
>>961063
Fast code can be platform agnostic, it's exactly why we use C. A good example is vector support (the modern, explicit kind that is a non-standard compiler extension but is going to be standardized) where modern gcc is actually astonishingly good at producing optimized assembly per platform. That's great as vector ops get pretty crazy with a lot of ways to do the same thing and there is massive variation between platforms.
It's actually a bit spooky how good it is at optimizing vectors now, it will even optimize across use of __builtins, using the logic of what you wrote to choose a different instruction than you thought you were forcing. I don't know how they got from the shitheap that was gcc 4's optimizer to gcc 7's but I'm impressed.
▶ No.961106>>961112 >>961113
>>961044
> If you don't like it, your only other option is emulation.
That is what I want. If I write that a variable is of type ieee754_float, I want the compiler to generate correct code for that. It will be fast on the overwhelming majority of machines out there. In case the hardware doesn't do standard floats, I want the compiler to do it in software for me. Just generate the code that does it, somehow. I don't want it to silently switch to whatever unknown implementation of floating point the hardware has. It will be slow on an extremely small number of targets, but my assumptions about how the code works will not be violated and that's extremely important for me. Later on I can judge the cost/benefit of writing a custom version for the odd architecture.
>If you think emulating special snowflake types on hardware that doesn't support it using C of all languages is a good idea, then you should stick to Javascript.
Please. Do you know what libgcc is?
<GCC generates calls to routines in this library automatically, whenever it needs to perform some operation that is too complicated to emit inline code for.
<Most of the routines in libgcc handle arithmetic operations that the target processor cannot perform directly.
<This includes integer multiply and divide on some machines, and all floating-point and fixed-point operations on other machines.
It's already being done. Language features as basic as multiplication and division are already being emulated in software if the hardware doesn't support them.
▶ No.961110>>961125 >>961155
>>961094
> It adds bloat
The most common C string operation is strlen. Keeping around the most commonly accessed information about a buffer isn't "bloat", it's the next logical step.
>it adds an unwanted maximum length
So? A size_t length is able to hold the size of any object that can fit in memory. Sizeof(size_t) is essentially the same as sizeof(void *). Obviously a single byte for the size is retarded.
>it prevents writing until the full length is known or requires a seekable stream, and it adds byte ordering issues
Explain.
▶ No.961112>>961128
>>961106
There's a good reason why specific types are not implemented. It's the same reason why all pointer types (except char *) are incompatible. And that's to discourage using platform-specific quirks. If you're gonna rely on those you might as well rely on bugs in the OS, don't forget to drop all pretense to being cross-platform in the process.
▶ No.961113>>961128
>>961106
>Do you know what libgcc is?
I don't think you understand it. It used to be pretty common to leave multiply and divide out of the ISA as they're essentially compound instructions. On some architectures that meant the toolchain needed to provide an implementation, on others that meant calling a function library on the CPU (the Alpha's PAL code is the progenitor of microcode). It's not creating wacky types out of thin air or doing oddball math, it's just necessary platform support for CPUs that aim for smol rather than IPC. It's not implementing signed magnitude or whatever other stupid shit you're dreaming of.
▶ No.961115>>961121
>>961099
It's actually pretty large. Most strings in compiled binaries tend to be 5-6 bytes long. Adding 8 bytes of length to get the same effect as a 1 byte null will likely double the size of a stringtable. It's bloat, and is why it isn't done that way.
▶ No.961118>>961121 >>961122
>>961100
Why do you want to put nulls in text? They're not valid there. Stop abusing types.
▶ No.961121>>961134
>>961115
Funny you should say that, the pointer to the string is bigger than the string itself already. Including a string length header drops efficiency from 35% to 25%. It wasn't good to start with, so it isn't terrible after the change.
>>961118
UTF-32 would have a word with you.
▶ No.961122>>961134
>>961118
>They're not valid there.
hurr durr ascii is the only encoding
▶ No.961125>>961131
>>961110
>Explain
Null terminated strings can be written and processed serially. Strings that provide the length first can't as the length isn't known until the processing completes. The byte order issue is in loading data files where the endianness of a string length might differ. It's all extra complexity and bloat to optimize the wrong thing.
▶ No.961128>>961132 >>961133 >>961145 >>961269
>>961112
>It's the same reason why all pointer types (except char *) are incompatible. And that's to discourage using platform-specific quirks.
Pointer types are incompatible due to strict aliasing, which is pretty stupid for systems programming since people can't use a uint8_t to alias some buffer without turning that idiotic "feature" off. Incidentally, Linux does compile with it off.
>If you're gonna rely on those you might as well rely on bugs in the OS
That's not unreasonable. People do rely on OS bugs every single day. Buggy OSes will remain buggy forever, simply because fixing things would break ABI. They will create new interfaces and support the old ones forever. Linux does this; no matter how crap some interface is, it will be there forever because user space is sacred.
>don't forget to drop all pretense to being cross-platform in the process.
I never pretended to care about that in the first place.
>>961113
If it can implement all fixed and floating point operations in software, why can't the compiler provide a consistent implementation of something even on platforms with no hardware support?
▶ No.961131
>>961125
Oh geez, endianness of the data stream can impact the result. It's gonna be super hard to work around it. It's not like every single piece of software in existence that can connect to the internet already does this, not only proving that it's trivial to accomplish but also by itself eliminating the need to actually do it.
>serial
Still not seeing how processing an array without metadata is different from processing an array with one.
▶ No.961132>>961152
>>961128
If you want something consistent that way, use Java. Evidently you don't give a sliver of shit about writing fast or stable software - this language seems right up your alley.
▶ No.961133
>>961128
>If it can implement all fixed and floating point opertions in software, why can't the compiler provide a consistent implementation of something even in platforms with no hardware support?
Better languages than C do.
▶ No.961134>>961143
>>961121
>>961122
Why would you allow nulls in your unicode text? What possible value is that to you? Reject that shit.
▶ No.961143>>961154 >>961155 >>961156
>>961134
Why would you not allow nulls in strings? You realize the only thing this accomplishes is gimping the data type? By having special bytes you preclude the use of strings as universal byte strings; now they're only good for ASCII-compatible text strings. Plus you can't allocate the space properly without knowing the length beforehand, which is not the case when the first thing you know about the string is its size. Come to think of it, whenever string transmission is involved, the size is somehow also passed, making it an awkward side-by-side implementation and also making the terminator redundant. The only context where using a null terminator makes any sense at all is something like
while (*src) *dst++ = *src++;
but even this can be rearranged into a simple for-loop (or while-loop) if you know the length beforehand, to the same effect.
▶ No.961145>>961152
>>961128
>why can't the compiler provide a consistent implementation of something even in platforms with no hardware support?
It does. gcc uses soft-fp for this and is slow as fuck. No C programmer is going to actually write floating point heavy code on hardware without floating point though so it's just bitching about nothing.
▶ No.961152>>961153
>>961132
Using ieee754_float will be exactly as fast as regular C code on targets that support it (90% of them?), and it will *work as expected* on targets that don't. It's fast in most cases, explicit, unambiguous, predictable and stable. The regular float type is whatever the machine supports; it might be ieee754 in most cases, or something else entirely.
Do you actually have something substantial to say to this, or are you just gonna tell me to go back to java again?
>>961145
You're right, I would avoid such code in those cases. I'd rather produce a custom algorithm for that hardware because I don't believe in cross-platform code. However, in the event that such FP code does get compiled for the FP-less architecture, it's nice to know that GCC does the right thing. It's slow but it works. Better than giving me completely wrong results. The problem should be "this code is too slow for us to use; how do we improve that?" not "this code is giving me completely non-sensical results; wtf is going on?"
Optimization shouldn't come at the expense of your program's correctness.
▶ No.961153>>961159 >>961162
>>961152
My point the entire time was that if you use discontinuous functions on floating point numbers, then you're doing it wrong and if you believe you're doing it right it's because you're a clueless codemonkey of a programmer.
▶ No.961154
>>961143
>Why would you not allow nulls in strings?
Because it doesn't code for a character. It's only allowed in unicode for performance reasons to avoid having to handle invalid codepoints at the lowest layer. As higher layer code, you're supposed to reject garbage. And there's an awful lot of garbage in unicode. There are entire ranges that aren't even allowed to be encoded at all.
If you want an array of bytes, use one. If you want a string of characters use one. Don't confuse the two.
▶ No.961155>>961166
>>961110
>Obviously a single byte for the size is retarded.
>PASCAL on suicide watch
>>961143
here's the really nice thing about null-terminated strings:
void cpy(char *src, char *dst){
    if(!*src){ *dst = '\0'; return; }  /* hit the terminator: finish the copy */
    *dst++ = *src++;
    cpy(src, dst);                     /* tail call, i.e. recursion as iteration */
}
basically, you can use recursion for iteration. Not very useful here, but in certain cases it allows for very elegant code.
▶ No.961156>>961217
>>961143
>Why would you not allow nulls in strings? You realize the only thing this accomplishes is gimps the data type? By having special bytes you preclude the use of strings as universal byte strings, now they're only good for ASCII-compatible text strings.
Treating strings as byte buffers is the red pill. Notice how similar many of the str* and mem* functions are? The only difference is the NUL handling. If we get rid of this NUL garbage, we end up with an unified discipline for dealing with memory of any kind. It becomes simpler to deal with.
The idea of a "string" is stupid. Having a NUL doesn't make it text. Being encoded as text makes it text. The NUL is irrelevant and is only there as a sentinel. "Strings" are really 0-terminated byte arrays.
struct memory { size_t size; unsigned char *pointer; };
enum encoding { UTF8, UTF16BE, UTF16LE, /* ... */ };
struct text { struct memory memory; enum encoding encoding; };
>Plus you can't allocate the space properly without knowing the length in beforehand, not the case when the first thing you know about the string is its size.
>Come to think of it, whenever string transmission is involved, the size is somehow also passed, making it an awkward side-by-side implementation and also making the terminator redundant.
Correct. Look up any I/O system call. You give it the number of bytes you want read and it will give you the number of bytes actually read. You give it the number of bytes you want written and it will give you the number of bytes actually written. The buffer size and the length of the content are always given to you.
The operating system does it right. It's the C world that adds this asinine NUL business on top of the reasonable OS interface. "String" literals are the ones that end up with NULs in them. Command line arguments get NULs due to convention/standards. C makes the same mistake replacing return codes with errno.
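As a sketch of what that unified discipline buys you (structs repeated so the fragment stands alone; none of this is a real library):

#include <stddef.h>
#include <string.h>

struct memory { size_t size; unsigned char *pointer; };

static int memory_equal(struct memory a, struct memory b)
{
    return a.size == b.size && memcmp(a.pointer, b.pointer, a.size) == 0;
}

/* caller guarantees start + len <= m.size */
static struct memory memory_slice(struct memory m, size_t start, size_t len)
{
    struct memory s = { len, m.pointer + start };
    return s;    /* no copy, no NUL to write, embedded zero bytes are fine */
}

The same two functions work for text, binary blobs, or slices of either, which is the point being made above.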
▶ No.961159>>961251
>>961153
Wait, are we talking about the same thing? Because I wasn't part of the floating point theory discussion above. My point is I'd like it if C let me ask for IEEE floating point explicitly so that I always know how my program will behave. I'm not against floating point in any way. It will be fast on most machines and always behave according to how I understand floating point. Same thing about fixed point; I'd like it if I could pretend everything was 2's complement, and not have to worry about my compiler deleting overflow checks because they're undefined dead code.
▶ No.961162>>961251 >>961253
>>961153
You were too autistic to see that it was an example of behavior changing due to a loss of a property.
▶ No.961166>>961169
>>961155
C doesn't usually have tail call elimination although I'm sure there's a gcc flag for it.
▶ No.961169>>961170
>>961166
GCC does do TCO. It's a pretty basic optimization. Just because C doesn't mandate it doesn't mean compilers won't do it.
▶ No.961170>>961217
>>961169
It does it sometimes. That the programmer can't control when makes it dangerous.
▶ No.961217>>961227 >>961257 >>961283
>>961170
It's just iteration, so at any decent optimization level gcc will optimize it as such. I know the sufficiently smart compiler is a meme, but to my knowledge there's no reason that every call immediately followed by a ret can't be replaced with a jmp.
>>961156
int open(const char *pathname, int flags);
where do we pass the length, friend?
▶ No.961227>>961339
>>961217
Depends on a lot of factors. Templated tail recursion is the example that most frequently blows everything up due to the compiler arbitrarily switching to non-recursive.
▶ No.961251>>961308
>>961159
This board has no IDs, so by engaging in an argument you automatically assume the proponent's/opponent's identity, both of which are singular.
As I said, it's very easy to do something that ties your code to one specific platform. But we as a species came a long way to ditch those shitty habits, and you shouldn't stay stuck in the metaphorical stone age where using assembly and bugsploiting out the ass was the norm.
>>961162
Do I still need to point out that the only reason there's any property loss is because you use a discontinuous function? If your function was continuous there would be no appreciable difference whatsoever between different implementations of floats that produce slightly different results.
if ( input < 0 ) return sin ( input ); else return input;
This function is continuous, you can smoothly plot its outputs and at no point there will be any jumps or holes. You can even smoothly differentiate this particular example, even though that's not strictly necessary.
if ( x < 0 ) return 10; else return 20;
This function is discontinuous and will behave badly at the edge due to chaotic nature of floating point number computations. You should never use this. Ideally you should just use mathematical functions with no branching whatsoever.
▶ No.961253>>961259
>>961162
Oh wait
>nasal demons
Opinion discarded.
▶ No.961254
▶ No.961257
>>961217
>length
If it was string type with length metadata, you wouldn't need to. Length data is embedded in the string.
▶ No.961259>>961260
>>961253
The OpenCV people clearly don't have the deep understanding of math like you do, anon.
▶ No.961260
>>961259
Being a good mathematician does not preclude being a shitty programmer. In fact, it's often the case that professional mathematicians produce king-pajeet tier code.
▶ No.961269>>961271
>>961128
>Pointer types are incompatible due to strict aliasing, which is pretty stupid for systems programming since people can't use a uint8_t to alias some buffer without turning that idiotic "feature" off. Incidentally, linux does compile it with off.
n1570
6.5 Expressions
7 An object shall have its stored value accessed only by an lvalue expression that has one of the following types:88)
* a type compatible with the effective type of the object,
* a qualified version of a type compatible with the effective type of the object,
* a type that is the signed or unsigned type corresponding to the effective type of the object,
* a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object,
* an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or
* a character type.
6.2.5 Types
15 The three types char, signed char, and unsigned char are collectively called the character types.
*(unsigned char *)p is always legal and does not break strict aliasing rules.
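In practice that means something like this is fine under the quoted rules, purely as an illustration:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
    double d = 1.0 / 3.0;
    const unsigned char *p = (const unsigned char *)&d;  /* character-type lvalue */
    for (size_t i = 0; i < sizeof d; i++)
        printf("%02x ", p[i]);    /* byte order is whatever the target uses */
    printf("\n");
    return 0;
}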
▶ No.961271>>961274 >>961276
▶ No.961274>>961368
>>961271
If you need to alias a bunch of structs, unions or arrays over a byte string, you declare that byte string as char * instead of some other random type.
▶ No.961276>>961279 >>961368 >>965232
>>961271
5.2.4.2.1 states CHAR_BIT must be at least 8. Char is the smallest integer type and must be able to encode at least -127 to 127 or 0 to 255. Hence, if uint8_t is available on your system, it must necessarily be a character type. So uint8_t can alias anything you'd like.
▶ No.961279>>961280 >>961284
>>961276
What if your system is using 10 bit words? uint8_t won't alias to jack shit.
if ( value.toString().charAt(1) == 45 ) { ABC } else { XYZ }
if ( value & 0x80000000 ) { ABC } else { XYZ }
Notice how both of these are ass-backwards and retarded, except one is distinctly currynigger approach and the other is distinctly bitnigger approach. Both of these are dogshit code, if you do anything of the sort you should apply yourself. And if you're not doing it, then by default you're writing perfectly good and portable C code.
▶ No.961280>>961281
>>961279
If you have a 10-bit char system then you don't have uint8_t.
▶ No.961281>>961285
>>961280
Even better, your code with uint8_t in it wouldn't compile.
▶ No.961283>>961288 >>961339
>>961217
just expose a syscall that accepts the length.
geez do I have to do everything for you?
patch your OS, then require that anyone using your software needs to use that patch.
it's 1000% as practical as worrying about 10-bit machines that not even your grandchildren will encounter outside of some kind of challenge.
▶ No.961284
>>961279
I guess I could exaggerate the bitniggership a bit more so it's a closer match to the curryniggership. Also to better exemplify all that crap, I suppose.
if ( value >> ( sizeof ( value ) - 1 ) * 8 & 0x80 ) { ABC } else { XYZ }
▶ No.961285>>961290
>>961281
Right, because the system is effectively unable to comply with your demands. For all practical purposes you're unable to express what you desire on the given architecture. You would be violating a whole host of rules of the language itself, not to mention a large amount of assumptions people make.
You might have uint_least8_t, which will then be a character type and 10 bits wide, but you could circumvent all of this by just using an unsigned char to begin with and not depending on behavior that requires it to be 8 bits wide (such as left shifting zeroing bits above 8).
Alternatively the compiler writer might expose it to you as an 8-bit char system with 2 padding bits. I can't remember the rules for those, but I believe the intent was to take existing 9-bit systems and expose them as 8-bit ones. Crippling the functionality a little for the sake of compatibility.
▶ No.961288
>>961283
Futureproofing is not about what's currently practical, ya dingus.
▶ No.961290>>961292 >>961293
>>961285
Why would you demand that your number type was exactly 8 bits, in the first place? Seems pretty arbitrary and not at all necessary.
▶ No.961292
>>961290
To get the overflow semantics of a byte. Would be unfortunate if the index into a 256 entry array suddenly was like 300.
▶ No.961293>>961297
>>961290
It's a pretty convenient assumption to make when working with byte-oriented things, which basically comprises all communication with the outside world, be it file storage or networking. It provides you with the knowledge that any chunk of information will be at least 0 and at most 255 in value, so you don't need to do any testing for it - nor do you need to do any translation between internal and external representations.
It is pretty damn arbitrary though. My guess is that the 8-bit byte came out of stitching together two 4-bit nibbles from really early programmable computing, because it was the most convenient.
▶ No.961297>>961298 >>961299
>>961293
Communications are bit streams though, not bytes. They're transferred serially, one bit at a time. As you should remember, those include all manner of stuff, such as parity bits, making it 9 transferred bits per byte. Translations between representations are already done - host-to-network byte-order functions (htons/ntohl and friends) are used out of the ass in network code. So your argument is pretty much moot there. Besides, you SHOULD do the checks anyway, just in case stray corrupted/compromised data makes its way in.
▶ No.961298>>961307
>>961297
Communications are by no means bit streams. Your standard 1000BASE-T pushes quinary (PAM-5) symbols over four pairs in parallel, with plenty of strange modulation and checks. However, what you do on a transparent transport layer versus what you do on a non-transparent storage or communication layer are wildly different things. We've sort of settled on everything being done in 8-bit words for all manner of storage or communication, which is why you practically never have to deal with shifting things in and out of mismatched datatypes when writing software.
▶ No.961299>>961300 >>961302
>>961297
>Besides, you SHOULD do the checks anyway, just in case stray corrupted/compromised data makes its way in.
No, this is one of the mistakes of the current year internet. Streaming video already has ways to recover from bit errors but when sent over UDP an entire packet will be dropped if a single bit is flipped. There are a lot of types of data like that, where they're still good with some error. We'd like to undo that bad decision but it requires a "new" protocol and much of the internet blindly firewalls anything that isn't TCP or UDP. So we're fucked.
▶ No.961300>>961303
>>961299
Would you shut up.
▶ No.961302>>961303
>>961299
I'm not talking about dropping video stream packets. I'm talking about preventing viruses from taking over the host system using nothing more than a little network spoofing.
▶ No.961303>>961304 >>961305
>>961300
Is that a shattered shitter?
>>961302
Then you're a moron as the checksum of the spoofed packet will be valid.
▶ No.961304>>961308
>>961303
I was trying to have a civil conversation with the guy and then you chime in with your asinine off-topic comments.
▶ No.961305>>961308
>>961303
>not doing security checks beyond checksums
>not doing input sanitization altogether
▶ No.961307>>961313 >>961317
>>961298
Compressed data are definitely bit streams. And pretty much the entire internet traffic is compressed. Frankly you got to be a complete and total faggot to send anything over the internet uncompressed.
▶ No.961308>>961309 >>961313 >>961458
>>961304
>reee I was having a private conversation in this thread with a dozen different posters
Watch this magic trick, I'll pick a (you) out of this deck: >>961251
Was that your card, autist?
>>961305
>not doing security checks beyond checksums
Didn't say that.
>not doing input sanitization altogether
Sanitizing is unsanitary. Data is precious, don't shit it up to make it easier to pass securely through an insecure system.
▶ No.961309>>961312
>>961308
>Sanitizing is unsanitary.
Enjoy your SQL injections faggot.
▶ No.961312>>961321
>>961309
this is what PHP programmers really believe
Prepare your statements and bind their parameters, and it won't matter what 'unsanitary' characters you have in those parameters.
Believing that the hygienic thing to do is to sanitize input is like thinking you can drop meat onto a public urinal and then pick it up, shake it a bit to "knock the germs off", and then eat it.
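For the C crowd, roughly the same idea with the sqlite3 C API (assuming an already-open database handle; error handling mostly trimmed):
#include <sqlite3.h>

/* The statement text is fixed; the untrusted value travels only as a
   bound parameter, so no quoting or "sanitizing" is ever involved. */
int insert_post(sqlite3 *db, const char *content)
{
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db,
        "INSERT INTO post(content) VALUES (?1)", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;
    sqlite3_bind_text(stmt, 1, content, -1, SQLITE_TRANSIENT);
    rc = sqlite3_step(stmt);            /* SQLITE_DONE on success */
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}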
▶ No.961313>>961318 >>961458
>>961307
Most of the compression you know about operates on words (much) larger than 1 bit or groups of larger words, and produces bytes. It confers certain advantages to not have to deal with units smaller than your encoding.
As a trivial example: base64, which is a 6-bit encoding of 8-bit words, encodes groups of 24 bits at a time. However, at the end of the input you may need to encode a leftover 16- or 8-bit group, which you do by over-encoding it as 18 or 12 bits respectively and padding with '='. This means you don't need a stored length to tell you how many bits to chop off at the end; it's entirely implicit in the encoding.
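A sketch of just that tail-group handling (alphabet only, full encoder omitted), showing how a leftover 8- or 16-bit group gets over-encoded and padded with '=':
static const char b64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Encode the final 1 or 2 leftover bytes of a buffer into 4 output
   chars: 8 bits become 12 encoded bits plus "==", 16 bits become
   18 encoded bits plus "=". out must have room for 4 chars. */
static void b64_tail(const unsigned char *p, int n, char out[4])
{
    unsigned v = (unsigned)p[0] << 16;
    if (n == 2)
        v |= (unsigned)p[1] << 8;
    out[0] = b64[(v >> 18) & 0x3f];
    out[1] = b64[(v >> 12) & 0x3f];
    out[2] = (n == 2) ? b64[(v >> 6) & 0x3f] : '=';
    out[3] = '=';
}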
>>961308
So when you're out in public, do you go up to random people having a conversation and just start yelling at them about random crap?
▶ No.961317>>961458
>>961307
At the physical level, network communications are rarely a bit stream today. Fiber has carried multiple wavelengths of light for something like 20 years in networking.
>Compressed data are definitely bit streams.
No, most compression algorithms you'd run into in networking don't work on bits. LZ-based algorithms without huffman blocks are used for their speed.
>Frankly you got to be a complete and total faggot to send anything over the internet uncompressed.
Compression mixed with encryption is dangerous (see the CRIME and BREACH attacks), which is why a lot of internet traffic isn't compressed.
▶ No.961318
>>961313
>yelling at them about random crap
>this is how my warped brain interprets being corrected for saying something wrong
▶ No.961321>>961322
>>961312
>prepare all your statements
LOL
▶ No.961322>>961328
>>961321
>webdevs think this is unusual
▶ No.961328>>961331
>>961322
>c-fags think this is normal
go back to your embedded systems retard
▶ No.961331>>961336
>>961328
Go back to your state of ignorance where you're unaware your streetshitter language is doing its best to cache prepared statements internally and that you're doing your best to fuck that up by not using positionals.
▶ No.961336>>961351
>>961331
It's literally not you retarded piece of shit. I can't wait until the last c-fag dies off from old age.
▶ No.961339>>961349
>>961227
>templated tail recursion
Is this a c++ thing?
>>961283
>expose a syscall that accepts the length.
Why bother? The current syscall works fine, by banning nulls from appearing in filenames. No shit read(2) returns the length, it can't very well ban nulls from all binary files.
▶ No.961349
>>961339
Everything should be raw buffers agnostic of any encoding or size.
▶ No.961351
▶ No.961368>>961396 >>961458 >>961515
>>961274
I know. The question is: why? If two types look like they can alias each other and by all means do in fact perfectly alias each other in practice, why must C prohibit it and allow the compiler to fuck it up? Because Fortran has similar rules and they allow optimizations and surely we can't have Fortran beating C in benchmarks, right? Systems programmers can either bend over backwards or turn strict aliasing off (Linux does this).
The C standard sucks, and the more literally a compiler interprets the standard the more it will fuck perfectly reasonable code up.
>>961276
And yet I've replied to threads on this very board where OP had aliasing issues with uint8_t and uint16_t that were completely fixed with -fno-strict-aliasing. I remember digging up GCC mailing list emails and bug tracker posts where developers explain that even though unsigned char is literally equivalent to uint8_t, the compiler does NOT consider uint8_t to alias anything because that's against the rules.
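The kind of code that trips this is roughly the following contrived sketch; with strict aliasing enabled the compiler is allowed to assume the uint16_t store can't touch a uint32_t object, while -fno-strict-aliasing forces the "obvious" behavior:
#include <stdint.h>
#include <stdio.h>

/* UB under the aliasing rules: u32 is written through a uint16_t *.
   Works "as expected" with -fno-strict-aliasing, may silently break
   at -O2 otherwise. */
static uint32_t zero_low_half(uint32_t u32)
{
    uint16_t *p = (uint16_t *)&u32;   /* not a character type */
    p[0] = 0;                         /* also assumes little endian */
    return u32;
}

int main(void)
{
    printf("%08x\n", zero_low_half(0xdeadbeefu));
    return 0;
}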
▶ No.961396>>961398
>>961368
There have been multiple GCC bugs related to this, and a famous example from Linux where a memcpy was reordered with another assignment. In the Linux example, as presumably in yours, the operation that got reordered did not in fact access the pointer as a character type. The memcpy was inlined and cast the char* argument, originally cast from some struct type, to a long* to do some clever stuff, which in turn violated the aliasing rules.
There are several paragraphs in the standard showing why the posts you describe are wrong. For example:
6.7.8 [2] ... A typedef declaration does not introduce a new type, only a synonym for the type so specified.
6.2.7 [1] Two types have compatible type if their types are the same.
▶ No.961398>>961465
>>961396
At this point I don't think what the standard says matters very much since -fno-strict-aliasing is pretty much standard for me. Are those Linux bugs recent? I've read fairly old mail where Linus ripped GCC people new assholes because of this, and it said Linux always compiled with strict aliasing disabled. Is there no way to stop compilers from fucking us?
▶ No.961458>>961473 >>961538
>>961308
Even if you're positive that data was valid when it was generated, there's no telling if it somehow got corrupted on its way. Not when you receive it over the internet, of all things.
>>961313
The last round of all (good) compression algorithms is something like Huffman coding, which produces variable-width words, effectively turning a byte stream into a bit stream.
>>961317
Only if you're bottlenecked by the CPU, not the network. Which, considering modern hardware, is not usually the case. Using CPU time to do extra data compression immediately before sending will let you process more clients at a time, most of the time. Given that it's a simple flag in the config file, it's a no-brainer that comes at no cost whatsoever.
Both compression and encryption absolutely require that the data is preserved as it was. If as much as a single bit is off, the entire thing will fail to process. Therefore using both is no bigger liability than using either.
>>961368
Because what aliases just fine on one platform might break on another. And Linux is, you know, actually supposed to be multiplatform.
▶ No.961465
>>961398
If you were to ask Ritchie, Kernighan, Thompson or Pike directly they'd probably tell you that aliasing being disallowed in any way was not only never intended but a really stupid idea in general. Thompson's own compilers don't give a damn about what you cast your pointers to.
I was going to write a whole post with proper trivial examples to show it would actually have a net positive effect in tight loop vectorization but then I couldn't get GCC or clang to actually produce any better code, so it's probably not just harmful but also pointless in practice.
▶ No.961473>>961482 >>962878
>>961458
>Because what aliases just fine on one platform, might break on another.
So? I never claimed the same code had to work everywhere. I really don't understand this obsession with portability.
>And Linux is, you know, actually supposed to be multiplatform.
There's lots of platform-specific code in the kernel, and there's nothing wrong with that. Not even the kernel ABI, the most important feature of the kernel, is portable: every architecture has its own specific system call entry point, trapping instruction and calling convention; they probably have different sets of system calls; even the ones they do have in common probably have different numbers. The ABI is stable, but it's so far away from being portable it's not even funny, and yet the world still functions just fine. Only the various libcs out there make an attempt at being portable, but they're just userspace APIs and not even part of the kernel to begin with, and they achieve portability by maintaining a ridiculously huge number of implementations, one for each platform, where ridiculously unportable things like casting pointers to longs that get shoved into specific registers happen.
▶ No.961482>>961527
>>961473
>I really don't understand this obsession with portability.
they think that portability is a fixed attribute of software--probably, because the word itself suggests that, and because it's in the interest of Java and standard weenies and other types to reinforce that. The superior conjugation is ported. You don't write once and run everywhere; you write with a target in mind and then people actively work to port the software to other targets. Is this a blatantly less-ideal, neanderthal throw-back of an idea, when we could much more easily write "year 2018" code that's instantly portable to everything? people have been saying that for multiple decades, and it's still the case that no, for a lot of stuff it's a lot easier to do it the neanderthal way.
neanderthal bonuses: the original code is very easy to write; the porting can be done by other people, who know the target platform better than the original coder perhaps does; porting can involve not just tedious machine details but also UI and convention: Java is alien to every platform; an app with multiple GUI toolkit frontends can look natural under each platform it supports.
neanderthal maluses: if there's little interest in porting, then it doesn't get done. meanwhile lots of other shit will "just werk", if awkwardly or at a higher resource cost.
▶ No.961508
To keep retards like you out.
▶ No.961515>>961527 >>961580
>>961368
>Because Fortran has similar rules
It's nothing like C. Their pointers are way more restrictive and were bolted on late in the language's life to try to stop the migration to C. Fortran was primarily about crunching indexed arrays where the inability to alias is what allowed primitive compilers to autovectorize well.
>it will fuck perfectly reasonable code up
Perfectly bad code. The reason why some people disable strict aliasing optimizations is they prefer to trade performance and portability for less debugging. If you're writing code that isn't just going to run on Linux and gcc/clang, it's unwise to depend on a bad optimizer, especially with Microsoft having been working to get their optimizer caught up in the last couple years (example: it recently started assuming no signed overflow).
>And yet I've replied to threads on this very board where OP had aliasing issues with uint8_t and uint16_t that were vompletely fixed with -fno-strict-aliasing.
Then it was shit code for reasons you didn't notice, as it's just a typedef. But it's stupid to use uint8_t * to alias rather than char *, anyway. char, unsigned char, and signed char are treated as three different types, which causes lots of trouble, especially in C++, where all sorts of horrible unexpected bloat and bugs can occur if you're not using char *.
As an example, hit each with g++ -c and shit brix:
int fooc(char c) { return 1; }
int fooc(unsigned char c) { return 2; }
int fooc(signed char c) { return 3; }
int fooi(int i) { return 1; }
int fooi(unsigned int i) { return 2; }
int fooi(signed int i) { return 3; }
▶ No.961527>>961551 >>961556
>>961482
Nice comment. I agree with everything. I'd add that portability fanatics tend to build a huge number of abstractions instead of designing a straightforward solution. Instead of being a small, nice, easy to understand thing, it becomes a fuckhuge application with tens of dependencies that does things in the most indirect "portable" abstracted way possible. They do things like hide structures behind opaque pointers and create lowest-common-denominator libraries that never quite do what you want.
People talk about 'old days' in a bad light but when I see computers such as a C64 being programmed I can't help but think how comfy it must have been. Came with schematics and a detailed manual, didn't it? You knew what to do in order to make things happen. These days if you want graphics you're pretty much fucked if you don't have a GL implementation. I don't like that.
>>961515
>If you're writing code that isn't just going to run on Linux and gcc/clang
I'm not. Linux is actually the only OS I care about. People on Windows and BSD might think differently than me, but for me if the program can talk to Linux it's perfectly OK.
>But it's stupid to use uint8_t * to alias rather than char *, anyway.
Not being able to do that is stupid. If I want to treat things as an 8-bit array, then obviously the correct type is uint8_t*, and if I want a 16-bit array then the correct type is uint16_t*, and so on. If I make two identical structures with different names, they should be able to alias each other. I don't care what the standard says; there's no reason why the compiler shouldn't be able to deal with this simple notion.
▶ No.961538>>961552
>>961458
>there's no telling if it somehow got corrupted on its way
"Sanitizing" isn't about safely handling risky data, it's about a lossy conversion of data to safely get it through risky code. It's pure webdev streetshitting that was popular during the PHP era due to the PHP devs being retarded and promoting it. It's incredibly fragile, unsafe, and has side-effects.
>The last round of all (good) compression algorithms is something like huffman coding
Untrue. All the fast algorithms skip Huffman coding and deal only with bytes, and fast compression/decompression is very popular today. You'll find LZ4 and similar LZ variants like LZF used all over the place, even in places you might not expect. E.g. Windows 10 transparently compresses RAM by default using an LZ variant. LZ4 is so fast that it's faster than reading uncompressed data from the disk in most cases, so it found its way into a lot of software, especially games, and it's also a popular choice for transparent network compression. Snappy's similar but it's a dumpster fire from Google webdevs.
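For reference, a minimal sketch of what using such a byte-oriented compressor looks like, assuming liblz4's C API (error handling trimmed):
#include <lz4.h>
#include <stdlib.h>

/* Compress a buffer with LZ4; returns the compressed size, or 0 on
   failure. Everything is bytes in, bytes out - no bit twiddling. */
int compress_block(const char *src, int src_len, char **out)
{
    int max = LZ4_compressBound(src_len);
    *out = malloc(max);
    if (*out == NULL)
        return 0;
    return LZ4_compress_default(src, *out, src_len, max);
}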
>Only if you're bottlenecked by the CPU, not the network. Which, considering modern hardware, is not usually the case
That's my field; we're /massively/ bottlenecked by CPU in the WAN optimization market. It makes it fun, as I get to do a lot of optimization work and hack on the kernel (LARP, etc.).
>Both compression and encryption absolutely require that the data is preserved as it was. If as much as a single bit is off, the entire thing will fail to process.
Which is why we use erasure codes. In networked video that's usually Tornado codes or a descendant like RaptorQ.
▶ No.961551
>>961527
>People talk about 'old days' in a bad light but when I see computers such as a C64 being programmed I can't help but think how comfy it must have been. Came with schematics and a detailed manual, didn't it? You knew what to do in order to make things happen.
Yes, but what you could make happen was relatively limited. If you demand complex behavior from your machine, it will require complex software. And the very fact that you're posting here means that you do. Even if you're browsing 8chan with lynx on FreeDOS and ya ain't, you're still relying on software that is vastly more complex than what you could get PEEKing and POKEing around on a C64.
▶ No.961552>>961560 >>961564 >>961614 >>962870
>>961538
>"Sanitizing" isn't about safely handling risky data, it's about a lossy conversion of data to safely get it through risky code. It's pure webdev streetshitting that was popular during the PHP era due to the PHP devs being retarded and promoting it. It's incredibly fragile, unsafe, and has side-effects.
If that's true, what's the robust, safe (and side-effect-free?) alternative to handling untrusted user input?
▶ No.961556>>961614
>>961527
>when I see computers such as a C64 being programmed I can't help but think how comfy it must have been. Came with schematics and a detailed manual, didn't it?
I used to code on a C64, but almost exclusively in BASIC. The manual for the C64 came with programming tutorials in BASIC and computer magazines at the time would also come with multiple pages of BASIC you could type in (this could take hours, and there'd always be at least one line with an error) and it'd be a little game or whatever. There was an "everyone's a developer" feel to that community. Doing anything in BASIC was hell though, especially the hacky way sprites were handled. There was a huge disconnect between what you could do with assembly and what you could do with BASIC but I didn't get into that until I had an XT. There's an anon here with extensive knowledge of C64 scene coding, though.
>Not being able to do that is stupid.
No, it isn't. Aliasing cripples a C compiler's ability to optimize and is why, when looking at the generated assembly, it will appear to 'pointlessly' reload registers frequently. They've limited the number of types the compiler has to assume might alias in order to let it optimize. It's a difficult problem, but one they expect the programmer to participate in solving.
▶ No.961560>>961645
>>961552
>what's the robust, safe (and side-effect-free?) alternative to handling untrusted user input?
There are usually fully safe methods provided. Languages where that's not the case shouldn't be used. For databases, rather than doing something like
query("INSERT INTO post(content) VALUES('" + streetshitterlib_strip_dangerous_characters(post) + "')")
there'll be something like
query("INSERT INTO post(content) VALUES(?)", post)
where the implementation takes care of safely handling your data. That also allows it to cache the prepared query as the query is the same every time other than for bound values. Same for outputting to a web page where you should be escaping text rather than trying to strip tags.
Basically, don't fudge the data to hide that your code is shit and unsafe. Fix your code.
▶ No.961564
▶ No.961580>>961590
>>961515
#include <stdlib.h>
#include <stdio.h>
int fooc1(char c) { return 1; }
int fooc2(unsigned char c) { return 2; }
int fooc3(signed char c) { return 3; }
int fooi1(int i) { return 1; }
int fooi2(unsigned int i) { return 2; }
int fooi3(signed int i) { return 3; }
int main() {
printf(
"%3d = (char c)1\n"
"%3d = (unsigned char c)2\n"
"%3d = (signed char c)3\n"
"\n"
"%3d = (int i)1\n"
"%3d = (unsigned int)2\n"
"%3d = (signed int i)3\n",
fooc1(0), fooc2(0), fooc3(0),
fooi1(0), fooi2(0), fooi3(0));
return 0;
}
$ g++ -Wall typ.c -o typ
$ ./typ
1 = (char c)1
2 = (unsigned char c)2
3 = (signed char c)3
1 = (int i)1
2 = (unsigned int)2
3 = (signed int i)3
hmm no bricks. or are the bricks in this output, the errors you get if 1/2/3 aren't tacked onto the names?
$ g++ -Wall typ.c -o typ
typ.c: In function ‘int fooi(int)’:
typ.c:9:5: error: redefinition of ‘int fooi(int)’
int fooi(signed int i) { return 3; }
^~~~
typ.c:7:5: note: ‘int fooi(int)’ previously defined here
int fooi(int i) { return 1; }
^~~~
typ.c: In function ‘int main()’:
typ.c:20:9: error: call of overloaded ‘fooc(int)’ is ambiguous
fooc(0), fooc(0), fooc(0),
^
typ.c:4:5: note: candidate: int fooc(char)
int fooc(char c) { return 1; }
^~~~
typ.c:5:5: note: candidate: int fooc(unsigned char)
int fooc(unsigned char c) { return 2; }
^~~~
typ.c:6:5: note: candidate: int fooc(signed char)
int fooc(signed char c) { return 3; }
^~~~
typ.c:20:18: error: call of overloaded ‘fooc(int)’ is ambiguous
fooc(0), fooc(0), fooc(0),
^
typ.c:4:5: note: candidate: int fooc(char)
int fooc(char c) { return 1; }
^~~~
typ.c:5:5: note: candidate: int fooc(unsigned char)
int fooc(unsigned char c) { return 2; }
^~~~
typ.c:6:5: note: candidate: int fooc(signed char)
int fooc(signed char c) { return 3; }
^~~~
typ.c:20:27: error: call of overloaded ‘fooc(int)’ is ambiguous
fooc(0), fooc(0), fooc(0),
^
typ.c:4:5: note: candidate: int fooc(char)
int fooc(char c) { return 1; }
^~~~
typ.c:5:5: note: candidate: int fooc(unsigned char)
int fooc(unsigned char c) { return 2; }
^~~~
typ.c:6:5: note: candidate: int fooc(signed char)
int fooc(signed char c) { return 3; }
To date I've managed to avoid learning any C++. I credit that to the invention of time travel by an alternate-future me who comes back to the past, first to infect Hillary with tuberculosis, and then to induce a phobia of learning C++ while I slept, after alternate-future-me discovered ATS and realized he'd taken the wrong path in life.
It's just a theory. It fits the facts.
▶ No.961590>>961593
>>961580
>fooi1
>fooi2
>fooi3
... why would you go to the trouble of turning a cut+paste example that uses a function-overloading type collision into something that can't collide because the names are unique? That's extra effort spent to not only get the wrong result, but to miss the point entirely.
If you'd prefer to learn about this particular clusterfuck straight from the standard,
3.9.1 Fundamental types [basic.fundamental]
Objects declared as characters (char) shall be large enough to store any member
of the implementation's basic character set. If a character from this set is
stored in a character object, the integral value of that character object is
equal to the value of the single character literal form of that character. It
is implementation-defined whether a char object can hold negative values.
Characters can be explicitly declared unsigned or signed. Plain char, signed
char, and unsigned char are three distinct types. A char, a signed char, and an
unsigned char occupy the same amount of storage and have the same alignment
requirements (basic.types); that is, they have the same object representation.
For character types, all bits of the object representation participate in the
value representation. For unsigned character types, all possible bit patterns
of the value representation represent numbers. These requirements do not hold
for other types. In any particular implementation, a plain char object can take
on either the same values as a signed char or an unsigned char; which one is
implementation-defined.
It causes far more trouble than most C++ programmers are aware of.
▶ No.961593
>>961590
I don't get it. That paragraph says a lot of things that obviously must be true, but I wonder why the standard would belabor these details -- does it repeat all this verbiage for int and short and long and long long, also?
ATS doesn't object to separate overloadings that differ only by signedness:
#include "share/atspre_staload.hats"
(* templates *)
extern fun {a:t@ype} fooc: a -> int
extern fun {a:t@ype} fooi: a -> int
implement fooc<char>(c) = 1
implement fooc<uchar>(c) = 2
implement fooc<schar>(c) = 3
implement fooi<int>(c) = 1
implement fooi<uint>(c) = 2
implement fooi<sint>(c) = 3
extern castfn uchar: char -> uchar
extern castfn schar: char -> schar
extern castfn uint: int -> uint
extern castfn sint: int -> sint
(* overloading *)
fn fooc_char(x: char) = 1
fn fooc_uchar(x: uchar) = 2
fn fooc_schar(x: schar) = 3
fn fooi_int(x: int) = 1
fn fooi_uint(x: uint) = 2
fn fooi_sint(x: sint) = 3
symintr barc
symintr bari
overload barc with fooc_char
overload barc with fooc_uchar
overload barc with fooc_schar
overload bari with fooi_int
overload bari with fooi_uint
overload bari with fooi_sint
implement main0() =
println!(
"(* templates *)\n",
fooc<char>('0'), " = char -> 1\n",
fooc<uchar>((uchar)'0'), " = uchar -> 2\n",
fooc<schar>((schar)'0'), " = schar -> 3\n",
"\n",
fooi<int>(0), " = int -> 1\n",
fooi<uint>((uint)0), " = uint -> 2\n",
fooi<sint>((sint)0), " = sint -> 3\n",
"\n",
"(* overloading *)\n",
barc('0'), " = char -> 1\n",
barc((uchar)'0'), " = uchar -> 2\n",
barc((schar)'0'), " = schar -> 3\n",
"\n",
bari(0), " = int -> 1\n",
bari((uint)0), " = uint -> 2\n",
bari((sint)0), " = sint -> 3\n")
output as you'd expect.
▶ No.961614>>961637
>>961552
Parsing.
http://langsec.org
>>961556
What about the restrict keyword? Couldn't they have allowed aliasing and then let us use restrict to say what is and isn't allowed to alias?
▶ No.961637
>>961614
Restrict is more like a stronger version of strict aliasing and can be used in addition to it when it's considered worth it. The case it handles that strict aliasing doesn't (same-typed pointers) is dangerous, since that kind of pointer juggling is everywhere, so it takes some planning to use for anything non-trivial and can lead to a lot of debugging.
For an example of where the dividing line is, the following has two pointers where I've forced the compiler to not know anything about where one points (it loses track frequently in normal code) and might have to do a potentially pointless reload to be safe:
typedef int X;
typedef int Y; // Try int vs long
X x[] = { 1 };
Y y[] = { 2 };
extern Y * /*restrict*/ mystery_pointer;
X foo() {
X *xp = x;
Y *yp = mystery_pointer;
xp[0] = 3;
yp[0] = 4;
return xp[0]; // Will it reload?
}
xp[0] gets reloaded after the assignment of yp[0] in the matching non-char types case or with -fno-strict-aliasing:
movq mystery_pointer(%rip), %rax
movl $3, x(%rip)
movl $4, (%rax)
movl x(%rip), %eax
ret
And doesn't with mismatched non-char types or if you add restrict to the extern:
movq mystery_pointer(%rip), %rax
movl $3, x(%rip)
movq $4, (%rax)
movl $3, %eax
ret
▶ No.961645>>961647
>>961560
By sanitizing I meant enforcing that your data is actually valid. I.e. if you receive a serialized struct over the internet, you check every single value against your boundaries; otherwise shit can (and usually will) go south pretty quick.
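Something like the following sketch, with a made-up wire format and limits; the point is that every field of the received blob gets range-checked before anything acts on it:
#include <stdint.h>
#include <string.h>

enum msg_type { MSG_PING = 0, MSG_DATA = 1, MSG_BYE = 2, MSG_TYPE_MAX };

struct wire_msg {                 /* hypothetical serialized header */
    uint32_t type;
    uint32_t payload_len;
};

/* Reject anything that doesn't fit our expectations instead of
   trusting whatever arrived off the network. */
int parse_msg(const unsigned char *buf, size_t len, struct wire_msg *out)
{
    if (len < sizeof *out)
        return -1;                        /* truncated */
    memcpy(out, buf, sizeof *out);        /* avoid alignment issues */
    if (out->type >= MSG_TYPE_MAX)
        return -1;                        /* unknown enum value */
    if (out->payload_len > len - sizeof *out)
        return -1;                        /* claims more data than sent */
    return 0;
}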
▶ No.961647>>961651 >>961670
>>961645
So you say.
To what degree would you sanitize a password?
▶ No.961651
>>961647
Don't be obtuse. If you receive something like an enum value, chances are your enum doesn't span all 4.3 billion possible values of a 32-bit integer.
▶ No.961670>>961672 >>961718
>>961647
>cfag does not know what sanitize means
just die already your out dated knowledge is useless
▶ No.961672>>961679
>>961670
Give me a ring when webdev becomes anything more than an un(der)paid undergrad intern occupation.
▶ No.961679>>961681 >>961687
>>961672
Average SF webdev makes 120 grand a year not counting benefits. Meanwhile you are stuck in the back of some embedded shop.
▶ No.961681>>961698
>>961679
> 120 grand a year webdev
Where do I send my resume?
▶ No.961687>>961698
>>961679
Average SF webdev spends 120 grand a year on rent. I made 165k last year in networking outside of SF and I didn't have to swallow a CoC or attend diversity seminars or suppress the urge to kill myself rather than deal with more code from "jquery experts". I'm pretty happy. Sorry to hear you aren't.
▶ No.961698>>961714
>>961681
>Where do I send my resume?
Literally any SF based tech company
>>961687
>120 grand a year on rent
Wrong
> I made 165k last year
Bullshit faggot. Nice try larper.
▶ No.961714
>>961698
Now, answer this question: why should anybody listen to someone who's okay with being in San Francisco?
▶ No.961718>>961727
>>961670
>retard talking about input "sanitization"
>calls anyone else's knowledge outdated
Lmao. You probably filter HTML from user input using refex
▶ No.961727>>961908
>>961718
>cfag cant spell regex
▶ No.961908
>>961727
You mean regexpen?
▶ No.962144
>>959357
Come here you fucking loud mouth ball.
▶ No.962659
who gives a fuck, the only thing special about programming languages is precedence and context.
▶ No.962760
itt: elitist fags arguing over which language is the best while webdevs make 6 figure salaries
▶ No.962867
>>959146
>MBR-style boot sectors
Yet I bet you're one of those faggots who advocates Grub 2.0 and EFI.
▶ No.962870
>>961552
>he thinks it's impossible or even hard to handle user input safely
found the PHP dev
▶ No.962878
>>961473
>i don't understand why portability matters
and your typical C code will crash when moving between x86 and amd64
▶ No.962885>>964506
>>959366
oh yeah you got him. you are so right: C programmers are managing the fuck out of the vulnerabilities
▶ No.964503
>>959133 (OP)
because it's the most suitable high-level language for microcontrollers.
because it works in environments where you absolutely must manage your memory.
because it has a robust implementation on every platform, from 8-bit micros to advanced research architectures.
because it's not patent-encumbered and has zero licensing costs.
because it has a stable spec and well defined engineering standards.
>you can use rust on an arduino
would unsupported extensions/bindings off of GitHub be the professional way to develop ATMega code?
▶ No.964506>>972430
>>962885
They are. Security holes are a webdev thing, now. Go look, it's not 1999 anymore.
▶ No.965232
>>959954
>-O0
>not -Og
>>961276
>Hence, if uint8_t is available on your system, it must necessarily be a character type.
No, it needn't be. It might be typedef'd to a non-standard extended integer type.
▶ No.972163
>>960820
dumbest thing i've read all day. do you know what an abstraction is?
▶ No.972362>>972568
>>959133 (OP)
>promoting the gayest, most cockmongling language du jour as a replacement for C
GTFO.
▶ No.972430>>972564
>>964506
WTF? How can anyone still be doing SQL injection? Probably the first SQL book I read around 1999 told me to pass everything in as bind values (this was for Perl DBI) since that's the only safe way. Plus it also lets the DB server use cached queries, so it's faster.
▶ No.972564
>>972430
Because SQL is inherently unsafe and it's wielded by the lowest tier of "programmer" on the planet. A common thing is they want to make the sort order dynamic, but doing it properly with bound variables is difficult (bind parameters carry values, not syntax) and probably not possible on some shit like sqlite, so they instead do it with a dynamic string. And of course that means passing "ASC" via the URL and not checking or escaping it.
Languages that make it easier to do something the wrong way are very dangerous and should not be used by morons, yet here we are.
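The boring fix is to never let the user-supplied string anywhere near the SQL text at all; a sketch with hypothetical names:
#include <string.h>

/* Map untrusted input onto one of two fixed strings; anything else
   falls back to a default. The query text is always one of our own
   constants, so there is nothing to escape. */
static const char *sort_keyword(const char *user_value)
{
    if (user_value && strcmp(user_value, "DESC") == 0)
        return "DESC";
    return "ASC";
}

/* usage: snprintf(sql, sizeof sql,
       "SELECT id FROM post ORDER BY created %s", sort_keyword(dir)); */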
▶ No.972568>>972603
>>972362
I'll take Rust code over the average C developer's garbage. C devs are pajeet tier.
▶ No.972603>>972630
>>972568
You'd take it if Rust devs actually wrote code, you mean. It's been out for years, yet Servo is still the largest thing written in it, and they did that themselves as dogfooding.
▶ No.972630>>972780
>>972603
My terminal emulator is written in Rust. GPU-accelerated font rendering and all that jazz.
▶ No.972705
>>959133 (OP)
unironically, gtfo /leftypol/ Stirner would hate you
▶ No.972780>>972850
▶ No.972824
>>959705
> because C++ doesn't define that order
yes it does. the order of declaration is the order of initialization. as you mentioned however, pajeets and their ilk are not consistent and can create bugs this way. but all the modern compilers specifically warn about this type of bug anyway (-Wreorder in gcc/clang) so meh.
▶ No.972826
▶ No.972850
▶ No.972879
>>959133 (OP)
Bell labs never closed, it went off planet