Is anyone surprised that UNIX brain damage is responsible for remote vulnerabilities? It was already pathetic in 1991, when Multics and other operating systems that did it properly had been around for decades. Linux (the kernel) wastes billions of dollars and countless years and still isn't as good as software made by much smaller groups in much less time. That's because it's written in C, which sucks.
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11477
>Jonathan Looney discovered that the TCP_SKB_CB(skb)->tcp_gso_segs value was subject to an integer overflow in the Linux kernel when handling TCP Selective Acknowledgments (SACKs).
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11478
>Jonathan Looney discovered that the TCP retransmission queue implementation in tcp_fragment in the Linux kernel could be fragmented when handling certain TCP Selective Acknowledgment (SACK) sequences.
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11479
>Jonathan Looney discovered that the Linux kernel default MSS is hard-coded to 48 bytes. This allows a remote peer to fragment TCP resend queues significantly more than if a larger MSS were enforced.
Integer overflows, heap fragmentation, and hard-coded sizes: all problems that UNIX-Haters complained about back in 1991 and that had been solved in various ways since the 60s.
Another thing that sucks is all the bullshit overlapping configuration files UNIX/Linux scatters everywhere. One of them, "before.rules", takes sets of command-line options on lines. Another one, "sysctl.conf", takes keys that look like UNIX file paths but aren't; the same file also accepts them with dots instead of slashes. How many configuration "formats" and pseudo-filesystem hierarchy "namespaces" does Linux have? I'd bet the code needed just to parse these files is larger than some entire operating systems.
Date: Mon, 7 Jan 91 23:09:32 EST
Subject: What you once thought was a brain-dead misimplementation is now the protocol definition!
or, Unix Historical Revisionism At Work Again,
or, IETF-approved RFC1196
This whole thing is pretty sad, or pathetic, or depressing
or something.
Firstly, there's the rewriting of a protocol to conform
to a ubiquitous misimplementation -- the unix story over and
over.
Then there's the growing Balkanisation (or
Multics-ification) of the net -- I remember laughing out
loud when I found that MIT-MULTICS refused finger service on
security grounds.
Then, of course, there's the pathetic implementational
warnings about how one should be very very careful in
implementing this sensitive and dangerous protocol -- as if
this perilous protocol somehow innately offered a direct way
to shove fingers up unix' sockets. Or something.