▶ No.938371>>938385 >>938443 >>938806
>Well, /tech/, is C and UNIX worship justified?
C? Naw
UNIX? Yes in that it could be considered the first "modern" OS and all OS design thereafter owes itself to UNIX. But as far as *nix systems in general go? Naw
▶ No.938373>>938375 >>938806
C and UNIX are amazing. In hindsight, the people whining about them back then failed to accomplish anything in 50 years so we can look at their complaints today as moronic bleating.
▶ No.938375>>938515
>>938373
What's your opinion on C being responsible for HW array bound checking no longer being a thing?
▶ No.938379>>938384 >>938385 >>938443
C was created to be low-level enough to write an OS in, and Unix was written in C. This allowed both to spread to other architectures and operating systems. You can run C on your toaster (and chances are that your toaster is running C code). Lisp is a higher-level language that required specialized hardware to get the performance necessary.
▶ No.938384>>938385
>>938379
>toaster is running C code
You've never worked in embedded, have you?
▶ No.938385>>938428
>>938371
>>938379
Okay, we know our modern world is full of C and UNIX. But does that make them good?
>>938384
Tbh my toaster has three resistors and two springs. Anything beyond that is bloat and probably buttnut.
▶ No.938428>>938431 >>938432
>>938385
>Not having a photocell to detect browning of your toast
>Not having a camera in your toaster
>Not having image recognition in your toaster
>Not having an AI look at your toast to determine when it's done.
>Not having a Toastero with subscription and buying pre packaged toast with a QR code on it you have to scan and get verified before your Toastero will activate.
Luddite
▶ No.938431
>>938428
This needs a fancy touchscreen GUI with oversized icons and screen. It should always be connected to the internet and monitor you for "convenience" as well.
▶ No.938432
>>938428
>Not having a Toastero with subscription and buying pre packaged toast with a QR code on it you have to scan and get verified before your Toastero will activate.
You're joking, but this is what Iotafags unironically want to do
▶ No.938443
>>938371
UNIX is just a stripped-down Multics though, and some OSes did not do things the UNIX way after UNIX showed up. Lisp machines and their respective OSes, Smalltalk (it was possible to boot directly into Smalltalk), and Oberon are very different from UNIX. I don't see how Classic Mac OS, BeOS, Windows (before you could run GNU on it), DOS, CP/M, etc. were Unix-like either (unless you consider using C or C++ "Unix-like", which is a very broad way to define it), and Windows is arguably still mostly non-Unix-like internally, although I would not be surprised if they're repurposing some permissively licensed code here and there out of laziness.
>>938379
>C was created to be low-level enough to write an OS in, and Unix was written in C. This allowed both to spread to other architectures and operating systems.
It is possible to write an OS in other languages, such as Pascal, Ada, and Forth (or just assembly language, but that's not portable), and of course there are C-like languages like C++ and Rust that can be used for that too.
>Lisp is a higher-level language that required specialized hardware to get the performance necessary.
In the 1980s, and these were basically minicomputers, probably all the Lisp machine companies could afford to customize back then. I think Moore's law has probably fixed this. You've got what, 4-8 cores on consumer hardware (which a lot of programs don't use because multi-threading turns out to be hard in most languages), 64 bits, usually an integrated GPU, SATA, solid state drives, a lot more main memory, maybe even a ridiculously powerful discrete GPU (but drivers are hard too, so it's often wasted, and grorious C does not make that any easier), etc. You're telling me that you can't even have some Emacs-tier OS? Come on now.
▶ No.938449>>938451 >>938462
>>938369 (OP)
Okay Sham, how would you like an operating system to be designed? What design decisions would you make to avoid the mistakes of Unix? What do you want to see in an operating system?
▶ No.938451>>938457 >>938504
>>938449
Not him, but something like GuixSD seems to be moving in the right direction to me. It doesn't follow the traditional filesystem hierarchy, doesn't use systemd, and is 100% free software.
▶ No.938453
>unironically conversing with tripfags
▶ No.938457
>>938451
Are packages dynamically linked?
▶ No.938462>>938465
>>938449
A Modern Operating System™ shouldn't have everything as "a file". Sockets are not fucking files. At most, they are input/output streams. This is what programs should deal with. Typed objects, with inheritance. InputStream, OutputStream, File, TCPSocket, etc. all the way up to "PNGImage" or "ODTDocument". File extensions and magic bytes are an ugly hack and a poor joke that must go.
Drop the FHS. It sucks, it's a dinosaur from the era when the goal was to save as many keystrokes as possible. Understandable on a terminal connecting to a PDP-11, but not in current_year. /System, /Applications, /Storage, etc as GoboLinux does, is I believe the right way to go.
Drop editable text config files. There should be a database, managed by the OS. In fact, the whole filesystem should be a database. With version control.
Is your config stored in .config/, a dotfile in $HOME, or in /etc, or maybe /var/local/? Remove that shit. They're all stored in your application's config table. At most, you've got one system-wide config entry, and one config per user. The OS should provide a way for applications to get the config values directly, without any pain.
Some command like "$ config <myapplication>" would be available.
All libraries should also be stored in the OS database, instead of 30 different folders. Is it /lib? /usr/lib? /usr/local/lib? LOL you can't know because UNIX doesn't enforce that!
The OS should provide a standard logging interface, with standard levels of logging priorities. stderr vs. stdout? Not anymore. Now there's what would be a sort of stderr, stdwarn, stdlog, stddbg, etc.
Programs should be callable as functions from within other programs.
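Here's a rough sketch of what the typed-stream idea could look like, even in plain C with a vtable of function pointers (all the names are hypothetical, just to show the shape of it):
/* Typed streams sketched in C: a vtable of operations plus a type tag.
   All names here are hypothetical. */
#include <stddef.h>

typedef struct stream Stream;

typedef struct {
    size_t (*read)(Stream *self, void *buf, size_t len);
    size_t (*write)(Stream *self, const void *buf, size_t len);
} StreamOps;

struct stream {
    const StreamOps *ops;
    const char *type;   /* "File", "TCPSocket", "PNGImage", ... */
};

/* Anything that wants "an input stream" just takes a Stream*: */
size_t stream_read(Stream *s, void *buf, size_t len) {
    return s->ops->read(s, buf, len);
}
A real OS would do this with a proper type system, of course; the point is that programs would dispatch on the object's type instead of sniffing magic bytes.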
▶ No.938465>>938466 >>938484 >>938887
>>938462
>Drop editable text config files. There should be a database, managed by the OS
Yeah and it should be called something like the "Registry", right?
Although I do agree with renaming the FHS stuff. there shouldn't be a /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, and /usr/local/sbin.
it should just be /bin. That's where all binaries go. /bin. That's it. /etc should be renamed /conf or /config. /usr looks too much like user, which confuses new people. It should be /resources, or /res to save keystrokes (keep in mind that saving keystrokes is not just because of dinosaur hardware. It's also more pleasant to type). /home can then be /usr or /user. MacOS actually does call /home /Users, so it's not out of the question.
▶ No.938466
>>938465
>Yeah and it should be called something like the "Registry", right?
Well the idea is badly executed on Windows, but otherwise, yes, something like that.
▶ No.938484>>939851
>>938465
If you're so keen on saving keystrokes, you should switch to a keyboard that doesn't require, for example, pressing shift AND 0 to get parentheses. Parentheses are often used in programming, and often in normal conversation. Same with the quotation mark, question mark, exclamation point, etc.
Personally, I suggest removing hierarchical-based filesystems altogether. They're nothing but a confusing mess.
▶ No.938504>>938510 >>938513
>>938451
> GuixSD
Can you tell me more about GuixSD (and perhaps NixOS)? All I know is that it has a functional package manager, which means that you can install and uninstall shit all you want without having everything crash down like a house of cards because dependencies are an eldritch abomination.
▶ No.938510
>>938504
It basically handles everything for you. Each package has its own directory in /nix/store, under the name of the hash of the build process, where everything is stored (binaries, resources, etc). This means you can easily delete EVERYTHING that's correlated with the package without cruft in your system. Packages (and in fact, the whole system) are either installed via a main configuration file (/etc/nixos/configuration.nix) or on a per-user basis (with nix-env -i), or (here's what's really cool) in a kind of chroot development environment (nix-shell -p <packages>). This makes it extremely easy to know what's on your system and what isn't. The development environment is of course isolated, and you don't have to install millions of libraries with apt-get and then forget about them.
▶ No.938513
>>938504
Is your use case dozens of machines that need the exact same setup?
>which means that you can install and uninstall shit all you want without having everything crash down like a house of cards because dependencies
lol
▶ No.938515>>938526
>>938375
when did x86 ever have hardware bounds checking for arrays? or the 6502. or the z80.
▶ No.938526>>938541 >>938557 >>938923
>>938515
My lad, the 8086 is about a decade younger than C, and the 6502 and z80 are younger than C too. But even then, x86 has the BOUND instruction, although it is a subpar instruction, and more recently (we're talking the last 3 years), Intel has introduced MPX.
The Lisp machines also did have hardware support for array bounds checking.
https://stackoverflow.com/questions/40752436/do-any-cpus-have-hardware-support-for-bounds-checking
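For reference, BOUND takes a signed index and a pointer to an adjacent lower/upper bound pair in memory, and raises interrupt 5 (#BR) if the index is out of range. Roughly this, in C terms:
/* Rough C equivalent of the x86 BOUND instruction: check a signed index
   against a [lower, upper] pair, trap on failure (the real instruction
   raises interrupt 5, #BR). */
#include <stdlib.h>

void bound(long index, const long bounds[2]) {
    if (index < bounds[0] || index > bounds[1])
        abort();   /* stand-in for the #BR trap */
}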
▶ No.938537>>938545
/tech/ doesn't understand hardware bounds checking and thinks it's magic and free. In the bad old days of processor design, general-purpose instruction-level parallelism wasn't available and processors would instead provide higher-level constructs that allowed the processor to do work in parallel. That's what was being done with tagged memory, so enough information was available per instruction to do the bounds check at the same time. But ever since we got things like pipelining (and everything that came after), there was no reason to do that as it would just be less efficient. And the general purpose approach also allowed the compiler to elide checks when they aren't necessary. There's a misconception here that we've lost a technology to time instead of the same thing happening today in a more efficient way in type-safe languages.
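To make the elision point concrete, here's a hand-written C sketch of what a compiler for a type-safe language does automatically: prove the whole loop stays in range and hoist a single check out front, instead of paying for one check per access.
/* One hoisted check instead of a check on every a[i]: what a bounds-
   checking compiler emits once it proves i < n <= len for the loop. */
#include <stdlib.h>

long sum(const long *a, size_t len, size_t n) {
    if (n > len) abort();   /* single up-front check */
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];          /* no per-access check needed */
    return s;
}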
▶ No.938541>>938650 >>938696
>>938526
>MPX
Is this a troll?
▶ No.938545
>>938537
>/tech/ doesn't understand hardware bounds checking and thinks it's magic and free
Well, there's no such thing as a free lunch, but if you're okay with storing the bounds of the array and using some extra space on the die, adding bounds checking wouldn't have any performance cost.
▶ No.938557>>938650 >>938806
>>938526
What does age of the CPU's have to do with the argument? It's clearly evident to see your entire argument is only to paint some kind of idea that lisp machines were better, but they failed and are hence failures. If these designs were so important and monumental, then their implementation would've been present in modern hardware, but it isn't. It is entirely absent because they serve no use outside of academic pursuit. Stay mad lispcuck.
▶ No.938650>>938681 >>938690
>>938557
>What does age of the CPU's have to do with the argument?
1) I mention C being responsible for hardware array bound checking, a thing in the 60's/70's, disappearing
2) You say 80's architectures don't have array bound checking
3) C was a very popular language by the time these architectures came to be
4) Thus it can be hypothesized that C was indeed responsible for that
>It's clearly evident to see your entire argument is only to paint some kind of idea that lisp machines were better
Not at all. I mentioned Lisp machines because they happened to have hardware bounds checking via special parallel instructions.
You also seem to reason in the following terms
1) Popular -> Good
Therefore,
x) ¬Popular -> ¬Good
Not only the logic is incorrect, but the premise is an ad populum. Even if we accept the premise as correct, the result is
x) ¬Good -> ¬Popular
which doesn't quite explain why Windows is so popular
>>938541
I know it's not perfect, but that wasn't the point, anon
▶ No.938659>>938663
>lispkike tries to create a hugbox containment thread
This is by far the best thing you ever done lispkike.
▶ No.938663
>>938659
>lispcuck
>lispkike
Holy shit when did I land in 2015?
▶ No.938681>>938684
>>938650
You are really stretching it with your lies. C wasn't developed until the late 70's, and it didn't gain popularity until the mid to late 80's, hence its standardization in '89/'90. The 8086 was developed in the mid 70's. If hardware bounds checking of arrays were necessary, surely Intel would've included it in their flagship ISA. You completely ignore that with your lies and continue to lie and build your argument on lies. You've proved nothing. Your lisp machines failed for a reason; they are slow, singular purpose, and incapable of adapting to the needs of commercial and consumer demands. If they could, then they would still exist and people would still be using them. They don't exist, they failed and are thus failures. Academics btfo again. Stay mad lispcuck.
▶ No.938684
>>938681
>singular purpose
I can easily come up with more than one purpose. How about these two:
1. expert system
2. computer graphics
▶ No.938690>>938694 >>938698
>>938650
Pipelining showed up in the late '70s and spread to everything during the '80s. That obsoleted wacky processors as that functionality could be done in software. There's no C conspiracy, those hardware techniques just became obsolete.
▶ No.938694>>938696
>>938690
Is this why they're reimplementing them?
▶ No.938696
>>938694
It's anyone's guess why Intel made MPX since it's slower than software bounds checking, less safe than software bounds checking, and they seem to have abandoned support for it. See >>938541 . I assume it was so tech illiterate faggots like you would see it on a feature list and get hyped to buy all new Jewtel processors. It appears to have been purely a marketing stunt.
▶ No.938698>>938699
>>938690
Pipelining has little to do with why bounds checking is no longer done in hardware. People have worked out, and successfully implemented in hardware, pipelines in which traps are able to be raised by any section of the pipeline. The fact that a failed bounds check destroys the pipeline doesn't matter much, as you'd probably want to halt the program and drop into a debugger anyway due to there being a problem.
▶ No.938699>>938701
>>938698
That's an impressive amount of gibberish, anon.
▶ No.938701
>>938699
>That's an impressive amount of gibberish, anon.
I will clarify any section that you didn't understand. In order to save us some time, I tried to explain further each statement I made.
>Pipelining has little to do with why bounds checking is no longer done in hardware.
I am saying that the functionality from "wacky" processors needing to be done in software is not a consequence of pipelining itself.
>People have worked out, and successfully implemented in hardware, pipelines in which traps are able to be raised by any section of the pipeline
Depending on the architecture, traps / interrupts / whatever you want to call them can be raised in more than one location. For example, if you decode an instruction which doesn't exist, you could trap in the decoding stage of the pipeline. The same architecture could also trap when an arithmetic operation overflowed. This trap would obviously happen in a different part of the pipeline and not in the same place as the decode stage. Depending on the architecture, different things are done to handle this problem.
> The fact that a failed bounds check destroys the pipeline doesn't matter much, as you'd probably want to halt the program and drop into a debugger anyway due to there being a problem.
Upon trapping, you'll likely need to flush the pipeline (at least part of it). This means that you will take a performance hit. My statement here is saying that a performance hit for an out-of-bounds read/write does not matter, as it is an edge case that should never happen in correct software.
▶ No.938702
>>938369 (OP) (is) (a) (faggot)
who could be behind this post?
▶ No.938704>>938732
You all should really be deferent towards Xerox, because the world would have been *very* different without them -- the people behind UNIX, on the other hand, are nothing special. (Read "Dealers of Lightning" if you want to know more about Xerox PARC.) Xerox gave birth to the idea of a personal computer (that is to say, single user) -- the Xerox Alto booted from a hard disk, had a graphical screen, proportional fonts, a mouse-driven GUI, a WYSIWYG text editor, a drawing program, an IDE with overlapping windows, Ethernet, net booting, laser printing, a full office suite, and the first optical mouse. In the book I mentioned, there's a story about Jobs meeting a manager at Symbolics; Jobs told this person to "demo" their machine. After the demo, Jobs said something along the lines of "This bitmapped display... wouldn't it be nice if it could scroll pixel by pixel?" -- they went to a file, changed some of the Lisp code, and it scrolled pixel by pixel. Jobs was blown away. When Jobs left, he still felt they hadn't shown him what they *actually* had (they were the vainglorious academic type, and regarded Jobs as a hobbyist). There's also a Symbolics chief engineer describing when "the spirit of Xerox PARC evaporated" -- that is a must read. We're not living in the future -- we're living in 1991 with slightly more lucid graphics.
▶ No.938732>>938748
>>938704
That's a very interesting post, anon, I will definitely check out this Xerox thing. I knew they were pioneers, but I never paid much attention to what they had actually achieved.
▶ No.938748
>>938732
smalltalk is gay tho
▶ No.938756>>938766
>>938725
> Literally no one but you spergs mentioned Lisp
Objectively wrong. You mention one that does right in your next post.
Even if you don't consider that, unix hater fag (and/or maybe similar others, like yourself) are shilling their lisp and lisp machines everywhere.
There are a lot of people over here who have no idea of what their talking about and instead go on to shill and take up "sides" of whatever they see fit, much like literal normalfags.
▶ No.938758>>938772
>giving attention to tripfags
are you fags retarded?
▶ No.938766>>938772 >>938805
>>938756
I mentioned a specific HARDWARE feature of Lisp machines, nothing more.
▶ No.938772>>938774 >>938805
>>938758
Ah yeah, forgot about that
>>938766
And that's why I said OBJECTIVELY.
and EVEN if you IGNORE that, you guys have been shitting up this board with your lisp machine shilling and your failure to come up with a design of a better system than unix, which has already been done roughly 20-30 years ago by bell labs.
And I have seen you fags over IRC. You guys know jack shit about programming aside from, maybe fizzbuzz.
This will be the last (you) I'll give you.
▶ No.938774>>938805
>>938772
>You guys know jack shit about programming aside from
Top kek, who's the midget behind this post? var-g?
▶ No.938777>>938779 >>938798
>>938369 (OP)
Hang yourself.
▶ No.938798>>938805
>>938779
>>938777
𝙵𝚘𝚞𝚗𝚍 𝚝𝚑𝚎 𝚄𝙽𝙸𝚇 𝚞𝚜𝚎𝚛𝚜. 𝚆𝚑𝚎𝚗 𝚊𝚛𝚎 𝚢𝚘𝚞 𝚐𝚘𝚒𝚗𝚐 𝚝𝚘 𝚜𝚝𝚊𝚛𝚝 𝚞𝚜𝚒𝚗𝚐 𝚊𝚗 𝚊𝚌𝚝𝚞𝚊𝚕 𝚐𝚘𝚘𝚍 𝚙𝚛𝚘𝚐𝚛𝚊𝚖𝚖𝚒𝚗𝚐 𝚕𝚊𝚗𝚐𝚞𝚊𝚐𝚎, 𝚕𝚒𝚔𝚎 𝙲++?
▶ No.938805>>938824 >>938849 >>940602 >>940953
>>938766
>>938772
>>938774
>>938798
a-am I allowed to come back? uwu
Umm, so since one of you said to "start using an actual good programming language", that got me thinking. Is there any other language out there that has the same level of performance as C/C++? That seems to be the most common argument in favor of those languages: writing low level stuff that needs to be fast.
Is the Rust meme capable of that? I know people make fun of it a lot here, but people have already been trying to write an OS in it, and the shills say it's safer or something, so wouldn't that be a possible C-replacement? Of course the community sounds like a bunch of controlling meanies, but is it as good as they say on a technical level?
Do any of you have any other possibilities? ^.^ I think it would be nice to see, as it seems whether we want to replace C/C++ or not, we can't do it without finding something that's as fast.
▶ No.938806>>938921
>>938371
>Yes in that it could be considered the first "modern" OS and all OS design thereafter owes itself to UNIX.
Bullshit. UNIX weenies believe that all Multics innovations came from their "eunuch" OS because they don't know anything about Multics. The hierarchical file system and other parts of VMS, VME, Xerox workstations, and Lisp machines are based on Multics, not UNIX.
https://en.wikipedia.org/wiki/ICL_VME
>VME is structured as a set of layers, each layer having access to resources at different levels of abstraction. Virtual resources provided by one layer are constructed from the virtual resources offered by the layer below. Access to the resources of each layer is controlled through a set of Access Levels: in order for a process to use a resource at a particular access level, it must have an access key offering access to that level. The concept is similar to the "rings of protection" in Multics. The architecture allows 16 access levels, of which the outer 6 are reserved for user-level code.
>>938373
>C and UNIX are amazing.
They're an amazing lack of quality control.
>failed to accomplish anything in 50 years
Accomplishments don't disappear just because you don't know about them.
>>938557
Modern hardware does have bounds checking. x86 has segment limits and a BOUND instruction that can be used for bounds checking since the 80s. RISCs don't have it because the PDP-11 is not modern.
What I find disgusting about UNIX is that it has *never*
grown any operating system extensions of its own, all the
creative work is derived from VMS, Multics and the
operating systems it killed.
If you want to remember the actual last time you edited
those files, then keep your own damn database of dates
and times, and stop bothering us Unix Wizards.
I thought this is what RCS is for.
I'm TA'ing an OS course this semester. The last lecture was
an intro to Unix since all other operating systems were only
imperfect and premature attempts to create Unix anyway.
Some lecture highlights...
An aside during a discussion of uid's and many a unix
weenie's obsession with them: "A lot of people in the Unix
world are weird."
When asked if Ritchie et al regretted some other
inconsistency in Unix metaphysics, "These guys probably
don't care."
Have another twinkie.
Some Andrew weenie, writing of Unix buffer-length bugs, says:
> The big ones are grep(1) and sort(1). Their "silent
> truncation" have introduced the most heinous of subtle bugs
> in shell script database programs. Bugs that don't show up
> until the system has been working perfectly for a long time,
> and when they do show up, their only clue might be that some
> inverted index doesn't have as many matches as were expected.
Unix encourages, by egregious example, the most
irresponsible programming style imaginable. No error
checking. No error messages. No conscience. If a student
here turned in code like that, I'd flunk his ass.
Unix software comes as close to real software as Teenage
Mutant Ninja Turtles comes to the classic Three Musketeers:
a childish, vulgar, totally unsatisfying imitation.
▶ No.938824>>938833 >>938865
>>938805
>people make fun of it a lot here
>people
You mean LARPers? Anyways Rust is better than C/C++ unless you want to target some obscure hardware for which LLVM doesn't have a backend.
▶ No.938833>>938835
>>938824
>You mean LARPers? ... Rust is better...
Lol
▶ No.938849
>>938805
hey friend !!!!!! :D xoxoxoxoxo
you are reeeeeeeeeealy cute !!!!! ^.^
you are reeealy a biiig klutz, aren't you? :P ;) ;) you can't even keep yourself in character when you speak!!!!!
▶ No.938864
>>938835
you're right. its not an argument. faggot.
▶ No.938865>>938867 >>938868
>>938824
Rust has nice ideas, but ultimately the syntax ruins it all to the point where C is still better. C++ rules them all, of course, and you won't even be able to close the gap when C++2x is out.
▶ No.938867>>938869
>>938865
>syntax ruins it
fucking larpers caring about curly braces and keywords instead of shit that matters like memory models.
▶ No.938868>>938869
>>938865
>muh syntax
good argument, koolfaggot
▶ No.938869>>938870 >>938872
>>938867
>>938868
>syntax doesn't matter
Why don't you write in brainfuck then?
▶ No.938870>>938871
>>938869
You do realize brainfuck would not be better if you changed all the symbols to words and added parentheses, right? It would still be the useless shit that it is, only now more verbose. Do you even know what syntax is?
▶ No.938871>>938876
>>938870
Thou understandest what is meant well, verily.
If there were to be a kind of Rust with every keyword replaced by a Unicode symbol, wouldest thou still want to use such a speech?
▶ No.938872>>938874 >>938875
>>938869
If size doesn't matter, why are you an incel?
LOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOL
get rekt, [k00l]n1gg3r
▶ No.938874
>>938872
When was the last time thou hast sex with a maiden? Verily, I believe it was never.
▶ No.938875
>>938872
Dude, just filter the namefag.
▶ No.938876>>938877
>>938871
>with every keyword replaced by a Unicode symbol, wouldest thou still want to use such a speech?
This is common in certain languages used by major financial institutions. It is also very common in theorem proving languages. Shit like that is really not a big deal. The main annoyance is setting up your keyboard bindings to deal with it.
▶ No.938877>>938878
>>938876
Thou hadst not replied to my enquiry. Wouldst thou use it? Dost thou think such a speech would have quick adoption in the craft?
▶ No.938878>>938880
>>938877
> Wouldst thou use it?
I would not use Rust with or without it :^). I have used unicode in a few languages that require it though.
▶ No.938880>>938882
>>938878
>I would not use Rust
Have fun in prison once writing software in unsafe languages becomes illegal.
▶ No.938882>>938883
>>938880
>Rust is the only memory safe language
▶ No.938883>>938885
>>938882
Rust is the only ethical language :^)
▶ No.938887
>>938465
That's exactly how they do it in Plan 9. Unfortunately it will never be done in Linux because it'll piss off the sysadmins.
▶ No.938921>>938974
>>938806
>cannibalizing decent ideas from failed operating systems is a bad thing
Cry more, Lispfag.
▶ No.938923>>938959
>>938526
Is the bound instruction emitted by any modern compilers?
▶ No.938959
>>938923
It was never used as far as I know, and I did a lot of assembly and cracking in the early '90s. It was part of a large collection of garbage opcodes intended to coax Pascal compilers onto the platform, which Pascal compilers didn't end up using. It's been deprecated longer than most of you have been alive, and was effectively removed by amd64 in 2003.
▶ No.938965
Heh, good thing i don't use unix
>t. someone who uses unix
▶ No.938974>>938975 >>938993 >>939003 >>939090
>>938921
UNIX weenies hate over 60 years of good ideas, like segmentation, dynamic linking, strings, bounds checking, overflow checking, usable error handling, and so on. If UNIX weenies say UNIX is good because "all OS design thereafter owes itself to UNIX" and I say the OS design that newer OSes were based on actually came from Multics, they change the subject. Now it doesn't matter who invented what or what influenced what, just popularity. When someone points out that Plan 9 is neither innovative nor popular, they say some bullshit about how the industry is against innovation, but also that they should be against all innovation that didn't come from AT&T, whether it's older than the PDP-11 or from the 80s or the last few years. And you thought UNIX brain damage was metaphorical.
I don't see how being "professional" can help anything;
anybody with a vaguely professional (ie non-twinkie-addled)
attitude to producing robust software knows the emperor has
no clothes. The problem is a generation of swine -- both
programmers and marketeers -- whose comparative view of unix
comes from the vale of MS-DOS and who are particularly
susceptible to the superficial dogma of the unix cult.
(They actually rather remind me of typical hyper-reactionary
Soviet emigres.)
These people are seemingly -incapable- of even believing
that not only is better possible, but that better could have
once existed in the world before driven out by worse. Well,
perhaps they acknowledge that there might be room for some
incidental clean-ups, but nothing that the boys at Bell Labs
or Sun aren't about to deal with using C++ or Plan-9, or,
alternately, that the sacred Founding Fathers hadn't
expressed more perfectly in the original V7 writ (if only we
paid more heed to the true, original strains of the unix
creed!)
▶ No.938975>>938982
>>938974
Segmentation is still used today in the MMU.
▶ No.938982>>938986
>>938975
Segmentation is practically unused for its original purpose today. It's now ghetto tagged memory on Intel for thread-local support and stack protection. Even back in the DOS days we tried to avoid using limited range selectors. VCPI allowed doing it the modern way and DPMI the backwards, archaic way with non-overlapping ranges.
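You can see the leftover use for yourself: on x86-64 Linux, thread-locals are addressed off the fs segment base. Compile this with gcc -S and look for the %fs: prefix:
/* Thread-local storage rides on what's left of segmentation: with GCC on
   x86-64 Linux this access compiles to something like
       movl %fs:counter@tpoff, %eax */
__thread int counter;

int bump(void) {
    return ++counter;
}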
▶ No.938986>>938987 >>938988
>>938982
Do you ever source your claims?
▶ No.938987
>>938986
Yes he does! from a 9000-year old outdated book in the form of blockquotes!
▶ No.938988>>938990
>>938986
I am the source, faggot.
▶ No.938990>>938994
>>938988
Why would anyone trust an anonymous source for information? Get lost.
▶ No.938993>>938996 >>938999
>>938974
At least Plan 9 exists, works on modern hardware, and actually has autists dedicated enough to continue working on it. That's more than can be said for anything you'd like, with the best progress being on shit like some mostly dead LispOS projects and a sad Multics emulator running in a web browser.
▶ No.938994>>939062
>>938990
What are you even mad about you little shit? What do you think isn't true? Do you think compiled code today is full of segment selectors? You could just fucking check rather than look like a fool.
▶ No.938996>>938998
>>938993
You can buy OpenGenera for $5k and run it on top of GNU/Linux.
▶ No.938998
>>938996
>spend $5k on an OS for a hobby
>runs on top of Linux
▶ No.938999>>939002
>>938993
Plan 9 was a research dumpster fire where they intentionally gave every bad idea air to see if it was actually a bad idea or not. Spoiler: no good ideas were found. It's been the Edgy McEdgerson UNIX for several decades of contrarian kiddos.
▶ No.939002>>939005
>>938999
>where they intentionally gave every bad idea air to see if it was actually a bad idea or not
Such as?
▶ No.939003
▶ No.939005>>939011
>>939002
Take your pick. I can't think of a single feature I'd want to steal. No other OS programmers could, either.
▶ No.939011>>939015 >>939017
>>939005
Liar, liar, pants on fire.
▶ No.939015>>939016
>>939011
>collection of software that is only used by imageboard contrarians
I don't see the lie.
▶ No.939017>>939056 >>939092
>>939011
2 can play at that game
▶ No.939021
>>939016
I'm sure any day now someone will realize the genius of plan 9 and include absolute dogshit revolutionary tools like acme in /usr/bin by default. Stay strong, anon.
▶ No.939056
▶ No.939062>>939065
>>938994
I won't lol
Give me a reputed source or else your argument is invalid
▶ No.939065>>939066
>>939062
That's all pretty basic information, nothing surprising. Bing it yourself.
▶ No.939066>>939069
>>939065
So your argument is invalid
Okay
▶ No.939069>>939077
>>939066
>it's not true unless a Jew in the media or academia says it is
Who taught you to think this way.
▶ No.939077
>>939069
> implying jew media has to say it
> can't even pull up a link giving the source of the information
You just don't have a valid argument lol
▶ No.939083>>939090 >>940060 >>940061
>>938369 (OP)
>Did C and UNIX set back computer science by several decades?
Yes and no. They set operating system development and programming language design back. These two fields have not yet recovered.
As for everything else, they flourished in the essentially shit ecosystem that is Unix software development. You can get a library for anything, which generally depends on three other libraries (two of which are obsolete). And somehow, people have always managed to tie these libraries together and form bigger, better libraries, which include three or more obsolete versions of the same library because its dependencies have that fucked up of a dependency tree.
Alan Kay, The Computer Revolution Hasn't Happened Yet, OOPSLA 97 Keynote
> To me, the most distressing thing that happened to Smalltalk when it came out of Xerox PARC, was, for many respects and purposes it quit changing. I can tell you, at Xerox PARC there are four major versions---completely different versions of the language—over about a ten year period, and many dozens and dozens of significant releases within those different versions. I think one of the things we liked the most about Smalltalk was not what it could do, but the fact that it was such a good vehicle for bootstrapping the next set of ideas we had about how to do systems building. That, for all intents and purposes—when Smalltalk went commercial—ceased. Even though there is a book—the famous blue book that Adele and Dave wrote, that had the actual code in it for making Smalltalk interpreters and starting this process oneself—almost nobody took advantage of this.
I think about Go, and Rust, and Java. The developers want to take the languages places. The problem with this is that it makes the language a poor choice for development. You can't develop libraries and applications and user code for a language that is constantly changing. You need a foundation to build a building on, and part of being a foundation is not being a moving target. Think of how much code from the '80s and '90s is still in GCC, GNU in general, and Linux.
▶ No.939090>>939091 >>939093 >>939099 >>940140
>>938974
>>939083
>You can get a library for anything, which generally depends on three other libraries (two of which are obsolete). And somehow, people have always managed to tie these libraries together and form bigger, better libraries, which include three or more obsolete versions of the same library because its dependencies have that fucked up of a dependency tree.
And that is why you use libraries with permissive licenses and static linking.
>Alan Kay book quote:
>Smalltalk is good. Our xerox machine used it. We used it to build systems but nobody used it
Your Xerox machines and Smalltalk have no relevance to what you are saying at the moment.
> Go - Garbage colleted. Has a C Foreign function interface (FFI)
> Rust - Garbage collected, compiles to LLVM bytecode, then into actual machine code. Also has a C FFI
> Java - Garbage collected, compiles to bytecode and runs in a VM. Also has a C FFI
None of these languages allow direct manipulation of memory or pointer arithmetic in the language itself, without deferring to C or assembly.
Not allowing direct access to memory addresses means no access to hardware control registers.
No access to hardware control registers means inability to manipulate the system on your own and write system software.
As long as this is true, these languages will never replace C.
> You need a foundation to build a building on, and part of being a foundation is not being a moving target.
> Think of how much code from the '80s and '90s is still in GCC, GNU in general, and Linux.
So are you implying that the languages need to mature? Having so much code from the 90s simply means that the operating system is stable and tested. New code is not necessarily good code.
You tripfags and shitposters know absolutely jack shit about what you're talking about, yet you people keep spewing your shit everywhere.
▶ No.939091>>939094
>>939090
>Not allowing direct access to memory addresses means no access to hardware control registers.
C does not even guarentee that you can do this.
▶ No.939092
>>939017
I don't know what you're showing in that pic. I'm not playing any game, merely pointing out that plan9 software is actually useful enough to be in the ports tree of OpenBSD (and probably other BSDs and Linux distros as well; maybe even OS X has some packages). The other guy stated that plan9 yielded nothing useful whatsoever, but if that were the case, then these ports wouldn't exist. http://openports.se/plan9
Then he goes on to claim that only some imageboard autists will use them, which is basically the same pattern that Microsoft shills used a couple decades ago when trying to convince people Linux would never get anywhere. The details change, but the pattern remains the same.
▶ No.939093>>939095
>>939090
>Rust - Garbage collected
>None of these languages allow direct manipulation of memory or pointer arithmetic in the language itself
>You tripfags and shitposters know absolutely jack shit about what you're talking about
▶ No.939094>>939105 >>939107
>>939091
Even if that's true, so? You can.
The other languages you mentioned can't
You're just trying to nitpick at this moment because you don't have a point.
This is how easy it is to get stuff going on a screen on a Gameboy Advance just by manipulating hardware registers.
http://www.loirak.com/gameboy/gbatutor.php
#define RGB16(r,g,b) ((r)+((g)<<5)+((b)<<10)) /* args parenthesized for macro hygiene */

int main()
{
    int x, y; /* int, not char: 240 doesn't fit in a signed char, so char here is non-portable */
    unsigned short *Screen = (unsigned short *)0x6000000; /* VRAM */
    *(unsigned long *)0x4000000 = 0x403; /* REG_DISPCNT: mode 3, BG2 on */

    /* clear screen, and draw a blue background */
    for (x = 0; x < 240; x++)      /* loop through all x */
    {
        for (y = 0; y < 160; y++)  /* loop through all y */
        {
            Screen[x + y*240] = RGB16(0, 0, 31);
        }
    }

    /* draw a white HI on the background */
    for (x = 20; x <= 60; x += 15)
        for (y = 30; y < 50; y++)
            Screen[x + y*240] = RGB16(31, 31, 31);
    for (x = 20; x < 35; x++)
        Screen[x + 40*240] = RGB16(31, 31, 31);

    while (1) {} /* loop forever */
}
▶ No.939095>>939096
>>939093
Nitpicking again.
Fine. It does allow memory manipulation. However, according to the article below, it is highly discouraged and the ways to do it are quite tedious.
It also shows a method to do it via C, further showing that all the main, dirty work is left to C.
http://www.cs.brandeis.edu/~cs146a/rust/doc-02-21-2015/std/ptr/index.html
▶ No.939096
>>939095
> further showing that all the main, dirty work is left to C.
(for the most part)
▶ No.939099>>939106
>>939090
Are you illiterate?
> C and Unix allowed all portions of computer science except for OS and language design to flourish.
< Durr1 I disagree with you by agreeing with you1
> Quote that sets up further discussion.
< Durr1 I disagree with this because reasons1
> Discussion of why an assortment of current languages are unsuited to research topics outside of the language itself.
< Durr1 These languages are bad because I don't know anything about programming1
< in the language itself, without deferring to C or assembly
This is retarded and you should feel bad. Have you ever used BASIC? That language that has ungodly staying power? Its support for direct memory manipulation, in an age when this was important, was two statements: PEEK and POKE. Guess what they did. For any language, I can wrap these into standard library functions and magically get all of the functionality that C supposedly has over that language. Will it be written in assembly? You don't know and don't care. It may even be written in the language itself: it is possible to write a compiler for a language in that language. C compilers are written in C, and none of your supposed "C is magical" bullshit is what allows that.
< So are you implying that the languages need to mature?
No, I'm saying that the language has staying power because it hasn't changed. I can still compile C libraries written 30 years ago. I can still compile FORTRAN libraries written 30 years ago. I have trouble compiling Java code written 5 years ago. One of these things will not last as long as the others.
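To spell out the PEEK/POKE point: the wrappers are a few lines of C, and any language with an FFI that calls them "can touch hardware" exactly as well as C can. Hypothetical versions:
/* Hypothetical peek/poke primitives: this is the entire "magic" C has.
   Expose these through any language's FFI and it gains the same power. */
#include <stdint.h>

uint8_t peek(uintptr_t addr) {
    return *(volatile uint8_t *)addr;
}

void poke(uintptr_t addr, uint8_t val) {
    *(volatile uint8_t *)addr = val;
}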
▶ No.939101>>939102
>>938820
Ah the old
>M-My handwritten assembly is faster than the compiler's output!
meme.
That's a good one friend, thanks.
▶ No.939102
>>939101
It's easy to beat a C compiler with handwritten assembly as the compiler has no way to know that variables aren't modified by function calls, aliased pointers, etc. so has to keep reloading registers even when the programmer knows those modifications can't happen.
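A concrete sketch of the reload problem (nothing exotic assumed): without restrict, the compiler must assume dst might alias n and reload *n every iteration, while the programmer can simply hoist it by hand:
/* The compiler can't prove dst doesn't alias n, so *n is reloaded each pass: */
void scale(float *dst, const float *src, const int *n) {
    for (int i = 0; i < *n; i++)
        dst[i] = src[i] * 2.0f;
}

/* The programmer knows there's no aliasing and says so: */
void scale2(float *restrict dst, const float *restrict src, const int *n) {
    int count = *n;   /* loaded once */
    for (int i = 0; i < count; i++)
        dst[i] = src[i] * 2.0f;
}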
▶ No.939105>>939107 >>939109 >>939111
>>939094
I was just pointing out that it's implementation dependent and not required to work by the C standard.
>while(1){}
This looks like a battery life killer. Does the GBA not have any sort of low power mode or something?
▶ No.939106>>940037
>>939099
Are you fucking illiterate?
>> C and Unix allowed all portions of computer science except for OS and language design to flourish.
>< Durr1 I disagree with you by agreeing with you1
I never fucking quoted that statement.
And I did not disagree. I ADDED words to your statement, and ignoring that, you go on to say this.
>> Quote that sets up further discussion.
>< Durr1 I disagree with this because reasons1
I did not fucking disagree here either. I asked about the RELEVANCE of this quote to the rest of your post.
Tell me how your fucking Xerox machine and Smalltalk are related to the current discussion. All the quote says is that Smalltalk was cool and was used on a Xerox machine.
> in an age when this was important
Are you saying that today's computers don't work by manipulating hardware registers to draw things to the screen, read your hard drive, and do basically any sort of input, processing, and output?
> Guess what they did.
Manipulate memory? If you are trying to say that they will corrupt your memory, modern operating systems already disallow programs from manipulating such sensitive memory (or that was the case; see Meltdown).
> For any language, I can wrap these into standard library functions and magically get all of the functionality that C supposedly has over that language.
So are you saying that you will call a BASIC interpreter whenever you want to use PEEK and POKE? Anyway, why didn't the people who implemented these languages do it that way, instead of depending mostly on assembly and C?
> It may even be written in the language itself: it is possible to write a compiler for a language in that language. C compilers are written in C, and none of your supposed "C is magical" bullshit is what allows that.
Yeah. It's easy to generate machine code and slap an executable header on it.
And I never implied that C was magical. Just that it had one main thing that prevented the trendy languages from becoming system languages by themselves. (Well, with the exception of maybe x86 real mode code. You need to use assembly at that point.)
▶ No.939107>>939110
>>939094
>Using the simplest microprocessor of the last three decades to show your point
Good luck writing to the screen on anything but handheld consoles tbh fam
>>939105
There are timer interrupts, so I guess you could set them, and then feed it with NOPs.
▶ No.939109
>>939105
The GBA loops over main() indefinitely. If you do not add the while(1) {}, it will constantly repeat those instructions forever, which is more of a battery killer.
▶ No.939110>>939114
>>939107
Does that disprove my point about how your hardware is manipulated by software?
▶ No.939111>>939112 >>939120
>>939105
> I was just pointing out that it's implementation dependent and not required to work by the C standard
So? You're just changing values, and C allows you to do that.
▶ No.939112>>939115
>>939111
And most of the time it works in that manner (at least on x86 systems).
▶ No.939114>>939120 >>939126
>>939110
No, but you're picking the one example that makes it seem super easy. With anything more complex than a Gameboy, it's much more... complex.
That C "changes values" doesn't make the language super special; directly manipulating specific, compile-time-known memory addresses is something you almost never do.
>>939108
HALT, no, but there's a WFI (wait for interrupt), so you could set a timer and then feed it a WFI. True, it's better than a NOP for power saving.
▶ No.939115
>>939112
And system independent stuff like the C standard library functions depend upon implementations of those functions specific to each system.
▶ No.939120>>939123 >>939141
>>939113
> rest of instructions take more power than an infinite loop?
Doing more things takes more energy.
And you would be repeatedly rewriting those values to those addresses (unless your compiler optimizes it away, maybe)
>>939113
>>939111
Fine, the way I said it was a bit inadequate: it depends on whether the system exposes things like interrupts for this (as in x86 real mode). However, hardware register manipulation (usually through a pre-mapped address space) is something that is present in a lot of systems (or at least on x86).
https://wiki.osdev.org/Memory_Map_(x86)
https://wiki.osdev.org/Detecting_Memory_(x86)
> Arbitrary points on some architectures just don't make any sense. If you want a language that guarantees that you can do that, then you'll need to look for another language.
For example?
>>939114
>directly manipulating specific, compile-time-known memory addresses is something you almost never do.
You won't, but someone has to.
▶ No.939123>>939126 >>939142
>>939120
>You won't, but someone has to.
The point is, because it's so little used, not fucking everything should be written in C. Hell, even the parts of a kernel that directly manipulate memory are written in ASM. ASM allows you to use interrupts and such (C doesn't). Yet we don't use ASM for everything. Using C for userland tools and 99% of a kernel is insane, yet here we are.
▶ No.939126>>939128
>>939114
>>939123
More specifically, the higher-level languages are "pussifying" the new programmers and driving them away from any sort of real communication with and manipulation of the system, resulting in people like javascript "programmers" who are almost illiterate about how their system works, and you are advocating for a system to be written in such a language.
Just tell me what else would you add to C as a language to suit your tastes.
>>For example?
>For a language? Assembly.
>For an arbitrary pointer? (void *) 0x1
Ah I quoted it wrong. Give me an example of a system that does not have hardware registers and such.
>>939122
>How is it doing more things?
like drawing to the screen, which is comparatively more expensive than just sitting around.
▶ No.939128>>939130 >>939138 >>939425
>>939126
Ah, the classic "real programmers work directly with hardware" maymay.
Writing imperative, linear programs requires very little intelligence, whereas other paradigms start to require a good amount of abstract thought.
If you think C is for 150+ IQ people only, wait until you actually work with legacy C code. Not that JS isn't full of pajeets, but writing proper, clean, functional JS actually requires putting more thought into it than writing C, which in the end just means making sure you don't get dangling pointers.
There's more to programming than knowing your hardware registers, which isn't hard anyway.
▶ No.939130
>>939128
Addendum: the real programmer is lazy. He optimizes his resource usage, including time. Purposefully increasing your required time to do anything (for instance, using C) is not something a real programmer does.
▶ No.939138
>>939128
>Writing imperative, linear programs requires very little intelligence, whereas other paradigms start to require a good amount of abstract thought.
Apparently hack programmers sound like hack artists.
▶ No.939141>>939150
>>939120
That while loop will melt the battery. It's obvious wherever you copied the code from is just doing it like that to avoid having to get into the complexity of a game loop when giving an example of writing to the screen.
▶ No.939142>>939146 >>939149 >>939425
>>939123
> Ah, the classic "real programmers work directly with hardware" maymay.
I never implied that. I said that
char a[100];
strcpy(a, "string 1 ");
strcat(a, "string 2");
printf("%s", a);
Tells you more about how the system works than something like
a = "string 1 " + "string 2"
print(a)
and beneath that statement is something similar to the C code above
I am not saying that C is the best language to do your stuff in, but other languages are simply making the future developers into illiterates and hacks. A programmer's laziness is not necessarily a virtue in this case as shown in pic related.
It's similar to how figure drawing and other disciplines went out of fashion in art schools, and what we end up with is today's contemporary art colleges and CalArts.
>>939133
It doesn't need to, you're making it do it if you remove the while(1) {}.
▶ No.939146>>939150
>>939142
>but other languages are simply making the future developers into illiterates and hacks
How so? C and very low level stuff is still taught in colleges today.
And I don't know the state of art colleges where you live, but here, art students do have to learn how to properly draw shit, and they have a shitton of assignments to that effect. They are still pink-haired brats, but the education itself does teach them things. The same happens with tech colleges. Sure, they teach how to do shit in Java, but they also teach you how a processor works.
▶ No.939149
>>939142
Just a side note. In C you could do that with snprintf() too
char a[100];
snprintf(a, 100, "%s %s", "string 1", "string 2");
printf("%s", a);
▶ No.939150>>939151 >>939164 >>939838 >>940015
>>939141
Of course the GBA is going to consume power if you keep it running. It's just doing nothing in this case. I don't know if the GBA has a low power mode or not. The while loop itself is the game loop. The drawing operations aren't in the while loop because redrawing is expensive, isn't required, and would consume more battery.
Here's a game from the same guy demonstrating the game loop (see main.c):
http://www.loirak.com/gameboy/tank.php
>>939146
>Implying colleges teach you practical skills
>Implying things being taught means that the students are completely understanding 100%
>Implying you don't know about the state of CS graduates
>Implying you don't know about vomit paintings
Also, have a look at /co/. Here's an excerpt from the book "The Animator's Survival Kit"
▶ No.939151>>939161 >>939170
>>939150
What's wrong with the state of CS graduates? Some are useless, some aren't. About art, I've told you, I know a number of art students, and they're all capable of realistic drawing, as they were taught in fine arts school.
▶ No.939161>>939162
>>939151
>>939154
Holy fuck, are all of you guys newfags?
▶ No.939162>>939168
>>939161
I've been on here since mid 2015.
▶ No.939164>>939168
>>939150
>It's just doing nothing in this case
It's not doing nothing. It's doing something as fast as it possibly can. It will melt the battery.
>Here's a game from the same guy demonstrating the game loop
WaitForVblank();
This is what's limiting CPU use in his real game loop.
▶ No.939168>>939172
>>939162
Right around when the cancer arrived here. Confirmed newfag.
>>939164
Alright, yeah. Makes sense.
▶ No.939170>>939173
>>939151
CS grads struggle to write "Hello, World" examples in interviews today. And I've had people (who are probably attending a CS college) tell me that there's nothing wrong with that as CS isn't supposed to teach programming. This is nothing like it was 15 years ago. CS grads used to be able to write complex software.
▶ No.939172>>939228
>>939168
>le newfag maymay
Wow, you've been here since 2012 haven't you?
Also, nice dodge. Maybe try answering the question for once.
▶ No.939173>>939177
>>939170
CS shouldn't be about programming. That would be a waste of time.
A 7 year old can teach themselves programming.
▶ No.939177>>939179 >>939196
>>939173
Have fun being unemployed with a $100k student loan.
▶ No.939179>>939182
>>939177
>living in the only country where education will ruin your life
I'm not that dumb
▶ No.939182>>939189
>>939179
Have fun being unemployed in refugee territory.
▶ No.939189>>939192
>>939182
there is a 95% employment rate from the top 6 unis.
If you're unemployed it's most likely by choice. Average starting salary is $70k which isn't much but it's decent.
As long as you avoid germany and sweden you can avoid refugee territory
▶ No.939192>>939196
>>939189
All I know about the EU is that all your good programmers move to California right after graduating.
▶ No.939196>>939201 >>939210
>>939177
>student loans
Top kek.
>>939192
Our education is by far superior, but American salaries are higher.
▶ No.939201
>>939196
being a tripfag invalidates your opinion.
▶ No.939203>>939739
Linux is really, really fucking amazing and a lot of people don't realize that. Unlike plan9 which is a garbage mix of C and Go, Linux is UNIX's concept with a few adjustments that ended up benefiting it in the long run.
▶ No.939210>>939213
>>939196
It isn't superior. You never had anything like MIT or CMU; the closest you ever got was TU Delft. Europe gets fed the meme that it has the best education in the world, yet experience with CiteSeer suggests otherwise.
▶ No.939213>>939223
>>939210
The University of Zurich, Oxford, the Sorbonne, and the École Polytechnique in Paris have nothing to envy MIT or Harvard.
▶ No.939218
48% of the top 100 unis are american.
So it's about even.
▶ No.939223>>939224
>>939213
>Oxford
Oxford is a finishing school for refugees.
▶ No.939224>>939227
>>939223
WOW THIS REALLY TAUGHT ME, FUCKING YOUTUBE COMMENTS LMAO
Kill yourself my man, real hard.
▶ No.939227
>>939224
>getting this defensive that Islam is all over your shit now
You obviously know it's true. Do something about it.
▶ No.939228>>939230
>>939172
>lol you dodged a question
>ignores all other places where I go on to answer every single argument
It was bad enough that you were tripfags, and I tolerated it for a while, but you kids are newfags as well. You are lost causes.
>since 2012
lol
▶ No.939230>>939232
>>939228
>he had to go on Wikipedia to check the 8chan founding date
▶ No.939232>>939235
>>939230
And where else should I get the evidence from, sham?
▶ No.939234
Also, you tripfag kids don't leave even a corner while trying to justify your existence, don't you?
▶ No.939235>>939238
>>939232
We know when 8chan was founded, lad. I don't know about zdu, but I was on /g/ since 2012. It's in 2015 when I made the switch from halfchan to here.
▶ No.939238
>>939235
>second exodus
no wonder you are the way you are.
▶ No.939241>>939308 >>939328 >>939331
>>939222
It's only slightly exaggerated. I used to do interviews and I'd ask them to write a function to reverse a linked list (students mostly from the UC system but also other good CS colleges, interviewing for a job requiring strong C skills), just as my own fizzbuzz to exclude the fakes. Over the last 15 years, it grew to exclude almost everyone and I'd just sit through a whole interview watching them fail anything that involved actual programming.
>You are saying that they've made compilers and operating systems
Those courses have been gutted in a push to enroll all the "STEM" faggots who want a degree at any cost and don't care about things like placement rate. When I went to school in the '90s we wrote optimizing compilers from scratch in C for Oberon with an intermediate assembly stage and SPARC codegen with a lexer/parser we wrote ourselves. Today, the same course has them modify a partially written template of a compiler in OCaml with a parser generator and directly produce unoptimized x86 code. It's like 1% of the difficulty of what it once was. As a bonus, they're allowed to collaborate (read: get carried).
▶ No.939308>>939371
>>939241
So what would your advice be to current CS/SE students then? I have a few side projects already under my belt; some little system utilities, some steganography, other semi-unique things but nothing too complicated. I am going to start working on a distributed and concurrent application soon, will create my own programming language and interpreter for it in a few weeks, and later this year I will create my own Linux distro utilizing Linux From Scratch (I'll write my own package manager, probably my own build server software since I would like the binaries to be statically linked, etc). I've concluded that many of these are fruitful endeavors, but what would you suggest? How should a student separate themselves from the mediocrity and demonstrate real value for when they graduate? How does one make up for the aspects that have been stripped from CS/SE degrees in recent years?
▶ No.939328
>>939241
Degrees in CS are literally irrelevant to any employer with a modicum of intelligence, go write stuff instead of wasting your time at uni.
▶ No.939331>>939332 >>939373
>>939241
I'm an idiot with no education who doesn't know shit, but isn't that just something fairly simple like:
struct node **ap = malloc(maxlength * sizeof *ap); //Not ideal, but I don't want to bother writing something more complex
size_t n = 0;
for (structpointer = firstelement; structpointer != NULL; structpointer = structpointer->nextelement)
{
        ap[n++] = structpointer; //remember every element in order
}
while (n-- > 1)
{
        ap[n]->nextelement = ap[n - 1]; //point each element back at the one before it
}
if (firstelement != NULL)
        firstelement->nextelement = NULL; //the old first element is the new last
//the new head is whatever element used to be last
free(ap); //looked it up: free() is how you unallocate
▶ No.939332
>>939331
nice formatting brah
▶ No.939371
>>939308
>So what would your advice be to current CS/SE students then?
Jump into large open source projects and learn by doing. Use it to learn, as resume material, and to get jobs (if the project is used in industry, cold contact companies using it). The best programmers were the best programmers before they went to college.
▶ No.939373>>939399
>>939331
It requires no allocations, it's just pointer manipulation. Take the first element off the list, put it at the beginning of a new list, repeat until done. Just a couple lines of code.
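Something like this is all that's meant (a sketch, assuming a bare struct node { struct node *next; } list with no dummy head):
struct node *reverse(struct node *head) {
        struct node *prev = NULL;
        while (head) {
                struct node *next = head->next; /* take the first element off the list */
                head->next = prev;              /* put it at the front of the new list */
                prev = head;
                head = next;
        }
        return prev;                            /* new head of the reversed list */
}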
▶ No.939399>>939420
>>939373
You can do it in "O(1)" extra space on paper, but the fat stack is obviously going to take O(n) space while doing it, and O(n) time, by making a recursive function that accepts a node and the pointer to its parent. All you have to do now is execute it recursively until you find the last member, then return all the way up while replacing the current pointer with the pointer to its parent.
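A sketch of that idea (same assumed bare struct node; written tail-recursively, so the pointer swap happens on the way down rather than on the way back up, but the O(n) stack is the same unless the compiler eliminates the calls):
struct node *reverse_rec(struct node *node, struct node *parent) {
        if (!node) return parent;       /* walked off the end: parent is the last member, i.e. the new head */
        struct node *rest = node->next;
        node->next = parent;            /* point the current node back at its parent */
        return reverse_rec(rest, node);
}
/* call as reverse_rec(head, NULL) */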
▶ No.939420>>939426
>>939399
You're overthinking it. This is just a trivial fizzbuzz showing that someone understands the basics of working in C.
>recursively
Never use recursion. It's the shittiest thing you get taught in school. Unlearn it.
Here, I wrote you an example. It's not necessary to write a list_pop/list_push but it's cleaner than just doing it raw in reverse() and shows better form.
struct node *list_pop(struct node *list) {
        struct node *retval = list->next;       /* first real element (list is a dummy head) */
        if(!retval) return NULL;                /* nothing left to pop */
        list->next = list->next->next;          /* unlink it from the front */
        return retval;
}
void list_push(struct node *list, struct node *node) {
        node->next = list->next;                /* new element points at the old front */
        list->next = node;                      /* dummy head points at the new element */
}
void reverse(struct node *list) {
        struct node reversed = {0};             /* dummy head for the reversed list */
        struct node *node;
        while((node = list_pop(list))) {        /* pop off the front of the old list... */
                list_push(&reversed, node);     /* ...and push onto the front of the new one */
        }
        list->next = reversed.next;             /* hand the reversed chain back */
}
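Usage sketch, assuming struct node { struct node *next; } and stack-allocated nodes; the helpers above expect a dummy head whose next is the first real element:
struct node a = {0}, b = {0}, c = {0};
struct node head = {0};         /* dummy head, not a real element */
list_push(&head, &a);
list_push(&head, &b);
list_push(&head, &c);           /* list is now c -> b -> a */
reverse(&head);                 /* list is now a -> b -> c */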
▶ No.939425>>939426
>>939128
C sucks because most of the work is bullshit that compilers from the 60s could already do automatically.
>>939142
>and beneath the above statement is something similar to the above C code
Most of the time, that + would be evaluated at compile time, but C can't do that because it doesn't really have strings or arrays.
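To be fair, the one compile-time string operation C does have is adjacent-literal pasting; everything else is manual and at runtime:
const char *s = "Hello, " "world";   /* adjacent literals are folded into one at compile time */
/* but there is no +: "Hello, " + x is pointer arithmetic, and real
   concatenation means a buffer and strcat/snprintf at runtime */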
>simply making the future developers into illiterates and hacks.
That's what C does. These graduates do not know what a compiler can do or what hardware can do even at a 60s level because they learn "gee, I don't know, whatever the PDP-11 compiler did" instead of good languages.
>It's similar to how figure drawing and other disciplines went out of fashion in art schools and what we end up with is today's contemporary art colleges and calarts.
That's what happened when C and UNIX replaced real languages and OSes like PL/I and Common Lisp and Multics and VMS, but with UNIX it's worse because people still know those art styles exist. UNIX schools don't even teach what other OSes can do, let alone how they work, so UNIX weenies can't understand "that better could have once existed in the world before driven out by worse."
Q. Where did the names “C” and “C++” come from?
A. They were grades.
But it's much worse than that because you need to invoke
this procedure call before entering the block.
Preallocating the storage doesn't help you. I'll almost
guarantee you that the answer to the question "what's
supposed to happen when I do <the thing above>?" used to be
"gee, I don't know, whatever the PDP-11 compiler did." Now
of course, they're trying to rationalize the language after
the fact. I wonder if some poor bastard has tried to do a
denotational semantics for C. It would probably amount to a
translation of the PDP-11 C compiler into lambda calculus.
▶ No.939426>>939430
>>939420
>Never use recursion. It's the shittiest thing you get taught in school. Unlearn it.
why?
>>939425
based
▶ No.939430>>939435 >>939454
>>939426
Recursion is a stupid language trick that is inefficient and severely limited by stack size, and the amount of stack you have available at the time your function is called is unpredictable. Additionally, tail-recursive solutions that people claim match an iterative solution depend on compiler optimizations, yet almost no languages mandate these optimizations, so you're not writing portable code. You'd be surprised how many language implementations don't do it; e.g. only Safari does it for JavaScript.
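Concretely, in C (a sketch, same assumed struct node): this is a textbook tail call, and gcc/clang at -O2 will usually compile it to a loop, but no C standard requires that, so elsewhere it's one stack frame per element:
static size_t length(const struct node *n, size_t acc) {
        if (!n) return acc;
        return length(n->next, acc + 1);   /* tail call: *may* be compiled as a jump */
}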
▶ No.939435>>939463
>>939430
>Recursion is a stupid language trick
Except it's not. You're just calling a function. It's the most basic abstraction every language has.
>inefficient
Depends on how much work you are doing inside the function.
>severely limited by stack size
Just increase it.
>You'll be surprised
I'm not. I'm not a Lispnigger.
▶ No.939454>>939463
>>939430
> that is inefficient and severely limited by stack size
What retarded language are you using where that is true?
▶ No.939463>>939464 >>939794
>>939435
No one's going to create ridiculous stacks per thread in every process your code runs in just because you don't know what the fuck you're doing. They're just going to laugh at you.
>>939454
Any language that tries to be close to the hardware will have small stacks. C/C++, Java, C#, and Rust for example.
C++ and Rust have additional hard recursion limits on metaprogramming. C++ introduced ways to iteratively expand parameter packs which avoids the problem in new code but Rust still has problems with recursion in macros.
Don't write recursive code. It's objectively bad.
▶ No.939464>>939470 >>939477
>>939463
The stack size has nothing to do with the language.
I don't know about C++ but Rust's macro recursion limit can be specified at compile time. https://doc.rust-lang.org/reference/attributes.html#crate-only-attributes
>It's objectively bad.
You still haven't provided proof for that.
▶ No.939470>>939474
>>939464
>where are the proofs
It's a paradigm that is either less efficient or reliant on non-standard compiler optimizations just to hit parity. That's what we call objectively worse. Prove otherwise, nigger.
▶ No.939474>>939476
>>939470
>make claim
>can't back it up
>no u, nigger
found the LARPer
▶ No.939477
>>939464
It does vary by language as many shitty languages implement their stack on the heap.
▶ No.939480
▶ No.939576
▶ No.939651>>939719
>>939643
Song? I know I've heard it standalone before, but I can't remember the name of the song.
▶ No.939719
>>939651
Triple H - Rock Over Japan
▶ No.939739
>>939203
>UNIX's concept
You mean it used to act like SVR4 back before systemd? Clunky and based on legacy usage? Essentially, Solaris from a decade ago, but without a pricetag and with more developers?
▶ No.939740
Linux is holding back the internet as it is being used as a gimp to deliver Node.js and other Lisp trainwrecks. Javascript is the final, perfect reincarnation of Lisp, and look where it has gotten us. You cannot trust programmers,"code monkeys" or DEVELOPERS
▶ No.939761>>939762 >>939764 >>939765 >>939795 >>939892 >>940060
I'M SO FUCKING GOD DAMN SICK OF SUDO
Every fucking time I want to piss I have to sudo. Every fucking time I want to run ANY fucking god damn utility I have to sudo. I'm so fucking god damn sick of this absolute fucking bullshit. I can't run filemanagers because none of them let me MOVE A FUCKING FILE FROM ONE DISK TO ANOTHER DISK because fucking sudo.
OSX doesn't have this problem. I want to run docker I fucking run docker. That's it. That's all it is. Works without any sudo bullshit. I want to copy a fucking file from my hard drive to a USB stick I just drag and drop, done. Why in the god damn hell can't Linux be like this?
▶ No.939762>>939763 >>939794 >>940062
>>939761
>I want no security
LOL
▶ No.939763>>939764 >>939769 >>939892
>>939762
A user shouldn't have to run sudo and type in their password a billion times just to move files from the disk to a flash drive. This is not something that should ever need root privileges. Ever.
▶ No.939764
>>939761
>>939763
This exactly. What we should REALLY have for HOME computers (actual home user computers, not retarded mainframes) is ONE FUCKING USER. TempleOS does it right (except it has an Adam user, but that's a different story). Hopefully KKKOS will fix that.
▶ No.939765
>>939761
Let me just butt in for a moment, onii-chan. What you're calling Linux is actually, properly speaking, GNU/Linux, or as it's recently come to be called, GNU plus Linux. Linux by itself isn't an OS; it only becomes implementable as an OS, as defined by POSIX, as one free component of a complete GNU system, together with the GNU corelibs, shell utilities, and the other vital system components. Computer users all run a modified version of the GNU system every day without knowing it. Through a peculiar turn of events, the version of GNU which is widely used today came to be called Linux, and people aren't aware that it is the GNU system, made by the GNU Project. Linux itself does really exist, and everyone uses it, but it's only a part of the system. Linux is the kernel: the part that allocates the machine's resources to the other programs you run. The kernel is an essential part of an OS, but useless by itself; it can only function within the context of a complete system. Linux is normally used in combination with parts of the GNU OS: in other words, GNU/Linux. All the so-called Linux distros are really distros of GNU/Linux!
▶ No.939767>>939768
>>939764
>ONE FUCKING USER
I'm having a hard time visualizing that. OSes typically need system or internal 'users' for handling various tasks. Even Windows has a bunch of special accounts for this sorta stuff. Separation of privilege also means any random motherfucker who gets access remotely or physically can't take total control over the system.
The real question is: how do we properly balance making things more logical and convenient for a desktop OS, and not making the OS security swiss cheese?
▶ No.939768>>939770 >>939773
>>939767
We can start by not designing security around bad actors with physical access to the fucking machine because by that point you've already lost.
▶ No.939769>>939794
>>939763
>This is not something that should ever need root privileges. Ever.
Yeah any asshole should be able to write to any disk they want! Who gives a fuck!
Look man if you want a single user system, run your system as root.
▶ No.939770>>939772 >>939773
>>939768
>with physical access to the fucking machine because by that point you've already lost.
People always repeat this, but it's retard-level incorrect.
▶ No.939772
>>939770
What more could you possibly need beyond an encrypted disk with a password on boot?
▶ No.939773>>939776
>>939768
Perhaps, but let's hear >>939770 's side of the argument.
Speaking of which, whatever happened to physical locks on computers? I know a lot of old PCs had them, and I semi-recently interacted with some Dell enterprise-tier workstations that had them, but you don't see them in most cases.
▶ No.939776>>939777
>>939773
>Speaking of which, whatever happened to physical locks on computers?
Do you mean those things that chained the computer to a desk? Because those are not even half as useful as bicycle chains, which are already useless because anybody can just snip it with wire cutters.
If you mean something else like some sort of fancy physical turnkey lock that engages power or something then I've never heard of such a thing before.
▶ No.939777>>939778 >>939817
>>939776
>If you mean something else like some sort of fancy physical turnkey lock that engages power or something then I've never heard of such a thing before.
Yeah that's what I mean. The computer wouldn't turn on if you didn't have it 'unlocked'. Or in the case of those dells I mentioned, the computer would turn on, but it wouldn't be able to detect the disk and boot from it (you had to lock the disk into a special tray)
▶ No.939778>>939779
>>939777
Honestly it just sounds like a physical version of an encrypted disk password. I mean it's COOL and I want one just for the sake of over-engineering and shit, but it doesn't sound any different from the password. You've just exchanged typing in a key with physically inserting a key -- which may itself be less secure, because unlike software passwords, which should go through inhuman lengths of hashing and salting, the combination to a physical keyslot is stored right there in the shape of the lock, limited by the size of the lock and key, and it would only take some fucking with a lockpick to get it right.
▶ No.939779
>>939778
I can't seem to find the one I saw (it was on the front of the case, not the back), but this is kinda the idea.
https://hooktube.com/watch?v=xyds3QL4blY
Even if you shoved the hard drive+caddy into the case, the workstation wouldn't be able to read from it until you locked it in.
Afaik, the older computers i was talking about actually locked the power supply, and I think some workstations nowadays let you do that too.
I get what you're saying though. It's probably not any more secure than just having some kind of password. I wasn't really trying to make a point about what we should be doing, but more just talking about a really cool thing that you just don't see every day.
▶ No.939794>>939798
>>939463
>Any language that tries to be close to the hardware will have small stacks. C/C++, Java, C#, and Rust for example.
This is an example of UNIX brain death. Multics and x86 have a separate stack segment, so the stack can fit whatever size the program needs and grow without running into other data. UNIX came from the 16-bit PDP-11, so they never had the idea to use large stacks. C also has serious flaws that make it difficult to have a pointer to something in a separate segment, just because the PDP-11 didn't do it that way.
>>939762
>>939769
Lisp machines and Multics are a lot more secure than UNIX could ever be. Tagged memory and segmentation are faster, more secure, and more usable. This is why 4 or 16 rings are useful instead of just kernel vs user. Drivers run in a ring outside the kernel ring. Applications run in a ring outside the user ring. Microkernels are a way to do this in software, which is usually slower, but still better than not doing it at all.
The fundamental design flaw in Unix is the asinine belief
that "programs are written to be executed by computers
rather than read by humans." [Now that statement may be
true in the statistical sense in that it applies to most
programs. But it is totally, absolutely wrong in the moral
sense.]
That's why we have C -- a language designed to make every
machine emulate a PDP-11. That's why we have a file system
that forces every file to be viewed as a sequence of bytes
(after all, that's what they are, right?). That's why
"protocols" depend on byte-order.
They have never separated the program from the machine. It
never entered their tiny, pocket-protectored with a
calculator-hanging-from-the-belt mind.
▶ No.939795>>939801 >>939811 >>939892 >>939924
>>939761
Don't use sudo. Use a real root account you get to via switching your virtual terminal.
sudo is the biggest security joke in all of UNIX. Do not trust anyone on security who encourages you to use it. What's your threat model, that someone might hack your account and you want to be sure they don't hack root? Well if they get into a sudo user's account, they can trivially wrap sudo and wait for you to use it. So in trying to be safer, you've actually made things less safe by making some shitty user account that uses the GUI and a browser effectively a root account. You might as well just run as root if you use sudo, you're no more secure.
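By "wrap" I mean a couple of lines like the following dropped into your .profile by whoever owns your account (bash assumed; a sketch only, with the actual credential capture left as a comment):
sudo() {
        # a real attacker fakes the password prompt here and keeps whatever you type
        echo "$(date) sudo $*" >> /tmp/.log    # every privileged command, logged
        command sudo "$@"                      # then hand off to the real sudo
}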
▶ No.939798>>939838
>>939794
>segmentation
Why'd this become part of the conversation. Segmentation is archaic, from before we had page tables, and from when memory was banked. There's no reason to have it today. It was replaced by better hardware systems.
▶ No.939801>>939811 >>939892
>>939795
Good point, and this is something I was going to highlight earlier.
If sudo is going to be a thing, it needs to run the command after asking for the ROOT user's password, not the regular user's password.
Now to be perfectly fair, doing it the current way makes a lot of sense when you consider that you can have rules in the sudoers file specifying commands that can/can't be accessed by the sudoing user, and from that perspective it's quite logical. On a computer that has multiple users, you don't want to straight-up give the user root's password, as they'd then be able to do literally everything. You could instead give them sudo access to the commands they need, and nothing more.
However, assuming you're not making use of these rules (in other words, you're most people), then you're right, and sudo is worthless.
I know OpenBSD replaced sudo with "doas" at one point as they claimed sudo was insecure. Was this one of the reasons why, or was it some other issue they had with it?
▶ No.939811>>939892 >>939924
>>939795
>>939801
If someone gets into the user account not only can they wrap sudo, they can wrap the login daemon itself, so your point is moot. User accounts need to be abolished.
▶ No.939817>>939819 >>939855
>>939777
I remember those locks on the front of some PC cases. Seemed kinda flimsy, and I wouldn't have trusted it. Anyway, back then you could just store any sensitive files on floppy disks and lock those in your safe or whatever. Back then floppies were reliable enough to use as primary storage (although that changed sometime in the mid '90s when production got sent to China).
▶ No.939819
>>939817
These days corporate infosec is structured around cloud storage. They'd rather you keep your shit in the company dropbox than removable storage media.
▶ No.939838>>939861
>>939798
>Segmentation is archaic, from before we had page tables, and from when memory was banked. There's no reason to have it today. It was replaced by better hardware systems.
Segmentation was not replaced by any better system, unless you mean a global garbage collector like on Lisp machines. The PDP-11 memory model used by UNIX, AMD64, and RISCs sucks compared to segmentation. Multics uses both segmentation and paging because they serve different purposes, as explained in this paper. Multics uses segments to map all files into the address space. Programs have different segments like stack, code, dynamic libraries, and so on. Segments are made of pages that can be swapped individually instead of as a whole segment.
The picture in >>939150 is a pretty good analogy. UNIX schools are so bad that the professors can't teach properly because "they never learned it themselves."
http://multicians.org/multics-vm.html
>In segmented systems, hardware segmentation can be used to divide a core image into several parts, or segments [10]. Each segment is accessed by the hardware through a segment descriptor containing the segment's attributes. Among these attributes are access rights that the hardware interprets on each program reference to the segment for a specific user. The absolute core location of the beginning of a segment and its length are also attributes interpreted by the hardware at each reference, allowing the segment to be relocated any where in core and to grow and shrink independently of other segments. As a result of hardware checking of access rights, protection of a shared compiler, for example, becomes trivial since the compiler can reside in a segment with only the "execute" attribute, thus permitting users to execute the compiler but not to change it.
>In a system in which the maximum size of any segment was very small compared to the size of the entire core memory, the "swapping" of complete segments into and out of core would be feasible. Even in such a system, if all segments did not have the same maximum size, or had the same maximum size but were allowed to grow from initially smaller sizes, there remains the difficult core management problem of providing space for segments of different sizes. Multics, however, provides for segments of sufficient maximum size so that only a few can be entirely core-resident at any one time. Also, these segments can grow from any initial size smaller than the maximum permissible size.
>By breaking segments into equal-size parts called pages and providing for the transportation of individual pages to and from core as demand dictates, the disadvantages of fragmentation are incurred, as explained by Denning [9]. However, several practical problems encountered in the implementation of a segmented virtual memory are solved.
The lesson I just learned is: When developing
with Make on 2 different machines, make sure
their clocks do not differ by more than one
minute.
Raise your hand if you remember when file systems
had version numbers.
Don't. The paranoiac weenies in charge of Unix
proselytizing will shoot you dead. They don't like
people who know the truth.
Heck, I remember when the filesystem was mapped into the
address space! I even re<BANG!>
▶ No.939851
>>938484
>If you're so keen on saving keystrokes, you should switch to keyboard that doesn't require for example pressing shift AND 0 to get parentheses.
The solution is to rebind your keyboard, not buy a new one, dumbass
▶ No.939855
>>939817
The locks were easily opened by paperclip, they only had one point of contact. The point of them was for businesses to keep out the casual snooping of other employees. Sure, they could paperclip it, but there'd be no "oh, he left it on and I was just checking my mail" excuses for that one.
▶ No.939861>>939865 >>940015
>>939838
>Segmentation was not replaced by any better system
>here's some examples from the '80s
dohoho.
I've mentioned in another thread that segmentation is kill on modern systems like x86 and has been re-purposed as pseudo tagged memory for the stack and thread-local variables (a historical quirk of x86, reusing an existing system rather than make a new one) but got the "show me the proofs" faggot who refused to just objdump and see for himself. So for you, I give you kikepedia telling you you're totally fucking wrong and segmentation is no longer used. Paging has been used instead since the 386, and before x86_64 the page table had been extended to address memory beyond the 32 bit virtual addressing space (you know this as PAE). Segments and banks are '80s shit that is DEAD and will never return.
▶ No.939865>>939891 >>940015
>>939861
>>Segmentation was not replaced by any better system
>segmentation is kill
He wasn't saying that segmentation wasn't replaced
He was saying that he thinks segmentation was better than what we have now because "MUH MULTICS" and "MUH LISP"
▶ No.939891>>939899 >>940015
>>939865
Where do these kids even learn about Multics and Lisp machines? They were dead before these kids were born. Who's teaching them bad ideas that didn't stand the test of time?
▶ No.939892>>939896 >>939898
>>939761
>>939763
>dissing sudo because you don't know how to use mount or fstab properly
As expected from a Macfag. Either read their respective manpages or install udevil if you're lazy.
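For reference, a single fstab line is enough to let an ordinary user mount a stick without sudo (the device name and mount point here are assumptions; check lsblk for yours):
# /etc/fstab
/dev/sdb1  /mnt/usb  vfat  noauto,user,umask=000  0  0
# the "user" option lets a non-root user run: mount /mnt/usb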
>>939795
>>939811
>You might as well just run as root if you use sudo, you're no more secure
>this approach has potential vulnerabilities so you might as well spread your ass all the way
Nigger, that's the mindset of the Gentoo user who ran everything as root and got infected with ransomware through a Firefox vulnerability. Sudo isn't the end-all to computer security but it certainly helps if used alongside other tools.
Ideally you should sandbox network-facing or proprietary software using something like Firejail anyways.
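For example (a sketch; these are common Firejail options, check your version's man page):
$ firejail --net=none evince paper.pdf
# reader runs with no network access; add --private=DIR to hide the rest of $HOME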
>>939801
>If sudo is going to be a thing, it needs to run the command after asking for the ROOT user's password. not the regular user's password
It already exists and it's called su.
▶ No.939896
>>939892
>Sudo isn't the end-all to computer security but it certainly helps if used alongside other tools.
explain how. If someone can log in to a user account that has sudo access, what's there to stop them from just using sudo to gain control of the entire system?
▶ No.939898>>939924
>>939892
>it certainly helps
No. Only if you tightly tie down the allowed commands is it in any way useful, but no one does that; or they allow commands that aren't designed to prevent user abuse and are easily turned into a root shell; or on 'upgrade', these regular commands, which aren't being audited for security, gain bugs that can be abused. It's so fucking easy to wrap sudo and take control, try it yourself.
▶ No.939899>>939903
>>939891
A majority of the complaints found in these blockquotes seem to have to do with specific quirks that existed on the old UNIX systems used at the time of writing, and more than likely do not apply to modern implementations.
However, some of these quotes don't make sense even within the context of the book's publication date of 1994.
>The lesson I just learned is: When developing with Make on 2 different machines, make sure their clocks do not differ by more than one minute.
The Network Time Protocol, otherwise known as NTP, has existed in some form since at least 1985: 9 years before the book. In fact, NTPv3 was out 2 years before publication, so it had already had multiple revisions.
>The big ones are grep(1) and sort(1). Their "silent truncation" have introduced the most heinous of subtle bugs in shell script database programs.
SQL has existed since 1974: 20 years before the book, and the first commercially available RDBMS, Oracle, was released in 1979. Judging from https://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems there have been 25+, maybe even 30+ relational database programs that existed prior to the publishing of the Unix Haters Handbook.
>Raise your hand if you remember when file systems had version numbers.
you mean like ext2 (1 year before publishing), ext3, and ext4?
To be a bit more serious though, yes it is true that Unix and Unix-like operating systems don't have separate version numbers for the filesystems, preferring to tie them to kernel versions. Why is this a problem again? Because it's not how Multics or whatever used to do it? This isn't even a monolithic vs. microkernel argument, as even the microkernel OSes don't seem to bother putting version numbers for filesystems. And why should they?
▶ No.939903>>939910 >>939924
>>939899
The UNIX haters thing was a lolcow movement that was never taken seriously, kind of like suckless today. It's been weird to see it resurface on /tech/ - blast from the past. I started coding in the '80s on SunOS so I've seen all this noise come and go.
▶ No.939910
>>939903
at least suckless has a comfy application launcher, wm, and terminal emulator. these lisp and multics fags just sit around and complain
▶ No.939924>>939934
>>939795
>>939898
>>939811
>muh sudo wrapping
>muh login daemon wrapping
>wrapping is so fucking easy I swear
Since you probably aren't talking about traditional software wrappers, what are you on about? I assume you're referring to some shell scripting shit my bash-fu is lacking right now and not just using a retarded name for keyloggers.
>inb4 why should I spoonfeed you
You're trying to convince anons to abandon sudo and just use root for security reasons, and proper security involves understanding both how your security measures work and potential flaws in your approach. I would be retarded if I took such drastic advice without understanding such a fundamental yet badly explained part of your reasoning.
>>939903
At least the suckless movement actually produces working software, the Unix haters just bitch about life and never finish anything.
▶ No.939934>>939948
>>939924
>being this clueless about security
After your sudo-user account is hacked you're typing passwords and running root commands in a completely untrusted environment that is absolutely trivial to fuck with as there is zero security at that level unlike Windows and Mac where there are some privileged actions that cannot be intercepted or logged even on a hacked account. On Linux, the right way to deal with this is to only log in as root on a different VT as malware can't intercept the VT switch. We used to talk about SAK (Secure Attention Key) back in the day and that VT switch is the best you'll get on Linux. Btw, this is why NT required control+alt+delete to login, although they didn't follow through in the end.
▶ No.939948>>939975
>>939934
The point of sudo's default configuration is that if an attacker gains access to your account but not your password (either through a remote vulnerability or physically accessing your machine because your retarded ass forgot to lock it), he can't completely fuck over root and other users until he gets your password one way or another. Ideally an attacker should never get to this point and that's what other security measures are for, but it's still an extra layer of defense that keeps an attacker from completely fucking over the system until he gains the user's password.
Are there flaws with this approach? Definitely, and that's why it should be supplemented with sandboxing and access control systems. Does that mean sudo should be thrown out in favour of using root for everything? Fuck no, that's far more insecure.
▶ No.939975>>939981 >>940017
>>939948
>he can't completely fuck over root and other users until he gets your password one way or another
Yeah, so he adds a couple lines of shellscript to your .profile and gets it when you next run sudo. It's too easy. Again, try it yourself - LARP a hacker and try intercepting sudo on your own account so you see how easy this is, and how unsafe using sudo is.
>it's still an extra layer of defense
No, this has been part of skiddie scripts for decades. All it is is a false sense of security that keeps you using sudo instead of uninstalling it.
>Does that mean sudo should be thrown out in favour of using root for everything? Fuck no, that's far more insecure.
You're not reading. I'm advocating switching VTs, logging in as root, performing root actions, then switching back as a secure replacement for sudo. The VT switch can't be intercepted.
▶ No.939981
>>939926
Oh ok.
I thought that was about literal version numbers. As in, "this is the version of the filesystem."
So you want the filesystem to store every single old version of every single file in it? That seems incredibly inefficient. When I delete something, it's because I want it deleted.
I think what you're really asking for is version control. Do you know what git is? Are you asking for something like git, but for the entire filesystem or something?
>>939975
I agree with you on this. Although one thing I think would be helpful would be to just give us the literal thing you'd put in the .profile. The shellscript. It's $CURRENT_YEAR, and we all have hypervisors in our kernels now. We can test this shit out and see it for ourselves
▶ No.939993>>940002
>>939988
>I literally cannot think for myself
Take away permissions from the user on their own files? What would that look like I wonder. You'd need to make their shell config read-only. And take away the ability for them to write to their home directory so they don't replace your read-only files. And take away the ability to write to their local desktop config so they don't change its startup. Or its links to applications. And pretty much any dotfile like their ssh config, their text editor config, etc.. Yes, this surely sounds viable, how silly of me. Thank you for sharing.
▶ No.940002>>940006
>>939993
>completely ignoring MAC and sandboxing so you can excuse logging into root through a virtual terminal
▶ No.940006>>940009
>>940002
You know he's right.
▶ No.940009>>940011 >>940014
>>940006
He has a point, sadly. Disabling root login altogether and MAC+sandboxing+careful sudo usage is objectively more secure but it's also more complicated.
▶ No.940011>>940018
>>940009
Right, and do you seriously expect anyone to bother with MAC? Last I checked one of the most common SELinux questions is "how do I turn this shit off?", or something to that effect.
▶ No.940014>>940018
>>940009
How does one disable root on a sysvinit-compatible unix install?
▶ No.940015>>940028 >>940039
>>939861
>>939865
If you read the Multics Virtual Memory paper, you would understand why segments are a good idea.
>>939891
>Where do these kids even learn about multics and lisp machines. They were dead before they were born.
Multics was used until 2000 and Lisp machines were still made in the early 90s.
>Who's teaching them bad ideas that didn't stand the test of time?
I don't know anyone who teaches Plan 9, but they have to learn good ideas themselves because the UNIX schools don't teach them. That /co/ analogy in >>939150 is very similar to what happened here. It could be a real OS developer giving a talk on how error handling has worked in his OS since the 60s or 70s, when the "laid-back greybeard professor" says "What do you mean? All of us here use OOM killers and panic very well."
Subject: Revisionist weenies
For reasons I'm ashamed to admit, I am taking an "Intro
to Un*x" course. (Partly to give me a reason to get back on
this list...) Last night the instructor stated "Before
Un*x, no file system had a tree structure." I almost
screamed out "Bullshit!" but stopped myself just in time.
I knew beforehand this guy definitely wasn't playing
with a full deck, but can any of the old-timers on this list
please tell me which OS was the first with a tree-structured
file system? My guess is Multics, in the late '60s.
▶ No.940017>>940042
>>939975
<The VT switch can't be intercepted.
>what is .bash_profile
>what is using a insecure shell like bash on every major distro
>what is the sysrq magickey
>what is buggy DRM drivers and serial lines over remote code execution over the network
>what are insecure hypervisors running at ring -4
I thought this was the unix haters thread, you should know this shit already and I prefer unix over windows even if they are both piles of shit.
▶ No.940018
>>940011
Even with less painful MACs like AppArmor and Tomoyo, not likely. At least setting up a sandbox using something like Firejail is fairly easy.
>>940014
passwd -l root
▶ No.940028
>>940015
Memory segmentation still exists through the MMU.
▶ No.940037>>940050 >>940140 >>940149
>>939106
>Are you fucking illiterate?
I assumed bad faith, and that was impolite. I am sorry.
> Tell me how you fucking xerox machine and smalltalk is related to your current discussion.
Alan Kay's discussion of Smalltalk is why Go, Rust, and Java are unsuitable for business application development. He wanted a language that constantly evolved. As soon as it became commercial, the language stopped evolving. Either a language is a platform for language research, or it is used for commercial development. A language that is constantly changing is a CM nightmare and a developer nightmare: either I have to work in several different versions of the language to do my job, or we have to constantly be updating our entire technical baseline. Is that not what you see from the ecosystems of those languages?
> Are you saying that today's computers don't work by manipulating hardware registers to draw things to the screen, read your hard drive and basically any sort of input, processing and output?
The average programmer will never do this. With a simple hack to the language, like PEEK and POKE, a language of any oddness, even Prolog, can be turned into a systems programming language. There's nothing magical about pointers and manual memory management in C that make it better suited to any programming.
There are 3 ways to write a compiler for a language:
1) You write a compiler for that language in that language and compile it with a pre-existing compiler for the language. This is how most C compilers happen.
2) You write a compiler for that language in that language and compile it with a proto-compiler, a boot-strapping compiler, written in a second language.
3) You write the compiler in a second language.
Most programmers demonstrate that they are shitty programmers by thinking that option three is the only option, and that option one is option three when we are talking about C. They can't even comprehend a compiler that doesn't fall back on assembly or C to implement something like garbage collection or exception handling. You can write the LISP garbage collector in LISP? Inconceivable!
> Anyway, why didn't these people who implemented the language do it, and instead depend on mostly assembly and C?
Option one requires a compiler for the language to exist: it doesn't if you are creating a new language.
Option two requires you to write two compilers: one in a second language for a subset of the language, and a second in the language for the language.
When you choose option three, your compiler can be run on architectures that the new language itself doesn't support yet.
Because "C is portable assembly" and assembly is just readable machine code.
> Just that it had one main thing that prevented the trendy languages from becoming system languages by themselves.
And my point is that your point is sophistry. C has nothing that prevents the trendy languages from becoming system languages.
▶ No.940039>>940050 >>940149
>>940015
>If you read the Multics Virtual Memory paper, you would understand why segments are a good idea.
>educate yourself, shitlord
If you've read it but still can't explain why it's a good idea then why should I?
>Multics was used until 2000 and Lisp machines were still made in the early 90s.
That doesn't explain why you've heard about them; there are plenty of older technologies still in very wide use today, like MUMPS, that you've never heard of. I know the reason you've heard about dead tech like Multics and Lisp machines and it's because Jew school pushes them into your brain to harm you.
▶ No.940042
>>940017
>VT switching can be intercepted by a .bash_profile of a non-root user
Oh, do tell, anon. I'd love to hear your explanation of how this is supposed to work.
▶ No.940050>>940065
>>940037
>>940039
This guy is arguing with himself to hide that he was proven wrong.
▶ No.940060>>940061
>>939083
>>939761
>I can't run filemanagers because none of them let me MOVE A FUCKING FILE FROM ONE DISK TO ANOTHER DISK because fucking sudo.
What the fuck? On Gentoo I can do this fine.
▶ No.940061
▶ No.940062>>940087
>>939762
with capabilities you could have security without having to ask for everything.
▶ No.940087>>940131
>>940062
What the fuck do you think user groups do? Add your user to a group that has general USB access.
▶ No.940131
>>940087
That's still shit as a remote user who is sometimes a local user could access another local user's USB device. It needs to be mediated by a daemon like systemd that can safely put the device node itself in the user's namespace when they're local.
▶ No.940132
>>940065
totally organic post
▶ No.940140>>940141 >>940149
>>939090
> have relevance to what you are saying at the moment.
*have no relevance to what you are saying at the moment.
>>940037
The Alan Kay quote didn't quite describe what you were trying to say, at least not very obviously. Although I do agree with you on the constantly evolving part, some of these languages, like Java, have been successfully deployed for business purposes (at least the base language and related tools for web deployment). A language does not necessarily have to be good to gain widespread adoption. I have my doubts about this for Rust, and even more so for Go.
These may, however, be unsuitable for system applications (like drivers), which require interfacing with hardware, unless the language provides facilities for this; most of the above either don't provide them or call into the system to do so, and the system itself is written in a language that does.
> The average programmer will never do this.
That doesn't nullify its inadequacy in this area.
I will say this again. I never implied that C is a "magical" language. I said that It has features that allow it to interface to hardware and yes, any language can be extended to do so.
> Option one requires a compiler for the language to exist: it doesn't if you are creating a new language.
These are languages like Python, Java, Rust, Go, etc. They can in theory be written and compiled in their own language, provided the language has facilities to do so (with varying efficiency). But if written from the ground up, they have to be implemented in a language that adds little overhead, compiles to (preferably) machine code, allows fine control over the memory of the system, and allows fine control over how the implementation language translates to machine code, in order to put out an efficient compiler/interpreter/VM.
> Option two requires you to write two compilers: one in a second language for a subset of the language, and a second in the language for the language.
C and Assembly in this case, which you would have to do for about any language you would like to compile into machine code.
> When you choose option three, your compiler can be run on architectures that it doesn't support.
Compilers and interpreters written in a language that compiles to VM bytecode, or is interpreted, can technically run on any architecture/system that implements the virtual machine or the interpreter. They would run, but like you said in the above paragraph, not exactly efficiently. Same goes for other programs. But the VMs and interpreters themselves have to be written in a language that compiles to the machine code of that architecture.
> Because "C is portable assembly" and assembly is just readable machine code.
Exactly. And all of your above options mandate the existence of C or something like C: a language that bridges the gap between giving the computer instructions as a series of opcodes and operands, and logical programming with a low but reasonable amount of abstraction, and that is sufficiently easy to write a compiler for, which was one of the C specification's main objectives. I did not imply that C is the best language in the world. I did not imply that it is bad to use higher-level languages, or that other programming languages do not have the potential to become system programming languages. Embedded hardware is also a field where a language like C is required, unless you are writing in assembly or a properly optimised interpreter or compiler for another language exists. Lastly, like you've said (conversely), it is a well established, documented and tested language, whose common mistakes, error patterns and bugs have been documented.
As long as a language does not come along that satisfies all the requirements C fulfills, and maybe does it better than C itself, C will not be replaced or go out of popular use, nor does it have any reason to. This was my original point in >>939090 and I seem to have deviated from it for a bit. There are languages like Ada and Pascal which have proven able to fulfill these requirements, but at that point the discussion boils down to language grammar and structure.
Most of the tripfags and the unix hater cabal argue against C and ask for it to be phased out because it is not a language that alleviates common errors, automates memory management, has garbage collection (maybe they aren't using this as a point), and is easy to type; but this is the tradeoff for a language which is close to the hardware and has technically little to no overhead aside from the standard library functions, which are themselves implemented in terms of hardware or kernel facilities (for example, a function which prints a string would, at the most basic level, use a software interrupt to do so (the write syscall in Linux's case)).
▶ No.940141>>940143 >>940149
>>940140
Also, the "programmer is always right" 'philosophy' of C is a reason why C is hard to write.
▶ No.940143
>>940141
And it allows finer control over the program as a byproduct.
▶ No.940149>>940183 >>940225 >>940253 >>941042
>>940037
>With a simple hack to the language, like PEEK and POKE, a language of any oddness, even Prolog, can be turned into a systems programming language. There's nothing magical about pointers and manual memory management in C that make it better suited to any programming.
>They can't even comprehend a compiler that doesn't fall back on assembly or C to implement something like garbage collection or exception handling. You can write the LISP garbage collector in LISP? Inconceivable!
That's all true, but they don't teach this anymore, just like that /co/ analogy. C has a lot of irrational bullshit, so there is no way to teach the rationale for anything in C, except for a few parts they copied from other languages. That is why they no longer teach how to critique programming languages.
>>940039
>If you've read it but still can't explain why it's a good idea then why should I?
I've explained it many times already. Segments change I/O from calling a kernel using sequential tape drive emulation functions (pipes) to accessing data like RAM with ordinary CPU instructions, so there is no need for serializing everything like there is in tape-based I/O. Segmentation also makes record-based I/O easier, which is why Multics has it and UNIX doesn't. Segmentation eliminates copying and buffering, which reduces unnecessary I/O and kernel usage. Segmentation allows every program to have its own code segment, so programs and libraries are not modified or copied by a loader. Segments can grow and shrink independently. Segments can be shared between programs with different access rights for each program.
>In addition, the relatively large number of segment descriptors eliminates the need for buffering, allowing the user program to operate directly on the original information rather than on a copy of the information. In this way, all information retains its identity and independent attributes of length and access privilege regardless of its physical location in main memory or on secondary storage. As a result, the Multics user no longer uses files; instead he references all information as segments, which are directly accessible to his programs.
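The closest thing a modern UNIX gives you is mmap(2), where file contents really are accessed with ordinary CPU loads instead of read() calls; a minimal POSIX sketch:
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
        int fd = open("data.bin", O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) return 1;
        /* the file now looks like RAM: no buffering, no copy into userspace */
        unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) return 1;
        printf("first byte: %d\n", p[0]);   /* ordinary CPU load, not a read() */
        munmap(p, st.st_size);
        close(fd);
        return 0;
}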
>I know the reason you've heard about dead tech like Multics and Lisp machines and it's because Jew school pushes them into your brain to harm you.
Multics and Lisp machines are the antidote to the brainwashing and Stockholm syndrome caused by shills and weenies.
>>940140
>And all of your above options mandate the existence of C or something like C. A language that bridges the gap between giving instructions to the computer by a series of symbols and operands and logical programming with a low, but reasonable amount of abstraction and (not necessarily) is sufficiently easy to write a compiler for, which was one of C specification's main objectives.
He's right. If you have PEEK and POKE in your language, you can do anything C can do, and probably better because the compiler doesn't have all this anti-user bullshit. You don't need a language other than Lisp to write a Lisp compiler, or a language other than Pascal to write a Pascal compiler, and so on. If you have special assembly instructions, you can add them to the compiler.
>Lastly, like you've said (conversely), it is a well establshed, documented and tested language, of whose common mistakes, error patterns and bugs have been documented.
C is so poorly documented that the standard doesn't explain whether some programs are even defined or not. The common mistakes have been "documented" hundreds of times since the 70s, but nobody ever does anything to fix them. If you showed this blog post to programmers in the 70s (outside of UNIX and AT&T), they would say this person was an idiot who had no idea what high level languages are about. C really does damage your brain.
https://blogs.msdn.microsoft.com/oldnewthing/20140627-00/?p=633
>>940141
C has a "the AT&T programmer is always right" philosophy. If the programmer is from AT&T, the software is perfect and the user is just using it wrong. They never intended lines to be longer than 100 characters, so it's the user's fault for not reading the source code. If the programmer is not from AT&T, it's because the programmers are idiots and the language is perfect.
You have obviously been brainwashed. You can't tell working
software from broken software. If you don't have some
horror story, or some misdesign to point out, KEEP YOUR
POSTS OFF THIS LIST!! Kapeesh? We don't want to hear your
confused blathering. Go bleat with the rest of the sheep in
New Jersey.
▶ No.940183
▶ No.940218>>940221
C and every other procedural and functional language certainly set us back far.
Logical and Functional-Logical languages are going to be the only solution for managing the sheer amount of uncertainty that is generated by the complexity of modern internetworking
▶ No.940221>>940224
>>940218
Because mental gymnastics trump procedural logic, right?
▶ No.940224>>940230
>>940221
It's the only programming paradigm that can handle fuzzy logic and other types of uncertainty in a way that is easily understandable.
It makes for nice constructs such as allowing the user to set fine-grained prioritization of goals. Want safety over efficiency? Just set that parameter and the Bayesian engine will take care of it for you.
▶ No.940225>>940229
▶ No.940229
▶ No.940230
>>940224
If that were actually the case all the ML crap would be written in C instead of Python.
▶ No.940247>>940249 >>940271
a few months ago i made a wiki to write down some thoughts i had about what is wrong with unix and what could be better about it. here are the articles i have made for it so far: https://unixhaters.miraheze.org/wiki/Special:AllPages
i would be interested to know what thoughts people have on the stuff written there. it is also of course open for anyone to edit and contribute to as well
▶ No.940249>>940251
>>940247
>running 2 scripts on the page
You hate Unix like Dixie Kong hates shredding!!!
▶ No.940251
>>940249
i don't understand what you meant by that
▶ No.940253>>940255 >>940917 >>941042
>>940149
>Segments change I/O from calling a kernel using sequential tape drive emulation functions (pipes) to accessing data like RAM with ordinary CPU instructions
I see the problem, you have no idea what you're talking about.
Segmentation is all about defining limited ranges on physical memory that are accessed via some form of segment descriptor. The goal is to partition physical memory such that applications can't access each other's segment. It was a system for before we had virtual memory to provide some degree of memory protection (and some oddballs used it to handle when there was more physical memory than physical addressing space). Before segmentation, applications would just be told they owned that range of physical memory but it wasn't enforced by anything.
I read the Multics paper that you failed to understand and it's very close to the same thing as you'll find on x86 and used by Linux, but the wording would confuse a moron like yourself. I assume you were forced to read this for class and have no frame of reference. The hardware they're using has the same kind of segmentation and paging as systems we use today, their kernel is mapped into the same physical address space similar to modern OSes (although they do this with a separate segment rather than page it into a common segment), they're implementing a form of mmap() which at the hardware level is the same as we do it today, but their software (Multics) is enforcing a one segment per file rule.
Completely unrelated to segmentation itself (again, you're likely getting confused here), they also had a system where segments were created and destroyed outside of the application, published by name by the kernel, and shared in a directory hierarchy. This is very similar to how we use /dev/shm/ and ipcmk today. It's unclear to me if every application had to deal with this or if this was only intended for shared segments. There's a dangerous amount of language in the paper that suggests it was for all segments. It's possible given the age of Multics that it really was that crazy.
The question one should ask after reading the Multics paper is "why not just use one segment that covers the whole address space and do everything with paging?". And indeed that's how later systems used the same hardware. It's why reading about Multics is a history lesson rather than an education on modern OS design.
Anyway, I wasted my morning looking into this for you, and I'm sure you'll now ignore it and continue talking crazy shit about tape drives, pipes, and shills but the answer's there if you really want to learn something instead of LARP.
▶ No.940255>>940369
>>940253
omg stfu about flat memory models dad no one cares
▶ No.940271>>940282
>>940247
>Many users have to resort to sifting through large amounts of strace output just to figure out where it is getting information from
>To answer questions like "where does a program look for its configuration files", you shouldn't need to know what a system call is.
Why are you running strace to find where a program's config file is supposed to be? The man page or other documentation would tell you that.
>Why does your PDF reader need to be able to read all of the files on your disk?
It doesn't?
just as a test, I used some common UNIX PDF readers.
Evince:
$ evince path/to/file.pdf
*pdf opens*
$ evince path/to/file.txt
*window opens with error*
Unable to open document "file:///path/to/file.txt"
File type plain text document (text/plain) is not supported
Zathura:
$ zathura path/to/file.pdf
*pdf opens*
$ zathura path/to/file.txt
*window opens, all black. console output shows error*
error: Unknown file type: 'text/plain'
And hey look! I used the code tags for something other than blockquotes!
▶ No.940279>>940280
>>940277
Ditto. strace is great. If you ever have to do this for a systemd service you're in for some shit, though. Requires attaching to pid 1 ahead of time, using systemctl to message-pass to pid 1, then sifting through thousands of lines of garbage before it gets to opening files.
▶ No.940280>>940286
>>940277
>Not going to lie, I use strace to find the config files too
I mean I guess you could, but to me it sounds like trying to cut a sandwich with a scalpel. It just sounds horribly inefficient compared to the usual method
>>940279
>If you ever have to do this for a systemd service you're in for some shit, though. Requires attaching to pid 1 ahead of time, using systemctl to message-pass to pid 1, then sifting through thousands of lines of garbage before it gets to opening files.
THE ABSOLUTE STATE OF SYSTEMD
▶ No.940282
>>940271
>Why are you running strace to find where a program's config file is supposed to be?
I have done this many times before, for the exact reason I stated - it's often difficult to know where programs are looking for things (either because it is not documented, or it is documented but the distribution has changed where it looks for things, for example). My proposed solution to this is to make programs declare which files they require access to, so that not only is it easy to know what files can affect the execution of the program in general, you also know this without even running the program.
>It doesn't?
PDF readers are a common exploit vector on modern computers, because of the combined complexity and ubiquity of the PDF format. On a standard Linux system, programs can read anything the user they run as has access to. This means that, for example, find can recurse over every file in your filesystem, which is good, but it also means your PDF reader can too, if it is programmed to do so. This is really bad, because in order to do its job, your PDF reader shouldn't need to access anything other than the PDF file you are trying to read, some configuration files/program state, the Xorg socket, and maybe a few other things. That it can access everything else means that if your PDF reader is compromised, and you run it as your user (which most people do), then it can be programmed to do anything your user can do, to any of the files your user has access to, instead of just the files it needs in order to perform its function. The only real solution to this is a fundamentally different permissions model. Stuff like selinux policies might help, but as it stands they are difficult to use and understand.
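For what it's worth, OpenBSD's unveil(2) is one shipping take on exactly this idea; a sketch (OpenBSD-only, with the PDF path taken from argv):
#include <err.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
        if (argc < 2) return 1;
        if (unveil(argv[1], "r") == -1)    /* the one PDF we were asked to open */
                err(1, "unveil");
        if (unveil(NULL, NULL) == -1)      /* lock the list: nothing else is visible */
                err(1, "unveil");
        /* ... parse and render argv[1]; open() on any other path now fails ... */
        return 0;
}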
▶ No.940286
>>940280
It's good for finding where the problem is. Like, is there some issue with nss, the locale files, is there some local ~/.conf file causing issues, is there some /var/run directory it's colliding with, are there stale files in /tmp, is there a config in /usr/share that's overriding my config, is there some wacky config override directory in /etc that overrides mine, is there a config in /var/lib that's overriding it, did Debian dynamically glue the config together from multiple parts and fuck it up, etc..
▶ No.940369
▶ No.940602
>>938805
No fuckoff
Same goes for all you IRCniggers shitting up this board.
▶ No.940850>>940889 >>940907 >>940913
>>938369 (OP)
For their time, C and UNIX were awesome. C still has its uses as a sort of "portable assembly language", but it's really long in the tooth and we are overdue for a portable low-level language that can take its place.
UNIX was a marvel of its time, but its time has long since passed. And taking the architecture of a multi-user time-sharing OS like UNIX and using it as the basis for a single-user desktop OS (Linux), or especially as the basis for a smartphone OS, is fucking retarded.
Terry A. Davis might be crazy as fuck but he is right when he says that UNIX sucks. We desperately need a new desktop OS paradigm that sheds all the UNIX garbage that currently infects Linux, Mac, Android, etc.
BeOS probably would have been the best bet but it's long dead now.
▶ No.940889>>940913
>>940850
I agree, but I only see the current Unix and Unix-like OSes as a single substrate of the generalized Unix philosophy (since most have some strong connection or direct lineage). It's possible to adhere to the Unix philosophy and end up with a radically different operating system. A lot of the current Unix practices exist the way they do because of the environmental conditions of the time (the 1970s); implementing features without those concerns in mind (or with different ones appropriate for CY+3) would give you a different piece of software.
A lot of the Unix-haters' arguments, and arguments made by similarly-minded anons here, are quite sound, certainly have their merits, and I find myself agreeing with many. Perhaps I have misinterpreted some of the rhetoric, but I just don't see the need to dump the Unix philosophy because one particular derivation didn't age well (and, as some argue, wasn't designed correctly at its inception; could it have been?).
▶ No.940907>>940918
>>940850
>reddit spacing
>terry davis
>>>/reddit/
▶ No.940913>>940914 >>940917 >>940932
>>940850
>For their time, C and UNIX were awesome. C still has its uses as a sort of "portable assembly language", but it's really long in the tooth and we are overdue for a portable low-level language that can take its place.
For their time, C and UNIX sucked. C being a "portable assembly language" is an after-the-fact excuse for why so much about it is broken. C was intended to have the same purpose in UNIX as PL/I in Multics, Algol in Burroughs machines, Lisp in Lisp machines, and so on. AT&T shills wanted to stop comparisons with other systems programming languages and have people thinking of C as an assembly language instead. They want you thinking "C is better than as" instead of "C in 2018 is worse than PL/I in 1965."
>UNIX was a marvel of its time, but its time has long since passed.
UNIX was a marvel only in how much it sucked. Compared to the engineering standards of the day, UNIX is like saying shitting your pants (panic) is the same as building a water and sewer system. The contemporary OSes were better because they were real multi-user OSes, where panic was unacceptable: a problem with one user or device should never kill everything everyone is doing and force a reboot. That's not even acceptable on a single-user machine. However, panic isn't even 1% of what's wrong with UNIX.
>And taking the architecture of a multi-user time-sharing OS like UNIX and using it as the basis for a single-user desktop OS (Linux), or especially as the basis for a smartphone OS, is fucking retarded.
Lisp machines are a much better choice for desktops and smartphones, because both run programs with GC and lots of memory allocation, which is much faster on a Lisp machine. A real multi-user OS like Multics or VMS is better for a multi-user machine. That's why these choices in OS design exist. UNIX weenies believe UNIX is the only OS that should exist, and it sucks.
>Terry A. Davis might be crazy as fuck, but he is right when he says that UNIX sucks. We desperately need a new desktop OS paradigm that sheds all the UNIX garbage that currently infects Linux, Mac, Android, etc.
UNIX weenies do not want you replacing it, originally because AT&T or some other UNIX company like Sun would lose money. Now it's probably because they wasted 40 years learning the intricate flaws of broken "tools" like ps, ls, tar, and grep, and would be mad if their replacements took 5 minutes to learn.
>BeOS probably would have been the best bet, but it's long dead now.
The problem with BeOS is that C++ compilers suck: C++ has never had a stable ABI, so you are stuck with the obsolete version of GCC the system libraries were built with, thanks to 1970s linker misdesigns in UNIX.
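A quick illustration of why (mangled names quoted from memory, so treat the exact strings as approximate):
// the same declaration...
void Frob(int);
// ...becomes a different opaque string under each ABI, and the UNIX
// linker does nothing smarter than match opaque strings, so objects
// from the two compilers can never link together:
//   g++ 2.95 (old GNU ABI):      Frob__Fi
//   g++ 3+  (Itanium C++ ABI):   _Z4Frobi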
>>940889
The UNIX philosophy sucks. There are a lot of good philosophies that you can copy instead. One of the things they have in common is the tendency to have a language powerful enough to be used for everything.
http://archive.adaic.com/pol-hist/history/holwg-93/holwg-93.htm
>There are a lot of things that one can do differently because there is a truly common language. Ada provides a lingua franca for communication beyond programming. Pure Ada is used as a design language, a command language, database entry and checking, and for message format definition, resulting in enormous benefits in development time and system reliability. But much of the applications are of conventional design and have not exploited the full range of novel Ada techniques.
There are times when I feel that clocks are running faster
but the calendar is running backwards. My first serious
programming was done in Burroughs B6700 Extended Algol. I
got used to the idea that if the hardware can't give you the
right answer, it complains, and your ON OVERFLOW statement
has a chance to do something else. That saved my bacon more
than once.
When I met C, it was obviously pathetic compared with the
_real_ languages I'd used, but heck, it ran on a 16-bit
machine, and it was better than 'as'. When the VAX came
out, I was very pleased: "the interrupt on integer overflow
bit is _just_ what I want". Then I was very disappointed:
"the wretched C system _has_ a signal for integer overflow
but makes sure it never happens even when it ought to".
It would be a good thing if hardware designers would
remember that the ANSI C standard provides _two_ forms of
"integer" arithmetic: 'unsigned' arithmetic which must wrap
around, and 'signed' arithmetic which MAY TRAP (or wrap, or
make demons fly out of your nose). "Portable C
programmers", know that they CANNOT rely on integer
arithmetic _not_ trapping, and they know (if they have done
their homework) that there are commercially significant
machines where C integer overflow _is_ trapped, so they
would rather the Alpha trapped so that they could use the
Alpha as a porting base.
▶ No.940914
▶ No.940917>>940920 >>940923
>>940913
>GC and lots of memory allocation, which is much faster on a Lisp machine
Blatantly false: cheap microcomputers ended up running Lisp programs faster than the specialized Lisp machines did.
Why the mods haven't banned you for spamming age-old usenet posts and parroting their often outdated or downright retarded opinions, while never responding to thought-out criticism of your shit (for example, >>940253 ), is beyond me. You've essentially become a worse version of the person described in pic related, except unlike him you flat out won't respond to valid criticism, as a desperate attempt to keep other anons from noticing it.
▶ No.940918>>940921
>>940907
>all paragraphs are reddit spacing
▶ No.940920
▶ No.940923>>940926
>>940917
>butthurt UNIX weenie whining about the lack of a safe space in /tech/
based UNIX haters poster is my spirit animal. fuck off
▶ No.940926>>940927 >>941026
>>940923
>idolizing an attention-whoring faggot who spams the same couple usenet posts in (usually) unrelated threads and never responds to genuine criticism of his rhetoric
Shit taste.
▶ No.940927>>940930
>>940926
UNIX a shit. C also a big shit.
stay mad
▶ No.940930>>941121
>>940927
I couldn't agree more.
Post made by the KKK
©2016-2018 K00l K1d5 K1ub Inc.
▶ No.940932
>>940913
>varg hidden 2 minutes /tech/ Dismissed a report for post #940913
LOOOOOOOOOOOOL. butthurt UNIX weenies are reporting based UNIX haters poster. go suck a dick, fags.
▶ No.940953
>>938805
I wonder if McAfee would die of AIDS if he'd fuck you in the ass - he's dodged a few rounds of Russian roulette and can't actually die in this universe, you know.
▶ No.940985
>>938369 (OP)
This, fuck UNIX. GNU is better in every way
▶ No.941026>>941042
>>940926
>criticizing rhetoric and not the substance of the arguments
leave
▶ No.941042
>>941026
>substance
Lispfag's posts have no thought or substance of their own; it's all parroted rhetoric from other people. Notice how the only thing he ever does is use other people's posts as an excuse to repeat the same precanned points, points he can't even get right (for a recent example, see >>940149 for his explanation of segmented memory and anon's response at >>940253 ), and he falls silent whenever responses don't fit one of his template topics or invalidate them. Quite often the quotes he stuffs at the bottom of every post don't even fit the post; they're just picked from a small pool and used as a signature.
The sad part is that there are other anons with actual complaints about Unix who can hold their own in threads; they're just overshadowed by this idiot spammer everyone grew sick of talking to weeks ago.
▶ No.941074>>941103 >>941125
>>941068
Hello, poettering! How nice of you to cherrypick the worst possible alternative to your shite in an attempt to make all other options look bad.
Reminder to not fall for this scummy tactic.
http://jdebp.eu/FGA/run-scripts-and-service-units-side-by-side.html
▶ No.941076
>>941068
>He bitches about the readability of the non-systemdicks ssh script
>while ignoring that on shitstemd everything is hidden away in other files that the program calls behind the scenes
▶ No.941103
>>941074
>completely remove all service dependency info in the [Install] sections of unit files because the others are too shit to do it
>make an "equal" comparison
wew, there's some serious butthurt behind that graphic. There are lots of other things wrong but that one's huge.
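For reference, here's roughly what an [Install] section carries (a hypothetical unit; all names made up):
# /etc/systemd/system/exampled.service
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/exampled

[Install]
# the part the graphic threw away: `systemctl enable` reads this
# to wire the service into the boot dependency graph
WantedBy=multi-user.target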
▶ No.941121
▶ No.941125
>>941074
>>941068
>People somehow think it is acceptable to have an operating system this convoluted, a configuration system this bad, an argument system this insecure, just because it makes them feel more 31337