[–]▶ No.921189>>921205 >>921222 >>922449 >>923004 >>928209 [Watch Thread][Show All Posts]
Lisp machines are shit and never amounted to anything. Unix haters bitch all day about Unix problems from 20 years ago and yet can't even build a basic functional system. Remember guys, lisp OS is simple and elegant! In fact it's so simple we are incapable of writing a modern version and can only bitch all day.
▶ No.921190>>921193 >>922301
UNIX machines are shit and never amounted to anything. Unix haters bitch all day about Unix problems from 20 years ago and yet can't even build a basic functional system. Remember guys, UNIX is simple and elegant! In fact it's so simple we are incapable of writing a modern version and can only bitch all day.
▶ No.921193>>921194 >>921365
>>921190
>UNIX machines are shit and never amounted to anything
<What is LINUX, BSD, OSX
> In fact its so simple we are incapable of writing a modern version and can only bitch all day.
<What is almost every OS
▶ No.921194>>921197 >>921245
>>921193
><What is LINUX, BSD, OSX
Not UNIX
><What is almost every OS
Not UNIX
▶ No.921197>>921198
▶ No.921198>>921200 >>921201
>>921197
>Not to be confused with Unix, Unix-like, or Linux.
hmmmmmmmmmmmmmmmmmmmmmm...
▶ No.921200>>921203
>>921198
You realize the Unix-Haters Handbook is not literally about the original AT&T Unix, right?
▶ No.921201>>921203
>>921198
Just replace everything above with *posix if your brain is capable
▶ No.921203>>921205
>>921200
UNIX a shit. No matter what flavor.
>>921201
POSIX is also shit.
▶ No.921205>>921207
>>921203
>POSIX is also shit.
See initial post >>921189 (OP)
Bitch and moan with no productivity
▶ No.921207>>921209
>>921205
???
>Bitch and moan with no productivity
So like OP?
UNIX, POSIX, and LISP is all a shit.
▶ No.921209>>921210
>>921207
>UNIX, POSIX, and LISP is all a shit.
I am comfy with my POSIX machine.
▶ No.921210>>921211
>>921209
>no permissions
I'm not.
▶ No.921211>>921212 >>921213
>>921210
>What are accounts
▶ No.921212>>921213
▶ No.921213>>921215
▶ No.921214>>921215
anyways i'm going to stop. this thread a shit.
▶ No.921215
▶ No.921220>>921329
>In fact its so simple we are incapable of writing a modern version and can only bitch all day.
It takes a tough man to make a tender chicken.
Cooking chicken badly is easy. (Search for the "Rubber Chicken
Circuit.")
In large family reunions at my grandmother's, preparations for
barbecuing the chicken started months before, with gathering the wood
out in the trees. (Yes, you could just pick up wood on the day of the
barbecue. Wood that had seasoned by lying on the ground. It wouldn't
make good coals, or any coals at all, really. You'd end up with
chicken that was half burnt and half raw.)
So you gathered green wood months/years before, and put it where it
would season properly. You gathered real wood, instead of buying
charcoal briquets, because it flavored the chicken better.
You also chopped the wood into small enough chunks to make coals of
the right size to spread across the bottom of the barbecue and heat all
the food evenly. You also got up before dawn to light the seasoned
wood and partly cover it so that it would make good coals. It would
take hours for the wood to burn down to coals, but anything cooked on
them would not taste of lighter fluid. And you had to watch the wood
so that nothing went wrong.
The coals would be taken from the open pit where they had been
prepared and put into closeable barbecue pits for the actual cooking.
(Moving lit coals is not something you should undertake lightly.)
Barbecuing would start about 10:00am, or earlier, to have the food
ready at 1:00pm, or later. That's how long it took to cook chicken,
etc, to be tender and juicy. (Just cooking the food that long wasn't
enough, remember. The coals had to be right so that the food didn't
overcook or undercook.)
I never properly learned the right way to barbecue chicken so that it's not
rubbery. It's probably too long to relate here anyway.
▶ No.921222>>921329
>>921189 (OP)
>Lisp Machine Hate Thread
Why would I hate technology that I never, in any way, interacted with?
▶ No.921226>>921236 >>921241 >>921260 >>921289 >>921757
I hate how with LISP machines I can hit the help button and click on a gui element to bring up the documentation for that element. Why should the machine try to treat me like someone who has a small brain or something. Since I have a larger than average brain, the system should be as cryptic as possible so that I can flaunt my mental superiority by showing that I know how to use it.
▶ No.921236
>>921226
>LISP keyboards have Facebook thumbs up/down keys built in
▶ No.921241>>921265
>>921226
All high IQ people spend years memorizing as many esoteric commands and flags as possible on Unix. Asking for help is what brainlets do.
▶ No.921245
>>921194
If none of that counts as UNIX then nearly nobody on this board uses UNIX. And if that's the case then why do you spam unix.haters copypastas all day long?
▶ No.921260
>>921226
Yes anon, that's what we need: more GUI.
▶ No.921265
>>921241
because apropos exists and so does man
▶ No.921289>>921291
>>921226
Do you want more people that don't know how to use a computer shitting everything up? Why not shill Apple instead then?
▶ No.921291
>>921289
>Do you want more people that don't know how to use a computer shitting everything up
Are you in favor of having no documentation at all then? Maybe if we gave them documentation they would then know how to use a computer.
>Why not shill Apple instead
Why would I praise a weenix like operating system?
▶ No.921299>>921326 >>922444
lisp machines stands for the faggy lisp those people have lol
Gradually, I realized that the Social Democratic press was conducted predominantly by Jews. But I did not put any
special significance on this circumstance because the conditions were exactly the same in the other papers. Only one
fact was obvious: there was not a single paper with Jews present on it that could be designated as truly national, at least
according to my education and conceptions.
When I mastered myself enough to read these kinds of Marxist press productions, the aversion grew to such proportions
that I now sought to get to know about the manufacturers of these thrown together villainies.
From publishers on down, they were all Jews.
I gathered all the obtainable Social Democratic brochures and sought out the names of their authors: Jews. I noted the
names of almost all the leaders: they were in by far the greatest part also members of the “Chosen People,” whether
acting as members of the parliament or in the secretariats of the trade unions, heads of organizations, or street
agitators. It was always the same uncanny picture. The names Austerlitz, David, Adler, Ellenbogen, etc. will remain
eternally in my memory.
One thing had become clear to me: the leadership of the Party, with whose petty members I had been carrying on a
violent battle for months, lay almost exclusively in the hands of an alien people. For that the Jew was no German I now
knew to my inner satisfaction and with finality.
Only now did I learn to know the seducers of our people completely.
A year of my sojourn in Vienna had sufficed for me to become convinced that no worker could be so stubborn as to be
beyond better knowledge and better explanations. Slowly I mastered their doctrine and employed it as a weapon in the
struggle for my own inner convictions.
Almost always now I was victorious.
The great mass was to be saved but only after the heaviest sacrifices of time and patience.
Never, however, was a Jew to be freed from his viewpoint.
I was still childlike enough at that time to want to make the madness of their doctrine clear to them; I talked my tongue
sore and my throat hoarse and thought that I must succeed in convincing them of the harmfulness of their Marxist
insanity. In fact, I achieved just the opposite. It seemed as though the mounting insight into the nihilistic effect of Social
Democratic theories and their realization only served to strengthen them in their determination.
The more I argued with them the more I learned their dialectic. At first they calculated on the stupidity of their adversary.
Then, when they could find no other way out, they played stupid themselves. ...Whenever you attacked one of the
apostles, your hand closed around slimy matter which immediately separated and slipped through the fingers and the
next moment reconstituted itself. If you struck such an annihilating blow that, observed by the audience, he had no
choice but to agree with you, and thus you thought you had taken one step forward, the next day your amazement would
be great. The Jew knew nothing at all about yesterday and repeated his same old twaddle as though nothing had
happened; if you angrily challenged him on this, he could not remember a thing other than he had demonstrated the
correctness of his assertions on the previous day.
Many times I stood there astonished.
I didn’t know what to be more amazed at: their verbal agility or their art in lying.
Gradually, I began to hate them.
▶ No.921326
>>921299
Based post, fellow MAGApede
▶ No.921329
>>921222
Checked!
And THIS.
How can we hate what we never used?
Many may hate *NIX, but it's there, it's being used. This is infinitely better than not having anything at all.
>>921220
That's a bullshit excuse.
>I won't try to come up with a cure to cancer because it takes a tough man to make a tender chicken.
▶ No.921340>>921341 >>921515
"Eunuchs" and the "(C)rap" programming language were mistakes.
Proof:
https://hooktube.com/watch?v=6VmJVNYfxDc
https://hooktube.com/watch?v=LIGt5OwkoMA
https://hooktube.com/watch?v=gV5obrYaogU
(Unfortunately they're too big to post here as a webm - scrape them with a streamcatcher).
▶ No.921341>>921344 >>921812
>>921340
C is shit but Lisp is slow.
▶ No.921344>>921345 >>921355 >>921406
>>921341
C is a quick hack of a compiled language that has grown exponentially by having adipose crap added to it. It has "ANSI" "standards" to make the language sound legitimate, but it is generally fast. If you want a replacement for C, we already have it: it's Pascal (Ada is good too).
"The designers of Pascal were aware of this problem and "fixed" it by storing a byte count in the first byte of the string. These are called Pascal Strings. They can contain zeros and are not null terminated. Because a byte can only store numbers between 0 and 255, Pascal strings are limited to 255 bytes in length, but because they are not null terminated they occupy the same amount of memory as ASCIZ strings. The great thing about Pascal strings is that you never have to have a loop just to figure out the length of your string. Finding the length of a string in Pascal is one assembly instruction instead of a whole loop. It is monumentally faster.
The old Macintosh operating system used Pascal strings everywhere. Many C programmers on other platforms used Pascal strings for speed. Excel uses Pascal strings internally which is why strings in many places in Excel are limited to 255 bytes, and it's also one reason Excel is blazingly fast.
For a long time, if you wanted to put a Pascal string literal in your C code, you had to write:
char* str = "\006Hello!";
Yes - you had to count the bytes by hand, yourself, and hardcode it into the first byte of your string. Lazy programmers would do this, and have slow programs:
char* str = "*Hello!"; str[0] = strlen(str) - 1;
Notice in this case you've got a string that is null terminated (the compiler did that) as well as a Pascal string. Also, since strcat has to scan through the destination string looking for null terminators each time, again and again, C strings were much slower than they needed to be, didn't scale well, and were obsolete."
▶ No.921345>>921349
>>921344
>Pascal strings are limited to 255 bytes in length
Well that's really great. Have fun with your real language that can't even hold a kilobyte of text.
▶ No.921349>>921352
>>921345
https://web.archive.org/web/20170223042420/https://www.joelonsoftware.com/2001/12/11/back-to-basics/
Shlemiel gets a job as a street painter, painting the dotted lines down the middle of the road. On the first day he takes a can of paint out to the road and finishes 300 yards of the road. “That’s pretty good!” says his boss, “you’re a fast worker!” and pays him a kopeck.
The next day Shlemiel only gets 150 yards done. “Well, that’s not nearly as good as yesterday, but you’re still a fast worker. 150 yards is respectable,” and pays him a kopeck.
The next day Shlemiel paints 30 yards of the road. “Only 30!” shouts his boss. “That’s unacceptable! On the first day you did ten times that much work! What’s going on?”
“I can’t help it,” says Shlemiel. “Every day I get farther and farther away from the paint can!”
▶ No.921352
>>921349
>This justifies sub kilobyte strings.
▶ No.921355
>>921344
I like a macro challenge. I think this should work for any compiler that supports literal concatenation.
#define PASCAL_STR(X,S)\
char __PASCAL_ ## X[sizeof(S)+1] = " " S; __PASCAL_ ## X[0] = sizeof(S)-1; char * X = __PASCAL_##X+1;
PASCAL_STR(test,"This is a test\n")
for(int i = 0; i < test[-1]; i++)
fputc(test[i],stdout);
▶ No.921365>>921370
>>921193
>Linux is not UNIX
Damn right a kernel isn't UNIX. And GNU is literally an acronym for GNU's Not Unix.
▶ No.921370>>921371
>>921365
An operating system is "software that securely abstracts and multiplexes physical resources" (Tanenbaum, 1987). Sure sounds like Linux to me. While we're at it, quoted from The Design of the UNIX Operating System:
The operating system interacts directly with the hardware, providing common services to programs and insulating them from hardware idiosyncrasies. Viewing the system as a set of layers, the operating system is commonly called the system kernel, or just the kernel, emphasizing its isolation from user programs. Because programs are independent of the underlying hardware, it is easy to move them between UNIX systems running on different hardware if the programs do not make assumptions about the underlying hardware.
All GNU programs are userspace and non-essential. Richard is just using this as propaganda and you idiots are falling for it.
▶ No.921371>>921376 >>921437
>>921370
Stallman and his foot fungus was really the worst thing to happen to free software.
▶ No.921376>>921437 >>921621
>>921371
True. Something a lot people don't know: Stallman TOOK James Gosling's code to make GNU Emacs. Same with gcc - he copy-pasted code from the Pastel compiler. When Gosling made his (previously unlicensed) version of Emacs proprietary, they told Stallman to remove what he stole. He went apoplectic. This happened on at least two other occasions, where Stallman took code and was asked to stop later, further enraging him. In fact, he set out to destroy Symbolics by cloning their software. Originally he was stealing it directly out of the source code until they locked him out. Later, he cloned it based on the documentation. This is also what gave him the impetus to make his communist license. Later, when he failed to make a kernel before Linus, he gave up programming and went purely into activism. He was angry at Linus, and set out to make sure everyone knew it should really be called GNU/Linux - and that Linux was only "a single program." Also, you know as well as I do that Stallman would be in court for cloning their software identically and giving it away today. Everything he's done is just a pathetic/petty defense mechanism.
▶ No.921406>>921433
Haskell machines when
>>921344
Jesus fuck man at that point just make a struct.
▶ No.921433
>>921406
>Haskell machines when
We already have the spineless tagless g-machine.
▶ No.921437>>921439 >>921524
>>921376
>>921371
protip: free software works exactly like this: you fork software to make it do new things that the upstream project doesn't do.
▶ No.921439>>921441
>>921437
>protip: free software works exactly like this
They literally were not free software. He was just stealing.
▶ No.921441>>921442 >>921444 >>921446
>>921439
All software was free software by default back then. This is how all computing and all software worked at the beginning of computers. It worked this way until the '70s, when developers decided that software should be controlled and restricted rather than free.
▶ No.921442>>921449
>>921441
>All software was free software by default during that time
Really anon? Where was the document where the developers waived copyright?
▶ No.921444>>921449
>>921441
You should go check your history books. Even when RMS started to reimplement stuff based off reading the documentation, he still copied and pasted some code from Symbolics (albeit, from what I remember, it was only some minor stuff). What happened at the MIT AI lab was one of the tipping points of RMS creating the free software movement.
▶ No.921446>>921449
>>921441
Are you retarded? Code is and always has been copyrighted by default, unless you explicitly license it as free software. Even the FSF will say this on their website.
▶ No.921449>>921450 >>921452
>>921442
>>921446
None. When I buy a lawnmower and lend it to a friend, do I create a proof of ownership certificate and get my friend to sign a lease contract before making it happen? This kind of thing doesn't happen between friends. Likewise at the beginning of computing, the computer engineers and mathematicians wrote software and shared it freely with one another. Sometimes they forked the software to make it do new things that the parent software didn't do, and all this happened freely.
>>921444
Stallman started the free software movement because there was a political attack on his community: the attack that software should be hoarded and users be required to capitulate to the owners of the software. He did this because his community was a community that lived by freely sharing software and this was the norm from the beginning of computers up until that point.
▶ No.921450>>921455
>>921449
>When I buy a lawnmower
You can hate copyright law all you want. It existed back then, it exists now. Software was copyrighted all the same.
▶ No.921451
> the attack that software should be hoarded
Yes, I see, keeping the secrets of what you made to yourself is an attack.
▶ No.921452>>921453 >>921455
>>921449
>because there was a political attack on his community
Guess who did that: it was Symbolics. The MIT AI Lab and Symbolics had a legal agreement stating that the MIT AI Lab could get access to the changes that Symbolics made to the lisp machine, BUT they could not redistribute these changes to anyone else.
▶ No.921453>>927940
>>921452
Sad! MIT made a voluntary agreement with a company and that is somehow an attack on the MIT community!
▶ No.921455>>921458 >>921459
>>921450
Software was copyrighted at that time, but there were no violations of copyright law. The copyright holders gave implicit permission to share and redistribute the software.
>>921452
The Symbolics company was just one part of the problem of proprietary software; they were not the only problem. The political attack on Stallman's community was a much wider one. The attack was on the very idea that users ought to be free in their use of software, and Symbolics weren't the only people advocating the idea of proprietary software to the public at large.
▶ No.921458
>>921455
>The copyright holders gave implicit permission to share and redistribute the software.
Bullshit. Almost all software has always been proprietary. We had proprietary Fortran code bases starting in 1957, proprietary COBOL starting in 1959, and a shit ton of assembly before both of those. The idea that it was all commie freedom land is totally inaccurate.
▶ No.921459>>921467
>>921455
People never gave "implicit permission"; it was a violation of copyright law. In the example that was originally mentioned, the copyright holder also regarded what RMS did as a violation of copyright law. The copyright holders did not give implicit permission to redistribute the software just by distributing their own copyrighted software for free.
▶ No.921467>>921471
>>921459
People did give implicit permission: you simply went to the person who wrote the software and asked for a copy. Your peers would then come to you and ask for a copy. This was the norm in the early days of software. This also happened with Gosling's Emacs - it was freely shared at the beginning. This state of affairs changed when he decided that Emacs would be completely controlled by himself. What Gosling didn't understand is that you cannot take back what was already put into the public; you can only restrict new publications of works.
▶ No.921471>>921478
>>921467
>What Gosling didn't understand is that you cannot take back what was already put into the public.
If I put some software on my website with the source code available that DOES NOT MEAN I am giving up copyright to it.
▶ No.921478>>921480
▶ No.921480>>921484
>>921478
And if I give you a copy that does not mean I have given up copyright
▶ No.921484
▶ No.921515>>921523
>>921340
>streamcatcher
>not youtube-dl
▶ No.921521>>921683 >>921718
Year of the microkernel desktop when?
▶ No.921523>>921525 >>921528
>>921515
There are better streamcatchers than youtube-dl, like quvi, cclive, and rtmpdump.
▶ No.921524
>>921437
In theory. In practice you then become the maintainer and run the risk of your fork breaking at some point with whatever it depends on. This happens so often it's silly, even in so-called pro environments where people dick around with that one package from npm and a few months or even weeks later start getting unexpected behavior that's hard to debug.
▶ No.921525>>921528
>>921523
No they are not; youtube-dl offers the full range of formats to download and integrates with ffmpeg to wrap them into whatever you want.
▶ No.921528>>921543
>>921525
<youtube-dl
>unironically recomending the downloader jewgle themselves took over and crippled the functionality of to (((respect copyright))) and download slowly from jewgle's personal servers by shoahing patchsets to bypass throttles
>>921523
Why not streamlink?
▶ No.921543
>>921528
What? I'm getting max speeds with it.
▶ No.921621>>921650
>>921376
>communist license
GPL is the most capitalistic license that exists. It allows you to hire whoever you want to modify your software, instead of being at the mercy of whoever originally wrote the proprietary software.
▶ No.921650
>>921621
inb4 developer freedom is more important than user freedom
▶ No.921683>>921699
>>921521
It's sad that lisp machines have no meaningful security and are really the opposite of a microkernel.
>instead of being at mercy of whoever originally wrote propriety software
By the original developer I do believe you mean the capital owner.
▶ No.921699>>921711 >>921718
>>921683
>have no meaningful security
They are single user machines. You can trust all the code that is running on it.
>are really the opposite of a microkernel
How so? I could see an analogy being made about how the LISP interpreter is the microkernel and the lisp code for managing the hardware is the equivalent of the userland drivers.
▶ No.921711>>921718
>>921699
>You can trust all the code that is running on it.
Really, it being an insecure piece of shit is a feature, guys!
>are the equivalent of the userland drivers.
Except those userland drivers are not in userland. It's all one unified stack.
▶ No.921718>>921719 >>921744
>>921521
When Google starts pushing more for Fuchsia.
It's sad that it's gonna take botnet meanies to bring us the year of the microkernel desktop though >_<
>>921699
What you're saying is the only reason Lisp machines are trustable security-wise is that they aren't networked?
umm... that's not how it works. You could say that about anything.
>>921711
>Except those userland drivers are not in userland. It's all one unified stack.
so in second pic related, does it look more like left or right?
▶ No.921719
>>921718
>so in second pic related, does it look more like left or right?
It's user mode vs. kernel mode that matters. If you run all your code at the kernel level, it's not a microkernel.
▶ No.921744>>921796
>>921718
>is that they aren't networked
They were networked though. They supported both Chaosnet and ethernet. Chaosnet was pretty much a very early LAN at MIT. Since the LISP machine was written in lisp, you don't see the same type of vulnerabilities that you would get with a system made with C. Of course it was possible to write software that was vulnerable over the network, but then it's kind of your fault for doing so.
>so in second pic related, does it look more like left or right?
It looks more like the right. See pic related.
▶ No.921757>>921762 >>921769 >>923089
>>921226
What does the "rub out" key do?
▶ No.921762>>921769
▶ No.921769
>>921762
correct
>>921757
It performs a backspace (moves the cursor back one and deletes the character). Some later keyboards for LISP machines also included a BS key. For those keyboards, rub out and BS were equivalent.
▶ No.921796>>921803
>>921744
>It looks more like the right. See pic related.
Do you have literally any evidence that it's structured that way?
▶ No.921803>>921824
>>921796
What part of it are you skeptical about? Once you tell me, I'll try to find a source pertaining to the claim.
▶ No.921812>>921815
>>921341
Lisp being slow was true enough in the 70s and earlier. But in the 80s it had dedicated hardware to run on and today SBCL is almost a third the speed of C for trivial programs, which makes it much faster than Python or Java, for example.
▶ No.921815>>923107
>>921812
>SBCL 1/3 the speed
That's a laugh. When I was programming in LISP, my program would take 6 hours to complete. In C, that time was 2 minutes.
▶ No.921824>>921848
>>921803
>What part of it are you skeptical about?
That the LISP operating system is at all comparable to a microkernel.
▶ No.921826
>SBCL is almost a third the speed of C for trivial programs, which makes it much faster than Python
Yes anon I have no doubt that your slow language can be faster than other languages known to be slow.
▶ No.921848>>921853 >>921872
>>921824
>That the LISP operating system is at all comparable to a microkernel.
Just take a look at the picture. The microkernel-based operating system and the LISP-based operating system look similar for a reason. A key similarity is that the arrow connects two things on the same layer, unlike the monolithic kernel operating system, which communicates only vertically.
▶ No.921853>>921860 >>921872
>>921848
>Just take a look at the picture.
I can see the picture, anon. Now why do you think that represents the Lisp OS at all? Having an interpreter somewhere does not magically make that the architecture.
▶ No.921860>>921867 >>921872
>>921853
>Now why do think that represents the lisp OS
The file driver was written in lisp. Device drivers were written in lisp. The code for scheduling the applications was written in lisp. When an application wants to interface with for example the disk it calls a method in the file driver to do so. All of this is running on the same lisp interpreter.
▶ No.921867>>921872 >>921935
>>921860
>The file driver was written in lisp. Device drivers were written in lisp ....
Yes anon, and on my Linux machine the file driver is written in C, the drivers are written in C, the scheduler is written in C. That does not make Linux a microkernel.
If these were all isolated userspace implementations, then that would be a microkernel. That's not how it works on shitty lisp machines. They have no user / kernel space isolation.
▶ No.921874>>921882
>>921872
And? You going to bring up the fact that it uses cons cells?
▶ No.921882>>921885
>>921874
LISP is nil terminated.
▶ No.921885
▶ No.921935>>921936
>>921867
>They have no user / kernel separation.
And therein lies their brilliance.
▶ No.921936>>921938 >>922290
>>921935
>Lack of basic security is a feature
▶ No.921938>>921944 >>922290
>>921936
They were single user machines intended for experts. Security was not
a design consideration. You'll find that this was pretty well the norm at the time,
since even though computers could be networked, they were rarely used
by lay-people and intentionally malicious code was almost unheard of.
▶ No.921944>>921978 >>922290
>>921938
I agree, any program should be able to crash the entire machine. The software will be such amazing quality that this can never happen.
▶ No.921978>>921988 >>922227
>>921944
That typically didn't happen though since everything is written in lisp and the parts of the device drivers that invoked low level instructions were small. Worst case, the debugger popped up giving you options such as aborting or restarting the process.
<but what about if someone wrote C code
Unlike on something like unix, runtime errors bring up the debugger instead of just crashing. This means you can just abort the process that crashed without taking your whole system down.
Fun fact: like TempleOS, Symbolics C comes with a C listener that lets you type C code like in a REPL. This code also does not need to be linked (it doesn't get linked in TempleOS either).
▶ No.921988>>921997
>>921978
> Worst case, the debugger popped up giving you options such as aborting or restarting the process.
Wait tho. I thought the hardware-level debugger on Lisp machines let you actually examine the code, change stuff, and then resume the process?
▶ No.921997
>>921988
Depending on what type of error yes. Usually it's not worth doing though. I typically just look at the backtrace and abort.
▶ No.922227
>>921978
>and the parts of the device drivers that invoked low level instructions were small.
Your answer is literally the green text above. The absolute state of lisp fags.
▶ No.922290>>922299 >>922308
>>921936
>>921938
>>921944
The confusion comes from different ideas of what security means. Lisp users think security means keeping your machine safe from hackers, protecting your data from broken or malicious programs, keeping programs from being corrupted/altered by viruses, and the like. UNIX weenies think security means keeping the user away from parts of the machine, like being unable to replace or delete the MINIX on your chip. They can't understand how an OS that lets the user do anything can prevent programs from doing anything.
Of -course- mail to Unix-Haters fails to blame any
particular responsible individual. -Every- little bug or
problem is actually the responsibility of some individual,
if you could only figure out who. The problem is that
dealing with Unix seems like a grand game of finger pointing
and pass-the-buck (without Harry Truman). Is the real
problem that the programmer didn't check the array bounds?
Or is it ultimately the fault of the designers of C for
designing a language in which programmers must error check
array indices manually?
Eventually, you stop caring about the details that would let
you sort out who was responsible. Recently I was unable to
use FTP on a PC to send a file to my directory on a Unix
machine because on the Unix box I use the `bash' shell.
Heaven help me, I even understand why this restriction
plugged yet another security hole in Unix, and I was able to
remove the restriction as soon as I understood what was
happening, but after enough absurdities like that, your
average user has no energy left to assign blame. What do
all these bad experiences have in common? Unix! Thus, Unix
is the problem.
Hell, Unix even -encourages- this phenomenon. Contrast what
happens on ITS or a Lisp Machine or Multics when a program
error happens, with what happens on Unix. On ITS, Lisp
Machines or Multics your program suspends and you are given
the opportunity to debug the problem and perhaps fix it and
proceed. You are given the chance to assign some blame. On
Unix -- *blam* -- core dumped. -Maybe- you can debug it,
but you certainly can't proceed, so why bother? Ignore that
(huge) core dump file and move on to your next task.
Note that users -like- this behavior. No kidding. Ask half
the graduate students at MIT these days -- they -hate- the
Lisp Machine debugger. All those blasted -choices-. All
those explanations and questions. They don't want to know
who to blame -- all they want to know is that what they were
doing didn't work so they can try something else.
So if I want to -think- about who to blame for my problems,
I'll go use a Lisp Machine (or an ITS or a Multics). But
these days I use Unix, where I don't have to think.
- A Satisfied Customer
▶ No.922299>>922308
>>922290
>Really the lack of security is a feature suck it UNIX!
▶ No.922301
▶ No.922308>>922317 >>922318 >>922325
>>922290
>UNIX weenies think security means keeping the user away from parts of the machine
Yes because "the user" in some cases may not be "the user" as in the owner of the device. It could be a hacker trying to mess with you. Why would you want him to be able to do anything he wants?
>like being unable to replace or delete the MINIX on your chip
That's an Intel and x86 hardware issue, not an issue with the OS software.
>>922299
Yeah I don't like the Lisp meanie anymore. I thought he was funny before but now he's just a baka.
▶ No.922317
>>922308
I'm glad we finally agree on something cuteposter. That's a rare thing.
▶ No.922318>>922321
>>922308
It doesn't have to be one or the other. You could design a computer where only the local user sitting at the console can use the hardware debugger. On a Unix system, the dude who has physical access to the machine can do wtf he wants anyway. There is no security there already.
▶ No.922321>>922337
>>922318
>Physical access
Anon these are multi user networked machines.
▶ No.922323>>922325 >>922863
If Lisp is so good how come my self-contained lisp image hello world is 31 megs in size and I can't make other kinds of binaries?
▶ No.922324
(if that abomination is even a binary that is)
▶ No.922325>>922330 >>922335 >>922490
>>922308
>That's an Intel and x86 hardware issue, not an issue with the OS software.
He was making an analogy. Who's the baka now.
>>922323
It's bundling the whole LISP interpreter / compiler with it.
▶ No.922330>>922442
>>922325
>It's bundling the whole LISP interpreter / compiler with it.
Bloat
▶ No.922333>>922336 >>922341 >>922863
C is the most bloated language there is. Anyone who does not want to use C must package interpreters and JIT compilers with their code to get it to run. Everyone duplicates their work over and over and over again. Code sizes bloat, all because we are stuck with this stupid little autistic language that was never supposed to leave the lab. C is the equivalent of a fat girl taking a myspace angle picture to present herself at fat.
▶ No.922335>>922442
>>922325
>60 year old language
>still can't make a proper executable
LISP is the ultimate LARP language.
▶ No.922336>>922341
▶ No.922337
>>922321
Lisp machines don't exist anymore though, and they were gone before the age of botnet. If you were to design one today, then yeah you'd have to make sure only the local user has access to the hardware debugger, since you're basically giving him negative gonzo ring privileges. But hey, if I buy a computer, that's exactly the kind of access I want to have.
▶ No.922341>>922446
>>922333
>C is the most bloated language there is
ok let's see why
>Anyone who does not want to use C must package interpreters and JIT compilers with their code to get it to run
...that would make C less bloated
>C is the equivalent of a fat girl taking a myspace angle picture to present herself at fat
1. who the hell still uses myspace?
2. so she's angling it to make herself look fatter?
3. how in the world does this relate to C?
>>922336
pls rember that wen u feel scare or frigten
never forget ttimes wen u feeled happy
wen day is dark alway rember happy day
▶ No.922442>>922443 >>922505 >>922584
>>922330
>>922335
Lisp wasn't meant to be used in an environment where each program is its own self-standing binary.
Instead a system listener should load and evaluate code at runtime. This listener can load compiled code as well as plain text representations. On my machine a compiled hello world in Common Lisp comes to 149 bytes, smaller than any C program you can easily create, as far as I know.
You could also claim C is huge and bloated if you include the size of the kernel with every binary, it's not like most programs will run without it.
▶ No.922443
>>922442
Also it should be noted that there are implementations such as ECL that can create relatively small binaries by using C as an intermediary language and stripping unused functionality such as runtime evaluation.
▶ No.922444
▶ No.922446>>922490
>>922341
>how in the world does this relate to C?
It is a metaphor you autistic faggot. C promises you lean binaries but this is a lie because once you look behind the curtain you realize the code base is massive compared to a higher level language. Just like a fat girl taking a myspace angle picture.
▶ No.922449
>>921189 (OP)
they can't be worse than UNIX cancer
▶ No.922475>>922492
>>921872
Lmao, eat shit homo.
▶ No.922490>>922505
>>922446
>C promises you lean binaries but this is a lie because once you look behind the curtain you realize the code base is massive compared to a higher level language.
>binaries
>code base
one of these things is not like the other~
one of these things is not the same!
also, >>922325
>It's bundling the whole LISP interpreter / compiler with it.
wew
>>922473
*notices bulge* OwO what's this?
▶ No.922492
▶ No.922505
▶ No.922584>>922593 >>922754 >>922863
>>922442
That's either flawed or Jewish thinking mixing up the role of the kernel. Think of it this way: I can make a tiny C plugin for another program whether that program is written in C, Python, or even LISP. But can I make a tiny LISP plugin for a program that isn't written in LISP? The kernel is irrelevant.
▶ No.922593>>922598 >>922602 >>922607 >>922863
>>922584
> I can make a tiny C plugin for another program whether that program is written in C, Python, or even LISP
That's because literally every language has a C FFI: they all run on Unix or Windows, where it's nearly an absolute requirement. If everything else had a LISP FFI it would be the same situation.
▶ No.922598>>922605
>>922593
How do computers work?
▶ No.922602>>922605
>>922593
Are you really that clueless or are you merely pretending?
▶ No.922605>>922649
>>922602
>>922598
>Every language has a C FFI because the CPU is a C interpreter (that's what the C means)
Hint: This is wrong.
▶ No.922607>>922612
>>922593
You can write in assembly language and not care about anything besides the hardware itself. And an assembler is a whole lot easier to make than a compiler, so you can even write your own if need be. It also doesn't limit you to low-level constructs. You can use macros, and you can even make something like this: https://sourceforge.net/projects/hlav1/
Anyway that's how you get off the C bandwagon. First thing then is to write another high level language from scratch (Forth might be a good choice, or hell even BASIC).
▶ No.922612
>>922607
Which you can do with a compiled LISP all the same.
▶ No.922649>>922872
>>922605
Legitimately clueless I guess. Hint: the processor can actually run compiled C directly while it cannot run compiled LISP directly.
▶ No.922754
>>922584
You're entirely correct, as soon as I posted that I realised I was stretching too far with the kernel statement.
The crux of the matter is that Lisp is not at home in the Unix environment and forcing it to work like C doesn't make for a fair comparison. Especially considering that the size and complexity of a C program under this same environment is somewhat obfuscated by extensive system support at the most basic levels, including things like dynamic linking and syscalls.
▶ No.922863>>923037
>>922323
That's because UNIX doesn't have any Lisp libraries included. If you were using a non-UNIX OS and C programs had to bundle a UNIX environment and C library, C programs would be huge too, or if every Java program had to bundle a JVM or every Python program had to bundle a Python interpreter. If Lisp was the OS language, every program would be smaller because there would be less duplicate code, but this wouldn't only be true for Lisp. Most languages would need less code than C.
>>922333
This is the truth, and it gets even worse as time goes on. Memory allocation and error handling suck in C. Numbers, arrays, and strings suck in C too, and they can't be fixed because C programs depend on everything not working. They depend on all the array information disappearing besides the address of the first element when it's used as an argument. That's how all the C string functions "work." They depend on being able to truncate a string with '\0' and then put back another character and read the rest of the string. That's how strtok "works." Fixing C's strings and arrays would break the entire C library, POSIX, and all the C programs.
>>922584
>Think of it this way: I can make a tiny C plugin for another program whether that program is written in C, Python, or even LISP. But can I make a tiny LISP plugin for a program that isn't written in LISP? The kernel is irrelevant.
That's only true when the OS is written in C. C on the Lisp machines has special C data types to handle C bullshit like null-terminated strings and array decay. C can't use the standard string and array types, so it needs a Lisp FFI.
>>922593
>If everything else had a LISP FFI it would be the same situation.
UNIX weenies don't believe that even though they know it's true. Stockholm syndrome sufferers are like that.
Once one strips away the cryptology, the issue is control.
UNIX is an operating system that offers the promise of
ultimate user control (ie: no OS engineer's going to take
<feature> away from ME!), which was a good thing in its
infancy, less good now, where the idiom has caused huge
redundancies between software packages. How many B*Tree
packages do we NEED? I think that I learned factoring in
high school; and that certain file idioms are agreed to in
the industry as Good Ideas. So why not support certain
common denominators in the OS?
Just because you CAN do something in user programs does not
mean it's a terribly good idea to enforce it as policy. If
society ran the same way UNIX does, everyone who owned a car
would be forced to refine their own gasoline from barrels of
crude...
With respect to Emacs, may I remind you that the original
version ran on ITS on a PDP-10, whose address space was 1
moby, i.e. 256 thousand 36-bit words (that's a little over 1
Mbyte). It had plenty of space to contain many large files,
and the actual program was a not-too-large fraction of that
space.
There are many reasons why GNU Emacs is as big as it is
while its original ITS counterpart was much smaller:
- C is a horrible language in which to implement such things
as a Lisp interpreter and an interactive program. In
particular any program that wants to be careful not to crash
(and dump core) in the presence of errors has to become
bloated because it has to check everywhere. A reasonable
condition system would reduce the size of the code.
- Unix is a horrible operating system for which to write an
Emacs-like editor because it does not provide adequate
support for anything except trivial "Hello world" programs.
In particular, there is no standard good way (or even any in
many variants) to control your virtual memory sharing
properties.
- Unix presents such a poor interaction environment to users
(the various shells are pitiful) that GNU Emacs has had to
import a lot of the functionality that a minimally adequate
"shell" would provide. Many programmers at TLA never
directly interact with the shell, GNU Emacs IS their shell,
because it is the only adequate choice, and isolates them
from the various Unix (and even OS) variants.
Don't complain about TLA programs vs. Unix. The typical
workstation Unix requires 3 - 6 Mb just for the kernel, and
provides less functionality (at the OS level) than the OSs
of yesteryear. It is not surprising that programs that ran
on adequate amounts of memory under those OSs have to
reimplement some of the functionality that Unix has never
provided.
▶ No.922872>>923026
>>922649
How do compilers work?
▶ No.923004
>>921189 (OP)
>we are incapable of writing a modern version and can only bitch all day.
I believe GuixSD + StumpWM + Emacs is the closest you can get today.
▶ No.923026>>925360
>>922872
>compilers magically add support for functionality that has to be managed by a virtual machine
▶ No.923037
>>922863
>If you were using a non-UNIX OS and C programs had to bundle a UNIX environment and C library, C programs would be huge too
>Most languages would need less code than C.
Wrong: you can still get tiny statically linked C binaries if you use musl as your libc and don't depend on bloated libraries like GTK+ or Qt. Don't confuse the failings of C with the failings of glibc and overengineered GUI libraries, it only makes you look stupid.
▶ No.923089
▶ No.923107>>923112 >>923131 >>925116
>>921815
There is just no way this is right. That's just orders of magnitude off.
▶ No.923112>>923129
>>923107
Are you calling me a liar?
▶ No.923129>>923133
>>923112
Only a /pol/ would lie this hard.
▶ No.923131>>923147
>>923107
The last time we had a programming contest on /tech/ the LISP entry had a run time nearly 1,000 times slower than the C entry. IIRC it was about counting the second most frequent character in a string, if anyone still has that thread saved. Would be a shame if not as it was the last time there was programming on /tech/.
▶ No.923133
>>923129
/pol/ is notoriously always right you know.
▶ No.923147>>925107
>>923131
That doesn't sound legit for a compiled language. Even Perl will do much better.
▶ No.923161>>925100 >>927783
Does Emacs count as (virtual) Lisp machine?
▶ No.925100
>>923161
Of course it does.
▶ No.925107
>>923147
What's 'compiled' in times of JITs, anyway?
▶ No.925116
>>923107
>There is just no way this is right. That's just orders of magnitude off.
Maybe he used bc to do the math.
Subject: bc Follies
The following was brought to my attention by VP.
Try using bc to calculate the value of 163/ln(163)
Here's a Sun4:
titanic:~[13] bc -l
163/l(163)
32.0-75570-60420-20649243140-49
Here's a DecStation:
hindenburg:~[1] bc -l
163/l(163)
32.0-85189980504841560572
▶ No.925229>>925259
▶ No.925259
>>925229
(((l(((((((i))(sp)))))))))
Ftfy
▶ No.925360
>>923026
there would have to be a runtime but that doesn't mean LISP isn't already compiled nor that it couldn't be actual machine-level bootloader code
▶ No.927536>>927567 >>927818 >>927854 >>928149
http://www.loper-os.org/?p=300
>The GUI of my 4MHz Symbolics 3620 lisp machine is more responsive on average than that of my 3GHz office PC.
>The former boots (into a graphical everything-visible-and-modifiable programming environment, the most expressive ever created) faster than the latter boots into its syrupy imponade hell.
>And this is true in spite of an endless parade of engineering atrocities committed in the name of “speed.”
▶ No.927567
>>927536
But does it run far cry?
▶ No.927778
It's time for lisp fans to realize that lisp was never good.
▶ No.927783
>>923161
It counts as a bloated lisp machine
▶ No.927818>>927837 >>927842 >>927854
>>927536
>the loperos idiot doesn't understand that his 3GHz office pc is running multiple decades of garbage code
I guarantee if lisp machines had survived to the present day they'd be just as bloated. Everything was fast in the 90s, that doesn't mean anything.
▶ No.927837>>927838 >>927849 >>927871
>>927818
Ahktually, I bought a shitty Sierra game that ran like dogshit on my Amiga 500 (that had even been upgraded to 3 megs ram and a 40 meg HD), probably because the game was written in C by people who thought it would magically make their shit all portable like some kind of voodoo magic.
This thing was so shitty that you could go take a piss or go down 3 flights of stairs and grab a soft drink before it finished redrawing the screen once you walked off the current screen.
I didn't buy anymore Sierra games after that. Good thing too, because they started using a dumb software filter to downscale 256 color VGA to 32 color Amiga, with the ugliest possible results. In contrast, LucasArts' Monkey Island 2 was almost indistinguishable between the two platforms.
▶ No.927842
>>927818
This. see >>923110
>>we need more than 30 million lines of code just to post on this website.
>We would need that much even if one of those other OSes became the mainstream. Notice that PDP-11 Unix isn't 30 million lines. Notice that from what I can tell, these meanies in the blockquotes are referring to quirks in the SunOS Unix or some other variant that existed at the time of writing (that right away should tell you how outdated the book is, when even the unices mentioned don't exist anymore).
>Notice that there is not a fully feature-complete lisp OS or Multics clone for today's world. You and the quote meanies are literally comparing things from several decades apart with wildly different feature sets.
▶ No.927849>>928073
>>927837
I don't know man, Quake and Doom ran beautifully under C. It's just the fault of incompetent programmers, not the language itself.
▶ No.927854>>927855 >>927858 >>927860 >>927879 >>927908 >>927946
>>927536
>>And this is true in spite of an endless parade of engineering atrocities committed in the name of “speed.”
That's because C sucks. C weenies refuse to check for errors for "speed" but then they use slow null-terminated strings and array decay.
>>927818
>his 3GHz office pc is running multiple decades of garbage code
That's why it sucks. Lisp machine code quality is better. People don't have to reinvent wheels all the time because the code that comes with the OS is good. This is what allows the OS to stay smaller with more functionality.
https://en.wikipedia.org/wiki/Genera_(operating_system)
>Genera is written completely in Lisp (using Zeta Lisp and Symbolics Common Lisp)
>Even all the low-level system code is written in Lisp (device drivers, garbage collection, process scheduler, network stacks, etc.)
>The source code is more than a million lines of Lisp and available for the user to be inspected and changed. The source is relatively compact, compared to the provided functionality, due to extensive reuse
>I guarantee if lisp machines had survived to the present day they'd be just as bloated.
Even a full browser with JavaScript would be smaller on a Lisp machine.
>Everything was fast in the 90s, that doesn't mean anything.
Lisp machines were fast with GC and dynamic typing.
Date: Sun, 23 Dec 90 16:19:19 EST
Subject: Think how much faster and more efficiently this
smaller program is able to trash the filesystem compared
to a bloated AI-weenie one (which would all its time
checking for uninitialised variables, doing bounds
checking, allocating and freeing storage, actually caring
about exceptional cases, signalling errors, etc.)
▶ No.927855>>927903
>>927854
>C weenies refuse to check for errors for "speed"
Proofs? Also that'd be a flaw of programmers, not of the language
>array decay.
doesn't affect performance
>Lisp machine code quality is better.
Proofs?
>People don't have to reinvent wheels all the time because the code that comes with the OS is good.
That's nice, but that's possible in almost any language, including C
>Lisp machines were fast with GC and dynamic typing.
Compared to what?
▶ No.927858>>927873
>>927854
since you seem to be a huge lisp fanboi, can you explain what Richard Stallman is talking about in these parts of his "How I do my computing" article?
>When you start a Lisp system, it enters a read-eval-print loop. Most other languages have nothing comparable to `read', nothing comparable to `eval', and nothing comparable to `print'. What gaping deficiencies!
>I skimmed documentation of Python after people told me it was fundamentally similar to Lisp. My conclusion is that that is not so. `read', `eval', and `print' are all missing in Python.
▶ No.927860>>927903
>>927854
> but then they use slow null-terminated strings
Remember guys you should store 8 extra bytes for every little string you want instead of just one byte!
▶ No.927871
>>927837
Conquests of the Longbow was a fun game. I played it on DOS, though.
▶ No.927873
>>927858
Not him, but it mainly stems from python's eval()'s deficiencies.
You can do read in python with: eval(input())
You can do eval in python with: eval()
You can do print in python with: print(repr())
I would say that doing print in python this way is close to how it works in a LISP. The other two are not because eval can only operate on an expression. A simple test to show how inferior it is, is to convert a python program into a string and then try to eval it. It will just complain about invalid syntax instead of actually evaluating it.
▶ No.927879>>927903 >>927927 >>927928 >>927930
>>927854
>That's because C sucks. C weenies refuse to check for errors for "speed" but then they use slow null-terminated strings and array decay.
Isn't there a post in this very thread saying how Lisp uses nil terminated strings, which simply got a reply of "and?" ( >>921872 >>921885 )? I don't know if this is Lisp implementation dependent, but I'm just saying how hypocritical some people are when their language of choice does something they consider bad in others.
>Lisp machine code quality is better.
Better compared to what? Basically, for ALL of your posts, you need QUANTITATIVE metrics, you're only providing qualitative, which are subjective.
>People don't have to reinvent wheels all the time because the code that comes with the OS is good. This is what allows the OS to stay smaller with more functionality.
Agreed, but such a concept is not exclusive to any OS whatsoever.
>Even a full browser with JavaScript would be smaller on a Lisp machine.
How do you know that? Have you implemented a web browser on a Genera Lisp Machine? It may be, or it may not; that depends on many factors, from the competence of the developers to the problem's complexity.
And now for the old quote:
Lisp machines were fast with GC and dynamic typing.
Date: Sun, 23 Dec 90 16:19:19 EST
Subject: Think how much faster and more efficiently this
smaller program is able to trash the filesystem compared
to a bloated AI-weenie one (which would spend all its time
checking for uninitialised variables, doing bounds
checking, allocating and freeing storage, actually caring
about exceptional cases, signalling errors, etc.)
Imagine how much slower and less efficiently a bloated AI-weenie program would trash the filesystem compared to a smaller, more efficient one. Someone incompetent can fuck up in any language, the same way you can use an axe to chop some wood or to chop your own hand off.
In some languages you have way more freedom to produce something powerful at the cost of being capable of fucking everything up or you can have a hand-holding environment to protect you from yourself at the cost of being bloated. Both are valid approaches, one for implementing something well understood and the other for experimenting.
▶ No.927903>>927905 >>927908 >>927914 >>927946 >>928119
>>927855
>Proofs? Also that'd be a flaw of programmers, not of the language
Some languages check most errors automatically so you don't have to do as much work and some make it impossible, like overflow and array bounds checks in C. If someone made a magical C compiler that did check all these things, it would still suck.
>>array decay.
>doesn't affect performance
Bullshit. Array decay is one of the biggest problems with C. C compilers assume any pointer can be used as an array with pointer arithmetic, which is extremely slow. There are problems besides the slowness, like not being able to pass an array as an argument or treat it as a single object in any other way. Arrays on Lisp machines are objects like structures and classes and functions.
>>Lisp machine code quality is better.
>Proofs?
Lisp machines do more with less code and don't have UNIX bullshit like broken commands, panic, and OOM killers.
>That's nice, but that's possible in almost any language, including C
It's been tried with all those microkernels in the 90s, but they didn't work because they were written in C. UNIX weenies blamed everything but C, which gave microkernels a bad reputation.
>Compared to what?
Compared to hardware that doesn't have tags for GC and dynamic typing. Those computers were all much slower than today's computers, which are fast enough to emulate Windows in your browser. Lisp machines would be even faster today because of how many programs use a GC and dynamic typing compared to the 80s and 90s.
>>927860
>Remember guys you should store 8 extra bytes for every little string you want instead of just one byte!
Anyone who has enough strings that saving 7 bytes per string is worth it wouldn't want to be looking for all those null characters. Storing an 8 byte length more than makes up for all the problems null-terminated strings cause, and it's much faster too.
>>927879
>Isn't in this very thread a post saying how Lisp uses nil terminated strings and it simply got a reply of "and?" ( >>921872 >>921885 ).
Lisp does not use nil terminated strings, it uses arrays for strings, which means strings have all the power of Lisp arrays. It has nil terminated lists.
>How do you know that? Have you implemented a web browser in a Genera Lisp Machine?
No, but JavaScript on a Lisp machine wouldn't have to reinvent wheels since most of it is a bad implementation of what's already available in Lisp.
>Both are valid approaches, one for implementing something well understood and the other for experimenting.
UNIX weenies "experiment" but they very rarely fix anything. Changing =- to -= is the only one I can think of off hand. The UNIX file system would suck even if there were no bugs.
Mr. A is being hurt by a Unix bug, a bug accidentally
introduced into the file system code when Unix was first
written.
You see, a file is internally described by an inode entry,
which contains 13 pointers to the disk blocks that
constitute the file. The first 10 of these pointers behave
normally, but a strange mutation occured in some code left
sitting overnight on a disk in a room where the air
conditioning had failed. As a result of this mutation, the
eleventh pointer came to point, not to a block of the file,
but to a block containing a whole bunch more (like 256)
pointers. Later on, replication of the erroneous code
fragment due to gamma ray damage made things even worse:
Now, the twelfth pointer points to a block of pointers to
blocks of pointers, and the thirteen to a pointer to blocks
of pointers to blocks of pointers to blocks of pointers (or
something like that).
A result of this bug was the cancerous growth in Unix file
sizes. Remember that a cancer is uncontrolled growth, in
the wrong place at the wrong time. In the intended scheme,
no file could be more than 13 blocks long, and most of Unix
was designed around that assumption. The mutations
introduced the potential for growth way beyond this design
parameter. Needless to say, nothing has worked quite right
since. The X window system is probably the worst of example
of metastasized Unix code. I think it's safe to say that no
piece of X would have managed to survive on a Unix system
with only the original 13 block pointers.
And what can you say about a language which is largely used
for processing strings (how much time does Unix spend
comparing characters to zero and adding one to pointers?)
but which has no string data type? Can't decide if an array
is an aggregate or an address? Doesn't know if strings are
constants or variables? Allows them as initializers
sometimes but not others?
(I realize this does not really address the original topic,
but who really cares. "There's nothing wrong with C as it
was originally designed" is a dangerously positive sweeping
statement to be found in a message posted to this list.)
▶ No.927905>>927906
>>927903
>pointer arithmetic
>slow
Do you think people actually believe your bullshit? Why don't you go back to MIT and pout with your loser academic buddies while real work is done by real men.
▶ No.927906>>927909
>>927905
Here we have another barely sapient C/C++ programmer using the word "real" - "real" programmers work on critical systems, so they'd be using Ada/SPARK if they have a brain.
▶ No.927908
>>927854
>>927903
Hey lispjew, if lisp is so great, then why did everyone drop it about as hard as your mother dropped you?
Kikes like you should be rotting in a gas chamber.
▶ No.927909>>927911 >>927912
>>927906
We sure know you don't program in Ada/SPARK, because how else would you have time to shitpost all day? No one programs in Ada because it's unnecessary.
▶ No.927911
>>927909
No one programs in Ada because everything fucking fails in it.
Pic related is an Antares launch vehicle running Ada, and crashing.
Thank God they didn't use Lisp, they might have destroyed a whole city if they did.
▶ No.927912>>927915
>>927909
>no one programs in ada because it's unnecessary.
Except for the fact it was mandated for a long time.
>>927506
▶ No.927914
>>927903
>Lisp does not use nil terminated strings, it uses arrays for strings, which means strings have all the power of Lisp arrays. It has nil terminated lists.
So that's a nil list at the end of the list, not a null terminator. My mistake then.
>No, but JavaScript on a Lisp machine wouldn't have to reinvent wheels since most of it is a bad implementation of what's already available in Lisp.
What does reinventing the wheel mean? This term is used a lot in programming but never actually defined. Do you mean features out of the box, like how Python is capable of handling dictionaries and lists without third party libraries? And again, you just use qualitative comparison like "bad implementation"; how is it bad? Does it take 10 seconds to accomplish a task that takes 3 in another implementation? Does it crash with numbers larger than 4.295e+09? Give us something factual.
>UNIX weenies "experiment" but they very rarely fix anything. Changing =- to -= is the only one I can think of off hand.
Calling people "UNIX weenies" is just name calling, not an argument, like putting quotes around something to signal irony.
>The UNIX file system would suck even if there were no bugs.
How is it bad? What are the alternatives? I know that BeOS (Haiku) uses a database approach, have you tried it? Can you compare the two (Haiku's or anything else)?
▶ No.927915
>>927912
What does it mean when the law says
>where cost effective
Sounds like Ada isn't used if it costs too much. Uh oh.
▶ No.927917
The greatest national threat to security since x86.
#STOPADA2018
▶ No.927918>>927920
It's almost as if the language isn't the problem but the programmer. Geez, I wonder why that is?
▶ No.927920>>927921
>>927918
In assembly an idiot will do a fucked up memory access every few lines with dangerous results (remote code execution etc). In javascript they couldn't if they tried (they'd have to escape the sandbox). Languages sure as hell matter. I hate web devs as much as everyone else but let's not pretend everything is equal when it comes to errors.
▶ No.927921>>927936
>>927920
You make a lot of assumptions.
▶ No.927926>>927929 >>927942
It must always be the fault of Ada!
http://www.adapower.com/index.php?Command=Class&ClassID=FAQ&CID=328
Stop drooling, you barely ambulatory (C)hildren. There's a reason why pretty much all of NATO adopted it too - it's the only language that satisfies the Steelman language requirements. Imagine wanting to program aircraft and weapon systems with a footgun (and thinking we'd be better off for it). C programmers have zero sentience.
▶ No.927927>>927928 >>927930
>>927879
>Lisp uses nil terminated strings
It doesn't though. Most implementations represent them as some form of array, and arrays store their length. Multiple characters are packed into each word, so if your string didn't precisely fill a whole number of words there would be some empty space. There may have been zeros there, but I'd hardly call those terminating characters.
Zetalisp/Symbolics
3600 Zetalisp originated as Lisp Machine Lisp, which was developed from MacLISP,
but is a much larger language. Many parts of the Common Lisp design were first
tried in Zetalisp. Moon [109] has written an extensive overview of data structures in Zetalisp.
The Symbolics 3600 design (originally derived from the MIT Lisp Machine [18])
supports tags in hardware, which means that many primitive operations exhibit
some concurrency and can be quite fast. It is basically a 36-bit machine with a
28-bit address space (of 36-bit words, not bytes). A word can be broken down in
several different ways. An object reference has a 2-bit cdr code which implements
cdr-coding (see section 2.1.20), a 2-bit major tag, and possibly a 4-bit minor tag.
Small integers and IEEE single-precision floats use only the major tag, thus they
are each 32 bits, while pointers also have a minor tag, leaving 28 bits. Figure 2.11
illustrates some of these combinations.
More complex objects such as arrays also have a header word that can have
several different formats. For instance, the array header word consists of another
6-bit tag and 28 bits of type and length information, followed by the array data.
Specialized arrays such as strings are packed. Function objects are quite complex.
The header word has a tag and a size, followed by an additional three words of
various info. Then there is a table of constants and external references (essentially
a local symbol table), followed by the instructions, which are tagged as a distinct
type of data.
The GC method has been publicly described [110]. It is based on a notion of
ephemeral and static objects, and attempts to minimize VM thrashing.
[18] Bawden, A., Greenblatt, R., Holloway, J., Knight, T., Moon, D., and Weinreb, D. Lisp machine progress report. AI Memo 444, MIT AI Lab, August 1977.
[109] Moon, D. Symbolics architecture. IEEE Computer 20, 1 (Jan. 1987), 43-52.
[110] Moon, D. A. Garbage collection in a large Lisp system. In Proc. 1984 ACM Symposium on LISP and Functional Programming (Austin, TX, Aug. 1984), ACM SIGPLAN/SIGACT/SIGART, pp. 235-246.
▶ No.927928>>927930
>>927879
>>927927
pt. 2
Array Formats:
Every array has an ARRAY HEADER word. The pointer field is divided into fields
which hold various information about the array. The array may optionally have an
ARRAY LEADER which is formed of a number of words BEFORE the array header. If
there is a leader, then the Q immediately before the header word is a FIXNUM Q
holding the number of array leader words. Then before that are the array leader
words, which may have any datatype (since any object can be stored there), and
before that is a word of datatype ARRAY LEADER which is a self-relative pointer
to the ARRAY HEADER. The presence of the ARRAY-LEADER Q is necessary for such
routines as the garbage collector which scan through memory in the usual
direction. The presence or absence of the leader is determined by a bit in the
array header.
If the array has more than one dimension, then there is a block of
<number of dims>-1 Q's immediately after the array header holding the size
of each dimension. Note that only <number of dims>-1 are needed because
one can compute the total index length from the array header itself.
If the index length of the array (number of data elements) is too
big to fit in the field allocated for it in the array header Q, an extra
Q is inserted between the header and the dimensions which has data type
FIXNUM and contains the index length. A bit in the header Q is on
to indicate the presence of this extra Q.
Now all that is left are the actual storage cells of the array. An array
may optionally be "displaced," according to a bit in the header. If the array
is not displaced, then the data Q's follow thereafter (in a 1-dimensional non-
displaced array, the data follows immediately after the header). However, if
the array is displaced, then the word which would be the first data Q is actually
a pointer to the data cells. Thus, a displaced array can be used to point at the
beginning of an area (this is done often, in fact). Following the displacement
word, in what would have been the SECOND data cell, is the length of the data in Q's
for the array. This is used instead of the normal index length, since that will
be 2 (or 3) to indicate the length of the pointer. This SECOND data cell is used
as the length even in the case of indirect arrays, unless that would cause a
reference off the end of the array indirected to.
Further hair is provided as follows: if the array is displaced and the word
which would be the pointer has datatype ARRAY POINTER, then it points to another
array header! This is called an INDIRECT array. If that isn't hairy enough, get
this: If the USER CONTROL bit of the indirect array pointer is set, then
this array has an INDEX-OFFSET from the array pointed to. This means that whenever
this array is referenced, it is as if that array were referenced, but
with an index <n> higher. The <n> is the offset, and is stored as a FIXNUM in
what would be the THIRD data cell if this array were non-displaced. The offset
is expressed in elements (not Q's), and is always 1 dimensional (it is added after
all the dimensions have been multiplied out). (Note that the length of the array
being pointed at is also stored, in that array's header, etc.) When a reference
is made to an INDIRECT array, an error check is performed to make sure the
reference is not out of bounds.
The format of the pointer field of the header word is as follows:
-----------------------------------------------------
| 5 |1|1|1|1| 3 |1|1| 10. |
-----------------------------------------------------
| | | | | | | | |
ARRAY TYPE--| | | | | | | | |
HIGH SPARE BIT-| | | | | | | |
HAS LEADER-------| | | | | | |
DISPLACED----------| | | | | |
FLAG BIT-------------| | | | |
NUMBER OF DIMENSIONS-----| | | |
LONG LENGTH------------------| | |
NAMED-STRUCTURE FLAG-----------| |
INDEX LENGTH OF ARRAY-------------------------|
▶ No.927929>>927931
>>927926
>Stop drooling, you barely ambulatory (C)hildren.
Shit dude, you just burned me good. I better go get my burn cream.
▶ No.927930
>>927879
>>927927
>>927928
pt. 3 (This time I'm inserting some extra words so that 8chan doesn't squish my code tags into a single column so that you can easily read it without having to scroll back and forth from the start to the end)
The FLAG BIT, in the case of a string array, is 1 to indicate that
this string may be relied upon to contain only ordinary printing
characters. Its use with other array types is not yet defined.
(THIS IS AN EFFICIENCY HACK, WHICH IS CURRENTLY IGNORED).
The %%ARRAY-NAMED-STRUCTURE-FLAG is 1 to indicate that this
array is an instance of a NAMED-STRUCTURE (probably defined with DEFSTRUCT with
the NAMED-STRUCTURE option, etc). The structure name is found in array leader
element 1 if %%ARRAY-LEADER-BIT is set, otherwise array element 0.
Named structures may be viewed as implementing a sort of user defined
data typing facility. Certain system primitives, if handed a NAMED-STRUCTURE,
will obtain the name and obtain from that a function to apply, ACTOR like,
to perform the primitive. One can see that there is some potential here.
The only one of these fields which has not yet been mentioned is the ARRAY TYPE field. The options are:
NUMBER TYPE USE
====== ==== ===
0 ART-ERROR This is always an error, to prevent randomness.
1 ART-1B Each element is one bit, and 32 are stored per word.
2 ART-2B Analogous.
3 ART-4B Analogous.
4 ART-8B Analogous.
5 ART-16B Analogous.
6 ART-32B Analogous. Since FIXNUM datatype is supplied
24 bits of data are retrievable.
7 ART-Q Each element is a Q, that is, it has a datatype and
a pointer field.
8 ART-Q-LIST Same as Q, but the elements also form a list.
By using GET-LIST-POINTER-INTO-ARRAY and G-L-P
you can get pointers into the beginning or even
the middle of such an array.
9 ART-STRING This is stored the same way as an 8 BIT array.
10. ART-STACK-GROUP-HEAD (see STACK GROUP FORMATS)
11. ART-PDL-SEGMENT (see STACK GROUP FORMATS)
12. ART-TVB TV Buffer
13. ART-TVB-PIXELS TV Buffer in pixel mode.
Note: the elements of arrays (those which are smaller than 32 bits) are
stored right-to-left (i.e., the first element of a 4 BIT ARRAY would be
stored right-justified, including the least significant bit).
However, TV buffer arrays (ART-TVB) are DIFFERENT, for hardware reasons.
Only the bottom 16 bits of each word are used and the bits are stored
left to right.
TV-BUFFER-PIXEL arrays have a plane mask in array leader element 0. 1 bits
in the plane mask correspond to active tv-buffer planes, 0 bits to inactive
planes. Each time an active plane is encountered on a store, the low order
bit is stored in that plane (a la ART-TVB), and the remaining bits shifted
right one.
STRING arrays are stored the same way as 8-BIT arrays, and STACK-GROUP-HEAD
and STACK-SEGMENT arrays are stored the same as Q-ARRAYs are. The reason for
supporting both array types is so that programs can easily tell apart those
8-bit arrays used for strings, etc. Strings, although like 8-BIT arrays at low levels,
are treated differently at higher levels, such as by READ, EVAL, and PRINT.
▶ No.927931>>927932
>>927929
Do tell me why NATO used Ada for their projects and not your "real man's" language.
▶ No.927932
▶ No.927940>>927955 >>927988
>>921453
MIT is publicly funded, all their code should be copyright free. Don't even get me started on the fact that Unix should be too, since AT&T agreed to make their tech public in return for their government-protected monopoly.
▶ No.927942>>927944
>>927926
>There's a reason why pretty much all of NATO adopted it too
With disastrous results.
▶ No.927944>>927956 >>927972
>>927942
Why is it Ada's fault when there's an engine malfunction? Is Ada supposed to magically fix the engine or something?
▶ No.927946>>927966
>>927854
>lisp machines were fast with GC and dynamic typing
LuaJIT my dude
>>927903
>no oom killers
nice so they just crash? much better option I agree
>entire filesystem thing
ok so you admit you are a retard? also X isn't unix nice try
Define PCLSRing for me right now without changing tabs or never post again
▶ No.927955
>>927940
>MIT is publicly funded, all their code should be copyright free.
What lol? They did not write the symbolics code, they paid symbolics to use it. InB4 universities paying for anything proprietary should be illegal.
▶ No.927956
>>927944
Scheme R6RS has engines that work pretty well.
▶ No.927966>>928119
>>927946
>nice so they just crash?
When your memory is getting really full it will tell you that you are running out of memory and that you should turn on / invoke the GC. If you ignore this, it will warn you again at the last possible moment. If you do run out of memory you won't crash; the debugger will come up, allowing you to recover. If the debugger itself crashes from running out of memory, another debugger is started for that error. After 25 times it pops up the emergency debugger, which does not allocate any memory at all.
Anyways, from the debugger you can save the system or just reboot the machine.
▶ No.927972>>927979
>>927944
It crashed due to control surface failure.
▶ No.927979
>>927972
https://archive.fo/wU9oi
Says it was from:
>The ill-fated Draken is understood to have been written off in a crash caused by a catastrophic failure of its single Volvo Flygmotor RM 6C afterburning turbojet engine.
▶ No.927988
>>927940
>MIT is publicly funded, all their code should be copyright free.
MIT is a private university.
▶ No.928073>>928076
>>927849
That's not a very good example, since tons of people had to buy entirely new computers to play those games.
▶ No.928076>>928089
>>928073
>doom
>entirely new computer
Wrong. People were playing that game just fine on a 386. Quake was engineered to use 3D-accelerated video cards, one of the first games to do so, so of course people had to get new hardware to run software that was designed for new hardware.
▶ No.928089>>928091 >>928117
>>928076
Dude, I played Doom on a 386DX/33 and it was fucking sloooooow even in the "graphic detail" option set to low, and somewhat reduced screen size. It was someone else's computer, and I was shocked at how much worse it played than my 486DX/33, which itself had problems with slowdown in Doom II.
And Quake wasn't designed for 3D cards, it was designed to run well on regular VGA cards, which is how most people ran it in 1996. You needed a Pentium, and that's it. A 486DX4/120 could sorta play it (badly, like a 386 could play Doom) but it wasn't fun that way. I've seen it in person, just like the 386 and Doom. Used to play lots of games over modem and LAN those days with other peeps.
▶ No.928091>>928095 >>928131
>>928089
>Being this old and still posting on an image board
▶ No.928095>>928101
>>928091
> Hurr durr, go post on facebook since you're old.
Next thread:
> Hurr durr all those old farts only post on facebook.
▶ No.928101>>928111
>>928095
What's your point? Both those things are true.
▶ No.928117>>928153
>>928089
>Throughout the company's tenure, Rendition managed to get a leg up on the competition by working with John Carmack to develop the first 3D-accelerated version of Quake
> A year prior, Carmack stated "Verite will be the premier platform for Quake."
>https://www.pcgamer.com/from-voodoo-to-geforce-the-awesome-history-of-3d-graphics/
Carmack was always big on adopting 3d accelerated cards and open platforms.
I played doom plenty on a 386 on the school LAN. It was fine. Shit, doom was fine on 68k macs.
▶ No.928119>>928129
>>927903
>>>array decay.
>>doesn't affect performance
>Bullshit. Array decay is one of the biggest problems with C. C compilers assume any pointer can be used as an array with pointer arithmetic, which is extremely slow.
Show proofs for pointer arithmetic being slow.
>There are problems besides the slowness, like not being able to pass an array as an argument or treat it as a single object in any other way.
But you can pass arrays as arguments.
>Lisp machines do more with less code
Proofs?
>and don't have UNIX bullshit like broken commands
Such as? Is this something that can't be fixed?
>panic
Haven't seen one in years.
>and OOM killers.
Every Unix handles OOM differently, so this is not a valid argument. Some have OOM killers, some don't.
>It's been tried with all those microkernels in the 90s, but they didn't work because they were written in C.
Show proofs.
>Compared to hardware that doesn't have tags for GC and dynamic typing. Those computers were all much slower than today's computers, which are fast enough to emulate Windows in your browser. Lisp machines would be even faster today because of how many programs use a GC and dynamic typing compared to the 80s and 90s.
Come on, show benchmarks.
>>927966
>When your memory is getting really full it will tell you that you that you are running out of memory
Just like malloc.
>Debugger to recover OOM processes
Can be implemented on Unix similarly. I wonder why this concept wasn't successful.
▶ No.928129>>928130
>>928119
>Just like malloc.
No, malloc does not warn the user; it only returns null when it can't allocate something. LISPMs, on the other hand, would bring up a screen asking the user y/n whether or not to run the GC.
>Can be implemented on Unix similarly
Just like how they could implement having the debugger come up instead of just having programs crash.
The nice thing about ddt/hactrn was that if your program
crashed, you were left sitting in the debugger and could
look at it. None of this "I'll run it again under the
debugger and see if it crashes again. Oops, I guess I
need the debug switch on the compiler. Oops, that
compiler doesn't support gdb. Oh well, ship it..."
Hell, Unix even -encourages- this phenomenon. Contrast
what happens on ITS or a Lisp Machine or Multics when
an program error happens, with what happens on Unix.
On ITS, Lisp Machines or Multics your program suspends
and you are given the opportunity to debug the problem
and perhaps fix it and proceed. You are given the
chance to assign some blame. On Unix -- *blam* -- core
dumped. -Maybe- you can debug it, but you certainly
can't proceed, so why bother? Ignore that (huge) core
dump file and move on to your next task.
Note that users -like- this behavior. No kidding. Ask
half the graduate students at MIT these days -- they
-hate- the Lisp Machine debugger. All those blasted
-choices-. All those explainations and questions.
They don't want to know who to blame -- all they want
to know is that it what they were doing didn't work so
they can try something else.
▶ No.928130>>928133 >>928135
>>928129
>LISPMs on the other hand would bring up a screen to the user himself which asked a y/n for whether or not you wanted to run the GC.
Hahaha oh wow. Do you have a screenshot to share?
>Just like how they could implement having the debugger come up
What does 'come up' even mean when you run an OS that doesn't require any user interface?
>instead of just having programs crash.
You mean programs that don't check malloc's return value?
▶ No.928131>>928132
>>928091
I'd actually prefer more oldfags than the crop of i3 ricers we get here. Also, you post on an image board now, do you think you can ever leave? You're here for life.
▶ No.928132
▶ No.928133>>928139 >>928140
>>928130
>You mean programs that don't check malloc's return value?
A program shouldn't check the return value. In modern code, malloc should be wrapped with something that calls abort() if it returns NULL. It's not possible to safely handle an out-of-memory condition on most modern operating systems at the user level: it's likely parts of your code will have been paged out, and trying to run your cleanup code will cause a page fault the OS can't service, crashing your code anyway. This isn't specific to C or C++; when out of memory, code should be designed to crash immediately and cleanly.
▶ No.928135>>928139
>>928130
>Do you have a screenshot to share?
No. Though I could link to a manual where it mentions it. I could also show a screenshot of part of OpenGenera's source (the code is proprietary btw).
>'come up' even mean when you run an OS that doesn't require any user interface?
I'm not quite sure what you mean. It opens up a window with it.
>You mean programs that don't check malloc's return value?
or segmentation fault or dividing by zero...
▶ No.928139>>928142 >>928374
>>928133
>It's not possible to safely handle an out of memory condition on most modern operating systems at the user level as it's likely parts of your code will have been paged out
Either don't page out executable pages
>and trying to run your clean up code will cause a page fault and the OS will crash your code due to a page not being available.
...or reserve some memory for handling this condition.
>>928135
>I'm not quite sure what you mean. It opens up a window with it.
Some of my devices running Unix don't have a screen.
>or segmentation fault
Segmentation fault is grounds for killing. Don't write buggy software.
>or dividing by zero...
Check your operands before dividing. What's supposed to happen, a 'window' popping up?
▶ No.928140>>928144 >>928145
>>928133
This is some really weak advice. malloc will more likely return NULL because it can't allocate a contiguous block of the requested size, not because the computer is OOM. In these cases, the user can still free heap space and hope that one of those blocks will open up room for the larger one to fit, or the memory can be dumped to HDD and then freed then alloc'd, or the user can request less memory. It's rare that a computer, especially with multiple GB of RAM, will run out of memory.
▶ No.928142>>928144
>>928139
>Either don't page out executable pages
You're 30 years too late to make that argument. Your code runs on what we have today.
>or reserve some memory for handling this condition
Again, it's too late. But let's say OSes did this. It's infeasible as programs often touch a lot of memory doing a controlled shutdown. Many programmers deinitialize everything rather than call exit(). Even on a desktop where "heavy ram user" means something like a video game, that might involve touching hundreds of thousands of objects and data structures to shut down. There's no amount of reserved ram that would fit all cases.
▶ No.928144>>928148
>>928140
>malloc will more likely return NULL because it can't allocate a contiguous block of the requested size
A block of allocated memory doesn't have to be contiguous in physical memory, at least not on x86.
>It's rare that a computer, especially with multiple GB of RAM, will run out of memory.
In what world are you living?
>>928142
>You're 30 years too late to make that argument. Your code runs on what we have today.
It's easy to change paging algorithms.
>Again, it's too late.
no
> It's infeasible as programs often touch a lot of memory doing a controlled shutdown.
I was talking about handling in the kernel, as you were talking about the OS crashing due to not being able to handle a OOM condition, which is the kernel's job.
>It's infeasible as programs often touch a lot of memory doing a controlled shutdown. Many programmers deinitialize everything rather than call exit(). Even on a desktop where "heavy ram user" means something like a video game, that might involve touching hundreds of thousands of objects and data structures to shut down. There's no amount of reserved ram that would fit all cases.
free() doesn't have to allocate any memory. The size of the application doesn't matter. But whether free() allocates memory depends on the implementation, of course.
▶ No.928145>>928147 >>928150
>>928140
>malloc will more likely return NULL because it can't allocated a contiguous block of the requested size, not because the computer is OOM.
It fails because it is OOM and growing the heap failed. Even with rlimits preventing the entire system going OOM, your application is still OOM and paging will still count against your limit.
>In these cases, the user can still free heap space
Freeing a block doesn't guarantee the memory will be given back to the OS. And in an OOM condition, you're usually competing with other users of memory that could take what you free.
>or the memory can be dumped to HDD and then freed then alloc'd
wat
>or the user can request less memory
Avoiding the issue.
Again, there is no reliable way to handle a malloc failure on a modern OS. You can write code to handle them and hope it works 'most' of the time, and spend forever writing a million test cases to test each instance to make sure all that code leaves a clean state (you of course do this I'm sure), or you can recognize that you're pursuing an 80% solution that requires massive amounts of programmer time and try something else. "Something else" is designing code to crash and crashing on malloc failure.
▶ No.928147>>928155
>>928145
>Freeing a block doesn't guarantee the memory will be given back to the OS.
Yes it does.
>Again, there is no reliable way to handle a malloc failure on a modern OS.
Yes there is. Repeating that lie won't make it true.
▶ No.928148>>928154
>>928144
>It's easy to change paging algorithms
How easy is it to change what people are going to run your code on?
>I was talking about handling in the kernel
It's not a problem at the kernel level. I thought I was pretty specific about this being about user level code. "It's not possible to safely handle an out of memory condition on most modern operating systems at the user level"
>free() doesn't have to allocate any memory
You're not understanding the issue. Just walking a datastructure to get pointers of objects to free is going to page things in. The accounting data on objects will be paged in. A controlled shutdown in large applications touches a lot of pages.
▶ No.928149
>>927536
>If you think that a static-language-kernel abomination like Linux (or any other UNIX clone) could be turned into a civilized programming environment, you are gravely mistaken.
>And if only the bloat and waste consisted of actual features that someone truly wants to use.
>As things now stand, countless CPU cycles are burned on total losses that no human may even be consciously aware of, such as the impedance mismatch between an idiot kernel’s slab allocator and the garbage collector in the runtime of your favorite dynamic language.
What a guy.
▶ No.928150>>928154 >>928158
>>928145
You are one obstinate ass. malloc returns NULL when it cannot allocate the requested size as a contiguous block. This doesn't mean the machine is out of memory. It could mean there are a bunch of tiny blocks everywhere that have fragmented the memory and won't allow larger blocks to be allocated. Don't talk like you know anything or like anyone believes your bullshit.
▶ No.928153>>928157 >>928163
>>928117
A one-line soundbite doesn't remove the fact that Quake was designed for plain VGA/SVGA cards, since that's what everyone was using. It wasn't until Quake II that they designed the game from the start for 3D cards, and then threw a bone to the peeps who hadn't yet "upgraded" to those shit technologies by leaving the lowest resolution available for play in software mode. By that time (1998) enough people had been brainwashed into buying fancy, expensive GPU shits that they could just say fuck it and design the game for it. Now some decades later everyone on /v/ is still going on about upgrading their GPU shits so they can play more shitty ass FPS games that don't even have as good gameplay as Doom. A retard and his money are soon parted, as the industry found out very quickly.
As far as Doom playing fine on a 386, you're a lying motherfucker or you don't care about framerate at all. I'm not the only one who knows this; there have been plenty of discussions on doomworld about the minimum requirements for a smooth framerate. One dude said his 486 wasn't even enough for some maps. Mine had a VESA Local Bus video card so that helped a bit.
▶ No.928154
>>928148
>How easy is it to change what people are going to run your code on?
User space apps should not know nor care about kernel details such as paging.
>It's not a problem at the kernel level. I thought I was pretty specific about this being about user level code.
That was a misunderstanding then. But see below.
>>free() doesn't have to allocate any memory
>Just walking a datastructure to get pointers of objects to free is going to page things in.
And other things out.
>The accounting data on objects will be paged in. A controlled shutdown in large applications touches a lot of pages.
I never said it would be fast. But if it's entirely the application's job, then the OS is not to blame for the clean up code being inefficient.
>>928150
>contiguous
This is ONLY an issue if there's no more contiguous virtual memory available. But 48 bit of virtual address space per process should be enough for any present-day application, no matter what they're doing in memory. And if it doesn't at some point in the future, virtual address space can be extended to up to 64 bit on x86.
▶ No.928155>>928156
>>928147
>Yes it does.
Learn how malloc works you stupid faggot.
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h> /* mallinfo() is glibc-specific */
#define STUPID_FAGGOT 1
/* Print how many bytes the allocator currently holds (its "arena"). */
void mallocstats(char const *message) {
    struct mallinfo info = mallinfo();
    printf("%s: arena: %d\n", message, info.arena);
}
int main(void) {
    void *blocks[1 << 16];
    mallocstats("before");
    for (size_t i = 0; i < sizeof(blocks) / sizeof(blocks[0]); ++i) {
        blocks[i] = malloc(10000);
        if (!blocks[i]) abort();
    }
#ifdef STUPID_FAGGOT
    /* One tiny allocation at the top of the heap pins the break:
       freeing everything below it can't shrink the arena. */
    void *motherfuckingfragmentation = malloc(1);
    if (!motherfuckingfragmentation) abort();
#endif
    mallocstats("allocated");
    for (int i = (int)(sizeof(blocks) / sizeof(blocks[0])) - 1; i >= 0; --i) {
        free(blocks[i]);
    }
    mallocstats("freed");
    return 0;
}
>Repeating that lie won't make it true.
Kill yourself twice.
▶ No.928156>>928161
>>928155
test.c:7:19: error: variable has incomplete type 'struct mallinfo'
struct mallinfo info;
test.c:7:10: note: forward declaration of 'struct mallinfo'
struct mallinfo info;
test.c:8:10: warning: implicit declaration of function 'mallinfo' is invalid in C99 [-Wimplicit-function-declaration]
info = mallinfo();
% man mallinfo
No manual entry for mallinfo
???
▶ No.928157
>>928153
No, I don't care much about framerates or other crap. I just play the game. I played Doom just fine on a 386.
▶ No.928158>>928162 >>928171
>>928150
Think, anon. Pause the autism about irrelevant details. Can I implement a 100% reliable clean up for a malloc failure in user level code? Or can I at best hope to implement something that might work most of the time for most programs? Think hard.
Now understand that what I'm telling you guys to do is to change how you approach the problem. Rather than choose a solution that cannot ever fully solve the problem and makes for a million untested cases in your code, choose a different solution that can. Modern code does that - it's built to crash clean and have something else cover for the failure. That ranges from simple things like automatic restarts of services by something like systemdicks, failovers like with VRRP, or language-native handling like in erlang. It's a much better solution, and is standard practice today.
▶ No.928161>>928162
>>928156
If you're going to use a meme library or meme OS you should know enough to replace it with its equivalent. Otherwise it's just embarrassing that you've gone with a special snowflake set of software that you don't know how to use.
▶ No.928162>>928165
>>928158
>Can I implement a 100% reliable clean up for a malloc failure in user level code?
YES.
>Rather than choose a solution that cannot ever fully solve the problem and makes for a million untested cases in your code,
Write tests, then.
>>928161
So you've been talking about a very specific OS and libc the whole time? Probably one that's not even a real Unix?
▶ No.928163>>928164
>>928153
>As far as Doom playing fine on a 386, you're a lying motherfucker or you don't care about framerate at all.
I ran it on a 386/33 back in the day and it was fine.
>One dude said his 486 wasn't even enough for some maps.
Even the cheap 486s were way more than enough for Doom. My DX2 was good enough for Quake.
▶ No.928164>>928167 >>928168
>>928163
Wat. Even a 486DX4/120 was slow as fuck when running Quake. The turtle icon showed up all the time. If you played any DM against me you would have gotten your ass whupped good! I regularly played against people on ISDN lines vs. my little 28.8K modem, and often kicked their asses. If I was playing on a level field, I'd have murdered everyone incessantly.
▶ No.928165>>928170
>>928162
>So you've been talking about a very specific OS and libc the whole time?
Stop embarrassing yourself. Learn how the memory allocator works on whatever it is you're using. It's going to be the same issue on all of them. The only difference will be in what it takes to create a worst-case.
▶ No.928167
>>928164
Also, anyone here can check what I'm saying is valid, simply by running DOS Quake in PCem (not dosbox, because it's not cycle-exact and doesn't reflect any particular hardware configuration).
▶ No.928168>>928169
>>928164
Worked For Me(tm) in the beta. For release, I had a 66mhz Pentium and did my gaming over ethernet at a very well connected college.
▶ No.928169>>928173
>>928168
Let me put it this way: neither the qtest nor any later Quake releases worked for shit on my 486DX33 with a 32-bit video card. We're talking slideshow measured in FRAMES PER SECOND. So how the fuck is a DX2 going to run Quake, when it's only enough for Duke3D? This is ridiculous.
▶ No.928170>>928178
>>928165
Fine, I ran your code in my linux vm, played a bit with the numbers and the results (allocated, freed) are always equal. Now what? Maybe your libc is shit, not Unix or C?
▶ No.928171>>928178
>>928158
All I see in the malloc docs are
>ERRORS
> calloc(), malloc(), and realloc() can fail with the following error:
>
> ENOMEM Out of memory. Possibly, the application hit the RLIMIT_AS or RLIMIT_DATA limit described in getrlimit(2).
for RLIMIT_AS I see
>Since the value is a long, on machines with a 32-bit
> long either this limit is at most 2 GiB, or this resource is unlimited.
for RLIMIT_DATA I see
>The maximum size of the process's data segment (initialized data, uninitialized data, and heap).
How do any of those relate to the physical memory being gone and not the address space? RLIMIT_AS is the address space, and the only one I can think of is RLIMIT_DATA, but it is unlimited. So if the address space is still free, but the computer is out of memory, where is malloc getting the error from?
▶ No.928173>>928183
>>928169
I got that backwards btw, I meant SECONDS PER FRAME. A literal slideshow. And if you think a DX2 clocked at 2x the speed, including bus, is going to make enough difference, you're wrong! The main thing the Pentium brought to the table was a blazing fast FPU, which Quake relied on to perform adequately.
▶ No.928178>>928184
>>928170
Read up on how allocators work. You know too little to even understand what the code is showing you. I could tell you to try commenting out the define but the numbers aren't going to make any sense to you right now so it's pointless.
>>928171
I'm not sure why you're going on about address space. On modern systems it's 64 bit so you're not going to be running out of it before you run out of physical memory. It's irrelevant so I'm confused as to what you're getting at.
▶ No.928183>>928199
>>928173
I dunno man, I know I ran it on a DX2 as I didn't have the Pentium until I moved for college. My biggest problem was I had an NEC MultiSync monitor that wasn't very bright and we didn't know about the gamma controls via console at that point.
Also, it wasn't as heavy a reliance on the FPU until the "ports". The Linux version's FPU code was reworked rather than just ported which is some of why it was so much faster than the Windows version. I've forgotten the history of who actually wrote that code as everyone seemed to have their hand in Quake porting in the early days.
▶ No.928184
>>928178
>On modern systems it's 64 bit
Seems like you're the one who doesn't know what he's talking about.
▶ No.928199>>928207
>>928183
id Software ported their games to Linux in the early days. It was probably Dave Taylor, since "Linux gives me a hard-on" was an actual quote in one of the programs he released (sndserver addon for LinuxDoom, IIRC).
Later on, the Quake source was leaked when crack.com got hacked, after Dave Taylor left id to go work on his own game (Golgotha). That's when other people started hacking on the code.
▶ No.928207
>>928199
>Golgotha
I've not heard that name in forever. I used to hang out in their IRC channel on efnet. It was just their team members and me, they used it as team chat. They never said anything really interesting, mainly just telling people to come look at something.
▶ No.928209
▶ No.928374>>928393
>>928139
>Some of my devices running Unix don't have a screen.
I'm not talking about Unix. LISPMs were single-user workstations, not random servers. To adapt this for Unix you could just show the debugger to the user on their terminal. If they aren't connected, then suspend their program, allowing you to fg it to see the debugger.
>Segmentation fault is grounds for killing. Don't write buggy software.
Why? Don't you think it would be a good time to have the debugger to figure out what caused it to happen?
>Check your operands before dividing
When you are doing a long computation and after a week it gets a divide by zero because you forgot an edge case, it would be nice to just fix it and continue rather than fix it and restart the whole computation.
▶ No.928393>>928400 >>928698
>>928374
>I'm not talking about Unix. LISPMs were single-user workstations, not random servers. To adapt this for Unix you could just show the debugger to the user on their terminal. If they aren't connected, then suspend their program, allowing you to fg it to see the debugger.
Or just restart the process instead of the service being down until someone has the time to log in to the machine.
>debugger to figure out what caused it to happen?
That's what core dumps are for.
>When you are doing a long computation and after a week it gets a divide by zero because you forgot an edge case, it would be nice to just fix it and continue rather than fix it and restart the whole computation.
Extremely atypical use case nowadays. If this is a requirement, you can have it on Unix.
▶ No.928400>>928406 >>928614
>>928393
Core dumps are not very useful unless the binary has debugging symbols. But in most cases the binaries are outright strip'd, so fat chance of that. Plus, you need to have the matching source code extracted somewhere if you want to make much sense of it. Big hassle! Normally I just delete core files, and I imagine most others do as well.
▶ No.928406
>>928400
Would a lisp machine debugger help you with such binaries?
▶ No.928471
"We present a design for a class of computers whose 'instruction sets' are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data. LISP is therefore a suitable language around which to design a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. The record structures can be organized into trees or graphs. An instruction set can be designed for programs expressed as such trees. A processor can interpret these trees in a recursive fashion, and provide automatic storage management for the record structures. We describe here the basic ideas behind the architecture, and for concreteness give a specific instruction set (on which variations are certainly possible). We also discuss the similarities and differences between these ideas and those of traditional architectures. A prototype VLSI microprocessor has been designed and fabricated for testing. It is a small-scale version of the ideas presented here, containing a sufficiently complete instruction interpreter to execute small programs, and a rudimentary storage allocator."
This PDF is an interesting read.
▶ No.928582>>928593 >>928620 >>928698
The problem is that computers became a commercial success from the 70s onward in the form of a pile of hacks glued together with text streams (incidentally, text streams were the one worthwhile contribution of this system to computer science.)
▶ No.928593
>>928582
Confusing hardware and software, again.
▶ No.928614
>>928400
Core files are extremely useful. I do detached debugging symbols for our products and save them, and if a service crashes in the wild I can quickly identify where it went wrong. Good Linux distributions do this as well, like the -dbg packages in Debian.
▶ No.928620
>>928582
Make better then, you pretentious poser fuck.
▶ No.928698>>928719 >>928778
>>928393
>Or just restart the process instead of the service being down until someone has the time to log in to the machine.
That's what people do when they don't understand the problem. When you use an OS written in C, you can't understand the problem because there is so much duplicate code and everything needs so many lines of code to do anything. Which of those 60 million lines of code is responsible for the bug? Is it the compiler's fault or the programmer's fault for compiling with the "wrong" optimizations? Is it in the program itself, a library, a daemon, your hardware, or the kernel? Eventually you just give up trying to fix things, or if you're an AT&T shill, you make excuses.
>That's what core dumps are for.
Core dumps suck. A Lisp machine suspends your process and lets you edit the source of a program and change and inspect variables with GC and dynamic typing intact. You can't even do that on UNIX.
>Extremely atypical use case nowadays. If this is a requirement, you can have it on Unix.
Except you can't. After billions of dollars and almost 30 years after UNIX-Haters said it was a problem, UNIX still doesn't do it. Mainframes from the 60s could do it from the beginning (even ones written in assembly), but UNIX can't. Computation was expensive, so people cared about these sorts of things. Being able to fix a problem and continue might have saved weeks of duplicate work and wasted time.
>>928582
>The problem is that computers became a commercial success from the 70s onward in the form of a pile of hacks glued together with text streams
Bullshit. That "pile of hacks" only became successful after people were already using computers for years and the right way to do things was already known for decades. Better was driven out by worse. An MS-DOS user might be persuaded by shills who blame preemptive multitasking or virtual memory for various UNIX bugs, but people who have used better OSes know that UNIX sucks.
>(incidentally, text streams were the one worthwhile contribution of this system to computer science.)
More bullshit. Have you heard of STREAM files in PL/I? Did you know Multics already had text streams in addition to direct and indexed files and virtual memory segments? The difference was that other OSes had better ways to do things in addition to text streams, so you really only used them for printing text, while in UNIX you're forced to use them for everything, which opens up programs to buffer overflows and means having to parse and serialize everything, which is a lot slower. They probably use null-terminated strings too, which doubles the bugs and slowness.
This article is indicative of the UNIX methodology; if
your daemon process stays up for 24 hours straight without
coredumping or just vanishing into a puff of smoke, you're
doing great! And if not, here's an elegant solution:
No really, this is the way all system software will work
in the future; No one can REALLY debug software, even if
they did have access to the sources. I mean, why did the
Lisp Machine go to all the trouble of signalling and
catching exceptional conditions, and so on (condition-case),
when it's so much simpler to wait and see if your process
dies, oh every 60 seconds or so, and start another one? Of
course what if your process which is watching your other
process dies mysteriously? Well, start another one!
[code]So, here I am, puttering along on this UNIX box, when I
notice a core file in the root directory. Hmmm, I sez, that
wasn't there before I booted this beast. So I tried to look
at the file with dbx, then gdb. No luck -- the debuggers
couldn't read it. Try strings on it -- no luck; no strings.
Bring up the core file in emacs. Hmmm -- looks like
something from startup. But the machine started up fine,
and there's no hint in either the log files or in the core
file of what program may have done this.
So, delete the core file, check the logs, boot the machine.
Machine comes up, core comes back. Same size, looks the
same inside.
So, why doesn't UNIX have some way of finding out what
program barfed? A core file is a large turd on the disk. A
few more bytes in it wouldn't be that bad. Also, supposedly
the startup winnage and lossage all ends up in log files, or
spewed on the screen. Nada in this case. Sigh. Yet more
drain bamage, brought to you courtesy of AT&T plus a host of
others.
Can I have a better OS, please?[/code]
▶ No.928719>>928734
>>928698
>A Lisp machine suspends your process and lets you edit the source of a program and change and inspect variables with GC and dynamic typing intact. You can't even do that on UNIX.
The price to pay to have a fast compiled language. Unless you're one of those retards thinking that GC is costless because of their microbenchmark.
You really want your handholding, with your language making a branch on every division to check that the divisor isn't 0. Protip: you have such a retarded language and it's called Python.
▶ No.928734>>928764
>>928719
>making a branch every division to check that the divisor isn't 0
Take off the UNIX blinders, dude. You don't need to branch to check if the divisor is 0. This is typically checked by the microprocessor itself, which signals a trap when it detects the divide by 0. Even the x86 architecture supports trapping on divide by 0. There is no extra cost in doing this check as it's built into the silicon itself.
▶ No.928750>>928759
If you had the resources to develop&produce one desktop sized hardware implementation of any single programming language besides Lisp in CY+3 which wuld you choose /tech/?
▶ No.928759>>928787
▶ No.928764>>928847
>>928734
So what's your problem with handling SIGFPE like a big boy instead of wanting your language to abstract everything from you (crashing with info being the best behavior, since you have a logic problem in your program)? Also, your mandatory check would probably be emulated under some µarchs.
▶ No.928778>>928847
>>928698
>That's what people do when they don't understand the problem.
Ain't nobody got time to debug everything.
>When you use an OS written in C, you can't understand the problem because there is so much duplicate code and everything needs so many lines of code to do anything. Which of those 60 million lines of code is responsible for the bug? Is it the compiler's fault or the programmer's fault for compiling with the "wrong" optimizations? Is it in the program itself, a library, a daemon, your hardware, or the kernel?
You've obviously never administered a unix system.
>Eventually you just give up trying to fix things,
Depends. Time is money.
>or if you're an AT&T shill, you make excuses.
So how many people do you think are on AT&T's payroll?
>Core dumps suck. A Lisp machine suspends your process and lets you edit the source of a program and change and inspect variables with GC and dynamic typing intact. You can't even do that on UNIX.
Yes you can. Just run all processes in gdb :^) Or modify your kernel to fire up a debugger every time a process crashes. But that would be retarded, really.
>>Extremely atypical use case nowadays. If this is a requirement, you can have it on Unix.
>Except you can't.
Yes you can. Stop lying.
>After billions of dollars and almost 30 years after UNIX-Haters said it was a problem, UNIX still doesn't do it.
No market for that, obviously.
>Mainframes from the 60s could do it from the beginning (even ones written in assembly), but UNIX can't. Computation was expensive, so people cared about these sorts of things.
But today it isn't.
▶ No.928787
>>928759
Q predicted this.
▶ No.928847>>928852 >>928854
>>928764
SIGFPE is a UNIX abstraction. Other OSes use a better mechanism. Every division has to perform this check, so it shouldn't be slow.
>>928778
>Ain't nobody got time to debug everything.
UNIX takes bugs to the next level. If you read a post from 1990 talking about bugs from the 70s for any other OS, the bugs would be fixed by now. With UNIX, they show up in UNIX clones that don't share a single line of code.
>Depends. Time is money.
If time is money, why are you using C? It sucks to waste time on fixing bugs, but another language wouldn't have as many bugs.
>So how many people do you think are on AT&T's payroll?
There were a lot in the 80s and 90s, but I think they do it for free now.
>Yes you can. Just run all processes in gdb :^) Or modify your kernel to fire up a debugger every time a process crashes. But that would be retarded, really.
On a Lisp machine and Multics it makes sense.
>No market for that, obviously.
Mainframes still do it. It's one of the things that make them more reliable than UNIX.
>But today it isn't.
If computation isn't expensive, why do C weenies care so much about getting the wrong answer fast?
All of which would still work, although you might lose
your lunch if you make the mistake of looking at it while
all this is happening, except that sometimes the automount
daemon just tells you that the file doesn't exist because it
would take too long to compute all of the above braindamage
and obviously it is more important to get the wrong answer
fast than to get the right answer slowly.
Multics was written in a high-level language first. ITS ran
on the PDP-6 and PDP-10.
Sure they came up with an implementation. You just make a
machine that looks just like a PDP-11 and you can port unix
to it. No problem!
The latest idea is to build machines (RISC machines with
register windows) which are designed specifically for C
programs and unix (just check out the original Berkeley RISC
papers if you don't believe me: it was a specific design
goal). Now, people tell me that the advantage of a Sun over
a Lisp machine is that it's a general-purpose machine ("Of
course it's general purpose." they say. "Why it even runs
unix.").
Hmm, well this example shows that at least the weenix unies
know how to USE recursion!
▶ No.928852>>929200
>>928847
>UNIX takes bugs to the next level. If you read a post from 1990 talking about bugs from the 70s for any other OS, the bugs would be fixed by now. With UNIX, they show up in UNIX clones that don't share a single line of code.
Such as?
>If time is money, why are you using C? It sucks to waste time on fixing bugs, but another language wouldn't have as many bugs.
I use the right tool for the job. For many jobs, it's C.
>On a Lisp machine and Multics it makes sense.
No
>Mainframes still do it. It's one of the things that make them more reliable than UNIX.
And look how many people buy mainframes.
>If computation isn't expensive, why do C weenies care so much about getting the wrong answer fast?
Imply harder.
▶ No.928854>>929200
>>928847
You're still confusing hardware and software.
▶ No.929149>>929151 >>929200
Why not use both unix and lisp? The guys from GuixSD seem to be doing a good job. Why all the hate and D&C?
▶ No.929151>>929200 >>929300
>>929149
It's just /tech/ artistically devouring itself as it makes one last spasm before dying.
This board was founded on Lisp zealotry and a disdain for modern computing, here represented by Unix (even if it's better than any alternative), and thus it shall die.
▶ No.929200>>929277 >>929289
>>928852
>Such as?
I already explained them enough times. C is full of bugs. A string library that overflows buffers is a bug.
>I use the right tool for the job. For many jobs, it's C.
C is the wrong tool for any job. UNIX weenies used to use it because they had no other choice, like JavaScript and sh. People don't like JavaScript, but common browsers don't run anything else.
>>On a Lisp machine and Multics it makes sense.
>No
That's what these systems do. What doesn't make sense about a debugger coming up when a program crashes? It happens on Windows, but it's not usually a source level debugger like on the Lisp machines.
>And look how many people buy mainframes.
They're the main computer business of IBM and a lot of other companies.
>Imply harder.
Imply what? That C gets the wrong answer fast by ignoring integer overflow, out of bounds indexing, and all other errors that were known to be problems in the 50s and were solved in the 60s? That's entirely correct.
>>928854
System software is tied to hardware. UNIX ignores everything that wasn't part of a PDP-11. If you're designing an OS without thinking about hardware, it's going to suck. You can design the hardware after the OS, but you're still thinking about hardware.
>>929149
Because the problem is those 30 or 60 million lines of code that are full of bugs and reinvent the wheel multiple times. There is so much duplicated effort on UNIX machines and it's a huge waste of time and money. I want to get rid of all that bullshit and replace it with something better, like real systems that are no longer made.
>>929151
Bullshit. UNIX weenies took over this place, and now I'm using these posts to help them understand that software used to not suck and doesn't have to suck.
>Lisp zealotry and a disdain for modern computing, here represented by Unix
UNIX is not modern computing. It was bad for 1970 because it ignored problems solved by 60s OSes like Multics. Lisp machines are from the 80s, far more modern than UNIX. The weenies believe that instead of making things better, they should convince enough people that better is impossible so nobody tries, and it sucks.
>I liken starting one's computing career with Unix, say as a undergraduate, to being born in East Africa. It is intolerably hot, your body is covered with lice and flies, you are malnourished and you suffer from numerous curable diseases. But, as far as young East Africans can tell, this is simply the natural condition and they live within it. By the time they find out differently, it is too late. They already think that the writing of shell scripts is a natural act.
Of course Lint is useless. It's one of those programs
that was written as a student project N^2 years ago and
has never been debugged or brought up to date.
I was unaware that proscriptions against free()ing pointers
you had never malloc()ed and referencing uninitialized
variables were a recent development, but isn't it great that
we have (void) so that lint doesn't complain when I don't
check the return status from printf()? Even BASIC has
on-error-goto - too bad Kernighan and Ritchie couldn't be
bothered to put an error handler hook into the language.
By the way, that free() didn't crash the program. Unix
memory allocation routines are so nice and trusting - they
just prepend a couple of bytes to your pointer and then add
it to the free list, regardless of where the pointer points
- if you own the memory, you just freed it
(congratulations). The real fun starts when you do your
next malloc and get a segmentation violation. In my case, I
was left scratching my head and wondering why 6 calls deep
into getpwnam(3) (a supposedly stable library call, but hey,
this is Unix - there are no guarantees), my program suddenly
segfaulted without the slightest indication why. Pfft.
Try giving LINT ANSI-C and see what it does.
Isn't it nice that all the different implementations of
ANSI-C are *less* compatible with each other than PCC-based
compilers? Standards are soooooo wonderful.
Try giving the Sun compiler ANSI-C, for that matter.
With the advent of Solaris, Sun will address that problem by
not giving you a C compiler.
▶ No.929277>>929309
>>929200
>C is full of bugs.
The language is not full of bugs.
>A string library that overflows buffers is a bug.
Not if you're using it according to the documentation.
>C is the wrong tool for any job.
Wrong.
>UNIX weenies used to use it because they had no other choice, like JavaScript and sh. People don't like JavaScript, but common browsers don't run anything else.
There are many people who do.
>>>On a Lisp machine and Multics it makes sense.
>>No
>That's what these systems do. What doesn't make sense about a debugger coming up when a program crashes? It happens on Windows, but it's not usually a source level debugger like on the Lisp machines.
I already told you: few computers have a screen AND someone who can use a debugger.
>Imply what? That C gets the wrong answer fast by ignoring integer overflow,
which is the programmer's job
>out of bounds indexing,
which is the programmer's job
>and all other errors that were known to be problems in the 50s and were solved in the 60s? That's entirely correct.
With a powerful gun you can shoot yourself in the foot. Duh!
>System software is tied to hardware.
Ever heard of portability? Thankfully C is a very portable language.
>UNIX ignores everything that wasn't part of a PDP-11.
Wrong.
>If you're designing an OS without thinking about hardware, it's going to suck.
Little has to be considered for most parts of the OS for most architectures that are used today.
>You can design the hardware after the OS, but you're still thinking about hardware.
Nobody does that.
>There is so much duplicated effort on UNIX machines and it's a huge waste of time and money.
Such as?
>I want to get rid of all that bullshit and replace it with something better, like real systems that are no longer made.
Fucking do it. Go ahead. I'll wait and wish you all the best and good luck.
>and now I'm using these posts to help them understand that software used to not suck and doesn't have to suck.
Except you don't because your posts lack depth. Why don't you show us some actual example code? Or let's talk about the hardware architecture - and not by copying and pasting random e-mails from 1960.
>Lisp machines are from the 80s, far more modern than UNIX.
And look where lisp machines are today :^)
▶ No.929289>>929309
>>929200
>I'm using these posts to help them understand that software used to not suck and doesn't have to suck.
You have yet to prove anything because your posts have been nothing but pure aggressiveness and non sequiturs. You have yet to show any regard for humility and respect to any aspect of technology and have only proven yourself to be as smart as a moody teenager on her period.
Make your case on how throwing everything technology and industry have built over the last 40 years out the window in exchange for lisp, without logical fallacies, to show that you aren't a shitter shattered sore loser, or get the fuck out and stop posting on your C based machine.
Is it that there's some grand conspiracy by AT&T that the mean old UNIX meanies are out to get you or that you're so absolutely obtuse that you can't accept any opinion or viewpoint whatsoever? Your move.
▶ No.929300
>>929151
>This board was founded on Lisp zealotry
No, it was founded on relatively good discussion about interesting subjects. Newfags like you are the reason I barely post once a month, unlike the many times a day I previously did.
▶ No.929309>>929311 >>929315 >>929324 >>929463
>>929277
>Not if you're using it according to the documentation.
How will you use strcat or strcpy according to the documentation? There's no way to provide the size of the buffer. abc + def is 1960s technology, but C weenies can't even manually pass the length of a string. The combination of array decay and null-terminated strings makes it impossible to use known solutions, which sucks.
>Wrong.
What's wrong about it?
>I already told you: few computers have a screen AND someone who can use a debugger.
Debugger doesn't mean hex codes or some primitive UNIX gdb bullshit. A modern 80s debugger can do source level debugging and fix programs without having to kill or restart the program, but that needs a better OS.
>Ever heard of portability? Thankfully C is a very portable language.
C is the least portable ISO or ANSI standard language.
>Wrong.
What's wrong about it? Where is the segmented memory, tagged memory, or garbage collection in UNIX? Those are hardware features. If UNIX really was portable, it should be able to run on those computers, but it can't, because they don't resemble a PDP-11.
>Little has to be considered for most parts of the OS for most architectures that are used today.
Not if you want your computer to have anything more than a PDP-11.
>Nobody does that.
Symbolics Lisp machines have different hardware architectures but one programming language and OS. The hardware was designed to run the particular dialect of Lisp.
>And look where lisp machines are today :^)
UNIX weenies think it's about popularity, which sucks. I like Lisp machines, Multics, and VMS because they're good, not because a company paid me to like them or paid people to tell me to like them.
>>929289
>You have yet to show any regard for humility and respect to any aspect of technology and have only proven yourself to be as smart as a moody teenager on her period.
I have tremendous respect for real technology, academia, and innovation. I have no respect for anyone who pretends that problems that were solved in the 60s are "too hard" for 2018. I'm not the one who pretends the work of thousands of people didn't exist because they didn't work on C and UNIX.
>throwing everything technology and industry has built upon on the last 40 years out the window
That's what UNIX and C do and that's why I hate them. UNIX weenies say that problems that were solved in the 1960s are impossible to solve. Dynamic linking was done right in the 60s. Buffer overflows were a solved problem in the 60s. People knew how to build OSes without panics or broken system calls or OOM killers.
>Is it that there's some grand conspiracy by AT&T that the mean old UNIX meanies are out to get you or that you're so absolutely obtuse that you can't accept any opinion or viewpoint whatsoever? Your move.
I accept opinions, but I don't accept bullshit. Null-terminated strings suck. That's a fact, not an opinion. Array decay is broken and a bug. That's a fact too, not an opinion.
Date: Fri, 12 Mar 93 18:45:44 -0500
Subject: The Emperor of China
Section 30.02 of _Unix Power Tools_ by O'Reilly & Associates says
... /ispell/, originally written by Pace Willison ...
but hey, I was there when Pace ported the ITS SPELL program
to C. Sure I am grateful to have a few reminders (^Z is
another one) of bygone glories around, but let's give credit
where credit is due! Legend tells of a Chinese Emperor who
ordered books burned so all knowledge would be credited to
his reign. I guess the subsequent generation of scholars
were a lot like the Weenix Unies of today.
▶ No.929311
>>929309
>C is the least portable ISO or ANSI standard language.
No source. I've compiled C for x86, ARM, POWER, m68k, AVR, and PIC.
I've never been able to compile LISP for anything other than x86.
>Those are hardware features
What extreme hardware bloat. Enjoy being able to run LISP on nothing but Multics daddy's money bar mitzvah Starbucks machines.
Question: what hardware did you use to make your post? Depending on the answer, it might settle your question of which language is better.
▶ No.929315
>>929309
>That's what UNIX and C do and that's why I hate them
Since when? Machines have been running just fine under C. They wouldn't have switched to C if it was bad, now would they?
>Null-terminated strings suck. That's a fact
Facts don't grow on trees. Just because you say it doesn't make one. The browser you posted from uses null-terminated strings, and I'd say it does pretty well.
What purpose is a LISP machine? I've never seen any lisp machine surpass the Xerox PARC Alto in terms of performance, while a von Neumann C-based x86 machine can pull off https://www.youtube.com/watch?v=0oXCy-OeLHs or my company's CNC software.
What's the purpose of a lisp machine if it can't run anything useful?
▶ No.929324>>929336 >>929416
>>929309
>hardware garbage collection
You'll get fewer games and emulators than fucking 9front.
▶ No.929336
>>929324
By hardware garbage collection they mean into the trash it goes.
▶ No.929416
>>929324
Why do you think that's true?
▶ No.929463>>929474 >>929794
>>929309
>Null-terminated strings suck. That's a fact, not an opinion
Fact 1: you would need to use a multiprecision integer to store the size unless you want to limit the size of your string and add a check every time you append something. Big memory AND CPU overhead.
Fact 2: strings are mostly accessed linearly, so a terminator is fine. You can still store the size separately if you really need it in your situation.
▶ No.929474
>>929463
>Fact 1: you would need to use a multiprecision integer
In theory yes, in practice no. Practically you'll be working with a 64 bit processor that can only address 48 bits worth of addresses. This means that you just need to store 48 bits worth of length maximum. (If your architecture supported a larger address space, you could just change the length of this length).
▶ No.929794>>929811 >>929815
>>929463
>Fact 1: you would need to use a multiprecision integer to store the size unless you want to limite the size of your string and add a check everytime you append something. Big memory AND CPU overhead.
Do you think it's reasonable to search 18,446,744,073,709,551,616 characters for a null? You don't think that has memory and CPU overhead? Let's be more reasonable and assume you only have to search 4,294,967,296. If you could design a computer where searching that much memory is faster than reading a 32-bit or 64-bit integer, you would be a billionaire.
>Fact 2: strings are mostly accessed linearly, so a terminator is fine. You can still store the size separately if you really need it in your situation.
Null-terminated strings are mostly accessed linearly because you can't access them any other way without possibly reading past the string. All the algorithms that need random access to characters can't be used, so they're even slower.
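Concretely (toy functions, not from any real codebase): even grabbing the last character forces a full scan when the terminator is all you have, while a stored length makes it one array index:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* With a NUL-terminated string, "give me the last character" is O(n),
   because the length isn't stored anywhere and strlen must walk
   every byte looking for the terminator. */
char last_char_cstr(const char *s) {
    return s[strlen(s) - 1];
}

/* With an explicit length, the same operation is O(1). */
char last_char_counted(const char *s, size_t len) {
    return s[len - 1];
}
```

Same story for slicing from the end, binary search over records, and anything else that wants random access.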
▶ No.929811
>>929794
>>Fact 2: strings are mostly accessed linearly, so a terminator is fine. You can still store the size separately if you really need it in your situation.
>Null-terminated strings are mostly accessed linearly because you can't access them any other way without possibly reading past the string. All the algorithms that need random access to characters can't be used, so they're even slower.
Can you learn to read, please? How did you turn "strings are usually accessed sequentially" into "we would access them in random order for no reason if we had the size"? M8, UNIX and POSIX are riddled with old age and a lack of time dedicated to design, but you're mighty retarded. Especially when your answer to GC is "just do it in hardware".
▶ No.929815>>929831
>>929794
But what strings are 18,446,744,073,709,551,616 characters long? Or more than a MB, for that matter? If you're handling large chunks of data, they're most likely not strings but byte arrays (byte arrays are arrays of bytes, while strings are comprised of characters; a character is not necessarily a byte, and by using null-terminated strings you're limiting your choice of character encodings), and then you already know their size and store it separately.
But I do agree null termination sucks.
▶ No.929831
>>929815
>But what strings are 18,446,744,073,709,551,616 characters long?
Your mom's. And I'm talking font size 72 monospace.
▶ No.938350>>938355
Found a really great article that every single lispfag must read:
https://danluu.com/symbolics-lisp-machines/
It's by Dan Weinreb, one of the guys who worked at Symbolics. tl;dr: mismanagement, poor timing, rapid irrelevance, the wrong vision, wrong bets on auxiliary tech, and a customer base too reliant on DoD funding. No apologia or whataboutism needed: Lisp Machines were doomed to fail for many reasons, and they were not superior.
▶ No.938355
>>938350
Oh veeeeyyy how dare you make fun of the chosen workstation! I'll skin you alive you UNIX NAZI.