
/tech/ - Technology


File: d494f4c6d75029f⋯.jpg (351.29 KB, 1536x2048, PDP-11.jpg)

 No.909679>>909681 >>909683 >>909734 >>909741 >>909909 >>909949 >>909950 >>909992 >>911724

https://queue.acm.org/detail.cfm?id=3212479

Or rather, it's only low-level if you run it on a PDP-11. Modern processors use a massive amount of abstraction to make it fast.

C assumes flat memory and sequential execution. Processors don't have those things, but everyone uses C, so they have to pretend they have them just so you can run C fast. Instruction-level parallelism is hard and prone to bugs, but it's the only kind you can use for C without completely changing the language or the way it's used.

Compilers jump through hoops to make it run fast. It's not just undefined behavior, it's also unclear padding rules that even expert C programmers often don't understand, and pointer provenance rules that mean it doesn't just matter what a pointer contains but also where it was made.

Much of the difficulty in parallelism is just about fitting it into C's execution model. Erlang manages to avoid that and makes parallelism a lot easier, but people keep writing C.

Processors could be simpler and faster if they didn't have to layer a fake PDP-11 on top.

 No.909681>>909682 >>909686 >>910577

>>909679 (OP)

oh so erlang is low level. right


 No.909682

>>909681

try reading the text next time


 No.909683>>910533 >>910578

>>909679 (OP)

I wonder what kind of autism it takes to have a hateboner for a programming language.


 No.909685>>909727 >>909805

Mostly incoherent ramblings desu. C has its flaws but they don't affect CPUs as much as you claim.

>C assumes flat memory

No

>Processors could be simpler and faster if they didn't have to layer a fake PDP-11 on top.

But they don't.


 No.909686>>909703

>>909681

There are no low-level languages on modern processors. You can get low-level on a GPU, maybe.

Erlang on a hypothetical processor designed to run it wouldn't be as much higher-level than the modern C monstrosity as you'd expect. C is no longer fit to be a low-level language.


 No.909703>>909712

>>909686

What is Assembly then?


 No.909706

File: fca7c29247dc751⋯.jpg (13.61 KB, 400x400, jW4dnFtA_400x400.jpg)

>C

All the baggage of a low level language, anchored by the fact that it's actually a high level language.


 No.909712>>909715 >>909723

>>909703

It's a way to get close to a horribly complex abstraction layer. It's still pretending instructions are sequential when they aren't, which is the direct cause of Spectre and other bugs.

There's a lot of abstraction inside the processor.
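For reference, the Spectre variant-1 "bounds check bypass" gadget from the Kocher et al. paper is ordinary-looking C; the names (`array1`, `array2`) follow the paper, and the cache-timing side channel itself is omitted here, so this is only a sketch of the architecturally "safe" code that speculation turns into a leak:

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16] = {0};
size_t  array1_size = 16;
uint8_t array2[256 * 512];

/* Architecturally this never reads out of bounds. Speculatively, a trained
 * branch predictor can execute the body with x >= array1_size; the
 * secret-dependent index into array2 leaves a cache footprint that
 * survives the rollback. */
uint8_t victim(size_t x) {
    if (x < array1_size)
        return array2[array1[x] * 512];
    return 0;
}
```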


 No.909714

rust is the only low level language


 No.909715>>909717 >>909727

>>909712

What would a processor that did not hide non-sequential instruction execution look like?


 No.909717>>909894

>>909715

It might just not do any instruction-level parallelism that requires branch prediction in the first place, and offer better explicit parallelism instead.


 No.909718>>909727

>oy vey C is the fault for our "bugs"

thanks Intel shills


 No.909723>>909725

>>909712

Spectre is because of predictive (speculative) code execution, not parallelism


 No.909725

>>909723

It's instruction-level parallelism.


 No.909727>>909737 >>909741 >>910365

>[assembly] is not low level

C is not low level either. It can compile however it wants; you don't get many guarantees about how it will look in assembly. You can't, for instance, read something at ESP+8 in any useful way (it may work on your compiler today, but only by coincidence). He says pretty much this in the "Optimizing C" section. C is also not high level

>In the wake of the recent Meltdown and Spectre vulnerabilities, it's worth spending some time looking at root causes.

not really, it was already obvious that cache is a cause of side channels without knowing much about CPUs in particular

>>909685

most C programmers think C allows you to program x86 without using assembly instructions. however this is wrong because C is an abstract language. trying to use it in some specific way breaks stuff, for example prevented people from moving from x86 to x64

>>909715

just turn off cache and branch prediction lol

>>909718

polniggers aren't qualified for this board. these new vulns have nothing to do with Intel, it's caused by superscalar CPUs. if AMD wasn't superscalar you'd get LARPers complaining that AMD is too slow for real life


 No.909734>>909760

>>909679 (OP)

> Processors don't have those things

So what do processors have?

And how could a language be designed to take advantage of what processors do differently?


 No.909737>>909817

>>909727

>these CPU vulns have nothing to do with CPU manufacturers

>REEE it's your fault

$0.01


 No.909741>>909760 >>909805 >>909817 >>910483 >>910754 >>910756

>>909727

>have nothing to do with Intel

>intel didn't invent x86

>if AMD

They are manufacturing CPUs with the same architecture, you nigger.

AMD and Intel bugs strongly overlap because they basically make the same CPUs with only slight differences. Otherwise your programs wouldn't run on them without recompiling.

>>909679 (OP)

OP you're a faggot and you know it

>sequential execution

If it really worked like that your PC could only run one program at a time. Also that doesn't matter and no programming language does it differently because it makes zero sense.

>Or rather, it's only low-level if you run it on a PDP-11. Modern processors use a massive amount of abstraction to make it fast.

No C is an abstraction of assembly and can be compiled into NATIVE CPU INSTRUCTIONS

>Much of the difficulty in parallelism is just about fitting it into C's execution model.

No it's not. You ever heard of pthreads? And ALL CPUs execute one thing after another; that's how every calculator on the planet works. Your brain is probably nothing more than a multicore CPU running native instructions


 No.909760>>909770

>>909734

Processors start executing the next instruction before the last one is done. They execute a lot of them at once. To do that they need to predict which instructions are going to need to be executed, but it's impossible to do that reliably.

Processors also use multiple levels of caches, transparently, in a way that heavily affects performance of C programs but isn't visible to them.

>>909741

>If it really worked like that your PC could only run one program at a time.

No? I don't even understand what kind of confusion of ideas is going on here.

>No C is an abstraction of assembly and can be compiled into NATIVE CPU INSTRUCTIONS

"Native CPU instructions" aren't very low-level any more, which is why it says that processors use a massive amount of abstraction. I recommend you click the link at the top of the page.

>No it's not. You ever heard of pthreads? And ALL CPUs execute one thing after another that's how ever calculator on the planet works. Your brain is probably nothing more than a multicore CPU running native instructions

pthreads is unnecessarily hard to work with because it's bolted on top of C's execution model. C (very reasonably) wasn't designed with that in mind, so you need a lot of coordination between threads to make sure you don't fuck it up. Adding parallelism to C programs is nontrivial, and starting threads is fairly expensive.

>Also that doesn't matter and no programming language does it differently because it makes zero sense.

Erlang is different. It's worth looking into its design more deeply if you aren't familiar with it, but the most relevant thing here is that everything happens inside cheap processes that share no mutable state. That makes it easy to run a lot of communicating tasks in parallel.


 No.909770>>909801

>>909760

>No? I don't even understand what kind of confusion of ideas is going on here.

You might have misunderstood. Every core does everything sequentially but the programs don't run at ring 0. And OP claimed that to be an issue

>aren't very low-level any more

I'm not sure what you're talking about. C is a high level programming language but the amount of abstraction has been proven to be reasonable.

>starting threads is fairly expensive.

I don't think anything keeps you from starting them at the beginning of the program

Other languages just abstract that as far as I know. So it won't get any better because your code looks prettier.

And no one cares for theoretical models. The only thing which matters is practical capability.


 No.909801>>909854

>>909770

>You might have misunderstood. Every core does everything sequentially but the programs don't run at ring 0. And OP claimed that to be an issue

It's an issue because C is not very good at using multiple cores, so processors focus on making single core execution fast, which causes all kinds of problems.

>I'm not sure what you're talking about. C is a high level programming language but the amount of abstraction has been proven to be reasonable.

C is a lot higher level than it appears to be because of everything going on in the processor, but it's still being used as a low level language. It's getting some of the worst of both worlds.

>I don't think anything keeps you from starting them at the beginning of the program

Not knowing how many you're going to need in the first place and what data they're going to process is what keeps you from doing that.

If you want good parallelism you need to make it a core part of the way computation is handled. Erlang processes are the natural way to structure Erlang programs, so the parallelism is almost implicit. POSIX threads are absolutely not the natural way to structure C programs.

>Other languages just abstract that as far as I know. So it won't get any better because your code looks prettier.

It would get better if the processor were designed for that execution model. Modern processors are optimized for C and do things that are necessary to make C run fast but would have better alternatives for more realistic execution models.

GPUs are useful because their execution model is so different from CPUs, even though everything you can do on a GPU could be done on a CPU in a more sequential way.

>And no one cares for theoretical models. The only thing which matters is practical capability.

These models represent the ways the systems actually work. Practical capability depends entirely on the way these systems work.


 No.909805>>909811

>>909685

I believe what he meant is that C is meant to work with an OS where memory is treated as one big array. Do you have an argument against this?

>Processors don't fake a PDP-11 layer

I don't know if they do, but x86 is a giant mess. RISC-V has major interest in it for a reason, corporations (including Google, Samsung, Western Digital, NVIDIA) aren't just burning money for fun.

>>909741

>No C is an abstraction of assembly and can be compiled into NATIVE CPU INSTRUCTIONS

It can be compiled into machine code like many other languages. What you said didn't refute what he said.

>You ever heard of pthreads?

You're saying that POSIX threads, the thread execution model made for Unix-likes and Unix, has nothing to do with C? Come on, you can't possibly believe that.

Here's how they are used in practice, if you don't believe me: "Pthreads are defined as a set of C language programming types and procedure calls, implemented with a pthread.h header/include file and a thread library - though this library may be part of another library, such as libc, in some implementations." See: https://computing.llnl.gov/tutorials/pthreads/


 No.909811

>>909805

>You're saying that POSIX threads, the thread execution model made for Unix-likes and Unix, has nothing to do with C?

I think they're saying that pthreads is a way to use parallelism in C and therefore parallelism in C's execution model is a solved problem and as easy as it could be.


 No.909812>>909814

>a programming language that isn't machine code/assembly and requires a compiler isn't low level

No shit


 No.909814

>>909812

Read the thing


 No.909817>>909854

>>909737

shut the fuck up faggot. you don't have the slightest idea what you're talking about. side channel attacks on cache/bp are on all mainstream CPUs. this has nothing to do with "muh intel vs AMD", since it also applies to e.g ARM. also I haven't read these meme vulns like Spectre and Meltdown, but they probably are some general exploit on some way the BP is done in Intel. However, since cache/BP intrinsically lead to side channels, no matter how many of these meme vulns that get released and patched, you will always be able to form new similar side-channel attacks against any software running on a superscalar CPU

>>909741

>They are manufacturing CPUs with the same architecture, you nigger.

pedantry, i was just telling this polnigger that the vulns are on AMD CPUs as well. the fact that intel invented the architecture is not relevant

>>sequential execution

>If it really worked like that your PC could only run one program at a time.

wrong

>No C is an abstraction of assembly and can be compiled into NATIVE CPU INSTRUCTIONS

Nope. C is an abstract programming language, just like Java. It has nothing at all to do with assembly contrary to what LARPers tend to think.


 No.909837>>909854

Not everything is parallel.

Serial computation is a better abstract model for reasoning, just as call by value is more understandable than call by reference/name.

Automatic parallel compilation was mastered by fortran architects, and resulted in SIMD, Very long instruction word (VLIW), and GPU hardware.

C is not designed for strict numeric computation, it is an attempt at symbolic computation in a byte oriented fashion.

American engineers thought call stacks and recursion were bad because IBM would lose money if they were to implement European "novelties".

I agree that C compilers should not optimize as aggressively as gcc does.


 No.909854>>909856 >>909861

>>909817

>>If it really worked like that your PC could only run one program at a time.

>wrong

Already said that here >>909801

CPU switches between the threads but OP claims

>sequential execution. Processors don't have those things

It is sequential and to work around that the OS abstracts it away by switching between the threads.

>>909817

>C is like Java

Not in any way. Java runs in a VM from bytecode with automatic memory management. The VM with the standard library alone is 130MB in size. Running software on a PC inside a PC can't be counted as running software directly on a PC in its native instructions.

Assembly and C are close enough that there is a GCC extension which allows for embedded assembly.

>>909837

recursion is simply slow and offers only laziness to the one writing


 No.909856>>909860 >>909863

File: ed83ceba7bb1600⋯.png (490.09 KB, 449x401, laughingwhores.png)

>>909854

>he doesn't know what tail call optimization is


 No.909860

>>909856

If it's tail call recursion then it's equivalent to a loop anyways.


 No.909861>>909865 >>910393

>>909854

reread

>C is an abstract programming language, just like Java.

you can embed assembly in any programming language, including Java. Java will just have some more overhead than C


 No.909863

>>909856

>recursion is simply slow and offers only laziness to the one writing

I'll have to assume you're DEK to argue in good faith.

A language with static call graphs and explicit stack structures seems like a compromise in the machine's favor.

Imagine you have a parser implemented as a set of mutually recursive procedures; trying to implement this without an implicit call stack would require one huge procedure with an explicit control stack and some means of dispatch (indirect goto, or switch).

Sure, the best solution is to use what yacc does and do the whole LALR approach, but the yacc language itself has recursion.

We are arguing over whether a central concept in computing should be reflected in computer language.


 No.909865>>909985

>>909861

C's environment is a lot smaller and Java IS A VM. There is no denying this.


 No.909894>>909921

>>909717

The approach you describe is called VLIW and it's utter shit. You can't schedule instructions well beforehand since you don't know which execution units are free at any point (which branch did we come from? how long did that memory access take? did an exception happen?). Trying to encode parallelism explicitly leads to bloated, shitty, inefficient code.

what a shit thread


 No.909899>>909910 >>909911

The only reason for any problem in C is that it sucks. There are languages out there that do not have these problems, so the flaw comes from copying C and UNIX and avoiding everyone else's work. There's a brief mention of Fortran, but no mention of Lisp, Ada, PL/I, or many other languages. I still don't know why anyone would use C unless they like using software that sucks.

>A processor designed purely for speed, not for a compromise between speed and C support, would likely support large numbers of threads, have wide vector units, and have a much simpler memory model. Running C code on such a system would be problematic, so, given the large amount of legacy C code in the world, it would not likely be a commercial success.

This is more UNIX bullshit. Why must every computer run C? UNIX shills want you to feel helpless and it sucks. If a vector processor only runs Fortran, people will write more programs in Fortran to take advantage of the speed. If a Lisp machine doesn't support pointer arithmetic and has garbage collection, they would use Lisp and other GC languages. This is why this argument makes no sense and is just fearmongering. If anything, it's the only thing that could be a commercial success, which is why they don't want anyone to do it.

      OK.  How about:

switch (x)
default:
    if (prime(x)) {
        int i = horridly_complex_procedure_call(x);
case 2: case 3: case 5: case 7:
        process_prime(x, i);
    } else {
case 4: case 6: case 8: case 9: case 10:
        process_composite(x);
    }

Is this allowed? If so, what does the stack look
like before entry to process_prime() ?



I've been confusing my compiler class with this one for
a while now. I pull it out any time someone counters my
claim that C has no block structure. They're usually
under the delusion that {decl stmt} introduces a block
with its own local storage, probably because it looks
like it does, and because they are C programmers who use
lots of globals and wouldn't know what to do with block
structure even if they really had it.

But because you can jump into the middle of a block, the
compiler is forced to allocate all storage on entry to a
procedure, not entry to a block.

But it's much worse than that because you need to invoke
this procedure call before entering the block.
Preallocating the storage doesn't help you. I'll almost
guarantee you that the answer to the question "what's
supposed to happen when I do <the thing above>?" used to be
"gee, I don't know, whatever the PDP-11 compiler did." Now
of course, they're trying to rationalize the language after
the fact. I wonder if some poor bastard has tried to do a
denotational semantics for C. It would probably amount to a
translation of the PDP-11 C compiler into lambda calculus.


 No.909909

>>909679 (OP)

YOU LIE


 No.909910>>909911 >>910043 >>910350

>>909899

kill yourself rustfag


 No.909911>>909912

>>909899

>post random UNIX hater comments as code blocks to gather attention

I see, the UNIX and C hater is back again

Can you just leave the board?

Garbage collection is still a pile of shit. Even if you have hardware support for it.

>>909910

Is rustfag and UNIX hater really one and the same person?


 No.909912>>909914 >>909915

>>909911

>Is rustfag and UNIX hater

yes, the rustfag has switched tactics


 No.909914

File: 79ff06d5f5c9871⋯.mp4 (3.63 MB, 640x480, out of touch.mp4)

>>909912

Thx for informing me I guess I was a little out of touch.


 No.909915>>909919 >>909926

File: 86ddb47b6eb05f4⋯.jpg (41.19 KB, 1280x720, steve klabnik 9.jpg)

>>909912

Nah. I'm not the UNIX hater.

Also you don't have to worry about me anymore. I've pretty much stopped posting here. /tech/ is a shithole and I'm sick of it. Every time I look at the catalog it is the same /g/-tier shit, and triggering you anti-Rust fags has lost its appeal.


 No.909919

>>909915

nice try rustfag


 No.909921>>910109

>>909894

The real question is: could PGO lead to better VLIW compilers?


 No.909926

>>909915

>fag pretends he's leaving so his new fagsona won't be traced back to him

literally kids


 No.909948>>909952 >>910350

>C assumes flat memory and sequential execution

About as much as assembly, really. Memory hierarchies and instruction reordering/parallel execution are microarchitecture details and programming while having to fully account for them would be a fucking nightmare.

>Much of the difficulty in parallelism is just about fitting it into C's execution model.

Much of the difficulty in task-based parallelism comes from synchronization between execution units and avoidance of race conditions. You will have the exact same challenges with pure assembly.

>Erlang manages to avoid that

Erlang "avoids" nothing; it hides it all under abstractions.

>if they didn't have to layer a fake PDP-11 on top.

You whine about compilers having to jump through hoops, but I guarantee that removing this layer wouldn't help. Case in point: compiling for VLIW architectures (i.e. instruction parallelism directly exposed to the software) is not nearly as easy to do quickly and optimally.


 No.909949

>>909679 (OP)

I see the argument has once again shifted from RTOS kernels back to high-level languages. After reading, I have determined that what is being argued is execution models. Guess what: execution models are not tied to languages.

>C assumes flat memory and sequential execution. Processors don't have those things

A chicken-and-egg flaw in logic. Processors feed their execution pipelines from a program counter. Microarchitecture has features that try to keep the processor pipeline (also a sequence) from stalling, keep executing instructions, and if possible reduce the number of cycles required for each instruction. Symmetric multiprocessing? Multiple symmetric pipelines working in parallel.

>Look at this PDP-11, you are emulating it!

Wow, the architecture that inspired most other architectures, including the x86... no wonder I can't rice.


 No.909950>>909985

>>909679 (OP)

>C Is Not a Low-level Language

C is a glorified macro assembler


 No.909952>>909957 >>909964 >>909970

>>909948

Did you read the article? It isn't just saying that C is no longer low-level, it's also saying that pure assembly is no longer low-level. It's saying that processors have been optimized for C.


 No.909957>>909961

>>909952

>no longer low-level

C IS A HIGH LEVEL LANGUAGE TO BEGIN WITH AND ASSEMBLY STILL IS LOW LEVEL

It doesn't change because of hardware.


 No.909961>>909982

>>909957

"High-level language" means at least two things, and there's a common meaning of high-level language that usually isn't applied to C. It's sensible to make a distinction between something like C and something like Python, especially in an age where people aren't hand-writing assembly much.

Assembly has become high-level because it no longer reflects the underlying hardware well.


 No.909962>>910112

All these people who don't realize how much of a modern Intel/AMD's core transistor count is dedicated to determining which instructions are safe to execute out of order.


 No.909964>>909969

>>909952


ADD R1, R0, #1

In the ARM ISA *directly* corresponds to


1110 00 1 0100 0 0000 0001 0000 0000 0001

where "1110" is the condition field (AL, always execute), the "1" marks an immediate operand, "0100" selects the ADD operation in the ALU, "0000" reads R0 as the first operand, "0001" selects R1 as the destination at write-back, and the final twelve bits encode the immediate 1.


 No.909969>>909983

File: ee31e68a6d97543⋯.png (188.91 KB, 686x508, Screenshot from 2018-05-05….png)

>>909964

Okay, but how does the processor actually execute that behavior? You're not describing what happens, you're just describing the result.

The fact that it's abstracted away so much you're ignoring it is telling.


 No.909970

>>909952

>pure assembly is no longer low-level.

What is "low level" enough for you? Some microarchitectural details are inevitably going to be abstracted away by whatever assembly you invent.

Also, see:

>>Case in point: compiling for VLIW architectures (i.e. instruction parallelism directly exposed to the software) is not nearly as easy to do quickly and optimally.

And not "optimizing the processor for C" doesn't really help with the fundamental challenges of task-based parallelism either.


 No.909977>>909984

File: 537c99983f29792⋯.png (46.93 KB, 500x314, 500px-Pipeline_MIPS.png)

Fine, but if we go down this route, I will step through MA, then to logic, and eventually we will find ourselves at solid state physics. I'm not interested.

The fact that branch prediction exists and has gotten accurate to the 98th percentile does not change the basics of microarchitecture. I'm not going to dump countless lines of VSLI on you. I could implement a state machine that just always guesses "take the branch", and if it gets it wrong (which will be known at the write back, usually from the comparison), well, I still have to stall to get the bad result out of the pipeline. There is no abstraction to it, just many things manipulating the pipeline, attempting to keep it from stalling because of the possibility of a wrong answer.


 No.909982>>910350

>>909961

Objectively and historically it's high level.

If you compare it to java, javascript or PHP of course it becomes low level.

>Assembly has become high-level because it no longer reflects the underlying hardware well.

No. Assembly written for your architecture reflects your CPU's architecture 100%.

Assembly commands are different for every CPU.

The article is garbage, and the cause of Meltdown and Spectre was speculative execution, where things are guessed while waiting for resources.

It's true that PCs became slower because of the subsequent patches, and we should use different architectures. x86 and amd64 are full of bugs and legacy features.


 No.909983

>>909969

>You're not describing what happens

Are you asking for a microcode-tier level of abstraction?

>image

Some assemblies do require knowledge of the pipeline (e.g. to avoid pipeline hazards), and compiling C to them is not a problem. However, no sane person would rather use a language that constantly forces the programmer to think about those things.


 No.909984

>>909977

*VHDL, not VSLI.


 No.909985>>909991

>>909865

Great point, autist, but I never claimed Java is the same as C, so you're arguing about literally nothing. Both are abstract languages (you clearly don't understand this concept; you're confused into thinking you can manipulate the stack and registers from C and still have portable code). Do you also disagree that C and Java are both programming languages, on the grounds that having something in common would make them the same thing?

>>909950

Both are true. C acts as a macro assembler, but at the same time they wrote a giant specification to try to make it an abstract language that can run "efficiently" on multiple architectures. The most basic example, of course, is that integer sizes are abstract, and making invalid assumptions about them can lead to undefined behavior.


 No.909991>>910376

>>909985

Java and C are both programming languages but C is compiled to CPU instructions and runs natively while Java is compiled to bytecode and run inside a VM with automatic garbage collection. Both are abstracted and portable. And if you embed assembly in Java it will become slower, if you embed it in C it will be executed directly as if you compiled it as the assembly it is (at least in GCC).


 No.909992>>910350 >>910376

>>909679 (OP)

>abstraction

>fast

Pick one.


 No.909998

what a shit thread


 No.910043>>910047 >>910350

>>909910

>kill yourself rustfag

First, there are multiple unix haters on this board. Second, Rust users have seen the flaws of C, but have Stockholm syndrome. They just keep putting makeup on a pig thinking it will be beautiful, except all they are doing is making a mess.


 No.910047>>910376

>>910043

>there are multiple unix haters on this board

>who all coincidentally put random quotes in code boxes

Nice try sole UNIX and C hater


 No.910109

>>909921

The answer is: Yes.

Then again, it also leads to better RISC and CISC compilers, so the margin of failitude is still the same for VLIW.


 No.910112>>910147

>>909962

Same for every other ISA. You want performance - you pay the price.


 No.910147

>>910112

Certain VLIW ISAs, like Itanium, forced the compiler to compute data dependencies instead of trying to do it on-die.


 No.910350>>910373 >>910376 >>910473

>>909910

It's more likely that you're Rob Pike than it is that I'm the rustfag. Rust has too much C/UNIX bullshit, but it's better than C. I think Rust syntax sucks and they shouldn't have tried to make the language appeal to C/C++ weenies who would never use it in the first place.

>>909948

>About as much as assembly, really. Memory hierarchies and instruction reordering/parallel execution are microarchitecture details and programming while having to fully account for them would be a fucking nightmare.

That's fearmongering. CPUs that don't emulate a PDP-11 don't force the programmer to "fully account for" all that stuff. They're usually higher level than a PDP-11, with segmented memory, channel I/O, string instructions, and so on. These are all hardware features that can't be duplicated efficiently in software. None of the new RISCs have any of this because it wouldn't be useful for C and UNIX, even though it would make other languages and OSes run faster.

>>909982

>Objectively and historically it's high level.

C is historically low level, but that's not why it sucks. JavaScript is high level and it sucks too. Low level languages do not have to "decay" arrays, use null-terminated strings, have a switch statement that sucks, or any of that.

>C is a relatively ``low-level'' language. This characterization is not pejorative; it simply means that C deals with the same sort of objects that most computers do, namely characters, numbers, and addresses. These may be combined and moved about with the arithmetic and logical operators implemented by real machines.

>C provides no operations to deal directly with composite objects such as character strings, sets, lists or arrays. There are no operations that manipulate an entire array or string, although structures may be copied as a unit.

>>909992

The whole point of the article is that not having to run C would make hardware faster, so all that bullshit they have to do to make C "fast" is still slower than a simpler computer that doesn't have to run C.

>>910043

>Second, Rust users have seen the flaws of C, but have Stockholm syndrome. They just keep putting make up on a pig thinking it will be beautiful except all they are doing is making a mess.

You're right. Rust sucks because Rust users do not look outside the C and UNIX culture. They still think UNIX can be fixed with Redox too.

   Hey. This is unix-haters, not RISC-haters.

Look, those guys at berkeley decided to optimise their
chip for C and Unix programs. It says so right in their
paper. They looked at how C programs tended to behave, and
(later) how Unix behaved, and made a chip that worked that
way. So what if it's hard to make downward lexical funargs
when you have register windows? It's a special-purpose
chip, remember?

Only then companies like Sun push their snazzy RISC
machines. To make their machines more attractive they
proudly point out "and of course it uses the great
general-purpose RISC. Why it's so general purpose that it
runs Unix and C just great!"

This, I suppose, is a variation on the usual "the way
it's done in unix is by definition the general case"
disease.


 No.910365>>910376

>>909727

>just turn off cache and branch prediction lol

Branchless Doom gives you, like, 1 frame/hour if not less.


 No.910372

assumptions C makes are convenient and powerful enough to use

if you can make assumptions in your language in such a manner im all open


 No.910373

>>910350

>That's fearmongering. CPUs that don't emulate a PDP-11 don't force the programmer to "fully account for" all that stuff.

Name examples. What architectures exist that implement which features that takes this burden off programmers, even partially?

>like segmented memory

>segmented memory

>cy+3

But seriously, this has literally nothing to do with C. C works just fine with flat, segmented, and paged memory.

>channel I/O

not a problem of the language. Just write an asm wrapper or use a library that does.

>string instructions

asm


 No.910376>>910393 >>910713

>>909991

>Java and C are both programming languages but C is compiled to CPU instructions and runs natively while Java is compiled to bytecode and run inside a VM with automatic garbage collection. Both are abstracted and portable.

correct

>And if you embed assembly in Java it will become slower

false

>>909992

C has abstraction and neckbeards claim C is fast. Checkm8

>>910047

I hate C and UNIX too

>>910350

>I think Rust syntax sucks and they shouldn't have tried to make the language appeal to C/C++ weenies who would never use it in the first place.

but that's the whole point in Rust, otherwise they'd have just rewrote everything in a non-complete-shit language, like SML

>None of the new RISCs have any of this because it won't be useful for C and UNIX even though it would make other languages and OSes run faster.

What, you want your PL to call some specific assembly string instructions (assuming they're even faster in the first place) to pretend to be fast for builtin functions?

>>910365

>branchless doom

>https://github.com/xoreaxeaxeax/movfuscator/tree/master/validation/doom

>mov-only

nice meme. this isn't the same thing.

>Start an 8-bit X desktop:

>sudo startx -- :1 -depth 8 vt8

also this is suspicious. he could have some inefficient path for rendering, which is common in these sort of meme projects


 No.910385

ANY 👏 LANGUAGE 👏 IS 👏 ONLY 👏 AS 👏 GOOD 👏 AS 👏 ITS 👏 COMPILER


 No.910393>>910451

>>910376

>>embed

>false

You know that Java runs in a virtual machine? The overhead was already mentioned by >>909861

>Java will just have some more overhead than C

If you run it outside of Java, it is not embedded

>C has abstraction and neckbeards claim C is fast. Checkm8

The overhead in comparison is tiny. Most seL4 implementations are in C.

>I hate C and UNIX too

>implying you don't accidentally have the same IP

>rustfagging to himself thinking it will prove something

I'm out. At least we have a thread for straight people >>910382


 No.910397>>910422

If you honestly don't like C or C++, then you are a 65IQ individual. C/C++ is the GOAT.


 No.910422>>910517

>>910397

C didn't age well, and the only reason it's in the position it is now is because the rest of computing stagnated around it.

C is almost half a century old. It's shocking that we're still using it.

I think Ritchie would agree - he said that Unix retarded OS research by 10 years.


 No.910451

>>910393

>you are an rust fag! I am real man XDDD

I've literally not programmed anything aside from C for the last 2 years. stating basic facts about Java over and over again when it's hardly on topic isn't helping you

>>Java will just have some more overhead than C

>If you run it outside of Java, it is not embedded

u wot m8. I said Java has more overhead than C. who said anything about "embedded"? With JNI you have to put your shit in a function in a shared library (IIRC) which Java will call. so yeah, there's overhead

>i'm out

good, perhaps retards who LARP about C all day would be better suited to a board like cuckhan/g/


 No.910473>>910517

>>910350

>string instructions

>make other languages and OSes run faster

lol'd. Is this your Lisp machine bullshit again?

>None of the new RISCs have any of this because

None of the new RISCs have any of this because it's understood that pushing bloat onto the CPU architecture only increases the cost and power consumption. I'm pretty sure that most computers these days are used in mobile and embedded applications, and if you ever need extra processing power in those cases, there's a million ways to get it without bloating all CPUs with bullshit that 1% of developers will need.


 No.910483

>>909741

>If it really worked like that your PC could only run one program at a time.

LARP ALERT


 No.910517>>910520 >>910570 >>910573 >>910594

>>910422

>C didn't age well, and the only reason it's in the position it is now is because the rest of computing stagnated around it.

The stagnation happened because AT&T shills wanted it to be that way. They created the "UNIX is right and everyone else is wrong" mantra, which goes along with the mentality that you should never fix anything because all mistakes are user error. If you're coming from VMS and you say the UNIX way is broken and sucks, the shills trained the weenies to say that VMS is the problem and UNIX is always right. A language where 4[a-1] means a[4-1] is obviously broken beyond repair, but weenies still defend that bullshit.

>C is almost half a century old. It's shocking that we're still using it.

What's more shocking is that some languages that existed before C were already better than C is in 2018.

>I think Ritchie would agree - he said that Unix retarded OS research by 10 years.

More like 50 years, although if that quote was from 1980 or earlier it was probably true at the time.

>>910473

>lol'd. Is this your Lisp machine bullshit again?

Lisp machines are not the only thing besides UNIX and other C-based operating systems. Instructions for segmented memory and strings would be useful to most programmers. If the file system worked the way flat memory model does, it would suck. The main Multics idea was to make memory work more like files so all memory is in the file system and made of segments that can change their size like files do. This is still a good idea even though the MMUs of most processors don't support it.

>None of the new RISCs have any of this because it's understood that pushing bloat onto the CPU architecture only increases the cost and power consumption. I'm pretty sure that most computers these days are used in mobile and embedded application, and if you ever need extra processing power in those cases, there's a million ways to do it without bloating all CPUs with bullshit that 1% of developers will need.

It might use an extra 1% of silicon to benefit 90% of developers, but I don't know the exact numbers. RISC fanatics used to say FPUs were bloat and you should do floating point in software, but the facts showed that they were wrong and now they have FPUs too.

    This poor user tried to use Unix's poor excuse for
DEFSYSTEM. He is immediately sucked into the Unix "group of
uncooperative tools" philosophy, with a dash of the usual
unix braindead mailer lossage for old times' sake.

Of course, used to the usual unix weenie response of
"no, the tool's not broken, it was user error" the poor user
sadly (and incorrectly) concluded that it was human error,
not unix braindamage, which led to his travails.

    Continuing in the Unix mail tradition of adding
tangential remarks,

Likewise,

I've always thought that if Lisp were a ball of mud,
and APL a diamond, that C++ was a roll of razor wire.

That comparison of Lisp and APL is due to Alan Perlis - he
actually described APL as a crystal. (For those who haven't
seen the reasoning, it was Alan's comment on why everyone
seemed to be able to add to Lisp, while APL seemed
remarkably stable: Adding to a crystal is very hard, because
you have to be consistent with all its symmetry and
structure. In general, if you add to a crystal, you get a
mess. On the other hand, if you add more mud to a ball of
mud, it's STILL a ball of mud.)

To me, C is like a ball. Looked at from afar, it's nice and
smooth. If you come closer, though, you'll see little
cracks and crazes all through it.

C++, on the other hand, is the C ball pumped full of too
much (hot) air. The diameter has doubled, tripled, and
more. All those little cracks and crazes have now grown
into gaping canyons. You wonder why the thing hasn't just
exploded and blown away.

BTW, Alan Perlis was at various times heard to say that
(C|Unix) had set back the state of computer science by
(10|15) years.


 No.910520>>910527

>>910517

Like a fucking clock.


 No.910527>>910534

>>910520

not an argument


 No.910533

>>909683

The non literal, imageboard memey type, since plenty of socially competent normalfags who work as devs have hate boners for certain languages.


 No.910534

>>910527

>implying it was meant to be an argument or directed at you


 No.910570>>910594 >>910623

>>910517

Please explain that segmented memory bullshit. Why would anyone want to use that instead of paging these days? And what does C have to do with it?


 No.910573>>910623 >>910656 >>911359

>>910517

>A language where 4[a-1] means a[4-1]

Do you know what the commutative property is? Do you know how math (arithmetic) works?


 No.910577

>>909681

I'm glad we have a dismissive kike serial first poster who obviously doesn't even read or comprehend threads. We all know you're just here to try to ruin the board and it simply makes everybody more antisemitic every time you do it.

Great job!


 No.910578

>>909683

It's more Rust shilling.


 No.910594

>>910517

>segments that can change their size like files do

Elaborate. And like >>910570 is asking, how is that better than paging?

>It might use an extra 1% of silicon to benefit 90% of developers

In most embedded applications (except maybe something like a router with a complex firewall?), string processing is not a performance bottleneck, or is not needed at all; that extra 1% of silicon would just mean a shorter battery life and a higher cost.

>FPUs

FPUs were first added as optional coprocessors. It's only after much experience with them that their general usefulness was realized. I'm all for a completely optional string coprocessor that isn't part of the main CPU architecture.


 No.910623>>910625 >>910632 >>910638 >>910656

>>910570

>Please explain that segmented memory bullshit. Why would anyone want to use that instead of paging these days? And what does C have to do with it?

Multics has both. Paging is about swapping and organizing physical RAM into virtual memory. Segmentation is about organizing data into segments that can grow and shrink independently like files. Instead of reading and copying data from disk, all data is directly addressable as memory and paged in and out as needed, which the article calls demand paging. The stack and other parts of the process memory are also segments which are part of the file system. What C has to do with it is that the C weenies didn't care about segmentation and blamed C's inability to work well on these architectures on segmentation.

http://multicians.org/multics-vm.html

>The fundamental advantage of direct addressability is that information copying is no longer mandatory. Since all instructions and data items in the system are processor-addressable, duplication of procedures and data is unnecessary. This means, for example, that core images of programs need not be prepared by loading and binding together copies of procedures before execution; instead, the original procedures may be used directly in a computation. Also, partial copies of data files need not be read, via requests to an I/O system, into core buffers for subsequent use and then returned, by means of another I/O request, to their original locations; instead the central processor executing a computation can directly address just those required data items in the original version of the file. This kind of access to information promises a very attractive reduction in program complexity for the programmer.

>In Multics, the number of segment descriptors available to each computation is sufficiently large to provide a segment descriptor for each file that the user program needs to reference in most applications. The availability of a large number of segment descriptors to each computation makes it practical for the Multics supervisor to associate segment descriptors with files upon first reference to the information by a user pro gram, relieving the user from the responsibility of allocating and deallocating segment descriptors. In addition, the relatively large number of segment descriptors eliminates the need for buffering, allowing the user program to operate directly on the original information rather than on a copy of the information. In this way, all information retains its identity and independent attributes of length and access privilege regardless of its physical location in main memory or on secondary storage. As a result, the Multics user no longer uses files; instead he references all information as segments, which are directly accessible to his programs.

>>910573

>Do you know what the commutative property is? Do you know how math (arithmetic) works?

It sucks because it doesn't work how arithmetic works. a - 1 should mean what it does in math, so if a is [1,2,3,4], a - 1 should be [0,1,2,3]. That reminds me of more bullshit "UNIX math" from the UNIX language JavaScript. [1,2,3,4] == [1,2,3,4] is false, [1,2,3,4] == "1,2,3,4" is true, and [1,2,3,4] - 1 == [1,2,3,4] - 1 is false, but that's because they're both NaN. Someone's going to defend that too.

>ARM's SVE (Scalar Vector Extensions)---and similar work from Berkeley4—provides another glimpse at a better interface between program and hardware. Conventional vector units expose fixed-sized vector operations and expect the compiler to try to map the algorithm to the available unit size. In contrast, the SVE interface expects the programmer to describe the degree of parallelism available and relies on the hardware to map it down to the available number of execution units. Using this from C is complex, because the autovectorizer must infer the available parallelism from loop structures. Generating code for it from a functional-style map operation is trivial: the length of the mapped array is the degree of available parallelism.

          The lesson I just learned is: When developing
with Make on 2 different machines, make sure
their clocks do not differ by more than one
minute.

Raise your hand if you remember when file systems
had version numbers.

Don't. The paranoiac weenies in charge of Unix
proselytizing will shoot you dead. They don't like
people who know the truth.

Heck, I remember when the filesystem was mapped into the
address space! I even re<BANG!>


 No.910625>>910656 >>911122

>>910623

>making excuses for not knowing what commutative property is

Your words mean nothing.


 No.910632>>910666 >>911122

>>910623

>Instead of reading and copying data from disk, all data is directly addressable as memory and paged in and out as needed, which the article calls demand paging.

So, memory-mapped I/O. The lack of memory-mapped I/O for disk access on hardware architectures is not a result of C but rather that there's no point in implementing it. Also you didn't explain why we need segmentation for that (and why it's better than paging), or why paging "in and out" is not "reading and copying data from disk".

>What C has to do with it is that the C weenies didn't care about segmentation and blamed C's inability to work well on these architectures on segmentation.

Did you read the article? It says:

>The C model, in contrast, was intended to allow implementation on a variety of targets, including segmented architectures (where a pointer might be a segment ID and an offset) and even garbage-collected virtual machines. The C specification is careful to restrict valid operations on pointers to avoid problems for such systems. The response to Defect Report 2601 included the notion of pointer provenance in the definition of pointer:

>"Implementations are permitted to track the origins of a bit pattern and treat those representing an indeterminate value as distinct from those representing a determined value. They may also treat pointers based on different origins as distinct even though they are bitwise identical."


 No.910638

>>910623

Also let's talk about C and hardware instead of memory allocation mechanisms implemented by operating systems.


 No.910650>>911088

Buffer overflows, null pointers, segmentation faults, weak typing, and manual memory management: The Universal Programming Language®


 No.910656>>910657

>>910573

>>910623

>>910625

So, [] (as in a[i]) is an operation that takes the ith element of collection a. There is nothing commutative about this operation: the left-hand side is a collection, and the right-hand side is an index into it. From this definition, it should be obvious that the ith element of collection a is not the same as the ath element of collection i. In addition, in C, i must be of a type that is coercible to an integer.

a = { 1 : "a", 2 : "b" }

i = 2

a[i] == "b"

i[a] is a type error


 No.910657>>910658

>>910656

It's clear that you do not know how the C standard defines array evaluation.


 No.910658>>910659

>>910657

I know it, I just reject it as nonsensical and unintelligent.


 No.910659>>910661

>>910658

No you don't.


 No.910661>>910662

>>910659

C's Standard behavior is nonsensical for the definition for array subscripting that I clearly gave.


 No.910662>>910663

>>910661

No you didn't.


 No.910663>>910668 >>910848

>>910662

In fact, C's definition of [] is so fucktarded that the retards who wrote c++, the ones who thought overriding << and >> were a good idea (as in

std::cout << "Fuck you";
), gave up on it.


#include <map>
#include <string>
#include <iostream>

int main (void)
{
    std::map<std::string, int> shit;
    std::string crap = "hello";
    shit[crap] = 5;
    std::cout << crap[shit];    // does not compile: no operator[] taking a map
    return 0;
}

C's definition of a[i], as syntactic sugar for *(a + i) is retarded and goes against common sense notions of what that operation does. That I clearly showed and you are a pajeet for thinking otherwise.


 No.910666>>910691

>>910632

>"Implementations are permitted to track the origins of a bit pattern and treat those representing an indeterminate value as distinct from those representing a determined value. They may also treat pointers based on different origins as distinct even though they are bitwise identical."

Hmmm, I can't find any sources on this quote from the article.


 No.910668>>910674 >>910677 >>911359 >>911394

>>910663

>*(a + i) == a[i]

You finally got it. Now do you see how the commutative property applies?

>a + i == i + a

Your criticism is not against C but against a universal truth of arithmetic. Prove it wrong and you can win $1 million. That won't happen though, so you are grasping at straws to form any kind of argument for the sake of argument.

Aside: C++ != C.


 No.910674

>>910668

I'm so sorry that you're a worthless pajeet who mistakenly believes that he has half a brain.


 No.910677>>910693

>>910668

>Whatever C says an operator should do is right

Even if the standard defines the [] operator that way, it doesn't mean that definition is correct.

If C defined the + operator to add the numbers only when the first operand was smaller than the second and return 0 otherwise, would you think that's how the + operator should work? Treating the C standard as a bible on how things should be is not a wise idea.


 No.910691>>910722 >>910729 >>910808

>>910666

It's easy to test.

$ cat test.c
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
char *s = "hello";
char *s2 = (char *)(int)s;
printf("%x\n", s);
printf("%x\n", s2);
if (s == s2) {
puts("equal");
} else {
puts("unequal");
}
return 0;
}
$ gcc test.c
test.c: In function ‘main’:
test.c:6:24: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
char *s2 = (char *)(int)s;
^
test.c:6:16: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
char *s2 = (char *)(int)s;
^
$ ./a.out
aa0a6794
aa0a6794
unequal
$ gcc -O0 test.c
test.c: In function ‘main’:
test.c:6:24: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
char *s2 = (char *)(int)s;
^
test.c:6:16: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
char *s2 = (char *)(int)s;
^
$ ./a.out
6a727794
6a727794
unequal
$ tcc -run test.c
46414540
46414540
unequal
$ tcc test.c
$ ./a.out
6007ac
6007ac
equal
$ clang test.c
test.c:6:16: warning: cast to 'char *' from smaller integer type 'int'
[-Wint-to-pointer-cast]
char *s2 = (char *)(int)s;
^
test.c:7:20: warning: format specifies type 'unsigned int' but the argument has
type 'char *' [-Wformat]
printf("%x\n", s);
~~ ^
%s
test.c:8:20: warning: format specifies type 'unsigned int' but the argument has
type 'char *' [-Wformat]
printf("%x\n", s2);
~~ ^~
%s
3 warnings generated.
$ ./a.out
400664
400664
equal

Very nasty.


 No.910693

>>910677

It's math.


 No.910713>>910721 >>911359

>>910376

>I don't know what 0 cost abstraction is


 No.910721

>>910713

It isn't


 No.910722>>910724 >>910729

>>910691

>unequal

>unequal

>unequal

What is this sorcery?


 No.910724

>>910722

Just because two variables have the same type and the same value doesn't mean they have to compare equal.

s points to a string literal, but in the last check, s2 points to a pointer cast from an integer. There's no reason for a pointer cast from an integer to be a valid pointer, right? It used to be an integer, and why would an integer be a valid memory address? So there's no need for it to be equal to any pointer that we know is valid. If they're equal it's probably a coincidence. So the standard says that it's ok to assume they're unequal. You don't have to assume it, but you're allowed to.

C is not your friend. Compilers will do anything they can get away with, and they don't even have the decency to be consistent about it.


 No.910729

>>910722

>>910691

What do those warnings mean?


 No.910754

>>909741

You can only run one program at a time on each core. The CPU just stops it and gives control to the kernel to start another one.


 No.910756

>>909741

>because they basically make the same CPUs with only slight differences. Otherwise your programs wouldn't run on them without recompiling.

Intel and AMDs execution units are completely different actually. The only thing that's the same is what instructions are exposed to the programmers


 No.910808

>>910691

>char *s2 = (char *)(int)s;

Casting to int truncates your pointer. Output them with %llx and you'll see they're not equal. If you cast them to long long int instead, gcc will compare them as equal.


 No.910848>>910883 >>910904 >>911058

>>910663

haha do you know what an array is when you declare one? an array is syntactic sugar for a constant pointer. it is not retarded, it is basic computing. try to think of a better way to do it, master programmer.


 No.910883>>910885 >>910889 >>910905

>>910848

>syntactic sugar

Syntactic sugar causes cancer of the semicolon.


 No.910885

>>910883

This thread is cancer of /tech/.


 No.910889>>910952

>>910883

Whats wrong with semi colons


 No.910891>>910902

Semicolons are only acceptable as binary operators for statement composition.

If a semicolon is needed after the last statement then something is deeply wrong.


 No.910902

>>910891

A semicolon in C is not a fucking operator (binary or otherwise), it's punctuation just like the commas separating function arguments (which are something different than the comma operator). It's a statement terminator/delimiter.


 No.910904>>910946

>>910848

>an array is syntactic sugar for a constant pointer.

kind of. They mostly behave like constant pointers, but according to the standard, they're objects with their own distinct set of rules.


 No.910905>>910952

>>910883

What the fuck does that even have to do anything he said, do you you have ADD?


 No.910946>>910957

>>910904

like what


 No.910952

>>910905

>>910889

It's a fucking joke you autistic faggots.

Too much syntactic sugar is bad though.

If someone has to read the code again later, they have to wade through all those mountains of sugar to find out what something is supposed to do.


 No.910957>>911025 >>911152

>>910946

#include <stdio.h>

int main()
{
    char *a = "hello";
    char b[] = "world";
    printf("%llx %llx\n", a, &a);
    printf("%llx %llx\n", b, &b);
    printf("%zu %zu\n", sizeof a, sizeof b);  /* a is a pointer (8), b is the array itself (6) */
    return 0;
}

$ ./test
55db988617a4 7ffe87952108
7ffe87952112 7ffe87952112
8 6


 No.911025>>911028

>>910957

This isn't possible in Rust. C BTFO once more.


 No.911028>>911152

>>911025

What's not possible?


 No.911055

obviously it's not low level, that's why it's portable like javascript


 No.911058>>911067 >>911075 >>911122

>>910848

> Hurr durr! I can parrot the C Standard and that makes me an expert programmer! Dee Tee Dee! Fries Done!

Any time I want to use an array, I generally use a std::vector or java.util.ArrayList. C arrays are brain-dead. What is important is to remember WHY C arrays are brain-dead.

    struct dirent
    {
        char  name[14];
        short inode;
    };

If an array name is a pointer, where does it go in this data structure? How big is this data structure? What gets written out to disk when this record is written as raw binary? An array name was made syntactic sugar for a const pointer to the data in order for a directory data structure to be "cleaner".


 No.911067>>911122

>>911058

the problem with vectors is they consume a lot more memory. A LOT more memory, and for the majority of things you are going to write in C you dont need a vector. you will just add a lot more bloat to C if you want all arrays to work like vectors.


 No.911075

>>911058

Show us your work to replace it.


 No.911088

>>910650

>complaining about managing your own memory

brainlet programmer spotted


 No.911107

I guess i'll just start programming in llvm ir then


 No.911122>>911125 >>911252

>>910625

>>making excuses for not knowing what commutative property is

It doesn't apply to indexing, so a[i] meaning the same thing as i[a] is stupid. If someone made a language where f(x) was the same as x(f), and it made it impossible to pass a function to a function or have functions with more than one argument (and other flaws I haven't noticed yet), that would be stupid too.

>>910632

UNIX I/O is based on a tape drive paradigm and copying portions of data to and from virtual tapes. It even has rewind and seek functions. Pipes are character-based tapes, which suck because our computers have a lot more random-access capabilities than the PDP-11's tape drives. Multics I/O is based on a virtual memory paradigm where you use normal instructions without having to copy anything and paging is handled by the OS. Separate processes share segments, so it's applied across the whole system.

>>911058

>If an array name is a pointer, where does it go in this data structure? How big is this data structure? What gets written out to disk when this record is written as raw binary? An array name was made syntactic sugar for a const pointer to the data in order for a directory data structure to be "cleaner".

The "constant pointer" exists only in the minds of C weenies to explain why arrays and strings are treated worse than other data types. There is no pointer in the data structure. The equivalent data structure in Ada, PL/I, Pascal, and so on has the same layout, but the array name is an array with all the properties of an array, not a "constant pointer", so this C selective "brain decay" bullshit isn't necessary.

>>911067

This is why good languages have different types of arrays so the programmer can choose the best one for the task at hand. UNIX weenies hate giving the programmer choice unless that programmer happens to be a UNIX systems programmer, then the OOM killers and brain-dead strings are "just a choice".

   > There's nothing wrong with C as it was originally 
> designed,
> ...

bullshite.

Since when is it acceptable for a language to
incorporate two entirely diverse concepts such as setf
and cadr into the same operator (=),
...

And what can you say about a language which is largely used
for processing strings (how much time does Unix spend
comparing characters to zero and adding one to pointers?)
but which has no string data type? Can't decide if an array
is an aggregate or an address? Doesn't know if strings are
constants or variables? Allows them as initializers
sometimes but not others?

(I realize this does not really address the original topic,
but who really cares. "There's nothing wrong with C as it
was originally designed" is a dangerously positive sweeping
statement to be found in a message posted to this list.)


 No.911125

>>911122

Shut the fuck up Stallman.


 No.911152>>911258 >>911262

>>910957

>>911028

fn main() {
    let a = &b"test1"[0];
    let b = b"test2";

    println!("{:p} {:p}", a, a as *const _);
    println!("{:p} {:p}", b, b as *const _);
    println!("{} {}", size_of(a), size_of(b));
}

fn size_of<T>(_: T) -> usize {
    std::mem::size_of::<T>()
}
0x56285273b140 0x56285273b140
0x56285273b148 0x56285273b148
8 8
https://play.rust-lang.org/?gist=4ae0d9d82f42ded2643feb024de7006e


 No.911252

>>911122

I never really needed a vector when writing C since you could just make a linked list yourself very easily . although I can see how it sucks that there isnt a standard library to do this with.


 No.911258>>911259

>>911152

You still aren't saying what's not possible.


 No.911259>>911261

>>911258

>They mostly behave like constant pointers, but according to the standard, they're objects with their own distinct set of rules.

This isn't possible in Rust.


 No.911261

>>911259

So what.


 No.911262

>>911152

Rust syntax looks like cancer desu.

>syntax inspired by c++

Mystery solved.


 No.911359>>911384 >>911386 >>911396

>>910573

not an argument

>>910668

not an argument

4[a] being valid syntax is unnecessary, just like most of what's going on in C

>>910713

>you don't know what zero cost abstraction is

i have a good guess though: a meme?


 No.911384>>912359

>>911359

dont do it that way then. you can subtract unrelated pointers, you can add a garbage offset to a pointer. it doesnt make sense to do either of these things but you can do it. so many things are like that. you can create functions that will add 2 numbers together. its stupid but you can do it. whats your point in complaining.


 No.911386

File (hide): 2e893b1392c406a⋯.mp4 (611.29 KB, 1280x720, 16:9, shut-up-richard.mp4) (h) (u) [play once] [loop]


 No.911394>>911405

>>910668

>Aside: C++ != C.

Actually C++ == C. ++C != C.


 No.911396>>912359

File (hide): 952f79d3d3bab1c⋯.png (219.6 KB, 500x499, 500:499, Dab.png) (h) (u)

>>911359

>i have a good guess though: a meme?

>t. Dumb and proud


 No.911405>>911406

>>911394

You're forgetting that "C" gets evaluated after "C++" has been executed.

$ cat test.c
#include <stdio.h>
int main (void)
{
    int C = 0;
    puts(C++ == C ? "C++ == C" : "C++ != C");
    puts(++C == C ? "++C == C" : "++C != C");
}
$ gcc test.c
$ ./a.out
C++ != C
++C == C

Except that C++ == C is actually undefined behavior (the read of C on the right is unsequenced relative to the side effect of C++), so it all depends on what your compiler wants to spit out today, but it's the idea that counts.


 No.911406>>911410

>>911405

>discussing a basic operator

why wont gcc let me go like

++integer++

:(


 No.911410

>>911406

Dude, gcc is open source. Just modify it to let you do that.


 No.911429>>911453

What does the unary plus operator do?


 No.911453>>911454

>>911429

It converts captureless lambdas into function pointers :^)

#include <type_traits>
int main()
{
    auto x = +[]{};
    static_assert(std::is_same_v<decltype(x), void(*)()>);
}


 No.911454>>911455

>>911453

No one cares.


 No.911455>>911460

>>911454

I do (^:


 No.911460>>911477

>>911455

No one cares.


 No.911469

Just checking in once a year. Is Rust still failing to replace C in the Linux kernel?


 No.911477

File (hide): be9336331c58dfe⋯.gif (860.14 KB, 250x250, 1:1, 1413947157432.gif) (h) (u)

>>911460

No need to be shy, I know you care about those sweet, sweet type conversion rules


 No.911486>>911494

>i dont understand recursion

>but i think recursion is always slow

found the larper.

study harder kid.


 No.911494>>911496 >>911517 >>912359

>>911486

Recursion is always either slow or will become slow. Consider what happens to code that depends on the compiler detecting and optimizing a tail recursion when it goes through years of maintenance. Someone adds a debugging statement at the end of the function and suddenly everything's on fire for reasons that are not obvious from looking at 3 lines of diff context.

It is always a mistake to write recursive code as it is very fragile and depends on the compiler realizing what you typed was retarded.


 No.911496

>>911494

sure recursion is often problematic in c.

but in the functional paradigm there are some cool examples of very fast code. when you do away with side effects the compiler can optimize much more aggressively, and combined with types, if you tell the compiler that what you are dealing with has a certain algebraic structure, it can use this to its advantage. maybe in the future, with better type systems, this could become common practice.


 No.911517

>>911494

>TCO is bad because someone might do something retarded and break TCO

shut the fuck up


 No.911724

>>909679 (OP)

> Processors don't have those things,

Embedded ones might :^)


 No.912359>>912400 >>912406

>>911384

>i don't understand basic PL design

>>911396

this is b8. zero cost abstraction is a meme and neither C nor C++ have it

>>911494

>Someone adds a debugging statement at the end of the function and suddenly everything's on fire for reasons that are not obvious from looking at 3 lines of diff context.

You just explained how retard coders operate. They think they can understand code without reading it and add stuff based on their pretend understanding.

>It is always a mistake to write recursive code

Only in C because you have no guarantees about the stack. Otherwise it's retarded to turn recursive code into iterative.


 No.912400>>913101

>>912359

>int i = 4;

>i++;

Here, a snippet of code that has multiple zero cost abstractions


 No.912406>>912993

>>912359

>You just explained how retard coders operate.

Like Brian Kernighan? He's a big fan of printf debugging.


 No.912993

>>912406

>>You just explained how retard coders operate.

>Like Brian Kernighan? He's a big fan of printf debugging.

When you have to use a UNIX debugger, adding extra printf statements and recompiling doesn't seem like such a bad idea.

Nice theory, but I'm afraid you are too generous. When I was
porting a Scheme compiler to the RT, I managed to make adb
-- nothing like a fancy, source-level debugger, or anything,
just dumb ol' adb -- dump core with about 6 or 7
keystrokes. Not arcane keystrokes, either. Shit you would
tend to type at a debugger.

It turned out that the symbol table lookup code had a
fencepost error that barfed if a particular symbol happened
to be first in the list. The C compiler never did this,
so... it's good enough for Unix! Note that the RT had been
around for *years* when I found this bug; it wasn't raw
software.

The RT implementation of adb also had the interesting
feature of incorrectly printing the value of one of the
registers (r0). After I had spent a pleasant, relaxing
afternoon ineffectively trying to debug my code, and
discovered this, I remarked upon it to my RT-hacking
friends. They replied, "Oh, yeah. The debugger doesn't print
out r0 correctly." In order to use adb, it seems, you just
had to know this fact from the grapevine.

I was much amused at the idea of a debugger with large,
obvious bugs.


 No.913101>>913428 >>913585

>>912400

this doesn't even approach the definition of an argument


 No.913428>>913448

>>913101

You claimed C doesn't have zero cost abstraction and I showed you a quick example where it is present, do you need me to take you through it step by step?


 No.913448>>913585

>>913428

no, you need to fuck off


 No.913585

>>913101

>>913448

>whines about no argument

>can't refute basic point

Wew lad.




170 replies | 12 images