
/tech/ - Technology



No.959117>>959297 >>959847

Brainlet gamer here, is RISC-V a viable alternative to x86?

Just saw the LTT video but I still have questions, like why is it stuck on shitty 28nm?

 No.959124>>959134

No, RISC-V targets embedded applications, like ARM Thumb or Intel Quark. In theory, the ISA could be scaled up to match a lower-end ARMv8 or Intel Atom, but that would probably be a pretty significant effort.

That said, for something that isn't particularly demanding, like an RPi-type retrogaming device, it could be useful.


 No.959134>>959137

>>959124

So is it gonna be used like a co-processor for normal x86 anytime soon? Would be neat when Intel's new Larrabee-based GPU comes out.


 No.959137>>959140 >>959148

>>959134

Microcontroller is the word you're looking for. In its current state, RISC-V isn't up to par with ARM. It is free and open though. Nvidia and Western Digital are looking to work RISC-V microcontrollers into some of their products.


 No.959140

>>959137

Wonder if RISC-V is gonna be used for ray tracing RTX memes.


 No.959148

RISC-V and PowerPC are viable alternatives to x86 if and only if you only use emulators and open source games. No Wine, and no Steam for other platforms. It is possible to hack Wine to run games on other architectures, i.e. RISC-V, but no one has implemented the QEMU functionality yet.

>>959137

>RISC-V not on par with ARM

That's because RISC-V is light years ahead of it, without the hardware backdoors and forced pay model for the same piece of silicon. Nvidia has already implemented RISC-V in their GTX 900 series of GPUs and onward.


 No.959150

It's a freetard ISA for companies to save money with and for the turd world. The ARM guys are scared as it has the potential to end their licensing business but I don't think anyone else feels threatened. It might take over phones some day.


 No.959154>>959164

Anyone used a SiFive? I'd get one but they're like 50-60 bucks.


 No.959164>>959165

>>959154

>I'd get one but they're like 50-60 bucks

POORFAG DETECTED

BITCH I MADE 160K OFF LONG AMD AND INTEL SHORTS SO FAR!


 No.959165

>>959164

APOLOGIES, I MEANT:

(LONG AMD AND INTEL) AND TESLA SHORTS SO FAR


 No.959224>>959670

If it's open source they could still just write nasty code. I don't see how it's any better, apart from not having as many instruction-set extensions and no Management Engine.


 No.959297>>959301 >>959448 >>959612

>>959117 (OP)

>Brainlet gamer here

>Just saw the ltt video

I fucking cringed when I saw that video show up in my recommendations. Linus knows very little about tech beyond intermediate-level things and Windows.

inb4 we get flooded with more retards asking stupid questions about this.

>is risc-v a viable alternative to x86?

Not right now. At the moment there are only 3 ISAs which people care about: x86, PPC64LE, and ARM. I can't be bothered going into details, but only hipsters care about RISC-V at the moment (from a use perspective; there are plenty of smart people who care about it from a development perspective). It's still lacking some really important things like vector extensions (currently being developed), which prevents it from gaining traction.

>why is it stuck on shitty 28nm?

Because SiFive are virtue signalers rather than people who actually make products people want, and 28nm was the only thing they could afford. Even at 28nm it would have still cost hundreds of thousands to put that chip into production, and the final product blows the development cost out even more. Given how niche it is, it's unlikely to sell huge volumes; people buy Raspberry Pis because of the community and software ecosystem around them, and the SiFive doesn't have anything close to that.

They also screwed everyone over by licensing IP for critical parts of the chip, like the memory controllers, and as a result can't release source code pertaining to those parts, so the chip can never be properly libre. They literally told the community to reverse engineer the compiled binary blobs if they wanted to make libre alternatives to them. The POWER9 CPUs are more libre than the SiFive CPUs.

The most cringy part about the announcement between SiFive and Nvidia is the retards going on about how we are finally going to get an open source GPU. Nvidia is literally the worst tech company in existence when it comes to supporting the open source community, and the lengths that the Nouveau developers have to go to are completely absurd. The chances of them actually releasing their current generation IP (or even older generation IP) are stupidly low, and what we will probably end up with is a RISC-V version of the Tegras, which are awful for so many reasons.


 No.959301

>>959297

Thanks, I knew his video was shit but I wanted to know what he was skipping over. What a joke. Cheers anon.


 No.959407

You'll have to learn to love Nethack.


 No.959448>>959566

>>959297

Just wait until LTT gets his hands on a Talos II.


 No.959530

It uses a BSD license.

To try and break it down for you, it means that any chip that comes close to matching or exceeding x86 in terms of gaming will likely come through a heavily-funded proprietary fork. Also LTT is cringey.


 No.959566

>>959448

He probably would have done it by now if he was ever going to; he doesn't really care about anything which can't run Windows, isn't Apple, or isn't garbage of the month.


 No.959612>>959770

>>959297

Linus knows how to build a PC and run Cinebench, Prime95 and Unigine Heaven. That's it.


 No.959670>>959675

>>959224

Because if the company making your CPU is bad (like Intel with its tons of vulns and the ME), you have more than one alternative because it's open: anyone can make RISC-V CPUs without paying royalties.

With x86 you basically have two choices, Intel and AMD.


 No.959675>>959770

>>959670

Hasn't x86 reached its limit besides bigger dies and more cores? Apparently 7nm and 5nm ain't that much better


 No.959678>>959770

Having fewer instructions is not outright better like LTT made it out to be. In theory having more instructions makes the CPU faster, but it depends on how they are done internally and whether they are pipelined. For power consumption and ease of programming, though, the more instructions the worse.


 No.959770>>960007

>>959675

>Apparently 7nm and 5nm ain't that much better

Any feature size shrink will result in lower power consumption and smaller dies (which makes the chips cheaper to produce since you can fit more on a wafer). The problem that the pure-play fabs like GlobalFoundries and TSMC are having is that the yields still aren't acceptable; it's still better than Intel, though, who can't even get 10nm sorted.

>>959612

That still puts him ahead of many of his viewers. If you look at everyone who uses a computer on a regular basis, just being able to assemble a computer from parts puts you in the top 10% in terms of skill, sadly.

>>959678

One of the problems with x86 is that only a subset of instructions are actually used regularly. If you were to analyze the compiled machine code from various programs, I would suspect that the result would look like a Pareto distribution where 20% of the instruction set makes up 80% of the compiled binary. One of the arguments for RISC is that if you got rid of the 80% of x86 instructions which are only used 20% of the time, then you need fewer transistors per core, which results in lower power consumption and less silicon area (which allows for more physical cores and/or cache); the instructions you cut out can be achieved by a combination of the remaining ones.

CISC became popular when memory and storage were expensive; part of the reasoning was that the more work each instruction encoded, the smaller the compiled binary could be, which allowed more memory to be devoted to data. RISC came about when memory and storage started becoming cheaper and researchers noticed that some of the more complex instructions on x86 could be done in fewer clock cycles with a combination of more primitive instructions.
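
If you want to eyeball that 80/20 claim yourself, here's a rough sketch (assuming a Linux box with binutils' objdump on PATH; /bin/ls is just a stand-in binary):

[code]
import subprocess
from collections import Counter

# Histogram the instruction mnemonics objdump finds in a compiled binary.
def mnemonic_histogram(binary="/bin/ls"):
    out = subprocess.run(["objdump", "-d", binary],
                         capture_output=True, text=True, check=True).stdout
    counts = Counter()
    for line in out.splitlines():
        parts = line.split("\t")  # addr \t hex bytes \t mnemonic operands
        if len(parts) >= 3 and parts[2].split():
            counts[parts[2].split()[0]] += 1
    return counts

counts = mnemonic_histogram()
total = sum(counts.values())
top = counts.most_common(20)
print(f"distinct mnemonics: {len(counts)}")
for mnem, n in top:
    print(f"{mnem:10s} {100 * n / total:5.1f}%")
print(f"top 20 cover {100 * sum(n for _, n in top) / total:.1f}% of all instructions")
[/code]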


 No.959847>>959879

>>959117 (OP)

If OpenPOWER couldn't kill x86, your shitty embedded RISC-V has a snowball's chance in hell.


 No.959851>>959864 >>960030 >>960237

File: Screenshot-2018-8-25 Membe….png (444.47 KB, 1668x5197)

>Is risc-v a viable alternative

Almost every ISA is a viable alternative; it's the CPU microarch design + compilation toolchain that makes AMD and Intel dominant.

>Why is it stuck on shitty 28nm?

Because that is what that specific CPU is fabbed in; it's a prototyping board for you to do bare-metal testing of hardware and software.

You already have 12nm RISC-V cores in Nvidia GPUs.

To more seriously answer your question and to correct some misunderstandings here: the goal of RISC-V is to replace all ISAs, which is why there is a 128-bit version of the spec. It can work just fine as a GPU or a CPU or in any other desired computational work requiring an ISA.

Given how many partners they have, as Linus said, you will have RISC-V everywhere in your rig soon. Western Digital have already pledged to use RISC-V; it will be used for RAID, RAM, GPU, SSD, NIC and all other peripheral goodies. CPUs will come in time, but that will be hard. Gaming? Never: if you make something that does x86 emulation too well, expect to have Intel sue you into non-existence.

Personally I think they should just partner with PC Engines and make an APU2 with RISC-V; I'd buy and deploy it.


 No.959864>>959944 >>960237

>>959851

In time LLVM will make x86 emulation unnecessary.


 No.959879>>959957 >>959960

File: unrealonpower.png (204.21 KB, 585x703)

>>959847

OpenPOWER is only 5 years old and was meant to start off by bringing the architecture to data centers and the server market. They've given up on embedded Power for the consumer ever since the Cell disaster in the mid 00's.

Their new strategy of having several variants (or modules) of their POWER9/10 chips might pay off in the long run if they're able to get enough momentum going. Imagine it's 2024 and the POWER11 chip is coming out with 96 cores/384 threads and half a gig of cache; you'll be able to get a 16 or 32 core binned chip for next to nothing with dozens of motherboards available for it.

It's not going to kill off x86 anytime soon, but they have a good shot at bringing it back to desktop PCs in the next decade among GNU/Linux users who are sick of Intel & AMD. Even gaming on it won't be that much of an issue, as all the major game engines are designed to be easily ported to different architectures; remember that the last generation of consoles were all based on Power.


 No.959944

>>959864

How, anon? How come?


 No.959957>>959967

>>959879

>They've given up on embedded Power for the consumer since the Cell disaster.

Was it really that bad? Is there any reason why something like the e5500 or e6500 can't be used for consumer PCs or in something equivalent to an RPi?


 No.959960>>959967

>>959879

>dozens of $2000+ motherboards


 No.959962

And at a certain point you have enough performance anyway. So even if something like a 2007 (Open)SPARC T2, shrunk down to 14nm, doesn't perform as well as an Intel or AMD chip, there are still lots of other fields to compete in besides raw single-thread performance: security, price, power, and threading, to name a few.

Seriously, about performance: I'm running a 4-year-old passively cooled CPU which scores an order of magnitude lower than the current high end according to PassMark. It does everything I need, including running VMs, playing full-HD video and running a bloatware browser.

Even installing and running Gentoo/Funtoo on it is not really a problem.


 No.959967>>959996

>>959957

Steve Jobs dropped Power for Intel x86 thanks to the Cell processor; Sony was begging him to use it. It was terrible at everything it was supposed to do and forced Sony to throw an Nvidia GPU into their PS3 at the last moment when they realized the 'Synergistic Processing Elements' weren't any good, which increased the price substantially. The 10-20% yield they were getting by going with a 221mm² die early on didn't help either.

And sure, you could use an e5500 or e6500, but aside from Amiga fanatics nobody would be interested in such a computer.

>>959960

You can already buy one for $1,100.


 No.959996>>960016

>>959967

IBM had no interest in getting the PPC970 into a laptop. Jobs' decision had nothing to do with Sony or Cell.


 No.960007>>960019

>>959770

Do smaller dies always mean higher clocks?


 No.960016>>960036

File: shippy-cover.jpg (38.34 KB, 400x618)

>>959996

>A former IBM executive, who worked at IBM at the time and was involved in discussions with Apple, offered his perspective in a conversation we had during dinner at a recent technology conference.

>Interestingly, IBM had hoped to amortize the cost of PowerPC on Cell, the PowerPC-based chip design now used in the Sony PlayStation, some IBM servers, and IBM Roadrunner supercomputers. Big Blue was hoping to move Apple to Cell and then get the economies of scale there, according to this person.

https://www.cnet.com/news/four-years-later-why-did-apple-drop-powerpc/

>Many people in the industry believe that Mr. Jobs is racing quietly toward a direct challenge to Microsoft and Sony in the market for digital entertainment gear for the living room. Indeed, Sony's top executives had tried to persuade Mr. Jobs to adopt a chip that I.B.M. has been developing for the next-generation Sony PlayStation.

>As it happens, Intel's was not the only alternative chip design that Apple had explored for the Mac. An executive close to Sony said that last year Mr. Jobs met in California with both Nobuyuki Idei, then the chairman and chief executive of the Japanese consumer electronics firm, and with Kenichi Kutaragi, the creator of the Sony PlayStation.

>Mr. Kutaragi tried to interest Mr. Jobs in adopting the Cell chip, which is being developed by I.B.M. for use in the coming PlayStation 3, in exchange for access to certain Sony technologies. Mr. Jobs rejected the idea, telling Mr. Kutaragi that he was disappointed with the Cell design, which he believes will be even less effective than the PowerPC.

https://www.nytimes.com/2005/06/11/technology/whats-really-behind-the-appleintel-alliance.html

>Kahle also had to decide to dispense with a feature dubbed “out of order processing.” This is a more complex way of handling computation. It makes for better performance but comes at a steep price in cost and complexity. That led Jon Rubinstein, who was then an executive at Apple, and Bob Mansfield of Apple to scream bloody murder. It meant that Apple would likely still fall behind Intel in microprocessor performance. And it was one of the decisions that led Apple to defect from IBM’s PowerPC architecture to the Intel platform. This caused a huge shift in the bedrock of the computing industry. I saw all of this happening from the outside as IBM jilted Apple in favor of Sony. But it’s interesting to see the names and circumstances under which the decisions were made. In Kahle’s defense, the decision was necessary to keep the Cell chip on track. IBM also was very heavily focused on server chips, rather than serving Apple. In other words, there were other things about the IBM-Apple relationship that led Apple to go to Intel.

https://venturebeat.com/2009/02/06/the-race-for-a-new-game-machine-book-chronicles-the-sony-microsoft-ibm-love-triangle/view-all/

He made the right call on the Cell. It was an expensive disaster that took years to make. It wasn't the primary reason why Apple switched from Power, but it was part of it.


 No.960019>>960022

>>960007

No. Smaller dies mean less distance for the electrons to travel, meaning less heat. Less heat means higher clocks. That's why any RISC architecture has x86 beat in raw power. Fewer instructions mean less die space dedicated to said instructions on a reduced instruction set, i.e. RISC.

With a reduced die size you reduce heat, which means higher clocks. Of course there is ARM, which is RISC, but it wastes die space on useless shit like ARM TrustZone and out-of-order schedulers on the die. You don't need that shit. Hence why RISC-V is the future, as it doesn't necessarily need to waste die space on useless shit like TrustZone and out-of-order schedulers, or even any schedulers on the die; let the software handle that.

There is an exception to this however. You know how if you connect a wire to a battery and a lightbulb it produces power? Now say you attached another wire to the battery and stuck it in the ground, thereby wasting all your electricity. The same can happen with processors, which means a smaller processor with a short, or wire in the ground, could produce more heat than a larger one. This is only if the processor is shittily designed like x86. A 12nm RISC-V processor will produce less heat at the same clock as a 12nm x86 processor, with no exceptions. This is because x86 has instructions in the ISA that force a short, or wire-in-the-ground, scenario to maintain backwards compatibility with older x86 processors. RISC-V doesn't have to be backwards compatible at the hardware level; just do it in software like Wine or QEMU.


 No.960022>>960025

>>960019

Thanks, great explanation.

If I could ask, how does making an x86 CPU 'short' maintain backwards compatibility exactly, and what does shorting have to do with older CPUs in general? Why was it even done that way, and for what use?


 No.960025>>960031

>>960022

I am comparing older instructions in the x86 ISA to a short because they are less energy efficient than newer instructions. I am sure you have heard of AVX or SSE. SSE2 and AVX-512 are more energy efficient/produce less heat than the previous instructions for certain things. Yet x86 keeps both SSE and SSE2, thereby wasting die space and creating more heat. Now multiply this out for every instruction added over the past 20 years and it creates a lot of unnecessary heat.

RISC-V has none of this bloat/heat/bullshit, as RISC-V doesn't need to maintain backwards compatibility at the hardware level; you can just emulate it with Wine/QEMU at the software level.
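
You can count how many vector-extension generations your own chip still carries around. A minimal sketch, assuming Linux (it reads /proc/cpuinfo):

[code]
# List which vector-extension generations the CPU reports. Every one it
# reports is decode support it keeps around so old binaries still run.
GENERATIONS = ["sse", "sse2", "sse3", "ssse3", "sse4_1", "sse4_2",
               "avx", "avx2", "avx512f"]

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

for ext in GENERATIONS:
    print(f"{ext:8s} {'yes' if ext in flags else 'no'}")
[/code]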


 No.960030>>960034 >>960550

>>959851

It's being used heavily for controllers because of cost, not quality. We won't ever see a RISC-V that competes with x86 because of the instruction length. It's no surprise, as it comes about 20 years after the industry admitted it was a dead end for the high end and started searching for other solutions like EPIC.


 No.960031>>960032 >>960044 >>960240

>>960025

ARM also has this problem, as the ISA has to be backwards compatible from ARM11 to ARM4 or so. Which means supporting NEON instructions or some such crap across all their newer processors, which means more die space wasted on useless/never-used ISA, which means more heat/less energy efficiency.

On ARM energy efficiency is a huge deal, since they care about battery life so much for phones. So RISC-V would actually get better battery life than a comparable-size (in nanometers) ARM processor. Granted, the increase is not going to be much, but then you get the benefits of RISC-V's security and FOSS-like nature.

RISC-V blows modern processors out of the water on everything; it's just that no one is producing them besides that shitty SiFive company that includes proprietary hardware/software on the motherboard. And the ones being produced need time to perfect the creation of the processor, with smaller sizes and more efficient usage of the ISA, the clock rate, and the memory transfer from processor to DRAM.

It goes like this: make the processor design with the ISA, make the motherboard with GPIO components like DRAM, send the design to a factory in China, and then wait for them to send the silicon back. This takes like four months, and after all that you have to test the boards, put software like GNU/Linux on them, and sell them.

Then, after you use the processor or from their testing, you can find out what you did wrong in the motherboard or processor design and start over again. Say you had the equivalent of SSE in RISC-V for the first mobo/processor design; in the second design you could have the equivalent of SSE2, and so on with other instructions.


 No.960032>>960033 >>960035

>>960031

>blows modern processors out of the water on everything

>its just no one is producing them

Your brain on marketing.


 No.960033


 No.960034

>>960030

Well of course RISC-V is less expensive. You don't have to pay a royalty fee to Intel or ARM to create/use it.

>We won't ever see a RISC-V that competes with x86 because of the instruction length

What does this mean? Do you mean how many instructions are processed per clock cycle? Do you mean the clock rate? Do you mean the dedicated RISC-V ISA doesn't have "enough" instructions, and if so, which are missing?


 No.960035>>960039

>>960032

Calling a post "marketing" is not an argument. Sure, the processor is slower right now. It is also royalty-free to produce and much, much more energy efficient than a comparable x86 or ARM processor, hence its heavy usage in controllers, alongside being less expensive to use. With production optimization and better motherboard designs that aren't patent-encumbered, such as DDR4 usage in motherboards or modern MMUs, it will be more efficient and libre/secure.


 No.960036>>960040

>>960016

This doesn't make any sense. The G5 PowerMac (PPC970) came out in '03. The PPC portion of Cell was a stripped-down version of the PPC970. Why would Apple want something less than what they were already using? Steve had already made the decision to go with Intel before Ken Kutaragi got up on stage during Macworld. Unless plans for Cell dated back to the introduction of the PPC970, I don't think this is accurate. There were rumors that IBM didn't even want to do Cell and that the 360's Xenon was what was originally brought to Sony. Thanks to back-door Japanese dealmaking, they insisted that Toshiba be involved.


 No.960039

>>960035

Modern MMUs and DDR4 are patent-encumbered, meaning that for a libre hardware system, and to avoid paying royalties, you can't use them, or you will get shut down (i.e. sued) for using or producing them.

This is why only scummy companies like SiFive and Nvidia are using RISC-V right now: they are OK with paying royalties for non-libre hardware components on the motherboard. So even though RISC-V is libre, the GDDR5 in Nvidia GPUs and the clock on the SiFive boards are not libre. It's difficult to use libre hardware without stepping on somebody's patents. Now, if hardware designers just moved to China, said fuck all that patent nonsense and made awesome processors, that would be awesome. Of course, then the Chinese government would shut them down on behalf of the patent holders, since the motherboard/processor/DRAM/entire computing hardware industry is a patent monopoly. And a secure computer that can't be hacked means the GCHQ/NSA can't steal from you as easily anymore.


 No.960040

>>960036

It's part of the push to keep everyone's computing environments insecure so that they can be stolen from with ease.


 No.960044>>960046

>>960031

ARM11, which was ARMv6, and ARM8, which was ARMv4, never had NEON. The ARM Cortex-M4, which is ARMv7, has NEON, as do all ARMv7 cores. NEON didn't exist prior to ARMv7. Don't confuse ARM8 with the Cortex-A8 or ARMv8 though; they are completely different.


 No.960046>>960047

>>960044

Thank you for clarifying that. It doesn't change, though, that ARM, as an example, has older/unused instructions in its ISA that take up die space and waste energy/produce heat.


 No.960047>>960050

>>960046

ARM is the king of wasted die space. Nobody else does big.LITTLE CPUs. I don't think they give a shit fam.


 No.960050

>>960047

How is AArch64 in that regard? It's a bit of a break with the old designs, right?


 No.960237>>960243 >>960459

>>959851

There is quite a bit of stupid in this post but this is the worst

>the goal of RISC-V is to replace all ISAs

I am not saying that RISC-V won't be adopted, but to say that a single ISA can replace all others is fucking stupid. An ISA isn't just a set of instructions but also things like overall processor architecture; ASIC design in general is a never-ending set of tradeoffs, and design decisions that make a processor good in one area ultimately make it weaker in another. Processors and ISAs which don't target a niche fall into the "jack of all trades, master of none" situation. For instance, the manycore PEZY chips use a barrel-threaded MIPS-like ISA because even RISC-V takes up too much silicon, but it wouldn't make any sense to use that for an application requiring good single-core performance.

>that's why they have a 128 bit version of the spec

< HURR DURR MORE BITS = BETTER !!1!!!111

This isn't the '90s any more; the bit wars are over.

>>959864

In time LLVM will make ISA fanboy wars pointless.


 No.960240

>>960031

How can a CPU have security? It either is a real CPU (does arithmetic and shit) or it's a piece of crap with a web browser embedded in it (x86).


 No.960243

>>960237

>the bit wars are over.

DO THE MATH


 No.960316

>64 bits are enough for everybody for all time

They know there is no use for it now, and not much research is done on it. Their argument, looking at history, was that the bit size of an architecture was an obstacle and hard to change, so better to plan for it in advance.


 No.960459>>960505 >>960542

>>960237

They designed the ISA to be modular so people could build ASICs and special processors with it. It's better to have everyone be a RISC-V specialist at the cost of some silicon, but even if you did need a special ISA for something really unique, then so be it; a common base is better.

>the bit wars are over

Nah, at current rates we're going to run into the 64-bit wall by 2030; see page 105:

https://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-118.pdf


 No.960505>>960544

>>960459

You don't need anything more than 32 addressable bits unless you are running extremely memory-intensive rendering software and/or huge servers. Consumers don't need more than 16 bits unless they do video gaming. This is assuming perfectly or near-perfectly optimized software, which is never the case. With the bloat of software going on, 128 bits will become mandatory for the latest updates to Windows 10.


 No.960542>>960545 >>960550

>>960459

>They designed the ISA to be modular so people could build ASICs and special processors with it

You don't need a modular ISA to do that; the logic people design to perform special functions is most of the time memory-mapped like a peripheral device rather than added to the instruction set as special instructions. The logic which WD implements in their chips to control their HDDs and perform LDPC is just memory-mapped, as it's more portable from a firmware perspective and higher performance than hooking it into the CPU as an instruction. The reason why they are switching to RISC-V isn't that it's free as in freedom but free as in free beer.

>Nah, at current rates we're going to run into the 64 bit wall by 2030, see page 105:

Their claim is pretty sketchy; huge systems like the ones they are referring to don't run a unified memory address space like they assume, and instead the overall system is made up of individual nodes, each with its own internal memory address space. The 48-bit addressing used in current-generation 64-bit CPUs already allows for an address space of 256 TiB and is designed to be extendable without breaking backwards compatibility. At 64 bits the address space becomes 16 exbibytes, which to put into comparison is a pile of 64 GiB DDR4 DIMMs with a mass of roughly 5400 tons.
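
The arithmetic, if you want to check it (the ~20 g per DIMM is a rough guess on my part, not a datasheet figure):

[code]
GiB = 2**30

print(2**48 // 2**40, "TiB at 48-bit addressing")   # 256 TiB
dimms = 2**64 // (64 * GiB)         # 64 GiB DIMMs needed to fill 64 bits
print(f"{dimms:,} x 64 GiB DIMMs")  # 268,435,456
print(round(dimms * 0.020 / 1000), "metric tons")   # ~5,369 t
[/code]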

Moore's law doesn't scale infinitely like they assume, and the concurrency issues facing even distributed systems like current supercomputers are huge. Having all the nodes share a single address space would be a nightmare just to keep running without doing anything, and the concurrency safeguards inherent in current operating systems due to things like POSIX would likely slow the entire system to a crawl.


 No.960544>>960655

Forgot to address this

>>960505

>You don't need anything more then 32 adressable bits unless you are running extremely memory intensive rendering software and or huge servers. Consumers don't need more then 16 bits unless they do video gaming.

Holy fucking shit, this has got to be one of the most retarded things I have ever read on /tech/. You are literally saying that most consumers don't need systems with more than 64 KiB of address space, when even PCIe devices consume 4KiB just for their configuration BAR.


 No.960545>>961623

>>960542

>At 64 bits the address space becomes 16 exbibytes, which to put into comparison is a pile of 64 GiB DDR4 DIMMs with a mass of roughly 5400 tons.

Nice comparison. Pretty sure back in the '70s and '80s the argument went something like this:

>Lol, petabyte-scale storage, like we'd even need that. Don't you know it would take a stack of floppies from the Earth to the Moon to store 200 petabytes?

And yet here we are today, with storage in some companies already in Exabyte territory.

Also

>InB4 mixing up storage and RAM, don't care. It's about the flaw in this type of argumentation in general.


 No.960550>>961623

>>960030

RISC-V can have compressed instructions.

https://people.eecs.berkeley.edu/~krste/papers/waterman-ms.pdf

>>960542

DDR4 will be ancient by 2030. I doubt POSIX has a place in something that would be designed to have >64 bits of address space; it would probably be for extremely specific tasks, if anything. But it certainly is possible, which is why it's worth doing that one step up while it's still easy to do so.

>the reason why they are switching to RISC-V isn't that its free as in freedom but free as in free beer.

I haven't said these companies are switching due to some superior design aspect of RISC-V; a lot of them are going to do it because it is cheap, and a lot more will come after that because more people are going to be learning it.


 No.960655>>960775 >>961623

>>960544

>You are literally saying that most consumers don't need systems with more than 65KiB of address space when even PCIe devices consume 4KiB just for their configuration BAR.

Yes, that is exactly what I am saying. Unless you are going to be doing video gaming or an equivalent, such as graphics rendering for videos, you don't need more than 16 bits for text/document editing, web browsing, and song/video watching. This is assuming near-perfectly or perfectly optimized software. If you want an example, look at DOS and Windows 98, and those were incredibly unoptimized pieces of shitware compared to what they could have been. You don't need PCIe with the integrated audio chips of today and SATA controllers on board. You don't need USB or an MMU unless you are insecure. If you have large collections of videos and audio you might need 32 bits, like a server, to address them. But most normalfags don't, as they stream it all.


 No.960775>>961657

>>960655

A single-buffered 4k frame requires 6.63MiB of VRAM; even a hi-color 480p frame needs 614 KB. For comparison, the Apple IIɢꜱ maxed out at a 4-bit 640x200 mode that consumed exactly 64 KB.


 No.961623

>>960550

>DDR4 will be ancient by 2030

Memory isn't going to magically become ~9 orders of magnitude more dense. Just because DDR4 will be gone doesn't mean what replaces it will be significantly more dense. Even if we reduce each DRAM cell down to 1nm (which means each cell is only about 10 atoms wide), that's still only about a 200x increase in density from what we have today, which barely puts us over the 48-bit space per node for the highest-tier systems.
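
The scaling math, for reference (the ~14nm present-day cell size is my ballpark assumption):

[code]
# Density scales with cell area, so the best-case shrink factor is quadratic.
today_nm, limit_nm = 14, 1
print((today_nm / limit_nm) ** 2)  # ~196x, nowhere near 9 orders of magnitude
[/code]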

>>960545

>And yet here we are today, with storage in some companies already in Exabyte territory.

Storage addressing isn't the same as system memory addressing; modern filesystems already support sizes so big that you would need a pile of hard drives with something on the order of 1/4 the mass of the Earth to fill them up.

>>960655

>You don't need pci-e with the integrated audio chips of today and SATA controllers on board.

Protip: Those are all memory mapped just like PCIe.


 No.961657>>961792

>>960775

>A single-buffered 4k frame requires 6.63MiB of VRAM

Wut?

4k has roughly 8 million pixels, which at 4 bytes per pixel gives you about 32 megabytes of storage. Or am I being a brainlet?


 No.961792

>>961657

He specifically said single buffering, which is where you only have part of the screen rendered at a time, and while that part is being sent to the screen you are rendering the next part. It's more memory efficient and lower latency than rendering the entire screen at once but can introduce tearing.
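
The numbers from both posts, plus what a strip buffer costs (the 64-line strip height is just an example value):

[code]
# Frame buffer sizes at various resolutions/bit depths.
def frame_bytes(w, h, bpp):
    return w * h * bpp // 8

MiB, KB = 2**20, 1000
print(frame_bytes(3840, 2160, 32) / MiB)  # ~31.6 MiB: full 4k frame @ 32bpp
print(frame_bytes(640, 480, 16) / KB)     # 614.4 KB: "hi-color 480p"
print(frame_bytes(640, 200, 4) / KB)      # 64.0 KB: the Apple IIGS mode
print(frame_bytes(3840, 64, 32) / MiB)    # ~0.94 MiB: one 64-line strip of 4k
[/code]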



