[–]▶ No.1038679>>1038727 >>1038807 >>1050738 [Watch Thread][Show All Posts]
>AV1 gradually getting out of the experimental stage but still suffering from snail-tier encoding
>MPEG trying to squeeze every last remaining cent out of HEVC, thinking that hardware encoders in every $20+ GPU will save a codec that has never been and will never be used for end-user livestreaming
>Intel releasing AVX-512 optimized encoders for AV1 and VP9
>h.264 going strong as VP9 is still too slow for real-time encoding while HEVC remains a licensing minefield only useful for Animu scene encodes
>MPEG working on VVC, forms the Media Coding Industry Forum ostensibly to avoid another patent licensing clusterfuck, promises 30-50% bitrate savings over HEVC by 2021
>Divideon trying to capitalize on the situation by offering the royalty-based XVC meme codec using patented tech to the extent that a single licensing scheme can allow, with similar video quality to AV1 and supposedly greater encoder performance
>Codemumkey still hasn't added AV1 support to 8chan
Who is likely to win?
When will proprietary software be outlawed?
▶ No.1038684>>1038713
>Codemumkey still hasn't added AV1 support to 8chan
It's not viable yet. You get better quality video at the same bitrate and encoding time with VP9 than with AV1 atm.
Let's wait until the war is over and there are actually solid encoders and decoders instead of rusted rav1e and so on.
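For anyone who wants to check that claim instead of taking it on faith, here's a rough test with stock ffmpeg (assumes a build with libvpx and libaom enabled; the CRF values and speed settings are illustrative, not tuned):

```shell
# VP9 at a mid-range speed setting
ffmpeg -i in.mkv -an -c:v libvpx-vp9 -b:v 0 -crf 32 -row-mt 1 -speed 2 vp9.webm
# AV1 via libaom; even at its fastest preset, expect a fraction of VP9's speed
ffmpeg -i in.mkv -an -c:v libaom-av1 -b:v 0 -crf 32 -row-mt 1 -cpu-used 8 av1.webm
```

Compare the wall-clock encode times and then the two outputs at equal size; that's the tradeoff being argued about.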
▶ No.1038688
▶ No.1038705>>1038779 >>1039243
▶ No.1038713
>>1038684
>solid decoders
dav1d recently added some SSSE3 optimizations and can now decode 8-bit yuv420p at 1080p30 with relative ease on non-AVX2 CPUs.
▶ No.1038727>>1038745 >>1062042
>>1038679 (OP)
> /tech/ in every other thread
> “All CPUs post Dual Core are evil! Evil!”
> /tech/ in threads about video codecs
> “Dude, AV1 is fast! Just wait for the hardware support!”
▶ No.1038745
>>1038727
just ignore them. they use full botnet phones anyway so the autism with computers is pointless
▶ No.1038779
>>1038705
This. It would probably be better to make a solid video codec decodable by software than to make some half assed we-have-to-beat-x-until-xx/xx/xx.
▶ No.1038784
AV1 4k+ is unplayable on my machine.
I hope it doesn't become a standard, I don't want to be forced into upgrading to botnet just to be able to watch videos.
▶ No.1038807>>1038808
>>1038679 (OP)
AV1 already won this battle, it just needs hardware acceleration to start being deployed, as decoding (and encoding) it with software is simply prohibitive.
▶ No.1038808>>1039024 >>1039048 >>1060215
>>1038807
>hardware acceleration
I don't wanna buy new shit.
VP9 and h264 10-bit decode just fine with software on my PC.
Software decoding is the future. Imagine what would have happened if hardware decoding was done for images?
You would buy your image accelerators for all kinds of formats instead of people writing good fast decoder implementations and inventing new formats.
Hardware decoding stops progress.
▶ No.1039024>>1039044 >>1039049 >>1039107
>>1038808
Because you're not decoding multiple keyframes per second when loading a single image. Though when displaying photos and images taken and stitched from NASA's telescopes, I would rather have a much faster experience rendering millions of pixels. Not to mention the enormous power reduction from hardware decoders compared to software. Remember Apple's MacBook(?) draining its battery because their 4K screens didn't come with a 4K VP9 decoder, because Intel didn't implement it in their processors? Apple's solution was to disable software rendering and call it a day. Hardware decoding doesn't stop progress either. There are plenty of people working on AV1 who will no doubt use non-meme machine learning and new algorithms to come up with the next version. And if you suggest that the adoption rate sucks because it will take a while before everyone has new hardware with the new decoder, then might I ask you whether the old computers, laptops, tablets, and phones were ever able to use software rendering on the old or new format without their users complaining about power draw/battery life or stuttering due to underpowered ARM SoCs.
▶ No.1039044>>1039049 >>1039120
>>1039024
>4k screens
Having 4k screens on a laptop drains the battery very quickly independent of software or hardware decoders for video.
I don't want to buy shit nor be stuck with half assed AV1. We already know it's just VP10.
>might I ask you whether the old computers
Old computers don't have hardware decoding and software decoding VP9 worked on a dual core centrino laptop.
>phones
Fuck them. Phones are for making calls. People misunderstood the concept.
▶ No.1039048>>1039093 >>1039130 >>1057784
>>1038808
>I don't wanna buy new (((shit))).
That's because you were retarded enough to buy things at the wrong time.
>VP9 and h264 10-bit decode just fine with software on my PC.
You can't even compare them with AV1.
>Software decoding is the future.
Only that it isn't.
>Imagine what would have happened if hardware decoding was done for images?
We would fix loads of problems, especially for professionals.
>You would buy your image accelerators for all kinds of formats instead of people writing good fast decoder implementations and inventing new formats.
No, that's happening right now, and we still need them, because things don't go in only one way.
>Hardware decoding stops progress.
No, it is a necessity of higher complexity.
▶ No.1039049>>1039081
>>1039044
You're retarded, it seems.
>>1039024
>Contesting planned obsolescence
I guess we should make video files, images, games, and the internet all stay compatible with those early 800x600 Windows 95 PCs then.
▶ No.1039081
>>1039049
that's not the same thing tho. there's been a new video format almost every year now and none of them work with existing hardware decoders, so the only option to get support is buying a new GPU, or a new laptop if you use those
▶ No.1039093>>1039107 >>1039120 >>1039180
>>1039048
>That's because you were retarded enough to buy things at the wrong time.
When was the right time? 'Cause I've been buying hardware for 3 decades now. Oh you mean I'm a retard for not updating to the botnet just to watch shitty hollywood propaganda?
>You can't even compare them with AV1.
99% of the population can't see the difference between DivX 3.11 ;-) in widescreen SD and H.264 in 720p. 99.9999999% can't tell the difference between H.264 in 1080p and AV1 in 4K.
>Only that it isn't.
Hardware decoders have always become obsolete pretty much overnight after release, while software decoding has been the go-to for pretty much all applications going back to the 90s. The only place where hardware decoding makes sense for the average consumer is set-top devices. 99% of people aren't going to notice any difference between AV1 and whatever they're streaming now. They might pretend to, but like audiophiles they just get the latest standard to brag about it to their friends.
>We would fix loads of problems, specially for professionals.
I notice you didn't list them.
>No, that's happening right now and still need them because things don't go in only one way.
You okay anon? 'Cause you're looking like a real baka right now.
>No, it is a necessity of higher complexity.
Go on...I want to know how hardware decoding is more complex than software decoders when the past has shown that all hardware decoding does is produce a worse result with the inability to update the software.
▶ No.1039107
>>1039024
>apple disables software decoding which means no video without the right hardware
>Hardware decoding doesn't stop progress either.
hmmmmmmmmmmmmmmmmmmm...
>>1039093
based
▶ No.1039120>>1039133
>>1039044
>Having 4k screens on a laptop drains the battery very quickly independent of software or hardware decoders for video.
Don't strawman. The point I made was that hardware decoding is vastly more power efficient than software, hence software decoding becoming a problem on a laptop, whether the 4K retardation exists or not.
>software decoding VP9 worked on a dual core centrino laptop.
>This one particular codec was lightweight enough for a potato therefore all codecs and computers shouldn't have a problem.
Hardware decode ALWAYS saves power and CPU cycles for something people use on a daily basis.
>Phones are for making calls.
Figured you would go for the low-hanging fruit. The market disagrees with you. People use their phones like they used to use PCs and laptops (which I am not advocating for), therefore their interests are important. Though being able to stream AV1 to low-powered SoCs will be useful somewhere other than phones too.
>>1039093
>can't tell the difference between H.264 in 1080p and AV1 in 4K.
It's almost like this was already well known. All compression schemes have their limits due to entropy. AV1 only offers "marginally" better compression in relation to quality compared to the last codec. It's not a silver bullet. Increasing bandwidth is the only way you'll truly increase quality.
>all hardware decoding does is produce a worse result with the inability to update the software.
You're right in that a hardware decoder is out of date the second it's released, due to the massive amount of resources poured into the development of better codecs. Though hardware decoders ARE being updated as a result of new products saturating the market, which takes somewhere around 10 years depending on the product.
▶ No.1039130
>>1039048
Anon you replied to here.
>That's because you were retarded enough to buy things at the wrong time.
Why? I have enough processing power and I can't watch multiple videos at once anyways.
GPU acceleration is also a possibility. Why does no one ever use that?
>You can't even compare them with AV1.
Why? They are all video codecs. What does AV1 have over VP9? Developers: MORE
They are pretty similar.
>Only that it isn't.
>because
>...
>specially for professionals.
I'm no professional, so why should I care. Professionals should use a method specialized to their use case.
>No, that's happening right now
So where can I buy my PCIe JPG accelerator or a GPU/CPU with hardware decoding for JPG?
>No, it is a necessity of higher complexity.
>necessity
Why? There is no MS Windows accelerator and no Google Chrome accelerator even though they are the most complex clusterfucks in the software world. Thanks to modern CPU power they still run fine, though.
H264 10-bit and VP9 also decodes just fine without any hardware accelerator.
▶ No.1039133
>>1039120
>The market disagrees with you
The market isn't a person.
>People use their phones like they used to use
They don't do anything productive on there. Try using a spreadsheet program on a phone.
>usefull somewhere other than phones
Do I look like a Youtube datacenter?
>All compression schemes have their limits
We are far away from the point where we have the most efficient solution.
>Increasing bandwidth
Half truth. Depending on the content, there are lots of artifacts that can simply be compressed away (H264 Hi10P and anime), and as above:
current solutions are very rudimentary and only describe pixel changes and left/right/up/down movements.
If something on the screen rotates, it can't be compressed well.
There are many other common cases, but that one alone should suffice.
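That limitation is easy to demonstrate. A toy sketch (pure stdlib, hypothetical 8x8 gradient frame, standing in for real block-based motion estimation): a brute-force translational motion search reduces a shifted frame to zero residual, but can never fully match a rotated one.

```python
# Toy demo: translation is modeled perfectly by motion vectors, rotation is not.

def make_frame(n):
    # asymmetric luma pattern so no two shifts look alike
    return [[(3 * x + 7 * y) % 256 for x in range(n)] for y in range(n)]

def shift(frame, dx, dy):
    # wrap-around translation by (dx, dy)
    n = len(frame)
    return [[frame[(y - dy) % n][(x - dx) % n] for x in range(n)] for y in range(n)]

def rotate90(frame):
    # clockwise rotation of a square frame
    n = len(frame)
    return [[frame[n - 1 - x][y] for x in range(n)] for y in range(n)]

def best_sad(ref, cur, search=4):
    # brute-force full-search motion estimation over a +/-search window:
    # the best sum-of-absolute-differences any single motion vector achieves
    n = len(cur)
    return min(
        sum(abs(cand[y][x] - cur[y][x]) for y in range(n) for x in range(n))
        for dy in range(-search, search + 1)
        for dx in range(-search, search + 1)
        for cand in (shift(ref, dx, dy),)
    )

ref = make_frame(8)
print(best_sad(ref, shift(ref, 2, 1)))  # 0: a pure shift is captured exactly
print(best_sad(ref, rotate90(ref)))     # nonzero: rotation leaves a residual to code
```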
>Though hardware decoders ARE being updated
You can't make any fundamental changes to them. Only slight improvements can be added on.
▶ No.1039180>>1039218 >>1039219 >>1039454 >>1040420
>>1039093
<99% of the population can't see the difference between DivX 3.11 ;-) in widescreen SD and H.264 in 720p. 99.9999999% can't tell the difference between H.264 in 1080p and AV1 in 4K.
>Hardware decoders have always become obsolete pretty much overnight after release
t. MPEG
That said, why are GPGPU video de- and encoders so rare?
▶ No.1039218>>1039508 >>1040420
>>1039180
>why are GPGPU video de- and encoders so rare?
For several reasons. First and foremost, the codecs that were popular in the past were patent-encumbered, which means NDAs and huge costs to get the copy+pastable library. Secondly, because GPUs are essentially CPUs of varying architectures heavily optimized for much fewer commands, a reduced instruction set if you will. So the codec library must be implemented either with special instructions in hardware or by having access to the ISA of the GPU. The problem with this is that unless you are talking about ancient GPUs, or nvidia GPUs on nouveau prior to the GTX 900 series, no one gets access to the GPU's ISA, except via intermediary layers that are blobs loaded at runtime, such as AMD's microcode or nvidia's signed firmware. So implementing the newer codecs requires knowledge few but the NDA signers have, who work for the companies, who get kickbacks from the codec producers they already implemented for not to implement newer codecs.
Say you wanted MP4 (H.264) acceleration on a GPU: you would just implement it at the ISA level, like on a modern CPU, via a library calling instructions. But you have to be able to program those instructions, which most modern GPUs will not let you do.
Third, the patent-encumbered codecs want to stranglehold the industry in order to extract more licensing fees. Google allows VP8/9 only because it benefits them so much in the electricity and bandwidth costs saved on youtube streaming.
The only way out of this mess is someone developing a codec that uses the best of the best algorithms, including the patent-encumbered ones, and just making the best software implementation possible. Then publishing and updating that implementation anonymously over p2p software so that patent trolls can't fuck them over with cease-and-desists. Then, on GPUs with an accessible ISA, GPGPU offload could be implemented.
But few have enough knowledge of the codecs to make a better codec. Even fewer would know how to upload such a thing without getting a cease-and-desist. Far fewer than that have the knowledge to implement GPGPU offload, and the few that do are already in the hands of corporations, or are blackmailed into doing nothing, or are too busy with irl survival.
▶ No.1039219>>1039221 >>1040420
>>1039180
>t. MPEG
<I can't give you a good reply so here is some wikipedia information and bait.png
I don't feel like going full Encoder.navyseal.avi on you right now anon so I'll try to keep this brief as I just got called for supper. I'm sure the "everyone is a LARPer" anon will be along shortly to dismiss everything I say soon as well.
I'm not loyal to MPEG. I'm just a guy who's been at this since DivX 3.11 ;-) hit the scene back in the late 90s. Before that I was doing fansubs in the days when we still had to do distro on VHS tapes. There were good excuses for jumping to new codecs over the years, be it Xvid, H.264, or whatever else. The fact is, right now AV1 provides little to no major gain over what has been standardized in the last decade. What is currently available handles all content at an acceptable bitrate/file size just fine. In fact, the reasons for moving to new codecs back then don't really apply today, because file size isn't much of an issue anymore. People are happy to download FLAC audio, and we aren't having to sacrifice the audio by encoding to very low bitrates in mp3 to eke out a few extra bits on the video side because we're stuck with a shitty encoder like DivX. We also have things like variable frame rates in the .mkv container that allow for big savings in file size. Bandwidth is plentiful now and people aren't attempting to cram an entire series onto a few CD-Rs.
I'm happy to see progress is still being made in video codecs and I'd love nothing more than a royalty free fully libre solution. But so far all I'm seeing in this thread is a lot of
>muh AV1 is needed because it's new
>muh AV1 will be great when hardware decoding is ready
>muh AV1 is great despite requiring botnet CPU/hardware to be viable
What have we gained if you require a botnet CPU/machine to even decode it at a good frame rate? Why this insanity of encoding at sub-1fps for little to no difference over what exists and is standardized now? I'm not some faggot that can't wait for a good encode either. I used to encode episodes of anime multiple times at 0.1fps in xvid because that's what it took to get it within the file size needed with my custom scripts back in the day. I'm just saying it isn't worth it for something that has
>a very small number of users
>something not standardized
>something that requires botnet hardware thus defeating the purpose of it being an open format
Who cares about royalty free codecs when everyone just pirates everything anyway? No one stopped people from encoding anime and movies for the masses when Xvid/DivX/H.264 were the fresh new thing, and no one is going to stop you now. Furthermore, VP9 already exists and doesn't take nearly the amount of power to encode/decode. If you aren't getting acceptable results with it, something is wrong with how you're encoding it. Perhaps you should try not being a retard and figure out how to actually do a good job with it.
At any rate. You aren't going to get it standardized until you get the warez scene to adopt it. So if you want to see AV1 widely used work on getting some standards accepted by the scene and actually start churning out releases in those formats. You're still going to be stuck with the issues of
>no set-top box support
>low number of users/downloaders until hardware to decode it is common
and hardware decoders aren't going to save you on those fronts. Until you get software decoding that works on the majority of hardware people are using, they WILL stick to the old standards. There are still tons of folks that haven't even made the move to h.264 releases now, and they aren't living in third world countries. They're poorfags in the states that still use P2/P3/K6 machines, because those machines are good enough to play video encoded in Xvid even in faux-HD resolutions. Normalfags do not care about things like stretched frames or line noise. They care about one thing and one thing only; does it work on my machine!?
t. Retired fansubber/guy that supplied you with lots of Hentai and free movies from 1998-2012
▶ No.1039221>>1039232
>>1039219
Case in point to my post: see, an anon like this understands why an encoder/decoder needs to do what it does and doesn't give a shit about copyright. He could maybe implement a codec and upload it anonymously, because scenefag, but then has little to no knowledge about gpgpu stuff.
▶ No.1039232>>1039242
>>1039221
I understand as much about gpgpu as publicly available. As you said anon the problem lies in the fact that modern GPUs are so locked down. It's impossible to learn when you can't even make them say Hello World.
▶ No.1039242>>1039514
>>1039232
>hello world
You can, just RTFM for AMD/ATI and intel here https://archive.fo/fnnaH and for nvidia/nouveau here https://archive.fo/3TUnY . It's just that there are a lot of fucking GPUs that would need an implementation, all using different ISAs and feature sets, e.g. mmu or no mmu.
▶ No.1039243
>>1038705
...4 years later...
▶ No.1039454
>>1039180
>de- and encoders
transcoders
▶ No.1039508>>1039529
>>1039218
>Secondly, because GPUs are essentially CPUs of varying architectures heavily optimized for much fewer commands, a reduced instruction set if you will. So the codec library must be implemented either with special instructions in hardware or by having access to the ISA of the GPU.
What about OpenCL? Is it a meme?
▶ No.1039514>>1039529 >>1040420
>>1039242
For sure, I guess I put it badly. The problem stems from a lack of standard between the major GPUs and them being so closed off. I have found in my experience working with video that the GPU produces worse results or is very limited in what it can do in my tool chain. I would off-load what I could to the gpu but often would have to choose to do things on the cpu because the results weren't acceptable to my eye. In filter chains there were only specific things that could be off-loaded to the GPU without causing issues of quality.
Also as the years roll on I've found less need to off-load to the GPU while doing final encodes simply because when the goal is quality waiting a bit longer isn't as much of an issue. CPUs are really fast now compared to when I started doing this years ago. Gone are the days of waiting 24 hours for 25 minutes of video to finish being encoded. Now in an area where time is crucial, say live streaming or moving lots of video around in real time like in a studio I could see where the trade off might be worth it. Even then the GPU decoder needs to be a lot faster than the CPU to make it matter enough to be worth the degradation of the quality.
These days I'm sure the days of making filters through scripting and tuning the encoder by hand are long past. I get the impression that most people are copy/pasting ffmpeg commands or using a full fledged GUI to do production work. Which is fine I used plenty of GUI based software too but there was always a script there to glue the various tools I used together. I feel like the proper entry to this stuff was lost in the last decade or so as well. I know in the various fansub/warez scenes it's rare to find groups doing anything but ripping a show off a streaming service or doing basic Blu-ray/DVD rips. Folks aren't working with old sources of material that need to be cleaned up much anymore.
▶ No.1039529>>1039537
>>1039508
OpenCL is like C for GPUs. A developer writes an OpenCL library for a GPU, and then a user can call OpenCL functions after importing the OpenCL header files, just like a developer writes a C library for a CPU and a user calls its functions after including its headers. It's just middleware to the assembly code, but it is standardized middleware, which makes it easier to target a bunch of different GPUs. But OpenCL is an unoptimized piece of shit, because AMD has paid professional solutions for OpenCL, so they don't optimize the FOSS one. Nvidia is the same, as they have CUDA. The nouveau/redhat developers haven't even implemented OpenCL publicly for nvidia GPUs anyway. Intel has a semi-decent OpenCL implementation, but it's not optimized as well as it could be, and of course Intel iGPUs are slow unless you get the newer botnet/blobbed ones.
>>1039514
There are plenty of heavyweights in the transcoding scene, as youtube has ways of detecting the slightest changes in a banned video and banning the re-uploads. So it's a rat race between noobs discovering tricks (or the man page) of ffmpeg and the various codecs, or things like encoding a video in one format that decodes into a completely different video in another format. You're a scenefag, you should know this stuff.
▶ No.1039537>>1039575
>>1039529
>OpenCL a meme
Does Vulkan have any hope of offering a proper GPGPU compute standard with non-gimped implementations at some point in the future?
Are fully fledged video decoders running almost entirely on the GPU with no decoding ASIC present science fiction?
▶ No.1039575>>1040420
>>1039537
>Does Vulkan have any hope of offering a proper GPGPU compute standard with non-gimped implementations
It depends on how well the assembly-to-library implementation takes advantage of the hardware. Technically there is already a way to go from assembly > opengl/vulkan > whatever: just convert it to SPIR-V and then go wherever the fuck you want, as long as it maps back to hardware via something. See mesa's clover and LLVM's SPIR-V work for implementations http://llvm.org/devmtg/2017-03//assets/slide/spirv_infrastructure_and_its_place_in_the_llvm_ecosystem.pdf https://archive.fo/RLpAS . Anything that has an assembly > gallium implementation in mesa can do opengl/vulkan > SPIR-V > whatever you want, which would make FOSS GPGPU offload for anything a reality. You could do something obscure like opengl/vulkan > SPIR-V > javascript (why the fuck would you do this?) if you really wanted to, and it can go in reverse too.
Just make sure to use it correctly, as some instructions are better left to an x86/dedicated CPU, such as single-threaded apps with x86-specific math instructions. Some assembly-to-*insert library here* implementations are going to be better than others, like the botnet AMD GPUs' vulkan implementation being better than their opengl one.
▶ No.1040420>>1040429
>>1039180
>why are GPGPU video de- and encoders so rare?
Depends on who you believe. For encoding in particular, some parts of the process (motion compensation, quantization) multithread nicely to GPUs, while others (entropy encoding) are serial operations ill suited to GPUs.
According to x264's devs, the bulk of CPU time in an encoder is spent on serialized workloads, so bothering to accelerate the other parts isn't worth their effort. But reading their invective on the subject, another explanation that leaps to mind is that they're just too assblasted about all the APIs Intel/AMD/nVidia have provided over the years to take advantage of them.
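The split is easy to caricature in code. A hedged stdlib-Python sketch: per-block cost metrics (like motion-search SAD) are independent and trivially parallel, while an adaptive coding stage (a toy move-to-front coder here, standing in for real arithmetic coding) is inherently serial, because each symbol's code depends on model state mutated by the previous symbol.

```python
# Sketch: the parallel-friendly half vs. the serial half of an encoder.
from concurrent.futures import ThreadPoolExecutor

def block_cost(residuals):
    # stand-in for a per-block motion-search metric (SAD); each block is
    # independent of every other block, so this maps across threads freely
    return sum(abs(v) for v in residuals)

def mtf_encode(symbols, alphabet):
    # toy adaptive entropy stage: each symbol is coded as its current
    # position in the table, then the table is reordered -- step k cannot
    # start until step k-1 has updated the model, hence strictly serial
    table = list(alphabet)
    codes = []
    for s in symbols:
        i = table.index(s)
        codes.append(i)
        table.insert(0, table.pop(i))
    return codes

def mtf_decode(codes, alphabet):
    # mirror of the encoder, replaying the same model updates
    table = list(alphabet)
    out = []
    for i in codes:
        s = table[i]
        out.append(s)
        table.insert(0, table.pop(i))
    return out

blocks = [[1, -2, 3], [0, 0, 4], [-5, 1, 1]]
with ThreadPoolExecutor() as pool:              # embarrassingly parallel stage
    print(list(pool.map(block_cost, blocks)))   # [6, 4, 7]

codes = mtf_encode("abracadabra", "abcdr")
assert mtf_decode(codes, "abcdr") == list("abracadabra")  # lossless round trip
```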
>>1039218
>abloo abloo muh proprietary codecs
Open sores implementations of proprietary codecs have been a thing for a while
>abloo abloo muh proprietary hardware
At least you eventually get around to admitting in >>1039575 that SPIR-V exists
>>1039219
>7-year-old technology that churns out 20% more bloated encodes is good enough for me
Gee, great insight grandpa. Upboat.
>>1039514
>I have found in my experience working with video that the GPU produces worse results or is very limited in what it can do in my tool chain
That's not what the post you replied to was talking about, namely GPGPU (which executes arbitrary code). What you're talking about here is fixed-function ASIC SIP cores like Quick Sync, NVENC, or VCE.
▶ No.1040429>>1040469 >>1040480
>>1040420
That picture tells you nothing of how it works. A picture goes into a magical bullshit engine and something comes out as data on the other side.
>What you're talking about here is fixed-function ASIC SIP cores like Quick Sync, NVENC, or VCE.
What you neglect to mention is that those functions are not always using dedicated hardware; sometimes they get generalized to the GPU because the hardware dev wanted to save money, and the customer can't tell the difference as long as it's 1% faster than the last generation due to software optimization of the same exact hardware rebranded several times.
>SIP cores
Now I know you are larping and a troll: system in package, also known as a spread-out SoC, has nothing to do with dedicated math functions in hardware. Just because you drop a buzzword doesn't mean the thing is actually using dedicated hardware. You have to prove it does or doesn't, like you can with FOSS software.
Enter every GPU made past 2012: just rebranded shit that is the same hardware underneath, with minor changes, or with software changes for the respective vendors.
▶ No.1040469
>>1040429
>Now I know you are larping and a troll, system in package, also known as a spread out SOC has nothing to do with dedicated math functions in hardware.
It does if said functions are hidden behind proprietary firmware.
▶ No.1040480
>>1040429
>That picture tells your nothing of how it works.
It tells you what can be offloaded to the GPU in a best-case scenario, and what can't, which was what I was talking about with GPGPU encoders.
>What you neglect to mention is that those functions are not always using dedicated hardware functions
That's kinda' exactly what that pic is an example of, y'know? Anyways, my point was that what you're talking about is (at least in part) a fixed-function encoder, which strictly MUST impose limits that don't exist for CPU encoders. Whereas a GPGPU-based encoder could function identically to any software decoder, because GPGPU can execute any arbitrary code (although, of course, architectural differences mean that a GPGPU encoder may be faster or slower when using different settings and features compared to execution on a CPU).
>system in package
<calling someone a LARPer while being too ignorant to understand what I said
That's not what "SIP" means in this context:
https://en.wikipedia.org/wiki/Semiconductor_intellectual_property_core
▶ No.1040506>>1040596 >>1041086
https://github.com/OpenVisualCloud/SVT-AV1
https://github.com/OpenVisualCloud/SVT-VP9
Nobody has mentioned Intel's SVT, I see.
Fortunately for you goys, it runs on AMD hardware too.
▶ No.1040596
>>1040506
How does it compare quality wise to the reference AV1 encoder?
▶ No.1041086
>>1040506
>That last benchmark
How in the world did that happen?
9900k behind TR and the i9 three times as fast.
▶ No.1050674>>1050741
Netflix will adopt AV1 for livestreaming now that Intel's SVT-AV1 encoder has managed to encode AV1 at 1080p60, abhorrent visual quality and all.
How would HEVC have performed as a general Interwebs streaming codec in an alt-timeline where it had a much saner patent licensing scheme?
It should've completely trashed VP9 for encoding 16MB mp4s, given its better quality/speed tradeoffs at settings above placebo.
▶ No.1050678>>1050741 >>1060215 >>1062100
Does nobody use Divx or Xvid?
▶ No.1050738>>1050741
>>1038679 (OP)
People are working on applying deep learning techniques to video and image compression.
AV1 will be obsolete by 2020.
http://cs231n.stanford.edu/reports/2017/pdfs/423.pdf
https://arxiv.org/pdf/1703.01467.pdf
▶ No.1050741
>>1050738
>buzzwords
Haha very funny. How do you want to decompress that without a format that has strict rules?
>>1050678
>Xvid
Hi grandpa.
>>1050674
>Netflix
Who cares? They also require some DRM bullshit in browsers.
▶ No.1056338>>1057783
I love Donald Trump! Heil Israel MIGA 2020!!!
▶ No.1057644>>1057783
I love Black women. They are so dominant and masculine!
▶ No.1057680>>1057783
I love eating bagels. Why do you guys want to kill the Jews?
▶ No.1057718>>1057783
Daily reminder that NEET-Socks deserve the rope.
▶ No.1057783>>1057845
>>1053871
>>1053092
>>1056338
>>1057644
>>1057680
>>1057718
>all this spam
Either Mods are asleep or glow in the dark.
▶ No.1057784>>1059702
>>1039048
>>Software decoding is the future.
>Only that it isn't.
Only it is.
FPGAs require software to configure the hardware, therefore software decoding is the future.
▶ No.1057845>>1057869
>>1057783
They're way too busy bumplocking meaningful threads (such as >>1057834) the moment they appear. What did you think?
▶ No.1057869
>>1057845
>meaningful threads
>not a thread = meaningful thread
For cuckchan /g/ perhaps, not here.
>What is QTDDTOT?
...and don't say the thread doesn't fall under QTDDTOT, because it does when the OP makes zero effort.
▶ No.1059702>>1059765 >>1059770 >>1059834
>>1057784
You're an imbecile.
Investing in software decoding is asking the hardware to scream its guts out while performing badly, while native hardware decoding just makes it part of its nature.
There isn't a single instance where software decoding is superior or lighter on resources or battery than hardware.
▶ No.1059765>>1059769
>>1059702
>There isn't a single instance where software decoding is superior or lighter on resources or battery than hardware.
Except the part where you want to do something to the image after decoding. Unless your codec is extremely heavy, having to copy back to main memory will nullify the performance gains.
In the end, it's only for laptop users and phone zombies that this is useful.
▶ No.1059769>>1059824
>>1059765
>He thinks people use either one or the other
Install a codec pack and have both, then. Now stop arguing nonsense: software decoding strains the hardware more, which renders your arguments useless, childish, ignorant.
▶ No.1059770
>>1059702
>native hardware decoding
It's not native. GPUs are often connected via PCIe which you can hardly call anything but an extension.
>to scream its guts outs
And when not used for graphics there's more available for other computing tasks. Thus the hardware can be fully utilized.
>where software decoding is superior
Software decoding is artifact free. Are you trying to tell me that hardware decoding isn't?
>lighter on resources or battery
Lighter on battery or battery?
Maximize utility and make a good fucking processor now!
▶ No.1059771
What I'm trying to say is that by using general instructions for everything, you have more general resources you can use for everything instead of just that one task, which might even be surpassed one day.
▶ No.1059824>>1059855
>>1059769
>codec pack
>windows fag calling others childish
How cute.
▶ No.1059834
>>1059702
>There isn't a single instance where software decoding is superior or lighter on resources or battery than hardware.
Meanwhile in mpv user manual...
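The manual's point shows up in mpv's own defaults: hardware decoding is opt-in, not the default (file name here is a placeholder):

```shell
# mpv uses software decoding unless you ask otherwise; the manual warns
# that hwdec can reduce quality and break filters in some cases.
mpv --hwdec=no video.mkv     # explicit software decoding (mpv's default)
mpv --hwdec=auto video.mkv   # opt in to hardware decoding where available
```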
▶ No.1059855>>1059888
>>1059824
>Codec pack
>Windows only
How cute
▶ No.1059888>>1060120
>>1059855
>he doesn't compile his own ffmpeg and mpv
>he needs a precompiled codec pack with a botnet binary (((installer)))
>he doesn't compile libvpx and libaom with PGO to get that 10% speed increase
Why the hell hasn't chodemonkey enabled AV1 support after all this time?
The codec is more than viable for basic webbming purposes at this stage, long encoding times be damned.
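For what it's worth, a basic AV1 webm encode is already a one-liner. A sketch assuming an ffmpeg build with libaom; file names and quality settings are illustrative:

```shell
# Constant-quality AV1 in webm via libaom-av1. -cpu-used trades compression
# for speed (higher = faster); -strict experimental is needed while the
# encoder is still flagged experimental in ffmpeg.
ffmpeg -i input.mkv \
  -c:v libaom-av1 -crf 30 -b:v 0 -cpu-used 4 \
  -strict experimental \
  -c:a libopus -b:a 64k output.webm
```

Long encoding times be damned, indeed.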
▶ No.1060036>>1060038 >>1060104
I'm going to use h.264 High 4.2 profile with a 422p pixel format forever until literally everything supports something better and I can encode to it at greater than 1 frame per core-hour on a half decent CPU. h.264 just fucking works right now and the only people who give a single fuck about Hi10 are manchild Tiananmen Square documentary connoisseurs who throw an autistic panic over extremely mild banding in zoomed in screenshots. The only people who care about h.265 are fucking morons who want to gain an encumbered codec dependency in everything again. The only people who care about AV1 are people who have literal datacenters of compute they can throw at the few things which benefit from the reduced bandwidth i.e. Google and Netflix. h.264 master fuckin race, fight me.
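The settings above map onto libx264 flags roughly like this (a sketch; input/output names and the CRF value are placeholders):

```shell
# High 4:2:2 profile, level 4.2, yuv422p pixel format via libx264.
ffmpeg -i input.mkv \
  -c:v libx264 -profile:v high422 -level:v 4.2 -pix_fmt yuv422p \
  -preset slow -crf 18 \
  -c:a copy output.mkv
```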
▶ No.1060038
>>1060036
Based except for H265. Non-muricans don't have to worry about patents.
▶ No.1060104
>>1060036
I just want a codec efficient enough to livestream griefer attacks in minecraft at 90 kb/s.
▶ No.1060105
▶ No.1060120
>>1059888 (checked)
classic terry.. love it
▶ No.1060215>>1060231 >>1060273 >>1060280 >>1060282
>>1050678
Well, Xvid is still the only sane, CPU-friendly (software en/decoding) libre codec option today. And it is a good choice for high-quality, high-bitrate video (where the newer codecs don't bring much improvement).
But for those who need low bitrates and "ok" quality, the other codecs surely provide better results.
>>1038808
>VP9 and h264 10-bit decode just fine with software on my PC.
They are either hardware accelerated or taking up quite a large amount of CPU.
>But I have a multi-core CPU, I don't notice.
You don't notice because you are living in your mom's basement and not paying the electricity bills. On a global scale, a 10x increase in CPU clock cycles can really add up.
>Xvid sucks grandpa
Seems like you are just associating Xvid with the AVI container and CD-sized P2P releases and have never seen/tried/known about its advantages.
▶ No.1060231>>1060295
>>1060215
>global scale
>being a globalist
Some japanese websites still rely on Flash for video streaming, you shithead.
>never seen/tried/known about its advantages.
They're huge?
>taking up quite a large amount of CPU
>muh compression takes up processing power
It always does. If it bothers you, you could just compress less lmao. It has nothing to do with the format. And if you invested the money going into hardware acceleration in CPU power, we'd have better performance for things other than watching movies too, you kike.
>paying the electricity bills
>for viewing a video on 1 device
I'm not one of the people who would write "muh your just poor", but the children of the people here in Thailand have no problems viewing videos on their Chinaphones, so you're really just poor.
I'm on vacation in Thailand on an island, you faggot. Even if Xvid were streaming friendly, your shitty compression wouldn't deliver.
▶ No.1060273
>>1060215
>On a global scale a 10x increase in CPU clock cycles can really add up.
As opposed to the additional network throughput and storage needed to shovel around bloated encodes in ancient codecs?
▶ No.1060280>>1061994
>>1060215
xvid/MPEG-4 ASP is as libre as H.264/MPEG-4 AVC to be completely honest - the tech is still patented. If you want a truly libre codec from the MPEG family, use MPEG-1 Part 2 or, to a lesser degree since it's still patented in irrelevant countries, MPEG-2 Part 2. Note the former will give you better performance at low bitrates, something people often forget.
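ffmpeg ships its own MPEG-1 Part 2 encoder, so trying this is trivial. A sketch with illustrative quality settings:

```shell
# Patent-expired MPEG-1 video with MP2 audio in an MPEG-PS container.
# -q:v is a fixed quantizer (lower = better quality, bigger file).
ffmpeg -i input.mkv -c:v mpeg1video -q:v 5 -c:a mp2 -b:a 128k output.mpg
```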
▶ No.1060282
>>1060215
Also, Theora is technically superior to MPEG-4 ASP in most use-cases, but the reference encoder and the one in ffmpeg suck tremendously as far as performance goes, worse even than xvidcore.
▶ No.1060295>>1060353
>>1060231
You realize that the cumulative effect of not compressing and not having hardware acceleration is indeed huge, right? That's exactly why big companies push for it: they have to stream that shit to billions of people.
To the individual it's also beneficial, as battery and disk space are saved. Oh, and you would be able to leave a video playing while doing something else.
By the way, every single computer these days accelerates H264, and that's why this format simply doesn't die and alternatives like H265, VP9 and the new AV1 can't make a dent in it.
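Whether that acceleration is actually wired up on a given box is easy to check (the second command assumes a Linux/VAAPI setup with libva-utils installed):

```shell
ffmpeg -hwaccels   # list the hwaccel methods this ffmpeg build supports
vainfo             # on VAAPI systems, list the profiles the GPU can decode
```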
▶ No.1060353>>1060474
>>1060295
>and you would be able to let a video playing while doing something else.
You can already do that, and if you invested the money in CPU power you'd be better off. Formats come and go.
>and other alternatives like H265, VP9 and the new AV1 can't make a dent on it.
Wrong. Anime torrent encodes use H265 (even at 10 or 12 bpp, because that somehow allows for stronger compression); the rest don't because of the patent issues. Google uses VP9 on the biggest video platform on the internet. Everyone here uses VP9 because it gives the best quality achievable in 16MB with the video formats web browsers currently support. AV1 isn't even finished: they finished the format but not the encoders and decoders, which are the most essential part.
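Hitting a hard 16MB cap is just arithmetic plus a two-pass encode. A sketch where the clip duration and audio bitrate are assumed example values:

```shell
# Work out the video bitrate for a 16 MB file: total kilobits divided by
# duration, minus the audio budget.
SIZE_KBIT=$((16 * 1000 * 8))    # 16 MB (decimal) expressed in kilobits
DURATION=120                    # assumed clip length in seconds
AUDIO_KBPS=64                   # assumed audio budget
VIDEO_KBPS=$((SIZE_KBIT / DURATION - AUDIO_KBPS))
echo "${VIDEO_KBPS}"            # prints 1002
# Two-pass VP9 (commands shown for reference, not run here):
# ffmpeg -i in.mkv -c:v libvpx-vp9 -b:v ${VIDEO_KBPS}k -pass 1 -an -f null /dev/null
# ffmpeg -i in.mkv -c:v libvpx-vp9 -b:v ${VIDEO_KBPS}k -pass 2 -c:a libopus -b:a ${AUDIO_KBPS}k out.webm
```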
▶ No.1060474>>1060750 >>1062093
>>1060353
>You can aready do that and if you invested the money in CPU power you'd be better off. Formats come and go.
He doesn't know about the end of Moore's law. He doesn't know people have better things to do with their money than to throw it at highly obsolescence-prone products.
>Muh codecs come and go
And that's precisely why they built AV1 to be a standard for all future codecs to be built on, so that they would be hardware accelerated. This is one of the key points of AV1.
>Wrong.
Just because the fringe uses it doesn't mean it got adopted outside.
You see, you're just incredibly biased because you live in a bubble.
▶ No.1060750>>1060752 >>1062093
>>1060474
>Moore's law
Not an issue, media codecs are embarrassingly parallel.
▶ No.1060752>>1060774
>>1060750
Doesn't mean anything, really. Same problems.
▶ No.1060774>>1062025
>>1060752
Yes, just see the situation we're in now.
▶ No.1061994>>1062063
>>1060280
MPEG4 ASP is a standard. Xvid is a libre codec that encodes compliant to that standard.
MPEG4 AVC is a standard. H.264 is a patented codec that encodes compliant to that standard, and includes heavy and idiotic restriction such as: you are expected to pay licensing royalties if you want to monetize any of your videos encoded with H.264.
MPEG4 AVC and its algorithms are not CPU friendly, therefore they require acceleration by patented ASIC hardware.
AV1 may be libre, but because it'll most likely require hardware acceleration, it will not be usable on libre hardware for a long time.
▶ No.1062025>>1062032
>>1060774
AV1 is just barely past feature freeze, and all available software encoders are still completely unoptimized. Compare, for instance, the x265 HEVC software encoder, which gets similar compression ratios to AV1 but performs at hundreds of times the FPS.
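The gap is easy to reproduce locally: encode the same clip with both encoders to a null sink and compare the fps figure in ffmpeg's final stats (settings illustrative, clip name a placeholder):

```shell
# Same source, both encoders, no output file; compare the reported fps.
ffmpeg -y -i clip.y4m -c:v libx265 -crf 28 -f null - 2>&1 | tail -n 2
ffmpeg -y -i clip.y4m -c:v libaom-av1 -crf 30 -b:v 0 -strict experimental -f null - 2>&1 | tail -n 2
```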
▶ No.1062032>>1062091
▶ No.1062042
>>1038727
>body positivity is bad
imagine if all the mentally ill trannies learned to love and accept their god given penises
▶ No.1062063
>>1061994
>MPEG4 AVC is a standard. H.264 is a patented codec that encodes compliant to that standard
No, retard. They're both exactly the same thing, and you're thinking of x264.
>MPEG4 AVC and it's algorithms are not CPU friendly therefore they require acceleration by patented ASIC hardware.
Prove it. The biggest gain of AVC over ASP is CABAC, which is not easy on GPUs and ASICs since it's serial in nature (like most entropy coding).
▶ No.1062091
>>1062032
>muh asics
Not what was being discussed
▶ No.1062093>>1062131 >>1062239 >>1062249
>>1060474
>Moore's law
Kek, that's called dripfeeding, retard.
>highly obsolete-prone
Acceleration for certain codecs or tasks is way more "obsolete-prone" than CPU power ever will be.
>fringe
The fringe is me. I don't watch netflix or TV, but anime etc.
I really don't care what normies use, and considering they were happy with 720p TVs all this time, they don't care either.
>>1060750
This.
More cores, more efficiency and less heat will be the future. Waiting for those ARM machines.
▶ No.1062100
▶ No.1062131
>>1062093
>Moore's law
<Kek, that's called dripfeeding retard.
>He doesn't know, while being smug
Hahahaha
▶ No.1062239
>>1062093
>The fringe is me
>Normie
>Kek
You're an embarrassing redditor trying to fit in.
▶ No.1062249
Has anything happened regarding the (((Sisvel))) VP9/AV1 patent pool?
>>1062093
>moar cores
I wouldn't be so sure of that should Jewntel discover the wonders of NVCTs.