danielv123

danielv123 t1_it9sboy wrote

It's about bitrate. All video is compressed, and compression introduces artifacts - you can see this on low-res YouTube videos, for example: rather than seeing large squares of uniform color, you see weird blob-like patterns, especially in areas with gradients.

The bitrate is how much compressed data is transferred per second. More bitrate means fewer artifacts, but it's more expensive for the provider.

A typical 4K Blu-ray runs at about 100 Mbit/s. Apple's high-quality streaming tops out at around 40, YouTube typically runs at about 15 but can reach as much as 40 in some scenes, and Netflix doesn't go past 20.

This isn't an inherent limitation of streaming though, it's just a question of how much the provider wants to spend. I stream shows from Plex just fine at 120 Mbit/s.
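To put those numbers in perspective, here's a rough back-of-the-envelope sketch (Python; the bitrates are just the ballpark figures above, not exact service specs) of how much data each bitrate works out to per hour of video:

```python
# Rough data volume per hour of video at a given bitrate.
# Mbit/s -> GB/h: multiply by 3600 seconds, divide by 8 bits/byte, divide by 1000 MB/GB.
def gb_per_hour(mbit_per_s: float) -> float:
    return mbit_per_s * 3600 / 8 / 1000

# Ballpark bitrates from the comment above (approximate).
for label, mbps in [("4K Blu-ray", 100), ("Apple (peak)", 40), ("YouTube (typical)", 15), ("Netflix (cap)", 20)]:
    print(f"{label}: ~{gb_per_hour(mbps):.0f} GB per hour")
```

So a full Blu-ray bitrate is roughly 45 GB per hour served to every viewer, which is why providers cap it well below that.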

5

danielv123 t1_it6gsi8 wrote

Yes, the Apple chip won hands down in workloads it has hardware acceleration for, like some video editing workflows. That doesn't make the CPU faster in general though. There's a reason you haven't seen data centers full of M1 Macs the way you did with the old PlayStations.

2

danielv123 t1_it4qmd1 wrote

Yes, which is why the nodes now have other names but are still colloquially grouped by nm. TSMC, for example, has N4, which is just a variant of their 5 nm process. They also have different suffixes that run slightly different settings on the same machines to optimize for clocks, power, etc.

5

danielv123 t1_irm9qiz wrote

Actually, that scenario doesn't require a self-accelerating intelligence, just a self-advancing one. There are growth types other than exponential and quadratic. It could run into the same issues we do with Moore's law and frequency scaling, and only manage minor improvements with increasing effort for each step.
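As a toy illustration (purely made-up numbers, not a model of anything real), a system can keep improving itself while each step yields less and costs more:

```python
# Toy sketch: a self-advancing (not self-accelerating) system where every
# improvement step yields a smaller gain and costs more effort than the last.
capability = 1.0
effort = 1.0
for step in range(1, 6):
    gain = 0.5 / step      # diminishing returns, like late-stage Moore's law
    capability += gain
    effort *= 2            # each step roughly doubles in cost
    print(f"step {step}: capability={capability:.2f}, effort={effort:.0f}")
```

Capability keeps creeping up, but the effort curve explodes long before the capability curve does - growth without takeoff.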

5