CommunismDoesntWork
CommunismDoesntWork OP t1_j24pp39 wrote
Reply to comment by d00m_sayer in ChatGPT is cool, but for the next version I hope they make a ResearchAssistantGPT by CommunismDoesntWork
By all means, show me the right prompt
CommunismDoesntWork t1_j1vsqoe wrote
Reply to [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
I can tell just by looking at the jpeg-ness of the image
CommunismDoesntWork t1_j03qv7i wrote
Reply to [P] Are probabilities from multi-label image classification networks calibrated? by alkaway
Why do you need probabilities? You'd be better off spending more time making your model more accurate, period, even if it can be confidently wrong sometimes.
CommunismDoesntWork t1_izj06ql wrote
Reply to comment by rePAN6517 in [R] Large language models are not zero-shot communicators by mrx-ai
Yeah, we're at the point where models are improving faster than we can evaluate them lol
CommunismDoesntWork t1_izj03bg wrote
Reply to comment by Flag_Red in [R] Large language models are not zero-shot communicators by mrx-ai
ChatGPT came out after this paper was written. We're at the point where models are improving faster than we can evaluate them lol
CommunismDoesntWork t1_iz1qf6u wrote
You don't build your own network. You define your problem, then choose the best network for that problem. For instance, is your problem a classification problem? Then find an off-the-shelf classifier to use.
CommunismDoesntWork t1_iyecw45 wrote
Reply to comment by diviramon in [R] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models - Massachusetts Institute of Technology and NVIDIA Guangxuan Xiao et al - Enables INT8 for LLM bigger than 100B parameters including OPT-175B, BLOOM-176B and GLM-130B. by Singularian2501
> MF8
I've never heard of this and google isn't being helpful. Any links?
CommunismDoesntWork t1_iydruw8 wrote
Reply to comment by diviramon in [R] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models - Massachusetts Institute of Technology and NVIDIA Guangxuan Xiao et al - Enables INT8 for LLM bigger than 100B parameters including OPT-175B, BLOOM-176B and GLM-130B. by Singularian2501
Has anyone checked to see if training fundamentally needs all that precision? Intuitively, I can understand why it works better that way, but if a model can be converted to int8 after the fact without taking a huge hit in accuracy, then I don't see why an optimizer couldn't find that int8 network in the first place.
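One intuition for why training wants more precision than inference, sketched as a toy numpy example (the numbers here are illustrative, not from any paper): SGD updates are typically far smaller than one int8 quantization step, so if the weights themselves lived in int8 during training, most updates would round away to nothing.

```python
import numpy as np

# Toy illustration: a weight stored on an int8 grid, and a typical
# small SGD update applied in that quantized space.
scale = 1.0 / 127.0                  # size of one int8 quantization step
w = np.int8(64)                      # weight as stored in int8
grad_update = 0.001                  # a typical small SGD step (float)

# Dequantize, apply the update, re-quantize. The update is much smaller
# than one quantization step, so the weight never moves.
w_new = np.int8(np.clip(np.round((w * scale - grad_update) / scale), -127, 127))
print(w, w_new)
```

This is one common explanation for why PTQ works while naive int8 training doesn't: the *final* weights tolerate coarse rounding, but the *path* the optimizer takes to reach them is made of steps finer than the int8 grid.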
CommunismDoesntWork t1_iy3qmoa wrote
Reply to comment by Neurogence in Why is VR and AR developing so slowly? by Neurogence
Moore's law is all about hardware lol. But yeah, your intuition about smartphones is correct, Moore's law has slowed down.
CommunismDoesntWork t1_ixcqvfr wrote
Reply to [R] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models - Massachusetts Institute of Technology and NVIDIA Guangxuan Xiao et al - Enables INT8 for LLM bigger than 100B parameters including OPT-175B, BLOOM-176B and GLM-130B. by Singularian2501
What's the theory behind PTQ? As in, if quantization can preserve accuracy and create a massive speed up, why wouldn't you train on int8 to begin with? Speeding up training allows you to use even more parameters, or cut costs.
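For concreteness, the basic PTQ recipe the paper builds on can be sketched in a few lines of numpy (this is a generic symmetric per-tensor int8 scheme with made-up toy weights, not SmoothQuant itself): map the float range onto [-127, 127], round, and check how little of the signal the rounding destroys.

```python
import numpy as np

# Toy post-training quantization: symmetric per-tensor int8.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)

# One scale for the whole tensor, mapping the max magnitude to 127.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize and measure the relative error the rounding introduced.
deq = q.astype(np.float32) * scale
rel_error = np.abs(deq - weights).mean() / np.abs(weights).mean()
print(f"mean relative error: {rel_error:.4f}")
```

For well-behaved (roughly Gaussian) weights the relative error is around a percent, which trained networks tolerate at inference time. The catch, and the thing SmoothQuant actually addresses, is that LLM *activations* have outliers that blow up the scale, which is a separate problem from whether you could have trained in int8 to begin with.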
CommunismDoesntWork t1_iwnho2l wrote
Reply to comment by FTRFNK in A typical thought process by Kaarssteun
Unlike internet commies, I have a job. I posted my reply now. Check it out, you might learn something.
CommunismDoesntWork t1_iwnhkya wrote
Reply to comment by gynoidgearhead in A typical thought process by Kaarssteun
>First of all, I'm going to need you to define capitalism
Capitalism is an economic system, and all economic systems are defined by a set of rules. The rules of capitalism are: you can't steal or harm another person's private property, and you can't break a contract. This is in contrast to, for instance, Chinese communism under Mao, which had a rule stating that you can't own farmland and that all farmland would be owned by the community. This led to food scarcity, because no one had an incentive to produce much: anything they produced would be split up equally among the community. There was actually a small community who agreed to privatise their farmland such that the owners of the land got to keep all the food they produced. Basically, they reinvented capitalism by creating private property. That town ended up producing so much food that China eventually adopted capitalism as its main economic system after Mao died, and the rest is history.
>Automation has reduced scarcity more than any industrial paradigm in history. Automation is possible both with and without capitalism.
Things don't just magically happen. Individuals have to make things happen, and individuals are guided by incentives. So you can't just say "industrial paradigm" like it's a magic wand. It doesn't mean anything. If there's an incentive to be more efficient, then sure, there will be automation. But if there is no incentive, there will not be automation. So when you say "automation is possible with and without capitalism", you need to be specific. Which exact economic systems have an incentive to create automation? Certainly not communism where things are collectively owned, as we saw in Maoist China.
>Capitalism means the primacy of capital and capital holders as the decision engine of the economy - i.e., capital holders control the means of production and hold sway over the rules of the game
By that definition, you could argue that the Chinese farmers who collectively owned their community farm were all "capital holders". So that's not a very good definition. Your definition also doesn't allow us to make predictions about how individuals would behave in such a system, which is the goal of any science. This is why in economics, economic systems are defined in terms of rules. It's way less ambiguous and allows economists to make predictions. Did you take microeconomics in college? It's a really good course.
>But capitalism has literally negative interest in eliminating scarcity...
And yet despite all that waste, global poverty has never been lower: https://ourworldindata.org/grapher/share-of-population-living-in-extreme-poverty-cost-of-basic-needs?country=~OWID_WRL
So clearly there's more to the story for each of your points. Food spoils, logistics is expensive, etc etc.
>and keep supply low by supporting restrictive zoning laws that forbid the construction of multi-family residences like apartments and condos.
When the government creates new rules and regulations that restrict the free market, blame the government, not capitalism. Also, it's weird to blame companies for zoning restrictions when the most famous NIMBY city is San Francisco and the people who live there.
>it sounds like capitalism is the very source of most of the scarcity in both of these cases.
"Source". Scarcity is the default. Things don't exist unless individuals make them exist. So the fact that there's so much food as there is right now is proof that capitalism has reduced scarcity. And again, global poverty has been dropping significantly.
>Then explain why insulin costs $5 to make and $300 to buy, smart guy.
Because the FDA makes it very expensive to do business. You can create insulin at home, but you'd go to jail if you tried to sell it to anyone without approval from the FDA. I could also say "explain why coffee cups are so cheap compared to insulin, smart guy." In general, when one-off things are expensive, it's usually caused by the government.
>Capitalism is not "everything our economy makes".
Right, capitalism is private property and contracts. Those two simple rules happen to incentivise individuals to go out into the world and create everything the economy makes. But in the context of comparing different economic systems, it's pretty fair to say that capitalism is "everything our economy makes" as a shorthand.
>Capitalism is not "freedom"
Right, because capitalism is simply the enforcement of private property rights and contracts. But compared to other economic systems I'd argue it's one of the most free economic systems possible.
>Capitalism is not even "free markets"
Well, you can't have free markets if you don't have private property and contracts, so it sort of is.
CommunismDoesntWork t1_iwjxaps wrote
Reply to comment by gynoidgearhead in A typical thought process by Kaarssteun
>capitalism is unsustainable.
That makes no sense lol. Capitalism has reduced scarcity more than any economic system in history, and it's well on its way towards creating post-scarcity. Aside from inflation caused by government, capitalism has caused the price of everything to drop dramatically. And as automation increases, the price of things will continue to fall. When the cost to produce something finally reaches 0, that good or service can be considered post-scarce and will be infinitely abundant. All thanks to capitalism.
CommunismDoesntWork t1_iwjukmm wrote
Reply to comment by ElectronicLab993 in A typical thought process by Kaarssteun
How would resources get concentrated when they're being made more and more plentiful? Just doesn't add up.
CommunismDoesntWork t1_iwjuemg wrote
Reply to comment by gynoidgearhead in A typical thought process by Kaarssteun
Commies get out
CommunismDoesntWork t1_it5bsz1 wrote
Reply to comment by stuffingmybrain in [P] libtensorflow_cc: Pre-built TensorFlow C++ API by lennart-reiher-ika
They shouldn't
CommunismDoesntWork t1_it4dch4 wrote
Reply to comment by bmer in [P] libtensorflow_cc: Pre-built TensorFlow C++ API by lennart-reiher-ika
Why would you want this to work on windows?
CommunismDoesntWork t1_ist73vi wrote
>I mainly did NLP work for 3 years
Did you apply to an NLP job? Machine learning skills aren't transferable. "ML engineers" don't exist. You can be an NLP engineer, a computer vision engineer, a data scientist (they work with tabular data and probably use pandas), and I'm sure you can be whatever the stock market guys call themselves. But you absolutely can't be all of them just because you know one of them.
However, I'm positive it wouldn't take long for them to train you on their domain if all they're doing is pandas. So it's weird they're being selective.
CommunismDoesntWork t1_is3sqc2 wrote
Reply to comment by Kinexity in World’s fastest internet network has been upgraded to mind-boggling 46 Terabit/s by Shelfrock77
5.75
English uses decimal points, not commas.
CommunismDoesntWork t1_is0wj7j wrote
Reply to comment by _Arsenie_Boca_ in [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
If an architecture is more scalable, then it's the superior architecture.
CommunismDoesntWork t1_irwxgxk wrote
Reply to comment by _Arsenie_Boca_ in [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
>Are transformers really architecturally better than LSTMs or is their success mainly due to the huge amount of compute and data we throw at them?
That's like asking if B-trees are actually better than red-black trees, or if modern CPUs and their large caches just happen to lead to better performance. It doesn't matter. If one algorithm works theoretically but doesn't scale, then it might as well not work. It's the same reason no one uses fully connected networks, even though they're universal function approximators.
CommunismDoesntWork t1_irvtvle wrote
Reply to [P] Pure C/C++ port of OpenAI's Whisper by ggerganov
Have you tried rewriting these in rust?
CommunismDoesntWork t1_irdbsf0 wrote
Does it support SupCon? That's the only network I've had any success with in production
CommunismDoesntWork OP t1_j2515oz wrote
Reply to comment by d00m_sayer in ChatGPT is cool, but for the next version I hope they make a ResearchAssistantGPT by CommunismDoesntWork
I got that same answer multiple times from ChatGPT, as you can see in my post. Immunosuppression doesn't equal deprogramming the immune system. It's like comparing a hammer to a scalpel. It also doesn't answer my question of why exactly gradual desensitization can't cure autoimmune disorders. I wanted to know the exact science behind those two things, down to the molecular level. Basically, I wanted to keep asking why until it gave me a complete understanding of the biology. My problem is that ChatGPT wouldn't go into lower levels of detail, and instead got stuck repeating the same ultra-high-level summaries. The best way I can describe it is that ChatGPT would be great at writing Medium articles, but not great at talking about bleeding-edge research.