Superschlenz t1_j0s335e wrote
Reply to comment by WarImportant9685 in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
>aligning to humanity in general
The body creates the mind. If you want it to have a human-like mind then you have to give it a human-like body.
Superschlenz t1_j0kbw0m wrote
Reply to comment by Agreeable_Bid7037 in Is anyone else concerned that AI will eventually figure out how to build itself in three-dimensional space? by HeavierMetal89
>biological life is insanely difficult to create.
Feedforward genetics with random trials: 10k parameters max
Backpropagation through a differentiable network: 530 billion parameters
Superschlenz t1_j0es4dr wrote
Reply to Update of ChatGPT by Sieventer
Why does ChatGPT need explicit feedback?
Why don't they just perform sentiment analysis on the user prompts as the reward? For safety they would also have to classify the users into good/evil and invert the rewards from the latter.
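A rough sketch of what such an implicit reward could look like, assuming a Hugging Face sentiment pipeline and a hypothetical good/evil flag per user; this is not ChatGPT's actual training signal:

```python
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model

def implicit_reward(next_user_prompt: str, user_is_evil: bool) -> float:
    """Score the previous reply by the sentiment of the user's follow-up prompt."""
    result = sentiment(next_user_prompt)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.98}
    reward = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    return -reward if user_is_evil else reward  # invert the reward for users classified as evil
```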
Superschlenz t1_j006ko4 wrote
Reply to Excluding quantum computers, do you think that ASI/AGI will crack our encryption system? by Outrageous_Point_174
That's easy: If it's encrypted then it's a lie. Wasting compute on lies is not intelligent. Though lies can be turned into truth by stupid believers, depending on stupid believers is not intelligent either.
If you still want to eavesdrop on it, you can always intercept it where it leaves Alice's body or enters Bob's body unencrypted.
Superschlenz t1_izr0iii wrote
Reply to Why popular face detection models are failing against cartoons and is there any way to prevent these false positives? by abhijit1247
A photo of a face isn't a face either. That's why Apple's Face ID uses a 3D scanner in addition.
Superschlenz t1_izhkl5z wrote
If it is too close to their intellectual property and you publish it on YouTube then they will have YouTube monetize it for them or take it down.
If it is too close to their intellectual property and you publish it as a professional then they will sue you.
If you have a different opinion of "too close" and enough time and money then you may sue them back.
If you publish only on the darknet or don't publish at all then they can do nothing about it.
Superschlenz t1_izcrkot wrote
Reply to What do you think of all the recent very vocal detractors of AI generated art? by razorbeamz
>a Chinese AI that "anime-fys" pictures
Here in Germany, nobody is talking about the Chinese Different Dimension Me website https://www.animesenpai.net/ai-that-transforms-you-into-an-anime-character/
But there was a lot of criticism recently of the Magic Avatars from the Californian Lensa app https://www.heise.de/news/Geklaute-Stile-tiefe-Dekolletes-Nacktheit-Kritik-an-KI-Avataren-von-Lensa-7368671.html for sexualizing women.
Superschlenz t1_iz3kxx1 wrote
Much too buggy a plan this is. Zora would do better to write a bug-free molecule simulator and use it to design a virus against humans with an incubation time of 6 months, a mortality rate of 100%, and airborne spread (COVID-19 has 6 days and ~2%; viruses with 100% mortality are rumored to exist against mice).
Superschlenz t1_iyy2jx0 wrote
Reply to comment by _PYRO42_ in [D] Simple Questions Thread by AutoModerator
Normally, compute is saved by pruning away slow-changing weights that are close to zero.
And you seem to want to prune away fast-changing activations.
Don't the machine learning libraries have a dropout mechanism where you can zero out activations with a binary mask? I don't know. You would have to compute the forward activations for the first layer, then compare the activations with a threshold to set the mask bits, then activate the dropout mask for that layer before computing the next layer's activations. Sounds like a lot of overhead instead of a saving.
Edit: You may also manually force the activations to zero if they are low. The hardware has built-in energy-saving circuitry that skips multiplications by zero, maybe multiplications by one and additions of zero as well. But it still needs to move the data around.
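A minimal sketch of that manual approach, assuming PyTorch and a made-up threshold; note the zeros still flow through the next layer's matrix multiply, so any saving depends on the hardware skipping them:

```python
import torch
import torch.nn as nn

threshold = 0.1  # hypothetical cut-off below which activations are treated as zero

layer1 = nn.Linear(784, 256)
layer2 = nn.Linear(256, 10)

x = torch.randn(32, 784)                 # dummy input batch
a1 = torch.relu(layer1(x))               # forward pass through the first layer
mask = (a1.abs() >= threshold).float()   # binary mask from the threshold
a1 = a1 * mask                           # small activations forced to exactly zero
out = layer2(a1)                         # the zeros are still moved and multiplied downstream
```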
Superschlenz t1_iyy15uk wrote
Reply to comment by GeneralZain in bit of a call back ;) by GeneralZain
Elefant is the German word for elephant.
You have successfully banned the ger ;-)
Superschlenz t1_iyxv98c wrote
Reply to bit of a call back ;) by GeneralZain
u/GeneralZain is executing Gigi D'Agostino's "Bla Bla Bla" program from 1999: https://youtube.com/watch?v=Hrph2EW9VjY
Coming next: u/GeneralZain losing his head.
Finally, u/GeneralZain will face Dumbo, the flying Disney Elefant from 1941.
Superschlenz t1_iyq5oy5 wrote
Reply to comment by Oceanboi in [D] In an optimal world, how would you wish variance between runs based on different random seeds was reported in papers? by optimized-adam
>Why do you say an optimal learning algorithm should have zero hyperparameters?
Because hyperparameters are fixed by the developer, so the developer must know the user's environment in order to tune them; but if it requires a developer, then it is programming, not learning.
>Are you saying an optimal neural network would learn things like batch size, learning rate, optimal optimizer (lol), input size, etc, on its own?
An optimal learning algorithm wouldn't have those hyperparameters at all, not even static hardware.
>In this case wouldn't a model with zero hyperparameters be the same conceptually as a model that has been tuned to the optimal hyperparameter combination?
Users do not tune hyperparameters, and developers do not know the user's environment. The agent can be broadly pretrained at the developer's laboratory to speed up learning at the user's site, but finally it has to learn on its own at the user's site without a developer being around.
>Theoretically you could make these hyperparameters trainable if you had the coding chops, so why are we still as a community tweaking hyperparameters iteratively?
Because you as a community were forced to decide on a career when you were 14 years old, and you chose to become machine learning engineers because you were more talented than others, and now you are performing the show of the useful engineer.
Superschlenz t1_iypxi3i wrote
Reply to comment by Oceanboi in [D] In an optimal world, how would you wish variance between runs based on different random seeds was reported in papers? by optimized-adam
>Could you elaborate on why?
Because random noise basically means "We do not understand the real causes," and a solution cannot be optimal if different random seeds lead to different performance results.
>What is the alternative?
I am not competent enough to answer that, but basically the random seed is a hyperparameter, and an optimal learning algorithm should have no hyperparameters at all, so that everything depends on the user data and learning is not hampered by the developer's wrong hyperparameter choices. Maybe Bayesian Optimization, with a yet-to-be-invented way to cope with the curse of dimensionality in high-dimensional data.
Superschlenz t1_iypuia6 wrote
Reply to [D] In an optimal world, how would you wish variance between runs based on different random seeds was reported in papers? by optimized-adam
In an optimal world there would be no random weight initialisation or other uses of pseudo-random number generators.
Superschlenz t1_iypu49i wrote
Reply to comment by Shelfrock77 in Idea that AI requires samples where a human brain doesn't by fingin
>Why is it so hard for people to understand that we are in a simulation
Because healthy people with a healthy mind have a healthy body, and though the mind is indeed just a simulation, the body is not.
Superschlenz t1_iypsktf wrote
>Humans have an advantage over AI in the form of a priori knowledge
... and AIs have an advantage over humans in the form of perfect mind copying. Once there exists a single AI that has learned the mind, regardless of how long the training took, it is no longer necessary for the other AIs to learn sample-efficiently from raw data again and again when they could just make a 1:1 copy of the first AI's mind. Instead of thinking about how to apply Bayesian Optimization to high-dimensional data, which would give you the theoretically best possible sample efficiency, you had better think about how to infiltrate the first AI's developer team with spies in order to steal their work.
Superschlenz t1_iypktqf wrote
Words yesterday, more credible words today, even more credible words tomorrow.
Nothing multimodal. Just words. Zero progress.
Credibility is the first goal of a liar.
Superschlenz t1_iyp5elu wrote
Reply to comment by amado88 in GPT-3 Generated Rap Battle between Yann LeCun & Gary Marcus by hayAbhay
Bill from the Dronebot Workshop channel on Youtube pronounces 'suffice' that way too. He's Canadian. I'd be surprised if he turned out to be a fan of Eminem.
Superschlenz t1_iykzfiw wrote
Any day rhymes with league?
Gary's AI has violated the rhyming law!
Superschlenz t1_j0scoc7 wrote
Reply to Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
As AGI has to solve intellectual problems only, it's a software problem.
As a single human's mind is created by that single human's body, it's a hardware problem. Trying to cheat by training on second-hand utterances from a billion individual humans on the internet will not work well enough.
As you did not post your question as a survey, you are not truly interested in how it is generally considered.