Phoneaccount25732
Phoneaccount25732 t1_j7pncu3 wrote
Reply to [D] What do you think about this 16 week curriculum for existing software engineers who want to pursue AI and ML? by Imaginary-General687
You messed up the text in box 12, it's a duplicate of box 11.
Phoneaccount25732 t1_j477kis wrote
Reply to comment by hazard02 in [D] Bitter lesson 2.0? by Tea_Pearce
The reason Google doesn't bother is that they are aggressive about acquisitions. They're outsourcing the difficult, risky work.
Phoneaccount25732 t1_j35k8s8 wrote
Reply to comment by thehodlingcompany in [Discussion] Given the right seed (or input noise) and prompt, is it theoretically possible to exactly recreate an image that a latent diffusion model was trained on? by [deleted]
Learned index functions are similar to compression algorithms and might be of interest here, but I think I agree with your argument anyway because they're very overparameterized.
Phoneaccount25732 t1_j2l2hcq wrote
Reply to comment by velcher in [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
What is there in this genre other than learned indices and surrogate modeling for physics problems?
Phoneaccount25732 t1_j2a6cxn wrote
Reply to comment by HateRedditCantQuitit in [D] Best Resources to Become Self Taught Machine Learning Expert by em_Farhan
Linear algebra can be skipped IMO.
Phoneaccount25732 t1_j2a64te wrote
Reply to comment by em_Farhan in [D] Best Resources to Become Self Taught Machine Learning Expert by em_Farhan
Deep neural networks are a specific type of statistical model, essentially built by composing many simple statistical models. In that sense, statistics is the broader field.
Essentially all machine learning methods come from probability, statistics, and signal processing.
Machine learning contains other methods than deep neural networks, but they are not as popular and I wouldn't worry about them at first.
Understand linear regression, probability and statistics first, then try to understand neural networks. Focus on statistical modeling, not null hypothesis significance testing.
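The "composing simple statistical models" framing above can be made concrete: a one-hidden-layer network is just linear regressions stacked through a nonlinearity. A minimal NumPy sketch (the shapes, weights, and function names are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A single "basic statistical model": linear regression, y = X @ w + b.
def linear(X, w, b):
    return X @ w + b

# A one-hidden-layer network is two linear models composed
# through a nonlinearity (here ReLU).
def tiny_mlp(X, w1, b1, w2, b2):
    hidden = np.maximum(0.0, linear(X, w1, b1))  # linear model + ReLU
    return linear(hidden, w2, b2)                # another linear model on top

X = rng.normal(size=(4, 3))                      # 4 samples, 3 features
w1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
w2, b2 = rng.normal(size=(5, 1)), np.zeros(1)

print(tiny_mlp(X, w1, b1, w2, b2).shape)         # (4, 1): one output per sample
```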
Phoneaccount25732 t1_j19v3re wrote
Reply to [P] App that Determines Whether You've Been Naughty or Nice Based on Your Reddit Comments by Steven_Johnson34
Focal loss is typical for class imbalanced data.
On grounds of elegance, I prefer solutions that modify the training objective over resampling-based solutions. I'm not sure whether that's a good attitude to have; if anyone has thoughts, I'd like to hear them.
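For reference, the binary form of focal loss is a small modification of cross-entropy that down-weights easy examples. A NumPy sketch of the Lin et al. (2017) formulation (the function name and default `gamma`/`alpha` values are just the common choices, not from any particular library):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-12):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive class, y: label in {0, 1}.
    gamma > 0 shrinks the loss on well-classified examples;
    alpha balances the positive/negative classes.
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)           # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

y = np.array([1, 1, 0])
p = np.array([0.9, 0.6, 0.1])
# The confident correct prediction (0.9) contributes far less loss
# than the uncertain one (0.6), which is the point of the (1 - p_t)^gamma term.
print(focal_loss(p, y))
```

With `gamma=0` it reduces to ordinary alpha-weighted cross-entropy, which is a handy sanity check.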
Phoneaccount25732 t1_j0q7map wrote
Reply to comment by Celmeno in [R] Getting arXiv article published. Endorsement needed, cs. by No-Stay9943
Some of us are in it for the betterment of mankind.
Phoneaccount25732 t1_j06kihn wrote
Reply to comment by [deleted] in [D] Simple Questions Thread by AutoModerator
With a background in OR and fluid dynamics, once you get going you should check out Kidger's work on Neural Differential Equations.
Phoneaccount25732 t1_izzmxc0 wrote
Reply to comment by DeepNonseNse in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
They might be useful in what are now called operator-learning contexts.
Phoneaccount25732 t1_iznc3s5 wrote
Reply to comment by martenlienen in [R] torchode: A Parallel ODE Solver for PyTorch by martenlienen
I guess you have no choice but to add support for PDEs and rebrand to TorchNDE.
Phoneaccount25732 t1_izl9pk8 wrote
Monte Carlo Dropout during the forward pass can be used for variance estimation.
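The idea is to keep dropout active at inference and treat the spread of repeated stochastic forward passes as a rough predictive-variance estimate. A minimal NumPy sketch (architecture, weights, and the 200-sample count are all arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_with_dropout(x, W1, W2, p_drop=0.5):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(0.0, x @ W1)               # hidden layer + ReLU
    mask = rng.random(h.shape) > p_drop       # drop units at random
    h = h * mask / (1.0 - p_drop)             # inverted-dropout scaling
    return h @ W2

W1 = rng.normal(size=(3, 32))
W2 = rng.normal(size=(32, 1))
x = rng.normal(size=(1, 3))

# MC dropout: repeat the stochastic pass; the mean is the prediction,
# the standard deviation is the variance estimate.
samples = np.array([forward_with_dropout(x, W1, W2) for _ in range(200)])
print(samples.mean(), samples.std())
```

In a framework like PyTorch the equivalent trick is leaving the dropout layers in train mode while everything else is in eval mode.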
Phoneaccount25732 t1_iz8kaym wrote
Reply to [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
Kingma's Reparameterization Trick.
Minsky and Papert on why single-layer perceptrons can't solve XOR, and the follow-up work showing that MLPs can.
Wolpert on No Free Lunch in search and optimization.
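The XOR point above is easy to verify by hand: no single linear threshold unit separates XOR, but a two-layer network does. A sketch with hand-picked weights (these are just one of many weight settings that work):

```python
import numpy as np

def step(z):
    """Heaviside threshold: the activation of a classic perceptron unit."""
    return (z > 0).astype(int)

def mlp_xor(x):
    """Two-layer threshold network computing XOR with fixed weights."""
    h1 = step(x[:, 0] + x[:, 1] - 0.5)   # fires when at least one input is 1
    h2 = step(x[:, 0] + x[:, 1] - 1.5)   # fires only when both inputs are 1
    return step(h1 - h2 - 0.5)           # "at least one, but not both"

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(mlp_xor(X))  # [0 1 1 0]
```

A single `step(x @ w + b)` unit can only draw one line through the plane, which is exactly why it can't produce the [0 1 1 0] pattern.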
Phoneaccount25732 t1_iyq5oab wrote
Reply to [D] In an optimal world, how would you wish variance between runs based on different random seeds was reported in papers? by optimized-adam
I think it's fine the way it is now. ML models are very statistical physics-y. Variation from run to run is extremely low.
Phoneaccount25732 t1_iyk2uw5 wrote
There's some work on forgery detection, but real artists are mostly angry and skeptical about it.
Phoneaccount25732 t1_iybrlqw wrote
Reply to comment by ThisIsMyStonerAcount in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
It's easier to break down the subjective experience of a fish into mechanical subcomponents than it is to do so for higher intelligences.
Phoneaccount25732 t1_iybqxxl wrote
Reply to comment by LazyHater in [R] Category Theory for AI,AI for Category theory by FresckleFart19
Does category theory continue to be as insanely mind-blowing once you actually understand some of it?
Phoneaccount25732 t1_iybm23q wrote
Reply to comment by ThisIsMyStonerAcount in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
To operationalize the question a bit and hopefully make it more interesting, let's consider whether 2032 will have AI models that are equally as conscious as fish, in whatever sense fish might be said to have consciousness.
Phoneaccount25732 t1_iybghm2 wrote
Reply to [D] Other than data what are the common problems holding back machine learning/artificial intelligence by BadKarma-18
Interpretability, causal ML, cost of training, out-of-distribution detection.
Also inherits every other problem that can plague statistical modeling.
Phoneaccount25732 t1_iwzkk6d wrote
Reply to comment by step21 in [D] David Ha/@hardmaru of Stability AI is liking all of Elon Musk's tweets by datasciencepro
Musk is too popular for liking him to provide high quality diagnostic information about a person.
Phoneaccount25732 t1_ivydmgs wrote
Reply to comment by maybelator in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
I want more comments like this.
Phoneaccount25732 t1_ivmiut1 wrote
Reply to comment by new_name_who_dis_ in [D] Academia: The highest funded plagiarist is also an AI ethicist by [deleted]
On the other hand, bioethics is filled with people who understand philosophy but not the subject material, and has a very counterproductive do-nothing bias.
Phoneaccount25732 t1_iuxaau0 wrote
Method of adjoints.
Phoneaccount25732 t1_iu0g36d wrote
Sensor fusion
Phoneaccount25732 t1_j90eyfv wrote
Reply to comment by currentscurrents in [D] Formalising information flow in NN by bjergerk1ng
This is my preferred interpretation of ResNets too.