jobeta t1_iud53kt wrote
Kinda random, but if you think the size of the input really matters for the model to learn well (which frankly I’m not convinced is an issue), you could add one or two hidden layers of decreasing size on top of the large input, before concatenating the result with the smaller inputs. Something like the sketch below.
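To make that concrete, here’s a minimal PyTorch sketch of what I mean. All the layer sizes and input widths are made-up placeholders, not a recommendation:

```python
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    """Funnel the wide input through decreasing hidden layers,
    then concatenate with the narrow input (all sizes hypothetical)."""
    def __init__(self, wide_dim=1024, narrow_dim=16, out_dim=1):
        super().__init__()
        # Compress the large input before mixing it with the small one
        self.wide_branch = nn.Sequential(
            nn.Linear(wide_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 + narrow_dim, 32), nn.ReLU(),
            nn.Linear(32, out_dim),
        )

    def forward(self, wide_x, narrow_x):
        compressed = self.wide_branch(wide_x)
        return self.head(torch.cat([compressed, narrow_x], dim=-1))

model = TwoBranchNet()
y = model(torch.randn(8, 1024), torch.randn(8, 16))  # -> shape (8, 1)
```

The point is just that the wide branch gets squeezed down to roughly the same scale as the narrow input before the two meet.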
jobeta t1_iu8i1uw wrote
Never realized AWS was subsidizing killing retail. If we all switch to GCP we can have corner stores again?
jobeta t1_itootcr wrote
Reply to [D] Neural Avatar Community by trikortreat123
Why would avatars be so important?
jobeta t1_irtaxz2 wrote
Reply to comment by slashjasper in [OC] 3 years worth of toddler's color picking by slashjasper
I was just told that toddlers are also attracted to high-contrast faces. Maybe related, if true.
jobeta t1_ir8ul2v wrote
Reply to [P] AutoPlot: perform visual data analysis using only natural language by Swimming-Nebula-4012
Very nice!
jobeta t1_iw6zxwa wrote
Reply to comment by scitech_boom in Update an already trained neural network on new data by Thijs-vW
I don’t have much experience with that specific problem, but I’d hesitate to generalize like this to “models that hit the bottom” without knowing what the validation loss actually looked like and what the new data looks like. Chances are, the new data is not perfectly sampled from the first dataset, and its features have some idiosyncratic/new statistical properties. In that case, once you feed it to your pre-trained model, the loss is mechanically no longer at the minimum it supposedly reached in the first training run.
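One quick sanity check before fine-tuning: evaluate the pretrained model on both the old validation data and the new data, and see how far apart the losses are. Here’s a rough PyTorch sketch; the model and both batches are hypothetical stand-ins, not anything from the original thread:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def mean_loss(model, x, y):
    """Average MSE of the model on a fixed batch; no gradient updates."""
    model.eval()
    return F.mse_loss(model(x), y).item()

# Hypothetical stand-ins: a tiny "pretrained" model plus old/new batches.
model = nn.Linear(10, 1)
old_x, old_y = torch.randn(64, 10), torch.randn(64, 1)
new_x, new_y = torch.randn(64, 10) + 2.0, torch.randn(64, 1)  # shifted features

print("old-data loss:", mean_loss(model, old_x, old_y))
print("new-data loss:", mean_loss(model, new_x, new_y))
# A large gap suggests the new samples aren't drawn from the original
# distribution, so the old minimum no longer applies to them.
```

If the new-data loss is much higher, you’re effectively starting a new descent rather than continuing from the old minimum.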