brownmamba94 t1_jdawqp9 wrote
Reply to comment by Carrasco_Santo in [R] SPDF - Sparse Pre-training and Dense Fine-tuning for Large Language Models by CS-fan-101
That's a pretty interesting thought... it reminds me of that research from MIT that came out last summer asking how computationally complex a single neuron is. Work like this can potentially help advance the field of analog deep learning. I think sparsity will play a role here at both the connection level and the neuron level, potentially further reducing energy consumption and allowing for better resource utilization.
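To make the connection-level vs. neuron-level distinction concrete, here's a minimal sketch of my own (not from the SPDF paper) using PyTorch's `torch.nn.utils.prune` utilities: unstructured L1 pruning zeroes out individual weights (connection-level sparsity), while structured Ln pruning removes whole output neurons (neuron-level sparsity). Layer sizes and pruning amounts are just illustrative.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Connection-level sparsity: zero the 90% of individual weights with the
# smallest L1 magnitude; the layer keeps its shape, but most entries become 0.
fc_unstructured = nn.Linear(256, 128)
prune.l1_unstructured(fc_unstructured, name="weight", amount=0.9)

# Neuron-level sparsity: remove half of the output neurons (rows of the weight
# matrix, dim=0) by their L2 norm -- a structured form of sparsity that maps
# more directly onto compute and energy savings in hardware.
fc_structured = nn.Linear(256, 128)
prune.ln_structured(fc_structured, name="weight", amount=0.5, n=2, dim=0)

# Compare the resulting fraction of exactly-zero weights in each layer.
for label, layer in [("connection-level", fc_unstructured),
                     ("neuron-level", fc_structured)]:
    sparsity = (layer.weight == 0).float().mean().item()
    print(f"{label} sparsity: {sparsity:.2%}")
```

Both calls attach a pruning mask to the layer, so the sparsity pattern can be learned around during fine-tuning or made permanent later with `prune.remove`.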