CatalyzeX_code_bot t1_iz8hqyz wrote
Reply to [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years, what would they be? by versaceblues
Found relevant code at https://github.com/tensorflow/tensor2tensor + all code implementations here
--
To opt out from receiving code links, DM me
CatalyzeX_code_bot t1_ixfa9mk wrote
Reply to [R] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models - Massachusetts Institute of Technology and NVIDIA, Guangxuan Xiao et al - Enables INT8 for LLMs bigger than 100B parameters, including OPT-175B, BLOOM-176B, and GLM-130B. by Singularian2501
Found relevant code at https://github.com/mit-han-lab/smoothquant + all code implementations here
CatalyzeX_code_bot t1_iwwvzlz wrote
Reply to [D] My embarrassing trouble with inverting a GAN generator. Do GAN questions still get answered? ;-) by _Ruffy_
Found relevant code at https://github.com/zhoubolei/awesome-generative-modeling + all code implementations here
CatalyzeX_code_bot t1_iwnbh16 wrote
Reply to [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai, Pablo Villalobos et al - The trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
Found relevant code at https://github.com/YeWR/EfficientZero + all code implementations here
CatalyzeX_code_bot t1_iw35kb3 wrote
Found relevant code at https://github.com/BerenMillidge/PredictiveCodingBackprop + all code implementations here
CatalyzeX_code_bot t1_iviwezb wrote
Reply to [P] Implementation of MagicMix from ByteDance researchers - New way to interpolate concepts with much more natural, geometric coherency (implemented with Stable Diffusion!) by cloneofsimo
Found relevant code at https://magicmix.github.io + all code implementations here
CatalyzeX_code_bot t1_ivicx84 wrote
Reply to [Project] Rebel Poker AI by Character_Bluejay601
Found relevant code at https://github.com/facebookresearch/rebel + all code implementations here
CatalyzeX_code_bot t1_iui8kht wrote
Found relevant code at https://diffusion-planning.github.io/ + all code implementations here
CatalyzeX_code_bot t1_itz7729 wrote
Reply to [R] WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models by YonatanBitton
Found relevant code at https://winogavil.github.io/ + all code implementations here
CatalyzeX_code_bot t1_itm5iyo wrote
Reply to [D] Neural Avatar Community by trikortreat123
Found relevant code at https://ustc3dv.github.io/NeRFBlendShape/ + all code implementations here
CatalyzeX_code_bot t1_itftfdo wrote
Reply to [P] Stochastic Differentiable Programming: Unbiased Automatic Differentiation for Discrete Stochastic Programs (such as particle filters, agent-based models, and more!) by ChrisRackauckas
Found relevant code at https://github.com/gaurav-arya/StochasticAD.jl + all code implementations here
CatalyzeX_code_bot t1_itfr6c6 wrote
Reply to [R] Scaling Instruction-Finetuned Language Models - Flan-PaLM - Google 2022 - 75.2% on five-shot MMLU / Forecasters expected this SOTA would need until 2024! - Public checkpoints! by Singularian2501
Found relevant code at https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints + all code implementations here
CatalyzeX_code_bot t1_isrl0bs wrote
Reply to [P] Why I quit my lucrative job at Google to start Vectara? (neural search as a service for developers everywhere). by awadallah1
Found relevant code at https://github.com/google/retrieval-qa-eval + all code implementations here
--
Found relevant code at https://github.com/tensorflow/tensor2tensor + all code implementations here
CatalyzeX_code_bot t1_isfuwjp wrote
Reply to [R] UL2: Unifying Language Learning Paradigms - Google Research 2022 - 20B parameters outperforming 175B GPT-3 and tripling the performance of T5-XXL on one-shot summarization. Public checkpoints! by Singularian2501
Found relevant code at https://github.com/google-research/google-research/tree/master/ul2 + all code implementations here
CatalyzeX_code_bot t1_iruydyy wrote
Reply to [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
Found relevant code at https://github.com/lab-ml/nn + all code implementations here
--
Found relevant code at https://github.com/MadryLab/implementation-matters + all code implementations here
--
Found relevant code at https://github.com/astoycos/Mini_Project2 + all code implementations here
CatalyzeX_code_bot t1_iqn5ydz wrote
Found relevant code at https://github.com/facebookresearch/Detectron + all code implementations here
CatalyzeX_code_bot t1_izcb6mx wrote
Reply to [P] Using LoRA to efficiently fine-tune diffusion models. Output model is less than 4MB, twice as fast to train, with better performance. (Again, with Stable Diffusion) by cloneofsimo
Found relevant code at https://github.com/ylsung/VL_adapter + all code implementations here
--
Found relevant code at https://github.com/mbinkowski/MMD-GAN + all code implementations here