pier4r t1_jead39m wrote
Reply to [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
As a semi-layman, while I was amazed by the progress in ML, I was skeptical of ever-increasing models needing more and more parameters to do well. It felt like "more parameters improve things, and every other factor follows".
I asked myself whether there was any effort toward being more efficient and shrinking things, and recently I read about LLaMA and realized that this direction is now being pursued as well.
pier4r t1_jdqs2ar wrote
Reply to comment by Correct_Influence450 in Microsoft reportedly orders AI chatbot rivals to stop using Bing’s search data by OutlandishnessOk2452
GPT-3.5 and 4: "we are trained on vast data collected from the internet, written by users over a long time, plus books, articles on arXiv, and all the other things that took quite some effort"
Also GPT: "you cannot copy from us!!"
pier4r t1_jd39md4 wrote
Reply to comment by currentscurrents in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
> Llamma.cpp uses the neural engine
I tried to find confirmation for this, but couldn't. I saw some ports, but they weren't from the LLaMA team. Do you have any source?
pier4r t1_jd0pf1x wrote
Reply to comment by wojtek15 in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
> 128Gb of Uniform RAM which can be used by CPU, GPU or Neural Engine.
But it doesn't have the same bandwidth as the VRAM on a dedicated GPU card, IIRC.
Otherwise every integrated GPU would be better simply due to the available RAM.
The Neural Engine on the M1 and M2 is usable, IIRC, only through Apple libraries, which may not be used by notable models yet.
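To make the "only through Apple libraries" point concrete, here is a minimal sketch (the model path is hypothetical) of how one reaches the Neural Engine via Core ML, the only public route to it:

```swift
import CoreML
import Foundation

// Hypothetical path to an already compiled Core ML model (.mlmodelc).
let modelURL = URL(fileURLWithPath: "/path/to/Model.mlmodelc")

let config = MLModelConfiguration()
// .all lets Core ML schedule layers on the CPU, GPU, or Neural Engine;
// there is no public API that targets the Neural Engine directly.
config.computeUnits = .all

do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Loaded:", model.modelDescription)
} catch {
    print("Failed to load model:", error)
}
```

Anything outside Core ML (e.g. a plain C++ port like llama.cpp) can use the CPU and, via Metal, the GPU, but not the Neural Engine.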
pier4r OP t1_j97cwwy wrote
Reply to comment by zerepgn in Why Nikola Tesla is So Famous (and Westinghouse is not) by pier4r
The Nobelitis part was referring to Tesla's later period, where few patents are involved. Have you read the article?
One example:
> Tesla claimed that not only could he send electric power wirelessly for 50 million or 100 million miles at “rates of one hundred and ten thousand horsepower.” He also said that he had made a radio machine that “could easily kill, in an instant, three hundred thousand persons.” Even stranger Tesla swore that he received an unusual communication that he decided must have been from Martians. (Although he also added the thought that there could also be aliens on Venus or the moon as, “a frozen planet, such as our moon is supposed to be, intelligent beings may still dwell, in its interior, if not on its surface.”[58])
about "things that work"
> As the years passed, Tesla didn’t manage to demonstrate any significant communication nor transmission of power from his tower. Instead, on January 19, 1903, Marconi was the one who sent the first two-way transatlantic wireless signal from Roosevelt in America to King Edward of England and back, and Marconi appeared to everyone to be the winner of the wireless race.[62] Tesla was undeterred, but Morgan was done with Tesla and his promises and cut off funding. By the next year, Tesla wrote J. P. Morgan in desperation: “Since a year, Mr. Morgan, there has hardly been a night when my pillow was not bathed in tears.”[63] By 1906, he had to fire all his employees at his wireless tower, Wardenclyffe, where it remained empty for many years
Thus I still have the feeling you didn't bother to read the article.
pier4r OP t1_j8wv8up wrote
Reply to comment by darklining in Why Nikola Tesla is So Famous (and Westinghouse is not) by pier4r
I'm not the author of the article
pier4r OP t1_j8w545v wrote
Reply to comment by rastafunion in Why Nikola Tesla is So Famous (and Westinghouse is not) by pier4r
Is it about air brakes? Maybe you can contact the author of the article, because she would love to put things in perspective and revise her take (of course, she needs reliable sources and documents).
pier4r OP t1_j8vg1ty wrote
Reply to comment by zerepgn in Why Nikola Tesla is So Famous (and Westinghouse is not) by pier4r
> Seeing firsthand how difficult it is for things to get patented, I would not refer to Tesla’s later years as having Nobelitis
I'm not sure how the two things disprove each other. Winning the Nobel prize isn't easy either (I'd say it's harder than obtaining patents).
One can have a good career at first, achieving things that are pretty hard, and then, due to that success, start going in the Nobelitis direction.
From your comment I get the feeling that you didn't read the article (or watch the video), nor check the Nobelitis part. Could that be?
pier4r OP t1_j8vff33 wrote
Reply to comment by KnudsonRegime in Why Nikola Tesla is So Famous (and Westinghouse is not) by pier4r
I strongly believe that "being famous" refers to nowadays: everyone today knows Tesla but not the other guy.
If I read your comment correctly, you are saying that Westinghouse was famous before Tesla in his prime, and that Tesla became famous later, independently of the situation nowadays.
pier4r OP t1_j8rwi10 wrote
Why I find it interesting:
The internet in the last decade hyped Tesla a lot. I hadn't dug into his history, so I assumed he was someone unmatched, a polymath able to do everything.
The author is amazing: she went through a lot of primary sources, and I was appalled to discover that Tesla practically got Nobelitis after some very successful patents.
Further, Tesla was far from mathematical. Apparently he had great intuition, but he couldn't follow up his ideas with the proper mathematics. Last but not least, his ideas weren't decades ahead of everyone else's. Three-phase transmission was already implemented and perfected (not only patented) in Germany by a Polish-Russian engineer, while wireless communication was achieved by G. Marconi in a form practically identical to Tesla's patent.
Furthermore, there is also a video on this: https://www.youtube.com/watch?v=kSyGFEjoYOM
pier4r t1_j8jpww6 wrote
Agreed!
One should also keep in mind the following though: https://leebyron.com/4000/
pier4r t1_j30xli5 wrote
Really? It is a cartoon. "Get motivated by arbitrary fiction".
pier4r t1_iyzl5ta wrote
Reply to [D] Simple Questions Thread by AutoModerator
I'm not too deep into ML, but I read articles every now and then (especially about hyped models, GPT and co). I see that there is progress on some amazing things (like GPT-3.5), partly because the underlying NNs get bigger and bigger.
My question is: are there studies that check whether an NN could do more (be more precise, or whatever) given the same number of parameters? In other words, is it a race to make NNs as large as possible (given that they are structured appropriately), or is the "utility" per parameter also growing? I would like to know if there is literature about it.
It is a bit of an optimization question: "do more with the same HW", so to speak.
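For illustration only (the notation below is my addition, not something from the thread): work on "scaling laws" quantifies exactly this trade-off, modeling the loss as a function of the parameter count $N$ and the number of training tokens $D$, for example

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},$$

where $E$, $A$, $B$, $\alpha$, $\beta$ are fitted constants. In that picture the "utility per parameter" depends on how $N$ and $D$ are balanced against each other, not on model size alone.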
pier4r t1_isv2o6l wrote
Reply to comment by Obtuse_Mongoose in [WP] I (23M, human) asked my Orc gf (22F) to stop deadlifting my familymembers when she comes over for the holidays because it made me look small and weak. Now she and the rest of the family keep forcing me to run laps and lift whole roasted hogs when I visit for orc holidays. AITA? by Sidaige
In some subreddits there are "summarize bots". This could summarize Reddit easily.
pier4r t1_jegm5a1 wrote
Reply to comment by ZestyData in [News] Twitter algorithm now open source by John-The-Bomb-2
> world-class complex recommendation & ranking system
https://twitter.com/amasad/status/1641879976529248256?s=20
I mean, surely it is great, but my recommendations weren't exactly stellar in those years.