keepthepace
keepthepace t1_je0u1de wrote
Reply to comment by lqstuart in [D] FOMO on the rapid pace of LLMs by 00001746
The US is not the only country in the world; maybe they won't be the first one to do this.
keepthepace t1_jdzvxl2 wrote
Reply to [D] FOMO on the rapid pace of LLMs by 00001746
Maybe I am stubborn, but I haven't totally digested the "bitter lesson" and I am not sure I agree with its inevitability. Transformers did not appear magically out of nowhere: they were a solution to RNNs' vanishing gradient problem. AlphaGo had to be wrapped in a Monte Carlo tree search to do anything good, and it is hard not to feel that LLMs' grounding issues may be a problem to solve with architecture changes rather than scale.
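The vanishing gradient point can be shown with a minimal sketch (my own illustration, not from the thread): in a linear RNN the gradient with respect to an early input gets multiplied by the recurrent weight once per timestep, so with a weight below 1 it decays exponentially with sequence length.

```python
# Minimal illustration of vanishing gradients in a linear RNN:
# backprop through time multiplies the gradient by the recurrent
# weight w once per step, so |w| < 1 shrinks it exponentially.
def gradient_through_time(w: float, steps: int) -> float:
    grad = 1.0
    for _ in range(steps):
        grad *= w  # chain rule through one recurrent step
    return grad

for steps in (1, 10, 100):
    print(steps, gradient_through_time(0.9, steps))
# by 100 steps the gradient has all but vanished
```

Transformers sidestep this by letting every position attend to every other position directly, instead of routing the signal through a long recurrent chain.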
keepthepace t1_jdzvbww wrote
Reply to comment by ObiWanCanShowMe in [D] FOMO on the rapid pace of LLMs by 00001746
> You won't get replaced by AI, you will get replaced by someone who knows how to use the AI.
I wonder why this is supposed to be any comfort. It is just a rephrasing of "your skillset is obsolete; the profession that used to pay you a salary is now worth a 15 USD/month subscription service".
The person "who knows how to use AI" is not necessarily a skilled AI specialist. It could simply be your typical client.
The current AI wave should be the trigger to reconsider the place we give to work in our lives. Many jobs are being automated, and no, this is not like the previous industrialization waves.
Workers used to be replaced by expensive machines. It took time to install them and to prepare the infrastructure for the transition, and it required other workers to do maintenance.
This wave replaces people instantly with an online service that requires zero infrastructure (for the user), costs a fraction of a wage, and gives almost instant results.
Yes, progress that suppresses jobs tends to create new jobs as well, but there is no mechanism that guarantees any symmetry between these two quantities. When you think about the AI wave, it is clear that jobs will be removed faster than they are created, and that the skillsets from the removed jobs do not translate well to the hypothetical jobs created.
keepthepace t1_jdzp4ge wrote
Reply to comment by rfxap in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Could some parts of the dataset have been copied into the LeetCode problems, or is there a guarantee that these problems are 100% novel?
keepthepace t1_jdzm2ic wrote
Reply to comment by rfxap in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Publishing articles with peer review is not something that should be avoided, even by Microsoft AI, sorry, "Open"AI.
keepthepace t1_jcijjq2 wrote
Reply to comment by Hydreigon92 in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
> fairness metrics
Do you produce some that are differentiable? It could be interesting to add them to a loss function.
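To make the idea concrete, here is a hedged sketch (my own illustration, not an established metric implementation) of a differentiable "demographic parity" penalty: it penalizes the squared gap between the mean predicted score of two groups, and since it is a smooth function of the scores it has a gradient and could be added to a training loss.

```python
import numpy as np

# Hypothetical differentiable fairness penalty: squared gap between the
# mean predicted score of group 0 and group 1. Smooth in the scores, so
# it could be added to a loss as total = task_loss + lam * penalty.
def demographic_parity_penalty(scores: np.ndarray, group: np.ndarray) -> float:
    gap = scores[group == 0].mean() - scores[group == 1].mean()
    return float(gap ** 2)

scores = np.array([0.9, 0.8, 0.2, 0.1])  # toy predicted scores
group = np.array([0, 0, 1, 1])           # toy group labels
print(demographic_parity_penalty(scores, group))  # (0.85 - 0.15)^2 = 0.49
```

In a real training loop the same expression would be written in the framework's tensor ops so autodiff can propagate through it.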
keepthepace t1_jcij20g wrote
> Do they actually add useful insight, or do they serve more as a PR thing?
The ones we hear about most are pure PR.
> Is there anything you think AI ethics as a field can do to be more useful and to get more change?
Yes: work on AI alignment. It is a broader problem than just ethics; it is also about having models generate truthful and grounded answers. I am extremely doubtful of the current trend of using RLHF for this; we need other approaches. But this is real ML development work, not just PR production, and it would be an extremely useful way to steer ethical-AI efforts.
keepthepace t1_jbe0b4a wrote
Reply to [D] I'm a dentist and during my remaining lifetime I would like to take part in laying groundwork for future autonomic robots powered by AI that are capable of performing dental procedures. What technologies should I start to learn? by Armauer
Medical robotics is a field that is booming now and that is hungry for tech-inclined licensed practitioners to join them. Just go advertise yourself to startups in the field, I am sure you will get interesting proposals.
keepthepace t1_janzb1v wrote
Reply to [N] EleutherAI has formed a non-profit by StellaAthena
Congratulations! The world desperately needs what you are doing! Was thinking about joining a while ago but got distracted by image-oriented research.
> As access to LLMs has increased, our research has shifted to focus more on interpretability, alignment, ethics, and evaluation of AIs.
Does this mean EleutherAI is not working anymore on big language models?
keepthepace t1_jalu94j wrote
Reply to comment by red75prime in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
The key thing we need is agency. The current chatbots lack the long-term coherency we expect from an agent, because they do not plan towards specific goals, so they just jump from one thing to another.
keepthepace t1_j9haufb wrote
It looks gorgeous!
keepthepace t1_j8ycv83 wrote
Reply to comment by ckperry in [N] Google is increasing the price of every Colab Pro tier by 10X! Pro is 95 Euro and Pro+ is 433 Euro per month! Without notifying users! by FreePenalties
Ah yes, several EU countries started sending warning shots about it. Makes sense. Good luck with the production fix on a Friday evening!
keepthepace t1_j7jgm75 wrote
Reply to comment by telebierro in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Google has been the biggest team player when it comes to publishing advances in AI; OpenAI has been the worst of the big players.
Most of the techs that made ChatGPT possible were published by Google. Worse: OpenAI does not publish the 1% of things that make ChatGPT unique (though we know enough to have a pretty good idea of what they did).
I'd be whiny in their place as well. The GPT family is not super innovative: they just ran away with an architecture mostly made by Google (Transformers/BERT), stripped it of everything that prevented huge parallelization (which many suspect included things that would have allowed it to stay "grounded" in reality), and slapped more compute on it.
keepthepace t1_j3o8avv wrote
Reply to comment by GoofAckYoorsElf in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
I am willing to bet that 99% of the code is overprotected and no one at OpenAI would spend valuable time looking at it.
These protections mostly exist to justify some bullshit jobs within the company.
keepthepace t1_j3o7vy8 wrote
Reply to comment by ksblur in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
I was going to argue that employees will be able to bullshit their automated manager easily but well, it is not like humans are much better at handling it.
keepthepace t1_j3fzg3p wrote
Reply to comment by LesleyFair in [N] 7 Predictions From The State of AI Report For 2023 ⭕ by LesleyFair
How about Microsoft's 2019 1B investment in OpenAI then?
keepthepace t1_j3e6gi8 wrote
Agreed on 1 and 2.
Not sure about 3: NVidia is dominant (maybe to the point of risking monopoly litigation?) by providing to everyone. Making an "in" and an "out" group carries little benefit and would push the out-group towards competitors.
4: I fail to see an "alignment organisation" that would provide 100M of value, either in tech or in reputation. It may emerge this year, but I doubt there is one yet. Most valuable insights come from established AI shops.
5: I doubt it. Artists have been disregarded by politicians forever. Copyright lobbyists have more power, and they have already ruled out copyright for generated images.
6: OpenAI is not an open-source company. And this has already happened: Microsoft poured 1 billion into OpenAI.
7: gosh, I hope! Here is my own bold prediction: we will discover that multitask models require far fewer parameters than language models for similar performance, and GATO's successors will outperform similarly sized LLMs while simultaneously doing more tasks.
keepthepace t1_j34k8y6 wrote
Reply to Image matching within database? [P] by Clarkmilo
You probably want something like a perceptual hash, which finds invariants in an image and has an efficient retrieval algorithm for a huge database.
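A minimal sketch of the idea (my own toy illustration; a real system would use a library such as imagehash): downscale the image to an 8x8 grid by block averaging, then set each bit to 1 if its block is brighter than the mean. Near-duplicate images yield hashes with a small Hamming distance, which is what makes efficient lookup in a large database possible.

```python
import numpy as np

# Toy "average hash": 64-bit perceptual fingerprint of a grayscale image.
def average_hash(img: np.ndarray, size: int = 8) -> np.ndarray:
    h, w = img.shape
    assert h % size == 0 and w % size == 0, "sketch assumes divisible dims"
    # downscale by averaging (size x size) blocks
    blocks = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    # one bit per block: brighter than the global mean or not
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int((a != b).sum())

img = np.random.rand(64, 64)
near_duplicate = img + 0.01 * np.random.rand(64, 64)  # slightly perturbed copy
print(hamming(average_hash(img), average_hash(near_duplicate)))  # typically small
```

For retrieval at scale, the hashes can be indexed so that only candidates within a small Hamming radius are compared, instead of scanning every image.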
keepthepace t1_j2m2xrp wrote
Reply to comment by currentscurrents in [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
Yes, they never reached the level of very complex algorithms, but no one ever tried to throw a lot of compute at them either, before we created gigantic language models that mostly do these tasks with orders of magnitude more parameters.
I do suspect that if we threw a bit more compute at a DNC we would get very interesting results, but that's little more than a hunch.
keepthepace t1_j2k0pvh wrote
Reply to [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
I think you may be interested in neural Turing machines and their successor, the differentiable neural computer (DNC). They basically force a network to accomplish a task through a Turing-machine-like external memory.
https://en.m.wikipedia.org/wiki/Differentiable_neural_computer
keepthepace t1_iw4u9kd wrote
Reply to comment by ZestyData in [D] Current Job Market in ML by diffusion-xgb
Meta feels like they are preparing to pivot away from that "metaverse" plan. Feels like the 90s called and asked to get their impractical VR worlds back.
keepthepace t1_je4s97b wrote
Reply to [D] Do model weights have the same license as the modem architecture? by murphwalker
Honestly, at this point I am not sure weights can be copyrighted: they have no human "author". It is a total gray zone. Courts will rule in a few years, and the habits taken now will become the jurisprudence.