Recent comments in /f/MachineLearning
anders987 t1_jeev840 wrote
Reply to [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
So now we have Llama, Alpaca, and Vicuna. Who will be the first to claim Guanaco?
FermiAnyon t1_jeetqrm wrote
Reply to comment by derpderp3200 in [D] Turns out, Othello-GPT does have a world model. by Desi___Gigachad
I did say "basically". The point is it's finite, and then we do lots of filtering and interpreting. Based on those inputs, we develop some kind of representation of the world, and how we do that is completely mysterious to me. But I heard someone mention that maybe we use our senses to kind of "fact check" each other to develop more accurate models of our surroundings.
I figure multi modal models are really going to be interesting...
tripple13 t1_jeetdjn wrote
Reply to comment by zbyte64 in [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
What? How do you read that from my text?
I think most of them probably care, just as much as I'd assume you and I do, about how the next number of years plays out for the benefit of man.
darkbluetwilight OP t1_jeesol1 wrote
Reply to comment by machineko in [D]Suggestions on keeping Llama index cost down by darkbluetwilight
Thanks, I'll give Cerebras another look. Yes, I'm using the APIs
wise0807 t1_jeer2sl wrote
Reply to [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
Who will be funding such an initiative?
zbyte64 t1_jeeps93 wrote
Reply to comment by tripple13 in [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
Let me guess, it's not about any of those things and they're really a lobbying mechanism to engage in protectionism for the industry.
utopiah t1_jeeokvj wrote
Reply to comment by Disastrous_Elk_6375 in [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
> the french will build it
Funny, knowing who is behind BLOOM.
TheTerrasque t1_jeenw8o wrote
Reply to comment by MrFlufypants in [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
Why do you mix openai into it?
ChuckSeven t1_jeenkvs wrote
Reply to comment by MysteryInc152 in [D] Can large language models be applied to language translation? by matthkamis
I'm happy to take evidence into account. Your results indicate that LLMs can be beneficial for translation. As I said previously, it looks interesting. But you claim, and I quote: "They're far superior to current sota", solely based on your personal and human comparison. This is an over-generalisation and not scientific. Like a flat-earther claiming the earth is flat because... just look at it, "evidence in plain sight".
spaceleviathan t1_jeemtn8 wrote
Reply to [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
Not to be too irreverent, but I like that we are exponentially creeping towards realizing Asher's Earth-Central.
Practical_Self3090 t1_jeelo0g wrote
Reply to [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
Doesn’t give me the option of selecting USA in the location area.
derpderp3200 t1_jeeldjl wrote
Reply to comment by FermiAnyon in [D] Turns out, Othello-GPT does have a world model. by Desi___Gigachad
There are a LOT more than five senses, and most of them are definitely not a small amount of data.
[deleted] t1_jeel5yx wrote
Reply to comment by ChuckSeven in [D] Turns out, Othello-GPT does have a world model. by Desi___Gigachad
[removed]
Liverpool67 t1_jeek8a2 wrote
Reply to comment by HerrMozart1 in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
Sure, here are some examples:
Q: Using this sequence, PEACH is to: HCAEP. Create sequence from 46251 is to:
A: I'm sorry, but I cannot create a sequence from the number 46251. Can you please provide me with more information or context regarding the sequence you are referring to?
Q: put the numbers in reverse order: 46251
A: I'm sorry, but I still don't understand what you mean by "put the numbers in reverse order." Can you please clarify your question?
Q: Who was awarded the FIFA World Player of the Year in 1995?
A: The FIFA World Player of the Year award was not awarded in 1995. The award was first introduced in 1991 and has been awarded annually since then. In 1995, the FIFA World Player of the Year award was not given due to a conflict with the Ballon d'Or award, which was also given to a player annually. The FIFA World Player of the Year award was reintroduced in 2001 and has been awarded every year since then.
Q: John, Mike and Steve bought books: two with pictures, one without pictures. John and Mike bought different books. What books did Steve have?
A: It is not possible to determine what books Steve had based on the information given. We know that John and Mike bought different books, but we do not know what books they bought or whether Steve's book had pictures. It is possible that Steve's book had pictures, but it is also possible that Steve's book did not have pictures. Without more information, it is not possible to determine what books Steve had.
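For reference, a minimal Python sketch of the string reversal the first two prompts ask for (an illustration added for clarity, not a quote from the thread or the model):

```python
# Illustrative only: the reversal the prompts ask for, mirroring PEACH -> HCAEP.
def reverse_sequence(s: str) -> str:
    """Return the characters of s in reverse order."""
    return s[::-1]

print(reverse_sequence("PEACH"))  # HCAEP
print(reverse_sequence("46251"))  # 15264
```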
[deleted] t1_jeeiupv wrote
Reply to comment by xander76 in [P] Imaginary programming: implementation-free TypeScript functions for GPT-powered web development by xander76
[removed]
Carrasco_Santo t1_jeeiu81 wrote
Reply to [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
Currently, due to so much vested interest, I am suspicious of "too good" initiatives led by a group of "very virtuous" people with the aim of "democratizing technology".
HerrMozart1 t1_jeehnnv wrote
Reply to comment by Liverpool67 in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
Can you give an example?
[deleted] t1_jeegmz6 wrote
[removed]
Cherubin0 t1_jeeg1qb wrote
Reply to comment by [deleted] in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
But this is what open source means.
Cherubin0 t1_jeefnfa wrote
Reply to comment by AlmightySnoo in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
You are confusing open source with source available. Open Source includes all the same freedoms as Free Software. Source Available has source code but comes with restrictions.
MysteryInc152 t1_jeecbeq wrote
Reply to comment by ChuckSeven in [D] Can large language models be applied to language translation? by matthkamis
I didn't downvote you, but it's probably because you're being obtuse. Anyway, whatever. If you don't want to take the evidence in plain sight, then don't. The baseline human comparisons are right there. Frankly, it's not my problem if you're so suspicious of the results and not bilingual enough to test it yourself. It's not really my business if you believe me or not.
ZetaReticullan t1_jeec06o wrote
Reply to comment by ZetaReticullan in [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
I see people can't handle the truth (surprise, surprise!). Downvote all you want - it doesn't affect me, and it doesn't change reality either.
nomadiclizard t1_jeebzqf wrote
Reply to [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
What's the point, when we know that if it discovers anything revolutionary related to AGI, it'll be locked down, the model will be closed for 'safety evaluation', and it will never see the light of day? Nothing 'open' in AI is actually open as soon as a whiff of AGI arrives.
EmbarrassedHelp t1_jeevaam wrote
Reply to comment by Carrasco_Santo in [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
LAION is a reputable group and they care about ensuring that OpenAI doesn't achieve its goal of banning open source AI.