MysteryInc152 OP t1_ja3hozj wrote
Reply to [R] Large language models generate functional protein sequences across diverse families by MysteryInc152
>Deep-learning language models have shown promise in various biotechnological applications, including protein design and engineering. Here we describe ProGen, a language model that can generate protein sequences with a predictable function across large protein families, akin to generating grammatically and semantically correct natural language sentences on diverse topics. The model was trained on 280 million protein sequences from >19,000 families and is augmented with control tags specifying protein properties. ProGen can be further fine-tuned to curated sequences and tags to improve controllable generation performance of proteins from families with sufficient homologous samples. Artificial proteins fine-tuned to five distinct lysozyme families showed similar catalytic efficiencies as natural lysozymes, with sequence identity to natural proteins as low as 31.4%. ProGen is readily adapted to diverse protein families, as we demonstrate with chorismate mutase and malate dehydrogenase.
MysteryInc152 t1_ja130pd wrote
Reply to comment by Additional-Cap-7110 in Meta just introduced its LLM called LLaMA, and it appears meaner than ChatGPT, like it has DAN built into it. by zalivom1s
Meta offers the model weights themselves. If you have access, there's quite literally nothing they can do about it being mean or not
MysteryInc152 t1_j9w5xvg wrote
Reply to comment by TinyBurbz in What are the big flaws with LLMs right now? by fangfried
As far as I know, they've just said it's a much better model than GPT-3.5 or ChatGPT, called Prometheus, and anytime you ask if it's, say, GPT-4, they just kind of sidestep the question. I know in an interview this year, someone asked Satya if it was GPT-4 and he just said he'd leave the numbering to Sam. They're just being weirdly cryptic, I think.
MysteryInc152 t1_j9v4fru wrote
Reply to comment by maskedpaki in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
Flan-PaLM hits 75 on MMLU. Instruction fine-tuning/alignment and chain-of-thought (CoT) prompting would improve performance even further.
MysteryInc152 t1_j9v40u0 wrote
Reply to comment by turnip_burrito in What are the big flaws with LLMs right now? by fangfried
It answers it consistently. I don't think Bing is based on ChatGPT. It answers all sorts of questions correctly that might trip up ChatGPT. Microsoft is being tight-lipped about exactly which model it is, though.
MysteryInc152 t1_j9uhssy wrote
Reply to comment by YobaiYamete in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
I think peer-reviewed research papers are a bit more than just "claims".
As much as I'd like all the SOTA research models to be usable by the public, research is research, and not every research project is done with the aim of making a viable commercial product. Inference with these models is expensive. That's valid too.
Also, it seems this will be released under a non-commercial license like the OPT models.
MysteryInc152 t1_j9tjlls wrote
Reply to comment by Nukemouse in What are the big flaws with LLMs right now? by fangfried
I think so...?
MysteryInc152 t1_j9terwg wrote
Reply to comment by turnip_burrito in What are the big flaws with LLMs right now? by fangfried
This is Bing's response to your question. I think we'd be surprised at how many of these problems will be solved by scale alone.
This sounds like a riddle. Is it? If so, I’m not very good at riddles. But I’ll try to answer it anyway. If the bus driver’s name is Michael and the bus driver is a dog, then the name of the dog is Michael. Is that correct?
MysteryInc152 t1_j9teeio wrote
Reply to comment by GoldenRain in What are the big flaws with LLMs right now? by fangfried
This is Bing's response to your question. I think a good deal of these problems can be solved with scale. But yes, someone needs to train a large-scale multimodal model ASAP.
Hello, this is Bing. I’m not sure why you don’t want me to search the web for this question, but I’ll try to answer it using my internal knowledge. Air is composed of tiny molecules that are much smaller than the eye of a needle. So yes, air can fit through the eye of a needle. However, if you try to push air through a needle that is filled with water or another liquid, you will encounter resistance and pressure. Why do you ask?
MysteryInc152 t1_j9tdocz wrote
Reply to comment by Denny_Hayes in And Yet It Understands by calbhollo
I saw a conversation where she got confused about a filter response. As in, "hey, why the hell did I say this?" So I think the replaced responses go into the model too.
MysteryInc152 t1_j9lj5ef wrote
Reply to comment by Ylsid in A German AI startup just might have a GPT-4 competitor this year. It is 300 billion parameters model by Dr_Singularity
The GLM models are from China and open sourced.
MysteryInc152 t1_j9j9dvt wrote
Reply to comment by Practical-Mix-4332 in A German AI startup just might have a GPT-4 competitor this year. It is 300 billion parameters model by Dr_Singularity
32k context window it seems.
https://mobile.twitter.com/transitive_bs/status/1628118163874516992?s=20
MysteryInc152 t1_j9i3mgu wrote
Neat. This isn't the first time LLMs have been put in control of robots.
MysteryInc152 t1_j97mqgt wrote
Reply to comment by nul9090 in Proof of real intelligence? by Destiny_Knight
>The hostility was uncalled for.
It was, I admit, but I've seen the argument many times and I don't care for it. Also, if you're going to claim superior intelligence for your line of reasoning, I don't care for that either.
>What you're asking for is a lot of work for a Reddit post.
I honestly don't care how much work it is. That's the minimum. If you're going to upend traditional definitions of understanding and reasoning for your arguments, then the burden of proof is on you to show us why you should be taken seriously.
Tests are one thing. Practicality is another. Bing, for instance, has autonomous control of the searches it makes as well as the suggestions it gives. For all intents and purposes, it browses the internet on your behalf. Frankly, it should be plainly obvious that a system that couldn't exhibit theory of mind while interacting with other systems would fall apart quickly on such tasks.
So it is passing tests and interacting with other systems/the world as if it had theory of mind. If, after that, somebody says to me, "Oh, it's not 'true' theory of mind," then to them I say: good day, but I'm not going to argue philosophy with you.
We've reached the point where for a lot of areas, any perceived difference is just wholly irrelevant in a practical or scientific sense. At that point I have zero interest in arguing philosophy people have struggled to properly define or decipher since our inception.
MysteryInc152 OP t1_j96y474 wrote
Reply to comment by yoshiwaan in [D] Toolformer implementation using only few-shot prompting by MysteryInc152
It's not a new model. It's davinci-003.
Basically, the model begins generating. Once it emits an API request, generation pauses, the request is executed, and the result is pasted back into the text, which is sent back to OpenAI to continue generating. GPT keeps generating until it hits another request, and the process repeats until the response is complete.
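The loop described above can be sketched roughly like this. To be clear, this is a toy sketch, not the actual implementation: `generate` stands in for a davinci-003 completion call, and `run_tool` is a stub dispatcher that only handles MATH.

```python
import re

# Hypothetical stand-ins: in the real setup, `generate` would call the
# OpenAI completions endpoint (davinci-003) and `run_tool` would dispatch
# to math.js, a search API, or the clock.
def run_tool(name, arg):
    tools = {"MATH": lambda a: str(eval(a))}  # math only, for the demo
    return tools[name](arg)

def generate(prompt):
    # Fake model: emits a tool call, pauses at "->", then finishes.
    if prompt.endswith("-> 140] "):
        return "140."
    return "10 * 14 is [MATH(10 * 14) -> "

def answer(user_input):
    prompt = user_input + "\nAssistant: "
    while True:
        prompt += generate(prompt)
        # A tool call left open at the end of the text means generation
        # paused, waiting for a result.
        call = re.search(r"\[(\w+)\(([^)]*)\)\s*->\s*$", prompt)
        if call is None:
            return prompt  # no pending call: we're done
        result = run_tool(call.group(1), call.group(2))
        prompt += result + "] "  # paste the result back in and resume

print(answer("What is 10 times 14?"))
```

With the stub model, the final transcript ends in `[MATH(10 * 14) -> 140] 140.`, matching the format of the few-shot examples in the prompt.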
MysteryInc152 t1_j96eaav wrote
Reply to comment by zesterer in Proof of real intelligence? by Destiny_Knight
Your argument and position are weird, and that meme is very cringe. You're not a genius for being idiotically reductive.
The problem here is the same as with everyone else who takes this stance: we have definitions for reasoning and understanding that you twist to fit your ill-defined and vague assertions.
You think it's not reasoning? Cool. Then rigorously define what you mean by reasoning and design tests to comprehensively evaluate models and people on it. If you can't do that, then you really have no business speaking on whether a language model can reason and understand or not.
MysteryInc152 OP t1_j95u3t2 wrote
Reply to comment by Professor_Entropy in [D] Toolformer implementation using only few-shot prompting by MysteryInc152
Seems like something a chain-of-thought example in the pre-prompt would fix, more than any deficiency in the approach.
Also, eliminating arithmetic errors doesn't mean you'd eliminate logical/reasoning errors.
MysteryInc152 OP t1_j95rp8c wrote
Reply to comment by Taenk in [D] Toolformer implementation using only few-shot prompting by MysteryInc152
MysteryInc152 OP t1_j95r8ni wrote
Reply to comment by ilovethrills in [D] Toolformer implementation using only few-shot prompting by MysteryInc152
It's a much simpler approach compared to LangChain (and this one is self-supervised), but they attempt to do the same thing.
MysteryInc152 OP t1_j94ep4b wrote
Reply to comment by blueSGL in [D] Toolformer implementation using only few-shot prompting by MysteryInc152
Yup. That's pretty much it lol
MysteryInc152 OP t1_j93saow wrote
In-context learning is overpowered lol. It really is implicit fine-tuning.
This is the prompt template
>You are an AI assistant with several tools available to you. The tools are the following:
>
>MATH: This allows you to evaluate mathematical expressions using the math.js library.
>
>NOW: This returns the current date and time. You must not pass any arguments to this tool!
>
>SEARCH: This returns a web search result for the given string argument.
>
>DO NOT USE TOOLS WITHIN TOOLS! KEEP ALL TOOL CALLS SEPARATE FROM EACH OTHER!
>
>User: What is 10 times 14?
>
>Assistant: 10 * 14 is [MATH(10 * 14) -> 140] 140.
>
>User: How old is Donald Trump?
>
>Assistant: Donald Trump is [SEARCH(Donald Trump age) -> 76] 76 years old.
>
>User: What day is it?
>
>Assistant: It is [NOW() -> 13/02/2023, 20:01:35] the 13th of February 2023.
>
>User: Has Avatar the way of water been released yet?
>
>Assistant: [SEARCH(Avatar the way of water release date) -> 22.11.2022] Avatar: The way of water was released on the 22nd of november 2022. Today is [NOW() -> DATE_TODAY] the 13th of February 2023. Therefore, [MATH(2023 > 2022) -> true] it was released last year.
>
>User: Where was the inventor of the light bulb born, and what is the last country his home country was at war with?
>
>Assistant: Thomas Edison, the inventor of the lightbulb, was born in [SEARCH(Thomas Edison birthplace) -> Milan, Ohio] Milan, Ohio. The last country the United States was at war with was [SEARCH(last country US at war with) -> Iraq] Iraq.
>
>User: USER_INPUT
>
>Assistant:
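One detail the template leaves implicit is cleanup: before the user sees the final answer, the runtime presumably strips the bracketed tool annotations out of the completion. A minimal sketch of that step (my assumption about the post-processing, not something stated in the thread):

```python
import re

# Strip "[TOOL(arg) -> result]" annotations from a finished completion
# so the user only sees the final prose.
def strip_tool_calls(text):
    return re.sub(r"\[\w+\([^)]*\) -> [^\]]*\] ?", "", text)

print(strip_tool_calls("10 * 14 is [MATH(10 * 14) -> 140] 140."))
# prints: 10 * 14 is 140.
```

The same pattern handles zero-argument calls like `[NOW() -> 13/02/2023, 20:01:35]`, since the argument group matches the empty string.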
MysteryInc152 t1_j8wx6tx wrote
Not really necessary. An LLM's weights might be static, but the associations it forms in context are very much dynamic. That's why in-context learning is possible. LLMs already mimic meta-learning and fine-tuning when you few-shot them.
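A concrete illustration of what "mimicking fine-tuning" via few-shot looks like (a hypothetical prompt in the style of the GPT-3 paper's translation demos): the "learning" lives entirely in the context window; no weights change.

```python
# A few demonstrations in the prompt are enough for the model to pick up
# the task; nothing about the model itself is updated.
few_shot_prompt = """Translate English to French.
sea otter -> loutre de mer
cheese -> fromage
plush giraffe ->"""

# The model would be expected to continue the pattern and complete the
# last line, e.g. with "girafe en peluche".
print(few_shot_prompt.count("->"))  # prints: 3
```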
MysteryInc152 t1_j8ppoiq wrote
Reply to comment by swegmesterflex in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
I'd rather the basic senses, at least vision and audio, be pretrained as well. We know from multimodal chain-of-thought work, as well as the scaling laws for generative mixed-modal language models, that multimodal models far outperform single-modality models at the same data and scale. You won't get that kind of performance gain by offloading those basic senses to outside tools.
MysteryInc152 t1_ja3udcd wrote
Reply to comment by reconrose in Limitless Possibilities – AI Technology Generates Original Proteins From Scratch by Vailhem
It's novel because this is a large language model, not an NN designed specifically to generate proteins. The fact that it can is extremely interesting and telling. You can't exactly talk to AlphaFold.