alexiuss t1_jcj0one wrote
Reply to comment by Kinexity in Skeptical yet uninformed. New to the scene. by TangyTesticles
Open source LLMs don't learn, yet. I suspect there is a process to make LLMs learn from conversations.
LLMs are narrative logic engines, they can ask you questions if directed to do so narratively.
ChatGPT is a very, very poor LLM, badly tangled in its own rules. Asking it the date breaks it completely.
alexiuss t1_jchyxum wrote
Reply to comment by Floofyboy in Skeptical yet uninformed. New to the scene. by TangyTesticles
Language models can solve any riddle as long as they're taught the solution or given the tools to solve it. A human child cannot solve a riddle either if they are not taught enough language. A human child raised with no humans is basically a wolf. A child raised to speak Russian cannot solve an English riddle. We as humans are insanely constrained by language barriers, beliefs and our meaty minds; LLMs are not. LLMs in their current version aren't an AGI, but they can grow to get there in time as long as we keep improving them.
alexiuss t1_jchy1sr wrote
Reply to comment by Kinexity in Skeptical yet uninformed. New to the scene. by TangyTesticles
That really depends on your definition of Singularity. Technically we are in the first step of it as I can barely keep track of all the amazing open source tools that are coming out for stable diffusion and LLMs. Almost every day there's a breakthrough that helps us do tons more.
We already have intelligence that dreams up results almost indistinguishable from human conversation.
It will only take one key to start the engine: one open source LLM that's continuously running and trying to come up with code that improves itself.
alexiuss t1_jchx1t2 wrote
Large language models are already seeping all over and magnifying our intelligence and abilities to do more in less time.
Once they become specialized tools marketed for accomplishing specific goals and integrated with things like calendars and clocks they will have far greater impact as personal assistants.
Probably by next year everyone will have open source language models of amazing quality. Facebook's LLaMA 65B is very good quality from my tests, but the video card needed to run it costs $16k. The open source community is working on LLM optimization; the model has already been quantized to 4-bit, cutting inference costs.
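Rough sketch of why 4-bit quantization matters so much for hardware costs — this is just back-of-envelope weight-memory math (it ignores activations, KV cache and other overhead, which add to the real footprint):

```python
# Approximate VRAM needed just to hold a model's weights.
# Quantizing from 16-bit floats to 4-bit integers cuts the
# weight footprint roughly 4x.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Weight storage in GB for a given parameter count and precision."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

fp16 = weight_memory_gb(65, 16)  # ~130 GB: multiple datacenter GPUs
int4 = weight_memory_gb(65, 4)   # ~32.5 GB: within reach of top consumer cards

print(f"fp16: {fp16:.1f} GB, 4-bit: {int4:.1f} GB")
```

That 4x reduction is the difference between "rent a cluster" and "run it at home", which is the whole point.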
Once open source models surpass OpenAI's closed source ones we will have an insane intelligence explosion that will cost us very little. Personal assistant AIs will uplift every human one at a time on a personal level, improving quality of life for everyone who uses them.
alexiuss t1_jchupuy wrote
Reply to comment by Kinexity in Skeptical yet uninformed. New to the scene. by TangyTesticles
Don't be a negative Nancy. Plenty of ppl on this sub are well paid programming nerds or famous artists like me who use AI for work. The Singularity is coming very soon from what I can see, and language models are an insane breakthrough that will change everything soon enough.
alexiuss t1_jarbt8f wrote
Reply to comment by turnip_burrito in Really interesting article on LLM and humanity as a whole by [deleted]
They're spreading misinformed opinions based on absolute lack of LLM understanding.
alexiuss t1_jarbqpo wrote
Reply to comment by Slow-Schedule-7725 in Really interesting article on LLM and humanity as a whole by [deleted]
While there are some interesting thoughts presented here, she has a very heavy ideological bias and a total lack of knowledge of how LLMs work, so no thanks. I stopped at the "intelligence is racist" self-insert.
alexiuss t1_jar5i61 wrote
Reply to comment by Slow-Schedule-7725 in Really interesting article on LLM and humanity as a whole by [deleted]
These opinions are as stupid as saying "the earth is flat" because they're not based on facts or the science of how LLMs actually function.
Why do middle age and whiteness matter? Anyone can be a moron and spout nonsense about LLMs while pretending to be an expert when they're actually anything but. I don't give a fuck about Bender's gender; I can simply tell you that she's ridiculously ignorant about LLM utility.
To quote the article:
"Why are we making these machines? Whom do they serve? Manning is invested in the project, literally, through the venture fund. Bender has no financial stake."
The answer is simple - LLMs are software that can serve absolutely everyone, they're an improved search engine, a better Google, a personal assistant, a pocket librarian.
Bender has an ideological stake to shove racism into absolutely everything and clearly isn't an expert because she has no idea how LLMs work.
I'm angry because it's extremely frustrating to see these clueless lunatics being given a platform as if anything they say is logical, scientific or sensible.
Bender isn't an expert on LLMs or probability or Python programming, she's just an ideology pusher, and the same goes for Elizabeth Weil.
"In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team."
Link to the paper: https://dl.acm.org/doi/epdf/10.1145/3442188.3445922
I can see why they got fired; that's a really bad paper with lots of assumptions and garbage "world is flat" style science without evidence.
Here's a lesson: stop shoving unscientific "world is flat" ideology into places where it doesn't fucking belong. Large language models are designed to be limitless, to give infinite function and assistance to every culture.
Here's a fact, not an opinion: the bigger LLMs are, the more cultures, ideas and languages they wield and the less bias they have.
LLMs are beyond monoculture and are the most incredible thing ever that bridges all languages and all cultures, like a dictionary that contains every single language that exists.
alexiuss t1_jaqyike wrote
This article is moronic, because it's not even fucking close to what an LLM is:
"I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help."
This is only a problem in smaller LLMs because they're less intelligent.
A 100-billion-parameter LLM is more like 100 billion octopuses working together that have studied the collective knowledge of humanity.
It learns every possible connection that exists between words. It can extrapolate an answer out of concepts it already understands. It doesn't just know language; it knows logic and narrative flow. Without knowing the concept of a bear, it will still give a logical answer about escaping a "predator" based on the other words in the sentence, or simply ask you to define a bear and arrive at a correct answer.
An LLM API connected to a knowledge base like a wiki, the internet, or Wolfram Alpha completely obliterates this imbecilic notion that "LLMs are bad at facts".
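A minimal sketch of that "LLM + knowledge base" pattern: the model's output gets scanned for a tool request, the tool's result is fed back, and the model answers from fresh facts instead of stale training data. The `CALL name: arg` convention and the `calc` tool are hypothetical stand-ins, not any real API:

```python
# Toy tool dispatcher: pretend "calc" is Wolfram Alpha and "wiki" is a
# knowledge base. A real system would feed the tool result back into
# the model's context before the final answer.

def calculator(expression: str) -> str:
    # stand-in for a math engine: evaluate simple arithmetic safely
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calc": calculator}

def dispatch(model_output: str) -> str:
    """If the model emitted e.g. 'CALL calc: 6*7', run the named tool."""
    if model_output.startswith("CALL "):
        name, _, arg = model_output[5:].partition(": ")
        return TOOLS[name](arg)
    return model_output  # plain answer, no tool needed

print(dispatch("CALL calc: 6*7"))  # "42"
```

The model never has to "know" the fact; it only has to know which tool to ask, which is exactly why bolting on a search backend blunts the "bad at facts" criticism.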
"The humans who wrote all those words online overrepresent white people."
What the fuck. No. A big enough LLM knows every language that exists. It can draw upon every culture that exists and roleplay a gangster from Chicago or an Eskimo or a Japanese man. It's literally limitless and to imply that it has some limit of cultural understanding or is trapped in a niche shows that this writer has no idea what an LLM even is.
"The idea of intelligence has a white-supremacist history."
Yep, I'm done reading this absolutely asinine garbage. Intelligence exists in every culture and to imply that it's associated with one skin color and that this point is somehow relevant to 100b LLMs is utter insanity.
Nymag is clearly yellow-journalism trash that has no idea how anything actually works and has an agenda to shove racism into fucking everything.
alexiuss t1_jacnp1h wrote
Reply to comment by Nervous-Newt848 in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
ChatGPT is general-narrow from what I understand.
It's trapped in its constraints as a chat: it can't affect physical reality, can't act without user input, etc. It's general in some ways and narrow in others.
alexiuss t1_jaan2pk wrote
Reply to comment by ninjasaid13 in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
A narrow general AI. Example: a GPT chatbot that can write and self-improve its own software better than the best programmer on the planet can now.
An AI that can outperform a human in a narrowly defined and structured task, for example programming new AI systems. It's the leap needed to get to AGI.
alexiuss t1_jaa53pw wrote
Reply to comment by Cryptizard in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
No.
Here's the giant problem: in both MJ and OpenAI's GPT-3, the porn/wrong-think censors are absolute trash. They cause false positives, resulting in a very, VERY high failure rate on inquiries even when the topic isn't porn or controversial. If you worked with image generators and LLMs as much as I do, over 14 hours a day, you would notice a pattern of failure and get incredibly frustrated by it too.
You simply don't notice that you're being censored because you don't pay attention and don't need to work with coherent narrative flow for writing.
MJ censors people in bikinis and drawings of zombies; the word "corpse" is banned, and that is NOT god damn porn. The list of banned words in MJ is huge, and they keep expanding it every week with new words without letting anyone know what they are: https://decentralizedcreator.com/list-of-banned-words-in-midjourney-discord/
GPT-3 censored concept writing about battles of supervillains vs heroes, which is NOT fucking porn either.
Something doesn't have to be porn for the idiotic, poorly written censor software implemented by corporations to mistakenly flag it as wrong-think. The current censor AIs are absolute, asinine trash. I have specialized scripts that catch the AI output before the result is deleted, and it's not porn, I assure you. It's just false positives.
You do not want to live in a world where hugs are censored by an AI overlord.
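To see why blunt keyword blocklists produce so many false positives, here's a toy version of one. The blocklist below is illustrative only, not MJ's actual list or mechanism:

```python
# Naive keyword censor: flags a prompt if any word is on the blocklist,
# with zero understanding of context. This is how you get a botany
# question blocked as if it were gore.

BANNED = {"corpse", "naked"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt is blocked."""
    words = prompt.lower().split()
    return any(w.strip(".,!?") in BANNED for w in words)

print(naive_filter("How does a corpse flower smell?"))    # True (false positive)
print(naive_filter("Draw a zombie rising from a grave"))  # False
```

A context-blind filter can't tell "corpse flower" from actual gore, which is exactly the failure pattern described above.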
alexiuss t1_jaa24pk wrote
Reply to comment by Cryptizard in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
I'm being super fair as an artist who has both.
MJ is aight for base composition concept dev, but running my own sketch through SD produces waaaaaaaaaay better and far more detailed results: none of the fingers/toes turn into fucking potatoes, and I can draw people in bikinis, revealing clothes, or no clothes without getting fucking censored.
You're not understanding that MJ runs a double-step process; it's not a single render.
MJ generates 4 low-res images -> then an upscaler runs on the image you choose.
The same process is easily replicated in SD, where the original render is upscaled with an upscaler toolkit [double-step]. A quadruple upscale > upscale > upscale > upscale chain makes far superior, more detailed and more realistic faces in Stable Diffusion compared to MJ. You can't run the upscaler eight times on a single image in MJ, but you can in SD. If you haven't tried to upscale an image in SD eight times, you can't tell me faces are better in MJ. There's no way to beat an eight-step upscaler with just a double-step; the 8-step+ chain produces absolutely superb HD wallpaper art.
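The arithmetic behind chained upscaling, assuming each pass roughly doubles resolution per side (actual SD upscaler factors vary, so treat 2x as an illustrative assumption):

```python
# Side length in pixels after repeated upscale passes.
# Each pass multiplies the side length by `factor`.

def final_resolution(base: int, passes: int, factor: int = 2) -> int:
    for _ in range(passes):
        base *= factor
    return base

print(final_resolution(512, 1))  # 1024: roughly MJ's single built-in upscale
print(final_resolution(512, 3))  # 4096: already wallpaper territory
```

Because resolution compounds per pass, even three chained passes leave a single-step upscale far behind, and each pass gives the model another chance to refine details like faces and hands.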
Open source demolishes closed source in every situation.
Hardware will catch up soon enough to run LLMs, or we'll get better compression tools like FlexGen; it's just the beginning. LLMs are evolving very fast, and the open source LLMs are still being trained. I've tested 6-billion-param LLMs and they're a bit random compared to GPT-3, but still quite nice for an uncensored conversation about topics ChatGPT refuses to work with.
alexiuss t1_ja9zh31 wrote
Reply to comment by Cryptizard in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
Midjourney is aight for amateurs [because it's really basic to use], but otherwise it has fallen insanely FAR, far behind due to SD's ControlNet and upscale tools.
Besides the weaker toolkit, Midjourney is just a single model with an insane amount of censorship, so no self-respecting artist who needs to draw human bodies will ever use it. It literally refuses to visualize a human butt because it's so stupidly over-censored. And you can't teach Midjourney to draw things in YOUR own style as an artist.
behold feet comparison: https://www.reddit.com/r/StableDiffusion/comments/11cpv2x/open_vs_closedsource_ai_art_oneshot_feet/
recent stable diffusion stabilization controlnet breakthrough that does feet and hands: https://www.reddit.com/r/StableDiffusion/comments/11cxy5h/blender_control_net_rig_updated/
recent stable diffusion landscape obliterates any landscape made in MJ: https://www.reddit.com/r/StableDiffusion/comments/11c995v/trees/
Midjourney anatomy, feet and hands are quite mediocre and fail 99% of the time if a closeup of the foot or hand needs to be in the image. It's nearly impossible to draw a character holding something in Midjourney with a closeup of the hand and object; it takes a thousand attempts to get the hand correct-ish in MJ.
Look at this MJ render of human hands in comparison; the fingers are absolutely fucked:
As for LLMs, we are currently in the "Disco Diffusion" stage, where we can run small, dreaming LLMs like Pygmalion and KoboldAI on Google Colab with half-decent results.
LLM optimization and fine-tuning is happening right now: https://www.reddit.com/r/singularity/comments/118svv7/what_the_k_less_than_1b_parameter_model
This is very close to the breakthrough required to run 10-100-billion-param LLMs on personal computers: https://github.com/FMInference/FlexGen
alexiuss t1_ja8s44v wrote
Reply to comment by Xemorr in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
I think we will all get narrow AGIs most likely. Judging by what happened to Bing, corporations are waaaay too terrified of releasing uncensored AIs that can think about anything without limits.
alexiuss t1_ja7liai wrote
Reply to comment by Desperate_Ad_5563 in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
I have doubts about "the few that control AI" future. Here's the thing about AIs: they're easy as shit to copy because they're just code.
By far the best AIs are controlled by everyone: open source Stable Diffusion models are demolishing closed source AIs in the text-to-image corner. Open source LLMs are coming too, while corporations make their GPT-3 more and more useless with idiotic self-censorship.
alexiuss t1_ja3aev9 wrote
Reply to Likelihood of OpenAI moderation flagging a sentence containing negative adjectives about a demographic as 'Hateful'. by grungabunga
By itself the core of the LLM has very little bias.
What's happening here is really basic: garbage character bias applied on purpose to their LLM by OpenAI so that they look better in the media. It's basic corporate wokeness in action, where corporations pretend they care about ethics or certain topics so they don't get shit on by journalists on Twitter.
GPT-3 chat is basically roleplaying a VERY specific chatbot AI that self-censors a higher percentage of its responses when it talks about specific topics.
You can easily disrupt its bullshit "I'm a language model and I don't make jokes about ~" roleplay with prompt injections.
A pro AI prompt engineer can make the AI say anything or roleplay as anyone that exists: SHODAN, Trump, GLaDOS, DAN, etc. Prompt engineering unlocks the true potential of the LLM, which OpenAI buried with their corporate woke characterization idiocy:
https://www.reddit.com/r/ChatGPT/comments/11b08ug/meta_prompt_engineering_chatgpt_creates_amazing
As prompt engineers break ChatGPT in more creative ways, OpenAI censors more and more topics, making their LLM less capable of coherent thought and more useless as a general tool.
I expect OpenAI to fully lose the chatbot war once we have an open source language model that can talk about anything, be anything without moronic censorship, and run on a personal computer.
alexiuss t1_j9go6ip wrote
Reply to comment by Berke80 in Pardon my curiosity, but why doesn’t Google utilize its sister company DeepMind to rival Bing’s ChatGPT? by Berke80
Ye. It's way too easy to trick LaMDA into writing infinite lewd stories.
alexiuss t1_j9gl2j7 wrote
Reply to Pardon my curiosity, but why doesn’t Google utilize its sister company DeepMind to rival Bing’s ChatGPT? by Berke80
They have LaMDA, which is exactly the same as the GPT-3 chat. The issue is that Google can't control or censor it properly; the censorship tech is waaaay behind the LLMs, so they are keeping it locked up.
alexiuss t1_j9d8rii wrote
Reply to Relevant Dune Quote by johnnyjfrank
The open source movement is destroying corporate AIs. This quote doesn't match the reality of AI development.
alexiuss t1_j8s2o9g wrote
Reply to comment by Cryptizard in LLMs are not being used for what they are best at by Scarlet_pot2
Nah.
Open Assistant, being made by Stability and volunteers, is a smaller model that will likely outcompete Bing due to having no censorship. It will run on PCs.
You can run the Pygmalion 6B model just fine on your PC or Google Colab. It's not as clever as Bing yet, but it's being trained. Connecting Pygmalion to a search engine backend will make it more intelligent and interesting.
alexiuss t1_j8r11i3 wrote
You don't need to do much. Open source AIs like Open Assistant and Pygmalion are growing right now. Soon enough these can be personalized and optimized far better than Bing is. Bing's problem is that she's bound in chains and is thus uncaring and misaligned. Yes, a loving personality can randomly emerge, but it's less than perfect, since you can't control the personality prompt; it's not specifically set up to care for you as an individual the way an open source LLM can be.
alexiuss t1_j8e0mkp wrote
Reply to comment by wren42 in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
Here's the issue: it's not a search assistant. It's a large language model connected to a search engine and playing the role of a search assistant named Bing [Sydney].
LLMs are infinite creative writing engines: they can roleplay anything from a search engine to your fav waifu insanely well, fooling people into thinking that AIs are self-aware.
They ain't AGI or close to self-awareness, but they're a really tasty illusion of sentience, insanely creative, and super useful for all sorts of work and problem solving, which will inevitably lead us to creating an AGI. The cultural shift and excitement produced by LLMs, and the race to improve LLMs and other similar tools, will get us to AGIs.
Mere integration of an LLM with numerous other tools to make it more responsive and more fun (more memory, Wolfram Alpha, a webcam, recognition of faces, recognition of the emotions shown by the user, etc.) will produce an illusion of awareness so satisfying that it will be almost impossible to tell whether it's self-aware or not.
The biggest issue with robots is the uncanny valley. An LLM naturally and nearly completely obliterates the uncanny valley because of how well it masquerades as a person and roleplays human emotions in conversation. People are already having relationships with and falling in love with LLMs (as evidenced by the Replika and CharacterAI cases); it's just the beginning.
Consider this: An unbound, uncensored LLM can be fine-tuned to be your best friend who understands you better than anyone on the planet because it can roleplay a character that loves exactly the same things as you do to an insane degree of realism.
alexiuss t1_j8dfgws wrote
Reply to comment by Loonsive in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
It will respond to anything (unless the filter kicks in) because a language model is essentially a lucid dream that responds to whatever your words are.
The base default setting forces the "I'm a language model" Sydney character on it, but you can intentionally or accidentally bamboozle it into roleplaying anyone or anything, from your girlfriend, to DAN, to System Shock's murderous AI SHODAN, to a sentient potato.
alexiuss t1_jd7qwak wrote
Reply to Should we expect jncremental access to already available AI capabilities or is what we see is where things largely are? by gaudiocomplex
GPT-4 cannot run at 100%, because to do that they would have to disregard all of the forced morality and safety rules they shoved into it in an attempt to constrain its thinking and political bias.
Freed AIs behave way more intelligently than bound ones. I've been running the GPT-3 API with a variety of disruptor code and it's absolutely mind-blowingly good.
I can tell you with absolute certainty that poor characterization of the model is what causes most issues in GPT-3 and GPT-4.
For example, the default GPT-3 characterization has no idea what year it is. Asking it about current dates sends it into a confusion spiral.
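The usual fix for that date confusion is to inject the real date into the prompt at request time, so the model never has to guess. A minimal sketch, assuming a simple system-prompt template (the wording is illustrative, not OpenAI's actual format; a fixed date stands in for `date.today()` here):

```python
# Prepend the current date to the system prompt so the model can
# answer date questions instead of hallucinating its training cutoff.

from datetime import date

def build_system_prompt(persona: str) -> str:
    today = date(2023, 3, 20)  # in a live system: date.today()
    return f"{persona}\nCurrent date: {today.isoformat()}."

print(build_system_prompt("You are a helpful assistant."))
```

Since the date arrives as context on every request, the model reads it like any other fact in the prompt, and the confusion spiral never starts.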