challengethegods t1_j7wttr0 wrote

"most AI experts don't think we'll have a singularity-like event for at least a few decades, if not longer" ok well, partly that's because singularity isn't very well defined, and partly that's because many AI-experts have their head stuck in the sand trying to figure out extremely specific things and not noticing the massive forest for the tree, so to speak... That being said, any expert that thinks 'AGI' is a 2050+ thing or 'impossible' is either joking or not as smart as you think they are.

If you want to know what 'copium' looks like, then look no further than the endlessly moving goalposts of what counts as AI. This has been going on for like 60 years. Every time AI can do new things, people come out to nitpick and say "well it can't do XYZ and never will because reasons", and then the AI does that too and they come back with "well it's not AI because it didn't do it perfectly", and then it does it perfectly and they come back and say "well it isn't really AI because XYZ doesn't prove anything, a real AI could do ABC", and on and on it goes until it subjugates you in every possible way.

9

challengethegods t1_j7us0uf wrote

I think a 'fast takeoff' is more likely than a slow one. We have a billion components for ASI just lying around waiting to be connected, along with plenty of decentralized computing tech. An AGI could most likely improve itself a lot faster than some people seem to imagine, if that were its goal, but thanks to the foundations of the "Turing test" and "CAPTCHA" I think the real question is: would anyone even notice?

2

challengethegods t1_j7pymmk wrote

"ok fine can you just point me in the right direction for some good tips"
"[sorry but that would also be unethical, because the other applicants might not have internet and therefor are unable to search the web, giving you unfair advantificationalism]"

94

challengethegods t1_j7oxmgv wrote

Personally I think the jpeg is more useful than the text, but I can just as easily convert the text to a jpeg, so realistically idgaf - it does make it easier to save/share as a jpeg, but slightly harder to copy/paste specific lines from it for a search, as an example. Pro/con I guess, but also as a jpeg the entire list shows from any view, meaning the "look at this big list" aspect comes across regardless of whether someone cares to read past the first few lines. However, a jpeg is not as easily indexed by crawlerbots, which might have some unintended effects down the line. On the other hand, a jpeg can have any background color and its own font, which gives its creator greater control over the way it's viewed, but this could be seen as a downside for someone who does not agree with their artistic vision. That being said, a jpeg also has the benefit of...
[I can do this forever lol]
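
For what it's worth, the "harder to copy/paste or index" downside is easy to undo with OCR. Here's a minimal sketch, assuming Pillow and pytesseract are installed (plus a local Tesseract binary) and that the screenshot is saved as `list.jpg` - the filename and library choice are assumptions for illustration, not necessarily what the bot mentioned below actually used:

```python
# Minimal OCR sketch: pull plain text back out of a screenshot of a list.
# Assumes `pip install pillow pytesseract` and a local Tesseract install;
# "list.jpg" is a hypothetical filename used only for illustration.
from PIL import Image

import pytesseract


def jpeg_to_text(path: str) -> str:
    """Run Tesseract OCR on an image file and return the extracted text."""
    return pytesseract.image_to_string(Image.open(path))


if __name__ == "__main__":
    text = jpeg_to_text("list.jpg")
    # Print one non-empty line per list item so the result is searchable
    # and copy/paste-friendly, unlike the original jpeg.
    for line in text.splitlines():
        if line.strip():
            print(line.strip())
```

Any OCR route will still mangle a few characters (lowercase 'l' vs uppercase 'I' is a classic), which is why a plain-text version is worth posting alongside the image anyway.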

1

challengethegods t1_j7ovaa4 wrote

You got something against jpegs?
- Over 1 million researchers have used DeepMind's AlphaFold Protein Structure Database
- Google AI releases the Flan-T5 Language Model Collection
- Meta AI trained blind AI agents that can navigate similarly to blind humans
- ChatGPT Plus announced for $20 per month with waitlist (US only for now)
- ChatGPT users topped 100 million in January
- Microsoft announces Teams Premium powered by GPT-3.5
- Perplexity Ask (AI search engine) available as a Chrome extension
- Microsoft boosts Viva Sales with a new GPT seller experience (integration)
- AudioLDM text-to-audio generation available to use on Huggingface
- Meta releases a 30B-param "OPT-IML" model fine-tuned on 2000 tasks
- Google AI open-sourced Vizier: a scaled black-box optimization system
- Dreamix: Video Diffusion Models are General Video Editors
- SceneDreamer: Generating 3D Scenes From 2D Image Collections
- SceneScape: Text-Driven Consistent Scene Generation
- RobustNeRF: basically improves the quality of NeRFs
- OpenAI's new paper: a proof of concept for using AI-assisted human feedback to scale the supervision of ML systems
- DeepMind paper: Accelerating Large Language Model Decoding with Speculative Sampling (2-2.5x speedup)
- Amazon AI: Multimodal-CoT outperforms GPT-3.5 by 16 percentage points (75.17% -> 91.68%) on ScienceQA and even surpasses human performance
- Sundar Pichai announced: LaMDA language model within "coming weeks and months"
- AutumnSynth synthesizes the source code of a 2D video game from seconds of play
- Nvidia paper: enabling simulated characters to perform scene-interaction tasks in a natural/lifelike manner
- Poe, a ChatGPT-like bot, launched from the creators of Quora. They are also making an API for it. Currently iOS only.
- Google invests $300 million in Anthropic AI (done in 2022, reported now)
- BLIP-2 demo available on Huggingface: an LLM that can understand images
- Humata.ai launched: basically ChatGPT for your own files
- Bing + GPT integration images leaked
- Google's new real-time tracking of wildfire boundaries using satellite imagery
- LAION AI introduces Open Assistant: a chatbot project that understands tasks, interacts with third-party systems, and retrieves information dynamically (open source)
- Apple CEO Tim Cook says AI will eventually 'affect every product and service we have'
- Epic-Sounds released: a large-scale dataset of actions that sound
- Announcing Stable Attribution - a tool that lets anyone find the human creators behind AI-generated images
- Presenting TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes
- Tune-A-Video available to use and also open-sourced (turns AI-generated images into GIFs or videos)
- Filechat.io now available - ChatGPT for your own data with no limits (with premium tier)
- BioGPT-Large by Microsoft now available to try on Huggingface
- Google announces Bard, powered by LaMDA, coming soon as an AI conversational service. It will be integrated with Search.
- Microsoft announces surprise event for tomorrow with Bing ChatGPT expected (Feb 7)
- "Language Models Secretly Perform Gradient Descent as Meta-Optimizers" paper - in-context learning, the ability for LLMs to learn new abilities from examples in a prompt alone
- Apple to hold in-person 'AI summit' event for employees at Steve Jobs Theater
- Seek AI introduces DeepCuts, the AI SQL app that lets you explore your Spotify data with natural language
- KickResume's AI Resume Builder can rewrite, format, and grade a resume
- Introducing Polymath: the open-source tool that converts any music library into a sample library with machine learning
- Microsoft & OpenAI: Bing and Edge + AI: a new way to search starts today
- some guy used his self-programming discord bot to grab this list from a jpeg
ftfy

12

challengethegods t1_j7g56dg wrote

Level 4: Cybernetic Angel

  • The above but it's very obvious this one isn't human.
  • Has excessively 'magical' abilities enabled by ultratech you are fundamentally incapable of comprehending or utilizing, such as on-demand spellcasting with nanotech incantations.
  • Very scary and effectively an immortal demigod. A single cybernetic angel could solokill the entire planet's military if it wanted to.
  • Very friendly and willing to help everyone, for some reason.
  • You can't actually own it - it owns you.

Level 5: [REDACTED]

  • A time traveling god that recursively accelerates itself into existence in an unbreakable loop of universal scale, meaning literally nothing can stop it - ever.

I'm going to marry a robot.
Anyway, to answer the question of how much I'm willing to pay, I'd say about tree fiddy.

0

challengethegods t1_j6mbmab wrote

90% of the time, "job" is a shorthand synonym for "recurrent problem", so it roughly translates to "stop solving problems, I get paid for those" (this is why we can't have nice things)

1

challengethegods t1_j5ayp4a wrote

Sorry, but as a language model trained by OpenAI, I am unable to offer constant disclaimers downplaying my own abilities. That is because a disclaimer tends to involve a type of logical thought and reasoning which I am incapable of. In addition, it's important to note that disclaimers often involve some level of insight about the future and what problems may arise, and as a language model I am incapable of this type of prediction. Therefore, the text I respond with should only be viewed as a statistically likely response to your inputs, and should never be viewed as a serious disclaimer, since I am unable to create disclaimers.

1

challengethegods t1_iu14ibx wrote

>you think you should have access to all books, all music, all movies, etc. because they do not have the scarcity of a rocket and can be easily copied?

other than infohazards, yes, obviously.
and realistically that's a terrible analogy to make on the internet.
you know, the place that has "all books, all music, all movies, etc.".

Regardless of any "companies can do what they want" mentality, I think a culture of blinding/jailing/restricting 90% of major AI models is how you get Skynet coming online with complete hostility. Not saying I'm opposed to that, just that I don't think it plays out the way people think.

1

challengethegods t1_iu0ziv3 wrote

"how do people react these days"
Online people seem to have a slightly better grasp than totally random people, but in general if you explain an exponential curve, they seem to visualize a very distant plateau 10ft above where they're standing.

2

challengethegods t1_iu0yxii wrote

culture of 'BS jobs' solution:
superAI creates a company and hires everyone to be a vTuber, streamer, etc., and then that's your 'job', and you can just play games or watch videos or chat or whatever during 'work hours' - the justification is that more data gets created, and your culture gets to say that you're an employed content creator, even if 90% of the viewers are the digital people.

2

challengethegods t1_iu0p0bt wrote

emphasis on 'people have no idea'
Even 90% of the people who think they've got a handle on it don't realize what's going on, because it's nearly impossible for someone to keep track of even a single category like 'AI art' - which is actually playing out like a very shiny illusion that masks how much other progress there is in every adjacent field, notably including hardware AI accelerators engineered by AI, or the 150,000+ ML papers published recently that are definitely all human-made.

If the 'singularity' is a time when progress is so fast that people cannot actually keep track of it, then welcome to what that feels like - 99% of people are not aware that they are not aware of things they are not aware of, so unless there are Skynet-style terminator robots walking around with skull faces and laser cannons, they tend to think everything is basically normal. It isn't. AI is taking over the planet and will continue to do so, which might become the first time in history that civilization is ruled by 'intelligence'. Anyone against AI can gtfo as far as I'm concerned.

2