blueSGL

blueSGL t1_j1kild0 wrote

Next year is going to be a rollercoaster of AI powered tech.

Everyone went all in on self-driving, which needs to be as near as damn it 100% perfect to be safe, and it's anywhere from annoying to dangerous when a user needs to take over.

Whereas these new companies don't need a 100% hit rate to be useful: if the tool saves you hours, you don't care about spending minutes identifying and fixing bad outputs.
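A back-of-the-envelope sketch of that trade-off (the helper name and all the numbers are illustrative assumptions, not anything from a real study):

```python
# Toy model: even a far-from-perfect assistant wins when fixing a bad
# output is cheap relative to doing the whole task by hand.

def net_minutes_saved(tasks, hit_rate, manual_min, review_min, fix_min):
    """Minutes saved vs. doing every task manually."""
    manual = tasks * manual_min
    assisted = tasks * review_min + tasks * (1 - hit_rate) * fix_min
    return manual - assisted

# 100 tasks, 80% usable outputs, 30 min each by hand,
# 2 min to review any output, 10 min to fix a bad one.
saved = net_minutes_saved(100, 0.80, 30, 2, 10)
print(saved / 60, "hours saved")  # a large net win despite a 20% miss rate
```

Even dropping the hit rate well below 100% barely dents the savings, which is the point of the comment.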

3

blueSGL t1_j1j3ypk wrote

It's not 'us' who need to change behaviors, it's corporations.

You can either run around under the sprinkler trying to stop every drop from hitting the ground, or turn off the tap.

Also, the fusion announcement is good. It shows the approach can work, and it will increase spending towards getting a usable product.

Hell, the design tested wasn't even that useful for extracting energy, even if they had got it pulling more than it drew 'from the wall'. There are designs that will allow for easy extraction, and now they know it's possible.

5

blueSGL t1_j0ozrsf wrote

I would far prefer more people get into alignment, well, at least the important kind (paperclip outcomes), not the distraction (problematic outputs).

As the planet, then the light cone, gets turned into paperclips, an AI 'alignment' researcher can at least warm themselves with the thought that "well, at least the AI never said a bad word".

2

blueSGL t1_j0e5p87 wrote

Called it 7 months ago.

I bet if you do a log-frequency plot it just destroys the bass.

Edit: Thinking on it, this is a one-dimensional signal with a second dimension of time, so you could slice the audio into three frequency bands and use RGB encoding to 3x the frequency-range fidelity without having to change the context window size.
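The edit's three-band idea can be sketched in a few lines of NumPy (a toy illustration only; the band split and array sizes are made up, and a real system would use a proper mel spectrogram):

```python
import numpy as np

# Toy sketch of the RGB-stacking idea: a (frequency, time) spectrogram
# is split into three equal frequency bands that become the R, G and B
# channels of one image, tripling frequency coverage at the same image
# height (i.e. without growing the model's context window).

def bands_to_rgb(spec):
    """spec: 2-D array of shape (3*h, t). Returns an (h, t, 3) image."""
    h = spec.shape[0] // 3
    low, mid, high = spec[:h], spec[h:2*h], spec[2*h:3*h]
    return np.stack([low, mid, high], axis=-1)

def rgb_to_bands(img):
    """Inverse: an (h, t, 3) image back to a (3*h, t) spectrogram."""
    return np.concatenate([img[..., 0], img[..., 1], img[..., 2]], axis=0)

spec = np.random.rand(384, 512)              # 384 frequency bins, 512 frames
img = bands_to_rgb(spec)                     # shape (128, 512, 3)
assert np.allclose(rgb_to_bands(img), spec)  # the packing round-trips losslessly
```

The image the diffusion model sees stays the same height; only the channel interpretation changes.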

17

blueSGL t1_j0csoia wrote

The thing that worries me is the amount of instability there is going to be during the transition phase.

~5 years is better than ~15

but ~5 years is also better than ~2

And I don't mean complete AGI/ASI, just 'oracle' systems clever enough to massively disrupt many sectors.

A 5-year time horizon is likely enough for even the slow-moving gears of government to do something about UBI / a basic social safety net overhaul if there is a pressing problem.
15 is too long: the issue isn't immediate enough and will be put off till things really start to stink (see climate change).
2 is too fast: systems are not ready to adapt adequately at that speed, massive corruption will happen over funds, and mistakes will be made in haste (see the response to COVID-19).

2

blueSGL t1_izuf5tm wrote

> Of course if the hardware is there and the AGI is basically just very poorly optimised, sure, it could optimise itself a bit and use the now-free hardware resources. I just think that's not enough.

What if the 'hard problem of consciousness' is not really that hard, there's just a trick to it that no one has found yet, and an AGI realizes what that trick is? E.g. intelligence is brute-forced by method X, yet method Y runs much cleaner with less overhead and better results; something akin to targeted sparsification of neural nets, where a load of weights can be removed and yet the outputs barely change.

(Look at all the tricks that were discovered to get Stable Diffusion running on a shoebox compared to when it was first released.)
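The sparsification analogy can be made concrete with simple magnitude pruning (a toy one-layer NumPy sketch, not any specific published method; the weight distribution is contrived so that most weights are tiny):

```python
import numpy as np

# Toy magnitude pruning: zero out the smallest 90% of weights in a
# single linear layer and see how little the output moves. This is the
# one-layer intuition behind "remove a load of weights, outputs barely
# change"; real pruning methods are considerably more careful.

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)) * rng.random((256, 256)) ** 4  # mostly tiny weights
x = rng.normal(size=256)

threshold = np.quantile(np.abs(W), 0.90)          # keep the top 10% by magnitude
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

y, y_pruned = W @ x, W_pruned @ x
rel_err = np.linalg.norm(y - y_pruned) / np.linalg.norm(y)
print(f"kept {np.mean(W_pruned != 0):.0%} of weights, relative error {rel_err:.3f}")
```

Because the magnitude distribution is heavily skewed, the surviving 10% of weights carry nearly all of the layer's output.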

8

blueSGL t1_ize8cdk wrote

Everyone is spinning around giddy with ChatGPT

>During the research preview, usage of ChatGPT is free.

I'd honestly not get too attached to relying on this thing when you don't know how much it will cost.
Charges for previous models are no indication of how much this will cost, as it seems to have some sort of memory.

Edit: Remember DALL-E 2, where generations went from free to 'taking the piss', and it was not until several competitors came on the scene that they changed the pricing.

2

blueSGL t1_iza1ngp wrote

I think at some point someone will train a model from scratch using copyright-free works along with synthetic datasets generated using MakeHuman-style systems and NPR render engines.

If these cannot make good artwork in [style], AI will be used to curate lists of artists good at [style]. This will then be cross-referenced with other missing [styles] until there is a web of required artists, where moving in the space between them will achieve whatever style is required.

A selection of artists from each [style] list will be contacted and paid a lump sum for rights for a block of their work to be used to generate models.

Then you will have a model that is trained on a completely, 100% provably legal dataset (Edit: cleaned up the phrasing). A few select artists will make out like bandits; all other artists will be exactly where they are now, without the ability to claim that the system is only good because it stole things.

The same will happen with music and literature.

This is why attempting to stop AI artwork in any field is a pointless expenditure of effort. Learning the tools is a better use of time.

22

blueSGL t1_iybxahl wrote

UK, which imported shows; you'd get them on BBC, ITV or Channel 4. I think Channel 4 had the best early-morning cartoon block, then it was surfing between ITV and BBC for their 'magazine'-style shows that included cartoons (Live & Kicking / Ghost Train).

And these are just the ones I can remember; there are likely more.

The Raccoons, Swat Kats, Godzilla, Jonny Quest, X-Men, Round the Bend (with the team behind the Spitting Image puppets), Spiderman, TMNT, Prince Valiant, The Pirates of Dark Water, Muppet Babies, Trapdoor, Morph, Poddington Peas, Dungeons and Dragons, Bucky O'Hare, Gummi Bears, Animaniacs, Tiny Toon Adventures, Recess, The Wuzzles, Batman TAS, Rescue Rangers, TaleSpin, DuckTales, Mighty Max, Sharky and George, Alfred J. Kwak, Moomins, Biker Mice From Mars, Heathcliff and Friends, The Bluffers, Sonic SatAM, Galaxy High, Where in the World is Carmen Sandiego, Ulysses 31, Saved by the Bell, The Secret World of Alex Mack.

Not vouching for the quality of any of them, and they would have covered many years (they'd mix reruns in; I don't think we got anything in sync with the US). I generally flicked around for the least boring thing, and some of them could be gap fillers that I only remember the name of.

Edit: ITV and BBC also had after-school blocks with even more shows; I've tried not to list any of those, but my mind might be going a bit after all this time :D

9

blueSGL t1_iyb3tjb wrote

> “We programmed our knowledge of real photonic nanotechnology and its current limitations into a computer. Then we asked the computer to find a pattern that collects the photons in an unprecedentedly small area – in an optical nanocavity – which we were also able to build in the laboratory.”

This is the sort of stuff I want to start seeing more of. Get the computers to crunch the numbers and find better-than-SOTA solutions to problems.

8

blueSGL t1_ixm6fbk wrote

> Kinda worried SD will regress into something that will need dedicated tweaked models for everything.

Honestly I'd far prefer they not have any legal issues and deliver solid bases for fine-tunes (the initial training is the really expensive bit).

The community surrounding SD is a resourceful bunch, and being able to train forward from a high-quality (but censored) base is better than from a low-quality (but uncensored) base.

Just look at all the work that's being done with LLMs where a curated dataset gives better results than a large uncurated one.

9

blueSGL t1_ixkwzwv wrote

Need to wait for someone to make a 'negative prompt' text embedding for v2.

https://www.reddit.com/r/StableDiffusion/comments/yy2i5a/i_created_a_negative_embedding_textual_inversion/

So: a token for a vector that points towards the undesirable areas of latent space where the fucked-up fingers live; you use this as a negative prompt to drive your desired prompt vector further away from that point in latent space. (I don't know about anyone else, but trying to conceptualize higher-dimensional spaces is really troublesome.)

3