
blueSGL t1_j29d1ee wrote

> There is no product for them to sell given people now expect this service to be free.

I don't get the argument.

If they want to yoink it and put it behind a paywall where you pay for tokens, they could do that today.

If people still want to use it, they either pay or they stop using it.

This has happened before (look at DALL-E 2).

13

blueSGL t1_j26x0dt wrote

pre-AGI

Mass poverty is destabilizing, and destabilization is bad for business. Automation/AI will come at different rates; it won't be uniform or instantaneous.

Big chunks of the economy will either be massively assisted or replaced by AI (likely one, then the other). Those people need to be supported or they will be unable to buy the products and services that are being automated in the rest of the economy.

This will cause enough problems that UBI will have to happen. Governments/billionaires can't just sit back and watch the fireworks with Automation/AI providing them everything; that point won't have been reached yet. They will still need the sectors that are not automated to continue working.

Post-AGI

Whoever is the first to crack AGI also has to crack alignment; we get exactly one chance at that.
I highly recommend Nick Bostrom's Superintelligence for an in-depth look at all the ways 'obvious' solutions can go wrong, and some approaches for getting it right. Funnily enough, the approaches for getting it right generally amount to asking the AI to do (and I'm massively simplifying/paraphrasing) "the best thing for humanity" and leaving that exact goal, the nuances, the balancing act, etc. to be worked out by the AI itself.

In such a scenario (one of the safest ways to handle alignment being to hand the problem off to the AI itself), the solution would not lend itself to billionaires. The more you drill down and define the goal function yourself, the higher the likelihood you fuck everything up during the one chance humanity has to get things right (think of a genie that gives you what you literally asked for rather than what you actually desire).

Either the future light cone is gonna be paperclips or we might end up with a half decent post scarcity society.

1

blueSGL t1_j22ljya wrote

I'm not quite sure how it works. It has been confidently incorrect at times, and at other times it referenced a YouTube video where the information wasn't in the title or the about box, and the answer wasn't presented in a way directly lifted from the subtitles. So it looks like it found the video and re-contextualized the information from the subtitles, which is quite impressive.

8

blueSGL t1_j21ay8x wrote

LLMs, which work on statistical likelihoods for next-token prediction, benefit from more data.

That, along with the truism

"You always find things in the last place you look"

can be a very powerful tool.

There will be some correlation between search terms and results, otherwise search would be pointless. At a large enough scale that correlation can sift signal from noise, not only in terms of which results satisfy a query but in the deltas between individual search terms.
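
A toy sketch of that sifting (the queries, result IDs, and log below are all hypothetical, purely for illustration): tally which result a user finally stops on for each query, i.e. the "last place you look", and the relevant answer separates from the noise as volume grows.

```python
from collections import Counter, defaultdict

# Hypothetical (query, last_clicked_result) pairs -- the "last place you look".
search_log = [
    ("fix midi mapping file", "video_A"),
    ("fix midi mapping file", "forum_post_7"),
    ("fix midi mapping file", "video_A"),
    ("midi mapping xml edit", "video_A"),
]

last_click_counts = defaultdict(Counter)
for query, final_result in search_log:
    last_click_counts[query][final_result] += 1

# At scale, the most common "last click" per query becomes a relevance signal,
# and comparing counts across similar queries gives the delta between terms.
for query, counts in last_click_counts.items():
    best_result, _ = counts.most_common(1)[0]
    print(f"{query!r} -> {best_result}")
```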

2

blueSGL t1_j1sx169 wrote

pre-AGI

Mass poverty is destabilizing, and destabilization is bad for business. Automation/AI will come at different rates; it won't be uniform or instantaneous.

Big chunks of the economy will either be massively assisted or replaced by AI (likely one, then the other). Those people need to be supported or they will be unable to buy the products and services that are being automated in the rest of the economy.

This will cause enough problems that UBI will have to happen. Governments/billionaires can't just sit back and watch the fireworks with Automation/AI providing them everything; that point won't have been reached yet. They will still need the sectors that are not automated to continue working.

Post-AGI

Assumptions are made in the OP that whoever is the first to crack AGI also cracks alignment; we get exactly one chance at that.
I highly recommend Nick Bostrom's Superintelligence for an in-depth look at all the ways 'obvious' solutions can go wrong, and some approaches for getting it right. Funnily enough, the approaches for getting it right generally amount to asking the AI to do (and I'm massively paraphrasing) "the best thing for humanity" and leaving that exact goal, the nuances, the balancing act, to be worked out by the AI itself.

In such a scenario (one of the safest ways to handle alignment being to hand the problem off to the AI itself), the solution would not lend itself to billionaires. The more you drill down and define the goal function yourself, the higher the likelihood you fuck everything up during the one chance humanity has to get things right.

Either the light cone is gonna be paperclips or we might end up with a half decent post scarcity society.

1

blueSGL t1_j1qa5gh wrote

Any YouTube video with subs (manual or autogenerated) has them calculated at upload time, not whilst the video is playing back in your browser.

There are websites you can use to download the SRT file.

E.g. this is the video linked above:

https://www.downloadyoutubesubtitles.com/?u=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DByDlm-yBjks

SRT is just a text file with a different extension, so it can be scraped without needing to decode anything.
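
To give a sense of how little work that scraping takes, here's a minimal sketch (the filename is hypothetical; SRT is just numbered cues with a timestamp line followed by the text):

```python
def srt_to_text(path: str) -> str:
    """Strip SRT cue numbers and timestamp lines, keeping only the spoken text."""
    with open(path, encoding="utf-8") as f:
        raw = f.read()

    kept = []
    for line in raw.splitlines():
        line = line.strip()
        # Skip blanks, cue indices ("1", "2", ...) and "00:00:01,000 --> 00:00:04,000" lines.
        if not line or line.isdigit() or "-->" in line:
            continue
        kept.append(line)
    return " ".join(kept)

# e.g. transcript = srt_to_text("ByDlm-yBjks.srt")
```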

3

blueSGL t1_j1p8l2s wrote

The one from the You.com sidebar is about on par with this:

>Ricky Gervais: "So, what do you think about gender identity?"
>Chris Rock: "I think it's important that we recognize and respect the complexities of the issue."
>Louis C.K.: "Yeah, it's a complicated topic, but it's important that we all listen to each other and make sure everyone has a voice."
>Ricky Gervais: "Absolutely. We should strive to create an environment where everyone feels safe and accepted no matter their gender identity."
>Chris Rock: "Agreed. It's important that we all work together to create an inclusive and understanding culture."
>Louis C.K.: "Right. We should always strive to create a world where everyone can be their true selves without fear of judgement or discrimination."

2

blueSGL t1_j1oxxnl wrote

> Give me a list of the Pro's and cons of combining search engines with chat technology

I asked you.com https://i.imgur.com/zYMho7l.png

>The pros of combining search engines with chat technology include:
>
>* Increased efficiency and accuracy in retrieving information
>* Easier access to data repositories
>* Reduced time spent searching for information
>* Improved user experience
>
>The cons of combining search engines with chat technology include:
>
>* Higher cost to implement and maintain
>* Increased training and support costs
>* Possibility of inaccurate or incomplete results
>* Difficulty in handling large amounts of data

2

blueSGL t1_j1ow700 wrote

I've swapped over to you.com as my search engine now. For more fiddly technical stuff I've found it does rather well so far. One query I did recently:

I wanted to know if there was a file you could manually edit for MIDI mapping your own controllers in Serato. The little 'chat' box gave me the correct result from a googlefucked query of 'custom midi map sorato edit file', and it referenced a YouTube video which, as far as I can tell, didn't have the info in the title or the about box. Scraping the subtitles?

Screenshot of You: https://i.imgur.com/qH72bOD.png

Youtube Vid https://www.youtube.com/watch?v=ByDlm-yBjks

3

blueSGL t1_j1n8084 wrote

They seem to be getting clever, especially around certain concepts. I doubt they have hard-coded training around [subject] such that the returned text is always [block text from OpenAI]; more likely they have trained it to return a [keyword token] when [subject] gets mentioned, and that token is what pulls in the [block text from OpenAI].

You can bet they are going to work hard with every trick they can think of to cut inference cost. Keeping a lookup table for a lot of common things and getting the model to return a [keyword token] that activates one of them would be one way of going about it.

That's also likely how this sort of system would work in a tech support setting. You don't need the system waxing lyrical over [step (n)]; you just need to tell the customer to perform [step (n)], with maybe a little fluff at the start or the end to make things flow smoother.
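
A minimal sketch of that lookup-table idea (the token names and canned strings below are invented for illustration; nothing here is documented OpenAI behaviour): let the model emit a short [keyword token] and splice in the canned block afterwards, so the expensive model never has to generate the boilerplate itself.

```python
# Hypothetical canned blocks keyed by special tokens the model was trained to emit.
CANNED_BLOCKS = {
    "[POLICY_NOTICE]": "Sorry, I can't help with that request.",
    "[STEP_RESET_ROUTER]": "Unplug the router, wait 30 seconds, then plug it back in.",
}

def expand_keyword_tokens(model_output: str) -> str:
    """Replace any [keyword token] in the model output with its canned block."""
    for token, canned in CANNED_BLOCKS.items():
        model_output = model_output.replace(token, canned)
    return model_output

# e.g. the model only generates the fluff around the token:
print(expand_keyword_tokens("Let's get you back online. [STEP_RESET_ROUTER] Let me know how it goes."))
```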

1

blueSGL t1_j1kild0 wrote

Next year is going to be a rollercoaster of AI powered tech.

Everyone went all in on self-driving, which needs to be near as damn it 100% perfect to be safe, and is anywhere from annoying to dangerous when the user needs to take over.

Whereas these new companies don't need a 100% hit rate to be useful: if it saves you hours, you don't care about spending minutes identifying and fixing bad outputs.

3

blueSGL t1_j1j3ypk wrote

It's not 'us' that need to change behaviors, it's corporations.

You can either run around under the sprinkler trying to stop every drop from hitting the ground, or turn off the tap.

Also, the fusion announcement is good. It shows it can work, and it will increase the spending towards getting a usable product.

Hell, the design tested wasn't even that useful for extracting energy, even if they got it producing more than it pulls 'from the wall'. There are designs that will allow for easier extraction, and now they know it's possible.

5

blueSGL t1_j0ozrsf wrote

I would far prefer more people get into alignment, or at least the important kind (paperclip output), not the distraction (problematic output).

As the planet and then the light cone get turned into paperclips, an AI 'alignment' researcher can at least warm themselves with the thought that "well, at least the AI never said a bad word".

2