Recent comments in /f/singularity

HarbingerDe t1_jegn35o wrote

What are you honestly proposing as an alternative to UBI?

UBI is pretty much the only way capitalism can be maintained after an AGI job takeover. If there's no UBI, you have literally billions of hungry, desperate people who will be happy to tear down the prevailing global economic system.

1

visarga OP t1_jegmcux wrote

I think they spin up a container if there isn't one running. Usually there isn't, so you have to wait a minute or two. Then it works slowly, but it is simpler than downloading the model.
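For anyone curious what that looks like in practice, here's a rough sketch of polling the hosted Inference API (the model name and token are placeholders, and the retry loop is my own, not anything official): a cold model returns a 503 with an `estimated_time` while the container warms up, so you wait and retry.

```python
import time
import requests

# Placeholder model and token -- swap in your own.
API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn"
HEADERS = {"Authorization": "Bearer hf_..."}

def query(payload, max_wait=180):
    """POST to the hosted API, retrying while the container spins up."""
    deadline = time.time() + max_wait
    while time.time() < deadline:
        resp = requests.post(API_URL, headers=HEADERS, json=payload)
        if resp.status_code == 503:  # model still being loaded into a container
            time.sleep(min(resp.json().get("estimated_time", 10), 30))
            continue
        resp.raise_for_status()
        return resp.json()
    raise TimeoutError("model never finished loading")

print(query({"inputs": "HuggingGPT wires specialised models into pipelines."}))
```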

In this paper the HuggingGPT system uses a bunch of local models and calls on the HuggingFace API for the rest. So they try to run their own tool-models, at least a few of them, because HF is so flaky.

I think this paper is pretty significant. It expands the OpenAI Plugins concept with AI plugins. This is great because you can have a bunch of specialised models combined in countless ways, with ChatGPT as the orchestrator. It's automated AI pipelines. If nothing else, it could be used to generate training data for a multi-modal model like GPT-4. Could be a good business opportunity for HuggingFace too; their model zoo is impressive.
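To make the orchestration idea concrete, here's a toy mock of the loop (all the names are mine, not the paper's code): a planner LLM decomposes the request into subtasks, each subtask is routed to a specialist model from the zoo, and a final step fuses the results.

```python
# Toy stand-in for a HuggingGPT-style pipeline; nothing here is real model code.

# "Model zoo" of specialists, keyed by task type.
MODEL_ZOO = {
    "image-caption": lambda x: f"caption({x})",
    "text-summary": lambda x: f"summary({x})",
}

def orchestrate(subtasks):
    """Run each (task_type, argument) pair through the matching specialist."""
    results = [MODEL_ZOO[task](arg) for task, arg in subtasks]
    # In HuggingGPT, a final LLM call fuses these into one natural-language answer.
    return " | ".join(results)

# Pretend the planner turned "describe the image, then summarize the article"
# into these two subtasks:
print(orchestrate([("image-caption", "cat.png"), ("text-summary", "article.txt")]))
```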

4

DowntownYou5783 t1_jegmawc wrote

I think we should be rooting for OpenAI or Google to get there before the Chinese or Russians. Slowing down will only increase the chance that some authoritarian regime gets there first.

And in the meantime, the US ought to think long and hard about forming a New Manhattan Project centered on the race to AGI. Some form of collaboration between the great American AI companies (Google, OpenAI, Meta?, Amazon?, etc.) and the US government is what I'm looking for. But red tape from the government is an absolute non-starter.

1

BigMemeKing t1_jegm5vz wrote

Well, let me introduce you to a little book called "The Bible". I'm by no means Christian, but I get it. Advanced civilizations would have used us as guinea pigs. Call them, oh, idk, CEOs of their companies' R&D departments. And they said: let's see what you would need in order to live forever and be happy, or die and never want to come back again.

What side of the ♾️ spectrum of possibilities that exist in any given universe have you aligned yourself with?

And.

Would you be okay living in a world that fully embraces those values?

Do you claim to worship a Judeo-Christian God?

Who talks to him for you? Who are your representatives? Whose names have you invoked? Who did you call? Did you report straight to Jesus? Or did you have to take it all through several different chains of command? Did you tell a preacher, who said he'd tell God about it later and ask for your forgiveness? How much do you trust him? How well do you know him?

If you fucked up, and he knows about it? How much do you know about this person, to say: I trust him with my darkest secrets? Because when we stand in front of God, put our hand on his holiest chosen symbol, book, whatever the case may be, and swear to tell the truth, the whole truth, so help me, the entity that saw it all... I swore an oath. I said that I knew you were still watching it all. I swore I knew your commandments; I swore I knew your code.

Do you trust the man you told your secrets to, to vouch for you? What if God genuinely gave us our privacy, so that we would not feel shame? What if you had a problem then? Who would you turn to? Just go down the rabbit hole; I think it already happened.

So what I guess I'm saying is: would you be proud to stand in front of the crowd and be judged by a court of your peers? Or are you too embarrassed? Where can you go to feel accepted? Do you personally feel that that is a possibility?

Or are your sins so great that you would rather sink than swim, fight or fly, whatever the case may be?

0

Bloorajah t1_jegm4ex wrote

The main reasons would be: poor work/life balance, low pay for what is expected (at first), constant work with zero downtime, hazardous chemical exposure, extreme competition for the positions that matter, etc. Redundancy through automation isn’t even a consideration I’d list, actually.

I’ve worked in lab science for many years, I now run my own department. I’ve done everything from bench work to multimillion dollar project management.

The thing everyone is hanging on to is “a machine will be able to do my job”, and yet they never ask whether their company will actually get a machine to do their job at all. Just because an AI can do your job doesn’t mean it ever will.

In my experience, automation in lab science only goes so far. You can have tons of automation; every lab I’ve worked at has had it to varying degrees. But we always need techs to fix the instruments, scientists to troubleshoot the things the instruments cannot, and bench workers who can do things that are just not feasible for a robot to perform. You also need IT, QA, QC, Regulatory, etc. The list goes on.

Could an AI do those jobs? Probably. But no biotech laboratory I’ve ever worked at would pony up the money to do that, not now, not in the future. You’d get laughed out of the building. If I proposed using ChatGPT to write methods and protocols, I’d probably have my expertise questioned. Again, could we use it for this? Yeah, sure. Will we? Maybe. But probably not.

I’m not ignorant of the abilities of AI or the “tremendous progress” everyone always gets riled up about every day on this sub. But the reality outside of pure computer-based tech is just not what people paint it to be online, at all.

Nothing I’ve seen in the progress of AI makes me worry for my job or that of anyone in my department, now or in the future. They certainly could build a robot with the intelligence to replace me and do my job, but every person at my company who would make the decision to push that forward would probably respond to the notion with “what? No? Why would I do that?”

Maybe I’ll be proven wrong; as a scientist I’m always open to the possibility, but my observations lead me to strongly doubt it.

tl;dr: could an AI replace us? Yes. Will anyone actually do that? Highly unlikely, from my experience in the industry.

1

HarbingerDe t1_jegll65 wrote

>That’s such a naive understanding of economics. Exchanges only happen with both parties profit. Otherwise why would you do the exchange if you were not valuing the good or service over what you’re exchanging?

Lol, you're really out here calling other people's interpretations of economics naive?

People obviously buy things because they need or want them. Food, so we don't starve. Housing, so we don't die from exposure. Etc.

It's beyond naive to think that these exchanges can't still be coercive or exploitative. They're almost coercive BY NATURE. If you control the supply of something people desperately need, literally so they don't die, you have undue power and can extort them for much more value than was truly put into producing those products.

> And I expect downvotes given this sub’s anti-capitalist stance. Shame.

You're getting downvotes because your opinions are naive and, frankly, dumb.

>Profit is not at the expense of someone else.

Profit = Total Revenue - Total Expenses... It is literally at somebody else's expense, i.e. the workers'. If you want more profit and don't feel like investing those profits back into the business for the long-term goal of generating more revenue, you can always just slash your expenses, primarily through wage cuts or merely stagnant wages that don't match inflation.

3

AdditionalPizza OP t1_jegl6jr wrote

Well, we know for a fact that the version of LaMDA Bard uses is not based on the best model they have. Which is why I'm asking the question: what's the point of releasing Bard as it is? Pichai even recently reiterated that Bard is weak and not even close to their better models.

It just doesn't make sense. Google is definitely not further behind in general; every preview they have given has been exceptional except Bard. There's no way Google shows off PaLM-E and then winds up like Blockbuster.

Besides, Google is so fucking massive; I don't think companies that large can plummet.

0

MassiveWasabi t1_jegl538 wrote

Just for reference, this paper showed why the safety testing was actually pretty important. The original GPT-4 would literally answer any question with very useful solutions.

People would definitely be able to do some heinous shit if they just released GPT-4 without any safety training. Not just political/ethical stuff, but literally asking how to kill the most people for cheap and getting a good answer, or where to get black market guns and explosives and being given the exact dark web sites to buy from. Sure, you could technically figure these things out yourself, but this makes it so much more accessible for the people who might actually want to commit atrocities.

Also consider that OpenAI would actually be forced to pause AI advancement if people started freaking out over some terrible crime being linked to GPT-4’s instructions. Look at the most high-profile crimes in America (like 9/11) and how our entire legislation changed because of them. I’m not saying you could literally do that kind of thing with GPT-4, but you can see what I’m getting at. So we would actually end up waiting longer for more advanced AI like GPT-5.

I definitely don’t want a “pause” on anything and I’m sure it won’t happen. But the alignment thing will make or break OpenAI’s ability to do this work unhindered, and they know it.

10

visarga t1_jegkwr6 wrote

If you stop regular people from using AI, then only criminals and governments will use it. How is that better? And you can't stop it anyway, because a good-enough AI will run on cheap edge hardware.

To be practical about disinformation, it would be better to work on human+AI solutions, like a network of journalists flagging stories and AI extending that information to the rest of the media.

You should see the problem of disinformation as biology: the constant war between organisms and viruses, the evolving immune system. Constant war is the normal state; we should have AI tools to withstand disinformation attacks. Virus and anti-virus.

9