SoylentRox

SoylentRox t1_j2ek52w wrote

>What you do not own can always be taken from you. You don’t need to worry (too much) about your software being taken from you, but you do need to worry about your house being taken from you.

This is not a problem if you have money: just go rent something else. Also, if your landlord decides to go through the eviction procedure, there is no ASSET for you to lose.

If you own a house, and a judge decides to order it seized in a civil action (like a divorce or lawsuit), or your corrupt HOA makes up some fines of arbitrary scale and then sues you and seizes it if you can't pay, you lose the EQUITY.

I'd rather have all my assets in stock, and borrow against it if I have a need for money fast when the market is low.

1

SoylentRox t1_j2ei4bm wrote

No, it makes you a digital elite.

If you own stock but rent your phone, car, and home, you can move whenever you want and always have the latest car and phone. You benefit from the extra technology.

While I don't actually rent my car or phone, since I don't need either to be the absolute latest, I do rent software, as anything but the most recent version is useless to me.

For AI models it's the same idea.

I have hundreds of thousands, soon to be over $1M, in stock: as much 'equity' as an extremely lucky homeowner.

0

SoylentRox t1_j2cpqpm wrote

It's the rules. White-collar criminals are often smart: they know not to escape because they can plan ahead, at least over short windows into the future (they wouldn't be criminals if they could plan long term). So the prison doesn't need bars or guard towers, just a daily roll call. If you escape, they send you to a higher-security facility.

They don't necessarily riot or destroy property, and they often have money stashed to afford the prison staples. If they do escape, they don't necessarily murder or rape; maybe they do a little conning.

I'm just saying: some of the suffering that violent and poor criminals endure is because the prison HAS to be built that way. Another problem is that the state prison system is inconsistent; some state prisons are hell on earth, while others are probably about as nice as a federal prison.

1

SoylentRox t1_j2cp3im wrote

Maybe? I mean, he's gonna try to convince everyone he just set directives and played video games all day. That he wasn't aware of the backdoor hacks letting FTX gamble with clients' money, the shady loans, and so on.

Yeah, I'm not sure how he could possibly claim to be unaware; the dude can code, and in a company this small he would have been directly involved in the decisions.

Plus they used Slack and other collaboration tools. Presumably there are smoking-gun thumbs-up emojis from him.

Like, back in 2020:
coworker1: "Alameda is low on funds, we're gonna have to fold"
coworker2: "I have an idea, let's borrow a little from the exchange balance, we'll make it back"

...with a thumbs-up from SBF.

11

SoylentRox t1_j2cla5j wrote

Imagine if the next-best search engine were like an early version of Bing and NOTHING else existed.

And nobody was remotely close to releasing anything better. Would you pay for it then?

If OpenAI starts charging for chatGPT, whatcha gonna do? Keep writing shit by hand?

The computational requirements are so expensive that, realistically, this is going to be a paid service, maybe forever.

I say forever because compute will get much cheaper over time, but the best models will use even more compute and be much smarter. All the elite people will be using top-end models; plebs using free models won't have the same resources.

1

SoylentRox t1_j2ckmtj wrote

And there's a bunch of obvious automated training it could do to be specifically better at software coding.

It could complete all the challenges on sites like LeetCode and CodeSignal, learning from its mistakes.

It could be given challenges to take an existing program and make it run faster, learning from a timing analysis.

It could take existing programs and be asked to fix the bugs so it passes a unit test.

It could be asked to write a unit test that makes an existing program fail.

And so on. Ultimately there are millions of separate tasks on which the machine can get objective feedback about how well it did, so it can refine its skills to above human level.
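A minimal sketch of what one of those feedback loops could look like. Everything here is hypothetical scaffolding: `model.generate_fix` is a made-up interface, not a real API; the objective signal is just pytest's pass/fail exit code.

```python
import os
import subprocess
import tempfile

def score_candidate(program_source: str, test_source: str) -> float:
    """Run a candidate program against a unit-test file; pytest's exit
    code gives an objective pass/fail reward signal."""
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "candidate.py"), "w") as f:
            f.write(program_source)
        test_path = os.path.join(tmp, "test_candidate.py")
        with open(test_path, "w") as f:
            f.write(test_source)
        # pytest exits 0 only when every test passes: a binary reward.
        result = subprocess.run(
            ["pytest", test_path, "--tb=no", "-q"],
            capture_output=True, text=True, cwd=tmp,
        )
        return 1.0 if result.returncode == 0 else 0.0

def build_training_examples(model, tasks, attempts_per_task=4):
    """For each (buggy_program, tests) pair, sample candidate fixes and
    keep each with its objective score, for RL or filtered fine-tuning."""
    examples = []
    for buggy_program, tests in tasks:
        for _ in range(attempts_per_task):
            candidate = model.generate_fix(buggy_program, tests)  # hypothetical
            examples.append((buggy_program, candidate,
                             score_candidate(candidate, tests)))
    return examples
```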

3

SoylentRox t1_j2aehvs wrote

I don't see how "lifetime access" makes any sense.

(1) Assuming it's access to the current model and not future updates, that would be like buying a "lifetime copy" of an internal MS-DOS beta 0.7 (whatever they called it back then), or an original iPhone loaded with a pre-release copy of the OS.

It may work offline for your lifetime, but it's going to be useless compared to what's available within months.

(2) Who's hosting it? GPT-3 is about 175 billion parameters, or roughly 350 gigabytes of memory at 16-bit precision. This means the only thing currently capable of running it is a cluster of eight Nvidia A100s with 80 GB of memory each, and each one costs $25,000 and consumes 400 watts of power.

I'm not sure how "fast" it is. If you see chatGPT typing for 30 seconds, are you drawing 3.2 kilowatts of power just for your session? I don't think it's that high; the delays are probably because it's servicing other users.
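Back-of-envelope arithmetic behind those numbers, assuming the published 175-billion-parameter count, 2 bytes per weight at fp16, and 400 W per A100 (activations and serving overhead ignored):

```python
import math

params = 175e9              # published GPT-3 parameter count
bytes_per_param = 2         # fp16 weights
weight_gb = params * bytes_per_param / 1e9    # ~350 GB just for weights

gpu_mem_gb = 80             # one A100 80 GB
min_gpus = math.ceil(weight_gb / gpu_mem_gb)  # 5 minimum; 8 in practice,
                                              # since activations, KV cache,
                                              # and tensor-parallel layout
                                              # need headroom

cluster_watts = 8 * 400     # 3,200 W = 3.2 kW for the whole cluster
print(weight_gb, min_gpus, cluster_watts)     # 350.0 5 3200
```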

1

SoylentRox t1_j2adgyv wrote

Yep. I want a premium tier where I can make as many queries as I want and get immediate responses with no cooldowns. I would expect a monthly plan where I get a certain number of queries included and can buy more.

In a few years I would expect my employer to pay for the subscription, but in the immediate future I'm happy to pay for it myself. I don't ask it to write anything I couldn't write, but it saves so much time.

20

SoylentRox t1_j1xgzf4 wrote

Reply to AI and education by lenhoi

I couldn't resist:

ChatGPT says:

Using AI to generate essays can be a useful tool for students, particularly as a way to practice generating ideas and organizing their thoughts. However, it is important to note that AI-generated essays are not a replacement for the critical thinking and writing skills that students need to develop.

Plagiarism detection tools can still be used to identify essays that are generated by AI, as the writing will often lack the complexity and nuance of a human-written essay. Additionally, students need to learn how to properly cite their sources and give credit to others for their ideas, which is an important aspect of academic writing.

Ultimately, it is up to you as a teacher to determine the best way to incorporate AI-generated essays into your curriculum. It may be useful as a supplement to traditional writing assignments, but it should not be relied upon as a substitute for the skills that students need to develop.


The problem is that this is frankly a pretty well-written response, and I don't think I could do better.

2

SoylentRox t1_j1xf196 wrote

3D space means no traffic jams: everyone going a particular direction just takes off straight up from a flying-car pad, ascends to the flight level that goes the direction they want, and then starts flying that direction.

So basically, every 50 feet of altitude, vehicles are traveling at a different angle.

It adds vastly more effective 'road space'.
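A crude back-of-envelope, with every number assumed purely for illustration (not from any aviation standard): give each heading its own flight level, spaced 50 feet apart through a usable altitude band.

```python
usable_band_ft = 5_000      # assumed usable altitude band above the city
level_spacing_ft = 50       # one flight level per 50 ft, per the idea above
levels = usable_band_ft // level_spacing_ft  # 100 independent "lanes"

# A surface arterial offers maybe 2-4 lanes per direction at any point;
# stacking the sky this way multiplies that by an order of magnitude or two.
print(levels)  # 100
```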

With all the drawbacks of course.

0

SoylentRox t1_j1xeol8 wrote

As tech advances, more complex tech becomes reliable.

Think about an airbag system. There is a bomb connected to an electronics board in your car. If it goes off at the wrong time you can be hurt or even killed.

It uses a computer, electrical wiring, sensors, and so on. If, say, you built it with 1960s wiring, it would be horrifically unreliable and would kill people all the time.

Now airbags usually only kill people who have an unusual body size, or people unlucky enough to have defective airbags.

0

SoylentRox t1_j1xeh29 wrote

Maybe they could be rented? Like you just take an air taxi when you're in a hurry? (and can afford the fare)

And they only land on top of big buildings, so you either live in one or take a ground taxi to the nearest big building, then an express elevator to the roof?

I could see a world where this is common, but it costs a couple hundred bucks or so a trip: possible for most people to do occasionally, and for a few to do often.

1

SoylentRox t1_j1fxj19 wrote

You have to absorb neutron flux regardless.

This is one argument in favor of aneutronic designs: fusion may never be practical if the reaction emits a significant number of neutrons. Proton-boron-11, for instance, is aneutronic, though I understand it's immensely harder to do, requiring much higher temperatures.

An aneutronic reactor could also be designed to fail if someone tries to use it to breed plutonium.

1

SoylentRox t1_j19ocmt wrote

What stops you from replacing the lithium breeding blankets (or dissolved-lithium liquid loops) on a fusion reactor with uranium targets?

Would this allow you to breed plutonium, or are the neutrons at the wrong energy for this?

What bothers me is that this is yet another long-term showstopper for fusion. If fusion technology is dual-use, meaning it can be used to make plutonium for nuclear weapons, it cannot be freely shared with most nations on earth, only with a few rich ones that already have nuclear weapons or are considered 'trustworthy'.

Meanwhile, by the time this happens, solar panels and batteries will be so cheap that they're almost a waste product and can be dumped by the pallet load on anyone, anywhere.

5

SoylentRox t1_j0qg7qw wrote

Generally speaking, the more education a person has, the harder their career was to enter, and the longer they have been in it, the more they have to lose. Sure, they may be a crook (sometimes high corporate officials like directors and vice presidents get caught committing petty crimes), but it means they probably aren't going to rape you or mug you for your wallet, because they have a lot to lose and can get what they want in other ways.

It's not a guaranteed rule, but there is almost certainly a heavy statistical correlation: the most dangerous person to be around is the homeless man; the low-end retail or fast-food worker is less dangerous, but still somewhat dangerous if they aren't young; and so on up the totem pole, with the notable exception that criminality may actually increase at the very top.

1

SoylentRox t1_izy1csf wrote

I have thought this is how we get to really robust, high-performance AGI. It seems so obvious.

The steps are: have a test environment with a diverse set of auto-graded tasks requiring varying levels of skill and cognition. "BIG-bench" but bigger; call it AGI gym.

The "AGI hypothesis" is an architecture of architectures : it's a set of components interconnected in some way, and those components came from a "seed library" or were auto discovered in another step as a composition of seed components.

The files that define a possible "AGI candidate" are simple and made to be manipulable as an output of an AGI gym task...

Recursion....

You see the idea. Basically, I think truly effective AGI architectures are going to be very complex, and human-designed hypotheses are going to be wrong. So you find them recursively, using prior AGIs that did well on "AGI gym", which includes tasks to design other AGIs among the graded challenges...

Note at the end of the day you end up with a model that does extremely well at "AGI gym". With careful selection of the score heuristic we can select for models that are, well, general and as simple as possible.

It doesn't necessarily have any science-fiction abilities; it will just do extremely well at tasks that are mutations of the gym tasks. If some of them are robotics tasks with realistic simulated input from the real world, it will also do well at those tasks in the real world.

Some of the tasks would be to "read this description of what I want you to do in the simulated world and do it with this robot". And the descriptions are procedurally generated from a very large set.

The whole process would be ongoing - each commit makes AGI gym harder, and the population of successful models gets ever more capable. You fund this by selling the services of the current best models.
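A minimal sketch of that outer loop, with everything hypothetical: `gym_score` stands in for the auto-graded task suite, candidates are just compositions from a toy seed library, and the recursive step (prior candidates proposing mutations) is reduced to a plain `mutate` function.

```python
import random

SEED_COMPONENTS = ["transformer", "planner", "memory", "world_model"]

def random_candidate():
    """An 'architecture of architectures': a random composition of
    components drawn from the seed library."""
    k = random.randint(1, len(SEED_COMPONENTS))
    return random.sample(SEED_COMPONENTS, k)

def gym_score(candidate):
    """Stand-in for running the candidate on every auto-graded gym task.
    The heuristic rewards task performance and penalizes complexity, so
    selection favors models that are general and as simple as possible."""
    task_performance = random.random()   # placeholder for real evaluation
    simplicity_bonus = 0.1 / len(candidate)
    return task_performance + simplicity_bonus

def mutate(candidate):
    """Add or drop one seed component. In the full proposal this step is
    itself performed by prior AGIs that scored well -- the recursion."""
    child = list(candidate)
    extra = random.choice(SEED_COMPONENTS)
    if extra not in child:
        child.append(extra)
    elif len(child) > 1:
        child.pop(random.randrange(len(child)))
    return child

def search(generations=10, pop_size=20, survivors=5):
    """Evolve the population; each 'commit' would also harden the gym."""
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(population, key=gym_score, reverse=True)[:survivors]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - survivors)]
    return max(population, key=gym_score)
```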

0

SoylentRox t1_iz38wf7 wrote

Reply to comment by Redvolition in bit of a call back ;) by GeneralZain

Sure, I agree more or less. I mean, the body wouldn't actually be discarded per se. Keeping a brain alive by itself is hard. You would realistically provide the functions of a body with living human cells in artificial scaffolding, in separate containers from the brain, so everything can be carefully monitored for problems through the clear container walls. The whole thing sits in a nitrogen-filled sterile chamber only robots can access.

1

SoylentRox t1_iyz0miz wrote

The FDVR problem is "find a way to make human beings sense things, with as much fidelity as their own body has, from arbitrary virtual environments. Interface with their brain in such a way that they cognitively do not have any deterioration, and keep their body maintained such that they live indefinitely".

That's a huge problem, but it decomposes into obvious subproblems. "Make these samples of human motor homunculus in the lab stay alive. Inject signals into them and ensure the signal quality is the same as their own internal connections..."

For keeping a human body alive, well obviously you need to be able to keep individual organs alive. And know which proteins in blood chemistry are bad news and what to do in each situation.

It's a tree of subproblems. The two top-level statements probably end up being millions of separate research tasks.

And the 'doctor' who has to keep you alive needs to know the results of all the millions of separate tasks, and make multiple decisions about your care every second, and make no errors so that you can enjoy FDVR for thousands of years...

See the problem? It's impossible without AI, and AI makes the problem easy.

I don't give a shit which countries you name; there are many. All it takes is one country that lets you do advanced medical procedures.

3

SoylentRox t1_iyyz3vi wrote

Reply to comment by Head_Ebb_5993 in bit of a call back ;) by GeneralZain

>but that's an obvious straw man, I wasn't and we weren't talking about AI, but AGI

The first proto-AGI was demonstrated a few months ago.

https://www.deepmind.com/publications/a-generalist-agent

Scale it up to 300k tasks and that's an AGI.

I am saying that if industry doesn't think someone is credible enough to offer the standard $1 million TC pay package for a PhD in AI, I don't think they are credible at all. That's not unreasonable.

2

SoylentRox t1_iyywjbs wrote

Reply to comment by Head_Ebb_5993 in bit of a call back ;) by GeneralZain

>Edit: also, I am rather skeptical that there are any people who work in any way with neuroscience and AI, and from all discussion with actual people in the subject I've realized that AGI isn't even taken seriously at the moment, it's just sci-fi
>
>In all seriousness, people write essays on why AGIs are actually impossible, even though that's a little bit of an extreme position for me, but not contrarian to the scientific consensus

So... DeepMind and the AI companies aren't real? What scientific consensus? All the people who have the highest credentials in the field are generally working in machine learning already; those AI companies pay a million+ a year in TC for the higher-end scientists.

Arguably the ones who aren't worth $1M+ are not really qualified to be skeptics, and the one I know of, Gary Marcus, keeps getting proven wrong within weeks.

2

SoylentRox t1_iyyvlhg wrote

Reply to comment by Head_Ebb_5993 in bit of a call back ;) by GeneralZain

Read all of these: https://www.deepmind.com/blog

The most notable ones : https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html

https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html

For an example of a third-party scientist venturing an opinion on their work, see here: https://moalquraishi.wordpress.com/2020/12/08/alphafold2-casp14-it-feels-like-ones-child-has-left-home/

To succinctly describe what is happening:

(1) Intelligence is succeeding at a task by choosing actions that have a high probability of leading to future states that have high value to the agent (a toy version of this decision rule is sketched below). We have tons and tons of simulated environments, some accurate enough to immediately use in the real world - see here for an example https://openai.com/blog/solving-rubiks-cube/ - to force an agent to develop intelligence.

(2) Neuroscientists have known for years that the brain seems to use a similar pattern over and over: there are repeating cortical columns. So the theory is, if you find a neural network pattern you can use again and again - one such pattern is currently doing well and powers all the major results - and you have the scale of a brain, you might get intelligence-like results, robust enough to use in the real world. And you do.

(3) Where the explosive results are expected - basically what we have now is neat, but no nuclear fireball - is in putting together (1) and (2) and a few other pieces to get recursive self-improvement. We're very close to that point. Once it's reached, we will see agents that work in the real world better than humans do and that are capable of a very large array of tasks, all at higher intelligence levels than humans.

Note that one of the other pieces of the nuke - the recursion part - actually has worked for years. See: https://en.wikipedia.org/wiki/Automated_machine_learning
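A toy illustration of the definition in (1), with a made-up two-action, three-state world: the "intelligent" choice is simply the action whose expected future-state value is highest.

```python
# All numbers here are invented purely for illustration.
TRANSITIONS = {                      # P(next_state | action)
    "left":  {"safe": 0.9, "cliff": 0.1},
    "right": {"safe": 0.4, "goal": 0.6},
}
VALUES = {"safe": 1.0, "cliff": -10.0, "goal": 5.0}   # value to the agent

def expected_value(action):
    """Probability-weighted value of the states this action leads to."""
    return sum(p * VALUES[state]
               for state, p in TRANSITIONS[action].items())

best = max(TRANSITIONS, key=expected_value)
print(best, expected_value(best))    # right 3.4
```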

To summarize: AI systems that work broadly and well over many problems, without needing large amounts of human software-engineering time to deploy them to each problem, are possible very soon by leveraging already-demonstrated techniques and, of course, stupendous amounts of compute: easily hundreds of millions of dollars' worth to find the architecture for such an AI system.

To answer your other part, "how can this work if we don't know what intelligence is": well, we do know what it is, in a general sense. What we mean is "we simulate the tasks we want the agent to do, including tasks we don't give the agent any practice on, where it has to use skills learned in other tasks and follow written instructions describing the goals of the task". Any machine that does well on the benchmark of intelligence described is intelligent, and we don't actually care how it accomplishes that.

Does it have internal thoughts or emotions like we do? We don't give a shit; it just needs to do its tasks well.

7