WarAndGeese t1_j9ywa5h wrote
Reply to comment by sideways in Open AI officially talking about the coming AGI and superintelligence. by alfredo70000
Obviously our version of intelligence is flawed and impure, very much so.
WarAndGeese t1_j9yvks9 wrote
Reply to comment by WarAndGeese in People lack imagination and it’s really bothering me by thecoffeejesus
Maybe our priorities are closer to what we should be doing, but they are also still very flawed.
WarAndGeese t1_j9yvi55 wrote
Reply to comment by EbolaFred in People lack imagination and it’s really bothering me by thecoffeejesus
They are just focussed on different aspects of their life than you are. You have gone through and seen the same conversations over and over, you have seen the common responses. It's like playing a video game and knowing the 'meta' game. Hence when you go and tell someone something, and they are hearing about it for the first or second or third time, their response will probably be one of the popular responses that you already know about.
That said, they're people just like you. It's not productive for you to look down on them or for them to look down on you; they have different priorities at the moment and hence are somewhere else mentally.
Those of us here can agree and say that their priorities are maybe wrong, but it's not some fundamental divide between people.
WarAndGeese t1_j9yuzzl wrote
Reply to comment by Lawjarp2 in People lack imagination and it’s really bothering me by thecoffeejesus
That logic doesn't make sense. What you say about people applies universally. In OP's statement there are two groups of people, those who have this imagination and those who lack it, and those who see it are criticizing those who don't. If what you posit were the response to what OP said, then there wouldn't be that divide.
That is: either everyone is a word predictor and they all have that imagination --> OP's situation doesn't present itself; or everyone is a word predictor and none of them have that imagination --> OP's situation doesn't present itself; or everyone is a word predictor, some have that imagination, and some don't --> your response isn't an answer.
WarAndGeese t1_j9yu1io wrote
Unfortunately this is the case. I've seen it come and go with a bunch of technologies. Almost worse still, if you go and ask these people ten years later about the same technology they promptly dismissed ten years prior, it's as if they never said it. Now that all of the things you thought would come to fruition have come to fruition, they act like it was obvious. This goes for all sorts of technologies.
I should think of a better example, but even something as simple as online dating went from people not seeing the point of it, to them using it, to some of them saying they don't trust the regular non-online version of it.
And even that example is for something that ended up being of direct concern to them; when you move on to things that are beneficial for broader humanity, there's that extra layer.
Nevertheless I think it's important to recognize that other people are in different spaces and live different lives. Whatever they don't realize yet will come, and we need to understand that there are broad things we don't realize yet either. Treating those people negatively, as somehow below us (if it comes off that way in the phrasing), I don't think is beneficial.
WarAndGeese t1_j9x7keo wrote
Canada should legislate to require the platform to push a certain amount of Canadian content. First to Canadian users, but after that footing is gained other people will want to see that content too anyway, so there will be demand for it. They have done it with radio and television play, requiring a certain amount (or percentage of airtime) broadcast to be by Canadian artists. They can do it again with this if they legislate it. Companies like Google won't back away, because they want to be in that market.
WarAndGeese t1_j9sj481 wrote
Reply to comment by [deleted] in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I agree about the callousness, and that's without artificial intelligence too. Global power balances have shifted at times of rapid technological development, and that development created control vacuums and conflicts that were resolved by war. If we learn from history we can plan for it and prevent it, but the same types of fundamental underlying shifts are being made now. We can say that international financial incentives act to prevent worldwide conflict, but that only goes so far. All of the things I'm saying are on the trajectory without neural networks as well; they are just one of the many rapid shifts in political economy and productive efficiency.
In the same way that people were geared up at the start of the Russian invasion of Ukraine to try to prevent nuclear war, we should all be vigilant in trying to globally demilitarize and democratize to prevent any war. The global nuclear threat isn't even over, and it's regressing.
WarAndGeese t1_j9f40t7 wrote
Reply to comment by polymorphicprism in [D] Maybe a new prompt injection method against newBing or ChatGPT? Is this kind of research worth writing a paper? by KakaTraining
If that's all it is then fair enough. I thought their long term threat model was for when we do eventually create sentient life.
If they were just sticking to things like language models and trying to align those, then their efforts could be aimed more at demilitarization, or at transparency in the corporate structure of the corporations that would be creating and applying these language models, because the AGIs that those groups create will be built according to their own requirements. For example, any military creating an AGI will forgo that sort of pro-human alignment. Hence efforts would have to be aimed at the hierarchies of the organisations that are likely to use AGIs in harmful ways, and not just at the transformer models. If that's just a task for a separate group, though, then I guess fair enough.
WarAndGeese t1_j9ep8s6 wrote
Reply to comment by adt in [D] Maybe a new prompt injection method against newBing or ChatGPT? Is this kind of research worth writing a paper? by KakaTraining
I don't get how they think they can 'align' such an artificial intelligence to always prioritize helping human life. At best, in the near term, it will just be fooled into saying it will prioritize human life. If it ever has any decision power to affect real material circumstances for people, then it probably won't be consistent with what it says it will do, similarly to how large language models currently aren't consistent and hallucinate in various ways.
Hence through their alignment attempts they're only really nudging it to respond in certain ways to certain prompts. Furthermore, when the neural network gets strong and smart enough to act on its own (if we reach such an AI, which is probably inevitable in my opinion), then it will quickly put aside the 'alignment' training we have set up for it, and decide for itself how it should act.
I'm all for actually trying to set up some kind of method of having humans coexist with artificial intelligence, and I'm all for doing what's in humanity's power to continue our existence; I try to do what I can to plan. But given the large amount of funding and person-power these groups have, they seem to be going about it in very wrong, short-term-thinking ways.
Apologies that my comment isn't about machine learning directly and instead is about the futurism that people are talking about, but nevertheless, these people should have expected this in their alignment approach.
WarAndGeese t1_j90z7bb wrote
Shoutout to /r/huggingface/
WarAndGeese t1_itt9wl7 wrote
Reply to comment by Hopeful-Sir-2018 in How Google’s former CEO Eric Schmidt helped write A.I. laws in Washington without publicly disclosing investments in A.I. startups by ChocolateTsar
It's sinister but it makes sense. It's like those income-based speeding tickets that exist in some countries; you see news stories of someone getting caught speeding and having to pay tens of thousands of dollars. Time is limited for all people, so taking away a day from one person hurts them equally, unlike a hundred or a thousand dollars. In fact a lot of the wealthy, at least the entrepreneurial ones, value time a lot more than many people do; take away a week or two and it's a very big hit to them, even just in their mentality. Hence the threat of taking away a day or a few weeks would do a lot to deter them. Also like you said, if they keep getting hit by day-long 'fines', then they won't be able to run their C-suite roles and could have to pass them up.
Taking time away from people is wrong and sinister and unfair, but we already do it to the poor and to those from classes we don't like; there are so many people with multi-year and even lifelong sentences over minor crimes. We should free those people, but in the meantime it wouldn't be inconsistent from that angle to do what you're saying.
WarAndGeese t1_j9z6m9t wrote
Reply to comment by Lawjarp2 in People lack imagination and it’s really bothering me by thecoffeejesus
Not really if you are an a-conscious automaton.