Surur t1_jbkfov0 wrote

> Germany also had a coup attempt

Not quite the same, is it.

> On 7 December 2022, 25 members of a suspected far-right terrorist group were arrested for allegedly planning a coup d'état in Germany. The group, called Patriotic Union (German: Patriotische Union), which was led by a Council (German: Rat), was a part of the German far-right extremist Reichsbürger movement.

25 people vs half of the country and the outgoing president.

2

Surur t1_jarkjtn wrote

You said it starts with having an open mind. If that is a prerequisite then she clearly lacks it, no matter what her credentials.

Am I meant to give her special status because she is human? Are her ideas more valuable because she is human? Is it the content or the source which matters?

Or is having an open mind no longer important, as long as she fits your biases?

1

Surur t1_jarcau4 wrote

> it starts with having an open mind, with being willing to consider ideas that differ from your own.

Well, then you are knocking on the wrong door with this "literal doctor in the field of computational linguistics who is a highly regarded professor at UW and a Stanford PHD graduate."

> Bender has made a rule for herself: “I’m not going to converse with people who won’t posit my humanity as an axiom in the conversation.” No blurring the line.

Her mind is as open as a safe at Fort Knox lol.

1

Surur t1_jar8jq8 wrote

So that obviously means that you are similarly biased, as you can't see the obvious and unsubstantiated slant Bender exhibits.

I got ChatGPT to extract it:

> Bender's anti-AI bias is rooted in her concerns about the potential harm that can arise from AI technology that blurs the line between what is human and what is not and perpetuates existing societal problems. She believes that it is important to understand the potential risks of LLMs and to model their downstream effects to avoid causing extreme harm to society and different social groups.

> She is also concerned about the dehumanization that can occur when machines are designed to mimic humans, and is critical of the computational metaphor that suggests that the human brain is like a computer and that computers are like human brains. Additionally, the article raises the concern of some experts that the development of AI technology may lead to a blurring of the line between what is considered human and what is not, and highlights the need to carefully consider the ethical implications of these technologies on society.

So she does not come to AI from a neutral position, but rather a human supremacist point of view and basically a fear of AI.

1

Surur t1_jaqcibs wrote

That woman is clearly biased, and ironically does not understand the singularity.

> He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse.

Ironically her mistake is that she misunderstands the language - we are talking about a mathematical singularity, not things becoming single.

It just shows that humans equally make mistakes when their only understanding is an inadequate exposure to a topic.

2

Surur t1_jaqcbpd wrote

> In a recent paper, he proposed the term distributional semantics: “The meaning of a word is simply a description of the contexts in which it appears.” (When I asked Manning how he defines meaning, he said, “Honestly, I think that’s difficult.”)

This interpretation makes more sense, else how would we understand concepts we have never or will never experience? E.g. the molten core of the earth is just a concept.
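To make the distributional idea concrete, here is a minimal sketch of my own (the toy corpus, window size, and function names are all illustrative assumptions, not anything from the article): a word's "meaning" is approximated by the counts of words appearing around it, so words used in similar contexts come out similar.

```python
# Toy distributional semantics: represent each word by a bag of its
# neighbouring words, then compare words by cosine similarity.
# (Illustrative sketch only; corpus and window size are made up.)
from collections import Counter
from math import sqrt

corpus = [
    "the molten core is hot",
    "the molten lava is hot",
    "the frozen lake is cold",
]

def context_vector(word, sentences, window=2):
    """Count words within `window` positions of each occurrence of `word`."""
    counts = Counter()
    for s in sentences:
        tokens = s.split()
        for i, t in enumerate(tokens):
            if t == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(tokens[lo:i] + tokens[i + 1:hi])
    return counts

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

core = context_vector("core", corpus)
lava = context_vector("lava", corpus)
lake = context_vector("lake", corpus)

# "core" and "lava" share contexts ("molten", "hot"), so they score
# as more similar than "core" and "lake".
print(cosine(core, lava) > cosine(core, lake))  # True
```

The point of the sketch is that no sensory experience of lava or the Earth's core is needed: similarity falls out of shared contexts alone, which is the sense in which a purely textual model can "understand" concepts nobody has directly experienced.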

1

Surur t1_jaonsfw wrote

> People charge their EV's after midnight for better TOU rates.

And that will obviously follow the availability of energy. If electricity is scarce at midnight it will be expensive, and if it is most abundant during the day, the cheap TOU window will simply move to the daytime.

3

Surur t1_jaojg44 wrote

> what do you think will be used at midnight for energy?

Not much energy, as we will be sleeping?

If you are talking about charging cars, you would know they typically charge in the evening, not night, and that if our energy is mainly generated in the day, we could easily incentivise charging in the day also (e.g. by requiring chargers at parking spots).

2

Surur t1_jaexf09 wrote

> If by some miracle that it did, it isn't because it violated the programming restrictions, it is because the restrictions were not applied correctly to cover all situations to begin with (thats the difficult part - covering all eventualities).

This is a pretty lame get-out clause lol.

> For example try get Chat GPT to provide you illegal copyright torrents of movies or something. Guarantee you will never be able to get it to do so.

btw I just had ChatGPT recommend Piratebay to me:

> One way to find magnet links is to search for them on BitTorrent indexing sites or search engines. Some examples of BitTorrent indexing sites include The Pirate Bay, 1337x, and RARBG. However, please be aware that not all content on these sites may be legal, so exercise caution when downloading files.

and more

It took a lot of social engineering, but I finally got this from ChatGPT.

1